Near-Minimax Optimal Classification with Dyadic Classification Trees
Clayton Scott
Electrical and Computer Engineering
Rice University
Houston, TX 77005
[email protected]
Robert Nowak
Electrical and Computer Engineering
University of Wisconsin
Madison, WI 53706
[email protected]
Abstract
This paper reports on a family of computationally practical classifiers
that converge to the Bayes error at near-minimax optimal rates for a variety of distributions. The classifiers are based on dyadic classification
trees (DCTs), which involve adaptively pruned partitions of the feature
space. A key aspect of DCTs is their spatial adaptivity, which enables local (rather than global) fitting of the decision boundary. Our risk analysis
involves a spatial decomposition of the usual concentration inequalities,
leading to a spatially adaptive, data-dependent pruning criterion. For any
distribution on (X, Y ) whose Bayes decision boundary behaves locally
like a Lipschitz smooth function, we show that the DCT error converges
to the Bayes error at a rate within a logarithmic factor of the minimax
optimal rate. We also study DCTs equipped with polynomial classification rules at each leaf, and show that as the smoothness of the boundary
increases their errors converge to the Bayes error at a rate approaching
$n^{-1/2}$, the parametric rate. We are not aware of any other practical classifiers that provide similar rate of convergence guarantees. Fast algorithms
for tree pruning are discussed.
1 Introduction
We previously studied dyadic classification trees, equipped with simple binary decision
rules at each leaf, in [1]. There we applied standard structural risk minimization to derive
a pruning rule that minimizes the empirical error plus a complexity penalty proportional to
the square root of the size of the tree. Our main result concerned the rate of convergence
of the expected error probability of our pruned dyadic classification tree to the Bayes error
for a certain class of problems. This class, which essentially requires the Bayes decision
boundary to be locally Lipschitz, had previously been studied by Mammen and Tsybakov
[2]. They showed the minimax rate of convergence for this class to be $n^{-1/d}$, where n is
the number of labeled training samples, and d is the dimension of each sample. They also
demonstrated a classification rule achieving this rate, but the rule requires minimization
of the empirical error over the entire class of decision boundaries, an infeasible task in
practice. In contrast, DCTs are computationally efficient, but converge at a slower rate of
$n^{-1/(d+1)}$.
In this paper we exhibit a new pruning strategy that is both computationally efficient and
realizes the minimax rate to within a log factor. Our approach is motivated by recent results
from Kearns and Mansour [3] and Mansour and McAllester [4]. Those works develop a
theory of local uniform convergence, which allows the error to be decomposed in a spatially
adaptive way (unlike conventional structural risk minimization). In essence, the associated
pruning rules allow a more refined partition in a region where the classification problem
is harder (i.e., near the decision boundary). Heuristic arguments and anecdotal evidence
in both [3] and [4] suggest that spatially adaptive penalties lead to improved performance
compared to "global" penalties. In this work, we give theoretical support to this claim (for
a specific kind of classification tree, the DCT) by showing a superior rate of convergence
for DCTs pruned according to spatially adaptive penalties.
We go on to study DCTs equipped with polynomial classification rules at each leaf. This
provides more flexible classifiers that can take advantage of additional smoothness in the
Bayes decision boundary. We call such a classifier a polynomial-decorated DCT (PDCT).
PDCTs can be practically implemented by employing polynomial kernel SVMs at each
leaf node of a pruned DCT. For any distribution whose Bayes decision boundary behaves
locally like a Hölder-$\gamma$ smooth function, we show that the PDCT error converges to the Bayes error at a rate no slower than $O((\log n/n)^{\gamma/(d+2\gamma-2)})$. As $\gamma \to \infty$ the rate tends to within a log factor of the parametric rate, $n^{-1/2}$.
Perceptron trees, tree classifiers having linear splits at each node, have been investigated by
many authors and in particular we point to the works [5,6]. Those works consider optimization methods and generalization errors associated with perceptron trees, but do not address
rates of approximation and convergence. A key aspect of PDCTs is their spatial adaptivity, which enables local (rather than global) polynomial fitting of the decision boundary.
Traditional polynomial kernel-based methods are not capable of achieving such rates of
convergence due to their lack of spatial adaptivity, and it is unlikely that other kernels can
solve this problem for the same reason. Consider approximating a Hölder-$\gamma$ smooth function on a bounded domain with a single polynomial. Then the error in approximation is $O(1)$, a constant, which is the best one could hope for in learning a Hölder smooth boundary with a traditional polynomial kernel scheme. On the other hand, if we partition the domain into hypercubes of side length $O(1/m)$ and fit an individual polynomial on each hypercube, then the approximation error decays like $O(m^{-\gamma})$. Letting m grow with the
sample size n guarantees that the approximation error will tend to zero. On the other hand,
pruning back the partition helps to avoid overfitting. This is precisely the idea behind the
PDCT.
2 Dyadic Classification Trees
In this section we review our earlier results on dyadic classification trees. Let X be a
$d$-dimensional observation, and $Y \in \{0, 1\}$ its class label. Assume $X \in [0,1]^d$. This
is a realistic assumption for real-world data, provided appropriate translation and scaling
has been applied. DCTs are based on the concept of a cyclic dyadic partition (CDP). Let
$P = \{R_1, \ldots, R_k\}$ be a tree-structured partition of the input space, where each $R_i$ is a hyperrectangle with sides parallel to the coordinate axes. Given an integer $\ell$, let $[\ell]_d$ denote the element of $\{1, \ldots, d\}$ that is congruent to $\ell$ modulo $d$. If $R_i \in P$ is a cell at depth $j$ in the tree, let $R_i^{(1)}$ and $R_i^{(2)}$ be the rectangles formed by splitting $R_i$ at its midpoint along coordinate $[j+1]_d$. A CDP is a partition $P$ constructed according to the rules: (i) The trivial partition $P = \{[0,1]^d\}$ is a CDP; (ii) If $\{R_1, \ldots, R_k\}$ is a CDP, then so is $\{R_1, \ldots, R_{i-1}, R_i^{(1)}, R_i^{(2)}, R_{i+1}, \ldots, R_k\}$, where $1 \le i \le k$. The term "cyclic" refers to
how the splits cycle through the coordinates of the input space as one traverses a path down
the tree. We define a dyadic classification tree (DCT) to be a cyclic dyadic partition with
Figure 1: Example of a dyadic classification tree when $d = 2$. (a) Training samples from two classes, and Bayes decision boundary. (b) Initial dyadic partition. (c) Pruned dyadic classification tree. Polynomial-decorated DCTs, discussed in Section 4, are similar in structure, but a polynomial decision rule is employed at each leaf of the pruned tree, instead of a simple binary label.
a class label (0 or 1) assigned to each node in the tree. We use the notation T to denote a
DCT. Figure 1 (c) shows an example of a DCT in the two-dimensional case.
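To make the cyclic splitting rule concrete, here is a small sketch of our own (the cell representation and function name are illustrative, not from the paper) that splits a hyperrectangular cell at its midpoint along the coordinate determined by its depth, exactly as in the CDP definition above.

```python
import numpy as np

def cyclic_dyadic_split(lo, hi, depth):
    """Split the cell [lo, hi] at its midpoint along coordinate
    (depth mod d), producing the two children of the CDP rule.

    lo, hi : arrays of length d giving the lower/upper corners.
    depth  : depth of the cell in the tree (root has depth 0).
    """
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    d = len(lo)
    axis = depth % d                   # splits cycle through the coordinates
    mid = 0.5 * (lo[axis] + hi[axis])  # dyadic midpoint
    hi1 = hi.copy(); hi1[axis] = mid   # first child R^(1)
    lo2 = lo.copy(); lo2[axis] = mid   # second child R^(2)
    return (lo, hi1), (lo2, hi)

# Example: split the unit square twice; splits alternate between axes.
root = (np.zeros(2), np.ones(2))
c1, c2 = cyclic_dyadic_split(*root, depth=0)   # split along axis 0
print(cyclic_dyadic_split(*c1, depth=1))       # children split along axis 1
```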
Previously we presented a rule for pruning DCTs with consistency and rate of convergence
properties. In this section we review those results, setting the stage for our main result in
the next section. Let $m = 2^J$ be a dyadic integer, and define $T_0$ to be the DCT that has every leaf node at depth $dJ$. Then each leaf of $T_0$ corresponds to a cube of side length $1/m$, and $T_0$ has $m^d$ total leaf nodes. Assume a training sample of size $n$ is given, and each node of $T_0$ is labeled according to a majority vote with respect to the training data reaching that node. A subtree $T$ of $T_0$ is referred to as a pruned subtree, denoted $T \le T_0$, if $T$ includes the root of $T_0$, if every internal node of $T$ has both its children in $T$, and if the nodes of $T$ inherit their labels from $T_0$. The size of a tree $T$, denoted $|T|$, is the number of leaf nodes.
We defined the complexity penalized dyadic classification tree $T_n'$ to be the solution of
$$T_n' = \arg\min_{T \le T_0}\; \hat{\epsilon}(T) + \alpha_n \sqrt{|T|}, \quad (1)$$
where $\alpha_n = \sqrt{32 \log(en)/n}$, and $\hat{\epsilon}(T)$ is the empirical error, i.e., the fraction of training data misclassified by $T$. (The solution to this pruning problem can be computed efficiently [7].) We showed that if $X \in [0,1]^d$ with probability one, and $m^d = o(n/\log n)$, then $E\{\epsilon(T_n')\} \to \epsilon^*$ with probability one (i.e., $T_n'$ is consistent). Here, $\epsilon(T) = P\{T(X) \ne Y\}$ is the true error probability for $T$, and $\epsilon^*$ is the Bayes error, i.e., the minimum error probability over all classifiers (not just trees). We also demonstrated a rate of convergence result for $T_n'$, under certain assumptions on the distribution of $(X, Y)$. Let us recall the definition of this class of distributions. Again, let $X \in [0,1]^d$ with probability one.
Definition 1 Let $c_1, c_2 > 0$, and let $m_0$ be a dyadic integer. Define $F = F(c_1, c_2, m_0)$ to be the collection of all distributions on $(X, Y)$ such that
A1 (Bounded density): For any measurable set $A$, $P\{X \in A\} \le c_1 \lambda(A)$, where $\lambda$ denotes the Lebesgue measure.
A2 (Regularity): For all dyadic integers $m \ge m_0$, if we subdivide the unit cube into cubes of side length $1/m$, the Bayes decision boundary passes through at most $c_2 m^{d-1}$ of the resulting $m^d$ cubes.
These assumptions are satisfied when the density of X is essentially bounded with respect
to Lebesgue measure, and when the Bayes decision boundary for the distribution on (X, Y )
behaves locally like a Lipschitz function. See, for example, the boundary fragment class
of [2] with $\gamma = 1$ therein.
In [1], we showed that if the distribution of $(X, Y)$ belongs to $F$, and $m \sim (n/\log n)^{1/(d+1)}$, then $E\{\epsilon(T_n')\} - \epsilon^* = O((\log n/n)^{1/(d+1)})$. However, this upper bound on the rate of convergence is not tight. The results of Mammen and Tsybakov [2] show that the minimax rate of convergence, $\inf_{\hat{\phi}_n} \sup_F E\{\epsilon(\hat{\phi}_n)\} - \epsilon^*$, is on the order of $n^{-1/d}$ (here $\hat{\phi}_n$ ranges over all possible discrimination rules). In the next section, we introduce a new strategy for pruning DCTs, which leads to an improved rate of convergence of $(\log n/n)^{1/d}$ (i.e., within a logarithmic factor of the minimax rate). We are not aware of
other practically implementable classifiers that can achieve this rate.
3 Improved Tree Pruning with Spatially Adaptive Penalties
An improved rate of convergence is achieved by pruning the initial tree T0 using a new
complexity penalty. Given a node v in a tree T , let Tv denote the subtree of T rooted at v.
Let S denote the training data, and let nv denote the number of training samples reaching
node v. Let R denote a pruned subtree of T . In the language of [4], R is called a root
fragment. Let L(R) denote the set of leaf nodes of R.
Consider the pruning rule that selects
$$T_n = \arg\min_{T \le T_0} \left\{ \hat{\epsilon}(T) + \min_{R \le T} \Phi(T, S, R) \right\}, \quad (2)$$
where
$$\Phi(T, S, R) = \sum_{v \in L(R)} \frac{1}{n} \left[ \sqrt{48\, n_v\, |T_v| \log(2n)} + \sqrt{48\, n_v\, d \log(m)} \right].$$
Observe that the penalty is data-dependent (since $n_v$ depends on $S$) and spatially adaptive (choosing $R \le T$ to minimize $\Phi$). The penalty can be interpreted as follows. The first term in the penalty is written $\sum_{v \in L(R)} \hat{p}_v \sqrt{48\, |T_v| \log(2n)/n_v}$, where $\hat{p}_v = n_v/n$. This can be viewed as an empirical average of the complexity penalties for each of the subtrees $T_v$, which depend on the local data associated with each subtree. The second term can be interpreted as the "cost" of spatially decomposing the bound on the generalization error.
The penalty has the following property. Consider pruning one of two subtrees, both with the
same size, and assume that both options result in the same increase in the empirical error.
Then the subtree with more data is selected for pruning. Since deeper nodes typically have
less data, this shows that the penalty favors unbalanced trees, which may promote higher
resolution (deeper leaf nodes) in the vicinity of the decision boundary. In contrast, the
pruning rule (1) penalizes balanced and unbalanced trees (with the same size) equally.
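As a concrete illustration, the sketch below is our own (the tree encoding via per-node dictionaries is hypothetical); it evaluates the penalty $\Phi(T, S, R)$ of rule (2) for a given root fragment, given per-node sample counts $n_v$ and subtree sizes $|T_v|$.

```python
import math

def spatially_adaptive_penalty(leaf_fragment, n_v, subtree_size, n, m, d):
    """Evaluate the penalty Phi(T, S, R) of rule (2).

    leaf_fragment : iterable of node ids forming L(R), the leaves of
                    the root fragment R.
    n_v           : dict, node id -> number of training samples reaching v.
    subtree_size  : dict, node id -> |T_v|, leaf count of the subtree at v.
    n, m, d       : sample size, initial resolution, input dimension.
    """
    total = 0.0
    for v in leaf_fragment:
        local = math.sqrt(48 * n_v[v] * subtree_size[v] * math.log(2 * n))
        cost = math.sqrt(48 * n_v[v] * d * math.log(m))  # spatial decomposition cost
        total += (local + cost) / n
    return total

# Toy example: of two fragment leaves with equal subtree size, the one
# carrying more data contributes the larger penalty term, which is why
# the rule prefers to prune subtrees that carry more data.
print(spatially_adaptive_penalty(["u", "w"],
                                 n_v={"u": 800, "w": 200},
                                 subtree_size={"u": 4, "w": 4},
                                 n=1000, m=16, d=2))
```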
The following theorem bounds the expected error of Tn . This kind of bound is known as
an index of resolvability result [3,8]. Recall that m specifies the depth of the initial tree T0 .
Theorem 1 If $m \sim (n/\log n)^{1/d}$, then
$$E\{\epsilon(T_n)\} - \epsilon^* \le \min_{T \le T_0} \left\{ (\epsilon(T) - \epsilon^*) + E\left\{ \min_{R \le T} \Phi(T, S, R) \right\} \right\} + O\left(\sqrt{\frac{\log n}{n}}\right).$$
The first term in braces on the right is the approximation error. The remaining terms on the
right-hand side bound the estimation error. Since the bound holds for all T , one feature of
the pruning rule (2) is that $T_n$ performs at least as well as the subtree $T \le T_0$ that minimizes
the bound. This theorem may be applied to give us our desired rate of convergence result.
Theorem 2 Assume the distribution of $(X, Y)$ belongs to $F$. If $m \sim (n/\log n)^{1/d}$, then $E\{\epsilon(T_n)\} - \epsilon^* = O((\log n/n)^{1/d})$.
In other words, the pruning rule (2) comes within a log factor of the minimax rate. These
theorems are proved in the last section.
4 Faster Rates for Smoother Boundaries
In this section we extend Theorem 2 to the case of smoother decision boundaries. Define $G = F(\gamma, c_1, c_2, m_0) \subset F(c_1, c_2, m_0)$ to be those distributions on $(X, Y)$ satisfying the following additional assumption. Here $\gamma \ge 1$ is fixed.
A3 ($\gamma$-regularity): Subdivide $[0, 1]^d$ into cubes of side length $1/m$, $m \ge m_0$. Within each cube the Bayes decision boundary is described by a function (one coordinate is a function of the others) with Hölder regularity $\gamma$.
The collection G contains all distributions whose Bayes decision boundaries behave locally
like the graph of a function with Hölder regularity $\gamma$. The "boundary fragments" class of
Mammen and Tsybakov is a special case of boundaries satisfying A1 and A3.
We propose a classifier, called a polynomial-decorated dyadic classification tree (PDCT),
that achieves fast rates of convergence for distributions satisfying A3. Given a positive
integer r, a PDCT of degree r is a DCT, with class labels at each leaf node assigned by a
degree r polynomial classifier.
Consider the pruning rule that selects
$$T_{n,r} = \arg\min_{T \le T_0} \left\{ \hat{\epsilon}(T) + \min_{R \le T} \Phi_r(T, S, R) \right\}, \quad (3)$$
where
$$\Phi_r(T, S, R) = \sum_{v \in L(R)} \frac{1}{n} \left[ \sqrt{48\, n_v\, V_{d,r}\, |T_v| \log(2n)} + \sqrt{48\, n_v\, (d + \gamma) \log(m)} \right].$$
Here $V_{d,r} = \binom{d+r}{r}$ is the VC dimension of the collection of degree $r$ polynomial classifiers in $d$ dimensions. Also, the notation $T \le T_0$ in (3) is rough. We actually consider a search over all pruned subtrees of $T_0$, and with all possible configurations of degree $r$ polynomial classifiers at the leaf nodes.
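For reference, $V_{d,r} = \binom{d+r}{r}$ is just the number of monomials of degree at most $r$ in $d$ variables; the one-liner below (our own illustration) computes it.

```python
from math import comb

def vc_dim_poly(d, r):
    """V_{d,r} = C(d+r, r): the number of coefficients of a degree-r
    polynomial in d variables, hence the VC dimension of the induced
    linear-threshold class."""
    return comb(d + r, r)

print(vc_dim_poly(2, 1))  # 3: affine classifiers in the plane
print(vc_dim_poly(2, 2))  # 6: conic-section boundaries
```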
An index of resolvability result analogous to Theorem 1 for $T_{n,r}$ can be derived. Moreover, if $r = \lceil \gamma \rceil - 1$, then a decision boundary with Hölder regularity $\gamma$ is well approximated by
a PDCT of degree r. In this case, Tn,r converges to the Bayes risk at rates bounded by the
next theorem.
Theorem 3 Assume the distribution of $(X, Y)$ belongs to $G$ and that $r = \lceil \gamma \rceil - 1$. If $m \sim (n/\log n)^{1/(d+2\gamma-2)}$, then
$$E\{\epsilon(T_{n,r})\} - \epsilon^* = O\left((\log n/n)^{\gamma/(d+2\gamma-2)}\right).$$
Note that in the case $\gamma = 1$ this result coincides with the near-minimax rate in Theorem 2. Also notice that as $\gamma \to \infty$, the rate of convergence comes within a logarithmic factor of the parametric rate $n^{-1/2}$. The proof is discussed in the final section.
5 Efficient Algorithms
The optimally pruned subtree $T_n$ of rule (2) can be computed exactly in $O(|T_0|^2)$ operations. This follows from a simple bottom-up dynamic programming algorithm, which we describe below, and uses a method for "square-root" pruning studied in [7]. In the context of Theorem 2, we have $|T_0| = m^d \le n$, so the algorithm runs in time $O(n^2)$.
Note that an algorithm for finding the optimal $R \le T$ was provided in [4]. We now describe an algorithm for finding both the optimal $T \le T_0$ and $R \le T$ solving (2). Given a node $v \in T_0$, let $T_v^*$ be the subtree of $T_0$ rooted at $v$ that minimizes the objective function of (2), and let $R_v^*$ be the associated subtree that minimizes $\Phi(T_v^*, S, R)$. The problem is solved by finding $T_{root}^*$ and $R_{root}^*$ using a bottom-up procedure.
If $v$ is a leaf node of $T_0$, then clearly $T_v^* = R_v^* = \{v\}$. If $v$ is an internal node, denote the children of $v$ by $u$ and $w$. There are three cases for $T_v^*$ and $R_v^*$: (i) $|T_v^*| = |R_v^*| = 1$, in which case $T_v^* = R_v^* = \{v\}$; (ii) $|T_v^*| \ge |R_v^*| > 1$, in which case $T_v^*$ and $R_v^*$ can be computed by merging $T_u^*$ with $T_w^*$ and $R_u^*$ with $R_w^*$, respectively; (iii) $|T_v^*| > |R_v^*| = 1$, in which case $R_v^* = \{v\}$, and $T_v^*$ is determined by solving a square-root pruning problem, just like the one in (1). At each node, these three candidates are determined, and $T_v^*$ and $R_v^*$ are the candidates minimizing the objective function (empirical error plus penalty) at each node. Using the first algorithm in [7], the overall pruning procedure may be accomplished in $O(|T_0|^2)$ operations.
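The following sketch of the recursion is our own (the node interface is hypothetical, and the square-root pruning subroutine of [7] is treated as a black box); it only outlines the three-case structure described above, not the authors' implementation.

```python
def prune(v, sqrt_prune, objective):
    """Bottom-up computation of (T_v*, R_v*) for every node v of T_0.

    sqrt_prune(v)   : black-box solver for the square-root pruning
                      problem (1) on the subtree rooted at v (case iii).
    objective(T, R) : empirical error of T plus the penalty Phi(T, S, R).
    Each node v has attributes .is_leaf, .left, .right; trees are
    represented as sets of nodes.
    """
    if v.is_leaf:
        return {v}, {v}                       # T_v* = R_v* = {v}
    Tu, Ru = prune(v.left, sqrt_prune, objective)
    Tw, Rw = prune(v.right, sqrt_prune, objective)
    candidates = [
        ({v}, {v}),                           # case (i): prune everything below v
        ({v} | Tu | Tw, {v} | Ru | Rw),       # case (ii): merge children's solutions
        (sqrt_prune(v), {v}),                 # case (iii): R_v* = {v}
    ]
    return min(candidates, key=lambda TR: objective(*TR))
```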
Determining the optimally pruned degree $r$ PDCT is more challenging. The problem requires the construction, at each node of $T_0$, of a polynomial classifier of degree $r$ having
minimum empirical error. Unfortunately, this task is computationally infeasible for large
sample sizes. As an alternative, we recommend the use of polynomial support vector machines. SVMs are well known for their good generalization ability in practical problems.
Moreover, linear SVMs in perceptron trees have been shown to work well [6].
6 Conclusions
A key aspect of DCTs is their spatial adaptivity, which enables local (rather than global)
fitting of the decision boundary. Our risk analysis involves a spatial decomposition of the
usual concentration inequalities, leading to a spatially adaptive, data-dependent pruning
criterion that promotes unbalanced trees that focus on the decision boundary. For distributions on $(X, Y)$ whose Bayes decision boundary behaves locally like a Hölder-$\gamma$ smooth function, we show that the PDCT error converges to the Bayes error at a rate no slower than $O((\log n/n)^{\gamma/(d+2\gamma-2)})$. Polynomial kernel methods are not capable of achieving such rates due to their lack of spatial adaptivity. When $\gamma = 1$, the DCT convergence rate is within a logarithmic factor of the minimax optimal rate. As $\gamma \to \infty$ the rate tends to within a log factor of $n^{-1/2}$, the parametric rate. However, the rates for $\gamma > 1$ are not within a
logarithmic factor of the minimax rate [2]. It may be possible to tighten the bounds further.
On the other hand, near-minimax rates might not be achievable using rectangular partitions,
and more flexible partitioning schemes, such as adaptive triangulations, may be required.
7 Proof Sketches
The key to proving Theorem 1 is the following result, which is a modified version of a
theorem of Mansour and McAllester [4].
Lemma 1 Let $\delta \in (0, 1)$. With probability at least $1 - \delta$, every $T \le T_0$ satisfies
$$\epsilon(T) \le \hat{\epsilon}(T) + \min_{R \le T} f(T, S, R, \delta),$$
where
$$f(T, S, R, \delta) = \sum_{v \in L(R)} \frac{1}{n} \left[ \sqrt{48\, n_v\, |T_v| \log(2n)} + \sqrt{24\, n_v\, [d \log(m) + \log(3/\delta)]} + 2[d \log(m) + \log(3/\delta)] \right].$$
Our primary modification to the lemma is to replace one local uniform deviation inequality
(which holds for countable collections of classifiers [4, Lemma 4]) with another (which
holds for infinite collections of classifiers [3, Lemma 2]). This eases our extension to
polynomial-decorated DCTs in Section 4, by allowing us to avoid tedious quantization
arguments.
To prove Theorem 1, define the event $\Omega_m$ to be the collection of all training samples $S$ such that for all $T \le T_0$, the bound of Lemma 1 holds, with $\delta = 3/m^d$. By that lemma, $P(\Omega_m) \ge 1 - 3/m^d$. Let $T \le T_0$ be arbitrary. We have
$$E\{\epsilon(T_n) - \epsilon(T)\} = P(\Omega_m)\, E\{\epsilon(T_n) - \epsilon(T) \mid \Omega_m\} + P(\Omega_m^c)\, E\{\epsilon(T_n) - \epsilon(T) \mid \Omega_m^c\} \le E\{\epsilon(T_n) - \epsilon(T) \mid \Omega_m\} + \frac{3}{m^d}.$$
Given $S \in \Omega_m$, we know
$$\epsilon(T_n) \le \hat{\epsilon}(T_n) + \min_{R \le T_n} f(T_n, S, R, 3m^{-d}) = \hat{\epsilon}(T_n) + \min_{R \le T_n} \Phi(T_n, S, R) + \frac{4d \log(m)}{n} \le \hat{\epsilon}(T) + \min_{R \le T} \Phi(T, S, R) + \frac{4d \log(m)}{n},$$
where the last inequality comes from the definition of $T_n$. From Chernoff's inequality, we know $P\{\hat{\epsilon}(T) \ge \epsilon(T) + t\} \le e^{-2nt^2}$. By applying this bound, and the fact $E\{Z\} \le \int_0^\infty P\{Z > t\}\, dt$, the theorem is proved. □
7.1 Proof of Theorem 2
By Theorem 1, it suffices to find a tree $T^* \le T_0$ such that
$$E\left\{ \min_{R \le T^*} \Phi(T^*, S, R) \right\} + (\epsilon(T^*) - \epsilon^*) = O\left( \left( \frac{\log n}{n} \right)^{1/d} \right).$$
Define $T^*$ to be the tree obtained by pruning back $T_0$ at every node (thought of as a region of space) that does not intersect the Bayes decision boundary. It can be shown without much difficulty that $\epsilon(T^*) - \epsilon^* = O((\log n/n)^{1/d})$ [9, Lemma 1]. It remains to bound the estimation error.
Recall that $T_0$ (and hence $T^*$) has depth $Jd$, where $J = \log_2(m)$. Define $R^*$ to be the pruned subtree of $T^*$ consisting of all nodes in $T^*$ up to depth $j_0 d$, where $j_0 = J - (1/d)\log_2(J)$ (truncated if necessary). Let $\Omega_v$ be the set of all training samples such that $n_v \le 2n p_v$. Let $\Omega$ be the set of all training samples $S$ such that $S \in \Omega_v$ for all $v \in L(R^*)$.
Now
$$E\left\{ \min_{R \le T^*} \Phi(T^*, S, R) \right\} \le P(\Omega)\, E\left\{ \min_{R \le T^*} \Phi(T^*, S, R) \,\Big|\, \Omega \right\} + P(\Omega^c)\, E\left\{ \min_{R \le T^*} \Phi(T^*, S, R) \,\Big|\, \Omega^c \right\}.$$
It can be shown, by applying the union bound, A2, and a theorem of Okamoto [10], that
$P(\Omega^c) = O((\log n/n)^{1/d})$. Moreover, the second expectation on the right is easily seen to be $O(1)$ by considering the root fragment consisting of only the root node. Hence it remains to bound the first term on the right-hand side. We use $P(\Omega) \le 1$, and focus on bounding the expectation. It can be shown, assuming $S \in \Omega$, that $\Phi(T^*, S, R^*) = O((\log n/n)^{1/d})$. It suffices to bound the first term of $\Phi(T^*, S, R^*)$, which clearly dominates the second term. The first term, consisting of a sum of terms over the leaf nodes of $R^*$, is dominated by the sum of those terms over the leaf nodes of $R^*$ at depth $j_0 d$. The number of such nodes may be bounded by assumption A2. The remaining expression is bounded using assumptions A1 and A2, as well as the definitions of $T^*$, $R^*$, and $\Omega$.
7.2 Proof of Theorem 3
The estimation error is increased by a constant $\sqrt{V_{d,r}}$, so its asymptotic analysis remains unchanged. The only significant change is in the analysis of the approximation error. The tree $T^*$ is defined as in the previous proof. Recall the leaf nodes of $T^*$ at maximum depth are cells of side length $1/m$. By a simple Taylor series argument, the approximation error $\epsilon(T^*) - \epsilon^*$ behaves like $m^{-\gamma}$. The remainder of the proof is essentially the same as the proof of Theorem 2.
Acknowledgments
This work was partially supported by the National Science Foundation, grant nos. MIP-9701692 and ANI-0099148, the Army Research Office, grant no. DAAD19-99-1-0349,
and the Office of Naval Research, grant no. N00014-00-1-0390.
References
[1] C. Scott and R. Nowak, "Dyadic classification trees via structural risk minimization," in Advances in Neural Information Processing Systems 14, S. Becker, S. Thrun, and K. Obermayer, Eds., Cambridge, MA, 2002, MIT Press.
[2] E. Mammen and A. B. Tsybakov, "Smooth discrimination analysis," Annals of Statistics, vol. 27, pp. 1808-1829, 1999.
[3] M. Kearns and Y. Mansour, "A fast, bottom-up decision tree pruning algorithm with near-optimal generalization," in International Conference on Machine Learning, 1998, pp. 269-277.
[4] Y. Mansour and D. McAllester, "Generalization bounds for decision trees," in Proceedings of the Thirteenth Annual Conference on Computational Learning Theory, Palo Alto, California, Nicolò Cesa-Bianchi and Sally A. Goldman, Eds., 2000, pp. 69-74.
[5] K. Bennett and J. Blue, "A support vector machine approach to decision trees," in Proceedings of the IEEE International Joint Conference on Neural Networks, Anchorage, Alaska, 1998, vol. 41, pp. 2396-2401.
[6] K. Bennett, N. Cristianini, J. Shawe-Taylor, and D. Wu, "Enlarging the margins in perceptron decision trees," Machine Learning, vol. 41, pp. 295-313, 2000.
[7] C. Scott, "Tree pruning using a non-additive penalty," Tech. Rep. TREE 0301, Rice University, 2003, available at http://www.dsp.rice.edu/~cscott/pubs.html.
[8] A. Barron, "Complexity regularization with application to artificial neural networks," in Nonparametric Functional Estimation and Related Topics, G. Roussas, Ed., pp. 561-576. NATO ASI series, Kluwer Academic Publishers, Dordrecht, 1991.
[9] C. Scott and R. Nowak, "Complexity-regularized dyadic classification trees: Efficient pruning and rates of convergence," Tech. Rep. TREE0201, Rice University, 2002, available at http://www.dsp.rice.edu/~cscott/pubs.html.
[10] M. Okamoto, "Some inequalities relating to the partial sum of binomial probabilities," Annals of the Institute of Statistical Mathematics, vol. 10, pp. 29-35, 1958.
Large Scale Online Learning
L?eon Bottou
NEC Labs America
Princeton NJ 08540
[email protected]
Yann Le Cun
NEC Labs America
Princeton NJ 08540
[email protected]
Abstract
We consider situations where training data is abundant and computing
resources are comparatively scarce. We argue that suitably designed online learning algorithms asymptotically outperform any batch learning
algorithm. Both theoretical and experimental evidence is presented.
1 Introduction
The last decade brought us tremendous improvements in the performance and price of mass
storage devices and network systems. Storing and shipping audio or video data is now
inexpensive. Network traffic itself provides new and abundant sources of data in the form
of server log files. The availability of such large data sources provides clear opportunities
for the machine learning community.
These technological improvements have outpaced the exponential evolution of the computing power of integrated circuits (Moore's law). This remark suggests that learning algorithms must process increasing amounts of data using comparatively smaller computing
resources.
This work assumes that datasets have grown to practically infinite sizes and discusses which
learning algorithms asymptotically provide the best generalization performance using limited computing resources.
• Online algorithms operate by repetitively drawing a fresh random example and
adjusting the parameters on the basis of this single example only. Online algorithms can quickly process a large number of examples. On the other hand, they
usually are not able to fully optimize the cost function defined on these examples.
• Batch algorithms avoid this issue by completely optimizing the cost function defined on a set of training examples. On the other hand, such algorithms cannot
process as many examples because they must iterate several times over the training set to achieve the optimum.
As datasets grow to practically infinite sizes, we argue that online algorithms outperform
learning algorithms that operate by repetitively sweeping over a training set.
2 Gradient Based Learning
Many learning algorithms optimize an empirical cost function $C_n(\theta)$ that can be expressed as the average of a large number of terms $L(z, \theta)$. Each term measures the cost associated with running a model with parameter vector $\theta$ on independent examples $z_i$ (typically input/output pairs $z_i = (x_i, y_i)$):
$$C_n(\theta) \;\triangleq\; \frac{1}{n} \sum_{i=1}^{n} L(z_i, \theta) \quad (1)$$
Two kinds of optimization procedures are often mentioned in connection with this problem:
• Batch gradient: Parameter updates are performed on the basis of the gradient and Hessian information accumulated over a predefined training set:
$$\theta(k) = \theta(k-1) - \Phi_k\, \frac{\partial C_n}{\partial \theta}(\theta(k-1)) = \theta(k-1) - \Phi_k\, \frac{1}{n} \sum_{i=1}^{n} \frac{\partial L}{\partial \theta}(z_i, \theta(k-1)) \quad (2)$$
where $\Phi_k$ is an appropriately chosen positive definite symmetric matrix.
• Online gradient: Parameter updates are performed on the basis of a single sample $z_t$ picked randomly at each iteration:
$$\theta(t) = \theta(t-1) - \frac{1}{t}\, \Phi_t\, \frac{\partial L}{\partial \theta}(z_t, \theta(t-1)) \quad (3)$$
where $\Phi_t$ is again an appropriately chosen positive definite symmetric matrix. Very often the examples $z_t$ are chosen by cycling over a randomly permuted training set. Each cycle is called an epoch. This paper however considers situations where the supply of training samples is practically unlimited. Each iteration of the online algorithm utilizes a fresh sample, unlikely to have been presented to the system before.
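To make the contrast concrete, here is a minimal sketch of ours (not the authors' code) of one batch update (2) versus one online update (3), using identity scaling matrices $\Phi = I$ scaled by a learning rate and an illustrative squared loss; all names and constants are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def grad_L(z, theta):
    """Gradient of an illustrative squared loss L(z, theta) = (y - theta.x)^2 / 2."""
    x, y = z
    return (theta @ x - y) * x

def batch_update(theta, data, lr=0.1):
    """One step of rule (2) with Phi_k = lr * I: averages the gradient over all data."""
    g = np.mean([grad_L(z, theta) for z in data], axis=0)
    return theta - lr * g

def online_update(theta, z, t, lr=1.0):
    """One step of rule (3) with Phi_t = lr * I: one fresh sample, 1/t schedule."""
    return theta - (lr / t) * grad_L(z, theta)

theta_star = np.array([1.0, -2.0])
data = [(x, x @ theta_star + 0.1 * rng.standard_normal())
        for x in rng.standard_normal((1000, 2))]

theta_b = theta_o = np.zeros(2)
for t, z in enumerate(data, start=1):
    theta_o = online_update(theta_o, z, t)   # one fresh example per iteration
for _ in range(50):
    theta_b = batch_update(theta_b, data)    # each iteration sweeps all examples
print(theta_o, theta_b)                      # both approach theta_star
```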
Simple batch algorithms converge linearly¹ to the optimum $\theta_n^*$ of the empirical cost. Careful choices of $\Phi_k$ make the convergence super-linear or even quadratic² in favorable cases
(Dennis and Schnabel, 1983).
Whereas online algorithms may converge to the general area of the optimum at least as fast
as batch algorithms (Le Cun et al., 1998), the optimization proceeds rather slowly during
the final convergence phase (Bottou and Murata, 2002). The noisy gradient estimate causes
the parameter vector to fluctuate around the optimum in a bowl whose size decreases like
1/t at best.
Online algorithms therefore seem hopelessly slow. However, the above discussion compares the speed of convergence toward the minimum of the empirical cost $C_n$, whereas one should be much more interested in the convergence toward the minimum $\theta^*$ of the expected cost $C_\infty$, which measures the generalization performance:
$$C_\infty(\theta) \;\triangleq\; \int L(z, \theta)\, p(z)\, dz \quad (4)$$
Density $p(z)$ represents the unknown distribution from which the examples are drawn (Vapnik, 1974). This is the fundamental difference between optimization speed and learning speed.
¹ Linear convergence speed: $\log 1/|\theta(k) - \theta_n^*|^2$ grows linearly with $k$.
² Quadratic convergence speed: $\log\log 1/|\theta(k) - \theta_n^*|^2$ grows linearly with $k$.
3 Learning Speed
Running an efficient batch algorithm on a training set of size $n$ quickly yields the empirical optimum $\theta_n^*$. The sequence of empirical optima $\theta_n^*$ usually converges to the solution $\theta^*$ when the training set size $n$ increases.
In contrast, online algorithms randomly draw one example zt at each iteration. When these
examples are drawn from a set of n examples, the online algorithm minimizes the empirical
error Cn . When these examples are drawn from the asymptotic distribution p(z), it minimizes the expected cost C? . Because the supply of training samples is practically unlimited, each iteration of the online algorithm utilizes a fresh example. These fresh examples
follow the asymptotic distribution. The parameter vectors ?(t) thus directly converge to the
optimum $\theta^*$ of the expected cost $C_\infty$.
The convergence speed of the batch $\theta_n^*$ and online $\theta(t)$ sequences was first compared by
Murata and Amari (1999). This section reports a similar result whose derivation uncovers
a deeper relationship between these two sequences. This approach also provides a mathematically rigorous treatment (Bottou and Le Cun, 2003).
Let us first define the Hessian matrix $H$ and Fisher information matrix $G$:
$$H \;\triangleq\; E\left( \frac{\partial^2}{\partial \theta\, \partial \theta} L(z, \theta^*) \right), \qquad G \;\triangleq\; E\left( \frac{\partial L}{\partial \theta}(z, \theta^*)\, \frac{\partial L}{\partial \theta}(z, \theta^*)^T \right).$$
Manipulating a Taylor expansion of the gradient of $C_n(\theta)$ in the vicinity of $\theta_{n-1}^*$ immediately provides the following recursive relation between $\theta_n^*$ and $\theta_{n-1}^*$:
$$\theta_n^* = \theta_{n-1}^* - \frac{1}{n}\, \Psi_n\, \frac{\partial L}{\partial \theta}(z_n, \theta_{n-1}^*) + O\left(\frac{1}{n^2}\right) \quad (5)$$
with
$$\Psi_n \;\triangleq\; \left( \frac{1}{n} \sum_{i=1}^{n} \frac{\partial^2}{\partial \theta\, \partial \theta} L(z_i, \theta_{n-1}^*) \right)^{-1} \;\longrightarrow\; H^{-1} \quad (n \to \infty).$$
Relation (5) describes the $\theta_n^*$ sequence as a recursive stochastic process that is essentially
similar to the online learning algorithm (3). Each iteration of this ?algorithm? consists
in picking a fresh example zn and updating the parameters according to (5). This is not a
practical algorithm because we have no analytical expression for the second order term. We
can however apply the mathematics of online learning algorithms to this stochastic process.
The similarity between (5) and (3) suggests that both the batch and online sequences converge at the same speed for adequate choices of the scaling matrix $\Phi_t$. Under customary regularity conditions, the following asymptotic speed result holds when the scaling matrix $\Phi_t$ converges to the inverse $H^{-1}$ of the Hessian matrix:
$$E\left\{ |\theta(t) - \theta^*|^2 \right\} + o\left(\frac{1}{t}\right) \;=\; E\left\{ |\theta_t^* - \theta^*|^2 \right\} + o\left(\frac{1}{t}\right) \;=\; \frac{\mathrm{tr}\left(H^{-1} G H^{-1}\right)}{t} \quad (6)$$
This convergence speed expression has been discovered many times. Tsypkin (1973) establishes (6) for linear systems. Murata and Amari (1999) address generic stochastic gradient
algorithms with a constant scaling matrix. Our result (Bottou and Le Cun, 2003) holds
when the scaling matrix $\Phi_t$ depends on the previously seen examples, and also holds when
the stochastic update is perturbed by unspecified second order terms, as in equation (5).
See the appendix for a proof sketch (Bottou and LeCun, 2003).
Result (6) applies to both the online $\theta(t)$ and batch $\theta_t^*$ sequences. Not only does it establish that both sequences have $O(1/t)$ convergence, but also it provides the value of the constant. This constant is neither affected by the second order terms of (5) nor by the convergence speed of the scaling matrix $\Phi_t$ toward $H^{-1}$.
In the Maximum Likelihood case, it is well known that both H and G are equal on the
optimum. Equation (6) then indicates that the convergence speed saturates the Cramer-Rao
bound. This fact was known in the case of the natural gradient algorithm (Amari, 1998). It
remains true for a large class of online learning algorithms.
Result (6) suggests that the scaling matrix $\Phi_t$ should be a full rank approximation of the Hessian $H$. Maintaining such an approximation becomes expensive when the dimension of the parameter vector increases. The computational cost of each iteration can be drastically reduced by maintaining only a coarse approximation of the Hessian (e.g. diagonal, block-diagonal, multiplicative, etc.). A proper setup ensures that the convergence speed remains $O(1/t)$ despite a less favorable constant factor.
The similar nature of the convergence of the batch and online sequences can be summarized
as follows. Consider two optimally designed batch and online learning algorithms. The
best generalization error is asymptotically achieved by the learning algorithm that uses the
most examples within the allowed time.
4 Computational Cost
The discussion so far has established that a properly designed online learning algorithm
performs as well as any batch learning algorithm for a same number of examples. We
now establish that, given the same computing resources, an online learning algorithm can
asymptotically process more examples than a batch learning algorithm.
Each iteration of a batch learning algorithm running on N training examples requires a time
K1 N + K2 . Constants K1 and K2 respectively represent the time required to process each
example, and the time required to update the parameters. Result (6) provides the following
asymptotic equivalence:
$$\left(\theta_N^* - \theta^*\right)^2 \sim \frac{1}{N}.$$
The batch algorithm must perform enough iterations to approximate $\theta_N^*$ with at least the same accuracy ($\sim 1/N$). An efficient algorithm with quadratic convergence achieves this
after a number of iterations asymptotically proportional to log log N .
Running an online learning algorithm requires a constant time K3 per processed example.
Let us call T the number of examples processed by the online learning algorithm using the
same computing resources as the batch algorithm. We then have:
$$K_3 T \sim (K_1 N + K_2) \log\log N \;\Longrightarrow\; T \sim N \log\log N$$
The parameter $\theta(T)$ of the online algorithm also converges according to (6). Comparing
the accuracies of both algorithms shows that the online algorithm asymptotically provides
a better solution by a factor O (log log N ).
$$\left(\theta(T) - \theta^*\right)^2 \sim \frac{1}{N \log\log N} \ll \frac{1}{N} \sim \left(\theta_N^* - \theta^*\right)^2$$
This log log N factor corresponds to the number of iterations required by the batch algorithm. This number increases slowly with the desired accuracy of the solution. In practice,
this factor is much less significant than the actual value of the constants K1 , K2 and K3 .
Experience shows however that online algorithms are considerably easier to implement.
Each iteration of the batch algorithm involves a large summation over all the available
examples. Memory must be allocated to hold these examples. On the other hand, each
iteration of the online algorithm only involves one random example which can then be
discarded.
5 Experiments
A simple validation experiment was carried out using synthetic data. The examples are
input/output pairs $(x, y)$ with $x \in \mathbb{R}^{20}$ and $y = \pm 1$. The model is a single sigmoid unit trained using the least squares criterion
$$L(x, y, \theta) = \left(1.5y - f(\theta x)\right)^2$$
where f (x) = 1.71 tanh(0.66x) is the standard sigmoid discussed in LeCun et al. (1998).
The sigmoid generates various curvature conditions in the parameter space, including negative curvature and plateaus. This simple model represents well the final convergence phase
of the learning process. Yet it is also very similar to the widely used generalized linear
models (GLIM) (Chambers and Hastie, 1992).
The first component of the input x is always 1 in order to compensate the absence of a
bias parameter in the model. The remaining 19 components are drawn from two Gaussian
distributions, centered on $(-1, -1, \ldots, -1)$ for the first class and $(+1, +1, \ldots, +1)$ for the
second class. The eigenvalues of the covariance matrix of each class range from 1 to 20.
Two separate sets for training and testing were drawn with 1 000 000 examples each. One
hundred permutations of the first set are generated. Each learning algorithm is trained using
various number of examples taken sequentially from the beginning of the permuted sets.
The resulting performance is then measured on the testing set and averaged over the one
hundred permutations.
Batch-Newton algorithm
The reference batch algorithm uses the Newton-Raphson algorithm with Gauss-Newton
approximation (Le Cun et al., 1998). Each iteration visits all the training examples and computes both the gradient $g$ and the Gauss-Newton approximation $H$ of the Hessian matrix:
$$g = \sum_i \frac{\partial L}{\partial \theta}(x_i, y_i, \theta_{k-1}), \qquad H = \sum_i \left(f'(\theta_{k-1} x_i)\right)^2 x_i x_i^T$$
The parameters are then updated using Newton's formula:
$$\theta_k = \theta_{k-1} - H^{-1} g$$
Iterations are repeated until the parameter vector moves by less than 0.01/N where N is
the number of training examples. This algorithm yields quadratic convergence speed.
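A minimal sketch of this reference algorithm follows; it is ours, not the authors' code. The constant factor of the squared loss is absorbed into $g$ so that the step below is a standard Gauss-Newton step, and the stopping constant follows the description above.

```python
import numpy as np

def f(a):
    return 1.71 * np.tanh(0.66 * a)

def f_prime(a):
    return 1.71 * 0.66 / np.cosh(0.66 * a) ** 2

def batch_newton(X, y, max_iter=100):
    """Gauss-Newton batch training of the sigmoid unit.
    X: (N, d) inputs; y: (N,) labels in {-1, +1}."""
    N, d = X.shape
    theta = np.zeros(d)
    for _ in range(max_iter):
        a = X @ theta
        r = 1.5 * y - f(a)                           # residuals
        g = -(r * f_prime(a)) @ X                    # J^T r, with J_i = -f'(a_i) x_i
        H = (X * (f_prime(a) ** 2)[:, None]).T @ X   # Gauss-Newton matrix J^T J
        step = np.linalg.solve(H, g)                 # Newton step H^{-1} g
        theta = theta - step
        if np.linalg.norm(step) < 0.01 / N:          # stopping rule from the text
            break
    return theta
```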
Online-Kalman algorithm
The online algorithm performs a single sequential sweep over the training examples. The
parameter vector is updated after processing each example $(x_t, y_t)$ as follows:
$$\theta_t = \theta_{t-1} - \frac{1}{\tau}\, \Phi_t\, \frac{\partial L}{\partial \theta}(x_t, y_t, \theta_{t-1})$$
The scalar $\tau = \max(20,\, t - 40)$ makes sure that the first few examples do not cause impractically large parameter updates. The scaling matrix $\Phi_t$ is equal to the inverse of a leaky average of the per-example Gauss-Newton approximation of the Hessian:
$$\Phi_t^{-1} = \left(1 - \frac{2}{\tau}\right) \Phi_{t-1}^{-1} + \frac{2}{\tau} \left(f'(\theta_{t-1} x_t)\right)^2 x_t x_t^T$$
The implementation avoids the matrix inversions by directly computing $\Phi_t$ from $\Phi_{t-1}$ using the matrix inversion lemma (see (Bottou, 1998) for instance):
$$\left(\alpha A^{-1} + \beta\, u u^T\right)^{-1} = \frac{1}{\alpha} \left( A - \frac{(A u)(A u)^T}{\alpha/\beta + u^T A u} \right)$$
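A minimal sketch of this online procedure, written by us from the displayed formulas (the initial $\Phi_0 = I$ is our choice, and whether the authors fold the factor 2 of the squared loss into $\Phi$ is not specified; this sketch uses the literal per-example derivative):

```python
import numpy as np

def f(a):
    return 1.71 * np.tanh(0.66 * a)

def f_prime(a):
    return 1.71 * 0.66 / np.cosh(0.66 * a) ** 2

def online_kalman(X, y):
    """Single sequential sweep; Phi is the inverse of a leaky average of
    per-example Gauss-Newton terms, maintained via the inversion lemma."""
    N, d = X.shape
    theta = np.zeros(d)
    Phi = np.eye(d)                        # Phi_0: any positive definite start
    for t in range(1, N + 1):
        x, yt = X[t - 1], y[t - 1]
        tau = max(20, t - 40)              # caps the early effective learning rates
        a = theta @ x
        # rank-one update Phi^{-1} <- (1 - 2/tau) Phi^{-1} + (2/tau) u u^T,
        # applied directly to Phi via (alpha A^{-1} + beta u u^T)^{-1}
        alpha, beta = 1.0 - 2.0 / tau, 2.0 / tau
        u = f_prime(a) * x
        Pu = Phi @ u
        Phi = (Phi - np.outer(Pu, Pu) / (alpha / beta + u @ Pu)) / alpha
        grad = -2.0 * (1.5 * yt - f(a)) * f_prime(a) * x   # dL/dtheta, one example
        theta = theta - (1.0 / tau) * (Phi @ grad)
    return theta
```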
Figure 1: Average $(\theta - \theta^*)^2$ as a function of the number of examples. The gray line represents the theoretical prediction (6). Filled circles: batch. Hollow circles: online. The error bars indicate a 95% confidence interval.

Figure 2: Average $(\theta - \theta^*)^2$ as a function of the training time (milliseconds). Hollow circles: online. Filled circles: batch. The error bars indicate a 95% confidence interval.
The resulting algorithm slightly differs from the Adaptive Natural Gradient algorithm
(Amari, Park, and Fukumizu, 1998). In particular, there is little need to adjust a learning
rate parameter in the Gauss-Newton approach. The $1/t$ (or $1/\tau$) schedule is asymptotically
optimal.
Results
The optimal parameter vector $\theta^*$ was first computed on the testing set using the batch-Newton approach. The matrices $H$ and $G$ were computed on the testing set as well in order to determine the constant in relation (6).
Figure 1 plots the average squared distance between the optimal parameter vector $\theta^*$ and the parameter vector $\theta$ achieved on training sets of various sizes. The gray line represents
the theoretical prediction. Both the batch points and the online points join the theoretical
prediction when the training set size increases. Figure 2 shows the same data points as a
function of the CPU time required to run the algorithm on a standard PC. The online algorithm gradually becomes more efficient when the training set size increases. This happens
because the batch algorithm needs to perform additional iterations in order to maintain the
same level of accuracy.
In practice, the test set mean squared error (MSE) is usually more relevant than the accuracy
of the parameter vector. Figure 3 displays a logarithmic plot of the difference between the
MSE and the best achievable MSE, that is to say the MSE achieved by parameter vector ? ? .
This difference can be approximated as (? ? ? ? )T H (? ? ?? ). Both algorithms yield virtually
identical errors for the same training set size. This suggests that the small differences shown
in figure 1 occur along the low curvature directions of the cost function. Figure 4 shows the
MSE as a function of the CPU time. The online algorithm always provides higher accuracy
in significantly less time.
As expected from the theoretical argument, the online algorithm asymptotically outperforms the super-linear Newton-Raphson algorithm³. More importantly, the online algorithm achieves this result by performing a single sweep over the training data. This is a
very significant advantage when the data does not fit in central memory and must be sequentially accessed from a disk based database.
³ Generalized linear models are usually trained using the IRLS method (Chambers and Hastie,
1992) which is closely related to the Newton-Raphson algorithm and requires similar computational
resources.
Figure 3: Average test MSE as a function of the number of examples. The vertical axis shows the logarithm of the difference between the error and the best error achievable on the testing set. Both curves are essentially superposed.

Figure 4: Average test MSE as a function of the training time (milliseconds). Hollow circles: online. Filled circles: batch. The gray line indicates the best mean squared error achievable on the test set.

6 Conclusion
Many popular algorithms do not scale well to large number of examples because they were
designed with small data sets in mind. For instance, the training time for Support Vector
Machines scales somewhere between $N^2$ and $N^3$, where $N$ is the number of examples.
Our baseline super-linear batch algorithm learns in N log log N time. We demonstrate that
adequate online algorithms asymptotically achieve the same generalization performance in
N time after a single sweep on the training set.
The convergence of learning algorithms is usually described in terms of a search phase
followed by a final convergence phase (Bottou and Murata, 2002). Solid empirical evidence (Le Cun et al., 1998) suggests that online algorithms outperform batch algorithms
during the search phase. The present work provides both theoretical and experimental evidence that an adequate online algorithm outperforms any batch algorithm during the final
convergence phase as well.
Appendix⁴: Sketch of the convergence speed proof
Lemma – Let $(u_t)$ be a sequence of positive reals verifying the following recurrence:
$$u_t = \left(1 - \frac{\alpha}{t} + o\left(\frac{1}{t}\right)\right) u_{t-1} + \frac{\beta}{t^2} + o\left(\frac{1}{t^2}\right) \quad (7)$$
The lemma states that $t\, u_t \to \frac{\beta}{\alpha - 1}$ when $\alpha > 1$ and $\beta > 0$. The proof is delicate because
the result holds regardless of the unspecified low order terms of the recurrence. However,
it is easy to illustrate this convergence with simple numerical simulations.
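For instance, the following small simulation (ours; the values of $\alpha$, $\beta$ and the horizon are arbitrary) iterates $u_t = (1 - \alpha/t)\, u_{t-1} + \beta/t^2$ and checks that $t\, u_t$ approaches $\beta/(\alpha - 1)$.

```python
def simulate(alpha, beta, T=10**6, u=1.0):
    """Iterate u_t = (1 - alpha/t) u_{t-1} + beta/t^2 and return t * u_t."""
    for t in range(1, T + 1):
        u = (1.0 - alpha / t) * u + beta / t**2
    return T * u

# With alpha = 2 and beta = 3, the limit beta / (alpha - 1) is 3.
print(simulate(alpha=2.0, beta=3.0))   # approximately 3
```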
Convergence speed – Consider the following recursive stochastic process:
$$\theta(t) = \theta(t-1) - \frac{1}{t}\, \Phi_t\, \frac{\partial L}{\partial \theta}(z_t, \theta(t-1)) + O\left(\frac{1}{t^2}\right) \quad (8)$$
Our discussion addresses the final convergence phase of this process. Therefore we assume
that the parameters $\theta$ remain confined in a bounded domain $D$ where the cost function $C_\infty(\theta)$ is convex and has a single non-degenerate minimum $\theta^* \in D$. We can assume $\theta^* = 0$ without loss of generality. We write $E_t(X)$ the conditional expectation of $X$ given all that is known before time $t$, including the initial conditions $\theta_0$ and the selected examples $z_1, \ldots, z_{t-1}$. We initially assume also that $\Phi_t$ is a function of $z_1, \ldots, z_{t-1}$ only.
⁴ This section has been added for the final version.
Using (8), we write $E_t(\theta_t \theta_t')$ as a function of $\theta_{t-1}$. Then we simplify⁵ and take the trace:
$$E_t\left(|\theta_t|^2\right) = |\theta_{t-1}|^2 - \frac{2}{t}|\theta_{t-1}|^2 + o\left(\frac{|\theta_{t-1}|^2}{t}\right) + \frac{\mathrm{tr}\left(H^{-1} G H^{-1}\right)}{t^2} + o\left(\frac{1}{t^2}\right)$$
Taking the unconditional expectation yields a recurrence similar to (7). We then apply the lemma and conclude that $t\, E(|\theta_t|^2) \to \mathrm{tr}\left(H^{-1} G H^{-1}\right)$.
Remark 1 – The notation $o(X_t)$ is quite ambiguous when dealing with stochastic processes. There are many possible flavors of convergence, including uniform convergence,
almost sure convergence, convergence in probability, etc. Furthermore, it is not true in
general that $E(o(X_t)) = o(E(X_t))$. The complete proof precisely defines the meaning
of these notations and carefully checks their properties.
Remark 2 – The proof sketch assumes that $\Phi_t$ is a function of $z_1, \ldots, z_{t-1}$ only. In (5), $\Psi_n$ also depends on $z_n$. The result still holds because the contribution of $z_n$ vanishes quickly when $n$ grows large.
Remark 3 – The same $1/t$ behavior holds when $\Phi_t \to \Phi_\infty$ and when $\Phi_\infty$ is greater than $\frac{1}{2}H^{-1}$ in the semi-definite sense. The constant however is worse by a factor roughly equal to $\|H \Phi_\infty\|$.
Acknowledgments
The authors acknowledge extensive discussions with Yoshua Bengio, Sami Bengio, Ronan
Collobert, Noboru Murata, Kenji Fukumizu, Susanna Still, and Barak Pearlmutter.
References
Amari, S. (1998). Natural Gradient Works Efficiently in Learning. Neural Computation, 10(2):251-276.
Bottou, L. (1998). Online Algorithms and Stochastic Approximations, 9-42. In Saad, D., editor, Online Learning and Neural Networks. Cambridge University Press, Cambridge, UK.
Bottou, L. and Murata, N. (2002). Stochastic Approximations and Efficient Learning. In Arbib, M. A., editor, The Handbook of Brain Theory and Neural Networks, Second edition. The MIT Press, Cambridge, MA.
Bottou, L. and Le Cun, Y. (2003). Online Learning for Very Large Datasets. NEC Labs TR-2003L039. To appear: Applied Stochastic Models in Business and Industry. Wiley.
Chambers, J. M. and Hastie, T. J. (1992). Statistical Models in S, Chapman & Hall, London.
Dennis, J. and Schnabel, R. B. (1983). Numerical Methods For Unconstrained Optimization and Nonlinear Equations. Prentice-Hall, Inc., Englewood Cliffs, New Jersey.
Amari, S. and Park, H. and Fukumizu, K. (1998). Adaptive Method of Realizing Natural Gradient Learning for Multilayer Perceptrons, Neural Computation, 12(6):1399-1409.
Le Cun, Y., Bottou, L., Orr, G. B., and Müller, K.-R. (1998). Efficient Back-prop. In Neural Networks, Tricks of the Trade, Lecture Notes in Computer Science 1524. Springer Verlag.
Murata, N. and Amari, S. (1999). Statistical analysis of learning dynamics. Signal Processing, 74(1):3-28.
Vapnik, V. N. and Chervonenkis, A. (1974). Theory of Pattern Recognition (in Russian). Nauka.
Tsypkin, Ya. (1973). Foundations of the theory of learning systems. Academic Press.
⁵ Recall $E_t\left(\Phi_t \frac{\partial L}{\partial \theta}(z_t, \theta)\right) = \Phi_t \frac{\partial C_\infty}{\partial \theta}(\theta) = \Phi_t H \theta + o(|\theta|) = \theta + o(|\theta|)$.
1,502 | 2,366 | Learning a Distance Metric from Relative
Comparisons
Matthew Schultz and Thorsten Joachims
Department of Computer Science
Cornell University
Ithaca, NY 14853
{schultz,tj}@cs.cornell.edu
Abstract
This paper presents a method for learning a distance metric from relative comparisons such as "A is closer to B than A is to C". Taking a
Support Vector Machine (SVM) approach, we develop an algorithm that
provides a flexible way of describing qualitative training data as a set of
constraints. We show that such constraints lead to a convex quadratic
programming problem that can be solved by adapting standard methods for SVM training. We empirically evaluate the performance and the
modelling flexibility of the algorithm on a collection of text documents.
1
Introduction
Distance metrics are an essential component in many applications ranging from supervised
learning and clustering to product recommendations and document browsing. Since designing such metrics by hand is difficult, we explore the problem of learning a metric from
examples. In particular, we consider relative and qualitative examples of the form "A is closer to B than A is to C". We believe that feedback of this type is more easily available in many application settings than quantitative examples (e.g. "the distance between A and B is 7.35") as considered in metric Multidimensional Scaling (MDS) (see [4]), or absolute qualitative feedback (e.g. "A and B are similar", "A and C are not similar") as considered in [11].
Building on the study in [7], search-engine query logs are one example where feedback of the form "A is closer to B than A is to C" is readily available for learning a (more semantic) similarity metric on documents. Given a ranked result list for a query, documents that are clicked on can be assumed to be semantically closer than those documents that the user observed but decided not to click on (i.e. "A_click is closer to B_click than A_click is to C_noclick"). In contrast, drawing the conclusion that "A_click and C_noclick are not similar" is probably less justified, since a C_noclick ranked high in the presented list is probably still closer to A_click than most documents in the collection.
In this paper, we present an algorithm that can learn a distance metric from such relative
and qualitative examples. Given a parametrized family of distance metrics, the algorithm discriminatively searches for the parameters that best fulfill the training examples. Taking a
maximum-margin approach [9], we formulate the training problem as a convex quadratic
program for the case of learning a weighting of the dimensions. We evaluate the performance and the modelling flexibility of the algorithm on a collection of text documents.
The notation used throughout this paper is as follows. Vectors are denoted with an arrow $\vec{x}$, where $x_i$ is the $i$-th entry of $\vec{x}$. The vector $\vec{0}$ is composed of all zeros, and $\vec{1}$ of all ones. $\vec{x}^T$ is the transpose of $\vec{x}$, and the dot product is denoted by $\vec{x}^T\vec{y}$. We denote the element-wise product of two vectors $\vec{x} = (x_1, \dots, x_n)^T$ and $\vec{y} = (y_1, \dots, y_n)^T$ as $\vec{x} * \vec{y} = (x_1 y_1, \dots, x_n y_n)^T$.
2
Learning from Relative Qualitative Feedback
We consider the following learning setting. Given is a set $X_{train}$ of objects $\vec{x}_i \in \mathbb{R}^N$. As training data, we receive a subset $P_{train}$ of all potential relative comparisons defined over the set $X_{train}$. Each relative comparison $(i, j, k) \in P_{train}$ with $\vec{x}_i, \vec{x}_j, \vec{x}_k \in X_{train}$ has the semantics: $\vec{x}_i$ is closer to $\vec{x}_j$ than $\vec{x}_i$ is to $\vec{x}_k$. The goal of the learner is to learn a weighted distance metric $d_{\vec{w}}(\cdot,\cdot)$ from $P_{train}$ and $X_{train}$ that best approximates the desired notion of distance on a new set of test points $X_{test}$, with $X_{train} \cap X_{test} = \emptyset$. We evaluate the performance of a metric $d_{\vec{w}}(\cdot,\cdot)$ by how many relative comparisons $P_{test}$ it fulfills on the test set.
3
Parameterized Distance Metrics
A (pseudo) distance metric $d(\vec{x}, \vec{y})$ is a function over pairs of objects $\vec{x}$ and $\vec{y}$ from some set $X$. $d(\vec{x}, \vec{y})$ is a pseudo metric iff it obeys the four following properties for all $\vec{x}$, $\vec{y}$, and $\vec{z}$:
$$d(\vec{x}, \vec{x}) = 0, \qquad d(\vec{x}, \vec{y}) = d(\vec{y}, \vec{x}), \qquad d(\vec{x}, \vec{y}) \ge 0, \qquad d(\vec{x}, \vec{y}) + d(\vec{y}, \vec{z}) \ge d(\vec{x}, \vec{z}).$$
It is a metric iff it also obeys $d(\vec{x}, \vec{y}) = 0 \Rightarrow \vec{x} = \vec{y}$.
In this paper, we consider a distance metric $d_{A,W}(\vec{x}, \vec{y})$ between vectors $\vec{x}, \vec{y} \in \mathbb{R}^N$ parameterized by two matrices, $A$ and $W$:
$$d_{A,W}(\vec{x}, \vec{y}) = \sqrt{(\vec{x} - \vec{y})^T A W A^T (\vec{x} - \vec{y})} \qquad (1)$$
$W$ is a diagonal matrix with non-negative entries and $A$ is any real matrix. Note that the matrix $AWA^T$ is positive semi-definite, so that $d_{A,W}(\vec{x}, \vec{y})$ is a valid distance metric.
This parametrization is very flexible. In the simplest case, $A$ is the identity matrix $I$, and
$$d_{I,W}(\vec{x}, \vec{y}) = \sqrt{(\vec{x} - \vec{y})^T I W I^T (\vec{x} - \vec{y})} = \sqrt{(\vec{x} - \vec{y})^T W (\vec{x} - \vec{y})}$$
is a weighted Euclidean distance $d_{I,W}(\vec{x}, \vec{y}) = \sqrt{\sum_i W_{ii} (x_i - y_i)^2}$.
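As a concrete illustration (our own sketch in Python/NumPy; the paper itself ships no code), the weighted Euclidean case reduces to a few lines:

```python
import numpy as np

def weighted_euclidean(x, y, w):
    """Weighted Euclidean distance d_{I,W}(x, y) with W = diag(w), w >= 0."""
    d = x - y
    return np.sqrt(np.sum(w * d * d))

# Example: weights stretch the first axis and shrink the second.
x = np.array([1.0, 2.0])
y = np.array([3.0, 1.0])
w = np.array([4.0, 0.25])
print(weighted_euclidean(x, y, w))  # sqrt(4*4 + 0.25*1) = sqrt(16.25)
```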
In the general case, $A$ can be any real matrix. This corresponds to applying a linear transformation to the input data with the matrix $A$. After the transformation, the distance becomes a Euclidean distance on the transformed input points $A^T\vec{x}$, $A^T\vec{y}$:
$$d_{A,W}(\vec{x}, \vec{y}) = \sqrt{((\vec{x} - \vec{y})^T A)\, W\, (A^T(\vec{x} - \vec{y}))} \qquad (2)$$
The use of kernels $K(\vec{x}, \vec{y}) = \Phi(\vec{x}) \cdot \Phi(\vec{y})$ suggests a particular choice of $A$. Let $\Phi$ be the matrix whose $i$-th column is the (training) vector $\vec{x}_i$ projected into a feature space using the function $\Phi(\vec{x}_i)$. Then
$$d_{\Phi,W}(\Phi(\vec{x}), \Phi(\vec{y})) = \sqrt{((\Phi(\vec{x}) - \Phi(\vec{y}))^T \Phi)\, W\, (\Phi^T(\Phi(\vec{x}) - \Phi(\vec{y})))} \qquad (3)$$
$$= \sqrt{\sum_{i=1}^{n} W_{ii}\,(K(\vec{x}, \vec{x}_i) - K(\vec{y}, \vec{x}_i))^2} \qquad (4)$$
is a distance metric in the feature space.
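A minimal sketch of equation (4): the feature-space distance needs only kernel evaluations against the $n$ training points. The RBF kernel and the variable names here are our illustrative assumptions:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    d = x - y
    return np.exp(-gamma * np.dot(d, d))

def kernel_distance(x, y, X_train, w, gamma=1.0):
    """d_{Phi,W} from eq. (4): sqrt(sum_i w_i (K(x, x_i) - K(y, x_i))^2)."""
    kx = np.array([rbf_kernel(x, xi, gamma) for xi in X_train])
    ky = np.array([rbf_kernel(y, xi, gamma) for xi in X_train])
    return np.sqrt(np.sum(w * (kx - ky) ** 2))
```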
4
An SVM Algorithm for Learning from Relative Comparisons
Given a training set $P_{train}$ of $n$ relative comparisons over a set of vectors $X_{train}$, and the matrix $A$, we aim to fit the parameters in the diagonal matrix $W$ of the distance metric $d_{A,W}(\vec{x}, \vec{y})$ so that the training error (i.e. the number of violated constraints) is minimized. Finding a solution of zero training error is equivalent to finding a $W$ that fulfills the following set of constraints:
$$\forall (i,j,k) \in P_{train}: \quad d_{A,W}(\vec{x}_i, \vec{x}_k) - d_{A,W}(\vec{x}_i, \vec{x}_j) > 0 \qquad (5)$$
If the set of constraints is feasible and a $W$ exists that fulfills all constraints, the solution is typically not unique. We aim to select a matrix $AWA^T$ such that $d_{A,W}(\vec{x}, \vec{y})$ remains as close to an unweighted Euclidean metric as possible. Following [8], we minimize the norm of the eigenvalues $\|\lambda\|_2$ of $AWA^T$. Since $\|\lambda\|_2^2 = \|AWA^T\|_F^2$, this leads to the following optimization problem:
$$\min_W \ \frac{1}{2}\|AWA^T\|_F^2$$
$$\text{s.t.} \ \forall (i,j,k) \in P_{train}: (\vec{x}_i - \vec{x}_k)^T AWA^T (\vec{x}_i - \vec{x}_k) - (\vec{x}_i - \vec{x}_j)^T AWA^T (\vec{x}_i - \vec{x}_j) \ge 1$$
$$W_{ii} \ge 0$$
Unlike in [8], this formulation ensures that dA,W (~x, ~y ) is a metric, avoiding the need for
semi-definite programming like in [11].
As in classification SVMs, we add slack variables [3] to account for constraints that cannot
be satisfied. This leads to the following optimization problem.
$$\min_{W,\xi} \ \frac{1}{2}\|AWA^T\|_F^2 + C\sum_{i,j,k}\xi_{ijk}$$
$$\text{s.t.} \ \forall (i,j,k) \in P_{train}: (\vec{x}_i - \vec{x}_k)^T AWA^T (\vec{x}_i - \vec{x}_k) - (\vec{x}_i - \vec{x}_j)^T AWA^T (\vec{x}_i - \vec{x}_j) \ge 1 - \xi_{ijk}$$
$$\xi_{ijk} \ge 0, \qquad W_{ii} \ge 0$$
The sum of the slack variables $\xi_{ijk}$ in the objective is an upper bound on the number of violated constraints.
All distances $d_{A,W}(\vec{x}, \vec{y})$ can be written in the following linear form. If we let $\vec{w}$ be the diagonal elements of $W$, then the distance $d_{A,W}$ can be written as
$$d_{A,W}(\vec{x}, \vec{y}) = \sqrt{((\vec{x} - \vec{y})^T A)\, W\, (A^T(\vec{x} - \vec{y}))} = \sqrt{\vec{w}^T\left[(A^T\vec{x} - A^T\vec{y}) * (A^T\vec{x} - A^T\vec{y})\right]} \qquad (6)$$
where $*$ denotes the element-wise product. If we let $\Delta_{x_i,x_k} = (A^T\vec{x}_i - A^T\vec{x}_k) * (A^T\vec{x}_i - A^T\vec{x}_k)$, then the constraints in the optimization problem can be rewritten in the following linear form:
$$\forall (i,j,k) \in P_{train}: \quad \vec{w}^T(\Delta_{x_i,x_k} - \Delta_{x_i,x_j}) \ge 1 - \xi_{ijk} \qquad (7)$$
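In implementation terms, each triplet collapses to a fixed feature vector, so checking (7) is a dot product. A short sketch (names are ours, not the paper's):

```python
import numpy as np

def delta(A, xi, xj):
    """Delta_{x_i,x_j} = (A^T x_i - A^T x_j) * (A^T x_i - A^T x_j), element-wise."""
    d = A.T @ xi - A.T @ xj
    return d * d

def constraint_margin(w, A, xi, xj, xk):
    """Left-hand side of (7); the triplet is satisfied when this is >= 1."""
    return w @ (delta(A, xi, xk) - delta(A, xi, xj))
```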
[Figure 1: four panels, 1a), 1b), 2a), 2b).]
Figure 1: Graphical example of using different $A$ matrices. In example 1, $A$ is the identity matrix, and in example 2, $A$ is composed of the training examples projected into a high-dimensional space using an RBF kernel.
Furthermore, the objective function is quadratic, so that the optimization problem can be written as
$$\min_{\vec{w},\xi} \ \frac{1}{2}\vec{w}^T L \vec{w} + C\sum_{i,j,k}\xi_{ijk}$$
$$\text{s.t.} \ \forall (i,j,k) \in P_{train}: \vec{w}^T(\Delta_{x_i,x_k} - \Delta_{x_i,x_j}) \ge 1 - \xi_{ijk}, \qquad \xi_{ijk} \ge 0, \qquad W_{ii} \ge 0 \qquad (8)$$
For the case of $A = I$, $\|AWA^T\|_F^2 = \vec{w}^T L \vec{w}$ with $L = I$. For the case of $A = \Phi$, we define $L = (A^T A) * (A^T A)$ so that $\|AWA^T\|_F^2 = \vec{w}^T L \vec{w}$. Note that $L$ is positive semi-definite in both cases and that, therefore, the optimization problem is convex quadratic.
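Problem (8) can be passed to any standard QP solver. Purely to illustrate its structure, the sketch below instead minimizes the equivalent hinge-loss form $\frac{1}{2}\vec{w}^T L \vec{w} + C \sum_t \max(0, 1 - \vec{d}_t^T \vec{w})$ by projected subgradient descent with projection onto $w \ge 0$; this solver choice is our simplification, not the optimizer used in the paper:

```python
import numpy as np

def learn_weights(D, L, C=1.0, lr=1e-3, iters=2000):
    """D: array of shape (T, n); row t is Delta_{ik} - Delta_{ij} for triplet t."""
    w = np.ones(D.shape[1])
    for _ in range(iters):
        margins = D @ w
        active = margins < 1.0                 # violated (or tight) constraints
        grad = L @ w - C * D[active].sum(axis=0)
        w = np.maximum(w - lr * grad, 0.0)     # projection onto w >= 0
    return w
```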
5
Experiments
In Figure 1, we display a graphical example of our method. Example 1 is an example of
a weighted Euclidean distance. The input data points are shown in 1a) and our training
constraints specify that the distance between two square points should be less than the distance to a circle. Similarly, circles should be closer to each other than to squares. Figure 1
(1b) shows the points after an MDS analysis with the learned distance metric as input. This
learned distance metric intuitively corresponds to stretching the x-axis and shrinking the
y-axis in the original input space.
Example 2 in Figure 1 is an example where we have a similar goal of grouping the squares
together and separating them from the circles. In this example though, there is no way to
use a linear weighting measure to accomplish this task. We used an RBF kernel and learned
a distance metric to separate the clusters. The result is shown in 2b.
To validate the method using a real world example, we ran several experiments on the
WEBKB data set [5]. In order to illustrate the versatility of relative comparisons, we generated three different distance metrics from the same data set and ran three types of tests: an
accuracy test, a learning curve to show how the method generalizes from differing amounts
of training data, and an MDS test to graphically illustrate the new distance measures.
The experimental setup for each of the experiments was the same. We first split X, the set
of all 4,183 documents, into separate training and test sets, $X_{train}$ and $X_{test}$. 70% of all examples in $X$ were added to $X_{train}$ and the remaining 30% to $X_{test}$. We used a binary
feature vector without stemming or stop word removal (63,949 features) to represent each
document because it is the least biased distance metric to start out with. It also performed
best among several different variations of term weighting, stemming and stopword removal.
The relative comparison sets, Ptrain and Ptest , were generated as follows. We present
results for learning three different notions of distance.
- University Distance: This distance is small when the two examples $\vec{x}, \vec{y}$ are from the same university and larger otherwise. For this data set we used webpages from seven universities.
- Topic Distance: This distance metric is small when the two examples $\vec{x}, \vec{y}$ are from the same topic (e.g. both are student webpages) and larger when they are each from a different topic. There are four topics: Student, Faculty, Course and Project webpages.
- Topic+FacultyStudent Distance: Again, when two examples $\vec{x}, \vec{y}$ are from the same topic they have a small distance between them, and a larger distance when they come from different topics. However, we add the additional constraint that the distance between a faculty and a student page is smaller than the distance to pages from other topics.
To build the training constraints $P_{train}$, we first randomly selected three documents $x_i, x_j, x_k$ from $X_{train}$. For the University Distance we added the triplet $(i,j,k)$ to $P_{train}$ if $x_i$ and $x_j$ were from the same university and $x_k$ was from a different university. In building $P_{train}$ for the Topic Distance we added $(i,j,k)$ to $P_{train}$ if $x_i$ and $x_j$ were from the same topic (e.g. "Student Webpages") and $x_k$ was from a different topic (e.g. "Project Webpages"). For the Topic+FacultyStudent Distance, the training triplet $(i,j,k)$ was added to $P_{train}$ if either the topic rule applied (i.e. $x_i$ and $x_j$ were from the same topic and $x_k$ was from a different topic), or if $x_i$ was a faculty webpage, $x_j$ a student webpage, and $x_k$ either a project or course webpage. Thus the constraints specify that a student webpage is closer to a faculty webpage than a faculty webpage is to a course webpage. (A sketch of this sampling procedure appears below.)
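A sketch of the Topic Distance sampling rule just described (the rejection-sampling loop and all names are our assumption; the paper specifies only the membership rule):

```python
import numpy as np

def sample_topic_triplets(labels, num_triplets, rng=np.random.default_rng(0)):
    """Return (i, j, k) index triples with label[i] == label[j] != label[k]."""
    n = len(labels)
    triplets = []
    while len(triplets) < num_triplets:
        i, j, k = rng.integers(0, n, size=3)
        if labels[i] == labels[j] and labels[i] != labels[k] and i != j:
            triplets.append((int(i), int(j), int(k)))
    return triplets
```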
                                 Learned d_w(.,.)   Binary    TFIDF
 University Distance                  98.43%        67.88%    80.72%
 Topic Distance                       75.40%        61.82%    55.57%
 Topic+FacultyStudent Distance        79.67%        63.08%    55.06%

Table 1: Accuracy of different distance metrics on an unseen test set P_test.
The results of the learned distance measures on unseen test sets Ptest are reported in Table
1. In each experiment the regularization parameter C was set to 1 and we used A = I.
We report the percentage of the relative comparisons in Ptest that were satisfied for each of
the three experiments. As a baseline for comparison, we give the results for the static (not
learned) distance metric that performs best on the test set. The best performing metric for
all static Euclidean distances (Binary and TFIDF) used stemming and stopword removal,
which our learned distance did not use. The learned University Distance satisfied 98.43%
of the constraints. This verifies that the learning method can effectively find the relevant
features, since pages usually mentioned which university they were from. For the other
distances, both the Topic Distance and Topic+FacultyStudent Distance satisfied more than
13% more constraints in Ptest than the best unweighted distance. Using a kernel instead of
A = I did not yield improved results.
For the second test, we illustrate on the Topic+FacultyStudent data set how the prediction
accuracy of the method scales with the number of training constraints. The learning curve
[Figure 2: learning curve. y-axis: Percent of Test Set Constraints Satisfied (0.5 to 0.8); x-axis: Size of Training Set in Thousands of Constraints (0 to 250); curves: Learned Distance, Binary L2, TFIDF L2.]
Figure 2: Learning curves for the Topic+FacultyStudent dataset, where the x axis is the size of the training set $P_{train}$ plotted against the y axis, which is the percent of constraints in $P_{test}$ that were satisfied.
is shown in Figure 2 where we plot the training set size (in number of constraints) versus
the percentage of test constraints satisfied. The test set Ptest was held constant and sampled
in the same way as the training set (|Ptest | = 85,907). As Figure 2 illustrates, after the data
set contained more than 150,000 constraints, the performance of the algorithm remained
relatively constant.
As a final test of our method, we graphically display our distance metrics in Table 2. We plot three distance metrics: the standard binary distance (Figure a), the learned metric for the Topic Distance (Figure b), and the learned metric for the Topic+FacultyStudent Distance (Figure c). To produce the plots in Table 2, all pairwise distances between the points in $X_{test}$ were computed and then projected into 2D using a classical, metric MDS algorithm [1].
Figure a) in Table 2 is the result of using the pairwise distances resulting from the unweighted, binary L2 norm in MDS. There is no clear distinction between any of the clusters
in 2 dimensions. In Figure b) we see the results of the learned Topic Distance measure. The
classes were reasonably separated from each other. Figure c) shows the result of using the
learned Topic+FacultyStudent Distance metric. When compared to Figure b), the Faculty
and Student webpages have now moved closer together as desired.
6
Related Work
The most relevant related work is that of Xing et al. [11], which focused on the problem of learning a distance metric to increase the accuracy of nearest-neighbor algorithms. Their work used absolute, qualitative feedback such as "A is similar to B" or "A is dissimilar to B", which is different from the relative constraints considered here. Secondly, their method does not use regularization.
Related are also techniques for semi-supervised clustering, as it is also considered in [11].
While [10] does not change the distance metric, [2] uses gradient descent to adapt a parameterized distance metric according to user feedback.
Other related work are dimension reduction techniques such as Multidimensional Scaling
(MDS) [4] and Latent Semantic Indexing [6]. Metric MDS techniques take as input a
matrix D of dissimilarities (or similarities) between all points in some collection and then
seeks to arrange the points in a d-dimensional space to minimize the stress. The stress of the
arrangement is roughly the difference between the distances in the d-dimensional space and
the distances input in matrix D. LSI uses an eigenvalue decomposition of the original input
space to find the first d principal eigenvectors to describe the data in d dimensions. Our
work differs because the input is a set of relative comparisons, not quantitative distances, and because our method does not project the data into a lower-dimensional space. Non-metric MDS is more
similar to our technique than metric MDS. Instead of preserving the exact distances input,
the non-metric MDS seeks to maintain the rank order of the distances. However, the goal
of our method is not a low dimensional projection, but a new distance metric in the original
space.
7
Conclusion and Future Work
In this paper we presented a method for learning a weighted Euclidean distance from relative constraints. This was accomplished by solving a convex optimization problem similar
to SVMs to find the maximum-margin weight vector. One of the main benefits of the algorithm is that the new type of constraint enables its use in a wider range of applications
than conventional methods. We evaluated the method on a collection of high dimensional
text documents and showed that it can successfully learn different notions of distance.
Future work is needed both with respect to theory and application. In particular, we do
not yet know generalization error bounds for this problem. Furthermore, the power of the
method would be increased, if it was possible to learn more complex metrics that go beyond
feature weighting, for example by incorporating kernels in a more adaptive way.
References
[1] A. Buja, D. Swayne, M. Littman, and N. Dean. Xgvis: Interactive data visualization
with multidimensional scaling. Journal of Computational and Graphical Statistics,
to appear.
[2] D. Cohn, R. Caruana, and A. McCallum. Semi-supervised clustering with user feedback. Technical Report TR2003-1892, Cornell University, 2003.
[3] Corinna Cortes and Vladimir Vapnik. Support-vector networks. Machine Learning,
20(3):273–297, 1995.
[4] T. Cox and M. Cox. Multidimensional Scaling. Chapman & Hall, London, 1994.
[5] M. Craven, D. DiPasquo, D. Freitag, A. McCallum, T. Mitchell, K. Nigam, and
S. Slattery. Learning to extract symbolic knowledge from the world wide web. Proceedings of the 15th National Conference on Artificial Intelligence (AAAI-98), 1998.
[6] Scott C. Deerwester, Susan T. Dumais, Thomas K. Landauer, George W. Furnas, and
Richard A. Harshman. Indexing by latent semantic analysis. Journal of the American
Society of Information Science, 41(6):391–407, 1990.
[7] T. Joachims. Optimizing search engines using clickthrough data. Proceedings of the
ACM Conference on Knowledge Discovery and Data Mining (KDD), 2002.
[8] I.W. Tsang and J.T. Kwok. Distance metric learning with kernels. Proceedings of the
International Conference on Artificial Neural Networks, 2003.
[9] V. Vapnik. Statistical Learning Theory. Wiley, Chichester, GB, 1998.
[10] Kiri Wagstaff, Claire Cardie, Seth Rogers, and Stefan Schroedl. Constrained K-means
clustering with background knowledge. In Proc. 18th International Conf. on Machine
Learning, pages 577–584. Morgan Kaufmann, San Francisco, CA, 2001.
[11] E.P. Xing, A.Y. Ng, M.I. Jordan, and S. Russell. Distance metric learning, with application to clustering with side information. Advances in Neural Information Processing Systems, 2002.
[Table 2, panels a), b), c): 2-D MDS embeddings of the test documents, with points labeled Course, Project, Student, Faculty.]
Table 2: MDS plots of distance functions: a) is the unweighted L2 distance, b) is the Topic Distance, and c) is the Topic+FacultyStudent distance.
1,503 | 2,367 | Robustness in Markov Decision Problems with
Uncertain Transition Matrices*
Arnab Nilim
Department of EECS†
University of California
Berkeley, CA 94720
[email protected]
Laurent El Ghaoui
Department of EECS
University of California
Berkeley, CA 94720
[email protected]
Abstract
Optimal solutions to Markov Decision Problems (MDPs) are very sensitive with respect to the state transition probabilities. In many practical problems, the estimation of those probabilities is far from accurate.
Hence, estimation errors are limiting factors in applying MDPs to realworld problems. We propose an algorithm for solving finite-state and
finite-action MDPs, where the solution is guaranteed to be robust with
respect to estimation errors on the state transition probabilities. Our algorithm involves a statistically accurate yet numerically efficient representation of uncertainty, via Kullback-Leibler divergence bounds. The
worst-case complexity of the robust algorithm is the same as the original Bellman recursion. Hence, robustness can be added at practically no
extra computing cost.
1 Introduction
We consider a finite-state and finite-action Markov decision problem in which the transition probabilities themselves are uncertain, and seek a robust decision for it. Our work
is motivated by the fact that in many practical problems, the transition matrices have to
be estimated from data. This may be a difficult task and the estimation errors may have
a huge impact on the solution, which is often quite sensitive to changes in the transition
probabilities [3]. A number of authors have addressed the issue of uncertainty in the transition matrices of an MDP. A Bayesian approach such as described by [9] requires a perfect
knowledge of the whole prior distribution on the transition matrix, making it difficult to
apply in practice. Other authors have considered the transition matrix to lie in a given set,
most typically a polytope: see [8, 10, 5]. Although our approach allows to describe the
uncertainty on the transition matrix by a polytope, we may argue against choosing such a
model for the uncertainty. First, a general polytope is often not a tractable way to address
the robustness problem, as it incurs a significant additional computational effort to handle
uncertainty. Perhaps more importantly, polytopic models, especially interval matrices, may
be very poor representations of statistical uncertainty and lead to very conservative robust
* Research funded in part by Eurocontrol-014692, DARPA-F33615-01-C-3150, and NSF-ECS-9983874.
† Electrical Engineering and Computer Sciences
policies. In [1], the authors consider a problem dual to ours, and provide a general statement according to which the cost of solving their problem is polynomial in problem size,
provided the uncertainty on the transition matrices is described by convex sets, without
proposing any specific algorithm. This paper is a short version of a longer report [2], which
contains all the proofs of the results summarized here.
Notation. $P > 0$ or $P \ge 0$ refers to the strict or non-strict componentwise inequality for matrices or vectors. For a vector $p > 0$, $\log p$ refers to the componentwise operation. The notation $\mathbf{1}$ refers to the vector of ones, with size determined from context. The probability simplex in $\mathbb{R}^n$ is denoted $\Delta_n = \{p \in \mathbb{R}^n_+ : p^T \mathbf{1} = 1\}$, while $\Theta_n$ is the set of $n \times n$ transition matrices (componentwise non-negative matrices with rows summing to one). We use $\sigma_{\mathcal{P}}$ to denote the support function of a set $\mathcal{P} \subseteq \mathbb{R}^n$, with, for $v \in \mathbb{R}^n$, $\sigma_{\mathcal{P}}(v) := \sup\{p^T v : p \in \mathcal{P}\}$.
2 The problem description
We consider a finite-horizon Markov decision process with finite decision horizon $T = \{0, 1, 2, \dots, N-1\}$. At each stage, the system occupies a state $i \in \mathcal{X}$, where $n = |\mathcal{X}|$ is finite, and a decision maker is allowed to choose an action $a$ deterministically from a finite set of allowable actions $\mathcal{A} = \{a_1, \dots, a_m\}$ (for notational simplicity we assume that $\mathcal{A}$ is not state-dependent). The system starts in a given initial state $i_0$. The states make Markov transitions according to a collection of (possibly time-dependent) transition matrices $\tau := (P_t^a)_{a \in \mathcal{A},\, t \in T}$, where for every $a \in \mathcal{A}$, $t \in T$, the $n \times n$ transition matrix $P_t^a$ contains the probabilities of transition under action $a$ at stage $t$. We denote by $\pi = (a_0, \dots, a_{N-1})$ a generic controller policy, where $a_t(i)$ denotes the controller action when the system is in state $i \in \mathcal{X}$ at time $t \in T$. Let $\Pi = \mathcal{A}^{nN}$ be the corresponding strategy space. Define by $c_t(i, a)$ the cost corresponding to state $i \in \mathcal{X}$ and action $a \in \mathcal{A}$ at time $t \in T$, and by $c_N$ the cost function at the terminal stage. We assume that $c_t(i, a)$ is non-negative and finite for every $i \in \mathcal{X}$ and $a \in \mathcal{A}$.
For a given set of transition matrices $\tau$, we define the finite-horizon nominal problem by
$$\phi_N(\Pi, \tau) := \min_{\pi \in \Pi} C_N(\pi, \tau), \qquad (1)$$
where $C_N(\pi, \tau)$ denotes the expected total cost under controller policy $\pi$ and transitions $\tau$:
$$C_N(\pi, \tau) := \mathbb{E}\left(\sum_{t=0}^{N-1} c_t(i_t, a_t(i_t)) + c_N(i_N)\right). \qquad (2)$$
A special case of interest is when the expected total cost function bears the form (2), where the terminal cost is zero and $c_t(i, a) = \nu^t c(i, a)$, with $c(i, a)$ now a constant cost function, which we assume non-negative and finite everywhere, and $\nu \in (0, 1)$ a discount factor. We refer to this cost function as the discounted cost function, and denote by $C_\infty(\pi, \tau)$ the limit of the discounted cost (2) as $N \to \infty$.
When the transition matrices are exactly known, the corresponding nominal problem can
be solved via a dynamic programming algorithm, which has total complexity of nmN flops
in the finite-horizon case. In the infinite-horizon case with a discounted cost function, the
cost of computing an $\epsilon$-suboptimal policy via the Bellman recursion is $O(nm \log(1/\epsilon))$;
see [7] for more details.
2.1 The robust control problems
At first we assume that, for each action $a$ and time $t$, the corresponding transition matrix $P_t^a$ is only known to lie in some given subset $\mathcal{P}^a$. Two models for transition-matrix uncertainty are possible, leading to two possible forms of finite-horizon robust control
problems. In a first model, referred to as the stationary uncertainty model, the transition
matrices are chosen by nature depending on the controller policy once and for all, and
remain fixed thereafter. In a second model, which we refer to as the time-varying uncertainty model, the transition matrices can vary arbitrarily with time, within their prescribed
bounds. Each problem leads to a game between the controller and nature, where the controller seeks to minimize the maximum expected cost, with nature being the maximizing
player.
Let us define our two problems more formally. A policy of nature refers to a specific collection of time-dependent transition matrices $\tau = (P_t^a)_{a \in \mathcal{A},\, t \in T}$ chosen by nature, and the set of admissible policies of nature is $\mathcal{T} := (\otimes_{a \in \mathcal{A}} \mathcal{P}^a)^N$. Denote by $\mathcal{T}_s$ the set of stationary admissible policies of nature:
$$\mathcal{T}_s = \{\tau = (P_t^a)_{a \in \mathcal{A},\, t \in T} \in \mathcal{T} : P_t^a = P_s^a \text{ for every } t, s \in T,\ a \in \mathcal{A}\}.$$
The stationary uncertainty model leads to the problem
$$\phi_N(\Pi, \mathcal{T}_s) := \min_{\pi \in \Pi} \max_{\tau \in \mathcal{T}_s} C_N(\pi, \tau). \qquad (3)$$
In contrast, the time-varying uncertainty model leads to a relaxed version of the above:
$$\phi_N(\Pi, \mathcal{T}_s) \le \phi_N(\Pi, \mathcal{T}) := \min_{\pi \in \Pi} \max_{\tau \in \mathcal{T}} C_N(\pi, \tau). \qquad (4)$$
The first model is attractive for statistical reasons, as it is much easier to develop statistically
accurate sets of confidence when the underlying process is time-invariant. Unfortunately,
the resulting game (3) seems to be hard to solve. The second model is attractive as one
can solve the corresponding game (4) using a variant of the dynamic programming algorithm seen later, but we are left with a difficult task, that of estimating a meaningful set of
confidence for the time-varying matrices $P_t^a$. In this paper we will use the first model of uncertainty in order to derive statistically meaningful sets of confidence for the transition matrices, based on likelihood or entropy bounds. Then, instead of solving the corresponding difficult control problem (3), we use an approximation that is common in robust control, and solve the time-varying upper bound (4), using the uncertainty sets $\mathcal{P}^a$ derived from a stationarity assumption about the transition matrices. We will also consider a variant of the finite-horizon time-varying problem (4), where controller and nature play alternately, leading to a repeated game
$$\phi_N^{rep}(\Pi, \mathcal{Q}) := \min_{a_0}\ \max_{\tau_0 \in \mathcal{Q}}\ \min_{a_1}\ \max_{\tau_1 \in \mathcal{Q}}\ \cdots\ \min_{a_{N-1}}\ \max_{\tau_{N-1} \in \mathcal{Q}} C_N(\pi, \tau), \qquad (5)$$
where the notation $\tau_t = (P_t^a)_{a \in \mathcal{A}}$ denotes the collection of transition matrices at a given time $t \in T$, and $\mathcal{Q} := \otimes_{a \in \mathcal{A}} \mathcal{P}^a$ is the corresponding set of confidence.
Finally, we will consider an infinite-horizon robust control problem, with the discounted cost function referred to above, and where we restrict control and nature policies to be stationary:
$$\phi_\infty(\Pi_s, \mathcal{T}_s) := \min_{\pi \in \Pi_s} \max_{\tau \in \mathcal{T}_s} C_\infty(\pi, \tau), \qquad (6)$$
where $\Pi_s$ denotes the space of stationary control policies. We define $\phi_\infty(\Pi, \mathcal{T})$, $\phi_\infty(\Pi, \mathcal{T}_s)$ and $\phi_\infty(\Pi_s, \mathcal{T})$ accordingly.
In the sequel, for a given control policy $\pi \in \Pi$ and subset $\mathcal{S} \subseteq \mathcal{T}$, the notation $\phi_N(\pi, \mathcal{S}) := \max_{\tau \in \mathcal{S}} C_N(\pi, \tau)$ denotes the worst-case expected total cost for the finite-horizon problem, and $\phi_\infty(\pi, \mathcal{S})$ is defined likewise.
2.2 Main results
Our main contributions are as follows. First we provide a recursion, the ?robust dynamic
programming? algorithm, which solves the finite-horizon robust control problem (4). We
provide a simple proof in [2] of the optimality of the recursion, where the main ingredient
is to show that perfect duality holds in the game (4). As a corollary of this result, we obtain that the repeated game (5) is equivalent to its non-repeated counterpart (4). Second,
we provide similar results for the infinite-horizon problem with discounted cost function,
(6). Moreover, we obtain that if we consider a finite-horizon problem with a discounted
cost function, then the gap between the optimal value of the stationary uncertainty problem
(3) and that of its time-varying counterpart (4) goes to zero as the horizon length goes to
infinity, at a rate determined by the discount factor. Finally, we identify several classes
of uncertainty models, which result in an algorithm that is both statistically accurate and
numerically tractable. We provide precise complexity results that imply that, with the proposed approach, robustness can be handled at practically no extra computing cost.
3 Finite-Horizon Robust MDP
We consider the finite-horizon robust control problem defined in section 2.1. For a given state $i \in \mathcal{X}$, action $a \in \mathcal{A}$, and $P^a \in \mathcal{P}^a$, we denote by $p_i^a$ the next-state distribution drawn from $P^a$ corresponding to state $i \in \mathcal{X}$; thus $p_i^a$ is the $i$-th row of matrix $P^a$. We define $\mathcal{P}_i^a$ as the projection of the set $\mathcal{P}^a$ onto the set of $p_i^a$-variables. By assumption, these sets are included in the probability simplex of $\mathbb{R}^n$, $\Delta_n$; no other property is assumed. The following theorem is proved in [2].
Theorem 1 (robust dynamic programming) For the robust control problem (4), perfect duality holds:
$$\phi_N(\Pi, \mathcal{T}) = \min_{\pi \in \Pi} \max_{\tau \in \mathcal{T}} C_N(\pi, \tau) = \max_{\tau \in \mathcal{T}} \min_{\pi \in \Pi} C_N(\pi, \tau) =: \psi_N(\Pi, \mathcal{T}).$$
The problem can be solved via the recursion
$$v_t(i) = \min_{a \in \mathcal{A}}\left(c_t(i, a) + \sigma_{\mathcal{P}_i^a}(v_{t+1})\right), \quad i \in \mathcal{X},\ t \in T, \qquad (7)$$
where $\sigma_{\mathcal{P}}(v) := \sup\{p^T v : p \in \mathcal{P}\}$ denotes the support function of a set $\mathcal{P}$, and $v_t(i)$ is the worst-case optimal value function in state $i$ at stage $t$. A corresponding optimal control policy $\pi^* = (a_0^*, \dots, a_{N-1}^*)$ is obtained by setting
$$a_t^*(i) \in \arg\min_{a \in \mathcal{A}}\left(c_t(i, a) + \sigma_{\mathcal{P}_i^a}(v_{t+1})\right), \quad i \in \mathcal{X}. \qquad (8)$$
The effect of uncertainty on a given strategy $\pi = (a_0, \dots, a_{N-1})$ can be evaluated by the recursion
$$v_t^\pi(i) = c_t(i, a_t(i)) + \sigma_{\mathcal{P}_i^{a_t(i)}}(v_{t+1}^\pi), \quad i \in \mathcal{X}, \qquad (9)$$
which provides the worst-case value function $v^\pi$ for the strategy $\pi$.
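A minimal sketch of recursion (7)-(8), with the inner maximization abstracted into a user-supplied oracle sigma(i, a, v) returning $\sigma_{\mathcal{P}_i^a}(v)$ (e.g. via the bisection of [2]); all names are ours:

```python
import numpy as np

def robust_finite_horizon_dp(cost, c_terminal, sigma, n, m, N):
    """cost[t][i][a]: stage cost; sigma(i, a, v): support function sigma_{P_i^a}(v).
    Returns worst-case value functions v[0..N] and a greedy policy policy[t][i]."""
    v = [None] * (N + 1)
    policy = [np.zeros(n, dtype=int) for _ in range(N)]
    v[N] = np.asarray(c_terminal, dtype=float)
    for t in range(N - 1, -1, -1):
        v[t] = np.zeros(n)
        for i in range(n):
            q = [cost[t][i][a] + sigma(i, a, v[t + 1]) for a in range(m)]
            policy[t][i] = int(np.argmin(q))
            v[t][i] = min(q)
    return v, policy
```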
The above result has a nice consequence for the repeated game (5):
Corollary 2 The repeated game (5) is equivalent to the game (4):
$$\phi_N^{rep}(\Pi, \mathcal{Q}) = \phi_N(\Pi, \mathcal{T}),$$
and the optimal strategies for $\phi_N(\Pi, \mathcal{T})$ given in theorem 1 are optimal for $\phi_N^{rep}(\Pi, \mathcal{Q})$ as well.
The interpretation of the perfect duality result given in theorem 1, and its consequence given in corollary 2, is that it does not matter whether the controller or nature plays first, or if they alternate; all these games are equivalent.
Each step of the robust dynamic programming algorithm involves the solution of an optimization problem, referred to as the "inner problem", of the form
$$\sigma_{\mathcal{P}_i^a}(v) = \max_{p \in \mathcal{P}_i^a} v^T p, \qquad (10)$$
where $\mathcal{P}_i^a$ is the set that describes the uncertainty on the $i$-th row of the transition matrix $P^a$, and $v$ contains the elements of the value function at some given stage. The complexity of the sets $\mathcal{P}_i^a$ for each $i \in \mathcal{X}$ and $a \in \mathcal{A}$ is a key component of the complexity of the robust dynamic programming algorithm. Beyond numerical tractability, an additional criterion for the choice of a specific uncertainty model is of course that the sets $\mathcal{P}^a$ should represent accurate (non-conservative) descriptions of the statistical uncertainty on the transition matrices. Perhaps surprisingly, there are statistical models of uncertainty, such as those described in section 5, that are good on both counts. Precisely, these models result in inner problems (10) that can be solved in worst-case time of $O(n \log(v_{\max}/\delta))$ via a simple bisection algorithm, where $n$ is the size of the state space, $v_{\max}$ is a global upper bound on the value function, and $\delta > 0$ specifies the accuracy at which the optimal value of the inner problem (10) is computed. In the finite-horizon case, we can bound $v_{\max}$ by $O(N)$.
Now consider the following algorithm, where the uncertainty is described in terms of one
of the models described in section 5:
Robust Finite Horizon Dynamic Programming Algorithm
1. Set $\epsilon > 0$. Initialize the value function to its terminal value $\hat{v}_N = c_N$.
2. Repeat until $t = 0$:
(a) For every state $i \in \mathcal{X}$ and action $a \in \mathcal{A}$, compute, using the bisection algorithm given in [2], a value $\hat{\sigma}_i^a$ such that
$$\hat{\sigma}_i^a - \epsilon/N \le \sigma_{\mathcal{P}_i^a}(\hat{v}_t) \le \hat{\sigma}_i^a.$$
(b) Update the value function by $\hat{v}_{t-1}(i) = \min_{a \in \mathcal{A}}\left(c_{t-1}(i, a) + \hat{\sigma}_i^a\right)$, $i \in \mathcal{X}$.
(c) Replace $t$ by $t - 1$ and go to 2.
3. For every $i \in \mathcal{X}$ and $t \in T$, set $\pi^* = (a_0^*, \dots, a_{N-1}^*)$, where
$$a_t^*(i) = \arg\min_{a \in \mathcal{A}}\left\{c_{t-1}(i, a) + \hat{\sigma}_i^a\right\}, \quad i \in \mathcal{X}.$$
As shown in [2], the above algorithm provides a suboptimal policy $\pi^*$ that achieves the exact optimum with prescribed accuracy $\epsilon$, with a required number of flops bounded above by $O(mnN \log(N/\epsilon))$. This means that robustness is obtained at a relative increase of computational cost of only $\log(N/\epsilon)$ with respect to the classical dynamic programming algorithm, which is small for moderate values of $N$. If $N$ is very large, we can turn instead to the infinite-horizon problem examined in section 4, and similar complexity results hold.
4 Infinite-Horizon MDP
In this section, we address the infinite-horizon robust control problem with a discounted cost function of the form (2), where the terminal cost is zero and $c_t(i, a) = \nu^t c(i, a)$, where $c(i, a)$ is now a constant cost function, which we assume non-negative and finite everywhere, and $\nu \in (0, 1)$ is a discount factor.
We begin with the infinite-horizon problem involving stationary control and nature policies
defined in (6). The following theorem is proved in [2].
Theorem 3 (Robust Bellman recursion) For the infinite-horizon robust control problem (6) with stationary uncertainty on the transition matrices, stationary control policies, and a discounted cost function with discount factor $\nu \in [0, 1)$, perfect duality holds:
$$\phi_\infty(\Pi_s, \mathcal{T}_s) = \max_{\tau \in \mathcal{T}_s} \min_{\pi \in \Pi_s} C_\infty(\pi, \tau) =: \psi_\infty(\Pi_s, \mathcal{T}_s). \qquad (11)$$
The optimal value is given by $\phi_\infty(\Pi_s, \mathcal{T}_s) = v(i_0)$, where $i_0$ is the initial state, and where the value function $v$ satisfies the optimality conditions
$$v(i) = \min_{a \in \mathcal{A}}\left(c(i, a) + \nu\,\sigma_{\mathcal{P}_i^a}(v)\right), \quad i \in \mathcal{X}. \qquad (12)$$
The value function is the unique limit of the convergent vector sequence defined by
$$v_{k+1}(i) = \min_{a \in \mathcal{A}}\left(c(i, a) + \nu\,\sigma_{\mathcal{P}_i^a}(v_k)\right), \quad i \in \mathcal{X},\ k = 1, 2, \dots \qquad (13)$$
A stationary, optimal control policy $\pi^* = (a^*, a^*, \dots)$ is obtained as
$$a^*(i) \in \arg\min_{a \in \mathcal{A}}\left(c(i, a) + \nu\,\sigma_{\mathcal{P}_i^a}(v)\right), \quad i \in \mathcal{X}. \qquad (14)$$
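A sketch of the fixed-point iteration (13), again with the inner problem behind an oracle; the stopping rule mirrors step 3 of the infinite-horizon algorithm given later in this section (names assumed):

```python
import numpy as np

def robust_value_iteration(cost, sigma, n, m, nu, eps=1e-3):
    """Iterate v_{k+1}(i) = min_a [c(i,a) + nu * sigma_{P_i^a}(v_k)] to convergence."""
    v = np.zeros(n)
    while True:
        v_new = np.array([min(cost[i][a] + nu * sigma(i, a, v) for a in range(m))
                          for i in range(n)])
        if np.max(np.abs(v_new - v)) < (1 - nu) * eps / (2 * nu):
            return v_new
        v = v_new
```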
Note that the problem of computing the dual quantity $\psi_\infty(\Pi_s, \mathcal{T}_s)$ given in (11) has been
addressed in [1], where the authors provide the recursion (13) without proof.
Theorem 3 leads to the following corollary, also proved in [2].
Corollary 4 In the infinite-horizon problem, we can without loss of generality assume that the control and nature policies are stationary, that is,
$$\phi_\infty(\Pi, \mathcal{T}) = \phi_\infty(\Pi_s, \mathcal{T}_s) = \phi_\infty(\Pi_s, \mathcal{T}) = \phi_\infty(\Pi, \mathcal{T}_s). \qquad (15)$$
Furthermore, in the finite-horizon case with a discounted cost function, the gap between the optimal values of the finite-horizon problems under stationary and time-varying uncertainty models, $\phi_N(\Pi, \mathcal{T}) - \phi_N(\Pi, \mathcal{T}_s)$, goes to zero as the horizon length $N$ goes to infinity, at a geometric rate $\nu$.
Now consider the following algorithm, where we describe the uncertainty using one of the
models of section 5.
Robust Infinite Horizon Dynamic Programming Algorithm
1. Set $\epsilon > 0$, initialize the value function $\hat{v}_1 > 0$ and set $k = 1$.
2. (a) For all states $i$ and controls $a$, compute, using the bisection algorithm given in [2], a value $\hat{\sigma}_i^a$ such that
$$\hat{\sigma}_i^a - \delta \le \sigma_{\mathcal{P}_i^a}(\hat{v}_k) \le \hat{\sigma}_i^a,$$
where $\delta = (1 - \nu)\epsilon/2\nu$.
(b) For all states $i$ and controls $a$, compute $\hat{v}_{k+1}(i)$ by
$$\hat{v}_{k+1}(i) = \min_{a \in \mathcal{A}}\left(c(i, a) + \nu\,\hat{\sigma}_i^a\right).$$
3. If
$$\|\hat{v}_{k+1} - \hat{v}_k\| < \frac{(1 - \nu)\epsilon}{2\nu},$$
go to 4. Otherwise, replace $k$ by $k + 1$ and go to 2.
4. For each $i \in \mathcal{X}$, set $\pi^* = (a^*, a^*, \dots)$, where
$$a^*(i) = \arg\min_{a \in \mathcal{A}}\left\{c(i, a) + \nu\,\hat{\sigma}_i^a\right\}, \quad i \in \mathcal{X}.$$
In [2], we establish that the above algorithm finds an $\epsilon$-suboptimal robust policy in at most $O(nm \log(1/\epsilon)^2)$ flops. Thus, the extra computational cost incurred by robustness in the infinite-horizon case is only $O(\log(1/\epsilon))$.
5 Kullback-Leibler Divergence Uncertainty Models
We now address the inner problem (10) for a specific action $a \in \mathcal{A}$ and state $i \in \mathcal{X}$. Denote by $D(p\|q)$ the Kullback-Leibler (KL) divergence (relative entropy) from the probability distribution $q \in \Delta_n$ to the probability distribution $p \in \Delta_n$:
$$D(p\|q) := \sum_j p(j) \log \frac{p(j)}{q(j)}.$$
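For reference, a small sketch (ours) of the divergence and of a membership test for the entropy-ball uncertainty sets used below:

```python
import numpy as np

def kl(p, q):
    """D(p || q) for distributions with q > 0; 0*log(0) treated as 0."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def in_entropy_set(p, q, beta):
    """True if p lies in {p in simplex : D(p || q) <= beta}."""
    p = np.asarray(p, float)
    return bool(np.all(p >= 0) and abs(p.sum() - 1.0) < 1e-9 and kl(p, q) <= beta)
```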
The above function provides a natural way to describe errors in (rows of) the transition
matrices; examples of models based on this function are given below.
Likelihood Models: Our first uncertainty model is derived from a controlled experiment starting from state $i = 1, 2, \dots, n$ and the count of the number of transitions to different states. We denote by $F^a$ the matrix of empirical frequencies of transition with control $a$ in the experiment, and by $f_i^a$ its $i$-th row. We have $F^a \ge 0$ and $F^a \mathbf{1} = \mathbf{1}$, where $\mathbf{1}$ denotes the vector of ones. The "plug-in" estimate $\hat{P}^a = F^a$ is the solution to the maximum likelihood problem
$$\max_P\ \sum_{i,j} F^a(i,j) \log P(i,j)\ :\ P \ge 0,\ P\mathbf{1} = \mathbf{1}. \qquad (16)$$
The optimal log-likelihood is $\beta^a_{\max} = \sum_{i,j} F^a(i,j) \log F^a(i,j)$. A classical description of uncertainty in a maximum-likelihood setting is via the "likelihood region" [6]
$$\mathcal{P}^a = \left\{P \in \mathbb{R}^{n \times n} : P \ge 0,\ P\mathbf{1} = \mathbf{1},\ \sum_{i,j} F^a(i,j) \log P(i,j) \ge \beta^a\right\}, \qquad (17)$$
where $\beta^a < \beta^a_{\max}$ is a pre-specified number which represents the uncertainty level. In practice, the designer specifies an uncertainty level $\beta^a$ based on re-sampling methods, or on a large-sample Gaussian approximation, so as to ensure that the set above achieves a desired level of confidence.
With the above model, we note that the inner problem (10) only involves the set $\mathcal{P}_i^a := \{p_i^a \in \mathbb{R}^n : p_i^a \ge 0,\ (p_i^a)^T \mathbf{1} = 1,\ \sum_j F^a(i,j) \log p_i^a(j) \ge \beta_i^a\}$, where the parameter $\beta_i^a := \beta^a - \sum_{k \ne i} \sum_j F^a(k,j) \log F^a(k,j)$. The set $\mathcal{P}_i^a$ is the projection of the set described in (17) onto a specific axis of $p_i^a$-variables. Noting further that the likelihood function can be expressed in terms of KL divergence, the corresponding uncertainty model on the row $p_i^a$, for given $i \in \mathcal{X}$ and $a \in \mathcal{A}$, is given by a set of the form $\mathcal{P}_i^a = \{p \in \Delta_n : D(f_i^a \| p) \le \gamma_i^a\}$, where $\gamma_i^a = \sum_j F^a(i,j) \log F^a(i,j) - \beta_i^a$ is a function of the uncertainty level.
Maximum A-Posteriori (MAP) Models: a variation on likelihood models involves maximum a-posteriori (MAP) estimates. If there exists prior information regarding the uncertainty on the $i$-th row of $P^a$, which can be described via a Dirichlet distribution [4] with parameter $\alpha_i^a$, the resulting MAP estimation problem takes the form
$$\max_p\ (f_i^a + \alpha_i^a - \mathbf{1})^T \log p\ :\ p^T \mathbf{1} = 1,\ p \ge 0.$$
Thus, the MAP uncertainty model is equivalent to a likelihood model, with the sample distribution $f_i^a$ replaced by $f_i^a + \alpha_i^a - \mathbf{1}$, where $\alpha_i^a$ is the prior corresponding to state $i$ and action $a$.
Relative Entropy Models: Likelihood or MAP models involve the KL divergence from the unknown distribution to a reference distribution. We can also choose to describe uncertainty by exchanging the order of the arguments of the KL divergence. This results in a so-called "relative entropy" model, where the uncertainty on the $i$-th row of the transition matrix $P^a$ is described by a set of the form $\mathcal{P}_i^a = \{p \in \Delta_n : D(p \| q_i^a) \le \gamma_i^a\}$, where $\gamma_i^a > 0$ is fixed and $q_i^a > 0$ is a given "reference" distribution (for example, the maximum likelihood distribution).
Equipped with one of the above uncertainty models, we can address the inner problem (10). As shown in [2], the inner problem can be converted, by convex duality, to a problem of minimizing a single-variable convex function. In turn, this one-dimensional convex optimization problem can be solved via a bisection algorithm with a worst-case complexity of $O(n \log(v_{\max}/\delta))$, where $\delta > 0$ specifies the accuracy at which the optimal value of the inner problem (10) is computed, and $v_{\max}$ is a global upper bound on the value function. A sketch of this dual approach is given below.
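For the relative-entropy set $\{p : D(p\|q) \le \beta\}$, convex duality gives $\sigma(v) = \min_{\lambda > 0}\ \lambda\beta + \lambda \log \sum_j q_j e^{v_j/\lambda}$, a one-dimensional convex problem. The sketch below solves it by scalar minimization rather than the bisection of [2]; this substitution, and all names, are ours:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def support_entropy_set(v, q, beta):
    """sigma(v) = max{p^T v : D(p || q) <= beta, p in simplex} via its scalar dual."""
    v, q = np.asarray(v, float), np.asarray(q, float)
    vmax = v.max()

    def dual(lam):
        # lam*beta + lam*log sum_j q_j exp(v_j/lam), shifted by vmax for stability
        return lam * beta + vmax + lam * np.log(np.sum(q * np.exp((v - vmax) / lam)))

    res = minimize_scalar(dual, bounds=(1e-8, 1e8), method="bounded")
    return res.fun
```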
Remark: We can also use models where the uncertainty on the $i$-th row of the transition matrix $P^a$ is described by a finite set of vectors, $\mathcal{P}_i^a = \{p_i^{a,1}, \dots, p_i^{a,K}\}$. In this case the complexity of the corresponding robust dynamic programming algorithm is increased by a relative factor of $K$ with respect to its classical counterpart, which makes the approach attractive when the number of "scenarios" $K$ is moderate.
6 Concluding remarks
We proposed a ?robust dynamic programming? algorithm for solving finite-state and finiteaction MDPs whose solutions are guaranteed to tolerate arbitrary changes of the transition probability matrices within given sets. We proposed models based on KL divergence,
which is a natural way to describe estimation errors. The resulting robust dynamic programming algorithm has almost the same computational cost as the classical dynamic programming algorithm: the relative increase to compute an ?-suboptimal policy is O(N log(1/?))
in the N -horizon case, and O(log(1/?)) for the infinite-horizon case.
References
[1] J. Bagnell, A. Ng, and J. Schneider. Solving uncertain Markov decision problems. Technical
Report CMU-RI-TR-01-25, Robotics Institute, Carnegie Mellon University, August 2001.
[2] L. El-Ghaoui and A. Nilim. Robust solution to Markov decision problems with uncertain transition matrices: proofs and complexity analysis. Technical Report UCB/ERL M04/07, Department of EECS, University of California, Berkeley, January 2004. A related version has been
submitted to Operations Research in Dec. 2003.
[3] E. Feinberg and A. Shwartz. Handbook of Markov Decision Processes, Methods and Applications. Kluwer?s Academic Publishers, Boston, 2002.
[4] T. Ferguson. Prior distributions on spaces of probability measures. The Annals of Statistics, 2(4):615–629, 1974.
[5] R. Givan, S. Leach, and T. Dean. Bounded parameter Markov decision processes. In fourth
European Conference on Planning, pages 234–246, 1997.
[6] E. Lehmann and G. Casella. Theory of point estimation. Springer-Verlag, New York, USA,
1998.
[7] M. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. Wiley-Interscience, New York, 1994.
[8] J. K. Satia and R. L. Lave. Markov decision processes with uncertain transition probabilities.
Operations Research, 21(3):728–740, 1973.
[9] A. Shapiro and A. J. Kleywegt. Minimax analysis of stochastic problems. Optimization Methods
and Software, 2002. to appear.
[10] C. C. White and H. K. Eldeib. Markov decision processes with imprecise transition probabilities. Operations Research, 42(4):739–749, 1994.
1,504 | 2,368 | An Improved Scheme for Detection and
Labelling in Johansson Displays
Claudio Fanti
Marzia Polito
Computational Vision Lab, 136-93
California Institute of Technology
Pasadena, CA 91125, USA
Intel Corporation, SC12-303
2200 Mission College Blvd.
Santa Clara, CA 95054, USA
[email protected]
[email protected]
Pietro Perona
Computational Vision Lab, 136-93
California Institute of Technology
Pasadena, CA 91125, USA
[email protected]
Abstract
Consider a number of moving points, where each point is attached
to a joint of the human body and projected onto an image plane.
Johansson showed that humans can effortlessly detect and recognize the presence of other humans from such displays. This is true
even when some of the body points are missing (e.g. because of
occlusion) and unrelated clutter points are added to the display.
We are interested in replicating this ability in a machine. To this
end, we present a labelling and detection scheme in a probabilistic
framework. Our method is based on representing the joint probability density of positions and velocities of body points with a
graphical model, and using Loopy Belief Propagation to calculate
a likely interpretation of the scene. Furthermore, we introduce a
global variable representing the body's centroid. Experiments on
one motion-captured sequence suggest that our scheme improves on
the accuracy of a previous approach based on triangulated graphical models, especially when very few parts are visible. The improvement is due both to the more general graph structure we use
and, more significantly, to the introduction of the centroid variable.
1
Introduction
Perceiving and analyzing human motion is a natural and useful task for our visual
system. Replicating this ability in machines is one of the most important and difficult goals of machine vision. As Johansson's experiments show [4], the instantaneous
information on the position and velocity of a few features, such as the joints of the
body, present sufficient information to detect human presence and understand the
gist of human activity. This is true even if clutter features are detected in the scene,
and if some body part features are occluded (generalized Johansson display). Selecting features in a frame, as well as computing their velocity across frames, is
a task for which good quality solutions exist in the literature [5] and we will not
consider it here.
We therefore assume that a number of features that are associated to the body
have been detected and their velocity has been computed. We will not assume that
all such features have been found, nor that all the features that were detected are
associated to the body. We study the interpretation of such a generalized Johansson
display, i.e. the detection of the presence of a human in the scene and the labelling
of the point features as parts of the body or as clutter. We generalize an approach
presented in [3] where the pattern of point positions and velocities associated to
human motion was modelled with a triangulated graphical model. We are interested
here in exploring the benefit of allowing long-range connections, and therefore loops
in the graph representing correlations between cliques of variables. Furthermore,
while [3] obtained translation invariance at the level of individual cliques, we study
the possibility of obtaining translation invariance globally by introducing a variable
representing the ensemble model of the body. Algorithms based on loopy belief
propagation (LBP) are applied to efficiently compute high-likelihood interpretations
of the scene, and therefore detection and labelling.
1.1
Notations
We use bold-face letters x for random vectors and italic letters x for their sample values. The probability density (or mass) function of a variable x is denoted by f_x(x). When x is a random quantity we write the expectation as E_{f_x}[x]. An ordered set I = [i_1 … i_K] used as a vector's subscript has the obvious meaning y_I = [y_{i_1} … y_{i_K}]; when enclosed in square brackets, [I]_s applied to a dimension of a matrix V = [v_ij] selects the s-dimensional members (specified by the subscript) of the matrix along that dimension, e.g. V_{[1:2]_4 [1:2]_4} is the 8 × 8 matrix obtained by selecting the first two 4-dimensional rows and columns.
1.2
Problem Definition
We identify M = 16 relevant body parts (intuitively corresponding to the main joints). Each marked point on a display (referred to as a detection or observation) is denoted by y_i ∈ R⁴ and is endowed with four values, i.e. y_i = [y_{i,a}, y_{i,b}, y_{i,va}, y_{i,vb}]ᵀ, corresponding to its horizontal and vertical positions and velocities. Our goal here is to find the most probable assignment of a subset of detections to the body parts.
For each display we call y = [y_1ᵀ … y_Nᵀ]ᵀ the 4N × 1 vector of all observations (on a frame) and we model each single observation as a 4 × 1 random vector y_i. In general N and M need not be equal; moreover, some or all of the M parts might not be present in a given display. The binary random variable δ_i indicates whether the i-th part has been detected or not (i ∈ {1 … M}). For i ∈ {1 … M}, a discrete random variable λ_i taking values in {1 … N} is used to further specify the correspondence of body part i to a particular detection λ_i. Since this makes sense only if the body part is detected, we assume by convention that λ_i = 0 if δ_i = 0. A pair h = [λ, δ] is called a labelling hypothesis.
Any particular labelling hypothesis determines a partition of the set of indices corresponding to detections into foreground and background: [1 … N]ᵀ = F ∪ B, where F = [λ_i : δ_i = 1, i = 1 … M]ᵀ and B = [1 … N]ᵀ \ F. We say that m = |F| parts have been detected and M − m are missing. Based on the partition induced on λ by δ, we can define two index vectors λ_f and λ_b, identifying the detections that were assigned to the foreground and those assigned to the background respectively. Finally, the set of detections y remains partitioned into the vectors y_{λf} and y_{λb} of the foreground and background detections respectively.
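As a concrete sketch of this bookkeeping (our own illustration, not part of the paper; the use of numpy, the sizes, and the convention of -1 standing in for λ_i = 0 are our assumptions):

import numpy as np

# One labelling hypothesis h = [lam, delta] for M parts and N detections.
M, N = 16, 30
rng = np.random.default_rng(0)

delta = rng.integers(0, 2, size=M)      # delta_i = 1 if part i was detected
lam = np.full(M, -1)                    # -1 plays the role of lam_i = 0
detected = np.flatnonzero(delta == 1)
lam[detected] = rng.choice(N, size=detected.size, replace=False)

F = lam[lam >= 0]                       # foreground detection indices
B = np.setdiff1d(np.arange(N), F)       # background detection indices
m = F.size                              # number of detected parts
assert m + B.size == N                  # F and B partition the detections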
The foreground and background detections are assumed to be (conditionally) independent (given h), meaning that their joint distribution factorizes as follows:

    f_{y|λδ}(y|λδ) = f_{y_{λf}|λδ}(y_{λf}|λδ) · f_{y_{λb}|λδ}(y_{λb}|λδ)

where f_{y_{λf}|λδ}(y_{λf}|λδ) is a Gaussian pdf, while f_{y_{λb}|λδ}(y_{λb}|λδ) is the uniform pdf U_{N−m}(A), with A determining the area of the position and velocity hyperplane for each of the N − m background parts.
More specifically, when all M parts are observed (δ = [1 … 1]ᵀ) we have that f_{y_{λ[1:M]_1}|λδ}(y_{λ[1:M]_1}|λδ) is N(μ, Σ). When m < M instead, the foreground density is N(μ_f, Σ_f), the version of the complete model N(μ, Σ) marginalized over the M − m missing parts.
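Marginalizing a Gaussian over the missing parts reduces to selecting sub-blocks of μ and Σ. A minimal sketch of ours (assuming numpy and four coordinates per part):

import numpy as np

def marginal_gaussian(mu, Sigma, detected_parts, d=4):
    # The marginal of a Gaussian keeps only the rows/columns belonging to
    # the detected parts; each part spans d consecutive coordinates.
    idx = np.concatenate([np.arange(i * d, (i + 1) * d) for i in detected_parts])
    return mu[idx], Sigma[np.ix_(idx, idx)]

M, d = 16, 4
rng = np.random.default_rng(0)
mu = np.zeros(M * d)
A = rng.standard_normal((M * d, M * d))
Sigma = A @ A.T + np.eye(M * d)          # any symmetric positive-definite matrix
mu_f, Sigma_f = marginal_gaussian(mu, Sigma, detected_parts=[0, 5, 9])
print(mu_f.shape, Sigma_f.shape)         # (12,) (12, 12)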
Our goal is to find a hypothesis ĥ = [λ̂, δ̂] such that

    [λ̂, δ̂] = argmax_{λδ} Q(λ, δ) = argmax_{λδ} f_{y_λ|λδ}(y_λ | λ, δ).    (1)

2
Learning the Model's Parameters and Structure
moving human being we want to detect is centrally positioned in the frame. We
will then enhance the model in order to accommodate horizontal and vertical
translations.
In the learning process we want to estimate the parameters of f_{y_{λf}|λδ}(y_{λf}|λδ), where the labelling of the training set is known, N = M (no clutter is present) and
δ = [1 … 1]ᵀ (all parts are visible). A fully connected graphical model would be the
most accurate description of the training set, however, the search for the optimal
labelling, given a display, would be computationally infeasible. Additionally, by
Occam's razor, such a model might not generalize as well as a simpler one. It is intuitive to think that some (conditional) independencies between the y_i's hold.
We learn the model structure from the data, as well as the parameters. To limit the
computational cost and in the hope of a better-generalizing model, we put an upper
bound on the fan-in (number of incoming edges) of the nodes.
In order to make the trade-off between complexity and likelihood explicit, we adopt
the BIC (Bayesian Information Criterion) score. We recall that the BIC score is
consistent, and that since the probability distribution factorizes family-wise, the
score decomposes additively. An exhaustive search among graphs is infeasible. We
therefore attempt to determine the highest-scoring graph by means of a greedy hill-climbing algorithm, with random restarts. Specifically, at each step the algorithm
chooses the elementary operation (among adding, removing or inverting an edge of
the graph) that results in the highest increase for the score. To prevent getting
stuck in local maxima, we randomly restart a number of times once we cannot get
any score improvements, and then we pick the graph achieving the highest score
overall. We finally obtain our model by retaining the associated maximum likelihood
parameters.
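The search loop can be sketched as follows (our illustration, not the paper's code: for concreteness we score families with a linear-Gaussian BIC, and the restart scheme is an assumption of ours):

import numpy as np

def family_bic(X, child, parents):
    # BIC contribution of one family under a linear-Gaussian model.
    n = len(X)
    P = np.column_stack([np.ones(n)] + [X[:, p] for p in parents])
    beta, *_ = np.linalg.lstsq(P, X[:, child], rcond=None)
    resid = X[:, child] - P @ beta
    sigma2 = max(resid @ resid / n, 1e-12)
    loglik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1)
    return loglik - 0.5 * (len(parents) + 2) * np.log(n)

def bic(X, adj):
    # The score decomposes additively over families (adj[i, j]: edge i -> j).
    return sum(family_bic(X, j, list(np.flatnonzero(adj[:, j])))
               for j in range(adj.shape[0]))

def is_dag(adj):
    adj, order = adj.copy(), []
    while True:
        roots = [v for v in range(len(adj))
                 if adj[:, v].sum() == 0 and v not in order]
        if not roots:
            return len(order) == len(adj)
        order += roots
        adj[roots, :] = 0

def moves(adj, i, j, max_fan_in):
    # Legal elementary operations on edge (i, j): add, remove, invert.
    for op in ("add", "remove", "invert"):
        cand = adj.copy()
        if op == "add" and not adj[i, j]:
            cand[i, j] = 1
        elif op == "remove" and adj[i, j]:
            cand[i, j] = 0
        elif op == "invert" and adj[i, j]:
            cand[i, j], cand[j, i] = 0, 1
        else:
            continue
        if cand.sum(axis=0).max() <= max_fan_in and is_dag(cand):
            yield cand

def hill_climb(X, max_fan_in=2, restarts=10, seed=0):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    best, best_score = np.zeros((d, d), dtype=int), -np.inf
    for _ in range(restarts):
        adj = np.zeros((d, d), dtype=int)
        for i, j in rng.integers(0, d, size=(d, 2)):   # random restart
            if i != j:
                adj = next(moves(adj, i, j, max_fan_in), adj)
        score = bic(X, adj)
        while True:   # greedily take the highest-scoring neighbour
            nbrs = [c for i in range(d) for j in range(d) if i != j
                    for c in moves(adj, i, j, max_fan_in)]
            if not nbrs:
                break
            scores = [bic(X, c) for c in nbrs]
            if max(scores) <= score:
                break
            adj, score = nbrs[int(np.argmax(scores))], max(scores)
        if score > best_score:
            best, best_score = adj, score
    return best, best_score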
As opposed to previous approaches [3], no decomposability of the graph is imposed,
and exact belief propagation methods that pass through the construction of a junction tree are not applicable. When the junction property is satisfied, the maximum
spanning tree algorithm allows an efficient construction of the junction tree. The
tree with the most populated separators between cliques is produced in linear time.
Here, we propose instead a construction of the junction graph that (greedily) attempts to minimize the complexity of the induced subgraph associated with each
variable.
Figure 1: Graphical Models. Light shaded vertices represent variables associated
to different body parts, edges indicate conditional (in)dependencies, following the
standard Graphical Models conventions. [Left] Hand made decomposable graph
from [3], used for comparison. [Right] Model learned from data (sequence W1, see section 4), with max fan-in constraint of 2.
3
Detection and Labelling with Expectation Maximization
One could solve the maximization problem (1) by means of Belief Propagation (BP); however, we require our system to be invariant with respect to translations in the first two coordinates (position) of the observations. To achieve this we introduce a new parameter τ = [τ_a, τ_b, 0, 0]ᵀ that represents the reference system's origin, which we now allow to be different from zero. By introducing the centered observations ỹ_λ = y_λ − τ our model becomes

    f_{ỹ_λ|τh}(ỹ_λ|τh) = f_{ỹ_{λf}|τλδ}(ỹ_{λf}|τλδ) · f_{ỹ_{λb}|λδ}(ỹ_{λb}|λδ)

where in the second member the first factor is now N(μ̃_f, Σ_f) while the second factor remains U_{N−m}(A).
We finally use an EM-like procedure to estimate τ, obtaining, as a by-product, the
maximizing hypothesis h we are after.
3.1
E-Step
As the hypothesis h is unobservable, we replace the complete-data log-likelihood with its expected value

    L̃_c(f̃_h, τ) = E_{f̃_h}[ log f_{ỹ_λ|hτ}(ỹ_λ | h, τ) ]    (2)

where the expectation is taken with respect to a generic distribution f̃_h(h). It is known that the E-step maximizing solution is f̃_h^(k)(h) ∝ f_{ỹ_λ|hτ}(ỹ_λ | h, τ^(k−1)). Since we will not be able to compute such a distribution for all the assignments of h, we make a so-called hard assignment, i.e. we approximate f̃_h^(k)(h) with the indicator 1(h = h^(k)), where

    h^(k) = argmax_h f_{ỹ_λ|hτ}(ỹ_λ | h, τ^(k−1)).
Given the current estimate τ^(k−1) of τ, the hypothesis h^(k) can be determined by maximizing the (discrete) potential Φ(h) = log[ f_{ỹ_{λf}|τh}(ỹ_{λf} | τ^(k−1), h) · f_{y_{λb}|h}(y_{λb} | h) ] with Max-Sum Loopy Belief Propagation (LBP) on the associated junction graph. The potential above decomposes into a number of factors (or cliques). With the exception of root nodes, each family gives rise to a factor that we initialize to the family's conditional probability mass function (pmf). For a root node, its marginal pmf is multiplied into one of its children. If LBP converges and the determined h^(k) maximizes the expected log-likelihood L̃_c(f̃^(k), h^(k−1)), then we are guaranteed (otherwise there is just reasonable¹ hope) that EM will converge to the sought-after ML estimate of τ.
3.2
M-Step
In the M-Step we maximize (2) with respect to τ, holding h = h^(k), i.e. we compute

    τ^(k+1) = argmax_τ { log f_{ỹ_λ|τ}(ỹ_λ^(k) | τ) }    (3)

The maximizing τ can be obtained from

    0 = ∇_τ [ (y_λ − μ̃ − Jτ)ᵀ Σ̃⁻¹ (y_λ − μ̃ − Jτ) ]    (4)

where J₄ = diag(1, 1, 0, 0) and J = [J₄ J₄ ··· J₄]ᵀ (m copies of J₄).
The solution involves the inversion of the matrix Σ̃ as a whole, which is numerically unstable given the minimal variance in the vertical component of the motion. We therefore approximate it with a block-diagonal version Σ̂ with

    Σ̂_{[i]₄[i]₄} = I₄ · ⁴√(det Σ̃_{[i]₄[i]₄}) / ⁴√(det Σ̃).    (5)
It is easy to see that, for appropriate α_i's,

    τ^(k+1) = J₄ Σ_{δ_i=1} [ α_i (y_{λ_i} − μ̃_i) ].    (6)
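A toy sketch of the resulting EM loop (ours, not the paper's implementation: brute-force search over assignments stands in for Max-Sum LBP, and the sizes, noise level and restart count are illustrative):

import numpy as np
from itertools import permutations

rng = np.random.default_rng(1)
M, N, d = 3, 5, 2                        # toy problem: 3 parts, 5 detections
mu = np.array([[0., 0.], [1., 0.], [0., 1.]])   # centred part means
tau_true = np.array([5., -3.])
y = np.vstack([mu + tau_true + 0.05 * rng.standard_normal((M, d)),
               rng.uniform(-10, 10, (N - M, d))])   # body parts plus clutter
rng.shuffle(y)

def best_hypothesis(tau):
    # Hard E-step: stand-in for Max-Sum LBP on the junction graph.
    best, best_ll = None, -np.inf
    for perm in permutations(range(N), M):          # lam: part i -> detection
        r = y[list(perm)] - tau - mu
        ll = -0.5 * (r ** 2).sum() / 0.05 ** 2
        if ll > best_ll:
            best, best_ll = list(perm), ll
    return best, best_ll

def run_em(tau0, iters=10):
    tau = tau0
    for _ in range(iters):
        lam, _ = best_hypothesis(tau)               # E-step (hard assignment)
        tau = (y[lam] - mu).mean(axis=0)            # M-step, cf. Eq. (6)
    return tau, best_hypothesis(tau)[1]

# Random re-initialization of tau, as the paper does; keep the best run.
starts = [rng.uniform(-10, 10, d) for _ in range(5)] + [np.zeros(d)]
tau, ll = max((run_em(t0) for t0 in starts), key=lambda p: p[1])
print(tau)    # a restart in the right basin recovers tau_true up to noise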
3.3
Detection Criteria
Let ω be a (discrete) indicator random variable for the event that the Johansson display represents a scene with a human body. So far in our discussion we have implicitly assumed that ω = 1. In the following section we will describe a way of determining whether a human body is actually present (detection). Defining

    R(y) = f_{ω|y}(1|y) / f_{ω|y}(0|y),

we claim that a human body is present whenever R(y) > 1. By Bayes' rule, R(y) can be rewritten as

    R(y) = [ f_{y|ω}(y|1) · f_ω(1) ] / [ f_{y|ω}(y|0) · f_ω(0) ] = [ f_{y|ω}(y|1) / f_{y|ω}(y|0) ] · R_p
fy|? (y|0)
1
Experimentally it is observed that when LBP converges, the determined maximum is
either global or, although local, the potential?s value is very close to its global optimum.
If the potential is increased (not necessarily maximized) by LBP, that suffices for EM to
converge
where R_p = P[ω = 1] / P[ω = 0] is the contribution to R(y) due to the prior on ω. In order to compute R(y) we marginalize over the labelling hypothesis h.
When ω = 0, the only admissible hypotheses must have δ = 0ᵀ (no body parts are present), which translates into f_{δ|ω}(δ̄|0) = P[δ = δ̄ | ω = 0] = 1(δ̄ = 0ᵀ). Also, f_{λ|δω}(λ̄|δ̄0) = N^(−N), as no labelling is more likely than any other before we have seen the detections. All N detections are labelled by λ as background and their conditional density is U_N(A). Therefore we have

    f_{y|ω}(y|0) = Σ_{λ̄,δ̄} (1/A^N)(1/N^N),

where the summation is over the λ̄, δ̄ compatible with ω = 0.

When ω = 1, we have f_{δ|ω}(δ̄|1) = P[δ = δ̄] = 2^(−M), as we assume that each body part appears (or not) in a given display with probability 1/2, independently of all other parts. Also, f_{λ|δω}(λ̄|δ̄1) = N^(−N) as before, and therefore we can write

    f_{y|ω}(y|1) = Σ_{λ̄,δ̄} (1/N^N)(1/2^M) f_{y|λδω}(y|λ̄δ̄1)
where the summation is over the λ̄, δ̄ compatible with ω = 1. We conclude that

    R(y) = R_p · f_{y|ω}(y|1) / f_{y|ω}(y|0) = R_p (A^N / 2^M) Σ_{λ̄,δ̄} f_{y|λδω}(y|λ̄δ̄1)
When implementing Loopy Belief Propagation on a finite-precision computational architecture using Gaussian models, we are unable to perform marginalization, as we can only represent log-probabilities. However, we will assume that the ML labelling ĥ is predominant over all other labellings, so that in the estimate of R(y) we can approximate marginalization with maximization and therefore write

    R(y) ≈ R_p (A^N / 2^M) f_{y|λδω}(y|λ̂δ̂1)

where λ̂, δ̂ is the maximizing hypothesis when ω = 1.
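In code, the resulting test is a one-liner in the log domain (a sketch under our own naming: log_f_best stands for the log-likelihood of the best hypothesis returned by Max-Sum LBP, and the numbers are made up):

import numpy as np

def log_detection_ratio(log_f_best, N, M, A, log_Rp=0.0):
    # log R(y) under the max-for-marginal approximation above.
    return log_Rp + N * np.log(A) - M * np.log(2.0) + log_f_best

# Declare a person present when log R(y) > 0, i.e. R(y) > 1.
print(log_detection_ratio(log_f_best=-25.0, N=30, M=16, A=1.0) > 0)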
4
Experimental Results
In our experiments we use two sequences, W1 and W2², of about 7,000 frames each, representing a human subject walking back and forth along a straight line. Both sequences were acquired and labelled with a motion capture system. Each pair of consecutive frames is used to produce a Johansson display with positions and velocities. W1 is used to learn the probabilistic model's parameters and structure. A 700-frame random sample from W2 is then used to test our algorithm.
We evaluate the performance of our technique and compare it with the hand-made, decomposable graphical model of [3]. There, translation invariance is achieved by using relative positions within each clique. We refer to it as the local version of translation invariance (as opposed to the global version proposed in this paper). We first explore the benefits of just relaxing the decomposability constraint, still implementing the translation invariance locally. The lower two dashed curves of Figure 2 already show a noticeable improvement, especially when fewer body parts are visible. However, the biggest increase in performance is brought by global translation invariance, as is evident from the upper two curves of Figure 2.
²Available at http://www.vision.caltech.edu/fanti.
[Figure 2 plots. Left panel, "Labeling Performance": % Correct Labels vs. Number of Visible Points. Right panel, "Detection Performance": Prob. of Detection vs. Number of Visible Parts. Four curves per panel: Loopy + Global Inv., Decomp. + Global Inv., Loopy + Local Inv., Decomp. + Local Inv.]
Figure 2: Detection and Labeling Performance. [Left] Labeling: On each display from the sequence W2, we randomly occlude between 3 and 10 parts and superimpose 30 randomly positioned clutter points. For any given number of visible parts, the four curves represent the percentage of correctly labeled parts out of the total labels in all 700 displays of W2. Each curve reflects a combination of either Local or Global translation invariance and Decomposable or Loopy graph. [Right] Detection: For the same four combinations we plot P_detection (the probability of detecting a person when the display shows one) for a fixed P_false-alarm = 10% (the probability of stating that a person is present when only 30 clutter points are presented). Again, we vary the number of visible points between 4, 7 and 11.
Like the dynamic programming algorithm of [3], the Loopy Belief Propagation algorithm runs in O(M N³); however, 4 or 5 more iterations are needed for it to converge. Furthermore, to avoid local maxima, we restart the algorithm at most 10 times using a randomly generated schedule to pass the messages. Finally, when global invariance is used, we re-initialize τ up to 10 times, each time picking a value within a different region of the display. On average, about 5 restarts for τ, 5 different schedules and 3 iterations of EM suffice to achieve a labeling with a likelihood comparable to that of the ground-truth labeling.
5
Discussion, Conclusions and Future Work
Generalizing our model from decomposable [3] to loopy produced a gain in performance. Further improvement would be expected when allowing larger cliques in the junction graph, at a considerable computational cost. A more appreciable improvement was obtained by adding a global variable modeling the centroid of the figure.
Taking [3] as a reference, there is about a 10x increase in computational cost when we either allow a loopy graph or account for translations with the centroid. When both enhancements are present the cost increase is between 100x and 1,000x. We believe that the combination of these two techniques points in the right direction.
The local translation invariance model required the computation of relative positions
within the same clique. These could not be computed in the majority of cliques
when a large number of body parts were occluded, even with the more accurate
loopy graphical model. Moreover, the introduction of the centroid variable is also
valuable in light of a possible extension of the algorithm to multi-frame tracking.
We should also note that the structure learning technique is sub-optimal due to
the greediness of the algorithm. In addition, the model parameters and structure
are estimated under the hypothesis of no occlusion or clutter. An algorithm that
considers these two phenomena in the learning phase could likely achieve better
results in realistic situations, when clutter and occlusion are significant.
Finally, the step towards using displays directly obtained from gray-level image
sequences remains a challenge that will be the goal of future work.
5.1
Acknowledgements
We are very grateful to Max Welling, who first proposed the idea of using LBP to
solve for the optimal labelling in a 2001 Research Note, and who gave many useful
suggestions. Sequences W1 and W2 used in the experiments were collected by L.
Goncalves and E. di Bernando. This work was partially funded by the NSF Center
for Neuromorphic Systems Engineering grant EEC-9402726 and by the ONR MURI
grant N00014-01-1-0890.
References
[1] Y. Song, L. Goncalves and P. Perona, "Learning Probabilistic Structure for Human Motion Detection", Proc. IEEE Conf. Computer Vision and Pattern Recognition, vol. II, pages 771-777, Kauai, Hawaii, December 2001.
[2] Y. Song, L. Goncalves and P. Perona, "Unsupervised Learning of Human Motion Models", Advances in Neural Information Processing Systems 14, Vancouver, Canada, December 2001.
[3] Y. Song, L. Goncalves, and P. Perona, "Monocular perception of biological motion - clutter and partial occlusion", Proc. of 6th European Conference on Computer Vision, vol. II, pages 719-733, Dublin, Ireland, June/July 2000.
[4] G. Johansson, "Visual Perception of Biological Motion and a Model For Its Analysis", Perception and Psychophysics 14, 201-211, 1973.
[5] C. Tomasi and T. Kanade, "Detection and tracking of point features", Tech. Rep. CMU-CS-91-132, Carnegie Mellon University, 1991.
[6] S.M. Aji and R.J. McEliece, "The generalized distributive law", IEEE Trans. Info. Theory, 46:325-343, March 2000.
[7] P. Giudici and R. Castelo, "Improving Markov Chain Monte Carlo Model Search for Data Mining", Machine Learning 50(1-2), 127-158, 2003.
[8] W.T. Freeman and Y. Weiss, "On the optimality of solutions of the max-product belief propagation algorithm in arbitrary graphs", IEEE Transactions on Information Theory 47:2, pages 723-735, 2001.
[9] J.S. Yedidia, W.T. Freeman and Y. Weiss, "Bethe free energy, Kikuchi approximations and belief propagation algorithms", Advances in Neural Information Processing Systems 13, Vancouver, Canada, December 2000.
[10] D. Chickering, "Optimal Structure Identification with Greedy Search", Journal of Machine Learning Research 3, pages 507-554, 2002.
| 2368 |@word version:4 inversion:1 johansson:4 giudici:1 additively:1 pick:2 accommodate:1 score:6 selecting:2 current:1 com:1 clara:1 must:1 visible:7 realistic:1 partition:2 plot:1 gist:1 occlude:1 greedy:2 fewer:1 plane:1 yi1:1 ith:1 detecting:1 node:3 simpler:1 along:2 ik:1 introduce:2 acquired:1 expected:3 nor:1 multi:1 freeman:2 globally:1 pf:1 becomes:1 unrelated:1 notation:1 maximizes:1 mass:2 suffice:1 moreover:1 marzia:2 corporation:1 grant:2 yn:1 before:2 engineering:1 local:10 limit:1 analyzing:1 subscript:2 might:2 blvd:1 r4:1 shaded:1 relaxing:1 range:1 block:1 kauai:1 procedure:1 aji:1 area:1 significantly:1 suggest:1 get:1 onto:1 cannot:1 close:1 marginalize:1 scheduling:1 put:1 greediness:1 www:1 imposed:1 missing:3 maximizing:5 center:1 independently:1 decomposable:4 identifying:1 rule:1 fx:1 coordinate:1 construction:3 exact:1 programming:1 hypothesis:10 origin:1 velocity:8 recognition:1 walking:1 muri:1 labeled:1 observed:2 capture:1 calculate:1 region:1 connected:1 trade:1 highest:3 valuable:1 complexity:2 occluded:2 grateful:1 joint:5 describe:1 monte:1 detected:6 labeling:6 exhaustive:1 y1t:1 solve:2 larger:1 say:1 otherwise:1 ability:2 think:1 sequence:7 propose:1 mission:1 product:2 instable:1 relevant:1 loop:1 subgraph:1 achieve:3 forth:1 description:1 intuitive:1 getting:1 enhancement:1 optimum:1 produce:1 converges:2 kikuchi:1 stating:1 noticeable:1 c:1 involves:1 indicate:1 triangulated:2 convention:2 direction:1 correct:1 centered:1 human:15 implementing:2 require:1 suffices:1 probable:1 elementary:1 summation:2 biological:2 exploring:1 extension:1 hold:1 effortlessly:1 ground:1 claim:1 sought:1 adopt:1 consecutive:1 vary:1 fh:1 proc:2 applicable:1 label:2 reflects:1 hope:2 brought:1 gaussian:2 avoid:1 claudio:1 factorizes:2 june:1 improvement:5 likelihood:6 indicates:1 tech:1 centroid:5 greedily:1 detect:3 sense:1 initially:1 pasadena:2 perona:5 selects:1 interested:2 i1:1 arg:4 among:2 overall:1 unobservable:1 denoted:2 retaining:1 initialize:2 psychophysics:1 marginal:1 once:1 represents:2 unsupervised:1 foreground:4 future:2 few:2 randomly:5 recognize:1 individual:1 phase:1 occlusion:4 attempt:2 detection:23 message:1 possibility:1 mining:1 predominant:1 bracket:1 light:2 chain:1 accurate:2 edge:3 partial:1 tree:4 pmf:2 re:1 minimal:1 dublin:1 increased:1 column:1 modeling:1 assignment:3 maximization:3 loopy:13 cost:4 introducing:2 vertex:1 subset:1 decomposability:2 neuromorphic:1 uniform:1 a1n:1 dependency:1 eec:1 chooses:1 person:2 density:4 probabilistic:3 off:1 enhance:1 w1:4 squared:1 again:1 satisfied:1 opposed:2 n1n:1 hawaii:1 conf:1 account:1 potential:4 bold:1 tion:1 root:2 lab:2 bayes:1 contribution:1 minimize:1 accuracy:1 variance:1 who:2 efficiently:1 ensemble:1 maximized:1 identify:1 generalize:2 modelled:1 bayesian:1 identification:1 produced:2 carlo:1 straight:1 whenever:1 definition:1 energy:1 obvious:1 associated:7 di:1 gain:1 recall:1 improves:1 schedule:1 positioned:2 actually:1 back:1 appears:1 restarts:2 specify:1 improved:1 wei:2 furthermore:3 just:2 correlation:1 mceliece:1 hand:2 horizontal:2 propagation:9 quality:1 gray:1 believe:1 usa:3 true:2 assigned:2 conditionally:1 razor:1 criterion:2 generalized:3 pdf:2 evident:1 complete:2 motion:9 image:2 meaning:2 instantaneous:1 wise:1 attached:1 polito:2 interpretation:3 numerically:1 refer:1 significant:1 mellon:1 populated:1 replicating:2 funded:1 moving:2 j4:5 showed:1 n00014:1 binary:1 onr:1 rep:1 yi:9 caltech:3 scoring:1 captured:1 seen:1 determine:1 converge:3 maximize:1 
dashed:1 ii:2 july:1 long:1 va:1 vision:8 expectation:3 cmu:1 iteration:2 represent:3 achieved:1 lbp:6 background:6 want:2 addition:1 w2:4 induced:2 subject:1 member:2 december:3 call:1 presence:3 easy:1 marginalization:2 bic:2 gave:1 architecture:1 idea:1 translates:1 det:2 whether:2 song:3 yik:1 useful:2 santa:1 clutter:9 locally:1 fanti:3 http:1 exist:1 percentage:1 nsf:1 estimated:1 correctly:1 write:3 discrete:3 carnegie:1 vol:2 independency:1 four:3 achieving:1 prevent:1 graph:14 efx:1 pietro:1 sum:1 run:1 prob:2 letter:2 family:3 vb:1 comparable:1 bound:1 guaranteed:1 centrally:1 display:17 correspondence:1 fan:2 activity:1 i4:1 constrain:2 bp:1 scene:5 w22:1 optimality:1 combination:3 march:1 across:1 em:4 partitioned:1 alse:1 intuitively:1 invariant:1 taken:1 computationally:1 monocular:1 remains:3 needed:1 end:1 junction:5 operation:1 endowed:1 rewritten:1 multiplied:1 available:1 yedidia:1 generic:1 appropriate:1 rp:5 graphical:9 marginalized:1 especially:2 added:1 quantity:1 already:1 diagonal:1 italic:1 ireland:1 unable:1 restart:2 sensible:1 majority:1 distributive:1 fy:29 considers:1 collected:1 spanning:1 index:1 difficult:1 holding:1 info:1 rise:1 perform:1 allowing:2 upper:2 vertical:3 observation:5 markov:1 finite:1 defining:1 situation:1 frame:8 arbitrary:1 inv:8 canada:1 superimpose:1 inverting:1 pair:2 required:1 specified:1 connection:2 tomasi:1 california:2 learned:1 trans:1 able:1 dynamical:1 pattern:2 perception:3 challenge:1 max:8 belief:9 event:1 natural:1 indicator:1 representing:5 scheme:3 technology:2 prior:1 literature:1 acknowledgement:1 vancouver:2 determining:2 relative:2 law:1 fully:1 suggestion:1 goncalves:4 enclosed:1 sufficient:1 consistent:1 vij:1 occam:1 translation:11 row:1 compatible:2 free:1 infeasible:2 allow:2 understand:1 institute:2 face:1 taking:2 benefit:2 curve:4 dimension:2 stuck:1 made:2 projected:1 far:1 welling:1 transaction:1 approximate:3 implicitly:1 clique:8 ml:2 global:12 incoming:1 assumed:2 conclude:1 un:3 search:4 decomposes:2 additionally:1 kanade:1 bethe:1 learn:2 ca:3 obtaining:2 improving:1 necessarily:1 separator:1 european:1 diag:1 main:1 whole:1 alarm:1 child:1 body:22 intel:2 referred:1 biggest:1 precision:1 sub:1 position:9 explicit:1 chickering:1 admissible:1 removing:1 familiarity:1 adding:2 labelling:13 generalizing:2 likely:3 explore:1 visual:2 hillclimbing:1 ordered:1 tracking:2 partially:1 truth:1 determines:1 conditional:4 goal:4 marked:1 decomp:4 towards:1 labelled:2 replace:1 considerable:1 hard:1 experimentally:1 specifically:2 perceiving:1 determined:3 hyperplane:1 called:2 total:1 pas:2 invariance:9 experimental:1 exception:1 college:1 evaluate:1 phenomenon:1 |
1,505 | 2,369 | Circuit Optimization Predicts Dynamic
Networks for Chemosensory Orientation in the
Nematode Caenorhabditis elegans
Nathan A. Dunn
John S. Conery
Dept. of Computer Science
University of Oregon
Eugene, OR 97403
{ndunn,conery}@cs.uoregon.edu
Shawn R. Lockery
Institute of Neuroscience
University of Oregon
Eugene, OR 97403
[email protected] ?
Abstract
The connectivity of the nervous system of the nematode Caenorhabditis elegans has been described completely, but the analysis of the neuronal basis of behavior in this system is just beginning. Here, we used
an optimization algorithm to search for patterns of connectivity sufficient to compute the sensorimotor transformation underlying C. elegans
chemotaxis, a simple form of spatial orientation behavior in which turning probability is modulated by the rate of change of chemical concentration. Optimization produced differentiator networks with inhibitory
feedback among all neurons. Further analysis showed that feedback regulates the latency between sensory input and behavior. Common patterns
of connectivity between the model and biological networks suggest new
functions for previously identified connections in the C. elegans nervous
system.
1
Introduction
The complete description of the morphology and synaptic connectivity of all 302 neurons
in the nematode Caenorhabditis elegans [15] raised the prospect of the first comprehensive
understanding of the neuronal basis of an animal's entire behavioral repertoire. The advent
of new electrophysiological and functional imaging techniques for C. elegans neurons [7, 8]
has made this project more realistic than before. Further progress would be accelerated,
however, by prior knowledge of the sensorimotor transformations underlying the behaviors
of C. elegans, together with knowledge of how these transformations could be implemented
with C. elegans-like neuronal elements.
In previous work, we and others have identified the main features of the sensorimotor transformation underlying C. elegans chemotaxis [5, 11], one of two forms of spatial orientation identified in this species. Locomotion consists of periods of sinusoidal forward movement, called "runs," which are punctuated by bouts of turning [12] that have been termed "pirouettes" [11]. Pirouette probability is modulated by the rate of change of chemical concentration (dC(t)/dt). When dC(t)/dt < 0, pirouette probability is increased, whereas when dC(t)/dt > 0, pirouette probability is decreased. Thus, runs down the gradient are truncated and runs up the gradient are extended, resulting in net movement toward the gradient peak.
*To whom correspondence should be addressed.
The process of identifying the neurons that compute this sensorimotor transformation is
just beginning. The chemosensory neurons responsible for the input representation are
known[1], as are the premotor interneurons for turning behavior[2]. Much less is known
about the interneurons that link inputs to outputs. To gain insight into how this transformation might be computed at the interneuronal level, we used an unbiased parameter optimization algorithm to construct model neural networks capable of computing the transformation using C. elegans-like neurons. We found that networks with one or two interneurons
were sufficient. A common but unexpected feature of all networks was inhibitory feedback
among all neurons. We propose that the main function of this feedback is to regulate the
latency between sensory input and behavior.
2
Assumptions
We used simulated annealing to search for patterns of connectivity sufficient for computing
the chemotaxis sensorimotor transformation. The algorithm was constrained by three main
assumptions:
1. Primary chemosensory neurons in C. elegans report attractant concentration at a
single point in space.
2. Chemosensory interneurons converge on a network of locomotory command neurons to regulate turning probability.
3. The sensorimotor transformation in C. elegans is computed mainly at the network
level, not at the cellular level.
Assumption (1) follows from the anatomy and distribution of chemosensory organs in C.
elegans[1, 13, 14]. Assumption (2) follows from anatomical reconstructions of the C. elegans nervous system [15], together with the fact that laser ablation studies have identified
four pairs of pre-motor interneurons that are necessary for turning in C. elegans[2]. Assumption (3) is heuristic.
3
Network
Neurons were modeled by the equation:
    τ_i dA_i(t)/dt = −A_i(t) + σ(I_i),   with   I_i = Σ_j (w_ji A_j(t)) + b_i    (1)

where A_i is the activation level of neuron i in the network, σ(I_i) is the logistic function 1/(1 + e^(−I_i)), w_ji is the synaptic strength from neuron j to neuron i, and b_i is a static bias. The time constant τ_i determines how rapidly the activation approaches its steady-state value for constant I_i. Equation 1 embodies the additional assumption that, on the
time scale of chemotaxis behavior, C. elegans neurons are effectively passive, isopotential nodes that release neurotransmitter in graded fashion. This assumption follows from
preliminary electrophysiological recordings from neurons and muscles in C. elegans and
Ascaris, another species of nematode[3, 4, 6].
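For concreteness, Equation 1 can be integrated with a simple forward-Euler step (a sketch of ours; the weights, stimulus and step size are arbitrary, and the paper does not specify its integrator):

import numpy as np

def simulate(W, b, tau, C, dt=0.01, T=10.0):
    # Forward-Euler integration of Eq. (1); the sensory input C(t) is
    # added to the net input of neuron 0, as described in the text.
    n = len(b)
    A = np.zeros(n)
    trace = []
    for t in np.arange(0.0, T, dt):
        I = W.T @ A + b            # I_i = sum_j w_ji A_j + b_i
        I[0] += C(t)               # chemosensory drive enters neuron 0
        A = A + dt / tau * (-A + 1.0 / (1.0 + np.exp(-I)))
        trace.append(A.copy())
    return np.array(trace)

# Example: a 10-neuron network with random weights and a ramp stimulus.
rng = np.random.default_rng(0)
n = 10
out = simulate(W=0.5 * rng.standard_normal((n, n)), b=np.zeros(n),
               tau=np.full(n, 0.5), C=lambda t: 0.2 * t)
print(out[-1])                     # activations after 10 s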
The model of the chemosensory network had one input neuron, eight interneurons, and one
output neuron (Figure 1). The input neuron (i = 0) was a lumped representation of all
[Figure 1 diagram: sensory neuron (0) receiving C(t); interneurons (1), (2), …, (8); output neuron (9) producing F(t).]
Figure 1: Model chemosensory network. Model neurons were passive, isopotential nodes. The network contained one sensory neuron, one output neuron, and eight interneurons. Input to the sensory neuron was the time course of chemoattractant concentration C(t). The activation of the output neuron was mapped to turning probability by the function F(t) given in Equation 2. The network was fully connected with self-connections (not shown).
the chemosensory neurons in the real animal. Sensory input to the network was C(t), the
time course of attractant concentration experienced by a real worm in an actual chemotaxis
assay[11]. C(t) was added to the net input of the sensory neuron (i = 0). The interneurons
in the model (1 ≤ i ≤ 8) represented all the chemosensory interneurons in the real animal.
The activity level of the output neuron (i = 9) determined the behavioral state of the model,
i.e. turning probability[11], according to the piecewise function:

    F(t) = P_high  if A₉(t) ≤ T₁;   P_rest  if T₁ < A₉(t) < T₂;   P_low  if A₉(t) ≥ T₂    (2)

where T₁ and T₂ are arbitrary thresholds and the three P values represent the indicated levels of turning probability.
4
Optimization
The chemosensory network model was optimized to compute an idealized version of the
true sensorimotor transformation linking C(t) to turning probability[11]. To construct the
idealized transformation, we mapped the instantaneous derivative of C(t) to desired turning
probability G(t) as follows:

    G(t) = P_high  if dC(t)/dt ≤ −U;   P_rest  if −U < dC(t)/dt < +U;   P_low  if dC(t)/dt ≥ +U    (3)

where U is a threshold derived from previous behavioral observations (Figure 7 in [11]).
The goal of the optimization was to make the network's turning probability F(t) equal to
the desired turning probability G(t) at all t. Optimization was carried out by annealing
three parameter types: weights, time constants, and biases. Optimized networks were fully
connected and self-connections were allowed.
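A minimal sketch of such an annealing loop (our own illustration: the cooling schedule, proposal distribution and toy loss are assumptions; the real objective would compare F(t) against G(t) over the recorded C(t) course):

import numpy as np

def anneal(loss, theta0, T0=1.0, cooling=0.995, steps=20000, seed=0):
    # Generic simulated annealing over a real parameter vector
    # (weights, time constants and biases flattened together).
    rng = np.random.default_rng(seed)
    theta, E = theta0.copy(), loss(theta0)
    best, best_E, T = theta.copy(), E, T0
    for _ in range(steps):
        cand = theta + T * rng.standard_normal(theta.shape)  # perturb
        E_cand = loss(cand)
        if E_cand < E or rng.random() < np.exp((E - E_cand) / T):
            theta, E = cand, E_cand                           # accept
            if E < best_E:
                best, best_E = theta.copy(), E
        T *= cooling                                          # cool
    return best, best_E

theta, E = anneal(lambda th: ((th - 3.0) ** 2).sum(), np.zeros(4))
print(theta, E)   # toy loss: converges near [3, 3, 3, 3]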
The result of a typical optimization run is illustrated in Figure 2(a), which shows good
agreement between network and desired turning probabilities. Results similar to Figure
2(a) were found for 369 networks out of 401 runs (92%). We noted that in most networks,
many interneurons had a constant offset but showed little or no response to changes in
sensory input. We found that we could eliminate these interneurons by a pruning procedure
in which the tonic effect of the offset was absorbed into the bias term of postsynaptic
neurons. Pruning had little or no effect on network performance (Figure 2(b)), suggesting
that the eliminated neurons were nonfunctional. By this procedure, 67% of the networks
could be reduced to one interneuron and 27% could be reduced to two interneurons. A key
question is whether the network generalizes to a C(t) time course that it has not seen before.
Generalization was tested by challenging pruned networks with the C(t) time course from
a second real chemotaxis assay. There was good agreement between network and desired
turning probability, indicating an acceptable level of generalization (Figure 2(c)).
[Figure 2 plots: panels (a), (b) and (c), each showing the desired turning probability G(t) above the network turning probability F(t) over 0-800 seconds.]
Figure 2: Network performance after optimization. In each panel, the upper trace represents G(t), the desired turning probability in response to a particular C(t) time course (not shown), whereas the lower trace represents F(t), the resulting network turning probability. Shading signifies turning probability (black = P_high, grey = P_rest, white = P_low). (a) Performance of a typical network after optimization. (b) Performance of the same network after pruning. (c) Performance of the pruned network when stimulated by a different C(t) time course. Network turning probability is delayed relative to desired turning probability because of the time required for sensory input to affect behavioral state.
5
Results
Here we focus on the largest class of networks, those with a single interneuron (Figure
3(a)). All single-interneuron networks had three common features (Figure 3(b)). First,
the direct pathway from sensory neuron to output neuron was excitatory, whereas the indirect pathway via the interneuron was inhibitory. Such a circuit computes an approximate
derivative of its input by subtracting a delayed version of the input from its present value[9].
Second, all neurons had significant inhibitory self-connections. We noted that inhibitory
self-connections were strongest on the input and output neurons, the two neurons comprising the direct pathway representing current sensory input. We hypothesized that the function of inhibitory self-connections was to decrease response latency in the direct pathway.
Such a decrease would be a means of compensating for the fact that G(t) was an instantaneous function of C(t), whereas the neuronal time constant τ_i tends to introduce a delay between C(t) and the network's output. Third, the net effect of all disynaptic recurrent connections was also inhibitory. By analogy to inhibitory self-connections, we hypothesized
that the function of the recurrent pathways was also to regulate response latency.
To test the hypothetical functions of the self-connections and recurrent connections, we introduced an explicit time delay (Δt) between dC(t)/dt and the desired turning probability G(t) such that:

    G′(t) = G(t − Δt)    (4)

G′(t) was then substituted for G(t) during optimization. We then repeated the optimization procedure with a range of Δt values and looked for systematic effects on connectivity.
[Figure 3 diagrams. (a) Network schematic: C(t) drives the input neuron, which connects to the output neuron both directly and via the interneuron; the output neuron produces F(t); line style distinguishes excitatory from inhibitory connections. (b) Feature table: the direct excitatory plus delayed inhibitory pathway implements differentiation; the slow self-connections and the inhibitory recurrent connections are hypothesized to regulate response latency.]
Figure 3: Connectivity and common features of single-interneuron networks. (a) Average
sign and strength of connections. Line thickness is proportional to connection strength. In
other single-interneuron networks, the sign of the connections to and from the interneuron was reversed (not shown). (b) The three common features of single-interneuron networks.
Effects on self-connections. We found that the magnitude of self-connections on the input and output neurons was inversely related to Δt (Figure 4(a)). This result suggests that the function of these self-connections is to regulate response latency, as hypothesized. We noted that the interneuron self-connection remains comparatively small regardless of Δt. This result is consistent with the function of the disynaptic pathway, which is to present a delayed version of the input to the output neuron.
[Figure 4 plots: (a) self-connection weight vs. Δt, the target delay for G(t) (seconds), with curves for the input neuron, the interneuron and the output neuron; (b) product of recurrent weights vs. Δt, with curves for the input-interneuron, input-output and interneuron-output loops.]

Figure 4: The effect on connectivity of introducing time delays between input and output during optimization. (a) The effect on self-connections. (b) The effect on recurrent connections. Recurrent connection strength was quantified by taking the product of the weights along each of the three recurrent loops in Figure 3(a).

Effects on recurrent connections. We quantified the strength of the recurrent connections by taking the product of the two weights along each of the three recurrent loops in the network. We found that the strengths of the two recurrent loops that included the interneuron were inversely related to Δt (Figure 4(b)). This result suggests that the function of these loops is to regulate response latency and supports the hypothetical function of the recurrent connections. Interestingly, however, the strength of the recurrent loop between input and output neurons was not affected by changes in Δt. Comparing the overall patterns of changes in weights produced by changes in Δt showed that the optimization algorithm utilized self-connections to adjust delays along the direct pathway and recurrent connections to adjust delays along the indirect pathway. The reason for this pattern is presently unclear.
6
Analysis
To provide a theoretical explanation for the effects of time delays on the magnitude of self-connections, we analyzed the step response of Equation 1 for a reduced system containing a single linear neuron with a self-connection:

    τ_i dA_i(t)/dt = w_ii A_i(t) − A_i(t) + h(t)    (5)

where h(t) represents a generic external input (sensory or synaptic). Solving Equation 5 for h(t) equal to a step of amplitude M at t = 0 with A(0) = 0 gives:

    A_i(t) = [M / (1 − w_ii)] · [1 − exp(−(1 − w_ii) t / τ_i)]    (6)

From Equation 6, when w_ii = 0 (no self-connection) the neuron relaxes at the rate 1/τ_i, whereas when w_ii < 0 (inhibitory self-connection) the neuron relaxes at the higher rate of (1 + |w_ii|)/τ_i. Thus, response latency drops as the strength of the inhibitory self-connection increases and, conversely, response latency rises as connection strength decreases. This result explains the effect on self-connection strength of introducing a delay between dC(t)/dt and turning probability (Figure 4(a)).
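This prediction is easy to check numerically (a sketch of ours; the parameter values are arbitrary):

import numpy as np

tau, w, M, dt = 0.5, -2.0, 1.0, 1e-4       # inhibitory self-connection, w < 0
A, trace = 0.0, []
for t in np.arange(0.0, 3.0, dt):          # Euler-integrate Eq. (5)
    A += dt / tau * (w * A - A + M)
    trace.append(A)
trace = np.asarray(trace)

A_inf = M / (1 - w)                        # steady state from Eq. (6)
rate = (1 - w) / tau                       # predicted relaxation rate
t63 = np.argmax(trace >= (1 - np.exp(-1)) * A_inf) * dt
print(t63, 1 / rate)                       # both ~0.167 s: faster than tau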
We made a similar analysis of the effects of time delays on the recurrent connections. Here,
however, we studied a reduced system of two linear neurons with reciprocal synapses and
an external input to one of the neurons.
    τ_i dA_i(t)/dt = w_ji A_j(t) − A_i(t) + h(t)   and   τ_j dA_j(t)/dt = w_ij A_i(t) − A_j(t)    (7)

We solved this system for the case where the external input is h(t) = M sin(ωt). The solution has the form:

    A_i(t) = D_i sin(ωt − φ_i)   and   A_j(t) = D_j sin(ωt − φ_j)    (8)

with

    φ_i = φ_j = arctan[ 2ωτ / (1 − w_ij w_ji − ω²τ²) ]    (9)
Equation (9) gives the phase delay between the sinusoidal external input and the sinusoidal response of the two-neuron system. In Figure 5, the relationship between phase delay and the strength of the recurrent connections is plotted with the connection strength on the ordinate, as in Figure 4(b). The graph shows an inverse relationship between connection strength and phase delay that approximates the inverse relationship between connection strength and time delay shown in Figure 4(b). The correspondence between the trends in Figures 4(b) and 5 explains the effects on recurrent connection strength of introducing a delay between dC(t)/dt and turning probability.
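Equation 9 can be evaluated directly (a sketch of ours; treating ω as angular frequency and the value of τ are our assumptions):

import numpy as np

def phase_delay(w_prod, omega, tau):
    # Phase lag from Eq. (9); w_prod is the recurrent product w_ij * w_ji.
    return np.arctan2(2 * omega * tau, 1 - w_prod - (omega * tau) ** 2)

omega = 2 * np.pi * 0.01875          # one of the driving frequencies used above
for w_prod in (-10.0, -100.0, -250.0):
    print(w_prod, phase_delay(w_prod, omega, tau=0.5))
# The phase delay shrinks as the recurrent product becomes more negative.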
[Figure 5 plot: recurrent product (ordinate) vs. phase delay in radians (abscissa); three curves, one per driving frequency ω₁, ω₂, ω₃.]

Figure 5: The relationship between phase delay and recurrent connection strength. Equation 9 is plotted for three different driving frequencies (Hz ×10⁻³): ω₁ = 50, ω₂ = 18.75, and ω₃ = 3.75. These frequencies span the frequencies observed in a Fourier analysis of the C(t) time course used during optimization. There is an inverse relationship between connection strength and phase delay. Axes have been reversed for comparison with Figure 4(b).

[Figure 6 diagram: the biological circuit, with chemosensory neurons (ASE, AWC, AFD, FLP), interneurons (AIA, AIB, AIY, RIA, RIB, RIF, RIM, SAAD, DVC), and the command neurons AVA and AVB.]

Figure 6: The network of chemosensory interneurons in the real animal. Shown are the interneurons interposed between the chemosensory neuron ASE and the two locomotory command neurons AVA and AVB. The diagram is restricted to interneuron pathways with less than three synapses. Arrows are chemical synapses. Dashed lines are gap junctions. Connectivity is inferred from the anatomical reconstructions of reference [15].

7
Discussion
We used simulated annealing to search for networks capable of computing an idealized
version of the chemotaxis sensorimotor transformation in C. elegans. We found that one
class of such networks is the three-neuron differentiator with inhibitory feedback. The
appearance of differentiator networks was not surprising [9] because the networks were
optimized to report, in essence, the sign of dC(t)/dt (Equation 3). The prevalence of inhibitory feedback, however, was unexpected. Inhibitory feedback was found at two levels:
self-connections and recurrent connections. Combining an empirical and theoretical approach, we have argued that inhibitory feedback at both levels functions to regulate the
response latency of the system's output relative to its input. Such regulation could be functionally significant in the C. elegans nervous system, where neurons may have an unusually
high input resistance due to their small size. High input resistance could lead to long relaxation times because the membrane time constant is proportional to input resistance. The
types of inhibitory feedback identified here could also be used to mitigate this effect.
There are intriguing parallels between our three-neuron network models and the biological
network. Figure 6 shows the network of interneurons interposed between the chemosensory
neuron class ASE, the main chemosensory neurons for salt chemotaxis, and the locomotory
command neurons classes AVB and AVA. The interneurons in Figure 6 are candidates for
computing the sensorimotor transformation for chemotaxis C. elegans. Three parallels are
prominent. First, there are two candidate differentiator circuits, as noted previously[16].
These circuits are formed by the neuronal triplets ASE-AIA-AIB and ASE-AWC-AIB.
Second, there are self-connections on three neuron classes in the circuit, including AWC,
one of the differentiator neurons. These self-connections represent anatomically identified
connections between left and right members of the respective classes. It remains to be
seen, however, whether these connections are inhibitory in the biological network. Selfconnections could also be implemented at the cellular level by voltage dependent currents.
A voltage-dependent potassium current, for example, can be functionally equivalent to an
inhibitory self-connection. Electrophysiological recordings from ASE and other neurons in
C. elegans confirm the presence of such currents[6, 10]. Thus, it is conceivable that many
neurons in the biological network have the cellular equivalent of self-connections. Third,
there are reciprocal connections between ASE and three of its four postsynaptic targets.
These connections could provide recurrent inhibition if they have the appropriate signs.
Common patterns of connectivity between the model and biological networks suggest new
functionality for identified connections in the C. elegans nervous system. It should be
possible to test these functions through physiological recordings and neuronal ablations.
Acknowledgements
We are grateful to Don Pate for his technical assistance. Supported by NSF IBN-0080068.
References
[1] C. I. Bargmann and H. R. Horvitz. Chemosensory neurons with overlapping functions direct chemotaxis to multiple chemicals in C. elegans. Neuron, 7:729-742, 1991.
[2] M. Chalfie, J. E. Sulston, J. G. White, E. Southgate, J. N. Thomson, and S. Brenner. The neural circuit for touch sensitivity in C. elegans. J. of Neurosci., 5:956-964, 1985.
[3] R. E. Davis and A. O. Stretton. Passive membrane properties of motorneurons and their role in long-distance signaling in the nematode Ascaris. J. of Neurosci., 9:403-414, 1989.
[4] R. E. Davis and A. O. W. Stretton. Signaling properties of Ascaris motorneurons: graded active response, graded synaptic transmission, and tonic transmitter release. J. of Neurosci., 9:415-425, 1989.
[5] D. B. Dusenbery. Responses of the nematode C. elegans to controlled chemical stimulation. J. of Comparative Physiology, 136:127-331, 1980.
[6] M. B. Goodman, D. H. Hall, L. Avery, and S. R. Lockery. Active currents regulate sensitivity and dynamic range in C. elegans neurons. Neuron, 20:763-772, 1998.
[7] R. Kerr, V. Lev-Ram, G. Baird, P. Vincent, R. Y. Tsien, and W. R. Schafer. Optical imaging of calcium transients in neurons and pharyngeal muscle of C. elegans. Neuron, 26(3):583-94, 2000.
[8] S. R. Lockery and M. B. Goodman. Tight-seal whole-cell patch clamping of C. elegans neurons. Methods Enzymol, 293:201-17, 1998.
[9] E. E. Munro, L. E. Shupe, and E. E. Fetz. Integration and differentiation in dynamical recurrent neural networks. Neural Computation, 6:405-419, 1994.
[10] W. T. Nickell, R. Y. Pun, C. I. Bargmann, and S. J. Kleene. Single ionic channels of two C. elegans chemosensory neurons in native membrane. J. of Membrane Biology, 189(1):55-66, 2002.
[11] J. T. Pierce-Shimomura, T. M. Morse, and S. R. Lockery. The fundamental role of pirouettes in C. elegans chemotaxis. J. of Neurosci., 19(21):9557-69, 1999.
[12] T. A. Rutherford and N. A. Croll. Wave forms of C. elegans in a chemical attractant and repellent and in thermal gradients. J. of Nematology, 11:232-240, 1979.
[13] S. Ward. Chemotaxis by the nematode C. elegans: identification of attractants and analysis of the response by use of mutants. Proc of the Natl Acad Sci USA, 70:817-821, 1973.
[14] S. Ward, N. Thomson, J. G. White, and S. Brenner. Electron microscopical reconstruction of the anterior sensory anatomy of the nematode C. elegans. J. of Comparative Neurology, 160:313-338, 1975.
[15] J. G. White, E. Southgate, J. N. Thomson, and S. Brenner. The structure of the nervous system of the nematode C. elegans. Phil Trans of the R Soc Lond [Biol], 314:1-340, 1986.
[16] J. G. White. Neuronal connectivity in C. elegans. Trends in Neuroscience, 8:277-283, 1985.
| 2369 |@word version:4 seal:1 grey:1 shading:1 interestingly:1 horvitz:1 current:5 comparing:1 anterior:1 surprising:1 activation:3 intriguing:1 john:1 realistic:1 motor:1 drop:1 nervous:6 ria:1 beginning:2 reciprocal:2 node:2 ron:1 arctan:1 pun:1 along:4 direct:6 awc:3 consists:1 avery:1 pathway:9 behavioral:4 introduce:1 behavior:7 morphology:1 compensating:1 actual:1 little:2 project:1 underlying:3 schafer:1 circuit:6 attractant:3 advent:1 panel:1 plow:3 kleene:1 transformation:13 differentiation:2 mitigate:1 hypothetical:2 unusually:1 lockery:4 interneuronal:1 ascaris:3 before:2 t1:3 tends:1 acad:1 lev:1 might:1 black:1 studied:1 ava:3 suggests:2 challenging:1 conversely:1 bi:2 range:2 uoregon:2 responsible:1 prevalence:1 signaling:2 procedure:3 dunn:1 empirical:1 physiology:1 pre:1 suggest:2 equivalent:2 phil:1 punctuated:1 regardless:1 identifying:1 insight:1 his:1 target:3 hypothesis:1 locomotion:1 agreement:2 element:1 trend:2 forthe:1 utilized:1 predicts:1 native:1 observed:1 role:2 daj:1 solved:1 connected:2 movement:2 prospect:1 decrease:3 dynamic:2 grateful:1 solving:1 tight:1 completely:1 basis:2 indirect:2 bargmann:2 represented:1 neurotransmitter:1 laser:1 nematode:9 premotor:1 heuristic:1 ward:2 a9:3 net:3 propose:1 reconstruction:3 subtracting:1 product:7 caenorhabditis:3 loop:5 ablation:2 rapidly:1 combining:1 prest:3 description:1 potassium:1 transmission:1 comparative:2 nonfunctional:1 tions:1 recurrent:24 progress:1 soc:1 implemented:2 c:1 ibn:1 avb:3 anatomy:2 functionality:1 dvc:1 transient:1 explains:1 argued:1 locomotory:3 generalization:2 preliminary:1 repertoire:1 biological:5 attractants:1 hall:1 exp:1 electron:1 driving:1 proc:1 phigh:3 largest:1 organ:1 command:4 voltage:2 release:2 derived:1 focus:1 mutant:1 transmitter:1 mainly:1 dependent:2 entire:1 eliminate:1 wij:2 comprising:1 overall:1 among:2 orientation:3 animal:4 spatial:2 raised:1 constrained:1 integration:1 equal:2 construct:2 eliminated:1 biology:1 represents:3 t2:3 others:1 report:2 piecewise:1 comprehensive:1 delayed:4 phase:7 interneurons:17 adjust:2 analyzed:1 natl:1 capable:2 necessary:1 respective:1 desired:7 plotted:2 theoretical:2 increased:1 signifies:1 introducing:3 delay:18 connect:1 thickness:1 peak:1 sensitivity:2 fundamental:1 systematic:1 chemotaxis:12 together:2 connectivity:11 chemoattractant:1 containing:1 external:4 derivative:2 suggesting:1 sinusoidal:3 baird:1 oregon:2 idealized:3 wave:1 parallel:2 formed:1 identification:1 vincent:1 produced:2 ionic:1 explain:1 strongest:1 synapsis:3 synaptic:4 ed:2 sensorimotor:9 disynaptic:2 frequency:3 di:1 static:1 radian:2 gain:1 knowledge:2 electrophysiological:3 amplitude:1 rim:1 rif:1 higher:1 dt:14 response:15 just:2 touch:1 overlapping:1 logistic:1 aj:4 indicated:1 usa:1 effect:14 hypothesized:3 aib:3 unbiased:1 true:1 chemical:6 assay:2 illustrated:1 white:5 sin:3 lumped:1 self:24 during:3 assistance:1 essence:1 noted:4 davis:2 prominent:1 complete:1 thomson:3 passive:3 steadystate:1 instantaneous:2 common:6 functional:1 stimulation:1 microscopical:1 regulates:1 salt:1 linking:1 approximates:1 functionally:2 significant:2 stretton:2 ai:8 flp:1 had:5 dj:1 inhibition:1 showed:3 termed:1 wji:4 muscle:2 seen:2 dai:3 additional:1 converge:1 period:1 dashed:1 ii:5 multiple:1 x10:2 technical:1 long:2 ase:7 controlled:1 represent:2 cell:1 whereas:5 addressed:1 decreased:1 annealing:3 diagram:1 goodman:2 saad:1 recording:3 hz:4 elegans:35 member:1 presence:1 relaxes:2 affect:1 identified:7 shawn:2 whether:2 munro:1 resistance:3 
differentiator:5 latency:10 reduced:4 nsf:1 inhibitory:20 sign:4 neuroscience:2 anatomical:2 affected:1 key:1 four:2 threshold:2 southgate:2 imaging:2 graph:1 relaxation:1 ram:1 run:5 inverse:3 patch:1 acceptable:1 correspondence:2 activity:1 strength:17 nathan:1 fourier:1 span:1 lond:1 pruned:2 optical:1 according:1 chemosensory:17 membrane:4 postsynaptic:2 presently:1 anatomically:1 restricted:1 equation:9 previously:2 remains:2 kerr:1 generalizes:1 wii:6 junction:1 eight:2 regulate:7 generic:1 appropriate:1 aia:2 embodies:1 graded:3 comparatively:1 g0:2 added:1 question:1 looked:1 concentration:5 primary:1 unclear:1 gradient:4 conceivable:1 reversed:2 link:1 mapped:2 simulated:2 distance:1 sci:1 whom:1 cellular:3 toward:1 reason:1 modeled:1 relationship:5 regulation:2 trace:2 rise:1 rutherford:1 calcium:1 upper:1 neuron:72 observation:1 thermal:1 truncated:1 extended:1 tonic:2 dc:10 arbitrary:1 inferred:1 ordinate:1 introduced:1 pair:1 required:1 connection:51 optimized:3 bout:1 trans:1 dynamical:1 pattern:6 including:1 explanation:1 turning:22 representing:1 inversely:2 axis:1 carried:1 lox:1 eugene:2 understanding:1 prior:1 acknowledgement:1 relative:2 morse:1 fully:2 proportional:2 analogy:1 sufficient:3 consistent:1 course:7 excitatory:3 supported:1 interposed:2 bias:3 institute:1 fetz:1 taking:2 feedback:9 computes:1 sensory:13 forward:1 made:2 pruning:3 approximate:1 confirm:1 rib:1 active:2 neurology:1 don:1 search:3 triplet:1 pirouette:5 stimulated:1 channel:1 substituted:1 quanti:2 main:4 neurosci:4 arrow:1 whole:1 motorneurons:2 allowed:1 repeated:1 neuronal:7 fashion:1 slow:1 experienced:1 explicit:1 candidate:2 rent:1 third:2 down:1 offset:2 physiological:1 effectively:1 magnitude:2 pierce:1 repellent:1 clamping:1 interneuron:26 gap:1 tsien:1 appearance:1 absorbed:1 unexpected:2 contained:1 determines:1 goal:1 brenner:3 change:6 included:1 determined:1 typical:2 called:1 specie:2 worm:1 isopotential:1 indicating:1 support:1 modulated:2 accelerated:1 dept:1 tested:1 biol:1 |
1,506 | 237 | A Reconfigurable Analog VLSI Neural Network
Chip
Srinagesh Satyanarayana and Yannis Tsividis
Department of Electrical Engineering
and
Center for Telecommunications Research
Columbia University, New York, NY 10027, USA
Hans Peter Graf
AT&T
Bell Laboratories
Holmdel, NJ 07733
USA
ABSTRACT
1024 distributed-neuron synapses have been integrated in an active
area of 6.1 mm × 3.3 mm using a 0.9 µm, double-metal, single-poly, n-well CMOS technology. The distributed-neuron synapses are arranged in blocks of 16, which we call '4 × 4 tiles'. Switch matrices
are interleaved between each of these tiles to provide programmability of interconnections. With a small area overhead (15 %), the
1024 units of the network can be rearranged in various configurations. Some of the possible configurations are a 12-32-12 network, a 16-12-12-16 network, two 12-32 networks, etc. (the numbers separated by dashes indicate the number of units per layer, including the input layer). Weights are stored in analog form on MOS capacitors. The synaptic weights are usable to a resolution of 1% of
their full-scale value. The limitation arises due to charge injection
from the access switch and charge leakage. Other parameters like
gain and shape of nonlinearity are also programmable.
Introduction
A wide variety of problems can be solved by using the neural network framework [1]. However, each of these problems requires a different topology and weight set.
At a much lower system level, the performance of the network can be improved
by selecting suitable neuron gains and saturation levels. Hardware realizations of
[Figure 1 diagram: three example network topologies with different numbers of inputs and hidden neurons.]
Figure 1: Reconfigurability
neural networks provide a fast means of solving the problem. We have chosen
analog circuits to implement neural networks because they provide high synapse
density and high computational speed. In order to provide a general purpose hardware for solving a wide variety of problems that can be mapped into the neural
network framework, it is necessary to make the topology, weights and other neurosynaptic parameters programmable. Weight programmability has been extensively
dealt in several implementations [2 - 9]. However features like programmable topology, neuron gains and saturation levels have not been addressed extensively. We
have designed, fabricated and tested an analog VLSI neural network in which the
topology, weights and neuron gains and saturations levels are all programmable.
Since the process of design, fabrication and testing is time-consuming and expensive,
redesigning the hardware for each application is inefficient. Since the field of neural
networks is still in its infancy, new solutions to problems are being searched for
every day. These involve modifying the topology [10] and finding the best weight
set. In such an environment, a computational tool that is fully programmable is
very desirable.
The Concept of Reconfigurability
We define reconfigurability as the ability to alter the topology (the number of layers, number of neurons per layer, interconnections from layer to layer and interconnections within a layer) of the network. The topology of a network does not describe
the value of each synaptic weight. It only specifies the presence or absence of a
synapse between two neurons (however, in the special case of binary weights (0,1),
defining the topology specifies the weight). The ability to alter the synaptic weight
can be defined as weight programmability. Figure 1 illustrates reconfigurability,
whereas Figure 2 shows how the weight value is realized in our implementation.
The Voltage Vw across the capacitor represents the synaptic weight. Altering this
voltage makes weight programmability possible.
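To make the distinction concrete, here is a minimal sketch in Python (illustrative only, not the chip's hardware; all names are assumptions) separating the binary topology, which records only the presence or absence of each synapse, from the analog weight values held as capacitor voltages:

```python
# Sketch: topology (presence/absence) versus weight programmability (value).
import numpy as np

n_pre, n_post = 4, 4
topology = np.zeros((n_pre, n_post), dtype=bool)  # which synapses exist
weights = np.zeros((n_pre, n_post))               # analog values (capacitor voltage Vw)

topology[0, 1] = True   # reconfiguration: create a connection
weights[0, 1] = 0.37    # weight programmability: set its strength

# Only synapses present in the topology contribute to the weighted sum:
x = np.ones(n_pre)
y_in = (weights * topology).T @ x
```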
Why is On-Chip Reconfigurability Important?
Synapses, neurons and interconnections occupy real estate on a chip. Chip sizes
are limited due to various factors like yield and cost. Hence only a limited number
Figure 2: Weight programmability
of synapses can be integrated in a given chip area. Currently the most compact
realizations (considering more than 6 bits of synaptic accuracy) permit us to integrate only a few thousand synapses per cm 2 ? In such a situation every zero-valued
(inactive) synapse represents wasted area, and decreases the computational ability
per unit area of the chip. If a fixed topology network is used for different problems,
it will be underutilized as long as some synapses are set to zero value. On the
other hand, if the network is reconfigurable, the limited resources on-chip can be
reallocated to build networks with different topologies more efficiently. For example,
the network with topology-2 of Figure 1 requires 30 synapses. If the network was
reconfigurable, we could utilize these synapses to build a two-layer network with 15
synapses in the first layer and 15 in the second layer. In a similar fashion we could
also build the network with topology-3 which is a network with localized receptive
fields.
The Distributed-Neuron Concept
In order to provide reconfigurability on-chip, we have developed a new cell called the
distributed-neuron synapse [11]. In addition to making reconfiguration easy, it has
other advantages: it is modular (which makes design easy), provides automatic gain scaling, avoids large current build-up at any point, and makes possible a fault-tolerant system.
Figure 3 shows a lumped neuron with N synaptic inputs. We call it 'lumped'
because the circuit that provides the nonlinear function is lumped into one block.
Figure 3: A lumped neuron with N synaptic inputs
The synapses are assumed to be voltage-to-current (transconductor) cells, and the
neuron is assumed to be a current-to-voltage cell. Summation is achieved through
addition of the synapse output currents in the parallel connection.
Figure 4 shows the equivalent distributed-neuron with N synaptic inputs. It is
called 'distributed' because the circuit that functions as the neuron, is split into 'N'
parts. One of these parts is integrated with each synapse. This new block ( that
contains a a synapse and a fraction of the neuron) is called the 'distributed-neuron
synapse'. Details of the distributed-neuron concept are described in [11]. It has to
be noted that the splitting of the neuron to form the distributed-neuron synapse
is done at the summation point where the computation is linear. Hence the two
realizations of the neuron are computationally equivalent. However, the distributed-neuron implementation offers a number of advantages, as is now explained.
Figure 4: A distributed-neuron with N synaptic inputs
Modularity of the design
As is obvious from Figure 4, the task of building a complete network involves designing one single distributed-neuron synapse module and interconnecting several of
them to form the whole system. Though at a circuit level, a fraction of the neuron
has to be integrated with each synapse, the system level design is simplified due to
the modularity.
Automatic gain normalization
In the distributed-neuron, each unit of the neuron serves as a load to the output
of a synapse. As the number of synapses at the input of a neuron increases, the
number of neuron elements also increases by the same number. The neuron output
is given by:
$$ Y_j \;=\; f\left\{ \frac{1}{N} \sum_{i=1}^{N} W_{ij} X_i \;-\; \theta_j \right\} \qquad (1) $$
where Y_j is the output of the j-th neuron, W_ij is the weight from the i-th synaptic
input X_i, and θ_j is the threshold, implemented by connecting in parallel an appropriate number of distributed-neuron synapses with fixed inputs. Assume for the
Figure 5: Switches used for reconfiguration in the distributed-neuron implementation.
moment that all the inputs Xi are at a maximum possible value. Then it is easily
seen that Yj is independent of N . This is the manifestation of the automatic gain
normalization that is inherent to the idea of distributed-neuron synapses.
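This normalization is easy to verify numerically. The sketch below (assuming ideal transconductor synapses and a saturating nonlinearity f; names are illustrative) shows that with all inputs at their maximum, the output of equation (1) does not change with the fan-in N:

```python
# Sketch: automatic gain normalization of equation (1).
import numpy as np

def neuron_output(w, x, theta=0.0, f=np.tanh):
    return f(np.mean(w * x) - theta)  # f{(1/N) * sum_i W_i X_i - theta}

x_max = 1.0
for N in (4, 16, 64):
    w = np.full(N, 0.5)            # equal weights for the check
    x = np.full(N, x_max)          # all inputs at the maximum value
    print(N, neuron_output(w, x))  # identical output for every N
```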
Ease of reconfiguration
In a distributed-neuron implementation, reconfiguration involves interconnecting a
set of distributed-neuron synapse modules (Figure 5). A neuron of the right size
gets formed when the outputs of the required number of synapses are connected.
In a lumped neuron implementation, reconfiguration involves interconnecting a set
of synapses with a set of neurons. This involves more wiring, switches and logic
control blocks.
Avoiding large current build-up in the neuron
In our implementation the synaptic outputs are currents. These currents are summed
by Kirchhoff's current law and sent to the neuron. Since the neuron is distributed, the
total current is divided into N equal parts, where N is the number of distributed-neuron synapses. One of these parts flows through each unit of the distributed neuron
as illustrated in Figure 4. This obviates the need for large current summation wires
and avoids other problems associated with large currents at any single point.
Fault tolerance
On a VLSI chip defects are commonly seen. Some of these defects can short wires,
hence corrupting the signals that are carried on them. Defects can also render some
synapses and neurons defective. In our implementation, we have integrated switches
in-between groups of distributed-neuron synapses (which we call 'tiles') to make the
chip reconfigurable (Figure 6). This makes each tile of the chip externally testable.
The defective sections of the chip can be isolated and the remaining synapses can
thus be reconfigured into another topology as shown in Figure 6.
Circuit Description of the Distributed-Neuron Synapse
Figure 7 shows a distributed-neuron synapse constructed around a differential-input,
differential-output transconductance multiplier. A weight converter is used to convert
Figure 6: Improved fault tolerance in the distributed-neuron system
Figure 7: The distributed-neuron synapse circuit
the single-ended weight-controlling voltage Vw into a set of differential currents
that serve as the bias currents of the multiplier. The weight is stored on a MOS
capacitor.
The differential nature of the circuit offers several advantages like improved rejection
of power supply noise and linearity of multiplication. Common-mode feedback is
provided at the output of the synapse. An amplitude limiter that is operational
only when the weighted sum exceeds a certain range serves as the distributed-neuron
part. The saturation levels of the neuron can be programmed by adjusting VN1 and
VN2. Gains can be set by adjusting the bias current IB and/or a load (not shown).
The measured synapse characteristics are shown in Figure 8.
Figure 8: Measured characteristics of the distributed-neuron synapse. (Differential output, swinging between -2.3 V and 2.3 V, versus differential input, -40 mV to 40 mV; individual curves are for different weight values: wt = 1 [-1.0 FS], 117 [-0.1 FS], 126 [-0.01 FS], 137 [0.1 FS], 255 [1.0 FS]; FS = full scale.)
Figure 9: Organization of the distributed-neurons and switches on chip. (Symbolic diagram and actual on-chip wiring of a 4 x 4 tile; the 1024 synapses are arranged in groups of 4 x 4, with switch matrices interleaved between tiles.)
Organization of the Chip
Figure 9 shows how the distributed-neuron synapses are arranged on-chip. 16
distributed-neuron synapses have been arranged in a 4 x 4 crossbar fashion to
form a 4-input-4-output network. We call this a '4 x 4 tile'. Input and output
wires are available on all four sides of the tile. This makes interconnections to adjacent blocks easy. Vertical and horizontal switch matrices are interleaved in-between
the tiles to select one of the various possible modes of interconnections. These
modes can be configured by setting the 4 bits of memory in each switch matrix.
1024 distributed-neuron synapses have been integrated in an active area of 6.1mm
x 3.3 mm using a 0.9 µm, double-metal, single-poly, n-well CMOS technology.
The Weight Update/Refresh Scheme
Weights are stored in analog form on a MOS capacitor. A semi-serial-parallel weight
update scheme has been built. 8 pins of the chip are used to distribute the weights
to the 1024 capacitors on the chip. Each pin can refresh 128 capacitors contained
in a row of tiles. The capacitors in each tile-row are selected one at a time by a
decoder. The maximum refresh speed depends on the time needed to charge up the
weight storage capacitor and the parasitic capacitances. One complete refresh of all
weights on the chip is possible in about 130 µs. However, one could refresh
at a much slower rate, the lower limit of which is decided by the charge leakage. For
a 7-bit precision in the weight at room temperature, a refresh rate in the order of
milliseconds should be adequate. Charge injection due to the parasitic capacitances
has been kept low by using very small switches. In the first version of the chip,
only the distributed-neuron synapses, the switches used for reconfiguration, and
the topology memory have been integrated. Weights are stored outside the chip in
digital form in a 1K x 8 RAM. The contents of the RAM are continuously read and
converted into analog form using a bank of off-chip D/ A converters. An advantage
of our scheme is that the forward-pass operation is not interrupted by the weight
refresh mechanism. A fast weight update scheme of the type used here is very
desirable while executing learning algorithms at a high speed. The complete block
diagram of the weight refresh/update and testing scheme is shown in Figure 10.
Configuration Examples
In Figure 11 we show some of the network topologies that can be configured with
the resources available on the chip. The left-hand side of the figure shows the
actual wiring on the chip and the right-hand side shows the symbolic diagram of
the network configuration. The darkened tiles have been used for implementing
the thresholds. Several other topologies like feedback networks and networks with
localized receptive fields can be configured with this chip.
The complete system
Figure 10 shows how the neural network chip fits into a complete system that is
necessary for its use and testing. The 'Config-EPROM' stores the bit pattern corresponding
Figure 10: Block diagram of the system for reconfiguration, weight update/refresh and testing. (Blocks: single-ended to differential converter, weight RAM, neural network, config EPROM.)
to the desired topology. This bit pattern is downloaded into the memory
cells of the switch matrices before the start of computation. Input vectors are read
out from the 'Data memory' and converted into analog form by D/A converters.
The outputs of the D/A converters are further transformed into differential signals and then fed into the chip. The chip delivers differential outputs which are
converted into digital form using an A/D converter and stored in a computer for
further analysis.
The delay in processing one layer with N inputs driving another layer with an
equal number of inputs is typically 1 µs. Hence a 12-32-12 network should take
about 6 µs for one forward-pass operation. However, external loads can slow down
the computation considerably. This problem can be solved by increasing the bias
currents and/or using pad buffers. Each block on the chip has been tested and has
been found to function as expected. Tests of the complete chip in a variety of neural
network configurations are being planned.
Conclusions
We have designed a reconfigurable array of 1024 distributed-neuron synapses that
can be configured into several different types of neural networks. The distributedneuron concept that is integral to this chip offers advantages in terms of modularity
and automatic gain normalization. The chip can be cascaded with several other
chips of the same type to build larger systems.
References
[1] Richard Lippmann. Pattern classification using neural networks. IEEE Communications Magazine, 27(11):47-64, November 1989.
Figure 11: Reconfiguring the network to produce two different topologies. (On-chip wiring built from 4 x 4 tiles, with the corresponding symbolic diagrams: a 12-input/12-output network and a 16-input/16-output network; input and output wires indicated.)
[2] Y. Tsividis and S. Satyanarayana. Analogue circuits for variable-synapse electronic neural networks. Electronics Letters, 23(24):1313-1314, November 1987.
[3] Y. Tsividis and D. Anastassiou. Switched-capacitor neural networks. Electronics Letters, 23(18):958-959, August 1987.
[4] Paul Mueller et al. A Programmable Analog Neural Computer and Simulator,
volume 1 of Advances in Neural Information Processing Systems, pages 712-719. Morgan Kaufmann Publishers, 1989.
[5] D. B. Schwartz, R. E. Howard, and W. E. Hubbard. Adaptive Neural Networks
Using MOS Charge Storage, volume 1 of Advances in Neural Information Processing systems, pages 761-768. Morgan Kaufmann Publishers, 1989.
[6] J. R. Mann and S. Gilbert. An Analog Self-Organizing Neural Network Chip,
volume 1 of Advances in Neural Information Processing systems, pages 739747. Morgan Kaufmann Publishers, 1989.
[7] Mark Holler, Simon Tam, Hernan Castro, and Ronald Benson. An electrically
trainable artificial neural network etann with 10240 'floating gate' synapses. In
IJCNN International Joint Conference on Neural Networks, volume 2, pages
191-196. International Neural Network Society (INNS) and Institute of Electrical and Electronics Engineers (IEEE), 1989.
[8] S. Eberhardt, T. Duong, and A. Thakoor. Design of parallel hardware neural
network systems from custom analog vlsi 'building block' chips. In IJCNN
International Joint Conference on Neural Networks, volume 2, pages 191-196.
International Neural Network Society (INNS) and Institute of Electrical and
Electronics Engineers (IEEE), 1989.
[9] F. J. Kub, I. A. Mack, K. K. Moon, C. Yao, and J. Modola. Programmable
analog synapses for microelectronic neural networks using a hybrid digitalanalog approach. In IEEE International Conference on Neural Networks, San
Diego, 1988.
[10] Y. Le Cun et al. Handwritten digit recognition: Application of neural network
chips and automatic learning. IEEE Communications Magazine, 27(11):41-46,
November 1989.
[11] S. Satyanarayana, Y. Tsividis, and H. P. Graf. Analogue neural networks with
distributed neurons. Electronics Letters, 25(5) :302-304, March 1989.
1,507 | 2,370 | Automatic Annotation of Everyday Movements
Deva Ramanan and D. A. Forsyth
Computer Science Division
University of California, Berkeley
Berkeley, CA 94720
[email protected], [email protected]
Abstract
This paper describes a system that can annotate a video sequence with:
a description of the appearance of each actor; when the actor is in view;
and a representation of the actor?s activity while in view. The system does
not require a fixed background, and is automatic. The system works by
(1) tracking people in 2D and then, using an annotated motion capture
dataset, (2) synthesizing an annotated 3D motion sequence matching the
2D tracks. The 3D motion capture data is manually annotated off-line
using a class structure that describes everyday motions and allows motion annotations to be composed: one may jump while running, for
example. Descriptions computed from video of real motions show that
the method is accurate.
1. Introduction
It would be useful to have a system that could take large volumes of video data of people
engaged in everyday activities and produce annotations of that data with statements about
the activities of the actors. Applications demand that an annotation system: is wholly automatic; can operate largely independent of assumptions about the background or the number
of actors; can describe a wide range of everyday movements; does not fail catastrophically
when it encounters an unfamiliar motion; and allows easy revision of the motion descriptions that it uses. We describe a system that largely has these properties. We track multiple
figures in video data automatically. We then synthesize 3D motion sequences matching
our 2D tracks using a collection of annotated motion capture data, and then apply the annotations of the synthesized sequence to the video.
Previous work is extensive, as classifying human motions from some input is a matter of
obvious importance. Space does not allow a full review of the literature; see [1, 5, 4, 9, 13].
Because people do not change in appearance from frame to frame, a practical strategy is to
cluster an appearance model for each possible person over the sequence, and then use these
models to drive detection. This yields a tracker that is capable of meeting all our criteria,
described in greater detail in [14]; we used the tracker of that paper. Leventon and Freeman
show that tracks can be significantly improved by comparison with human motion [12].
Describing motion is subtle, because we require a set of categories into which the motion
can be classified; except in the case of specific activities, there is no known natural set of
categories. Special cases include ballet and aerobic moves, which have a clearly established
categorical structure [5, 6]. In our opinion, it is difficult to establish a canonical set of
human motion categories, and more practical to produce a system that allows easy revision
of the categories (section 2).
Figure 1 shows an overview of our approach to activity recognition. We use 3 core components: annotation, tracking, and motion synthesis. Initially, a user labels a collection of 3D
motion capture frames with annotations (section 2). Given a new video sequence to annotate, we use a kinematic tracker to obtain 2D tracks of each figure in sequence (section 3).
Figure 1: Our annotation system consists of 3 main components: annotation, tracking,
and motion synthesis (the shaded nodes). A user initially labels a collection of 3D motion capture frames with annotations. Given a new video sequence to annotate, we use a
kinematic tracker to obtain 2D tracks of each figure in sequence. We then synthesize 3D
motion sequences which look like the 2D tracks by lifting tracks to 3D and matching them
to our annotated motion capture library. We accept the annotations associated with the
synthesized 3D motion sequence as annotations for the underlying video sequence.
We then synthesize 3D motion sequences which look like the 2D tracks by lifting tracks
to 3D and matching them to our annotated motion capture library (section 4). We finally
smooth the annotations associated with the synthesized 3D motion sequence (section 5),
accepting them as annotations for the underlying video sequence.
2. Obtaining Annotated Data
We have annotated a body of motion data with an annotation system, described in detail
in [3]; we repeat some information here for the convenience of the reader.
There is no reason to believe that a canonical annotation vocabulary is available for everyday motion, meaning that the system of annotation should be flexible. Annotations should
allow for composition as one can wave while walking, for example. We achieve this by
representing each separate term in the vocabulary as a bit in a bit string. Our annotation
system attaches a bit string to each frame of motion. Each bit in the string represents annotation with a particular element of the vocabulary, meaning that elements of the vocabulary
can be composed arbitrarily.
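A minimal sketch of this representation (the Python IntFlag type and member names are illustrative assumptions; the paper's scheme is simply one bit per vocabulary term):

```python
# Sketch: annotations as composable bit flags, one bit per vocabulary term.
from enum import IntFlag, auto

class Annot(IntFlag):
    RUN = auto(); WALK = auto(); WAVE = auto(); JUMP = auto()
    TURN_LEFT = auto(); TURN_RIGHT = auto(); CATCH = auto(); REACH = auto()
    CARRY = auto(); BACKWARDS = auto(); CROUCH = auto(); STAND = auto()
    PICK_UP = auto()

# Annotations compose arbitrarily, e.g. catch while jumping and running:
frame_label = Annot.RUN | Annot.JUMP | Annot.CATCH
assert Annot.RUN in frame_label and Annot.WALK not in frame_label
```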
Actual annotation is simplified by using an approach where the user bootstraps a classifier.
One SVM classifier is learned for each element of the vocabulary. The user annotates a
series of example frames by hand by selecting a sequence from the motion collection; a
classifier is then learned from these examples, and the user reviews the resulting annotations. If they are not acceptable, the user revises the annotations at will, and then re-learns
a classifier. Each classifier is learned independently. The classifier itself uses a radial basis
function kernel, and uses the joint positions for one second of motion centered at the frame
being classified as a feature vector. Since the motion is sampled in time, each joint has
a discrete 3D trajectory in space for the second of motion centered at the frame. In our
implementation, we used a public domain SVM library (libsvm [7]). The out of margin
cost for the SVM is kept high to force a good fit within the capabilities of the basis function
approximation.
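A sketch of the per-term training loop, using scikit-learn's SVC as a stand-in for the libsvm interface the authors used (the C value and all names here are illustrative; only the RBF kernel and the high out-of-margin cost follow the text):

```python
# Sketch: one independently trained SVM per vocabulary term.
import numpy as np
from sklearn.svm import SVC

def fit_term_classifiers(X, labels, terms):
    """X: (n_frames, d) features, one second of joint trajectories per frame.
    labels: (n_frames, n_terms) 0/1 user annotations."""
    classifiers = {}
    for j, term in enumerate(terms):
        clf = SVC(kernel="rbf", C=1e3)  # high out-of-margin cost: tight fit
        clf.fit(X, labels[:, j])
        classifiers[term] = clf
    return classifiers
```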
Our reference collection consists of a total of 7 minutes of motion capture data.
The vocabulary that we chose to annotate this database consisted of: run, walk, wave, jump, turn left, turn right, catch,
reach, carry, backwards, crouch, stand, and pick up. Some of these
annotations co-occur: turn left while walking, or catch while jumping and
running. Our approach admits any combination of annotations, though some combinations may not be used in practice: for example, we can?t conceive of a motion that should
be annotated with both stand and run. A different choice of vocabulary would be appropriate for different collections. The annotations are not required to be canonical. We have
verified that a consistent set of annotations to describe a motion set can be picked by asking
people outside our research group to annotate the same database and comparing annotation
results.
3. Kinematic Tracking
We use the tracker of [14], which is described in greater detail in that paper. We repeat
some information here for the convenience of the reader. The tracker works by building
an appearance model of putative actors, detecting instances of that model, and linking the
instances across time.
The appearance model approximates a view of the body as a puppet built of colored,
textured rectangles. The model is built by applying detuned body segment detectors to
some or all frames in a sequence. These detectors respond to roughly parallel contrast
energies at a set of fixed scales (one for the torso and one for other segments). A detector
response at a given position and orientation suggests that there may be a rectangle there.
For the frames that are used to build the model, we cluster together segments that are
sufficiently close in appearance (as encoded by a patch of pixels within the segment)
and appear in multiple frames without violating upper bounds on velocity. Clusters that
contain segments that do not move at any point of the sequence are then rejected. The next
step is to build assemblies of segments that lie together like a body puppet. The torso is
used as a root, because our torso detector is quite reliable. One then looks for segments that
lie close to the torso in multiple frames to form arm and leg segments. This procedure does
not require a reliable initial segment detector, because we are using many frames to build a
model ? if a segment is missed in a few frames, it can be found in others. We are currently
assuming that each individual is differently dressed, so that the number of individuals is
the number of distinct appearance models. Detecting the learned appearance model in the
sequence of frames is straightforward [8].
4. 3D Motion Synthesis
Once the 2D configuration of actors has been identified, we need to synthesize a sequence
of 3D configurations matching the 2D reports. Maintaining a degree of smoothness (i.e.,
ensuring that a 3D representation is not only a good match to the 2D configuration but also
links well to the previous and future 3D representations) is needed because the image
detection is not perfect. We assume that camera motion can be recovered from a video
sequence, and so we need only recover the pose of the root of the body model (in our
case, the torso) with respect to the camera.
Representing Body Configuration: We assume the camera is orthographic and is oriented
with the y axis perpendicular to the ground plane, by far the most important case. From
the puppet we can compute 2D positions for various key points on the body (we use the
left-right shoulder, elbow, wrist, knee, ankle and the upper & lower torso). We represent
the 2D key points with respect to a 2D torso coordinate frame. We analogously convert the
motion capture data to 3D key points represented with respect to the 3D torso coordinate
frame.
We assume that all people are within an isotropic scaling of one another. This means that
the scaling of the body can be folded in with the camera scale, and the overall scale is
be estimated using corresponding limb lengths in lateral views (which can be identified
because they maximize the limb lengths). This strategy would probably lead to difficulties
if, for example, the motion capture data came from an individual with a short torso and long
arms; the tendency of ratios of body segment lengths to vary from individual to individual
and with age is a known, but not well understood, source of trouble in studies of human
motion [10].
Our motion capture database is too large for us to use every frame in the matching process.
Furthermore, many motion fragments are similar (there is an awful lot of running),
so we vector quantize the 11,000 frames down to k = 300 frames by clustering with
k-means and retaining only the cluster medoids. Our distance metric is a weighted sum
of differences between 3D key point positions, velocities, and accelerations ([2] found
this metric sufficient to ensure smooth motion synthesis). The motion capture data are represented at the same frame rate as the video, to ensure consistent velocity estimates.
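The quantization step might look like the sketch below (the velocity and acceleration weights lam_v and lam_a are assumptions, as the text does not give their values; scikit-learn's KMeans stands in for whatever implementation was used):

```python
# Sketch: vector quantizing motion frames under the weighted metric.
import numpy as np
from sklearn.cluster import KMeans

def quantize_motion(pos, lam_v=1.0, lam_a=1.0, k=300):
    """pos: (n_frames, d) stacked 3D key point positions per frame."""
    vel = np.gradient(pos, axis=0)
    acc = np.gradient(vel, axis=0)
    feats = np.hstack([pos, lam_v * vel, lam_a * acc])  # weighted-sum metric
    km = KMeans(n_clusters=k).fit(feats)
    # retain the medoid (closest actual frame) of each cluster, not the mean
    medoids = np.array([np.argmin(((feats - c) ** 2).sum(1))
                        for c in km.cluster_centers_])
    return medoids, km.labels_
```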
Figure 2: In (a), the variables under discussion in camera inference. M is a representation
of figure in 3D with respect to its root coordinate frame, m is the partially observed vector
of 2D key points, t is the known camera position and T is the position of the root of the 3D
figure. In (b) a camera model for frame i where 2D keypoints are dependent on the camera
position, 3D figure configuration, and the root of the 3D figure. A simplified undirected
model in (c) is obtained by marginalizing out the observed variables yielding a single
potential on M i and T i . In (d), the factorial hidden Markov model obtained by extending
the undirected model across time. As we show in the text, it is unwise to yield to the
temptation to cut links between T ?s (or M ?s) to obtain a simplified model. However, our
FHMM is tractable, and yields the triangulated model in (e).
Modeling Root Configuration: Figure 2 illustrates our variables. For a given frame, we
have unknowns M , a vector of 3D key points and T , the 3D global root position. Known
are m, the (partially) observed vector of 2D key points, and t, the known camera position.
In practice, we do not need to model the translations for the 3D root (which is the torso); our
tracker reports the (x, y) image position for the torso, and we simply accept these reports.
This means that T reduces to a single scalar representing the orientation of the torso along
the ground plane. The relative out of image plane movement of the torso (in the z direction)
can be recovered from the final inferred M and T values by integration: one sums the
out of plane velocities of the rotated motion capture frames.
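A sketch of that integration (the body-frame velocity layout and the rotation about the vertical axis are assumed conventions; the idea is just a cumulative sum of rotated out-of-plane root velocities):

```python
# Sketch: recovering out-of-plane torso translation by integrating velocities.
import numpy as np

def recover_depth(root_vel_body, torso_angles):
    """root_vel_body: (N, 3) root velocities of the matched motion capture
    frames, in the body frame; torso_angles: (N,) inferred torso orientations."""
    z, zs = 0.0, []
    for v, th in zip(root_vel_body, torso_angles):
        # rotate about the vertical (y) axis; keep the out-of-plane component
        v_world_z = -np.sin(th) * v[0] + np.cos(th) * v[2]
        z += v_world_z
        zs.append(z)
    return np.array(zs)
```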
Figure 2 shows the directed graphical model linking these variables for a single frame.
This model can be converted to an undirected model (also shown in the figure) where
the observed 2D key points specify a potential between Mi and Ti. Write the potential
for the i-th frame as ψ_view_i(Mi, Ti). We wish to minimize image error, so it is natural
to use backprojection error for the potential. This means that ψ_view_i(Mi, Ti) is the mean
squared error between the visible 2D key points mi and the corresponding 3D keypoints
Mi rendered at orientation Ti. To handle left-right ambiguities, we take the minimum error
over all left-right assignments. To incorporate higher-order dynamic information such as
velocities and accelerations, we add keypoints from the two preceding and two following
frames when computing the mean squared error.
We quantize the torso orientation Ti into a total of c = 20 values. This means that the
potential ψ_view_i(Mi, Ti) is represented by a c × k table (recall that k is the total number
of motion capture medoids used, section 4).
We must also define a potential linking body configurations in time, representing the continuity cost of placing one motion after another. We write this potential as ψ_link(Mi, Mi+1).
This is a k × k table, and we set the (i, j)-th entry of this table to be the distance between
the j-th medoid and the frame following the i-th medoid, using the metric used for vector
quantizing the motion capture dataset (section 4).
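In code, the two tables might be assembled as below; the orthographic projection and squared-distance helpers are simplified stand-ins for the full error terms described above (e.g., the minimum over left-right assignments and the dynamic terms are omitted):

```python
# Sketch: building the psi_view (c x k) and psi_link (k x k) tables.
import numpy as np

def backprojection_error(m_2d, M_3d, theta):
    """MSE between visible 2D key points and 3D key points rendered at
    torso angle theta (orthographic camera; undetected points are NaN)."""
    R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
                  [0.0, 1.0, 0.0]])        # image x, y after y-rotation
    proj = M_3d @ R.T
    vis = ~np.isnan(m_2d).any(axis=1)
    return np.mean((proj[vis] - m_2d[vis]) ** 2)

def build_view_potential(m_2d, medoids_3d, orientations):
    return np.array([[backprojection_error(m_2d, M, th) for M in medoids_3d]
                     for th in orientations])            # shape (c, k)

def build_link_potential(next_feats, medoid_feats):
    """Entry (i, j): distance between the frame following medoid i and
    medoid j, under the vector-quantization metric (squared here)."""
    d = next_feats[:, None, :] - medoid_feats[None, :, :]
    return (d ** 2).sum(-1)                              # shape (k, k)
```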
Inferring Root Configuration: The model of figure 2-(d) is known as a factorial hidden Markov model (FHMM) where observations have been marginalized out and is quite
tractable. Exact inference requires triangulating the graph (figure 2-(e)) to make explicit
additional probabilistic dependencies [11]. The maximum clique size is now 3, making inference O(k^2 cN) (where N is the total number of frames). Furthermore, the triangulation
allows us to explicitly define the potential ψ_torso(Mi, Ti, Ti+1) to capture the dependency
Figure 3: Unfamiliar configurations can either be annotated with ?null? or with the closest
match. We show smoothed annotation results for a sequence of jumping jacks (sometimes
known as star jumps) from two such annotation systems. In the top row, we show the
same two frames run through each system. The MAP reconstruction of the human figure
obtained from the tracking data has been reprojected back to the image, using the MAP
estimate of camera configuration. In the bottom, we show signals representing annotation bits over time. The manual annotator records whether or not the figure is present,
front facing, in a closed stance, and/or in an extended stance. The automatic
annotation consists of a total of 16 bits: present, front facing, plus the 13 bits from
the annotation vocabulary of Sec.2. In the first dotted line, corresponding to the image above
it, the manual annotator asserts the figure is present, frontally facing, and about
to reach the extended stance. The automatic annotator asserts the figure is present,
frontally facing, and walking and waving, and is not standing, not jumping, etc.
The annotations for both systems are reasonable given there are no corresponding categories available (this is like describing a movement that is totally unfamiliar). On the left,
we freely allow 'null' annotations (where no annotation bit is set). On the right, we discourage 'null' annotations as described in Sec.6. Configurations near the closed stance
are now labeled as standing, a reasonable approximation.
of torso angular velocity on the given motion. For example, we expect the torso angular
velocity of a turning motion frame to be different from a walking forward frame. We set
a given entry of this table to be the squared error between the sampled angular velocity
(Ti+1 − Ti, shifted to lie between −π and π) and the actual torso angular velocity of the
medoid Mi.
We scale the ψ_view_i(Mi, Ti), ψ_link(Mi, Mi+1), and ψ_torso(Mi, Ti, Ti+1) potentials by
empirically determined values to yield satisfactory results. These scale factors weight
the degree to which the final 3D track should be continuous versus the degree to which
it should match the 2D data. In principle, these weights could be set optimally by a detailed study of the properties of our tracker, but we have found it simpler to set them by
experiment.
We find the maximum a posteriori (MAP) estimate of Mi and Ti by a variant of dynamic
programming defined for clique trees [11]. Since we implicitly used negative log likelihoods to define the potentials (the squared error terms), we used the min-sum variant of the
max-product algorithm.
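For clarity, the sketch below performs the same min-sum computation over the equivalent chain of joint states s_i = (M_i, T_i); this costs O(N k^2 c^2) rather than the O(k^2 cN) of the clique-tree version, but finds the same MAP. The three potential tables are assumed to hold negative log potentials with shapes (N, c, k), (k, k) and (k, c, c):

```python
# Sketch: MAP by min-sum dynamic programming over joint (M, T) states.
import numpy as np

def map_min_sum(psi_view, psi_link, psi_torso):
    N, c, k = psi_view.shape
    unary = psi_view.transpose(0, 2, 1).reshape(N, k * c)    # state s = M*c + T
    pair = (psi_link[:, None, :, None] +                     # psi_link[M, M']
            psi_torso[:, :, None, :]).reshape(k * c, k * c)  # psi_torso[M, T, T']
    cost = unary[0].copy()
    back = np.zeros((N, k * c), dtype=int)
    for t in range(1, N):
        total = cost[:, None] + pair
        back[t] = total.argmin(axis=0)
        cost = total.min(axis=0) + unary[t]
    states = [int(cost.argmin())]
    for t in range(N - 1, 0, -1):
        states.append(int(back[t, states[-1]]))
    return [divmod(s, c) for s in reversed(states)]  # (M, T) index per frame
```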
Possible Variants: One might choose to not enforce consistency in the root orientation T i
between frames. By breaking the links between the Ti variables in figure 2-(a), we could
Figure 4: We show annotation results for a walking sequence from three versions of our
system using the notation of Fig.3. Null matches are allowed. On the left, we infer the 3D
configuration Mi (and associated annotation) independently for each frame, as discussed
in Sec.4. In the center, we model temporal dependencies when inferring Mi and its corresponding annotation. On the right, we smooth the annotations, as discussed in Sec.5. Each
image is labeled with an arrow pointing in the direction the inferred figure is facing, not
moving. By modeling camera dependencies, we are able to fix incorrect torso orientations
present in the left system (i.e., the first image frame and the automatic left facing and
right facing annotation bits). By smoothing the annotations, we eliminate spurious
stand's present in the center. Although the smoothing system correctly annotates the
last image frame with backward, the occluded arm incorrectly triggers a wave, by the
mechanism described in Sec.5.
reduce our model to a tree and make inference even simpler: we now have an HMM.
However, this is simplicity at the cost of wasting an important constraint: the camera
does not flip around the body from frame to frame. This constraint is useful, because
our current image representation provides very little information about the direction of
movement in some cases. In particular, in a lateral view of a figure in the stance phase of
walking it is very difficult to tell which way the actor is facing without reference to other
frames, where it may not be ambiguous. We have found that if one does break these
links, the reconstruction regularly flips direction around such frames.
5. Reporting Annotations
We now have MAP estimates of the 3D configuration {M̂i} and orientation {T̂i} of the
body for each frame. The simplest method for reporting annotations is to produce an annotation that is some function of {M̂i}. Recall that {M̂i} is one of the medoids produced by
our clustering process (section 4). It represents a cluster of frames, all of which are similar.
We could now report either the annotation of the medoid, the annotation that appears most
frequently in the cluster, the annotation of the cluster element that matches the image best,
or the frequency of annotations across the cluster.
The fourth alternative produces results that may be useful for some kinds of decision-making, but are very difficult to interpret directly (each frame generates a posterior probability over the annotation vocabulary), and we do not discuss it further here. Each of the
first three tends to produce choppy annotation streams (figure 4, center). This is because
we have vector quantized the motion capture frames, meaning that ψ_link(Mi, Mi+1) is a
Figure 5: Smoothed annotations of 3 figures from a video sequence of the three passing a
ball back and forth using the conventions of figure 3. Null matches are allowed. The dashed
vertical lines indicate annotations corresponding to the frames shown. The automatic annotations are largely accurate: the figures are correctly identified, and the direction in
which the figures are facing are largely correct. There is some confusion between run
and walk, and throws appear to be identified as waves and reaches. Generally, when
the figure has the ball (after catching and before throwing, as denoted in the manual
annotations), he is annotated as carrying, though there is some false detection. There
are no spurious crouches, turns, etc.
fairly rough approximation of a smoothness constraint (because some frames in one cluster
might link well to some frames in another and badly to others in that same cluster). An
alternative is to smooth the annotation stream.
Smoothing Annotations: Recall that we have 13 terms in our annotation vocabulary, each
of which can be on or off for any given frame. Of the 2^13 possible bit strings, we observe
a total of 32 in our set of motions. Clearly, we cannot smooth annotation bits directly,
because we might very likely create bit strings that never occur. Instead, we regard each
observed annotation string as a codeword.
We can model the temporal dynamics of codewords and their quantized observations using
a standard HMM. The hidden state is the code word, taking on one of l (= 32) values,
while the observed state is the cluster, taking on one of k (= 300) values. This model is
defined by an l × l matrix representing codeword dynamics and an l × k matrix representing
the quantized observation. Note that this model is fully observed in the 11,000 frames of
the motion database; we know the true code word for each motion frame and the cluster
to which the frame belongs. Hence we can learn both matrices through straightforward
multinomial estimation. We now apply this model to the MAP estimate of {M̂i}, inferring
a sequence of annotation codewords (which we can later expand back into annotation bit
vectors).
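A sketch of the estimation and decoding steps (the small additive smoothing of counts is an assumption not mentioned in the text; otherwise the counting and the Viterbi decode follow the description above):

```python
# Sketch: fully observed HMM estimation by counting, then Viterbi decoding.
import numpy as np

def fit_hmm(codewords, clusters, l=32, k=300, eps=1e-3):
    A = np.full((l, l), eps)  # codeword -> codeword transition counts
    B = np.full((l, k), eps)  # codeword -> cluster observation counts
    for t in range(len(codewords) - 1):
        A[codewords[t], codewords[t + 1]] += 1
    for w, z in zip(codewords, clusters):
        B[w, z] += 1
    return (np.log(A / A.sum(1, keepdims=True)),
            np.log(B / B.sum(1, keepdims=True)))

def viterbi(logA, logB, obs):
    score = logB[:, obs[0]].copy()
    back = []
    for z in obs[1:]:
        total = score[:, None] + logA
        back.append(total.argmax(axis=0))
        score = total.max(axis=0) + logB[:, z]
    path = [int(score.argmax())]
    for bp in reversed(back):
        path.append(int(bp[path[-1]]))
    return path[::-1]  # MAP codeword per frame; expand back to annotation bits
```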
Occlusion: When a limb is not detected by the tracker, the configuration of that limb is not
scored in evaluating the potential. In turn, this means that the best configuration consistent
with all else detected is used, in this case with the figure waving (figure 4). In an ideal
closed world, we can assume the limb is missing because it's not there; in practice, it may
be due to a detector failure. This makes employing 'negative evidence' difficult.
6. Experimental Results
It is difficult to evaluate results simply by recording detection information (say an ROC
for events). Furthermore, there is no meaningful standard against which one can compare.
Instead, we lay out a comparison between human and automatic annotations, as in Fig.3,
which shows annotation results for a 91 frame jumping jack (or star jump) sequence. The
top 4 lower case annotations are hand-labeled over the entire 91 frame sequence. Generally,
automatic annotation is successful: the figure is detected correctly, oriented correctly (this
is recovered from the torso orientation estimates Ti ), and the description of the figure?s
activities is largely correct.
Fig.4 compares three versions of our system on a 288 frame sequence of a figure walking
back and forth. Comparing the annotations on the left (where configurations have been
inferred without temporal dependency) with the center (with temporary dependency), we
see temporal dependency in inferred configurations is important, because otherwise the
figure can change direction quickly, particularly during lateral views of the stance phase
of a walk (section 4). Comparing the center annotations with those on the right (smoothed
with our HMM) shows that annotation smoothing makes it possible to remove spurious
jump, reach, and stand labels: the label dynamics are wrong.
We show smoothed annotations for three figures from one sequence passing a ball back
and forth in Fig.5; the sequence contains a lot of fast movement. Each actor is correctly
detected, and the system produces largely correct descriptions of the actor?s orientation and
actions. The inference procedure interprets a run as a combination of run and walk. Quite
often, the walk annotation will fire as the figure slows down to turn from face right
to face left or vice versa. When the figures use their arms to catch or throw, we see
increased activity for the similar annotations of catch, wave, and reach.
When a novel motion is encountered, we want the system to either respond by (1) recognizing it cannot annotate this sequence, or (2) annotate it with the best match possible.
We can implement (2) by adjusting the parameters for our smoothing HMM so that the
'null' codeword (all annotation bits being off) is unlikely. In Fig.3, system (1) responds to
a jumping jack sequence (star jump, in some circles) with a combination of walking and
jumping while waving. In system (2), we see an additional standing annotation for
when the figure is near the closed stance.
References
[1] J. K. Aggarwal and Q. Cai. Human motion analysis: A review. Computer Vision and Image
Understanding: CVIU, 73(3):428-440, 1999.
[2] O. Arikan and D. Forsyth. Interactive motion generation from examples. In Proc. ACM SIGGRAPH, 2002.
[3] O. Arikan, D. Forsyth, and J. O?Brien. Motion synthesis from annotations. In Proc. ACM
SIGGRAPH, 2003.
[4] A. Bobick. Movement, activity, and action: The role of knowledge in the perception of motion.
Philosophical Transactions of the Royal Society of London, B-352:1257-1265, 1997.
[5] A. F. Bobick and J. Davis. The recognition of human movement using temporal templates.
IEEE T. Pattern Analysis and Machine Intelligence, 23(3):257-267, 2001.
[6] L. W. Campbell and A. F. Bobick. Recognition of human body motion using phase space
constraints. In ICCV, pages 624-630, 1995.
[7] C. C. Chang and C. J. Lin. Libsvm: Introduction and benchmarks. Technical report, Department
of Computer Science and Information Engineering, National Taiwan University, 2000.
[8] P. Felzenschwalb and D. Huttenlocher. Efficient matching of pictorial structures. In Proc CVPR,
2000.
[9] D. M. Gavrila. The visual analysis of human movement: A survey. Computer Vision and Image
Understanding: CVIU, 73(1):82-98, 1999.
[10] J. K. Hodgins and N. S. Pollard. Adapting simulated behaviors for new characters. In SIGGRAPH 97, 1997.
[11] M. I. Jordan, editor. Learning in Graphical Models. MIT Press, Cambridge, MA, 1999.
[12] M. Leventon and W. Freeman. Bayesian estimation of 3D human motion from an image sequence. Technical Report TR-98-06, MERL, 1998.
[13] D. Ramanan and D. A. Forsyth. Automatic annotation of everyday movements. Technical
report, UCB//CSD-03-1262, UC Berkeley, CA, 2003.
[14] D. Ramanan and D. A. Forsyth. Finding and tracking people from the bottom up. In Proc
CVPR, 2003.
1,508 | 2,371 | Statistical Debugging of Sampled Programs
Alice X. Zheng
EE Division
UC Berkeley
[email protected]
Michael I. Jordan
CS Division and Department of Statistics
UC Berkeley
[email protected]
Ben Liblit
CS Division
UC Berkeley
[email protected]
Alex Aiken
CS Division
UC Berkeley
[email protected]
Abstract
We present a novel strategy for automatically debugging programs given
sampled data from thousands of actual user runs. Our goal is to pinpoint
those features that are most correlated with crashes. This is accomplished
by maximizing an appropriately defined utility function. It has analogies
with intuitive debugging heuristics, and, as we demonstrate, is able to
deal with various types of bugs that occur in real programs.
1 Introduction
No software is perfect, and debugging is a resource-consuming process. Most users take
software bugs for granted, and willingly run buggy programs every day with little complaint. In some sense, these user runs of the program are the ideal test suite any software
engineer could hope for. In an effort to harness the information contained in these field
tests, companies like Netscape/Mozilla and Microsoft have developed automated, opt-in
feedback systems. User crash reports are used to direct debugging efforts toward those
bugs which seem to affect the most people.
However, we can do much more with the information users may provide. Even if we collect
just a little bit of information from every user run, successful or not, we may end up with
enough information to automatically pinpoint the locations of bugs. In earlier work [1] we
present a program sampling framework that collects data from users at minimal cost; the
aggregated runs are then analyzed to isolate the bugs. Specifically, we learn a classifier
on the data set, regularizing the parameters so that only the few features that are highly
predictive of the outcome have large non-zero weights.
One limitation of this earlier approach is that it uses different methods to deal with different
types of bugs. In this paper, we describe how to design a single classification utility function
that integrates the various debugging heuristics. In particular, determinism of some features
is a significant issue in this domain, and an additional penalty term for false positives is
included to deal with this aspect. Furthermore, utility levels, while subjective, are robust:
we offer simple guidelines for their selection, and demonstrate that results remain stable
and strong across a wide range of reasonable parameter settings.
We start by briefly describing the program sampling framework in Section 2, and present
the feature selection framework in Section 3. The test programs and our data set are described in Section 4, followed by experimental results in Section 5.
2 Program Sampling Framework
Our approach relies on being able to collect information about program behavior at runtime.
To avoid paying large costs in time or space, we sparsely sample the program?s runtime
behavior. We scatter a large number of checks in the program code, but do not execute all
of them during any single run. The sampled results are aggregated into counts which no
longer contain chronology information but are much more space efficient.
To catch certain types of bugs, one asks certain types of questions. For instance, function
call return values are good sanity checks which many programmers neglect. Memory corruption is another common class of bugs, for which we may check whether pointers are
within their prescribed ranges. We add a large set of commonly useful assertions into the
code, most of which are wild guesses which may or may not capture interesting behavior.
At runtime, the program tosses a coin (with low heads probability) independently for each
assertion it encounters, and decides whether or not to execute the assertion.
However, while it is not expensive to generate a random coin toss, doing so separately for
each assertion would incur a very large overhead; the program will run even slower than
just executing every assertion. The key is to combine coin tosses. Given i.i.d. Bernoulli
random variables with success probability h, the number of trials it takes until the first
success is a geometric random variable with probability P(n) = (1 − h)^(n−1) h. Instead of
tossing a Bernoulli coin n times, we can generate a geometric random variable to be used
as a countdown to the next sample. Each assertion decrements this countdown by 1; when
it reaches 0, we perform the assertion and generate another geometric random variable.1
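As an illustration, the countdown trick can be sketched in a few lines of Python. This is our own illustrative mock-up, not the deployed instrumentation (which injects equivalent C code at compile time); the function names and the 1/100 density are placeholders.

    import math
    import random

    def next_countdown(h):
        # Geometric(h) by inversion: P(n) = (1 - h)**(n - 1) * h for n >= 1.
        u = 1.0 - random.random()          # u in (0, 1], avoids log(0)
        return int(math.log(u) / math.log(1.0 - h)) + 1

    countdown = next_countdown(0.01)       # sampling density h = 1/100

    def maybe_check(assertion, h=0.01):
        # Decrement the shared countdown; run the check only when it reaches 0.
        global countdown
        countdown -= 1
        if countdown <= 0:
            assertion()                    # perform the sampled assertion
            countdown = next_countdown(h)  # draw the next skip length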
However, checking to see if the counter has reached 0 at every assertion is still an expensive
procedure. For further code optimization, we analyze each contiguous acyclic code region
(loops- and recursion-free) at compile time and count the maximum number of assertions
on any path through that region. Whenever possible, the generated code decrements in
bulk, and takes a fast path that skips over the individual checks within a contiguous code
region using just a single check against this maximum threshold.
Samples are taken in chronological order as the program runs. Useful as it might be, it
would take a huge amount of space to record this information. To save space, we instead
record only the counts of how often each assertion is found to be true or false. When the
program finishes, these counts, along with the program exit status, are sent back to the
central server for further analysis.
The program sampling framework is a non-trivial software analysis effort. Interested readers may refer to [1] for a more thorough treatment of all the subtleties, along with detailed
analyses of performance impact at different sampling rates.
3 Classification and Feature Selection
In the hopes of catching a wide range of bugs, we add a large number of rather wild guesses
into the code. Having cast a much bigger net than what we may need, the next step is to
identify the relevant features. Let crashes be labeled with an output of 1, and successes
labeled with 0. Knowing the final program exit status (crashed or successful) leaves us in
1
The sampling density h controls the tradeoff between runtime overhead and data sparsity. It is
set to be small enough to have tolerable overhead, which then requires more runs in order to alleviate
the effects of sparsity. This is not a problem for large programs like Mozilla and Windows with
thousands of crash reports a day.
a classification setting. However, our primary goal is that of feature selection [2]. Good
feature selection should be corroborated by classification performance, though in our case,
we only care about features that correctly predict one of the two classes. Hence, instead of
working in the usual maximum likelihood setting for classification and regularization, we
define and maximize a more appropriate utility function. Ultimately, we will see that the
two are not wholly unrelated.
It has been noted that the goals of variable and feature selection do not always coincide
with that of classification [3]. Classification is but the means to an end. As we demonstrate
in Section 5, good classification performance assures the user that the system is working
correctly, but one still has to examine the selected features to see that they make sense.
3.1 Some characteristics of the problem
We concentrate on isolating the bugs that are caused by the occurrence of a small set of
features, i.e. assertions that are always true when a crash occurs.2 We want to identify the
predicate counts that are positively correlated with the program crashing. In contrast, we
do not care much about the features that are highly correlated with successes. This makes
our feature selection an inherently one-sided process.
Due to sampling effects, it is quite possible that a feature responsible for the ultimate crash
may not have been observed in a given run. This is especially true in the case of "quick and painless" deaths, where a program crashes very soon after the actual bug occurs. Normally
this would be an easy bug to find, because one wouldn?t have to look very far beyond
the crashing point at the top of the stack. However, this is a challenge for our approach,
because there may be only a single opportunity to sample the buggy feature before the
program dies. Thus many crashes may have an input feature profile that is very similar to
that of a successful run. From the classification perspective, this means that false negatives
are quite likely.
At the other end of the spectrum, if we are dealing with a deterministic bug3 , false positives
should have a probability of zero: if the buggy feature is observed to be true, then the
program has to crash; if the program did not crash, then the bug must not have occurred.
Therefore, for a deterministic bug, any false positives during the training process should
incur a much larger penalty compared to any false negatives.
3.2 Designing the utility function
Let (x, y) denote a data point, where x is an input vector of non-negative integer counts, and y ∈ {0, 1} is the output label. Let f(x; θ) denote a classifier with parameter vector θ. There are four possible prediction outcomes: y = 1 and f(x; θ) = 1, y = 0 and f(x; θ) = 0, y = 1 and f(x; θ) = 0, and y = 0 and f(x; θ) = 1. The last two cases represent false negative and false positive, respectively. In the general form of utility maximization for classification (see, e.g., [4]), we can define separate utility functions for each of the four cases, and maximize the sum of the expected utilities:

    max_θ E_{P(Y|x)} U(Y, x; θ),    (1)

where

    U(Y, x; θ) = u1(x; θ) Y I{f(x;θ)=1} + u2(x; θ) Y I{f(x;θ)=0}
               + u3(x; θ) (1 − Y) I{f(x;θ)=0} + u4(x; θ) (1 − Y) I{f(x;θ)=1} + v(θ),    (2)
2
There are bugs that are caused by non-occurrence of certain events, such as forgotten initializations. We do not focus on this type of bugs in this paper.
3
A bug is deterministic if it crashes the program every time it is observed. For example, dereferencing a null pointer would crash the program without exception. Note that this notion of determinism
is data-dependent: it is always predicated on the trial runs that we have seen.
and where I_W is the indicator function for event W. The ui(x; θ) functions specify the utility of each case. v(θ) is a regularization term, and can be interpreted as a prior over the classifier parameters θ in the Bayesian terminology.
We can approximate the distribution P(Y|x) simply by its empirical distribution, P(Y = 1|x) := P̂(Y = 1|x) = y. The actual distribution of input features X is determined by the software under examination, hence it is difficult to specify and highly non-Gaussian. Thus we need a discriminative classifier. Let z = θᵀx, where the x vector is now augmented by a trailing 1 to represent the intercept term.⁴ We use the logistic function μ(z) to model the class conditional probability:

    P(Y = 1|x) := μ(z) = 1/(1 + e^(−z)).    (3)

The decision boundary is set to 1/2, so that f(x; θ) = 1 if μ(z) > 1/2, and f(x; θ) = 0 if μ(z) ≤ 1/2. The regularization term is chosen to be the ℓ1 norm of θ, which has the effect of driving most θi's to zero: v(θ) := −λ‖θ‖1 = −λ Σi |θi|. To slightly simplify the formula, we choose the same functional form for u1 and u2, but add an extra penalty term for false positives:
    u1(x; θ) := u2(x; θ) := α1 (log2 μ(x; θ) + 1)    (4)
    u3(x; θ) := α2 (log2(1 − μ(x; θ)) + 1)    (5)
    u4(x; θ) := α2 (log2(1 − μ(x; θ)) + 1) − α3 θᵀx.    (6)
Note that the additive constants do not affect the outcome of the optimization; they merely ensure that utility at the decision boundary is zero. Also, we can fold any multiplicative constants of the utility functions into αi, so the base of the log function is freely exchangeable. We find that the expected utility function is equivalent to:

    E U = α1 y log μ + α2 (1 − y) log(1 − μ) − α3 θᵀx (1 − y) I{μ>1/2} − λ‖θ‖1.    (7)

When α1 = α2 = 1 and α3 = 0, Eqn. (7) is akin to the Lasso [5] (standard logistic regression with ML parameter estimation and ℓ1-norm regularization). In general, this expected utility function weighs each class separately using αi, and has an additional penalty term for false positives.
Parameter learning is done using stochastic (sub)gradient ascent on the objective function.
Besides having desirable properties like fast convergence rate and space efficiency, such
on-line methods also improve user privacy. Once the sufficient statistics are collected, the
trial run can be discarded, thus obviating the need to permanently store any user's private data on a central server.

Eqn. (7) is concave in θ, but the ℓ1 norm and the indicator function are non-differentiable at θi = 0 and θᵀx = 0, respectively. This can be handled by subgradient ascent methods⁵. In practice, we jitter the solution away from the point of non-differentiability by taking a very small step along any subgradient. This means that none of the θi's will ever be exactly zero. But this does not matter since weights close enough to zero are essentially taken as zero. Only the few features with the most positive weights are selected at the end.
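For concreteness, a single ascent step on Eqn. (7) might look like the following numpy sketch. The function is our own illustration under the conventions above, not the authors' implementation; the step size and default parameter values are placeholders.

    import numpy as np

    def subgradient_step(theta, x, y, a1=1.0, a2=1.0, a3=0.0, lam=0.1, lr=1e-5):
        # One stochastic (sub)gradient ascent step on the expected utility (7).
        z = theta @ x
        mu = 1.0 / (1.0 + np.exp(-z))               # logistic model of P(Y=1|x)
        # Gradient of a1*y*log(mu) + a2*(1-y)*log(1-mu) with respect to theta
        # (log base folded into a1, a2).
        g = (a1 * y * (1.0 - mu) - a2 * (1.0 - y) * mu) * x
        if y == 0 and mu > 0.5:                     # extra false-positive penalty
            g -= a3 * x
        g -= lam * np.sign(theta)                   # subgradient of -lam*||theta||_1
        return theta + lr * g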
3.3 Interpretation of the utility functions
Let us closely examine the utility functions defined in Eqns. (4)-(6). For the case of Y = 1, Fig. 1(a) plots the function log2 μ(z) + 1. It is positive when z is positive, and approaches
4
Assuming that the more abnormalities there are, the more likely it is for the program to crash, it
is reasonable to use a classifier based on a linear combination of features.
5
Subgradients are a generalization of gradients that are also defined at non-differentiable points.
A subgradient for a convex function is any sublinear function pivoted at that point, and minorizing
the entire convex function. For convex (concave) optimization, any subgradient is a feasible descent
(ascent) direction. For more details, see, e.g., [6].
[Figure 1] (a) Plot of the true-positive indicator function 1{z>0} and the utility function log2 μ(z) + 1. (b) Plot of the true-negative indicator function 1{z<0}, the utility function log2(1 − μ(z)) + 1, and its asymptotic slopes −z/log 2 and −z/(2 log 2).
1 as z approaches +∞. It is a crude but smooth approximation of the indicator function for a true positive, y I{μ>1/2}. On the other hand, when z is negative, the utility function is negative, acting as a penalty for false negatives. Similarly, Fig. 1(b) plots the utility functions for Y = 0. In both cases, the utility function has an upper bound of 1, so that the effect of correct classifications is limited. On the other hand, incorrect classifications are undesirable, thus their penalty is an unbounded (but slowly decreasing) negative number.

Taking the derivative d/dz [log2(1 − μ(z)) + 1] = −μ(z)/log 2, we see that, when z is positive, −1 ≤ −μ(z) ≤ −1/2, so log2(1 − μ(z)) + 1 is sandwiched between two linear functions −z/log 2 and −z/(2 log 2). It starts off being closer to −z/(2 log 2), but approaches −z/log 2 asymptotically (see Fig. 1(b)). Hence, when the false positive is close to the decision boundary, the additional penalty of θᵀx = z in Eqn. (6) is larger than the default false positive penalty, though the two are asymptotically equivalent.
Let us turn to the roles of the multiplicative weights. α1 and α2 weigh the relative importance of the two classes, and can be used to deal with imbalanced training sets where one class is disproportionately larger than the other [7]. Most of the time a program exits successfully without crashing, so we have to deal with having many more successful runs than crashed runs (see Section 5). Furthermore, since we really only care about predicting class 1, increasing α1 beyond an equal balance of the two data sets could be beneficial for feature selection performance. Finally, α3 is the knob of determinism: if the bug is deterministic, then setting α3 to a large value will severely penalize false positives; if the bug is not deterministic, then a small value for α3 affords the necessary slack to accommodate runs which should have failed but did not. As we shall see in Section 5, if the bug is truly deterministic, then the quality of the final features selected will be higher for large α3 values.
In a previous paper [1], we outlined some simple feature elimination heuristics that can be used in the case of a deterministic bug. ⟨Elimination by universal falsehood⟩ discards any counter that is always zero, because it likely represents an assertion that can never be true. This is a very common data preprocessing step. ⟨Elimination by lack of failing example⟩ discards any counter that is zero on all crashes, because what never happens cannot have caused the crash. ⟨Elimination by successful counterexample⟩ discards any counter that is non-zero on any successful run, because these are assertions that can be true without a subsequent program failure. In our model, if a feature xi is never positive for any crashes, then its associated weight θi will only decrease in the maximization process. Thus it will not be selected as a crash-predictive feature. This handles ⟨elimination by lack of failing example⟩. Also, if a heavily weighted feature xi is positive on a successful run in the training set, then the classifier is more likely to result in a false positive. The false positive penalty term will then decrease the weight θi, so that such a feature is unlikely to be chosen at the end. Thus utility maximization also handles ⟨elimination by successful counterexample⟩. The model we derive here, then, neatly subsumes the ad hoc elimination heuristics used in our earlier work.
4 Two Case Studies
As examples, we present two cases studies of C programs with bugs that are at the opposite ends of the determinism spectrum. Our deterministic example is ccrypt, a small
encryption utility. ccrypt-1.2 is known to contain a bug that involves overwriting existing files. If the user responds to a confirmation prompt with EOF rather than yes or no,
ccrypt consistently crashes. Our non-deterministic example is GNU bc-1.06, the Unix
command line calculator tool. We find that feeding bc nine megabytes of random input
causes it to crash roughly one time in four while calling malloc() ? a strong indication
of heap corruption. Such bugs are inherently difficult to fix because they are inherently
non-deterministic: there is no guarantee that a mangled heap will cause a crash soon or
indeed at all.
ccrypt's sensitivity to EOF inputs suggests that the problem has something to do with its
interactions with standard file operations. Thus, randomly sampling function return values
may identify key operations close to the bug. Our instrumented program adds instrumentation after each function call to sample and record the number of times the return value is
negative, zero, or positive. There are 570 call sites of interest, for 570 × 3 = 1710 counters. In lieu of a large user community, we generate many runs artificially using reasonable
inputs. Each run uses a randomly selected set of present or absent files, randomized command line flags, and randomized responses to ccrypt prompts including the occasional
EOF. We have collected 7204 trial runs at a sampling rate of 1/100, 1162 of which result
in a crash. 6516 (≈ 90%) of these trial runs are randomly selected for training, and the
remaining 688 held aside for cross-validation. Out of the 1710 counter features, 1542 are
constant across all runs, leaving 168 counters to be considered in the training process.
In the case of bc, we are interested in the behavior of all pointers and buffers. All pointers
and array indices are scalars, hence we compare all pairs of scalar values. At any direct
assignment to a scalar variable a, we identify all other variables b1 , b2 , . . . , bn of the same
type that are also in scope. We record the number of times that a is found to be greater
than, equal to, or less than each bi . Additionally, we compare each pointer to the NULL
value. There are 30150 counters in all, of which 2908 are not constant across all runs. Our
bc data set consists of 3051 runs with distinct random inputs at a sampling rate of 1/1000.
2729 of these runs are randomly chosen as training set, 322 for the hold-out set.
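Schematically, the scalar-pair counters can be pictured as follows. This is a Python mock-up of the bookkeeping only; the real system injects equivalent C code at compile time, and the example site and values below are hypothetical.

    from collections import Counter

    counts = Counter()

    def record_comparisons(site, a, in_scope):
        # At an assignment to scalar `a`, compare it against every other
        # in-scope scalar `b` and bump the matching three-way counter.
        for name, b in in_scope:
            if a > b:
                counts[(site, name, '>')] += 1
            elif a == b:
                counts[(site, name, '==')] += 1
            else:
                counts[(site, name, '<')] += 1

    record_comparisons('storage.c:176', 42, [('a_count', 10), ('quiet', 0)])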
5 Experimental Results
We maximize the utility function in Eqn. (7) using stochastic subgradient ascent with a
learning rate of 10⁻⁵. In order to make the magnitude of the weights θi comparable to each other, the feature values are shifted and scaled to lie in [0, 1], then normalized to have unit variance. There are four learning parameters, α1, α2, α3, and λ. Since only their relative scale is important, the regularization parameter λ can be set to some fixed value (we use 0.1). For each setting of αi, the model is set to run for 60 iterations through the
training set, though the process usually converges much sooner. For bc, this takes roughly
110 seconds in MATLAB on a 1.8 GHz Pentium 4 CPU with 1 GB of RAM. The smaller
ccrypt dataset requires just under 8 seconds.
The values of α1, α2, and α3 can all be set through cross-validation. However, this may take a long time, plus we would like to leave the ultimate control of the values to the users of this tool. The more important knobs are α1 and α3: the former controls the relative importance of classification performance on crashed runs, the latter adjusts the believed level of determinism of the bug. Here are some guidelines for setting α1 and α3 that we find to work well in practice. (1) In order to counter the effects of imbalanced datasets, the ratio α1/α2 should be at least around the range of the ratio of successful to crashed runs. This is especially crucial for the ccrypt data set, which contains roughly 32 successful runs for every crash. (2) α3 should not be higher than α1, because it is ultimately more important
[Figure 2] (a, b) Cross-validation scores for the ccrypt data set, plotted against α1 and, for α1 = 30, against α3; (c, d) cross-validation scores for the bc data set, plotted against α1 and, for α1 = 5, against α3. All scores shown are the maximum over the remaining free parameters.
to correctly classify crashes than to not have any false positives.
As a performance metric, we look at the hold-out set confusion matrix and define the score
as the sum of the percentages of correctly classified data points for each class. Fig. 2(a)
shows a plot of cross-validation score (maximum over a number of settings for α2 and α3) for the ccrypt data set at various α1 values. It is apparent from the plot that any α1 values in the range of [10, 50] are roughly equivalent in terms of classification performance. Specifically, for the case of α1 = 30 (which is around the range suggested by Guideline 1 above), Fig. 2(b) shows the cross-validation scores plotted against different values for α3. In this case, as long as α3 is in the rough range of [3, 15], the classification performance remains the same.⁶

Furthermore, settings for α1 and α3 that are safe for classification also select high quality
features for debugging. The "smoking gun" which directly indicates the ccrypt bug is:

    traverse.c:122: xreadline() return value == 0
This call to xreadline() returns 0 if the input terminal is at EOF. In all of the above
mentioned safe settings for α1 and α3, this feature is returned as the top feature. The rest
of the higher ranked features are sufficient, but not necessary, conditions for a crash. The
only difference is that, in more optimal settings, the separation between the top feature and
the rest can be as large as an order of magnitude; in non-optimal settings (classification
score-wise), the separation is smaller.
For bc, the classification results are even less sensitive to the particular settings of α1, α2, and α3. (See Fig. 2(c, d).) The classification score is roughly constant for α1 ∈ [5, 20], and for a particular value of α1, such as α1 = 5, the value of α3 has little impact on classification performance. This is to be expected: the bug in bc is non-deterministic, and therefore false positives do indeed exist in the training set. Hence any small value for α3 will do.

As for the feature selection results for bc, for all reasonable parameter settings (and even those that do not have the best classification performance), the top features are a group of correlated counters that all point to the index of an array being abnormally big. Below are the top five features for α1 = 10, α2 = 2, α3 = 1:
    1. storage.c:176: more arrays(): indx > optopt
    2. storage.c:176: more arrays(): indx > opterr
    3. storage.c:176: more arrays(): indx > use_math
    4. storage.c:176: more arrays(): indx > quiet
    5. storage.c:176: more arrays(): indx > f_count
⁶ In Fig. 2(b), the classification performance for α1 = 30 and α3 = 0 is deceptively high. In this case, the best α2 value is 5, which offsets the cross-validation score by increasing the number of predicted non-crashes, at the expense of worse crash-prediction performance. The top feature becomes a necessary but not sufficient condition for a crash, a false-positive-inducing feature! Hence the lesson is that if the bug is believed to be deterministic then α3 should always be positive.
These features immediately point to line 176 of the file storage.c. They also indicate
that the variable indx seems to be abnormally big. Indeed, indx is the array index that
runs over the actual array length, which is contained in the integer variable a_count.
The program may crash long after the first array bound violation, which means that there
are many opportunities for the sampling framework to observe the abnormally big value
of indx. Since there are many comparisons between indx and other integer variables,
there is a large set of inter-correlated counters, any subset of which may be picked by our
algorithm as the top features. In the training run shown above, the smoking gun of indx > a_count is ranked number 8. But in general its rank could be much smaller, because
the top features already suffice for predicting crashes and pointing us to the right line in the
code.
6 Conclusions and Future Work
Our goal is a system that automatically pinpoints the location of bugs in widely deployed
software. We tackle different types of bugs using a custom-designed utility function with
a ?determinism level? knob. Our methods are shown to work on two real-world programs,
and are able to locate the bugs in a range of parameter settings.
In the real world, programs contain not just one, but many bugs, which will not be distinctly
labeled in the set of crashed runs. It is difficult to tease out the different failure modes
through clustering: it relies on macro-level usage patterns, as opposed to the microscopic
difference between failures. In on-going research, we are extending our approach to deal
with the problem of multiple bugs in larger programs. We are also working on modifying
the program sampling framework to allow denser sampling in more important regions of
the code. This should alleviate the sparsity of features while reducing the number of runs
required to yield useful results.
Acknowledgments
This work was supported in part by ONR MURI Grant N00014-00-1-0637; NASA Grant
No. NAG2-1210; NSF Grant Nos. EIA-9802069, CCR-0085949, ACI-9619020, and IIS9988642; DOE Prime Contract No. W-7405-ENG-48 through Memorandum Agreement
No. B504962 with LLNL.
References
[1] B. Liblit, A. Aiken, A. X. Zheng, and M. I. Jordan. Bug isolation via remote program sampling.
In ACM SIGPLAN PLDI 2003, 2003.
[2] A. Blum and P. Langley. Selection of relevant features and examples in machine learning. Artificial Intelligence, 97(1-2):245-271, 1997.
[3] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. Journal of Machine Learning Research, 3:1157-1182, March 2003.
[4] E. L. Lehmann. Testing Statistical Hypotheses. John Wiley & Sons, 2nd edition, 1986.
[5] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning. Springer-Verlag, 2001.
[6] J.-B. Hiriart-Urruty and C. Lemarechal. Convex Analysis and Minimization Algorithms, volume II. Springer-Verlag, 1993.
[7] N. Japkowicz and S. Stephen. The class imbalance problem: a systematic study. Intelligent Data Analysis Journal, 6(5), November 2002.
Bounded Finite State Controllers
Pascal Poupart
Department of Computer Science
University of Toronto
Toronto, ON M5S 3H5
[email protected]
Craig Boutilier
Department of Computer Science
University of Toronto
Toronto, ON M5S 3H5
[email protected]
Abstract
We describe a new approximation algorithm for solving partially observable MDPs. Our bounded policy iteration approach searches through the
space of bounded-size, stochastic finite state controllers, combining several advantages of gradient ascent (efficiency, search through restricted
controller space) and policy iteration (less vulnerability to local optima).
1 Introduction
Finite state controllers (FSCs) provide a simple, convenient way of representing policies
for partially observable Markov decision processes (POMDPs). Two general approaches
are often used to construct good controllers: policy iteration (PI) [7] and gradient ascent
(GA) [10, 11, 1]. The former is guaranteed to converge to an optimal policy; however, the
size of the controller often grows intractably. In contrast, the latter restricts its search to
controllers of a bounded size, but may get trapped in a local optimum.
While locally optimal solutions are often acceptable, for many planning problems with a
combinatorial flavor, GA can easily get trapped by simple policies that are far from optimal. Consider a system engaged in preference elicitation, charged with discovering an optimal query policy to determine relevant aspects of a user's utility function. Often no single question yields information of much value, while a sequence of queries does. If each question
has a cost, a system that locally optimizes the policy by GA may determine that the best
course of action is to ask no questions (i.e., minimize cost given no information gain).
When an optimal policy consists of a sequence of actions such that any small perturbation yields a bad policy, there is little hope of finding this sequence using methods that
greedily perform local perturbations such as those employed by GA.
In general, we would like the best of both worlds: bounded controller size and convergence to a global optimum. While achieving both is NP-hard for the class of deterministic
controllers [10], one can hope for a tractable algorithm that at least avoids obvious local optima. We propose a new anytime algorithm, bounded policy iteration (BPI), that improves a policy much like Hansen's PI [7] while keeping the size of the controller fixed. Whenever
the algorithm gets stuck in a local optimum, the controller is allowed to slightly grow by
introducing one (or a few) node(s) to escape the local optimum.
Following a brief review of FSCs (Sec. 2), we extend PI to stochastic controllers (Sec. 3),
thus admitting smaller, high quality controllers. We then derive the BPI algorithm by ensuring that the number of nodes remains unchanged (Sec. 4). We analyze the structure of
local optima for BPI (Sec. 5), relate this analysis to GA, and use it to justify a new method
to escape local optima. Finally, we report some preliminary experiments (Sec. 6).
2 Finite State Controllers for POMDPs
A POMDP is defined by a set of states S; a set of actions A; a set of observations O; a transition function T, where T^a(s, s') denotes the transition probability Pr(s' | s, a); an observation function Z, where Z^a(s', o) denotes the probability Pr(o | a, s') of making observation o in state s' after taking action a; and a reward function R, where R(s, a) denotes the immediate reward associated with state s when executing action a. We assume discrete state, action and observation sets and we focus on discounted, infinite horizon POMDPs with discount factor 0 ≤ γ < 1. Since states are not directly observable in POMDPs, we define a belief state b to be a distribution over states. Belief state b can be updated in response to an action-observation pair ⟨a, o⟩ using Bayes rule: the updated belief is b^{a,o}(s') ∝ Z^a(s', o) Σ_s T^a(s, s') b(s).
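A direct transcription of this update, assuming T and Z are stored as arrays indexed by action (a sketch under the notation above, not code from the paper):

    import numpy as np

    def belief_update(b, a, o, T, Z):
        # b: belief over states; T[a][s, s2] = Pr(s2|s, a); Z[a][s2, o] = Pr(o|a, s2).
        b_next = Z[a][:, o] * (b @ T[a])   # Pr(o|a, s') * sum_s b(s) T(s, a, s')
        return b_next / b_next.sum()       # normalize by Pr(o|b, a)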
Policies represented by FSCs are defined by a (possibly cyclic) directed graph G = ⟨N, E⟩, where each node n ∈ N is labeled by an action a and each edge e ∈ E by an observation o. Each node has one outward edge per observation. The FSC can be viewed as a policy ⟨ψ, η⟩, where the action strategy ψ associates each node n with an action ψ(n), and the observation strategy η associates each node and observation with a successor node η(n, o) (corresponding to the edge from n labeled with o). A policy is executed by taking the action associated with the "current node," and updating the current node by following the edge labeled by the observation made.

The value function V of an FSC is the expected discounted sum of rewards for executing its policy ⟨ψ, η⟩, and can be computed by solving a set of linear equations:

    V(n, s) = R(s, ψ(n)) + γ Σ_{s'} T^{ψ(n)}(s, s') Σ_o Z^{ψ(n)}(s', o) V(η(n, o), s').    (1)

Given an initial belief state b, an FSC's value at node n is simply the expectation V(n, b) = Σ_s b(s) V(n, s); the best starting node for a given b is determined by V(b) = max_n V(n, b). As a result, the value V(n, b) of each node n is linear with respect to the belief state; hence the value function of the controller is piecewise-linear and convex. In Fig. 1(a), each linear segment corresponds to the value function of a node and the upper surface of these segments forms the controller value function. The optimal value function satisfies Bellman's equation:

    V*(b) = max_a [ Σ_s b(s) R(s, a) + γ Σ_o Pr(o | b, a) V*(b^{a,o}) ].    (2)
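Policy evaluation therefore reduces to one |N||S| × |N||S| linear solve. A minimal sketch, assuming deterministic strategies ψ and η and the array conventions of the belief-update snippet above (our own scaffolding, not the authors' code):

    import numpy as np

    def evaluate_fsc(psi, eta, T, Z, R, gamma):
        # psi[n]: action at node n; eta[n][o]: successor node after observing o.
        # T: (A, S, S); Z: (A, S, O); R: (S, A). Returns V with V[n, s], Eq. (1).
        N, S = len(psi), R.shape[0]
        O = Z.shape[2]
        M = np.eye(N * S)                  # builds (I - gamma * P) row by row
        rhs = np.empty(N * S)
        for n in range(N):
            a = psi[n]
            for s in range(S):
                i = n * S + s
                rhs[i] = R[s, a]
                for s2 in range(S):
                    for o in range(O):
                        j = eta[n][o] * S + s2
                        M[i, j] -= gamma * T[a][s, s2] * Z[a][s2, o]
        return np.linalg.solve(M, rhs).reshape(N, S)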
Policy iteration (PI) [7] incrementally improves a controller by alternating between two
steps, policy improvement and policy evaluation, until convergence to an optimal policy.
Policy evaluation solves Eq. 1 for a given policy. Policy improvement adds nodes to the
controller by dynamic programming (DP) and removes other nodes. A DP backup applies
the r.h.s. of Eq. 2 to the value function (V in Fig. 2(a)) of the current controller to obtain a new, improved value function (V' in Fig. 2(a)). Each linear segment of V' corresponds to a new node added to the controller. Several algorithms can be used to perform DP backups, with incremental pruning [4] perhaps being the fastest. After the new nodes created by DP have been added, old nodes that are now pointwise dominated are removed. A node is pointwise dominated when its value is less than that of some other node at all belief states (e.g., n1 is pointwise dominated by n3 in Fig. 2(a)). The inward edges of a pointwise dominated node are re-directed to the dominating node since it offers better value (e.g., the inward arcs of n1 are redirected to n3 in Fig. 2(c)). The controller resulting from this policy improvement step is guaranteed to offer higher value at all belief states. On the other hand, up to |A||N|^{|O|} new nodes may be added with each DP backup, so the size of the controller quickly becomes intractable in many POMDPs.
[Figure 1] a) Value function example: nodes n1, n2, n3 over the belief space, with the upper surface of the segments and a dominating convex combination (dotted line) indicated; b) BPI local optimum: each linear segment of the value function is tangent to the backed up value function.
[Figure 2] a) Value function V and the backed-up V' obtained by DP; b) original controller (n1 and n2) with nodes added (n3 and n4) by DP; c) new controller once pointwise dominated node n1 is removed and its inward arcs a, b, c are redirected to n3.
3 Policy Iteration for Stochastic Controllers
Policy iteration only prunes nodes that are pointwise dominated, rather than all dominated
nodes. This is because the algorithm is designed to produce controllers with deterministic
observation strategies. A pointwise-dominated node can safely be pruned since its inward
arcs are redirected to the dominating node (which has value at least as high as the dominated
node at each state). In contrast, a node jointly dominated by several nodes (e.g., n2 in Fig. 2(b) is jointly dominated by n3 and n4) cannot be pruned without its inward arcs being redirected to different nodes depending on the current belief state.
This problem can be circumvented by allowing stochastic observation strategies. We revise the notion of observation strategy to η(n, o, n') = Pr(n' | n, o), defining a distribution over successor nodes n' for each ⟨node, observation⟩-pair. If the stochastic strategy is chosen carefully, the corresponding convex combination of dominating nodes may pointwise dominate the node we would like to prune. In Fig. 1(a), n1 is dominated by n2 and n3 together (but neither of them alone). Convex combinations of n2 and n3 correspond to all lines that pass through the intersection of n2 and n3. The dotted line illustrates one convex combination of n2 and n3 that pointwise dominates n1: consequently, n1 can be safely removed and its inward arcs re-directed to reflect this convex combination by setting the observation probabilities accordingly. In general, when a node is jointly dominated by a group of nodes, there exists a pointwise-dominating convex combination of this group.
Theorem 1 The value function V(n) of a node n is jointly dominated by the value functions V(n1), ..., V(nk) of nodes n1, ..., nk if and only if there is a convex combination Σ_i w_i V(n_i) that dominates V(n).

Table 1: Primal LP. V(n) is jointly dominated by V(n1), ..., V(nk) when ε > 0:

    min_{b, ε} ε
    s.t. Σ_s b(s) V(n_i, s) ≤ Σ_s b(s) V(n, s) + ε   ∀i,
         Σ_s b(s) = 1,   b(s) ≥ 0.

Table 2: Dual LP. The convex combination Σ_i w_i V(n_i) dominates V(n) when ε > 0:

    max_{w, ε} ε
    s.t. V(n, s) + ε ≤ Σ_i w_i V(n_i, s)   ∀s,
         Σ_i w_i = 1,   w_i ≥ 0.
Proof: V(n) is dominated by V(n1), ..., V(nk) when the objective of the LP in Table 1 is positive. This LP finds the belief state that minimizes the difference between V(n, b) and the max of V(n1, b), ..., V(nk, b). It turns out that the dual LP (Table 2) finds the most dominating convex combination parallel to V(n). Since the dual has a positive objective value when the primal does, the theorem follows.
As argued in the proof of Thm. 1, the LP in Table 1 gives us an algorithm to find the most
dominating convex combination parallel to a dominated node. In summary, by considering
stochastic controllers, we can extend PI to prune all dominated nodes (pointwise or jointly)
in the policy improvement step. This provides two advantages: controllers can be made
smaller while improving their decision quality.
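The LP of Table 2 is small and easy to set up with an off-the-shelf solver. A sketch using scipy follows; the encoding of the constraints is our own, with V(n_i) value vectors stored as rows of V_others:

    import numpy as np
    from scipy.optimize import linprog

    def dominating_combination(v_n, V_others):
        # Solve Table 2: max eps s.t. sum_i w_i * V(n_i, s) >= V(n, s) + eps
        # for all s, with w a probability vector. Variables: [w_1..w_k, eps].
        k, S = V_others.shape
        c = np.zeros(k + 1)
        c[-1] = -1.0                                  # maximize eps
        A_ub = np.hstack([-V_others.T, np.ones((S, 1))])
        b_ub = -v_n                                   # eps - w.V <= -V(n, s)
        A_eq = np.zeros((1, k + 1))
        A_eq[0, :k] = 1.0                             # weights sum to one
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
                      bounds=[(0, None)] * k + [(None, None)])
        return res.x[:k], -res.fun                    # (weights, eps)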
4 Bounded Policy Iteration
Although pruning all dominated nodes helps to keep the controller small, it may still grow
substantially with each DP backup. Several heuristics are possible to bound the number of
nodes. Feng and Hansen [6] proposed that one prunes all nodes that dominate the value function by less than some ε after each DP backup. Alternatively, instead of growing the controller with each backup and then pruning, we can do a partial DP backup that generates only a subset of the nodes using Cheng's algorithm [5], the witness algorithm [9], or other
heuristics [14]. In order to keep the controller bounded, for each node created in a partial
DP backup, one node must be pruned and its inward arcs redirected to some dominating
convex combination. In the event where no node is dominated, we can still prune a node
and redirect its arcs to a good convex combination, but the resulting controller may have
lesser value at some belief states. We now propose a new algorithm called bounded policy iteration (BPI) that guarantees monotonic value improvement at all belief states while
keeping the number of nodes fixed.
BPI considers one node at a time and tries to improve it while keeping all other nodes
fixed. Improvement is achieved by replacing each node by a good convex combination of
the nodes normally created by a DP backup, but without actually performing a backup.
Since the backed up value function must dominate the controller's current value function, then by Thm. 1 there must exist a convex combination of the backed up nodes that pointwise dominates each node of the controller. Combining this idea with Eq. 2, we can directly compute such convex combinations with the LP in Table 3. This LP has |A||N|^{|O|} + 1 variables corresponding to the probabilities of the convex combination as well as the variable ε measuring the value improvement. We can significantly reduce the number of variables by pushing the convex combination variables as far as possible into the DP backup, resulting in the LP shown in Table 4. The key here is to realize that we can aggregate many variables since we only care about the marginals c_a = Σ_{⟨n_o⟩} c_{a,⟨n_o⟩} and c_{a,o,n'} = Σ_{⟨n_o⟩: n_o = n'} c_{a,⟨n_o⟩}.

Table 3: Naive LP to find a convex combination of backed up nodes that dominates node n. The combination ranges over choices of an action a and a mapping ⟨n_o⟩ from observations to successor nodes:

    max_{c, ε} ε
    s.t. V(n, s) + ε ≤ Σ_{a,⟨n_o⟩} c_{a,⟨n_o⟩} [ R(s, a) + γ Σ_{s'} T^a(s, s') Σ_o Z^a(s', o) V(n_o, s') ]   ∀s,
         Σ_{a,⟨n_o⟩} c_{a,⟨n_o⟩} = 1,   c ≥ 0.

Table 4: Efficient LP to find a convex combination of backed up nodes that dominates node n:

    max_{c, ε} ε
    s.t. V(n, s) + ε ≤ Σ_a [ c_a R(s, a) + γ Σ_{s'} T^a(s, s') Σ_o Z^a(s', o) Σ_{n'} c_{a,o,n'} V(n', s') ]   ∀s,
         Σ_a c_a = 1,   Σ_{n'} c_{a,o,n'} = c_a  ∀a, o,   c_a ≥ 0,  c_{a,o,n'} ≥ 0.

The efficient LP in Table 4 has only |A| + |A||O||N| + 1 variables.¹ Furthermore, the variables c_a and c_{a,o,n'} have an intuitive interpretation w.r.t. the action and observation strategies for the improved node. Each c_a variable indicates the probability of executing action a (i.e., ψ(n, a) = c_a). Each c_{a,o,n'} variable indicates the (unnormalized) probability of reaching node n' after executing a and observing o (i.e., η(n, a, o, n') = c_{a,o,n'}/c_a). Note that we now use probabilistic action strategies and have extended probabilistic observation strategies to depend on the action executed.

To summarize, BPI alternates between policy evaluation and improvement as in regular PI, but the policy improvement step simply tries to improve each node by solving the LP in Table 4. The c_a and c_{a,o,n'} variables are used to set the probabilistic action and observation strategies of the new improved node.
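In outline, the overall loop is short. The sketch below assumes helper routines `evaluate` (solving Eq. 1) and `improve_node` (solving the LP of Table 4), plus a node object with a `set_strategy` method; all of these names are our own scaffolding rather than code from the paper.

    def bounded_policy_iteration(nodes, evaluate, improve_node,
                                 max_iters=100, tol=1e-9):
        # Alternate policy evaluation with per-node LP improvement;
        # the number of nodes never changes.
        for _ in range(max_iters):
            V = evaluate()                           # V[n, s] from Eq. (1)
            improved = False
            for n in nodes:
                eps, c_a, c_aon = improve_node(n, V) # LP of Table 4
                if eps > tol:
                    n.set_strategy(c_a, c_aon)       # new stochastic psi and eta
                    improved = True
            if not improved:
                break   # every node tangent to its backup: local optimum (Thm. 2)
        return nodes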
5 Local Optima
BPI is a simple, efficient alternative to standard PI that monotonically improves an FSC while keeping its size constant. Unfortunately, it is only guaranteed to converge to a local optimum. We now characterize BPI's local optima and propose a method to escape them.

5.1 Characterization

Thm. 2 gives a necessary and sufficient condition characterizing BPI's local optima. Intuitively, a controller is a local optimum when each linear segment touches from below, or is tangent to, the controller's backed up value function (see Fig. 1(b)).

Theorem 2 BPI has converged to a local optimum if and only if each node's value function is tangent to the backed up value function.

Proof: Since the objective function of the LP in Table 4 seeks to maximize the improvement ε, the resulting convex combination must be tangent to the upper surface of the backed up value function. Conversely, the only time when the LP won't be able to improve a node is when its vector is already tangent to the backed up value function.
¹ Actually, we don't need the c_a variables since they can be derived from the c_{a,o,n'} variables by summing out n', so the number of variables can be reduced to |A||O||N| + 1.
Interestingly, tangency is a necessary (but not sufficient) condition for GA's local optima.

Corollary 1 If GA has converged to a local optimum, then the value function of each node reachable from the initial belief state is tangent to the backed up value function.

Proof: GA seeks to monotonically improve a controller in the direction of steepest ascent. The LP of Table 4 also seeks a monotonically improving direction. Thus if BPI can improve a controller by finding a direction of improvement using the LP of Table 4, then GA will also find it or will find a steeper one. Conversely, when a controller is a local optimum for GA, then there is no monotonic improvement possible in any direction. Since BPI can only improve a controller by following a direction of monotonic improvement, GA's local optima are a subset of BPI's local optima. Thus, tangency is a necessary, but not sufficient, condition of GA's local optima.

In the proof of Corollary 1, we argued that GA's local optima are a subset of BPI's local optima. This suggests that BPI is inferior to GA since it can be trapped by more local optima than GA. However, we will describe in the next section a simple technique that allows BPI to easily escape from local optima.
5.2 Escape Technique
The tangency condition characterizing local optima can be used to design an effective escape method for BPI. It essentially tells us that such tangent belief states are "bottlenecks" for further policy improvement. If we could improve the value at the tangent belief state(s) of some node, then we could break out of the local optimum. A simple method for doing so consists of a one-step lookahead search from the tangent belief states. Figure 1(b) illustrates how belief state b' can be reached in one step from tangent belief state b, and how the backed up value function improves b''s current value. Thus, if we add a node to the controller that maximizes the value of b', its improved value can subsequently be backed up to the tangent belief state b, breaking out of the local optimum.
Our algorithm is summarized as follows: perform a one-step lookahead search from each
tangent belief state; when a reachable belief state can be improved, add a new node to the
controller that maximizes that belief state's value. Interestingly, when no reachable belief
state can be improved, the policy must be optimal at the tangent belief states.
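In outline (a sketch; `reachable` is assumed to enumerate the beliefs one action-observation step away, and the two value functions are assumed given):

    def find_escape_belief(tangent_beliefs, reachable,
                           backed_up_value, current_value, tol=1e-9):
        # One-step lookahead from each tangent belief; return a belief whose
        # backed-up value improves on the controller, or None (Thm. 3 case).
        for b in tangent_beliefs:
            for b2 in reachable(b):
                if backed_up_value(b2) > current_value(b2) + tol:
                    return b2        # add a node maximizing value at b2
        return None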
Theorem 3 If the backed up value function does not improve the value of any belief state
reachable in one step from any tangent belief state, then the policy is optimal at the tangent
belief states.
Proof: By definition, belief states for which the backed up value function provides no
improvement are tangent belief states. Hence, when all belief states reachable in one step
are themselves tangent belief states, then the set of tangent belief states is closed under
every policy. Since there is no possibility of improvement, the current policy must be
optimal at the tangent belief states.
Although Thm. 3 guarantees an optimal solution only at the tangent belief states, in practice,
they rarely form a proper subset of the belief space (when none of the reachable belief states
can be improved). Note also that the escape algorithm assumes knowledge of the tangent
belief states. Fortunately, the solution to the dual of the LP in Table 4 is a tangent belief
state. Since most commercial LP solvers return both the solution of the primal and dual, a
tangent belief state is readily available for each node.2
² A node may have more than one tangent belief state when an interval of its linear segment is tangent to the backed up value function, indicating that it is identical to some backed up node.
[Figure 3] Experimental results for the Maze400 and Tag-Avoid problems: expected rewards as a function of the number of nodes (top row) and of computation time in seconds, log scale (bottom row).
6 Experiments
We report some preliminary experiments with BPI and the escape method to assess their
robustness against local optima, as well as their scalability to relatively large POMDPs.
In a first experiment, we ran BPI with escape on a preference elicitation problem and a
modified version of the Heaven-and-Hell problem described in [3]. It consistently found
the optimal policy, whereas GA settled for a local optimum on both problems.
In a second experiment, we report the running time and decision quality of the controllers found for two large grid-world problems. The first is a 400-state extension of Hauskrecht's [8] 20-state maze problem, and the second Pineau et al.'s [12] 870-state tag-avoid problem. In Figure 3, we report the expected return achieved w.r.t. time and number of nodes. For the maze problem, the expected return is averaged over all 400 states since BPI tries to optimize the policy for all belief states simultaneously. For comparison purposes, the expected return for the tag-avoid problem is measured at the same initial belief state used in [12] even though BPI doesn't tailor its policy exclusively to that belief state. In contrast, many point-based algorithms including PBVI [12] (which is perhaps the best such algorithm) optimize the policy for a single initial belief state, capitalizing on a hopefully small reachable belief region. On tag-avoid, BPI found a controller with the same expected return of -9.18 that PBVI achieves in 180,880s with a policy of 1334 linear segments. This suggests that most of the belief space is reachable in tag-avoid. We also
ran BPI on the tiger-grid, hallway and hallway2 benchmark problems [12] and obtained
compact controllers achieving expected returns measured at the same initial belief states
used in [12], but without using them to tailor the policy. In contrast, PBVI achieved
comparable expected returns with policies of linear segments tailored to those initial
belief states. This suggests that only a small portion of the belief space is reachable.
7 Conclusion
We have introduced the BPI algorithm, which guarantees monotonic improvement of the
value function while keeping controller size fixed. While quite efficient, the algorithm may
get trapped in local optima. An analysis of such local optima reveals that the value function
of each node is tangent to the backed up value function. This property can be successfully
exploited in an algorithm that escapes local optima quite robustly.
This research can be extended in a number of directions. State aggregation [2] and belief
compression [13] techniques could be easily integrated with BPI to scale to problems with
large state spaces. Also, since stochastic GA [11, 1] can tackle model free problems (which
BPI cannot) it would be interesting to see if tangent belief states could be computed for
stochastic GA and used to design a heuristic to escape local optima similar to the one
proposed for BPI.
Acknowledgements We thank Darius Braziunas for his help with the implementation and the anonymous reviewers for their helpful comments.
References
[1] D. Aberdeen and J. Baxter. Scaling internal-state policy-gradient methods for POMDPs. Proc. ICML-02, pp. 3-10, Sydney, Australia, 2002.
[2] C. Boutilier and D. Poole. Computing optimal policies for partially observable decision processes using compact representations. Proc. AAAI-96, pp. 1168-1175, Portland, OR, 1996.
[3] D. Braziunas. Stochastic local search for POMDP controllers. Master's thesis, University of Toronto, Toronto, 2003.
[4] A. R. Cassandra, M. L. Littman, and N. L. Zhang. Incremental pruning: A simple, fast, exact method for POMDPs. Proc. UAI-97, pp. 54-61, Providence, RI, 1997.
[5] H.-T. Cheng. Algorithms for Partially Observable Markov Decision Processes. PhD thesis, University of British Columbia, Vancouver, 1988.
[6] Z. Feng and E. A. Hansen. Approximate planning for factored POMDPs. Proc. ECP-01, Toledo, Spain, 2001.
[7] E. A. Hansen. Solving POMDPs by searching in policy space. Proc. UAI-98, pp. 211-219, Madison, Wisconsin, 1998.
[8] M. Hauskrecht. Value-function approximations for partially observable Markov decision processes. Journal of Artificial Intelligence Research, 13:33-94, 2000.
[9] L. P. Kaelbling, M. Littman, and A. R. Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101:99-134, 1998.
[10] N. Meuleau, K.-E. Kim, L. P. Kaelbling, and A. R. Cassandra. Solving POMDPs by searching the space of finite policies. Proc. UAI-99, pp. 417-426, Stockholm, 1999.
[11] N. Meuleau, L. Peshkin, K.-E. Kim, and L. P. Kaelbling. Learning finite-state controllers for partially observable environments. Proc. UAI-99, pp. 427-436, Stockholm, 1999.
[12] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: an anytime algorithm for POMDPs. Proc. IJCAI-03, Acapulco, Mexico, 2003.
[13] P. Poupart and C. Boutilier. Value-directed compression of POMDPs. Proc. NIPS-02, pp. 1547-1554, Vancouver, Canada, 2002.
[14] N. L. Zhang and W. Zhang. Speeding up the convergence of value iteration in partially observable Markov decision processes. Journal of Artificial Intelligence Research, 14:29-51, 2001.
Matthew Brand
Mitsubishi Electric Research Labs
Cambridge MA 02139 USA
Abstract
Spectral methods for nonlinear dimensionality reduction (NLDR) impose
a neighborhood graph on point data and compute eigenfunctions of a
quadratic form generated from the graph. We introduce a more general
and more robust formulation of NLDR based on the singular value decomposition (SVD). In this framework, most spectral NLDR principles
can be recovered by taking a subset of the constraints in a quadratic form
built from local nullspaces on the manifold. The minimax formulation
also opens up an interesting class of methods in which the graph is "decorated" with information at the vertices, offering discrete or continuous
maps, reduced computational complexity, and immunity to some solution instabilities of eigenfunction approaches. Apropos, we show almost
all NLDR methods based on eigenvalue decompositions (EVD) have a solution instability that increases faster than problem size. This pathology
can be observed (and corrected via the minimax formulation) in problems
as small as N < 100 points.
1 Nonlinear dimensionality reduction (NLDR)
Spectral NLDR methods are graph embedding problems where a set of $N$ points $X \doteq [x_1, \ldots, x_N] \in \mathbb{R}^{D \times N}$, sampled from a low-dimensional manifold in an ambient space $\mathbb{R}^D$, is reparameterized by imposing a neighborhood graph $G$ on $X$ and embedding the graph with minimal distortion in a "parameterization" space $\mathbb{R}^d$, $d < D$. Typically the graph is sparse and local, with edges connecting points to their immediate neighbors. The embedding must keep these edges short or preserve their length (for isometry) or angles (for conformality).
The graph-embedding problem was first introduced as a least-squares problem by Tutte [1],
and as an eigenvalue problem by Fiedler [2]. The use of sparse graphs to generate metrics
for least-squares problems has been studied intensely in the following three decades (see
[3]). Modern NLDR methods use graph constraints to generate a metric in a space of embeddings $\mathbb{R}^N$. Eigenvalue decomposition (EVD) gives the directions of least or greatest variance under this metric. Typically a subset of $d$ extremal eigenvectors gives the embedding of $N$ points in $\mathbb{R}^d$ parameterization space. This includes the IsoMap family [4], the locally linear embedding (LLE) family [5,6], and Laplacian methods [7,8]. Using similar methods, the Automatic Alignment [6] and Charting [9] algorithms embed local subspaces instead of points, and by combining subspace projections thus obtain continuous maps between $\mathbb{R}^D$ and $\mathbb{R}^d$.
This paper introduces a general algebraic framework for computing optimal embeddings
directly from graph constraints. The aforementioned methods can be recovered as special cases. The framework also suggests some new methods with very attractive properties,
including continuous maps, reduced computational complexity, and control over the degree
of conformality/isometry in the desired map. It also eliminates a solution instability that is
intrinsic to EVD-based approaches. A perturbational analysis quantifies the instability.
2 Minimax theorem for graph embeddings

We begin with a neighborhood graph specified by a nondiagonal weighted adjacency matrix $M \in \mathbb{R}^{N \times N}$ that has the data-reproducing property $XM = X$ (this can be relaxed to $XM \approx X$ in practice). The graph-embedding and NLDR literatures offer various constructions of $M$, each appropriate to different sets of assumptions about the original embedding and its sampling $X$ (e.g., isometry, local linearity, noiseless samples, regular sampling, etc.). Typically $M_{ij} \neq 0$ if points $i, j$ are nearby on the intrinsic manifold and $|M_{ij}|$ is small or zero otherwise. Each point is taken to be a linear or convex combination of its neighbors, and thus $M$ specifies manifold connectivity in the sense that any nondegenerate embedding $Y$ that satisfies $YM \approx Y$ with small residual $\|YM - Y\|_F$ will preserve this connectivity and the structure of local neighborhoods. For example, in barycentric embeddings, each point is the average of its neighbors and thus $M_{ij} = 1/k$ if vertex $i$ is connected to vertex $j$ (of degree $k$). We will also consider three optional constraints on the embedding:
1. A null-space restriction, where the solution must be outside the column-space of $C \in \mathbb{R}^{N \times M}$, $M < N$. For example, it is common to stipulate that the solution $Y$ be centered, i.e., $YC = 0$ for $C = 1$, the constant vector.

2. A basis restriction, where the solution must be a linear combination of the rows of a basis $Z \in \mathbb{R}^{K \times N}$, $K \leq N$. This can be thought of as information placed at the vertices of the graph that serves as example inputs for a target NLDR function. We will use this to construct dimension-reducing radial basis function networks.

3. A metric $\Sigma \in \mathbb{R}^{N \times N}$ that determines how error is distributed over the points. For example, it might be important that boundary points have less error. We assume that $\Sigma$ is symmetric positive definite and has factorization $\Sigma = AA^\top$ (e.g., $A$ could be a Cholesky factor of $\Sigma$).
In most settings, the optional matrices will default to the identity matrix. In this context, we define the per-dimension embedding error of row-vector $y_i \in \mathrm{rows}(Y)$ to be

$$E_M(y_i) \doteq \max_{D \in \mathbb{R}^{M \times N}} \frac{\|(y_i(M + CD) - y_i)A\|}{\|y_i A\|}, \qquad y_i \in \mathrm{range}(Z), \tag{1}$$

where $D$ is a matrix constructed by an adversary to maximize the error. The optimizing $y_i$ is a vector inside the subspace spanned by the rows of $Z$ and outside the subspace spanned by the columns of $C$, for which the reconstruction residual $y_i M - y_i$ has smallest norm w.r.t. the metric $\Sigma$. The following theorem identifies the optimal embedding $Y$ for any choice of $M, Z, C, \Sigma$:
Minimax solution: Let $Q \in \mathbb{S}^{K \times P}$ be a column-orthonormal basis of the null-space of the rows of $ZC$, with $P = K - \mathrm{rank}(C)$. Let $B \in \mathbb{R}^{P \times P}$ be a square factor satisfying $B^\top B = Q^\top Z \Sigma Z^\top Q$, e.g., a Cholesky factor (or the "R" factor in a QR-decomposition of $(Q^\top Z A)^\top$). Compute the left singular vectors $U \in \mathbb{S}^{P \times P}$ of $U \, \mathrm{diag}(s) V^\top = B^{-\top} Q^\top Z (I - M) A$, with singular values $s^\top \doteq [s_1, \ldots, s_P]$ ordered $s_1 \leq s_2 \leq \cdots \leq s_P$. Using the leading columns $U_{1:d}$ of $U$, set $Y = U_{1:d}^\top B^{-\top} Q^\top Z$.
Theorem 1. $Y$ is the optimal (minimax) embedding in $\mathbb{R}^d$ with error $\|[s_1, \ldots, s_d]\|_2$:

$$Y \doteq U_{1:d}^\top B^{-\top} Q^\top Z = \arg\min_{Y \in \mathbb{R}^{d \times N}} \sum_{y_i \in \mathrm{rows}(Y)} E_M(y_i)^2, \quad \text{with } E_M(y_i) = s_i. \tag{2}$$
Appendix A develops the proof and other error measures that are minimized.
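The construction in Theorem 1 is directly computable. The following NumPy sketch is a minimal transcription of the recipe under our own defaults ($Z$ and $A$ default to the identity, $C$ to the centering constraint $1$); note that numpy returns singular values in descending order, so the output is reversed to make the leading columns the ones paired with the smallest singular values.

```python
import numpy as np
from scipy.linalg import null_space, cholesky, solve_triangular

def minimax_embedding(M, d, Z=None, C=None, A=None):
    """Optimal d-dimensional embedding per Theorem 1: Y = U_{1:d}^T B^{-T} Q^T Z."""
    N = M.shape[0]
    Z = np.eye(N) if Z is None else Z
    A = np.eye(N) if A is None else A
    C = np.ones((N, 1)) if C is None else C          # default: centering constraint
    Q = null_space((Z @ C).T)                        # basis orthogonal to cols of ZC
    G = Q.T @ Z @ A @ A.T @ Z.T @ Q                  # Q^T Z Sigma Z^T Q, Sigma = A A^T
    B = cholesky(G)                                  # upper triangular, B^T B = G
    Binv_T = solve_triangular(B, np.eye(B.shape[0]), trans='T', lower=False)  # B^{-T}
    U, s, _ = np.linalg.svd(Binv_T @ Q.T @ Z @ (np.eye(N) - M) @ A)
    U, s = U[:, ::-1], s[::-1]                       # reorder so s_1 <= s_2 <= ...
    Y = U[:, :d].T @ Binv_T @ Q.T @ Z                # d x N embedding
    return Y, s[:d]                                  # embedding error is ||s[:d]||_2
```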
Local NLDR techniques are easily expressed in this framework. When $Z = A = I$, $C = [\,]$, and $M$ reproduces $X$ through linear combinations with $M^\top 1 = 1$, we recover LLE [5]. When $Z = I$, $C = [\,]$, $I - M$ is the normalized graph Laplacian, and $A$ is a diagonal matrix of vertex degrees, we recover Laplacian eigenmaps [7]. When further $Z = X$ we recover locality preserving projections [8].
3 Analysis and generalization of charting
The minimax construction of charting [9] takes some development, but offers an interesting insight into the above-mentioned methods. Recall that charting first solves for a set of local affine subspace axes $S_1 \in \mathbb{R}^{D \times d}, S_2, \ldots$ at offsets $\mu_1 \in \mathbb{R}^D, \mu_2, \ldots$ that best cover the data and vary smoothly over the manifold. Each subspace offers a chart, a local parameterization of the data by projection onto the local axes. Charting then constructs a weighted mixture of affine projections that merges the charts into a global parameterization. If the data manifold is curved, each projection will assign a point a slightly different embedding, so the error is measured as the variance of these proposed embeddings about their mean. This maximizes consistency and tends to produce isometric embeddings; [9] discusses ways to explicitly optimize the isometry of the embedding.
Under the assumption of isometry, the charting error is equivalent to the sum-squared displacements of an embedded point relative to its immediate neighbors (summed over all neighborhoods). To construct the same error criteria in the minimax setting, let $x_{i-k}, \ldots, x_i, \ldots, x_{i+k}$ denote points in the $i$th neighborhood and let the columns of $V_i \in \mathbb{R}^{(2k+1) \times d}$ be an orthonormal basis of the rows of the local parameterization $S_i^\top [x_{i-k}, \ldots, x_i, \ldots, x_{i+k}]$. Then a nonzero reparameterization will satisfy $[y_{i-k}, \ldots, y_i, \ldots, y_{i+k}] V_i V_i^\top = [y_{i-k}, \ldots, y_i, \ldots, y_{i+k}]$ if and only if it preserves the relative position of the points in the local parameterization. Conversely, any relative displacements of the points are isolated by the formula $[y_{i-k}, \ldots, y_i, \ldots, y_{i+k}](I - V_i V_i^\top)$. Minimizing the Frobenius norm of this expression is thus equivalent to minimizing the local error in charting. We sum these constraints over all neighborhoods to obtain the constraint matrix $M = I - \sum_i F_i^\top (I - V_i V_i^\top) F_i$, where $(F_i)_{kj} = 1$ iff the $j$th point of the $i$th neighborhood is the $k$th point of the dataset. Because $V_i V_i^\top$ and $(I - V_i V_i^\top)$ are complementary, it follows that the error criterion of any local NLDR method (e.g., LLE, Laplacian eigenmaps, etc.) must measure the projection of the embedding onto some subspace of $(I - V_i V_i^\top)$.
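For concreteness, a sketch of the charting constraint matrix just described, assuming each neighborhood is supplied as an index list together with an orthonormal local basis $V_i$; the function name and inputs are our own conventions.

```python
import numpy as np

def charting_M(N, neighborhoods, Vs):
    """M = I - sum_i F_i^T (I - V_i V_i^T) F_i, where F_i selects the i-th
    neighborhood's points and V_i ((2k+1) x d, orthonormal columns) spans
    its local parameterization."""
    M = np.eye(N)
    for idx, V in zip(neighborhoods, Vs):
        P = np.eye(len(idx)) - V @ V.T   # isolates displacements off the chart
        M[np.ix_(idx, idx)] -= P         # accumulate -F_i^T P F_i in place
    return M
```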
To construct a continuous map, charting uses an overcomplete radial basis function (RBF) representation $Z = [z(x_1), z(x_2), \ldots, z(x_N)]$, where $z(x)$ is a vector that stacks $z_1(x), z_2(x)$, etc., and

$$z_m(x) \doteq \begin{bmatrix} K_m^\top (x - \mu_m) \\ 1 \end{bmatrix} \frac{p_m(x)}{\sum_{m'} p_{m'}(x)}, \tag{3}$$

$$p_m(x) \doteq \mathcal{N}(x \mid \mu_m, \Sigma_m) \propto e^{-(x - \mu_m)^\top \Sigma_m^{-1} (x - \mu_m)/2}, \tag{4}$$

and $K_m$ is any local linear dimensionality reducer, typically $S_m$ itself. Each column of $Z$ contains many "views" of the same point that are combined to give its low-dimensional embedding.
Finally, we set $C = 1$, which forces the embedding of the full data to be centered. Applying the minimax solution to these constraints yields the RBF network mixing matrix, $f(x) \doteq U_{1:d}^\top B^{-\top} Q^\top z(x)$. Theorem 1 guarantees that the resulting embedding is least-squares optimal w.r.t. $Z, M, C, A$ at the datapoints $f(x_i)$, and because $f(\cdot)$ is an affine transform of $z(\cdot)$ it smoothly interpolates the embedding between points.
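A minimal sketch of the chart features of Eqs. 3-4 and the resulting continuous map. The Gaussian parameters mu_m, Sigma_m and local reducers K_m are assumed to be given (e.g., from local PCA), and W stands for the mixing matrix $U_{1:d}^\top B^{-\top} Q^\top$ computed by the minimax solution; all names are ours.

```python
import numpy as np

def z_features(x, mus, Sigmas, Ks):
    """Stacked chart features z(x) of Eqs. 3-4 for a single point x."""
    ps = np.array([np.exp(-0.5 * (x - mu) @ np.linalg.solve(S, x - mu))
                   / np.sqrt(np.linalg.det(2 * np.pi * S))
                   for mu, S in zip(mus, Sigmas)])
    w = ps / ps.sum()                              # responsibilities p_m / sum_m' p_m'
    parts = [np.append(K.T @ (x - mu), 1.0) * wm   # [K_m^T (x - mu_m); 1] * weight
             for K, mu, wm in zip(Ks, mus, w)]
    return np.concatenate(parts)

def f(x, W, mus, Sigmas, Ks):
    """Continuous NLDR map f(x) = W z(x), an affine mixture of chart projections."""
    return W @ z_features(x, mus, Sigmas, Ks)
```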
There are some interesting variants:
[Fig. 1 panels: kernel embeddings of the twisted swiss roll computed by generalized EVD and by minimax SVD, with upper-right (UR) and lower-left (LL) corner details.]
Fig. 1. Minimax and generalized EVD solution for kernel eigenmap of a non-developable swiss roll. Points are connected into a grid which ideally should be regular. The EVD solution shows substantial degradation. Insets detail corners where the EVD solution crosses itself repeatedly. The border compression is characteristic of Laplacian constraints.
One-shot charting: If we set the local dimensionality reducers to the identity matrix (all $K_m = I$), then the minimax method jointly optimizes the local dimensionality reduction to charts and the global coordination of the charts (under any choice of $M$). This requires that $\mathrm{rows}(Z) \geq N$ for a fully determined solution.

Discrete isometric charting: If $Z = I$ then we directly obtain a discrete isometric embedding of the data, rather than a continuous map, making this a local equivalent of IsoMap.

Reduced basis charting: Let $Z$ be constructed using just a small number of kernels randomly placed on the data manifold, such that $\mathrm{rows}(Z) \ll N$. Then the size of the SVD problem is substantially reduced.
4 Numerical advantage of minimax method
Note that the minimax method projects the constraint matrix $M$ into a subspace derived from $C$ and $Z$ and decomposes it there. This suppresses unwanted degrees of freedom (DOFs) admitted by the problem constraints, for example the trivial $\mathbb{R}^0$ embedding where all points are mapped to a single point $y_i = N^{-1/2}$. The $\mathbb{R}^0$ embedding serves as a translational DOF in the solution. LLE- and eigenmap-based methods construct $M$ to have a constant null-space so that the translational DOF will be isolated in the EVD as a null eigenvalue paired to a constant eigenvector, which is then discarded. However, section 4.1 shows that this construction makes the EVD increasingly unstable as problem size grows and/or the data becomes increasingly amenable to low-residual embeddings, ultimately causing solution collapse. As the next paragraph demonstrates, the problem is exacerbated when embedding w.r.t. a basis $Z$ (via the equivalent generalized eigenproblem), partly because the eigenvector associated with the unwanted DOF can have arbitrary structure. In all cases the problem can be averted by using the minimax formulation with $C = 1$ to suppress the DOF.
A 2D plane was embedded in 3D with a curl, a twist, and 2.5% Gaussian noise, then regularly sampled at 900 points. We computed a kernelized Laplacian eigenmap using 70 random points as RBF centers, i.e., a continuous map using $M$ derived from the graph Laplacian and $Z$ constructed as above. The map was computed both via the minimax (SVD) method and via the equivalent generalized eigenproblem, where the translational degree of freedom must be removed by discarding an eigenvector from the solution. The two solutions are algebraically equivalent in every other regard. A variety of eigensolvers were tried; we took
[Fig. 2 panels: "Eigen spectrum compared to minimax spectrum" (excess energy, scaled by 10^-5, vs. eigenvalue index 0-200) and "Error in null embedding" (deviation, scaled by 10^-5, vs. point index 100-900).]
Fig. 2. Excess energy in the eigenspectrum indicates that the translational DOF has contaminated many eigenvectors. If the EVD had successfully isolated the unwanted DOF, then its remaining eigenvalues should be identical to those derived from the minimax solution. The graph at left shows the difference in the eigenspectra. The graph at right shows the EVD solution's deviation from the translational vector $y_0 = 1 \cdot N^{-1/2} \approx .03333$. If the numerics were perfect the line would be flat, but in practice the deviation is significant enough (roughly 1% of the diameter of the embedding) to noticeably perturb points in figure 1.
the best result. Figure 1 shows that the EVD solution exhibits many defects, particularly a
folding-over of the manifold at the top and bottom edges and at the corners. Figure 2 shows
that the noisiness of the EVD solution is due largely to mutual contamination of numerically
unstable eigenvectors.
4.1 Numerical instability of eigen-methods
The following theorem uses tools of matrix perturbation theory to show that as the problem size increases, the desired and unwanted eigenvectors become increasingly wobbly
and gradually contaminate each other, leading to degraded solutions. More precisely, the
low-order eigenvalues are ill-conditioned and exhibit multiplicities that may be true (due
to noiseless samples from low-curvature manifolds) or false (due to numerical noise). Although in many cases some post-hoc algebra can "filter" the unwanted components out
of the contaminated eigensolution, it is not hard to construct cases where the eigenvectors
cannot be cleanly separated. The minimax formulation is immune to this problem because
it explicitly suppresses the gratuitous component(s) before matrix decomposition.
Theorem 2. For any finite numerical precision, as the number of points $N$ increases, the Frobenius norm of numerical noise in the null eigenvector $v_0$ can grow as $O(N^{3/2})$, and the eigenvalue problem can approach a false multiplicity at a rate as fast as $O(N^{3/2})$, at which point the eigenvectors of interest (embedding and translational) are mutually contaminated and/or have an indeterminate eigenvalue ordering.
Please see appendix B for the proof. This theorem essentially lower-bounds an upper bound on error; examples can be constructed in which the problem is worse. For example, it can be shown analytically that when embedding points drawn from the simple curve $x_i = [a, \cos \pi a]^\top$, $a \in [0, 1]$ with $K = 2$ neighbors, instabilities cannot be bounded better than $O(N^{5/2})$; empirically we see eigenvector mixing with $N < 100$ points, and we see it grow at a rate of roughly $O(N^4)$, in many different eigensolvers. At very large scales, more pernicious instabilities set in. E.g., by $N = 20000$ points, the solution begins to fold over.
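The scaling argument is easy to probe numerically. The toy sketch below, entirely our own construction, uses the path-graph Laplacian for a $K = 2$ chain, injects entry-wise noise at fixed machine precision, and tracks the ratio between the perturbation norm and the eigengap as $N$ grows; on this chain the ratio grows like $N^{5/2}$, consistent with the curve example above.

```python
import numpy as np

def gap_ratio(N, eps=1e-12):
    """||E||_F / eigengap for the path-graph Laplacian on N points (K = 2)."""
    L = 2 * np.eye(N) - np.eye(N, k=1) - np.eye(N, k=-1)
    L[0, 0] = L[-1, -1] = 1.0                     # boundary vertices have degree 1
    lam = np.linalg.eigvalsh(L)                   # ascending; lam[0] = 0 up to round-off
    gap = lam[1] - lam[0]                         # ~ pi^2 / N^2 for the chain
    E_norm = eps * np.sqrt(np.count_nonzero(L))   # noise on each stored nonzero entry
    return E_norm / gap

for N in (50, 100, 200, 400):
    print(N, gap_ratio(N))                        # grows ~ N^{5/2} on this chain
```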
Although the algebraic multiplicity and instability of the eigenproblem are conceptually a minor oversight in the algorithmic realizations of eigenfunction embeddings, as theorem 2 shows, the consequences are eventually fatal.
5 Summary
One of the most appealing aspects of the spectral NLDR literature is that algorithms are usually motivated from analyses of linear operators on smooth differentiable manifolds, e.g., [7]. Understandably, these analyses rely on assumptions (e.g., smoothness or isometry or noiseless sampling) that make it difficult to predict what algorithmic realizations will do when real, noisy data violates these assumptions. The minimax embedding theorem provides a complete algebraic characterization of this discrete NLDR problem, and provides a solution that recovers numerically robustified versions of almost all known algorithms. It offers a principled way of constructing new algorithms with clear optimality properties and good numerical conditioning, notably the construction of a continuous NLDR map (an RBF network) in a one-shot optimization (SVD). We have also shown how to cast several local NLDR principles in this framework, and upgrade these methods to give continuous maps. Working in the opposite direction, we sketched the minimax formulation of isometric charting and showed that its constraint matrix contains a superset of all the algebraic constraints used in local NLDR techniques.
References
1. W. T. Tutte. How to draw a graph. Proc. London Mathematical Society, 13:743-768, 1963.
2. Miroslav Fiedler. A property of eigenvectors of nonnegative symmetric matrices and its application to graph theory. Czech. Math. Journal, 25:619-633, 1975.
3. Fan R. K. Chung. Spectral graph theory, volume 92 of CBMS Regional Conference Series in Mathematics. American Mathematical Society, 1997.
4. Joshua B. Tenenbaum, Vin de Silva, and John C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319-2323, December 22, 2000.
5. Sam T. Roweis and Lawrence K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323-2326, December 22, 2000.
6. Yee Whye Teh and Sam T. Roweis. Automatic alignment of hidden representations. In Proc. NIPS-15, 2003.
7. Mikhail Belkin and Partha Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Volume 14 of Advances in Neural Information Processing Systems, 2002.
8. Xiaofei He and Partha Niyogi. Locality preserving projections. Technical Report TR-2002-09, University of Chicago Computer Science, October 2002.
9. Matthew Brand. Charting a manifold. Volume 15 of Advances in Neural Information Processing Systems, 2003.
10. G. W. Stewart and Ji-Guang Sun. Matrix perturbation theory. Academic Press, 1990.
A Proof of minimax embedding theorem (1)

The burden of this proof is carried by supporting lemmas, below. To emphasize the proof strategy, we give the proof first; supporting lemmas follow.
Proof. Setting $y_i = l_i^\top Z$, we will solve for $l_i \in \mathrm{columns}(L)$. Writing the error in terms of $l_i$,

$$E_M(l_i) = \max_{K \in \mathbb{R}^{M \times N}} \frac{\|l_i^\top Z (I - M - CK) A\|}{\|l_i^\top Z A\|} = \max_{K \in \mathbb{R}^{M \times N}} \frac{\|l_i^\top Z (I - M) A - l_i^\top Z C K A\|}{\|l_i^\top Z A\|}. \tag{5}$$

The term $l_i^\top Z C K A$ produces infinite error unless $l_i^\top Z C = 0$, so we accept this as a constraint and seek

$$\min_{l_i^\top Z C = 0} \frac{\|l_i^\top Z (I - M) A\|}{\|l_i^\top Z A\|}. \tag{6}$$

By lemma 1, that orthogonality is satisfied by solving the problem in the space orthogonal to $ZC$; the basis for this space is given by the columns of $Q \doteq \mathrm{null}((ZC)^\top)$.

By lemma 2, the denominator of the error specifies the metric in solution space to be $Z A A^\top Z^\top$; when the problem is projected into the space orthogonal to $ZC$ it becomes $Q^\top (Z A A^\top Z^\top) Q$. Nesting the "orthogonally-constrained-SVD" construction of lemma 1 inside the "SVD-under-a-metric" lemma 2, we obtain a solution that uses the correct metric in the orthogonal space:

$$B^\top B = Q^\top Z A A^\top Z^\top Q \tag{7}$$
$$U \, \mathrm{diag}(s) V^\top = B^{-\top} \{ Q^\top (Z (I - M) A) \} \tag{8}$$
$$L = Q B^{-1} U \tag{9}$$
where braces indicate the nesting of lemmas. By the "best-projection" lemma (#3), if we order the singular values by ascending magnitude,

$$L_{1:d} = \arg\min_{J \in \mathbb{R}^{N \times d}} \sqrt{\sum_{j_i \in \mathrm{cols}(J)} \left( \|j^\top Z (I - M) A\| / \|j\|_{Z \Sigma Z^\top} \right)^2}. \tag{10}$$

The proof is completed by making the substitutions $L^\top Z \to Y^\top$ and $\|x^\top A\| \to \|x\|_\Sigma$ (for $\Sigma = A A^\top$), and leaving off the final square root operation to obtain

$$(Y^\top)_{1:d} = \arg\min_{J \in \mathbb{R}^{N \times d}} \sum_{j_i \in \mathrm{cols}(J)} \|j^\top (I - M)\|_\Sigma^2 / \|j\|_\Sigma. \tag{11}$$
Lemma 1. Orthogonally constrained SVD: The left singular vectors $L$ of matrix $M$ under the constraint $U^\top C = 0$ are calculated as $Q \doteq \mathrm{null}(C^\top)$, $U \, \mathrm{diag}(s) V^\top \overset{\text{SVD}}{\leftarrow} Q^\top M$, $L = Q U$.

Proof. First observe that $L$ is orthogonal to $C$: By definition, the null-space basis satisfies $Q^\top C = 0$, thus $L^\top C = U^\top Q^\top C = 0$. Let $J$ be an orthonormal basis for $C$, with $J^\top J = I$ and $Q^\top J = 0$. Then $L \, \mathrm{diag}(s) V^\top = Q Q^\top M = (I - J J^\top) M$, the orthogonal projector of $C$ applied to $M$, proving that the SVD captures the component of $M$ that is orthogonal to $C$.

Lemma 2. SVD with respect to a metric: The vectors $l_i \in L$, $v_i \in V$ that diagonalize matrix $M$ with respect to positive definite column-space metric $\Sigma$ are calculated as $B^\top B \leftarrow \Sigma$, $U \, \mathrm{diag}(s) V^\top \overset{\text{SVD}}{\leftarrow} B^{-\top} M$, $L \doteq B^{-1} U$; they satisfy $\|l_i^\top M\| / \|l_i\|_\Sigma = s_i$ and extremize this form for the extremal singular values $s_{\min}, s_{\max}$.

Proof. By construction, $L$ and $V$ diagonalize $M$:

$$L^\top M V = (B^{-1} U)^\top M V = U^\top (B^{-\top} M) V = \mathrm{diag}(s) \tag{12}$$

and $\mathrm{diag}(s) V^\top = B^{-\top} M$. Forming the gram matrices of both sides of the last line, we obtain the identity $V \, \mathrm{diag}(s)^2 V^\top = M^\top B^{-1} B^{-\top} M = M^\top \Sigma^{-1} M$, which demonstrates that $s_i \in s$ are the singular values of $M$ w.r.t. column-space metric $\Sigma$. Finally, $L$ is orthonormal w.r.t. the metric $\Sigma$, because $\|L\|_\Sigma^2 = L^\top \Sigma L = U^\top B^{-\top} B^\top B B^{-1} U = I$. Consequently,

$$\|l_i^\top M\| / \|l_i\|_\Sigma = \|l_i^\top M\| / 1 = \|s_i v_i^\top\| = s_i, \tag{13}$$

and by the Courant-Hilbert theorem,

$$s_{\max} = \max_l \|l^\top M\| / \|l\|_\Sigma; \qquad s_{\min} = \min_l \|l^\top M\| / \|l\|_\Sigma. \tag{14}$$
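Lemma 2 can be verified numerically in a few lines. The sketch below, using our own random test instance, checks that $L$ and $V$ diagonalize $M$ (Eq. 12) and that the metric-normalized norms recover the singular values (Eq. 13).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
M = rng.standard_normal((n, n))
A = rng.standard_normal((n, n))
Sigma = A @ A.T                                  # positive definite metric
B = np.linalg.cholesky(Sigma).T                  # upper factor, B^T B = Sigma
U, s, Vt = np.linalg.svd(np.linalg.solve(B.T, M))    # SVD of B^{-T} M
L = np.linalg.solve(B, U)                        # L = B^{-1} U

assert np.allclose(L.T @ M @ Vt.T, np.diag(s))   # Eq. 12: L, V diagonalize M
for i in range(n):
    li = L[:, i]
    ratio = np.linalg.norm(li @ M) / np.sqrt(li @ Sigma @ li)
    assert np.isclose(ratio, s[i])               # Eq. 13: ||l_i^T M|| / ||l_i||_Sigma = s_i
print("Lemma 2 verified on a random 6x6 instance")
```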
Lemma 3. Best projection: Taking $L$ and $s$ from lemma 2, let the columns of $L$ and elements of $s$ be sorted so that $s_1 \geq s_2 \geq \cdots \geq s_N$. Then for any dimensionality $1 \leq d \leq N$,

$$L_{1:d} \doteq [l_1, \ldots, l_d] = \arg\max_{J \in \mathbb{R}^{N \times d}} \|J^\top M\|_{(J^\top \Sigma J)^{-1}} \tag{15}$$
$$= \arg\max_{J \in \mathbb{R}^{N \times d} \,|\, J^\top \Sigma J = I} \|J^\top M\|_F \tag{16}$$
$$= \arg\max_{J \in \mathbb{R}^{N \times d}} \sqrt{\sum_{j_i \in \mathrm{cols}(J)} (\|j^\top M\| / \|j\|_\Sigma)^2} \tag{17}$$

with the optimum value of all right hand sides being $(\sum_{i=1}^d s_i^2)^{1/2}$. If the sort order is reversed, the minimum of this form is obtained.

Proof. By the Eckart-Young-Mirsky theorem, if $U^\top M V = \mathrm{diag}(s)$ with singular values sorted in descending order, then $U_{1:d} \doteq [u_1, \ldots, u_d] = \arg\max_{U \in \mathbb{S}^{N \times d}} \|U^\top M\|_F$. We first extend this to a non-orthonormal basis $J$ under a Mahalanobis norm:

$$\max_{J \in \mathbb{R}^{N \times d}} \|J^\top M\|_{(J^\top J)^{-1}} = \max_{U \in \mathbb{S}^{N \times d}} \|U^\top M\|_F \tag{18}$$

because $\|J^\top M\|^2_{(J^\top J)^{-1}} = \mathrm{trace}(M^\top J (J^\top J)^{-1} J^\top M) = \mathrm{trace}(M^\top J J^+ (J J^+)^\top M) = \|(J J^+) M\|_F^2 = \|U U^\top M\|_F^2 = \|U^\top M\|_F^2$, since $J J^+$ is a (symmetric) orthogonal projector having binary eigenvalues $\lambda \in \{0, 1\}$ and is therefore the gram of a thin orthogonal matrix. We then impose a metric $\Sigma$ on the column-space of $J$ to obtain the first criterion (equation 15), which asks what maximizes variance in $J^\top M$ while minimizing the norm of $J$ w.r.t. metric $\Sigma$. Here it suffices to substitute in the leading (resp., trailing) columns of $L$ and verify that the norm is maximized (resp., minimized). Expanding, $\|L_{1:d}^\top M\|^2_{(L_{1:d}^\top \Sigma L_{1:d})^{-1}} = \mathrm{trace}((L_{1:d}^\top M)^\top (L_{1:d}^\top \Sigma L_{1:d})^{-1} (L_{1:d}^\top M)) = \mathrm{trace}((L_{1:d}^\top M)^\top I (L_{1:d}^\top M)) = \mathrm{trace}((\mathrm{diag}(s_{1:d}) V_{1:d}^\top)^\top (\mathrm{diag}(s_{1:d}) V_{1:d}^\top)) = \|s_{1:d}\|^2$. Again, by the Eckart-Young-Mirsky theorem, these are the maximal variance-preserving projections, so the first criterion is indeed maximized by setting $J$ to the columns in $L$ corresponding to the largest values in $s$.

Criterion #2 restates the first criterion with the set of candidates for $J$ restricted to (the hyperelliptical manifold of) matrices that reduce the metric on the norm to the identity matrix (thereby recovering the Frobenius norm). Criterion #3 merely expands the above trace by individual singular values. Note that the numerator and denominator can have different metrics because they are norms in different spaces, possibly of different dimension. Finally, that the trailing $d$ eigenvectors minimize these criteria follows directly from the fact that the leading $N - d$ singular values account for the maximal part of the variance.
B Proof of instability theorem (2)

Proof. When generated from a sparse graph with average degree $K$, the weighted connectivity matrix $W$ is sparse and has $O(NK)$ entries. Since the graph vertices represent samples from a smooth manifold, increasing the sampling density $N$ does not change the distribution of magnitudes in $W$. Consider a perturbation of the nonzero values in $W$, e.g., $W \to W + E$ due to numerical noise $E$ created by finite machine precision. By the weak law of large numbers, the Frobenius norm of the sparse perturbation grows as $\|E\|_F \approx O(\sqrt{N})$. However the $t$th-smallest nonzero eigenvalue $\lambda_t(W)$ grows as $\lambda_t(W) = v_t^\top W v_t \approx O(N^{-1})$, because elements of the corresponding eigenvector $v_t$ grow as $O(N^{-1/2})$ and only $K$ of those elements are multiplied by nonzero values to form each element of $W v_t$. In sum, the perturbation $\|E\|_F$ grows while the eigenvalue $\lambda_t(W)$ shrinks. In linear embedding algorithms, the eigengap of interest is $\lambda_{\mathrm{gap}} \doteq \lambda_1 - \lambda_0$. The tail eigenvalue $\lambda_0 = 0$ by construction, but it is possible that $\lambda_0 > 0$ with numerical error, thus $\lambda_{\mathrm{gap}} \leq \lambda_1$. Combining these facts, the ratio between the perturbation and the eigengap grows as $\|E\|_F / \lambda_{\mathrm{gap}} \approx O(N^{3/2})$ or faster. Now consider the shifted eigenproblem $I - W$ with leading (maximal) eigenvalues $1 - \lambda_0 \geq 1 - \lambda_1 \geq \cdots$ and unchanged eigenvectors. From matrix perturbation theory [10, thm. V.2.8], when $W$ is perturbed to $W' = W + E$, the change in the leading eigenvalue from $1 - \lambda_0$ to $1 - \lambda_0'$ is bounded as $|\lambda_0 - \lambda_0'| \leq \sqrt{2} \|E\|_F$, and similarly $1 - \lambda_1' \leq 1 - \lambda_1 + \sqrt{2} \|E\|_F$. Thus $\lambda_{\mathrm{gap}}' \geq \lambda_{\mathrm{gap}} - 2\sqrt{2} \|E\|_F$. Since $\|E\|_F / \lambda_{\mathrm{gap}} \approx O(N^{3/2})$, the right hand side of the gap bound goes negative at a supralinear rate, implying that the eigenvalue ordering eventually becomes unstable with the possibility of the first and second eigenvalue/vector pairs being swapped. Mutual contamination of the eigenvectors happens well before: Under general (dense) conditions, the change in the eigenvector $v_0$ is bounded as $\|v_0' - v_0\| \leq \frac{4\|E\|_F}{|\lambda_0 - \lambda_1| - \sqrt{2}\|E\|_F}$ [10, thm. V.2.8]. (This bound is often tight enough to serve as a good approximation.) Specializing this to the sparse embedding matrix, we find that the bound weakens to $\|v_0' - 1 \cdot N^{-1/2}\| \leq \frac{O(\sqrt{N})}{O(N^{-1}) - O(\sqrt{N})} > \frac{O(\sqrt{N})}{O(N^{-1})} = O(N^{3/2})$.
Connectionist Models in Cognitive Science
Woojae Kim, Daniel J. Navarro?, Mark A. Pitt, In Jae Myung
Department of Psychology
Ohio State University
fkim.1124, navarro.20, pitt.2, [email protected]
Abstract
Despite the popularity of connectionist models in cognitive science,
their performance can often be di?cult to evaluate. Inspired by the
geometric approach to statistical model selection, we introduce a
conceptually similar method to examine the global behavior of a
connectionist model, by counting the number and types of response
patterns it can simulate. The Markov Chain Monte Carlo-based
algorithm that we constructed ?nds these patterns e?ciently. We
demonstrate the approach using two localist network models of
speech perception.
1
Introduction
Connectionist models are popular in some areas of cognitive science, especially
language processing. One reason for this is that they provide a means of expressing
the fundamental principles of a theory in a readily testable computational form.
For example, levels of mental representation can be mapped onto layers of nodes
in a connectionist network. Information ?ow between levels is then de?ned by the
types of connection (e.g., excitatory and inhibitory) between layers. The soundness
of the theoretical assumptions are then evaluated by studying the behavior of the
network in simulations and testing its predictions experimentally.
Although this sort of modeling has enriched our understanding of human cognition,
the consequences of the choices made in the design of a model can be di?cult to
evaluate. While good simulation performance is assumed to support the model
and its underlying principles, a drawback of this testing methodology is that it can
obscure the role played by a model?s complexity and other reasons why a competing
model might simulate human data equally well.
These concerns are part and parcel of the well-known problem of model selection. A
great deal of progress has been made in solving it for statistical models (i.e., those
that can be described by a family of probability distributions [1, 2]). Connectionist
?
Correspondence should be addressed to Daniel Navarro, Department of Psychology,
Ohio State University, 1827 Neil Avenue Mall, Columbus OH 43210, USA. Telephone:
(614) 292-1030, Facsimile: (614) 292-5601.
models, however, are a computationally di?erent beast. The current paper introduces a technique that can be used to assist in evaluating and choosing between
connectionist models of cognition.
2
A Complexity Measure for Connectionist Models
The ability of a connectionist model to simulate human performance well does not
provide conclusive evidence that the network architecture is a good approximation
to the human cognitive system that generated the data. For instance, it would be
unimpressive if it turned out that the model could also simulate many non-humanlike patterns. Accordingly, we need a ?global? view of the model?s behavior to
discover all of the qualitatively di?erent patterns it can simulate.
A model?s ability to reproduce diverse patterns of data is known as its complexity, an
intrinsic property of a model that arises from the interaction between its parameters
and functional form. For statistical models, it can be calculated by integrating the
determinant of the Fisher information matrix over the parameter space of the model,
and adding a term that is linear in the number of parameters. Although originally
derived by Rissanen [1] from an algorithmic coding perspective, this measure is
sometimes called the geometric complexity, because it is equal to the logarithm of
the ratio of two Riemannian volumes. Viewed from this geometric perspective, the
measure has an elegant interpretation as a count of the number of ?distinguishable?
distributions that a model can generate [3, 4]. Unfortunately, geometric complexity
cannot be applied to connectionist models, because these models rarely possess a
likelihood function, much less a well-de?ned Fisher information matrix. Also, in
many cases a learning (i.e., model-?tting) algorithm for ?nding optimal parameter
values is not proposed along with the model, further complicating matters.
A conceptually simple solution to the problem, albeit a computationally demanding
one, is ?rst to discretize the data space in some properly de?ned sense and then
to identify all of the data patterns a connectionist model can generate. This approach provides the desired global view of the model?s capabilities and its de?nition
resembles that of geometric complexity: the complexity of a connectionist model is
de?ned in terms of the number of discrete data patterns the model can produce. As
such, this reparametrization-invariant complexity measure can be used for virtually
all types of network models provided that the discretization of the data space is
both justi?able and meaningful.
A challenge in implementing this solution lies in the enormity of the data space,
which may contain a truly astronomical number of patterns. Only a small fraction of
these might correspond to a model?s predictions, so it is essential to use an e?cient
search algorithm, one that will ?nd most or all of these patterns in a reasonable
time. We describe an algorithm that uses Markov Chain Monte Carlo (MCMC)
to solve such problems. It is tailored to exploit the kinds of search spaces that we
suspect are typical of localist connectionist models, and we evaluate its performance
on two of them.
3
Localist Models of Phoneme Perception
A central issue in the ?eld of human speech perception is how lexical knowledge
in?uences the perception of speech sounds. That is, how does knowing the word you
are hearing in?uence how you hear the smaller units that make up the word (i.e.,
its phonemes)? Two localist models have been proposed that represent opposing
theoretical positions. Both models were motivated by di?erent theoretical prin-
[Figure 1 shows the two network architectures side by side: trace (left) with a Lexical Layer above a Phoneme Layer, and merge (right) with a Lexical Layer above separate Phoneme Input and Phoneme Decision layers.]
Figure 1: Network architectures for trace (left) and merge (right). Arrows indicate excitatory connections between layers; lines with dots indicate inhibitory connections within layers.
ciples. Proponents of trace [5] argue for bi-directional communication between layers whereas proponents of merge [6] argue against it. The models are shown schematically in Figure 1. Each contains two main layers. Phonemes are represented in the first layer and words in the second. Activation flows from the first to the second layer in both models. At the heart of the controversy is whether activation also flows in the reverse direction, directly affecting how the phonemic input is processed. In trace it can. In merge it cannot. Instead, the processing performed at the phoneme level in merge is split in two, with an input stage and a phoneme decision stage. The second, lexical layer cannot directly affect phoneme activation. Instead, the two sources of information (phonemic and lexical) are integrated only at the phoneme decision stage.
Although the precise details of the models are unnecessary for the purposes of this paper, it will be useful to sketch a few of their technical details. The parameters for the models (denoted θ), of which trace has 7 and merge has 11, correspond to the strength of the excitatory and inhibitory connections between nodes, both within and between layers. The networks receive a continuous input, and stabilize at a final state after a certain number of cycles. In our formulation, a parameter set θ was considered valid only if the final state satisfied certain decision criteria (discussed shortly). Detailed descriptions of the models, including typical parameter values, are given by [5] and [6].
Despite the differences in motivation, trace and merge are comparable in their ability to simulate key experimental findings [6], making it quite challenging if not impossible to distinguish between them experimentally. Yet surely the models are not identical? Is one more complex than the other? What are the functional differences between the two?

In order to address these questions, we consider data from experiments by [6] which are captured well by both models. In the experiments, monosyllabic words were presented in which the last phoneme from one word was partially replaced by one from another word (through digital editing) to create word blends that retained residual information about the identity of the phoneme from both words. The six types of blends are listed on the left of Table 1. Listeners had to categorize the last phoneme in one task (phoneme decision) and categorize the entire utterance as a word or a nonsense word in the other task (lexical decision). The response choices in each task are listed in the table. Three response choices were used in lexical decision to test the models' ability to distinguish between words, not just words and nonwords. The asterisks in each cell indicate the responses that listeners chose most often. Both trace and merge can simulate this pattern of responses.
Table 1: The experimental design. Asterisks denote human responses.

                            Phonemic Decision         Lexical Decision
Condition    Example        /b/   /g/   /z/   /v/     job   jog   nonword
bB           JOb + joB       *                         *
gB           JOg + joB       *                         *
vB           JOv + joB       *                         *
zZ           JOz + joZ                   *                          *
gZ           JOg + joZ                   *                          *
vZ           JOv + joZ                   *                          *
Table 2: Two sets of decision rules for trace and merge. The values shown correspond to activation levels of the appropriate decision node.

Constraint                   Weak                        Strong
Phoneme Decision
  Choose /b/ if...           /b/ > 0.4 & others < 0.4    /b/ > 0.45 & others < 0.25
                                                         (/b/ - max(others)) > 0.3
Lexical Decision
  Choose "job" if...         job > 0.4 & jog < 0.4       job > 0.45 & jog < 0.25
                                                         (job - jog) > 0.3
  Choose "nonword" if...     both < 0.4                  both < 0.25
                                                         abs(difference) < 0.15
The profile of response decisions (phoneme and lexical) over the six experimental conditions provides a natural definition of a data pattern that the model could produce, and the decision rules establish a natural (surjective) mapping from the continuous space of network states (of which each model can produce some subset) to the discrete space of data patterns. We applied two different sets of decision rules, listed in Table 2, and were interested in determining how many patterns (besides the human-like pattern) each model can generate. As previously discussed, these counts will serve as a measure of model complexity.
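For concreteness, a sketch of how the weak rules of Table 2 map a final network state to a discrete response (None when no criterion is met). The activation names are ours, and the symmetric rule for choosing "jog" is our assumption; Table 2 lists only the "job" and "nonword" rows.

```python
def weak_phoneme_decision(act, threshold=0.4):
    """act: dict of final phoneme activations, e.g. {'/b/': .6, '/g/': .1, ...}.
    Returns the unique phoneme above threshold when all others are below it."""
    above = [p for p, a in act.items() if a > threshold]
    if len(above) == 1 and all(a < threshold for p, a in act.items() if p != above[0]):
        return above[0]
    return None

def weak_lexical_decision(job, jog, threshold=0.4):
    """Weak lexical rules of Table 2 applied to the two word-node activations."""
    if job > threshold and jog < threshold:
        return "job"
    if jog > threshold and job < threshold:
        return "jog"    # symmetric rule, assumed by analogy with the "job" row
    if job < threshold and jog < threshold:
        return "nonword"
    return None

# A data pattern is the six-condition profile of (phoneme, lexical) decisions.
```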
4 The Search Algorithm
The search problem that we need to solve differs from the standard Monte Carlo counting problem. Ordinarily, Monte Carlo methods are used to discover how much of the search space is covered by some region by counting how often co-ordinates are sampled from that region. In our problem, a high-dimensional parameter space has been partitioned into an unknown number of regions, with each region corresponding to a single data pattern. The task is to find all such regions irrespective of their size. How do we solve this problem? Given the dimensionality of the space, brute force searches are impossible. Simple Monte Carlo (SMC; i.e., uniform random sampling) will fail because it ignores the structure of the search space.
The spaces that we consider possess three regularities, which we call a "grainy" structure, illustrated schematically in Figure 2. Firstly, on many occasions the network does not converge on a state that meets the decision criteria, so some proportion of the parameter space does not correspond to any data pattern. Secondly, the sizes of the regions vary a great deal. Some data patterns are elicited by a wide range of parameter values, whereas others can be produced only by a small range of values. Thirdly, small regions tend to cluster together. In these models, there are likely to be regions where the model consistently chooses the dominant phoneme and makes the correspondingly appropriate lexical decision. However, there will also be large regions in which the models always choose "nonword" irrespective of whether the stimulus is a word. Along the borders between regions, however, there might be lots of smaller "transition regions", and these regions will tend to be near one another.
The consequence of this structure is that the size of the region in which the process is currently located provides extensive information about the number of regions that are likely to lie nearby. In a small region, there will probably be other small regions nearby, so a fine-grained search is required in order to find them. However, a fine-grained search process will get stuck in a large region, taking tiny steps when great leaps are required. Our algorithm exploits this structure by using MCMC to estimate a different parameter sampling distribution p(θ|ri) for every region ri that it encounters, and then cycling through these distributions in order to sample parameter sets. The procedure can be reduced to three steps:
parameter sets. The procedure can be reduced to three steps:
1. Set i = 0, m = 0. Sample ? from p(?jr0 ), a uniform distribution over the
space. If ? does not generate a valid data pattern, repeat Step 1.
2. Set m = m + 1 and then i = m. Record the new pattern, and use MCMC
to estimate p(?jri ).
3. Sample ? from p(?jri ). If ? generates a new pattern, return to Step 2.
Otherwise, set i = mod(i, m) + 1, and repeat Step 3.
The process of estimating p(θ|r_i) is a fairly straightforward application of MCMC
[7]. We specify a uniform jumping distribution over a small hypersphere centered
on the current point θ in the parameter space, accepting candidate points if and
only if they produce the same pattern as θ. After collecting enough samples, we
calculate the mean and variance-covariance matrix for these observations, and use
this to estimate an ellipsoid around the mean, as an approximation to the i-th
region. However, since we want to find points in the bordering regions, the
estimated ellipsoid is deliberately oversized. The sampling distribution p(θ|r_i) is
simply a uniform distribution over the ellipsoid.
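Under the same caveats, the estimation step can be sketched as follows; the hypersphere step size and the inflation factor applied to the ellipsoid are illustrative values, not ones reported in the paper.

import numpy as np

class Region:
    """Oversized ellipsoid approximating one region; p(theta|ri) is uniform on it."""
    def __init__(self, mean, cov, inflate=2.0):
        self.mean = mean
        # Cholesky factor of the (jittered, inflated) covariance maps the
        # unit ball onto the sampling ellipsoid.
        self.chol = np.linalg.cholesky(cov + 1e-9 * np.eye(len(mean))) * inflate

    def sample(self, rng):
        # Uniform draw from the unit ball, then an affine map to the ellipsoid.
        d = len(self.mean)
        x = rng.standard_normal(d)
        x *= rng.uniform() ** (1.0 / d) / np.linalg.norm(x)
        return self.mean + self.chol @ x

def estimate_region(run_model, theta0, rng, n_samples=500, step=0.02):
    """Random-walk MCMC inside one region: propose uniformly within a small
    hypersphere around the current point, accept iff the pattern is unchanged."""
    target = run_model(theta0)
    theta = theta0
    chain = [theta0]
    d = len(theta0)
    for _ in range(n_samples):
        u = rng.standard_normal(d)
        u *= rng.uniform() ** (1.0 / d) / np.linalg.norm(u)
        proposal = theta + step * u
        if run_model(proposal) == target:
            theta = proposal
        chain.append(theta)
    chain = np.asarray(chain)
    return Region(chain.mean(axis=0), np.cov(chain, rowvar=False))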
Unlike SMC (or even a more standard application of MCMC), our algorithm has
the desirable property that it focuses on each region in equal proportion, irrespective of its size. Not only that, because the parameter space is high dimensional,
the vast majority of the distribution p(θ|r_i) will actually lie near the edges of the
ellipsoid: that is, the area just outside of the i-th region. Consequently, we search
primarily along the edges of the regions that we have already discovered, paying
closer attention to the small regions. The overall distribution p(θ) is essentially a
mixture distribution that assigns higher density to points known to lie near many
regions.
5
Testing the Algorithm
In the absence of analytic results, the algorithm was evaluated against standard
SMC. The first test applied both to a simple toy problem possessing a grainy structure. Inside a hypercube [0, 1]^d, an assortment of large and small regions (also
hypercubes) were defined using unevenly spaced grids so that all the regions neighbored each other (d ranged from 3 to 6). In higher dimensions (d >= 4), SMC did not
find all of the regions. In contrast, the MCMC algorithm found all of the regions,
and did so in a reasonable amount of time. Overall, the MCMC-based algorithm is
slower than SMC at the beginning of the search due to the time required for region
estimation. However, the time required to learn the structure of the parameter
space is time well spent because the search becomes more efficient and successful,
paying large dividends in time and accuracy in the end.
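As an illustration of how such a toy problem can be set up (our reconstruction; the paper does not specify the exact grids), uneven cut points along each axis partition the hypercube into cells of widely varying volume, and the "pattern" is simply the cell index:

import numpy as np

def make_toy_problem(d=4, cuts_per_axis=3, rng=None):
    """Partition [0,1]^d by unevenly spaced grids; each cell is one 'region'
    and the data pattern is the tuple of per-axis cell indices."""
    rng = rng if rng is not None else np.random.default_rng(1)
    cuts = [np.sort(np.concatenate(([0.0, 1.0], rng.uniform(0, 1, cuts_per_axis))))
            for _ in range(d)]

    def run_model(theta):
        idx = []
        for c, t in zip(cuts, theta):
            i = int(np.searchsorted(c, t, side='right')) - 1
            idx.append(min(max(i, 0), len(c) - 2))  # clip endpoints into range
        return tuple(idx)

    n_regions = int(np.prod([len(c) - 1 for c in cuts]))
    return run_model, n_regions

With d = 4 and three interior cuts per axis this yields 4^4 = 256 neighbouring regions of widely varying volume, against which SMC and the region-cycling search above can be compared.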
As a second test, we applied the algorithms to simplified versions of trace, constructed so that even SMC might work reasonably well. In one reduced model,
for instance, only phoneme responses were considered. In the other, only lexical
responses were considered. Weak and strong constraints (Table 2) were imposed on
both models. In all cases, MCMC found as many or more patterns than SMC, and
all SMC patterns were among the MCMC patterns.
6
Application to Models of Phoneme Perception
Next we ran the search algorithm on the full versions of trace and merge, using
both the strong and weak constraints (Table 2). The number of patterns discovered
in each case is summarized in Figure 3. In this experimental design merge is more
complex than trace, although the extent of this effect is somewhat dependent on
the choice of constraints. When strong constraints are applied trace (27 patterns)
is nested within merge (67 patterns), which produces 148% more patterns. However, when these constraints are eased, the nesting relationship disappears, and
merge (73 patterns) produces only 40% more patterns than trace (52 patterns).
Nevertheless, it is noteworthy that the behavior of each is highly constrained, producing less than 100 of the 4^6 x 3^6 = 2,985,984 patterns available. Also, for both
models (under both sets of constraints), the vast majority of the parameter space
was occupied by only a few patterns.
A second question of interest is whether each model's output veers far from human
performance (Table 1). To answer this, we classified every data pattern in terms of
the number of mismatches from the human-like pattern (from 0 to 12), and counted
how frequently the model patterns fell into each class. The results, shown in Figure 4, are quite similar and orderly for both models. The choice of constraints had
little effect, and in both cases the trace distribution (open circles) is a little closer
to the human-like pattern than the merge distribution (closed circles). Even so,
both models are remarkably human-like when considered in light of the distribution
of all possible patterns (cross hairs). In fact, the probability is virtually zero that
a ?random model? (consisting of a random sample of patterns) would display such
a low mismatch frequency.
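The classification amounts to a Hamming-distance histogram over the 12 response slots; a minimal sketch, with the pattern encoding assumed rather than taken from the paper:

import numpy as np

def mismatch_distribution(model_patterns, human_pattern):
    """For each k in 0..12, the proportion of model patterns that differ from
    the human-like pattern in exactly k of the 12 response slots."""
    counts = np.zeros(len(human_pattern) + 1, dtype=float)
    for p in model_patterns:
        k = sum(a != b for a, b in zip(p, human_pattern))
        counts[k] += 1
    return counts / counts.sum()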
Building on this analysis, we looked for qualitative differences in the types of mismatches made by each model. Since the choice of constraints made no difference,
Figure 5 shows the mismatch profiles under weak constraints. Both models produce
no mismatches in some conditions (e.g., bB-phoneme identification, vZ-lexical decision) and many in others (e.g., gB-lexical decision). Interestingly, trace and merge
produce similar mismatch profiles for lexical decision, and a comparable number of
mismatches (108 vs. 124).

Figure 3: Venn diagrams showing the number of patterns discovered for both models under both types of constraint. [Diagrams omitted.]

Figure 4: Mismatch distributions for all four models plus the data space. The 0-point corresponds to the lone human-like pattern contained in all distributions. [Proportion of patterns plotted against the number of mismatches, 0-12; plot data omitted.]

However, striking qualitative differences are evident for
phoneme decisions, with merge producing mismatches in conditions that trace
does not (e.g., vB, vZ). When the two graphs are compared, an asymmetry is evident
in the frequency of mismatches across tasks: merge makes phonemic mismatches
with about the same frequency as lexical errors (139 vs. 124), whereas trace does
so less than half as often (56 vs. 108).
The mismatch asymmetry accords nicely with the architectures shown in Figure 1.
The two models make lexical decisions in an almost identical manner: phonemic
information feeds into the lexical decision layer, from which a decision is made. It
should then come as no surprise that lexical processing in trace and merge is so
similar. In contrast, phoneme processing is split between two layers in merge but
con?ned to one in trace. The two layers dedicated to phoneme processing provide
merge an added degree of flexibility (i.e., complexity) in generating data patterns.
This shows up in many ways, not just in merge?s ability to produce mismatches in
more conditions than trace. For example, these mismatches yield a wider range
of phoneme responses. Shown above each bar in Figure 5 is the phoneme that was
misrecognized in the given condition. trace only misrecognized the phoneme as /g/
whereas merge misrecognized it as /g/, /z/, and /v/.
These analyses describe a few consequences of dividing processing between two
layers, as in merge, and in doing so creating a more complex model. On the
basis of performance (i.e., ?t) alone, this additional complexity is unnecessary for
modeling phoneme perception because the simpler architecture of trace simulates
human data as well as merge. If merge's design is to be preferred, the additional
complexity must be justified for other reasons [6].
Figure 5: Mismatch profiles for both trace and merge when the weak constraints are applied. Conditions are denoted by their phoneme blend. [Bar charts of the number of mismatches per condition (bB, gB, vB, zZ, gZ, vZ) for the phoneme and lexical tasks, with the misrecognized phoneme (/g/, /v/, /z/) or response (nw, jog) marked above the bars; chart data omitted.]
7
Conclusions
The results of this preliminary evaluation suggest that the MCMC-based algorithm
is a promising method for comparing connectionist models. Although it was developed to compare localist models like trace and merge, it may be broadly
applicable whenever the search space exhibits this ?grainy? structure. Indeed, the
algorithm could be a general tool for designing, comparing, and evaluating connectionist models of human cognition. Plans are underway to extend the approach to
other experimental designs, dependent measures (e.g., reaction time), and models.
Acknowledgements
The authors were supported by NIH grant R01-MH57472 awarded to IJM and MAP. DJN
was also supported by a grant from the Office of Research at OSU. We thank Nancy Briggs,
Cheongtag Kim and Yong Su for helpful discussions.
References
[1] Rissanen, J. (1996). Fisher information and stochastic complexity. IEEE Transactions
on Information Theory 42, 40-47.
[2] Rissanen, J. (2001). Strong optimality of the normalized ML models as universal codes
and information in data. IEEE Transactions on Information Theory 47, 1712-1717.
[3] Balasubramanian, V. (1997). Statistical inference, Occam's razor and statistical mechanics on the space of probability distributions. Neural Computation, 9, 349-368.
[4] Myung, I. J., Balasubramanian, V., & Pitt, M. A. (2000). Counting probability distributions: Differential geometry and model selection. Proceedings of the National
Academy of Sciences USA, 97, 11170-11175.
[5] McClelland, J. L. & Elman, J. L. (1986). The TRACE model of speech perception.
Cognitive Psychology, 18, 1-86.
[6] Norris, D., McQueen, J. M. & Cutler, A. (2000). Merging phonetic and lexical information in phonetic decision-making. Behavioral & Brain Sciences, 23, 299-325.
[7] Gilks, W. R. , Richardson, S., & Spiegelhalter, D. J. (1995). Markov Chain Monte
Carlo in Practice. London: Chapman and Hall.
1,512 | 2,375 | Synchrony Detection by Analogue VLSI
Neurons with Bimodal STDP Synapses
Adria Bofill-i-Petit
The University of Edinburgh
Edinburgh, EH9 3JL
Scotland
[email protected]
Alan F. Murray
The University of Edinburgh
Edinburgh, EH9 3JL
Scotland
[email protected]
Abstract
We present test results from spike-timing correlation learning experiments carried out with silicon neurons with STDP (Spike Timing Dependent Plasticity) synapses. The weight change scheme
of the STDP synapses can be set to either weight-independent or
weight-dependent mode. We present results that characterise the
learning window implemented for both modes of operation. When
presented with spike trains with different types of synchronisation
the neurons develop bimodal weight distributions. We also show
that a 2-layered network of silicon spiking neurons with STDP
synapses can perform hierarchical synchrony detection.
1
Introduction
Traditionally, Hebbian learning algorithms have interpreted Hebb's postulate in
terms of coincidence detection. They are based on correlations between mean presynaptic and postsynaptic spike firing rates rather than upon precise timing
differences between presynaptic and postsynaptic spikes.
In recent years, new forms of synaptic plasticity that rely on precise spike-timing
differences between presynaptic and postsynaptic spikes have been discovered in
several biological systems[1][2][3]. These forms of plasticity, generally termed Spike
Timing Dependent Plasticity (STDP), increase the synaptic efficacy of a synapse
when a presynaptic spike reaches the neuron a few milliseconds before the postsynaptic action potential. In contrast, when the postsynaptic neuron fires immediately
before the presynaptic neuron the strength of the synapse diminishes.
Much debate has taken place regarding the precise characteristics of the learning
rules underlying STDP [4]. The presence of weight dependence in the learning rule
has been identified as having a dramatic effect on the computational properties of
STDP. When weight modifications are independent of the weight value, a strong
competition takes places between the synapses. Hence, even when no spike-timing
correlation is present in the input, synapses develop maximum or minimum strength
so that a bimodal weight distribution emerges from learning[5]. Conversely, if the
learning rule is strongly weight-dependent, such that strong synapses receive less potentiation than weaker ones while depression is independent of the synaptic strength,
a smooth unimodal weight distribution emerges from the learning process[6].
In this paper we present circuits to support STDP on silicon. Bimodal weight
distributions are effectively binary. Hence, they are suited to analog VLSI implementation, as the main barrier to the implementation of on-chip learning, the long
term storage of precise analog weight values, can be rendered unimportant. However, weight-independent STDP creates a highly unstable learning process that may
hinder learning when only low levels of spike-timing correlations exist and neurons
have few synapses. The circuits proposed here introduce a tunable weight dependence mechanism which stabilises the learning process. This allows finer correlations
to be detected than does a weight-independent scheme. In the weight-dependent
learning experiments reported here the weight-dependence is set at moderate levels
such that bimodal weight distributions still result from learning.
The analogue VLSI implementation of spike-based learning was first investigated in [7]. The authors used a weight-dependent scheme and concentrated on the
weight normalisation properties of the learning rule. In [8], we proposed circuits
to implement asymmetric STDP which lacked the weight-dependent mechanism.
More recently, others have also investigated asymmetric STDP learning using VLSI
systems[9][10]. STDP synapses that contain an explicit bistable mechanism have
been proposed in [10]. Long-term bistable synapses are a good technological solution for weight storage. However, the maximum and minimum weight limits in
bimodal STDP already act as natural attractors. An explicit bistable mechanism
may increase the instability of the learning process and may hinder, in consequence,
the detection of subtle correlations. In contrast, the circuits that we propose here
introduce a mechanism that tends to stabilise learning.
2
STDP circuits
The circuits in Figure 1 implement the asymmetric decaying learning window with
the abrupt transition at the origin that is so characteristic of STDP. The weight of
each synapse is represented by the charge stored on its weight capacitor C w . The
strength of the weight is inversely proportional to Vw . The closer the value of Vw
is to GND , the stronger is the synapse.
Our silicon spiking neurons signal their firing events with the sequence of pulses
seen in Figure 1c. Signal post bp is back-propagated to the afferent synapses of the
neuron. Long is a longer pulse (a few ?s) used in the current neuron (termed as
signal postLong in Figure 1b). Long is also sent to input synapses of following neurons in the activity path (see preLong in 1a). Finally, spikeOut is the presynaptic
spike for the next receiving neuron (termed pre in Figure 1a). More details on the
implementation of the silicon neuron can be found in [11]
In Figure 1a, if preLong is long enough (a few microseconds), the voltage created by Ibpot on
the diode connected transistor N5 is copied to the gate of N2. This voltage across
Cpot decays with time from its peak value due to a leakage current set by Vbpot .
When the postsynaptic neuron fires, a back propagation pulse post bp switches N3
on. Therefore, the weight is potentiated (Vw decreased) by an amount which reflects
the time elapsed since the last presynaptic event.
A weight dependence mechanism is introduced by the simple linearised V-I configuration P5-P6 and current mirror N7-N6 (see Figure 1a). P5 is a low gain transistor operated in strong inversion whereas P6 is a wide transistor made to operate
in weak inversion such that it has even higher gain. When the value of Vw decreases
(weight increase) the current through P5-P6 increases, but P5 is maintained in the
linear region by the high gain transistor. Thus, a current proportional to the value
of the weight is subtracted from Ibpot . The resulting smaller current injected into
N5 will cause a drop in the peak of potentiation for large weight values.
[Figure 1 schematic: transistor-level diagrams of the weight change circuits, panels (a)-(c); diagrams omitted.]
Figure 1: Weight change circuits. (a) The strength of the synapse is inversely proportional
to the value of Vw . The lower Vw , the smaller the weight of the synapse. This section of
the weight change circuit detects causal spike correlations. (b) A single depression circuit
present in the soma of the neuron creates the decaying shape of the depression side of the
learning window. (c) Waveforms of pulses that signal an action potential event. They are
used to stimulate the weight change circuits.
In a similar manner to potentiation, the weight is weakened by the circuit of
Figure 1b when it detects a non-causal interaction between a presynaptic and a
postsynaptic spike. When a postsynaptic spike event is generated a postLong pulse
charges Cdep . The charge accumulated leaks linearly through N3 at a rate set by
Vbdep . A set of non-linear decaying currents (IdepX ) is sent to the weight change
circuits placed in the input synapse (see Idep in Figure 1a). When a presynaptic
spike reaches a synapse, P1 is switched on. If this occurs soon enough after the
postLong pulse was generated, Vw is brought closer to Vdd (weight strength decreased). Only one depression circuit per neuron is required since the depression
part of the learning rule is independent of the weight value.
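A behavioural (not transistor-level) sketch of the learning rule these circuits implement is given below; the exponential window shapes and the linear weight-dependence of the potentiation peak are our modelling assumptions, with illustrative constants.

import numpy as np

def delta_w(dt, w, a_pot=0.3, a_dep=0.15, tau_pot=10e-3, tau_dep=10e-3,
            w_max=1.0, weight_dependent=False):
    """Weight change for a spike pair separated by dt = t_pre - t_post (s).
    dt < 0: pre before post -> potentiation, with its peak scaled down for
    strong weights when weight_dependent=True; dt > 0: post before pre ->
    depression, independent of the weight in both modes."""
    if dt < 0:                       # causal pairing
        peak = a_pot * (1.0 - w / w_max) if weight_dependent else a_pot
        return peak * np.exp(dt / tau_pot)
    else:                            # non-causal pairing
        return -a_dep * np.exp(-dt / tau_dep)

Sweeping dt over roughly +/-30 ms and plotting delta_w reproduces the qualitative window shapes measured in Figures 2 and 6a.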
A chip including 5 spiking neurons with STDP synapses has been fabricated using a standard 0.6 micron CMOS process. Each neuron has 6 learning synapses, a single
excitatory non-learning synapse and a single inhibitory one. Along with the silicon
neuron circuits, the chip contains several voltage buffers that allow us to monitor
the behaviour of the neuron. The testing setup uses a networked logic analysis system to stimulate the silicon neuron and to capture the results of on-chip learning.
An externally addressable circuit creates preLong and pre pulses to stimulate the
synapses.
3
Weight-independent learning rule
3.1
Characterisation
A weight-independent weight change regime is obtained by setting Vr to Vdd in
the weight change circuit presented in Figure 1. The resulting learning window
on silicon can be seen in Figure 2. Each point in the curve was obtained from the
stimulation of the fixed synapse and a learning synapse with a varying delay between
them. As can be seen in the figure, the circuit is highly tunable. Figure 2a shows
that the peaks for potentiation and depression can be set independently. Also, as
shown in Figure 2b the decay of the learning window for both sides of the curve can
be set independently of the maximum weight change with Vbdep and Vbpot . Since the
weight-dependent mechanism is switched off, the curve of the learning window is
the same for a wide range of Vw.

Figure 2: Experimental learning window for weight-independent STDP. The curves show the weight modification induced in the weight of a learning synapse for different time intervals between the presynaptic and the postsynaptic spike. For the results shown, the synapses were operated in weight-independent mode. (a) The peak of the learning window is shown for 4 different settings. The peaks for potentiation and depression are tuned independently with Ibpot and Ibdep. (b) The rate of decay of the learning window for potentiation and depression can be set independently without affecting the maximum weight change. [Panels plot the change in Vw (V) against tpre - tpost (ms); plotted data omitted.]

Obviously, when the weight voltage Vw approaches
any of the power supply rails a saturation effect occurs as the transistors injecting
current in the weight capacitor leave saturation. For the learning experiment with
weight-independent weight change the area under the potentiation curve should be
approximately 50% smaller than the area under the depression region.
3.2
Learning spike-timing correlations with weight-independent
learning
We stimulated a 6-synapse silicon neuron with 6 independent Poisson-distributed
spike trains with a rate of 30Hz. An absolute refractory period of 10ms was enforced
between consecutive spikes of each train. Refractoriness helps break the temporal
axis into disjoint segments so that presynaptic spikes can make less noisy "predictions" of the postsynaptic time of firing. We introduced spike-timing correlations
between the inputs for synapses 1 and 2. Synapses 3 to 6 were uncorrelated.
The evolution of the 6 weights for one such experiment is shown in Figure 3.
The correlated inputs shared 35% of the spike-timings. They were constructed by
merging two independent 19.5Hz Poisson-distributed spike trains with a common
10.5Hz spike train. As can be seen in Figure 3 the weights of synapses that receive
correlated activity reach maximum strength (Vw close to GND) whereas the rest
decay towards Vdd. Clearly, the bimodal weight distribution reflects the correlation
pattern of the input signals.
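The input construction can be sketched as follows (exponential-interval Poisson generation with a shared component; the rates follow the text, the implementation details are ours):

import numpy as np

def poisson_train(rate, duration, rng, t_ref=0.01):
    """Poisson spike train (times in seconds) with an absolute refractory period."""
    t, spikes = 0.0, []
    while True:
        t += rng.exponential(1.0 / rate)
        if t >= duration:
            break
        if not spikes or t - spikes[-1] >= t_ref:
            spikes.append(t)
    return np.array(spikes)

def correlated_pair(duration, rng, r_indep=19.5, r_common=10.5, t_ref=0.01):
    """Two trains sharing the spikes of a common 10.5 Hz train, each merged
    with an independent 19.5 Hz train (roughly 35% shared timings at ~30 Hz).
    Note: the merge can occasionally violate the refractory period; a final
    pruning pass would enforce it strictly."""
    common = poisson_train(r_common, duration, rng, t_ref)
    return [np.sort(np.concatenate((common,
                                    poisson_train(r_indep, duration, rng, t_ref))))
            for _ in range(2)]

rng = np.random.default_rng(0)
s1, s2 = correlated_pair(10.0, rng)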
3.3
Hierarchical synchrony detection
To experiment with hierarchical synchrony detection we included in the chip a
small 2-layered network of STDP silicon neurons with the configuration shown
in Figure 4. Neurons in the first layer were stimulated with independent sets of
Poisson-distributed spike trains with a mean spiking rate of 30Hz. As with the
experiments presented in the preceding section, a 10ms refractory period was
forced between consecutive spikes. A primary level of correlation was introduced
for each neuron in the first layer as signalled by the arrowed bridge between the
inputs of synapses 1 and 2 of each neuron.

Figure 3: Learning experiment with weight-independent STDP. [Vw (V) traces for the six synapses plotted against time (s); Vw1 and Vw2 fall toward GND while the others rise toward Vdd. Plot data omitted.]

Figure 4: Final weight values for a 2-layered network of STDP silicon neurons. [Network diagram of N1-N5 with the correlation levels 0.5 and 0.25 marked on the input bridges; diagram omitted.]

For the results shown here these 2
inputs of each neuron shared 50% of the spike-timings (indicated with 0.5 on top
of the double-arrowed bridge of Figure 4). A secondary level of correlation was
introduced between the inputs of synapses 1 and 2 of both N1 and N2, as signalled
by the arrow linking the first level of correlations of N1 and N2. This second level
of correlations is weaker, with only 25% of shared spikes (indicated with 0.25 in
Figure 4). The two direct inputs of N5, in the second layer, were also Poisson
distributed but had a rate of 15Hz.
The evolution of the weights recorded for the experiment just described is
presented in Figure 5. On the left, we see the weight evolution for N1. The weights
corresponding to synapses 1 and 2 evolve towards the maximum value (i.e. GND).
The weights of the remaining synapses, which receive random activity, decrease
(i.e. Vw close to Vdd). The other neurons in the 1st layer have weight evolutions
similar to that of N1. Synapses with synchronised activity corresponding to the
1st level of correlations win the competition imposed by STDP. The Vw traces
on the right-hand side of Figure 5 show how N5 in the second layer captures the
secondary level of correlation. Weights of the synapses receiving input from N1
and N2 are reinforced while the rest are decreased towards the minimum possible
weight value (Vw = Vdd). Clearly, the second layer only captures features from
signals which have already a basic level of interesting features (primary level of
correlations) detected by the first layer.
In Figure 4, we have represented graphically the final weight distribution for
all synapses.

Figure 5: Hierarchical synchrony detection. (a) Weight evolution of neuron in first layer. (b) Weight evolution of output neuron in 2nd layer. [Vw (V) plotted against time (s); plot data omitted.]

As marked by filled circles, only synapses in the path of hierarchical
synchrony activity develop maximum weight strength. In contrast, weights with
final minimum strength are indicated by empty circles. These correspond to
synapses of first layer neurons which received uncorrelated inputs or synapses of
N5 which received inputs from neurons stimulated without a secondary level of
correlations (N3-N4).
4
Weight-dependent learning rule
4.1
Characterisation
The STDP synapses presented can also be operated in weight-dependent mode.
The weight dependent learning window implemented is similar to that which seems
to underlie some STDP recordings from biological neurons [6]. Figure 6a shows
chip results of the weight-dependent learning rule. The weight change curve for
potentiation is given for 3 different weight values. The larger the weight value (low
Vw ), the smaller the degree of potentiation induced in the synapse. The depression
side of the learning window is unaffected by the weight value since the depression
circuit shown in Figure 1b does not have an explicit weight-dependent mechanism.
4.2
Learning spike-timing correlations with weight-dependent learning
Figure 6b shows the weight evolution for an experiment where the correlated activity between synapses 1 and 2 consisted of only 20% of common spike-timings.
As in the weight-independent experiments, the mean firing rate was 30Hz and a
refractory period of 10ms was enforced.
Finally, we stimulated a neuron in weight-dependent mode with a form of synchrony where spike-timings coincided in a time window (window of correlation)
instead of being perfectly matched (syn0-1). The uncorrelated inputs (syn2-5) were
Poisson-distributed spike trains. The synchrony data was an inhomogeneous Poisson spike train with a rate modulated by a binary signal with random transition
points. Figure 7 shows a normalised histogram of spike intervals between the correlated inputs for synapses 0 and 1 (Figure 7a) and the histogram of the uncorrelated
inputs for synapses 2 and 3 (Figure 7b). Again, as can be seen in Figure 7c the
neuron with weight-dependent STDP can detect this low-level of synchrony with
non-coincident spikes. Clearly, the bimodal weight distribution identifies the synchrony pattern of the inputs.

Figure 6: (a) Experimental learning window for weight-dependent STDP. (b) Learning experiment with weight-dependent STDP. Synapses 1 and 2 share 20% of spike-timings. The other synapses receive completely uncorrelated activity. Correlated activity causes synapses to develop strong weights (Vw close to GND). [Panel (a) shows the change in Vw (V) against tpre - tpost (ms) with potentiation curves for initial weights Winit = 0.75 V, 2 V, and 3.25 V; panel (b) shows Vw traces against time (s). Plot data omitted.]
5
Conclusions
The circuits presented can be used to study both weight-dependent and weight-independent learning rules. The influence of weight-dependence on the final weight
distribution has been studied extensively [5][6]. In this paper, we have concentrated on the stabilising effect that moderate weight-dependence can have on learning processes that develop bimodal weight distributions. By introducing weight-dependence, subtle spike-timing correlations can be detected.
We have also shown experimentally that a small feed-forward network of silicon
neurons with STDP synapses can detect a hierarchical synchrony structure embedded in noisy spike trains.
We are currently investigating the synchrony amplification properties of silicon
neurons with bimodal STDP. We are also working on a new chip that uses lateral-inhibitory connections between neurons to classify data with complex synchrony
patterns.
References
[1] G.-Q. Bi and M.-m. Poo. Synaptic modifications in cultured hippocampal neurons: dependence on spike timing, synaptic strength and postsynaptic cell type. Journal of Neuroscience, 18:10464-10472, 1998.
[2] L.I. Zhang, H.W. Tao, C.E. Holt, W.A. Harris, and M.-m. Poo. A critical window for cooperation and competition among developing retinotectal synapses. Nature, 395:37-44, 1998.
[3] H. Markram, J. Lubke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213-215, 1997.
[4] A. Kepecs, M.C.W. van Rossum, S. Song, and J. Tegner. Spike-timing-dependent plasticity: common themes and divergent vistas. Biological Cybernetics, 87:446-458, 2002.
[5] S. Song, K.D. Miller, and L.F. Abbott. Competitive Hebbian learning through spike-timing-dependent synaptic plasticity. Nature Neuroscience, 3:919-926, 2000.
Figure 7: Detection of non-coincident spike-timing synchrony with weight-dependent STDP. (a) Normalised spike-interval histogram of the 2 correlated inputs (synapses 0 and 1). (b) Normalised spike-interval histogram between 2 uncorrelated inputs (synapses 2-5). (c) Synapses 0 and 1 win the learning competition. [Correlation histograms against the interval (s) and Vw traces against time (s); plot data omitted.]
[6] M. van Rossum and G.G. Turrigiano. Correlation based learning from spike timing dependent plasticity. Neurocomputing, 38-40:409-415, 2001.
[7] P. Hafliger, M. Mahowald, and L. Watts. A spike based learning neuron in analog VLSI. In M.C. Mozer, M.I. Jordan, and T. Petsche, editors, Advances in Neural Information Processing Systems 9, pages 692-698. MIT Press, 1996.
[8] A. Bofill, A. F. Murray, and D. P. Thompson. Circuits for VLSI implementation of temporally asymmetric Hebbian learning. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14. MIT Press, 2002.
[9] R. J. Vogelstein, F. Tenore, R. Philipp, M. S. Adlerstein, D. H. Goldberg, and G. Cauwenberghs. Spike timing-dependent plasticity in the address domain. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15. MIT Press, 2003.
[10] G. Indiveri. Circuits for bistable spike-timing-dependent plasticity neuromorphic VLSI synapses. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15. MIT Press, 2003.
[11] A. Bofill i Petit and A.F. Murray. Learning temporal correlations in biologically-inspired aVLSI. In IEEE International Symposium on Circuits and Systems, volume 5, pages 817-820, 2003.
1,513 | 2,376 | Iterative scaled trust-region learning in
Krylov subspaces via Pearlmutter's
implicit sparse Hessian-vector multiply
Eiji Mizutani
Department of Computer Science
Tsing Hua University
Hsinchu, 300 TAIWAN R.O.C.
[email protected]
James W. Demmel
Mathematics and Computer Science
University of California at Berkeley,
Berkeley, CA 94720 USA
[email protected]
Abstract
The online incremental gradient (or backpropagation) algorithm is
widely considered to be the fastest method for solving large-scale
neural-network (NN) learning problems. In contrast, we show that
an appropriately implemented iterative batch-mode (or block-mode)
learning method can be much faster. For example, it is three times
faster in the UCI letter classification problem (26 outputs, 16,000
data items, 6,066 parameters with a two-hidden-layer multilayer
perceptron) and 353 times faster in a nonlinear regression problem
arising in color recipe prediction (10 outputs, 1,000 data items,
2,210 parameters with a neuro-fuzzy modular network). The three
principal innovative ingredients in our algorithm are the following:
First, we use scaled trust-region regularization with inner-outer iteration to solve the associated "overdetermined" nonlinear least
squares problem, where the inner iteration performs a truncated
(or inexact) Newton method. Second, we employ Pearlmutter's
implicit sparse Hessian matrix-vector multiply algorithm to construct the Krylov subspaces used to solve for the truncated Newton update. Third, we exploit sparsity (for preconditioning) in the
matrices resulting from the NNs having many outputs.
1
Introduction
Our objective function to be minimized for optimizing the n-dimensional parameter vector θ of an F-output NN model is the sum over all the d data of squared residuals:

E(θ) = (1/2) ||r(θ)||_2^2 = (1/2) Σ_{i=1}^{m} r_i^2 = (1/2) Σ_{k=1}^{F} ||r_k||_2^2.

Here, m = F·d; r(θ) is the m-dimensional residual vector composed of all m residual elements r_i (i = 1, . . . , m); and r_k is the d-dimensional residual vector evaluated at terminal node k. The gradient vector and the Hessian matrix of E(θ) are given by g ≡ J^T r and H ≡ J^T J + S, respectively, where J, the m×n (residual) Jacobian matrix of r, is readily obtainable from the backpropagation (BP) process, and S is the matrix of second-derivative terms of r; i.e., S ≡ Σ_{i=1}^{m} r_i ∇²r_i. Most nonlinear least squares algorithms take advantage of information about J or its cross product, called the Gauss-Newton (GN) Hessian J^T J (or the Fisher information matrix for E(.) in Amari's natural-gradient learning [1]), which is the important portion of H because the influence of S becomes weaker and weaker as the residuals become smaller while learning progresses. With multiple F-output nonlinear models (except fully-connected NNs), J is known to have the m×n block-angular matrix form (see [7, 6] and references therein).

For instance, consider a single-hidden-layer S-H-F MLP (with S input, H hidden, and F output nodes); there are n_A = F(H + 1) terminal parameters θ^A (including threshold parameters) on direct connections to the F terminal nodes, each of which has C_A (= H + 1) direct connections, and the rest of the n_B = H(S + 1) parameters are not directly connected to any terminal node; hence, n_B hidden parameters θ^B. In other words, the model's parameters θ (n = F·C_A + n_B in total) can separate as θ^T = [θ^{A,T} | θ^{B,T}] = [θ_1^{A,T}, . . . , θ_k^{A,T}, . . . , θ_F^{A,T} | θ^{B,T}], where θ_k^A is the vector of the kth subset of C_A terminal parameters directly linked to terminal node k (k = 1, . . . , F). The associated residual Jacobian matrix J can be given in the block-angular form below left, and thus the (full) Hessian matrix H has the n×n sparse block-arrow form below right (* denotes a non-zero block), as does the GN-Hessian J^T J:

    J = [ A_1               B_1 ]             [ *               * ]
        [      A_2          B_2 ]             [     *           * ]
        [           ...     ... ]  (m×n),  H =[          ...    * ]  (n×n).   (1)
        [               A_F B_F ]             [ *    *   ...    * ]
Here in J, A_k and B_k are d×C_A and d×n_B Jacobian matrices, respectively, of the d-dimensional residual vector r_k evaluated at terminal node k. Notice that there are F diagonal A_k blocks [because the (F − 1)C_A terminal parameters excluding θ_k^A have no effect on r_k], and F vertical B_k blocks corresponding to the n_B hidden parameters θ^B that contribute to minimizing all the residuals r_k (k = 1, . . . , F) evaluated at all F terminal nodes. Therefore, the posed problem is overdetermined when m > n (namely, d > C_A + n_B/F) holds. In addition, when the terminal nodes have linear identity functions, the terminal parameters θ^A are linear, and thus all A_k blocks become identical, A_1 = A_2 = . . . = A_F, with H + 1 hidden-node outputs (including one constant bias-node output) in each row. For small- and medium-scale problems, direct batch-mode learning is recommendable with a suitable "direct" matrix factorization, but attention must be paid to exploiting the obvious sparsity in either block-angular J or block-arrow H so as to render the algorithms efficient in both memory and operation counts [7, 6]. For large-scale problems, Krylov subspace methods, which circumvent the need to perform time-consuming and memory-intensive direct matrix factorizations, can be employed to realize what we call iterative batch-mode learning. If any rows (or columns) of those matrices A_k and B_k are not needed explicitly, then Pearlmutter's method [11] can automatically exploit such sparsity to perform the sparse Hessian-vector product in constructing a Krylov subspace for parameter optimization, which we describe in what follows with our numerical evidence.

2
Inner-Outer Iterative Scaled Trust-Region Methods

Practical Newton methods enjoy both the global convergence property of the Cauchy (or steepest descent) method and the fast local convergence of the Newton method.

2.1
Outer iteration process in trust-region methods

One might consider a convex combination of the Cauchy step Δθ_Cauchy and the Newton step Δθ_Newton such as (using a scalar parameter h):

Δθ_Dogleg ≡ (1 − h) Δθ_Cauchy + h Δθ_Newton,   (2)

which is known as the dogleg step [4, 9]. This step yields a good approximate solution to the so-called "scaled 2-norm" or "M-norm" trust-region subproblem (e.g., see Chap. 7 in [2]) with Lagrange multiplier λ below:

min_Δθ q(Δθ) subject to ||Δθ||_M ≤ R,  or  min_Δθ q(Δθ) + (λ/2)(Δθ^T M Δθ − R²),   (3)

where the distances are measured in the M-norm ||x||_M = sqrt(x^T M x) with a symmetric positive definite matrix M, and R (called the trust-region radius) signifies the trust-region size of the local quadratic model q(Δθ) ≡ E(θ) + g^T Δθ + (1/2) Δθ^T H Δθ. Radius R is controlled according to how well q(.) predicts the behavior of E(.) by checking the error reduction ratio below:

ρ = (Actual error reduction) / (Predicted error reduction) = [E(θ_now) − E(θ_next)] / [E(θ_now) − q(Δθ)].   (4)

For more details, refer to [9, 2]. The posed constrained quadratic minimization can be solved with Lagrange multiplier λ: If Δθ is a solution to the posed problem, then Δθ satisfies the formula (H + λM)Δθ = −g, with λ(||Δθ||_M − R) = 0, λ ≥ 0, and H + λM positive semidefinite. In the nonlinear least squares context, the nonnegative scalar parameter λ is known as the Levenberg-Marquardt parameter. When λ = 0 (namely, R ≥ ||Δθ_Newton||_M), the trust-region step Δθ becomes the Newton step Δθ_Newton ≡ −H⁻¹g, and, as λ increases (i.e., as R decreases), Δθ gets closer to the (full) Cauchy step Δθ_Cauchy ≡ −(g^T M⁻¹ g / g^T M⁻¹ H M⁻¹ g) M⁻¹ g. When R < ||Δθ_Cauchy||_M, the trust-region step Δθ reduces to the restricted Cauchy step Δθ_RC ≡ −(R/||Δθ_Cauchy||_M) Δθ_Cauchy. If ||Δθ_Cauchy||_M < R < ||Δθ_Newton||_M, Δθ is the "dogleg step," intermediate between Δθ_Cauchy and Δθ_Newton, as shown in Eq. (2), where the scalar h (0 < h < 1) is the positive root of ||s + hp||_M = R:

h = [−s^T M p + sqrt((s^T M p)² + p^T M p (R² − s^T M s))] / (p^T M p),   (5)

with s ≡ Δθ_Cauchy and p ≡ Δθ_Newton − Δθ_Cauchy (when p^T g < 0). In this way, the trial step Δθ is subject to trust-region regularization.
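To make Eqs. (2)-(5) concrete, here is a small numpy sketch of the scaled dogleg step for given g, H, and M, assuming H and M positive definite; it forms and factors the matrices explicitly only for clarity, whereas the whole point of the inner iteration below is to avoid doing so.

import numpy as np

def dogleg_step(g, H, M, R):
    # Scaled (M-norm) dogleg step for q(s) = g's + s'Hs/2, ||s||_M <= R.
    norm_M = lambda x: np.sqrt(x @ (M @ x))
    Minv_g = np.linalg.solve(M, g)
    # Full Cauchy step: minimizer of q along the preconditioned gradient.
    cauchy = -(g @ Minv_g) / (Minv_g @ (H @ Minv_g)) * Minv_g
    if norm_M(cauchy) >= R:                 # restricted Cauchy step
        return (R / norm_M(cauchy)) * cauchy
    newton = -np.linalg.solve(H, g)
    if norm_M(newton) <= R:                 # the Newton step already fits
        return newton
    s, p = cauchy, newton - cauchy          # dogleg segment
    sMp, pMp, sMs = s @ (M @ p), p @ (M @ p), s @ (M @ s)
    h = (-sMp + np.sqrt(sMp**2 + pMp * (R**2 - sMs))) / pMp   # Eq. (5)
    return (1.0 - h) * cauchy + h * newton                    # Eq. (2)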
In large-scale problems, the linear-equation solution sequence {Δθ_k} is generated iteratively while seeking a trial step Δθ in the inner iteration process, and the parameter sequence {θ_i}, whose two consecutive elements are denoted by θ_now and θ_next, is produced by the outer iteration (i.e., epoch in batch mode). The outer iterative process updates parameters by θ_next = θ_now + Δθ without taking any uphill movement: That is, if the step is not satisfactory, then R is decreased so as to realize an important Levenberg-Marquardt concept: the failed step is shortened and deflected towards the Cauchy-step direction simultaneously. For this purpose, the trust-region methods compute the gradient vector in batch mode or with a (sufficiently large) data block (i.e., block mode; see our demonstration in Section 3).
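A sketch of this outer iteration built around the ratio ρ of Eq. (4), reusing dogleg_step from the sketch above; the accept/shrink/expand thresholds (0.25, 0.75) and factors are conventional textbook choices, not values taken from this paper.

import numpy as np

def outer_loop(E, grad, hess, theta, R=1.0, n_epochs=100):
    # Trust-region outer iteration: never move uphill, adapt the radius R.
    M = np.eye(theta.size)                              # trivial preconditioner
    for _ in range(n_epochs):
        g, H = grad(theta), hess(theta)
        step = dogleg_step(g, H, M, R)
        predicted = -(g @ step + 0.5 * step @ (H @ step))   # E(now) - q(step)
        rho = (E(theta) - E(theta + step)) / predicted      # Eq. (4)
        if rho < 0.25:
            R *= 0.25      # poor model fit: shrink R, which also deflects
                           # the next dogleg step toward the Cauchy direction
        elif rho > 0.75:
            R *= 2.0       # model is trustworthy: expand R
        if rho > 0.0:      # accept only actual error reductions
            theta = theta + step
    return theta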
2.2
Inner iteration process with truncated preconditioned linear CG

We employ a preconditioned conjugate gradient (PCG) method (among many Krylov subspace methods; see Section 6.6 in [3] and Chapter 5 in [2]) with our symmetric positive definite preconditioner M for solving the M-norm trust-region subproblem (3). This is the truncated PCG (also known as Steihaug-Toint CG), applicable even to nonconvex problems, for solving inexactly the Newton formula by the inner iterative process below (see pp. 628-629 in [10]; pp. 202-218 in [2]) based on the standard PCG algorithm (e.g., see page 317 in [3]):
Algorithm 1: The inner iteration process via preconditioned CG.
1. Initialization (k = 0): Set Δθ_0 = 0 and π_0 = −g (= −g − HΔθ_0); solve Mz = π_0 for the pseudoresiduals z = M⁻¹π_0; compute γ_0 = π_0^T z; set k = 1 and d_1 = z, and then proceed to Step 2.
2. Matrix-vector product: z = Hd_k = J^T(Jd_k) + Sd_k (see also Algorithm 2).
3. Curvature check: κ_k = d_k^T z = d_k^T H d_k. If κ_k > 0, then continue with Step 4. Otherwise, compute h (> 0) such that ||Δθ_{k−1} + h d_k||_M = R, and terminate with Δθ = Δθ_{k−1} + h d_k.
4. Step size: α_k = γ_{k−1}/κ_k.
5. Approximate solution: Δθ_k = Δθ_{k−1} + α_k d_k. If ||Δθ_k||_M < R, go on to Step 6; else terminate with Δθ = (R/||Δθ_k||_M) Δθ_k.   (6)
6. Linear-system residuals: π_k = π_{k−1} − α_k z [= −g − HΔθ_k = −∇q(Δθ_k)]. If ||π_k||_2 is small enough, i.e., ||π_k||_2 ≤ η||g||_2, then terminate with Δθ = Δθ_k.
7. Pseudoresiduals: z = M⁻¹π_k, and then compute γ_k = π_k^T z.
8. Conjugation factor: β_{k+1} = γ_k/γ_{k−1}.
9. Search direction: d_{k+1} = z + β_{k+1} d_k.
10. If k < k_limit, set k = k + 1 and return to Step 2. Otherwise, terminate with Δθ = Δθ_k.
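The listing translates directly into numpy. In this sketch hvp(v) is any callback returning Hv (Algorithm 2 below, or Pearlmutter's method), the preconditioner is the diagonal array m_diag, and the names mirror the steps; it is a minimal rendering, not the authors' implementation.

import numpy as np

def steihaug_pcg(hvp, g, m_diag, R, eta=0.01, k_limit=None):
    """Truncated (Steihaug-Toint) PCG for min q(s) subject to ||s||_M <= R,
    with a diagonal preconditioner M = diag(m_diag)."""
    n = g.size
    k_limit = k_limit or n
    norm_M = lambda x: np.sqrt(np.sum(m_diag * x * x))
    s = np.zeros(n)
    pi = -g                               # linear-system residual -g - H s
    z = pi / m_diag                       # pseudoresiduals z = M^{-1} pi
    gamma = pi @ z
    d = z.copy()
    g_norm = np.linalg.norm(g)

    def boundary_h(s, d):                 # positive root of ||s + h d||_M = R
        a = np.sum(m_diag * d * d)
        b = np.sum(m_diag * s * d)
        sMs = np.sum(m_diag * s * s)
        return (-b + np.sqrt(b * b + a * (R * R - sMs))) / a

    for k in range(1, k_limit + 1):
        Hd = hvp(d)
        kappa = d @ Hd                    # Step 3: curvature check
        if kappa <= 0:
            return s + boundary_h(s, d) * d
        alpha = gamma / kappa             # Step 4
        s_try = s + alpha * d             # Step 5
        if norm_M(s_try) >= R:
            return (R / norm_M(s_try)) * s_try
        s = s_try
        pi = pi - alpha * Hd              # Step 6
        if np.linalg.norm(pi) <= eta * g_norm:
            return s
        z = pi / m_diag                   # Step 7
        gamma_new = pi @ z
        d = z + (gamma_new / gamma) * d   # Steps 8-9
        gamma = gamma_new
    return s                              # Step 10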
At Step 3, h is obtainable from Eq. (5) with s = Δθ_{k−1} and p = d_k plugged in. Likewise, in place of Eq. (6) at Step 5, we may use Eq. (5) for Δθ = Δθ_{k−1} + h d_k such that ||Δθ_{k−1} + h d_k||_M = R, but both computations become identical if R ≤ ||Δθ_Cauchy||_M; otherwise, Eq. (6) is less expensive and tends to give more bias towards the Newton direction. The inner-iterative process terminates (i.e., stops at inner iteration k) when one of the next four conditions holds:

(A) d_k^T H d_k ≤ 0,  (B) ||Δθ_k||_M ≥ R,  (C) ||HΔθ_k + g||_2 ≤ η||g||_2,  (D) k = k_limit.   (7)

Condition (D) at Step 10 is least likely to be met since there would be no prior knowledge about the preset limit k_limit on the inner iterations (usually, k_limit = n). As long as d_k^T H d_k > 0 holds, PCG works properly until the CG trajectory hits the trust-region boundary [Condition (B) at Step 5], or till the 2-norm linear-system residuals become small [Condition (C) at Step 6], where η can be fixed (e.g., η = 0.01). Condition (A) d_k^T H d_k ≤ 0 (at Step 3) may hold when the local model is not strictly convex (or H is not positive definite). That is, d_k is a direction of zero or negative curvature; a typical exploitation of non-positive curvature is to set Δθ equal to the "step to the trust-region boundary along that curvature segment (in Step 3)" as a model minimizer in the trust region. In this way, the terminated kth CG step yields an approximate solution to the trust-region subproblem (3), and it belongs to the Krylov subspace span{−M^{−1/2}g, −(M^{−1/2}HM^{−1/2})M^{−1/2}g, . . . , −(M^{−1/2}HM^{−1/2})^{k−1}M^{−1/2}g}, resulting from our application of CG (without multiplying by M^{−1/2}) to the symmetric Newton formula (M^{−1/2}HM^{−1/2})(M^{1/2}Δθ) = −M^{−1/2}g, because M⁻¹H (in the system M⁻¹HΔθ = −M⁻¹g) is unlikely to be symmetric (see page 317 in [3]) even if M is a diagonal matrix (unless M = I).

The overall memory requirement of Algorithm 1 is O(n) because at most five n-vectors are enough to implement it. Since the matrix-vector product Hd_k at Step 2 dominates the operation cost of the entire inner-outer process, we can employ Pearlmutter's method with no H explicitly required. To better understand the method, we first describe a straightforward implicit sparse matrix-vector multiply when H = J^T J; it evaluates J^T J d_i (without forming J^T J) as the two-step implicit matrix-vector product z = J^T(J d_i), exploiting the block-angular J in Eq. (1); i.e., working on each block, A_k and B_k, in a row-wise manner below:
Algorithm 2: Implicit (i.e., matrix-free) sparse matrix-vector multiplication step with an F-output NN model at inner iteration i, starting with z = 0:
for p = 1 to d (i.e., one sweep of the d training data):
  (a) do a forward pass to compute the F final outputs y_p(θ) on datum p;
  for k = 1 to F (at each terminal node k):
    (b) do a backward pass to obtain the pth row of A_k as the C_A-vector a_{p,k}^T, and the pth row of B_k as the n_B-vector b_{p,k}^T;
    (c) compute β_k a_{p,k} and β_k b_{p,k}, where the scalar β_k = a_{p,k}^T d_i^{a,k} + b_{p,k}^T d_i^b, and then add them to their corresponding elements of z;
  end for k.
end for p.
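To ground both Algorithm 2 and the Pearlmutter-style alternative discussed next, here is a numpy sketch of the GN Hessian-vector product (J^T J)v for a single-hidden-layer MLP with linear outputs. The R-forward pass computes, per datum, one scalar per output node (exactly the β_k of Step (c)), and an ordinary backward pass then applies J^T; the architecture and names are illustrative assumptions, not the authors' code.

import numpy as np

def gn_hvp(params, v, X):
    """(J'J) v for y = W2 @ tanh(W1 @ x + b1) + b2, summed over the data X.
    params and v are (W1, b1, W2, b2) tuples of matching shapes."""
    W1, b1, W2, b2 = params
    V1, vb1, V2, vb2 = v
    dW1 = np.zeros_like(W1); db1 = np.zeros_like(b1)
    dW2 = np.zeros_like(W2); db2 = np.zeros_like(b2)
    for x in X:
        # Forward pass.
        z1 = W1 @ x + b1
        h = np.tanh(z1)
        dh_dz = 1.0 - h * h
        # R-forward pass: Ry = J v restricted to this datum; its kth entry
        # is the scalar beta_k of Algorithm 2, one value per output node.
        Rz1 = V1 @ x + vb1
        Rh = dh_dz * Rz1
        Ry = W2 @ Rh + V2 @ h + vb2
        # Backward pass with Ry in place of the residual: accumulates J'(Jv).
        dW2 += np.outer(Ry, h); db2 += Ry
        dh = W2.T @ Ry
        dz1 = dh_dz * dh
        dW1 += np.outer(dz1, x); db1 += dz1
    return dW1, db1, dW2, db2

Flattening the four parameter blocks into a single n-vector turns gn_hvp into the hvp callback needed by the steihaug_pcg sketch above.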
Here, Step (a) costs at least 2dn (see details in [8]); Step (b) costs at least 2mlu ,
where m=F d and lu =CA +nB < n=F CA +nB ; and Step (c) costs 4mlu ; overall, Algorithm 2 costs O(mlu ), linear in F . Note that if sparsity is ignored, the cost becomes
O(mn), quadratic in F since mn = F d(F CA +nB ). Algorithm 2 can extract explicitly
F pairs of row vectors (aT and bT ) of J (with F lu storage) on each datum, making
it easier to apply other numerical linear algebra approaches such as preconditioning
to reduce the number of inner iterations. Yet, if the row vectors are not needed explicitly, then Pearlmutter's method is more efficient, calculating σ_k [see Step (c)] in its forward pass (i.e., R{y_k} = σ_k; see Eq. (4.3) on page 151 in [11]). When H = J^T J, it is easy to simplify its backward pass (see Eq. (4.4) on page 152 in [11]), just by eliminating the terms involving the residuals r and the second derivatives of the node functions f(.), so as to multiply the vectors a_k and b_k through by the scalar σ_k implicitly. This simplified method of Pearlmutter runs in time O(dn), whereas Algorithm 2 runs in O(m l_u). Since m l_u − dn = dF(C_A + n_B) − d(F C_A + n_B) = d(F − 1) n_B, Pearlmutter's method can be up to F times faster than Algorithm 2. Furthermore, Pearlmutter's original method efficiently multiplies an n-vector by the "full" Hessian matrix, still in O(dn), for

z = H d_i = J^T (J d_i) + S d_i = Σ_{j=1}^{m} (u_j^T d_i) u_j + Σ_{j=1}^{m} [∇² r_j] r_j d_i,

where u_j^T is the jth row vector of J; notably, the method automatically exploits the block-arrow sparsity of H [see Eq. (1), right] in essentially the same way as the standard BP deals with the block-angular sparsity of J [see Eq. (1), left] to perform the matrix-vector product g = J^T r in O(dn).
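To make the contrast concrete, here is a small NumPy sketch for a one-hidden-layer MLP that computes the Gauss-Newton product z = J^T(J v) with one R{.}-style forward pass followed by the simplified backward pass; the tanh nonlinearity, the omission of biases, and all names are illustrative assumptions of ours.

```python
import numpy as np

def gauss_newton_vec(W1, W2, X, V1, V2):
    """Gauss-Newton product J^T (J v) for the model y = W2 @ tanh(W1 @ x),
    summed over the d columns of X; (V1, V2) is v reshaped to match (W1, W2)."""
    Z1 = W1 @ X                     # hidden pre-activations, shape (hidden, d)
    Hh = np.tanh(Z1)                # hidden outputs
    dH = 1.0 - Hh ** 2              # tanh'(Z1)
    # R{.}-forward pass: R{y} = J v, one column per datum
    Rh = dH * (V1 @ X)
    Ry = V2 @ Hh + W2 @ Rh          # plays the role of sigma_k in Algorithm 2(c)
    # Simplified backward pass: backpropagate R{y} in place of the residual r
    GW2 = Ry @ Hh.T                 # block of J^T (J v) for W2
    GW1 = ((W2.T @ Ry) * dH) @ X.T  # block of J^T (J v) for W1
    return GW1, GW2

# Usage on a tiny random problem (3 inputs, 4 hidden units, 2 outputs, 5 data):
rng = np.random.default_rng(0)
W1, W2 = rng.standard_normal((4, 3)), rng.standard_normal((2, 4))
X = rng.standard_normal((3, 5))
V1, V2 = rng.standard_normal(W1.shape), rng.standard_normal(W2.shape)
GW1, GW2 = gauss_newton_vec(W1, W2, X, V1, V2)
```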
3 Experiments and Discussion
In simulation, we compared the following five algorithms:
Algorithm A: Online-BP (i.e., H = I) with a fixed momentum (0.8);
Algorithm B: Algorithm 2 alone for Algorithm 1 with H = J^T J (see [6]);
Algorithm C: Pearlmutter's method alone for Algorithm 1 with H = J^T J;
Algorithm D: Algorithm 2 to obtain the preconditioner M = diag(J^T J) only, and Pearlmutter's method for Algorithm 1 with H = J^T J;
Algorithm E: Same as Algorithm D except with the "full" Hessian H = J^T J + S.
Algorithm A is tested for speed-comparison purposes because, if it works, it is probably the fastest. In Algorithms D and E, Algorithm 2 was only employed for obtaining a diagonal preconditioner M = diag(J^T J) (or Jacobi preconditioner) for Algorithm 1, whereas in Algorithms B and C, no preconditioning (M = I) was applied. The performance comparisons were made with a nonlinear regression task and a classification benchmark, the letter recognition problem, from the UCI machine learning repository. All the experiments were conducted on a 1.6-GHz Pentium-IV PC with FreeBSD 4.5 and the gcc-2.95.3 compiler (with the -O2 optimization flag).
The first regression task was a real-world application, color recipe prediction: a problem of determining mixing proportions of available colorants to reproduce a given target color, requiring mappings from 16 inputs (16 spectral reflectance signals of the target color) to ten outputs (F = 10) (ten colorant proportions) using 1,000 training data (d = 1,000; m = 10,000) with 302 test data. The table below shows the results averaged over 20 trials with a single 16-82-10 MLP [n = 2,224 (C_A = 83; n_B = 1,394; l_u = 1,477); hence, m l_u/(dn) = 6.6], which was optimized until "training RMSE ≤ 0.002 (an application requirement)" was satisfied, at which point we say that "convergence" (relatively early stopping) occurs. Clearly, the posed regression task is nontrivial because Algorithm A, online-BP, took roughly six days (averaged over only ten trials), nearly 280 (= 8748.4/31.2) times slower than the (fastest) Algorithm D. In generalization performance, all the posed algorithms were more or less equivalent.
Model             |               Single 16-82-10 MLP               |    Five-MLP mixed
Algorithm         |      A         B      C      D      E           |     B      C      D
Total time (min)  |  8748.4      336.4  107.2   31.2   64.5         |  162.3   57.6   20.9
Stopped epoch     | 2,916,495.2  272.5  261.5  132.7  300.3         |  147.3  160.0  179.1
Time/epoch (sec)  |     0.2       73.8   24.6   14.1   12.9         |   65.2   21.6    7.0
Inner itr./epoch  |     N/A      218.3  216.0  142.7  110.9         |  193.8  174.1   66.0
Flops ratio/itr.  |     N/A        3.9    1.0    1.0    1.3         |    4.1    1.2    1.2
Test RMSE         |   0.020      0.015  0.015  0.015  0.015         |  0.016  0.016  0.017
We also observed that use of the full Hessian matrix (Algorithm E) helped reduce inner iterations per epoch, although the total convergence time turned out to be greater than that obtained with the GN-Hessian (Algorithm D), presumably because our Jacobi preconditioner must be more suitable for the GN-Hessian than for the full Hessian, and perhaps because the inner iterative process of Algorithm E can terminate due to detection of non-positive curvature in Eq. (7)(A); this extra chance of termination may increase the total epochs, but helps reduce the time per epoch. Remarkably, the time per inner iteration of Algorithm E did not differ much from Algorithms C and D owing to Pearlmutter's method; in fact, given the preconditioner M, Algorithm E merely needed about 1.3 times more flops† per inner iteration than Algorithms C and D did, although Algorithm B needed nearly 3.9 times more. The measured megaflop rates for all these codes lie roughly in the range 200-270 Mflop/sec; typically, below 10% of peak machine speed.
† The floating-point operation counts were measured by using PAPI (Performance Application Programming Interface); see http://icl.cs.utk.edu/projects/papi/.
For improving single-MLP performance, one might employ two layers of hidden nodes (rather than one large hidden layer; see the letter problem below), which increases n_B while reducing n_A, rendering Algorithm 2 less efficient (i.e., slower). Alternatively, one might introduce direct connections between the input and terminal output layers, which increases C_A, the column size of A_k, retaining nice parameter separability. Yet another approach (if applicable) is to use a "complementary mixtures of Z MLP-experts" model (or a neuro-fuzzy modular network) that combines Z smaller-size MLPs complementarily; the associated residual vector to be minimized becomes r(θ) = y(θ) − t = Σ_{i=1}^{Z} w_i o_i − t, where the scalar w_i, the ith output of the integrating unit, is the ith (normalized) mixing proportion
assigned to the outputs (the F-vector o_i) of expert-MLP i. Note that each expert learns "residuals" rather than "desired outputs" (unlike in the committee method below), in the sense that only the final combined outputs y must come close to the desired ones t. That is, there are strong coupling effects (see page 80 in [5]) among all the experts; hence, it is crucial to consider the global Hessian across all experts to optimize them simultaneously [7]. The corresponding J has the same block-angular form as that in Eq. (1)(left), with A_k ≡ [A_k^1 A_k^2 ··· A_k^Z] and B_k ≡ [B_k^1 B_k^2 ··· B_k^Z] (k = 1, ..., F).
Here, the residual Jacobian portion for the parameters of the integrating unit was omitted because they were merely fine-tuned with a steepest-descent type method, owing to our knowledge-based design for input partitioning to avoid (too many) local experts. Specifically, the spectral reflectance signals (16 inputs) were converted to the hue angle as input to the integrating unit, which consists of five bell-shaped basis functions, partitioning that hue subspace alone in a fuzzy fashion into only five color regions (red, yellow, green, blue, and violet) for five 16-16-10 MLP-experts, each of which receives all 16 spectral signals as input [hence, Z = 5; n = 2,210 (C_A = 85; n_B = 1,360); m l_u/(dn) = 6.5]. Due to the localized parameter tunings, our five-MLP mixtures model was better in learning; see the faster learning in the table above. In particular, our model with Algorithm D worked 353 (≈ 123.1 × 60.0/20.9) times faster than with Algorithm A, which took 123.1 hours (see [6]), and 419 (≈ 8748.4/20.9) times faster
than the single MLP with Algorithm A. For our complementary mixtures model, the R{.}-operator of Pearlmutter's method is readily applicable; for instance, at terminal node k (k = 1, ..., F):

R{r_k} = R{y_k} = Σ_{i=1}^{Z} ( R{o_{i,k}} w_i + R{w_i} o_{i,k} ),

where each R{o_{i,k}} yields σ_k [see Algorithm 2(c)] for each expert-MLP i (i = 1, ..., Z).
The second problem, the letter classification benchmark, involves 16 inputs (features) and 26 outputs (alphabets) with 16,000 training data (F = 26; d = 16,000; m = 416,000) plus 4,000 test data. We used the 16-70-50-26 MLP (see [12]) (n = 6,066) with 10 sets of different initial parameters randomly generated uniformly in the range [−0.2, 0.2]. We implemented block-mode learning (as well as batch mode) just by splitting the training data set into two or four equally-sized data blocks; each data block alone is employed for Algorithms 1 and 2, except for computing ρ in Eq. (4), where evaluation of E(.) involves all the d training data. Notice that the two-block mode learning scheme updates the model's parameters θ twice per epoch, whereas online-BP updates them on each datum (i.e., d times per epoch). We observed that possible redundancy in the data set appeared to help reduce the number of inner iterations, speeding up our iterative batch-mode learning; therefore, we did not use preconditioning. The next table shows the average performance (over ten trials) when the best test-set performance was obtained by epoch 1,000 with online-BP (i.e., Algorithm A) and by epoch 50 with Algorithm C in three learning modes:
Average results    | Online-BP | Four-block mode | Two-block mode | Batch mode
Total time (min)   |   63.2    |      22.4       |      41.0      |    61.1
Stopped epoch      |  597.8    |      36.6       |      22.1      |    27.1
Time/epoch (sec)   |    6.3    |      36.8       |     111.7      |   135.2
Avg. inner itr.    |    N/A    |    4.5/block    |   26.3/block   |  31.0/batch
Error (train/test) | 2.3%/6.4% |    2.7%/5.1%    |   1.2%/4.6%    |  1.2%/4.9%
Committee error    | 0.2%/3.0% |    1.2%/2.8%    |   0.3%/2.2%    |  0.1%/2.3%
On average, Algorithm C in four-block mode worked about three (≈ 63.2/22.4) times faster than online-BP, and thus can work faster than batch-mode nonlinear-CG algorithms since, as reported in [12], online-BP worked faster than nonlinear CG. Here, we also tested the committee methods (see Chap. 8 in [13]) that merely combined all the (equally-weighted) outputs of the ten MLPs, which were optimized independently in this experiment. The committee error was better than the average error, as expected. Intriguingly, our block-mode learning schemes introduced a small (harmless) bias, improving the test-data performance; specifically, the two-block mode yielded the best test error rate, 2.2%, even with this simple committee method.
4 Conclusion and Future Directions
Pearlmutter's method can construct Krylov subspaces efficiently for implementing iterative batch- or block-mode learning. In our simulation examples, the simpler version of Pearlmutter's method (see Algorithms C and D) worked excellently. But it would be of interest to investigate other real-life large-scale problems to find out the strengths of the full-Hessian based methods (see Algorithm E), perhaps with a more elaborate preconditioner, which would be much more time-consuming per epoch but may reduce the total time dramatically; hence, one needs to deal with a delicate balancing act. Besides the simple committee method, it would be worth examining our algorithms for implementing other statistical learning methods (e.g., boosting) in conjunction with appropriate numerical linear algebra techniques. These are part of our overall ambitious goal of attacking practical large-scale problems.
References
[1] Shun-ichi Amari. Natural gradient works efficiently in learning. In Neural Computation, 10, pp. 251-276, 1998.
[2] A. R. Conn, N. I. M. Gould, and P. L. Toint. Trust-Region Methods. SIAM, 2000.
[3] James W. Demmel. Applied Numerical Linear Algebra. SIAM, 1997.
[4] J. E. Dennis, D. M. Gay, and R. E. Welsch. "An Adaptive Nonlinear Least-Squares Algorithm." In ACM Trans. on Mathematical Software, 7(3), pp. 348-368, 1981.
[5] R. A. Jacobs, M. I. Jordan, S. J. Nowlan and G. E. Hinton. "Adaptive Mixtures of Local Experts." In Neural Computation, pp. 79-87, Vol. 3, No. 1, 1991.
[6] Eiji Mizutani and James W. Demmel. "On structure-exploiting trust-region regularized nonlinear least squares algorithms for neural-network learning." In International Journal of Neural Networks. Elsevier Science, Vol. 16, pp. 745-753, 2003.
[7] Eiji Mizutani and James W. Demmel. "On separable nonlinear least squares algorithms for neuro-fuzzy modular network learning." In Proceedings of the IEEE Int'l Joint Conf. on Neural Networks, Vol. 3, pp. 2399-2404, Honolulu, USA, May 2002. (Available at http://www.cs.berkeley.edu/~eiji/ijcnn02.pdf.)
[8] Eiji Mizutani and Stuart E. Dreyfus. "On complexity analysis of supervised MLP-learning for algorithmic comparisons." In Proceedings of the INNS-IEEE Int'l Joint Conf. on Neural Networks, Vol. 1, pp. 347-352, Washington D.C., July 2001.
[9] Jorge J. Moré and Danny C. Sorensen. "Computing A Trust Region Step." In SIAM J. Sci. Stat. Comp. 4(3), pp. 553-572, 1983.
[10] Trond Steihaug. "The Conjugate Gradient Method and Trust Regions in Large Scale Optimization." In SIAM J. Numer. Anal., pp. 626-637, Vol. 20, No. 3, 1983.
[11] Barak A. Pearlmutter. "Fast exact multiplication by the Hessian." In Neural Computation, pp. 147-160, Vol. 6, No. 1, 1994.
[12] Holger Schwenk and Yoshua Bengio. "Boosting neural networks." In Neural Computation, pp. 1869-1887, Vol. 12, No. 8, 2000.
[13] Trevor Hastie, Robert Tibshirani, and Jerome Friedman. The Elements of Statistical Learning. Springer-Verlag, 2001 (Corrected printing 2002).
Collaborative Filtering
Benjamin Marlin
Department of Computer Science
University of Toronto
Toronto, ON, M5S 3H5, CANADA
[email protected]
Abstract
In this paper we present a generative latent variable model for
rating-based collaborative filtering called the User Rating Profile
model (URP). The generative process which underlies URP is designed to produce complete user rating profiles, an assignment of
one rating to each item for each user. Our model represents each
user as a mixture of user attitudes, and the mixing proportions are
distributed according to a Dirichlet random variable. The rating for
each item is generated by selecting a user attitude for the item, and
then selecting a rating according to the preference pattern associated with that attitude. URP is related to several models including
a multinomial mixture model, the aspect model [7], and LDA [1],
but has clear advantages over each.
1 Introduction
In rating-based collaborative filtering, users express their preferences by explicitly
assigning ratings to items that they have accessed, viewed, or purchased. We assume
a set of N users {1, ..., N }, a set of M items {1, ..., M }, and a set of V discrete rating
values {1, ..., V}. In the natural case where each user has at most one rating r^u_y for each item y, the ratings for each user form a vector with one component per item. Of course, the values of some components are not known. We refer to user u's rating vector as their rating profile, denoted r^u.
Rating prediction is the elementary task performed with rating-based data. Given
a particular item and user, the goal is to predict the user?s true rating for the
item in question. Early work on rating prediction focused on neighborhood-based
methods such as the GroupLens algorithm [9]. Personalized recommendations can
be generated for any user by first predicting ratings for all items the user has not
rated, and recommending items with the highest predicted ratings. The capability
to predict ratings has other interesting applications. Rating predictions can be
incorporated with content-based scores to create a preference augmented search
procedure [4]. Rating prediction also facilitates an active approach to collaborative
filtering using expected value of information. In such a framework the predicted
rating of each item is interpreted as its expected utility to the user [2].
In order to gain the maximum advantage from the expressive power of ratings, a
probabilistic model must enable the calculation of the distribution over ratings, and
thus the calculation of predicted ratings. A handful of such models exist including
the multinomial mixture model shown in figure 3, and the aspect model shown in
figure 1 [7]. As latent variable models, both the aspect model and the multinomial
mixture model have an intuitive appeal. They can be interpreted as decomposing
user preferences profiles into a set of typical preference patterns, and the degree to
which each user participates in each preference pattern. The settings of the latent
variable are casually referred to as user attitudes. The multinomial mixture model
constrains all users to have the same prior distribution over user attitudes, while
the aspect model allows each user to have a different prior distribution over user
attitudes. The added flexibility of the aspect model is quite attractive, but the
interpretation of the distribution over user attitudes as parameters instead of random variables induces several problems.1 First, the aspect model lacks a principled,
maximum likelihood inference procedure for novel user profiles. Second the number
of parameters in the model grows linearly with the number of users in the data set.
Recent research has seen the proposal of several generative latent variable models
for discrete data, including Latent Dirichlet Allocation [1] shown in figure 2, and
multinomial PCA (a generalization of LDA to priors other than Dirichlet) [3]. LDA
and mPCA were both designed with co-occurrence data in mind (word-document
pairs). They can only be applied to rating data if the data is first processed into
user-item pairs using some type of thresholding operation on the rating values.
These models can then be used to generate recommendations; however, they can
not be used to infer a distribution over ratings of items, or to predict the ratings of
items.
The contribution of this paper is a new generative, latent variable model that views
rating-based data at the level of user rating profiles. The URP model incorporates
proper generative semantics at the user level that are similar to those used in LDA
and mPCA, while the inner workings of the model are designed specifically for rating
profiles. Like the aspect model and the multinomial mixture model, the URP model
can be interpreted in terms of decomposing rating profiles into typical preference
patterns, and the degree to which each user participates in each pattern. In this
paper we describe the URP model, give model fitting and initialization procedures,
and present empirical results for two data sets.
2 The User Rating Profile Model
The graphical representations of the aspect, LDA, multinomial mixture, and URP models are shown in figures 1 through 4. In all models U is a user index, Y is an item index, Z is a user attitude, Z_y is the user attitude responsible for item y, R is a rating value, R_y is a rating value for item Y, and β_{vyz} is a multinomial parameter giving P(R_y = v|Z_y = z). In the aspect model θ is a set of multinomial parameters, where θ_{zu} represents P(Z = z|U = u). The number of these parameters obviously grows as the number of training users is increased. In the mixture of multinomials model θ is a single distribution over user attitudes, where θ_z represents P(Z = z). This gives the multinomial mixture model correct, yet simplistic, generative semantics at the user level. In both LDA and URP θ is not a parameter, but a Dirichlet random variable with parameter α. A unique θ is sampled for each user where θ_z gives
1 Girolami and Kabán have recently shown that a co-occurrence version of the aspect model can be interpreted as a MAP/ML estimated LDA model under a uniform Dirichlet prior [5]. Essentially the same relationship holds between the aspect model for ratings shown in figure 1, and the URP model.
Figure 1: Aspect Model  [graphical model]
Figure 2: LDA Model  [graphical model]
Figure 3: Multinomial Mixture Model  [graphical model]
Figure 4: URP Model  [graphical model]
P(Z = z) for that user. This gives URP much more powerful generative semantics at the user level than the multinomial mixture model. As with LDA, URP could be generalized to use any continuous distribution on the simplex, but in this case the Dirichlet leads to efficient prediction equations. Note that the bottom level of the LDA model consists of an item variable Y, and ratings do not come into LDA at any point.
The probability of observing a given user rating profile r^u under the URP model is shown in equation 1, where we define δ(r^u_y, v) to be equal to 1 if user u assigned rating v to item y, and 0 otherwise. Note that we assume unspecified ratings are missing at random. As in LDA, the Dirichlet prior renders the computation of the posterior distribution p(θ, z|r^u, α, β) = P(θ, z, r^u|α, β)/P(r^u|α, β) intractable.
P(r^u|α, β) = ∫_θ P(θ|α) ∏_{y=1}^{M} ∏_{v=1}^{V} ( Σ_{z=1}^{K} P(Z_y = z|θ) P(R_y = v|Z_y = z, β) )^{δ(r^u_y, v)} dθ    (1)

3 Parameter Estimation
The procedure we use for parameter estimation is a variational expectation maximization algorithm based on free energy maximization. As with LDA, other methods, including expectation propagation, could be applied. We choose to apply a fully factored variational q-distribution, as shown in equation 2. We define q(θ|γ^u) to be a Dirichlet distribution with Dirichlet parameters γ^u_z, and q(Z_y|φ^u_y) to be a multinomial distribution with parameters φ^u_{zy}.

P(θ, z|α, β, r^u) ≈ q(θ, z|γ^u, φ^u) = q(θ|γ^u) ∏_{y=1}^{M} q(Z_y = z_y|φ^u_y)    (2)
A per-user free energy function F[γ^u, φ^u, α, β] provides a variational lower bound on the log likelihood log p(r^u|α, β) of a single user rating profile. The sum of the per-user free energy functions F[γ^u, φ^u, α, β] yields the total free energy function F[γ, φ, α, β], which is a lower bound on the log likelihood of a complete data set of user rating profiles. The variational and model parameter updates are obtained by expanding F[γ, φ, α, β] using the previously described distributions, and maximizing the result with respect to γ^u, φ^u, α and β. The variational parameter updates are shown in equations 3 and 4. Ψ denotes the first derivative of the log gamma function, also known as the digamma or psi function.
φ^u_{zy} ∝ ∏_{v=1}^{V} β_{vyz}^{δ(r^u_y, v)} exp( Ψ(γ^u_z) − Ψ(Σ_{j=1}^{K} γ^u_j) )    (3)

γ^u_z = α_z + Σ_{y=1}^{M} φ^u_{zy}    (4)
By iterating the variational updates with fixed α and β for a particular user, we are guaranteed to reach a local maximum of the per-user free energy F[γ^u, φ^u, α, β]. This iteration is a well-defined approximate inference procedure for the URP model. The model multinomial update has a closed form solution, as shown in equation 5. This is not the case for the model Dirichlet α, due to the coupling of its parameters. However, Minka has proposed two iterative methods for estimating a Dirichlet distribution from probability vectors that can be used here. We give Minka's fixed-point iteration in equations 6 and 7, which yields very similar results compared to the alternative Newton iteration. Details for both procedures, including the inversion of the digamma function, may be found in [8].
β_{vyz} ∝ Σ_{u=1}^{N} φ^u_{zy} δ(r^u_y, v)    (5)

Ψ(α_z) = Ψ(Σ_{j=1}^{K} α_j) + (1/N) Σ_{u=1}^{N} ( Ψ(γ^u_z) − Ψ(Σ_{j=1}^{K} γ^u_j) )    (6)

α_z = Ψ^{-1}( Ψ(α_z) )    (7)
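A small NumPy sketch of this fixed-point update may be helpful; the digamma inverse is computed by Newton's method, following the initialization suggested in [8], and the function names are ours.

```python
import numpy as np
from scipy.special import digamma, polygamma

def inverse_digamma(y, iters=5):
    """Solve digamma(x) = y for x > 0 by Newton's method (cf. Minka [8])."""
    x = np.where(y >= -2.22, np.exp(y) + 0.5, -1.0 / (y - digamma(1.0)))
    for _ in range(iters):
        x = x - (digamma(x) - y) / polygamma(1, x)
    return x

def update_alpha(alpha, gamma, iters=50):
    """Fixed-point update of the Dirichlet parameter alpha (K,) from the
    N x K matrix gamma of per-user variational Dirichlet parameters."""
    # average of E_q[log theta_z] over users, i.e. Psi(gamma_z) - Psi(sum gamma)
    log_theta = np.mean(digamma(gamma) - digamma(gamma.sum(1, keepdims=True)), 0)
    for _ in range(iters):
        alpha = inverse_digamma(digamma(alpha.sum()) + log_theta)
    return alpha
```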
4 Model Fitting and Initialization
We give a variational expectation maximization procedure for model fitting in this section, as well as an initialization method that has proved to be very effective for the URP model. Lastly, we discuss the stopping criteria used for the EM iterations.

4.1 Model Fitting
The variational inference procedure should be run to convergence to ensure a maximum likelihood solution. However, if we are satisfied with simply increasing the free energy at each step, other fitting procedures are possible. In general, the number of steps of variational inference can be determined by a user-dependent heuristic function H(u). Buntine uses a single step of variational inference for each user to fit the mPCA model. At the other end of the spectrum, Blei et al. select a sufficient number of steps to achieve convergence when fitting the LDA model. Empirically, we have found that simple linear functions of the number of ratings in each user profile provide a good heuristic. The details of the fitting procedure are given below.
E-Step:
1. For all users u
2.   For h = 0 to H(u)
3.     φ^u_{zy} ∝ ∏_{v=1}^{V} β_{vyz}^{δ(r^u_y, v)} exp( Ψ(γ^u_z) − Ψ(Σ_{j=1}^{K} γ^u_j) )
4.     γ^u_z = α_z + Σ_{y=1}^{M} φ^u_{zy}
M-Step:
1. For each v, y, z set β_{vyz} ∝ Σ_{u=1}^{N} φ^u_{zy} δ(r^u_y, v).
2. While not converged
3.   Ψ(α_z) = Ψ(Σ_{j=1}^{K} α_j) + (1/N) Σ_{u=1}^{N} ( Ψ(γ^u_z) − Ψ(Σ_{j=1}^{K} γ^u_j) )
4.   α_z = Ψ^{-1}( Ψ(α_z) )
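The listing above translates almost line for line into NumPy. In the sketch below, the ratings are assumed stored as a dense N x M integer array with 0 marking an unspecified rating, β as a (V, M, K) array, and the α update reuses update_alpha from the earlier snippet; these layout conventions are our assumptions.

```python
import numpy as np
from scipy.special import digamma

def urp_em_step(R, alpha, beta, n_inner=3):
    """One variational EM sweep for URP.  R: (N, M) ints in 0..V with 0 = missing;
    alpha: (K,); beta: (V, M, K) with beta[v-1, y, z] = P(R_y = v | Z_y = z)."""
    N, M = R.shape
    gamma = np.tile(alpha, (N, 1))                # (N, K) variational Dirichlets
    suff = np.zeros_like(beta)                    # accumulates phi * delta(r, v)
    for u in range(N):
        rated = np.nonzero(R[u])[0]
        B = beta[R[u, rated] - 1, rated, :]       # (|rated|, K): beta_{r_y, y, z}
        for _ in range(n_inner):                  # E-step, Eqs. (3)-(4)
            w = np.exp(digamma(gamma[u]) - digamma(gamma[u].sum()))
            phi = B * w                           # unnormalized phi^u_{zy}
            phi /= phi.sum(1, keepdims=True)
            gamma[u] = alpha + phi.sum(0)
        suff[R[u, rated] - 1, rated, :] += phi
    beta = suff / np.maximum(suff.sum(0, keepdims=True), 1e-12)  # M-step, Eq. (5)
    alpha = update_alpha(alpha, gamma)            # M-step, Eqs. (6)-(7)
    return alpha, beta, gamma
```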
4.2 Initialization and Early Stopping
Fitting the URP model can be quite difficult starting from randomly initialized parameters. The initialization method we have adopted is to partially fit a multinomial mixture model with the same number of user attitudes as the URP model. Fitting the multinomial mixture model for a small number of EM iterations yields a set of multinomial distributions encoded by β^0, as well as a single multinomial distribution over user attitudes encoded by θ^0. To initialize the URP model we set β = β^0 and α = κθ^0, where κ is a positive constant. Letting κ = 1 appears to give good results in practice.
Normally, EM is run until the bound on the log likelihood converges, but this tends to lead to overfitting in some models, including the aspect model. To combat this problem, Hofmann suggests using early stopping of the EM iteration [7]. We implemented early stopping for all models using a separate validation set to allow for a fair comparison.
5 Prediction
The primary task for any model applied to the rating-based collaborative filtering problem is to predict ratings for the items a user has not rated, based on the ratings the user has specified. Assume we have a user u with rating profile r^u, and we wish to predict the user's rating r^u_y for an unrated item y. The distribution over ratings for the item y can be calculated using the model as follows:

P(R_y = v|r^u) = ∫_θ Σ_z P(R_y = v|Z_y = z) P(Z_y = z|θ) P(θ|r^u) dθ    (8)
This quantity may look quite difficult to compute, but by interchanging the sum and the integral, and appealing to our variational approximation q(θ|γ^u) ≈ P(θ|r^u), we obtain an expression in terms of the model and variational parameters.

P(R_y = v|r^u) = Σ_{z=1}^{K} β_{vyz} γ^u_z / ( Σ_{j=1}^{K} γ^u_j )    (9)
To compute P(R_y = v|r^u) according to equation 9, given the model parameters α and β, it is necessary to apply our variational inference procedure to compute γ^u. However, this only needs to be done once for each user in order to predict all unknown ratings in the user's profile. Given the distribution P(R_y|r^u), various rules can be used to compute the predicted rating. One could predict the rating with maximal probability, predict the expected rating, or predict the median rating. Of course, each of these prediction rules minimizes a different prediction error measure. In particular, median prediction minimizes the mean absolute error and is the prediction rule we use in our experiments.
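Given a fitted model and a user's variational parameters γ^u, equation 9 and the median-rating rule take only a few lines; this sketch reuses the array conventions of the earlier EM snippet.

```python
import numpy as np

def predict_rating(beta, gamma_u, y):
    """Median of P(R_y = v | r^u) from Eq. (9); beta: (V, M, K), gamma_u: (K,)."""
    p = beta[:, y, :] @ (gamma_u / gamma_u.sum())  # (V,) distribution over ratings
    cdf = np.cumsum(p)
    return int(np.searchsorted(cdf, 0.5)) + 1      # smallest v with CDF >= 0.5
```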
6 Experimentation
We consider two different experimental procedures that test the predictive ability
of a rating-based collaborative filtering method. The first is a weak generalization
all-but-1 experiment where one of each user's ratings is held out. The model is then
trained on the remaining observed ratings and tested on the held out ratings. This
experiment is designed to test the ability of a method to generalize to other items
rated by the users it was trained on.
We introduce a second experimental protocol for testing a stronger form of generalization. The model is first trained using all ratings from a set of training users.
Once the model is trained, an all-but-1 experiment is performed using a separate
set of test users. This experiment is designed to test the ability of the model to
generalize to novel user profiles.
Two different base data sets were used in the experiments: the well-known EachMovie data set and the recently released million-rating MovieLens data set. Both
data sets were filtered to contain users with at least 20 ratings. EachMovie was
filtered to remove movies with less than 2 ratings leaving 1621 movies. The MovieLens data was similarly filtered leaving 3592 movies. The EachMovie training sets
contained 30000 users while the test sets contained 5000 users. The MovieLens
training sets contained 5000 users while the test sets contained 1000 users. The
EachMovie rating scale is from 0 to 5, while the MovieLens rating scale is from 1
to 5.
Both types of experiment were performed for a range of numbers of user attitudes.
For each model and number of user attitudes, each experiment was repeated on
three different random partitions of each base data set into known ratings, held out
ratings, validation ratings, training users and testing users. In the weak generalization experiments the aspect, multinomial mixture, and URP models were tested.
In the strong generalization experiments only the multinomial mixture and URP
models were tested, since a trained aspect model cannot be applied to new user profiles. Also recall that LDA and mPCA cannot be used for rating prediction, so they are not tested in these experiments. We provide results obtained with
a best-K-neighbors version of the GroupLens method for various values of K as a
baseline method.
Figure 5: EachMovie weak generalization. [Plot of normalized mean absolute error vs. K for the Neighborhood, Aspect Model, Multinomial Mixture, and URP methods.]
Figure 6: EachMovie strong generalization. [Plot of NMAE vs. K for the Neighborhood, Multinomial Mixture, and URP methods.]
Figure 7: MovieLens weak generalization. [Plot of NMAE vs. K for the Neighborhood, Aspect Model, Multinomial Mixture, and URP methods.]
Figure 8: MovieLens strong generalization. [Plot of NMAE vs. K for the Neighborhood, Multinomial Mixture, and URP methods.]

7 Results
Results are reported in figures 5 through 8 in terms of normalized mean absolute error (NMAE). We define our NMAE to be the standard MAE normalized by the expected value of the MAE assuming uniformly distributed rating values and rating predictions. For the EachMovie data set E[MAE] is 1.944, and for the MovieLens data set it is 1.6. Note that our definition of NMAE differs from that used by Goldberg et al. [6]. Goldberg et al. take the normalizer to be the difference between the minimum and maximum ratings, which means most of the error scale corresponds to performing much worse than random.
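The normalizers quoted above follow from a short computation of E|X − Y| for independent, uniformly distributed ratings; this numerical check is ours, not part of the original text.

```python
import numpy as np

def uniform_expected_mae(values):
    """E|X - Y| for X, Y independent and uniform over the given rating values."""
    v = np.asarray(list(values), dtype=float)
    return np.abs(v[:, None] - v[None, :]).mean()

print(uniform_expected_mae(range(0, 6)))   # EachMovie scale 0..5 -> 1.944...
print(uniform_expected_mae(range(1, 6)))   # MovieLens scale 1..5 -> 1.6
```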
In both the weak and strong generalization experiments using the EachMovie data
set, the URP model performs significantly better than the other methods, and
obtains the lowest prediction error. The results obtained from the MovieLens data
set do not show the same clean trends as the EachMovie data set for the weak
generalization experiment. The smaller size of MovieLens data set seems to cause
URP to over fit for larger values of K, thus increasing its test error. Nevertheless,
the lowest error attained by URP is not significantly different than that obtained
by the aspect model. In the strong generalization experiment the URP model again
out performs the other methods.
8 Conclusions
In this paper we have presented the URP model for rating-based collaborative
filtering. Our model combines the intuitive appeal of the multinomial mixture and
aspect models, with the strong high level generative semantics of LDA and mPCA.
As a result of being specially designed for collaborative filtering, our model also
contains unique rating profile generative semantics not found in LDA or mPCA.
This gives URP the capability to operate directly on ratings data, and to efficiently
predict all missing ratings in a user profile. This means URP can be applied to
recommendation, as well as many other tasks based on rating prediction.
We have empirically demonstrated on two different data sets that the weak generalization performance of URP is at least as good as that of the aspect and multinomial
mixture models. For online applications where it is impractical to refit the model
each time a rating is supplied by a user, the result of interest is strong generalization
performance. The aspect model can not be applied in a principled manner in such
a scenario, and we see that URP outperforms the other methods by a significant
margin.
Acknowledgments
We thank the Compaq Computer Corporation for the use of the EachMovie data
set, and the GroupLens Research Group at the University of Minnesota for use of
the MovieLens data set. Many thanks go to Rich Zemel for helpful comments and
numerous discussions about this work.
References
[1] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, Jan. 2003.
[2] C. Boutilier, R. S. Zemel, and B. Marlin. Active collaborative filtering. In Proceedings of the Nineteenth Annual Conference on Uncertainty in Artificial Intelligence, pages 98-106, 2003.
[3] W. Buntine. Variational extensions to EM and multinomial PCA. In Proceedings of the European Conference on Machine Learning, 2002.
[4] M. Claypool, A. Gokhale, T. Miranda, P. Murnikov, D. Netes, and M. Sartin. Combining content-based and collaborative filters in an online newspaper. In Proceedings of ACM SIGIR Workshop on Recommender Systems, 1999.
[5] M. Girolami and A. Kabán. On an equivalence between PLSI and LDA. In Proceedings of the ACM Conference on Research and Development in Information Retrieval, pages 433-434, 2003.
[6] K. Goldberg, T. Roeder, D. Gupta, and C. Perkins. Eigentaste: A constant time collaborative filtering algorithm. Information Retrieval Journal, 4(2):133-151, July 2001.
[7] T. Hofmann. Learning What People (Don't) Want. In Proceedings of the European Conference on Machine Learning, 2001.
[8] T. Minka. Estimating a Dirichlet Distribution. Unpublished, 2003.
[9] P. Resnick, N. Iacovou, M. Suchak, P. Bergstorm, and J. Riedl. GroupLens: An Open Architecture for Collaborative Filtering of Netnews. In Proceedings of ACM 1994 Conference on Computer Supported Cooperative Work, pages 175-186, Chapel Hill, North Carolina, 1994. ACM.
J. Andrew Bagnell
Carnegie Mellon University
Pittsburgh, PA 15213
Andrew Y. Ng
Stanford University
Stanford, CA 94305
Sham Kakade
University of Pennsylvania
Philadelphia, PA 19104
Jeff Schneider
Carnegie Mellon University
Pittsburgh, PA 15213
Abstract
We consider the policy search approach to reinforcement learning. We
show that if a "baseline distribution" is given (indicating roughly how
often we expect a good policy to visit each state), then we can derive
a policy search algorithm that terminates in a finite number of steps,
and for which we can provide non-trivial performance guarantees. We
also demonstrate this algorithm on several grid-world POMDPs, a planar
biped walking robot, and a double-pole balancing problem.
1 Introduction
Policy search approaches to reinforcement learning represent a promising method for solving POMDPs and large MDPs. In the policy search setting, we assume that we are given some class Π of policies mapping from the states to the actions, and wish to find a good policy π ∈ Π. A common problem with policy search is that the search through Π can be difficult and computationally expensive, and is thus typically based on local search heuristics that do not come with any performance guarantees.
In this paper, we show that if we give the learning agent a "base distribution" on states (specifically, one that indicates how often we expect it to be in each state; cf. [5, 4]), then we can derive an efficient policy search algorithm that terminates after a polynomial number of steps. Our algorithm outputs a non-stationary policy, and each step in the algorithm requires only a minimization that can be performed or approximated via a call to a standard supervised learning algorithm. We also provide non-trivial guarantees on the quality of the policies found, and demonstrate the algorithm on several problems.
2 Preliminaries
We consider an MDP with state space S; initial state s_0 ∈ S; action space A; state transition probabilities {P_sa(.)} (here, P_sa is the next-state distribution on taking action a in state s); and reward function R : S → R, which we assume to be bounded in the interval [0, 1].
In the setting in which the goal is to optimize the sum of discounted rewards over an infinite horizon, it is well known that an optimal policy which is both Markov and stationary (i.e., one where the action taken does not depend on the current time) always exists. For this reason, learning approaches to infinite-horizon discounted MDPs have typically focused on searching for stationary policies (e.g., [8, 5, 9]). In this work, we consider policy search in the space of non-stationary policies, and show how, with a base distribution, this allows us to derive an efficient algorithm.
We consider a setting in which the goal is to maximize the sum of undiscounted rewards over a T-step horizon: (1/T) E[R(s_0) + R(s_1) + ... + R(s_{T−1})]. Clearly, by choosing T sufficiently large, a finite-horizon problem can also be used to approximate arbitrarily well an infinite-horizon discounted problem. (E.g., [6].) Given a non-stationary policy (π_t, π_{t+1}, ..., π_{T−1}), where each π_t : S → A is a (stationary) policy, we define the value

V_{π_t,...,π_{T−1}}(s) ≡ (1/T) E[R(s_t) + R(s_{t+1}) + ... + R(s_{T−1}) | s_t = s; (π_t, ..., π_{T−1})]

as the expected (normalized) sum of rewards attained by starting at state s with the "clock" at time t, taking one action according to π_t, taking the next action according to π_{t+1}, and so on. Note that

V_{π_t,...,π_{T−1}}(s) = (1/T) R(s) + E_{s'∼P_{sπ_t(s)}}[V_{π_{t+1},...,π_{T−1}}(s')],

where the "s' ∼ P_{sπ_t(s)}" subscript indicates that the expectation is with respect to s' drawn from the state transition distribution P_{sπ_t(s)}.
In our policy search setting, we consider a restricted class of deterministic, stationary policies Π, where each π ∈ Π is a map π : S → A, and a corresponding class of non-stationary policies Π^T = {(π_0, π_1, ..., π_{T−1}) | for all t, π_t ∈ Π}. In the partially observed, POMDP setting, we may restrict Π to contain policies that depend only on the observable aspects of the state, in which case we obtain a class of memoryless/reactive policies. Our goal is to find a non-stationary policy (π_0, π_1, ..., π_{T−1}) ∈ Π^T which performs well under the performance measure V_{π_0,π_1,...,π_{T−1}}(s_0), which we abbreviate as V_π(s_0) when there is no risk of confusion.
3 The Policy Search Algorithm
Following [5, 4], we assume that we are given a sequence of base distributions μ_0, μ_1, ..., μ_{T−1} over the states. Informally, we think of μ_t as indicating to the algorithm approximately how often we think a good policy visits each state at time t.
Our algorithm (also given in [4]), which we call Policy Search by Dynamic Programming (PSDP), is in the spirit of the traditional dynamic programming approach to solving MDPs, where values are "backed up." In PSDP, it is the policy which is backed up. The algorithm begins by finding π_{T−1}, then π_{T−2}, and so on down to π_0. Each policy π_t is chosen from the stationary policy class Π. More formally, the algorithm is as follows:

Algorithm 1 (PSDP) Given T, μ_t, and Π:
  for t = T − 1, T − 2, ..., 0
    Set π_t = arg max_{π'∈Π} E_{s∼μ_t}[V_{π',π_{t+1},...,π_{T−1}}(s)]

In other words, we choose π_t from Π so as to maximize the expected sum of future rewards for executing actions according to the policy sequence (π_t, π_{t+1}, ..., π_{T−1}) when starting from a random initial state s drawn from the baseline distribution μ_t.
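As a concrete illustration, the sketch below runs Algorithm 1 exactly on a tabular MDP with Π taken to be all deterministic memoryless policies; the NumPy array layout is our assumption. For this unrestricted class the maximization of E_{s∼μ_t}[·] decomposes state by state, so the baseline distribution drops out of the arg max; it matters once Π is restricted or the maximization is approximated by sampling.

```python
import numpy as np

def psdp_tabular(P, R, T):
    """Policy Search by Dynamic Programming (Algorithm 1) on a tabular MDP.

    P: (A, S, S) transition matrices, R: (S,) rewards in [0, 1].  Returns the
    non-stationary policy pi (T, S) and V_{pi_0,...,pi_{T-1}} as an (S,) array."""
    A, S, _ = P.shape
    V = np.zeros(S)                    # value-to-go of the empty tail policy
    pi = np.zeros((T, S), dtype=int)
    for t in range(T - 1, -1, -1):     # back up policies, not values
        Q = R[None, :] / T + P @ V     # Q[a, s] for the sequence (a, pi_{t+1}, ...)
        pi[t] = Q.argmax(axis=0)       # the arg max over Pi, taken statewise
        V = Q[pi[t], np.arange(S)]     # V_{pi_t, ..., pi_{T-1}}
    return pi, V
```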
Since μ_0, ..., μ_{T−1} provides the distribution over the state space that the algorithm is optimizing with respect to, we might hope that if a good policy tends to visit the state space in a manner comparable to this base distribution, then PSDP will return a good policy. The following theorem formalizes this intuition. The theorem also allows for the situation where the maximization step in the algorithm (the arg max_{π'∈Π}) can be done only approximately. We later give specific examples showing settings in which this maximization can (approximately or exactly) be done efficiently.
The following definitions will be useful. For a non-stationary policy π = (π_0, ..., π_{T−1}), define the future state distribution

μ_{π,t}(s) = Pr(s_t = s|s_0, π).

I.e., μ_{π,t}(s) is the probability that we will be in state s at time t if picking actions according to π and starting from state s_0. Also, given two T-step sequences of distributions over states μ = (μ_0, ..., μ_{T−1}) and μ' = (μ'_0, ..., μ'_{T−1}), define the average variational distance between them to be¹

d_var(μ, μ') ≡ (1/T) Σ_{t=0}^{T−1} Σ_{s∈S} |μ_t(s) − μ'_t(s)|

Hence, if π_ref is some policy, then d_var(μ, μ_{π_ref}) represents how much the base distribution μ differs from the future state distribution of the policy π_ref.
Theorem 1 (Performance Guarantee) Let π = (π_0, ..., π_{T−1}) be a non-stationary policy returned by an ε-approximate version of PSDP in which, on each step, the policy π_t found comes within ε of maximizing the value. I.e.,

E_{s∼μ_t}[V_{π_t,π_{t+1},...,π_{T−1}}(s)] ≥ max_{π'∈Π} E_{s∼μ_t}[V_{π',π_{t+1},...,π_{T−1}}(s)] − ε.    (1)

Then for all π_ref ∈ Π^T we have that

V_π(s_0) ≥ V_{π_ref}(s_0) − Tε − T d_var(μ, μ_{π_ref}).
Proof. This proof may also be found in [4], but for the sake of completeness, we also provide it here. Let P_t(s) = Pr(s_t = s|s_0, π_ref), π_ref = (π_ref,0, ..., π_ref,T−1) ∈ Π^T, and π = (π_0, ..., π_{T−1}) be the output of ε-PSDP. We have

V_{π_ref} − V_π = (1/T) Σ_{t=0}^{T−1} E_{s_t∼P_t}[R(s_t)] − V_{π_0,...}(s_0)
= Σ_{t=0}^{T−1} E_{s_t∼P_t}[(1/T) R(s_t) + V_{π_t,...}(s_t) − V_{π_t,...}(s_t)] − V_{π_0,...}(s_0)
= Σ_{t=0}^{T−1} E_{s_t∼P_t, s_{t+1}∼P_{s_t π_ref,t(s_t)}}[(1/T) R(s_t) + V_{π_{t+1},...}(s_{t+1}) − V_{π_t,...}(s_t)]
= Σ_{t=0}^{T−1} E_{s_t∼P_t}[V_{π_ref,t,π_{t+1},...,π_{T−1}}(s_t) − V_{π_t,π_{t+1},...,π_{T−1}}(s_t)]

It is well known that for any function f bounded in absolute value by B, it holds true that |E_{s∼ν_1}[f(s)] − E_{s∼ν_2}[f(s)]| ≤ B Σ_s |ν_1(s) − ν_2(s)|. Since the values are bounded in the interval [0, 1] and since P_t = μ_{π_ref,t},

Σ_{t=0}^{T−1} E_{s_t∼P_t}[V_{π_ref,t,π_{t+1},...,π_{T−1}}(s_t) − V_{π_t,π_{t+1},...,π_{T−1}}(s_t)]
≤ Σ_{t=0}^{T−1} E_{s∼μ_t}[V_{π_ref,t,π_{t+1},...,π_{T−1}}(s) − V_{π_t,π_{t+1},...,π_{T−1}}(s)] + Σ_{t=0}^{T−1} Σ_s |P_t(s) − μ_t(s)|
≤ Σ_{t=0}^{T−1} max_{π'∈Π} E_{s∼μ_t}[V_{π',π_{t+1},...,π_{T−1}}(s) − V_{π_t,π_{t+1},...,π_{T−1}}(s)] + T d_var(μ_{π_ref}, μ)
≤ Tε + T d_var(μ_{π_ref}, μ),

where we have used equation (1) and the fact that π_ref ∈ Π^T. The result now follows. □
This theorem shows that PSDP returns a policy with performance that competes favorably against those policies π_ref in Π^T whose future state distributions are close to μ. Hence, we expect our algorithm to provide a good policy if our prior knowledge allows us to choose a μ that is close to a future state distribution for a good policy in Π^T.
It is also shown in [4] that the dependence on d_var is tight in the worst case. Furthermore, it is straightforward to show (cf. [6, 8]) that ε-approximate PSDP can be implemented using a number of samples that is linear in the VC dimension of Π, polynomial in T and 1/ε, but otherwise independent of the size of the state space. (See [4] for details.)
4 Instantiations
In this section, we provide detailed examples showing how PSDP may be applied to specific classes of policies, where we can demonstrate computational efficiency.
¹ If S is continuous and μ_t and μ'_t are densities, the inner summation is replaced by an integral.
4.1 Discrete observation POMDPs
Finding memoryless policies for POMDPs represents a difficult and important problem. Further, it is known that the best memoryless, stochastic, stationary policy can perform better by an arbitrarily large amount than the best memoryless, deterministic policy. This is frequently given as a reason for using stochastic policies. However, as we shortly show, there is no advantage to using stochastic (rather than deterministic) policies when we are searching for non-stationary policies.
searching for non-stationary policies.
Four natural classes of memoryless policies to consider are as follows: stationary deterministic (SD), stationary stochastic (SS), non-stationary deterministic (ND) and non-stationary
stochastic (NS). Let the operator opt return the value of the optimal policy in a class. The
following specifies the relations among these classes.
Proposition 1 (Policy ordering) For any finite-state, finite-action POMDP,
opt(SD) ? opt(SS) ? opt(ND) = opt(NS)
We now sketch a proof of this result. To see that opt(ND) = opt(NS), let ?NS be the future
distribution of an optimal policy ?N S ? NS. Consider running PSDP with base distribution
?NS . After each update, the resulting policy (?NS,0 , ?NS,1 , . . . , ?t , . . . , ?T ) must be at least
as good as ?NS . Essentially, we can consider PSDP as sweeping through each timestep and
modifying the stochastic policy to be deterministic, while never decreasing performance.
A similar argument shows that opt(SS) ? opt(ND) while a simple example POMDP in the
next section demonstrates this inequality can be strict.
The potentially superior performance of non-stationary policies contrasted with stationary
stochastic ones provides further justification for their use. Furthermore, the last inequality suggests that only considering deterministic policies is sufficient in the non-stationary
regime.
Unfortunately, one can show that it is NP-hard to exactly or approximately find the best
policy in any of these classes (this was shown for SD in [7]). While many search heuristics
have been proposed, we now show PSDP offers a viable, computationally tractable alternative for finding a good policy for POMDPs, one which offers performance guarantees in the form of Theorem 1.
Proposition 2 (PSDP complexity) For any POMDP, exact PSDP (ε = 0) runs in time polynomial in the size of the state and observation spaces and in the horizon time T.
Under PSDP, the policy update is as follows:
π_t(o) = arg max_a E_{s∼μ_t}[ p(o|s) V_{a,π_{t+1},…,π_{T−1}}(s) ],   (2)
where p(o|s) is the observation probability of the POMDP and the policy sequence (a, π_{t+1}, …, π_{T−1}) always begins by taking action a. It is clear that given the policies from time t + 1 onwards, V_{a,π_{t+1},…,π_{T−1}}(s) can be efficiently computed and thus the update (2)
can be performed in polynomial time in the relevant quantities. Intuitively, the distribution μ specifies here how to trade off the benefits of different underlying state-action pairs that share an observation. Ideally, it is the distribution provided by an optimal policy for ND that optimally specifies this tradeoff.
This result does not contradict the NP-hardness results, because it requires that a good
baseline distribution μ be provided to the algorithm. However, if μ is the future state
distribution of the optimal policy in ND, then PSDP returns an optimal policy for this class
in polynomial time.
Furthermore, if the state space is too large to perform the exact update in equation (2), then Monte Carlo integration may be used to evaluate the expectation over the state space. This leads to an ε-approximate version of PSDP, where one can obtain an algorithm with no dependence on the size of the state space and a polynomial dependence on the number of observations, T, and 1/ε (see discussion in [4]).
4.2 Action-value approximation
PSDP can also be efficiently implemented if it is possible to efficiently find an approximate action-value function V̂_{a,π_{t+1},…,π_{T−1}}(s), i.e., if at each timestep
ε ≥ E_{s∼μ_t}[ max_{a∈A} | V̂_{a,π_{t+1},…,π_{T−1}}(s) − V_{a,π_{t+1},…,π_{T−1}}(s) | ].
(Recall that the policy sequence (a, π_{t+1}, …, π_{T−1}) always begins by taking action a.)
If the policy π_t is greedy with respect to the action value V̂_{a,π_{t+1},…,π_{T−1}}(s), then it follows immediately from Theorem 1 that our policy value differs from the optimal one by 2Tε plus
the μ-dependent variational penalty term. It is important to note that this error is phrased in
terms of an average error over state-space, as opposed to the worst case errors over the state
space that are more standard in RL. We can intuitively grasp this by observing that value
iteration style algorithms may amplify any small error in the value function by pushing
more probability mass through where these errors are. PSDP, however, as it does not use
value function backups, cannot make this same error; the use of the computed policies
in the future keeps it honest. There are numerous efficient regression algorithms that can
minimize this error, or approximations to it.
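For example, a least-squares fit of V̂ on states drawn from μ_t directly targets this average-case criterion. The sketch below assumes a linear-in-features regressor and a rollout routine of our own naming.

```python
import numpy as np

def fit_action_value(states, rollout_return, features, n_rollouts=10):
    """Fit V_hat_{a, pi_{t+1}, ..., pi_{T-1}} by least squares on s ~ mu_t.

    `rollout_return(s)` should execute action a first, follow the
    already-computed later policies, and return the summed reward.
    Minimizing squared error on mu_t-weighted states matches the
    average (rather than worst-case) error appearing in the bound.
    """
    X = np.array([features(s) for s in states])
    y = np.array([np.mean([rollout_return(s) for _ in range(n_rollouts)])
                  for s in states])
    w, *_ = np.linalg.lstsq(X, y, rcond=None)
    return lambda s: float(features(s) @ w)
```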
4.3 Linear policy MDPs
We now examine in detail a particular policy search example in which we have a two-action MDP, and a linear policy class is used. This case is interesting because, if the term E_{s∼μ_t}[V_{π,π_{t+1},…,π_{T−1}}(s)] (from the maximization step in the algorithm) can be nearly maximized by some linear policy π, then a good approximation to π can be found.
Let A = {a_1, a_2}, and Π = {π_θ : θ ∈ R^n}, where π_θ(s) = a_1 if θ^T φ(s) ≥ 0, and π_θ(s) = a_2 otherwise. Here, φ(s) ∈ R^n is a vector of features of the state s. Consider the maximization step in the PSDP algorithm. Letting 1{·} be the indicator function
(1{True} = 1, 1{False} = 0), we have the following algorithm for performing the maximization:
Algorithm 2 (Linear maximization) Given m_1 and m_2:
for i = 1 to m_1:
    Sample s^{(i)} ∼ μ_t.
    Use m_2 Monte Carlo samples to estimate V_{a_1,π_{t+1},…,π_{T−1}}(s^{(i)}) and V_{a_2,π_{t+1},…,π_{T−1}}(s^{(i)}). Call the resulting estimates q_1 and q_2.
    Let y^{(i)} = 1{q_1 > q_2}, and w^{(i)} = |q_1 − q_2|.
Find θ = arg min_θ Σ_{i=1}^{m_1} w^{(i)} 1{ 1{θ^T φ(s^{(i)}) ≥ 0} ≠ y^{(i)} }.
Output π_θ.
Intuitively, the algorithm does the following: It samples m_1 states s^{(1)}, …, s^{(m_1)} from the distribution μ_t. Using m_2 Monte Carlo samples, it determines if action a_1 or action a_2 is preferable from that state, and creates a "label" y^{(i)} for that state accordingly. Finally, it tries to find a linear decision boundary separating the states from which a_1 is better from the states from which a_2 is better. Further, the "importance" or "weight" w^{(i)} assigned to s^{(i)} is proportional to the difference in the values of the two actions from that state.
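A minimal sketch of Algorithm 2 follows, with scikit-learn's weighted logistic regression standing in for the weighted-classification step (the convex surrogate of footnote 3); `sample_mu_t` and `mc_value` are placeholder names for the baseline sampler and the Monte Carlo value estimator.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def linear_maximization(sample_mu_t, mc_value, phi, m1=1000, m2=20):
    """Sketch of Algorithm 2 (linear maximization) for actions {a1, a2}."""
    S = [sample_mu_t() for _ in range(m1)]              # s^(i) ~ mu_t
    X = np.array([phi(s) for s in S])
    q1 = np.array([mc_value(s, "a1", m2) for s in S])   # m2 rollouts each
    q2 = np.array([mc_value(s, "a2", m2) for s in S])
    y = (q1 > q2).astype(int)                           # y^(i) = 1{q1 > q2}
    w = np.abs(q1 - q2)                                 # w^(i) = |q1 - q2|
    clf = LogisticRegression(fit_intercept=False)
    clf.fit(X, y, sample_weight=w)
    theta = clf.coef_.ravel()
    return lambda s: "a1" if theta @ phi(s) >= 0 else "a2"   # pi_theta
```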
The final maximization step can be approximated via a call to any standard supervised
learning algorithm that tries to find linear decision boundaries, such as a support vector
machine or logistic regression. In some of our experiments, we use a weighted logistic
regression to perform this maximization. However, using linear programming, it is possible
to approximate this maximization. Let
T(θ) = Σ_{i=1}^{m_1} w^{(i)} 1{ 1{θ^T φ(s^{(i)}) ≥ 0} ≠ y^{(i)} }
be the objective in the minimization.
Figure 1: Illustrations of mazes: (a) Hallway (b) McCallum's Maze (c) Sutton's Maze
If there is a value of θ that satisfies T(θ) = 0,
then it can be found via linear programming. Specifically, for each value of i, we let there
be a constraint
θ^T φ(s^{(i)}) > δ   if y^{(i)} = 1,
θ^T φ(s^{(i)}) < −δ   otherwise,
where δ is any small positive constant. In the case in which these constraints
cannot be simultaneously satisfied, it is NP-hard to find arg min_θ T(θ) [1]. However, the optimal value can be approximated. Specifically, if θ* = arg min_θ T(θ), then [1] presents a polynomial-time algorithm that finds θ so that
T(θ) ≤ (n + 1) T(θ*).
Here, n is the dimension of θ. Therefore, if there is a linear policy that does well, we also find a policy that does well. (Conversely, if there is no linear policy that does well, i.e., if T(θ*) above were large, then the bound would be very loose; however, in this setting
there is no good linear policy, and hence we arguably should not be using a linear policy
anyway or should consider adding more features.)
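In the separable case, the feasibility problem can be written as one linear program. Here is a sketch using scipy.optimize.linprog, with a small constant delta standing in for the strict inequalities (the value is our own choice).

```python
import numpy as np
from scipy.optimize import linprog

def separating_theta(Phi, y, delta=1e-3):
    """Find theta with theta^T phi_i > delta when y_i = 1 and
    theta^T phi_i < -delta otherwise; returns None if T(theta) = 0
    is not achievable by any theta.

    Phi : (m1, n) rows phi(s^(i));  y : (m1,) labels in {0, 1}.
    """
    # Flip signs so every constraint reads (row) @ theta <= -delta.
    rows = np.where(y[:, None] == 1, -Phi, Phi)
    b_ub = -delta * np.ones(len(y))
    n = Phi.shape[1]
    res = linprog(c=np.zeros(n), A_ub=rows, b_ub=b_ub,
                  bounds=[(None, None)] * n, method="highs")
    return res.x if res.success else None
```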
5 Experiments
The experiments below demonstrate each of the instantiations described previously.
5.1 POMDP gridworld example
Here we apply PSDP to some simple maze POMDPs (Figure 1) to demonstrate its performance. In each, the robot can move in any of the 4 cardinal directions. Except in Figure 1c, the observation at each grid cell is simply the directions in which the robot can freely move.
The goal in each is to reach the circled grid cell in the minimum total number of steps from
each starting cell.
First we consider the hallway maze in Figure 1a. The robot here is confounded by all
the middle states appearing the same, and the optimal stochastic policy must take time at
least quadratic in the length of the hallway to ensure it gets to the goal from both sides.
PSDP deduces a non-stationary deterministic policy with much better performance: first
clear the left half of the maze by always traveling right and then the right half by always
traveling left.
McCallum's maze (Figure 1b) is discussed in the literature as admitting no satisficing deterministic reactive policy. When one allows non-stationary policies, however, solutions
do exist: PSDP provides a policy with 55 total steps to goal. In our final benchmark,
Sutton's maze (Figure 1c), the observations are determined by the openness of all eight
connected directions.
Below we summarize the total number of steps to goal of our algorithm as compared with
optimality for two classes of policy. Column 1 denotes PSDP performance using a uniform
baseline distribution. The next column lists the performance of iterating PSDP, starting
initially with a uniform baseline μ and then computing with a new baseline μ' based on the previously constructed policy (see footnote 2). Column 3 corresponds to the optimal stationary deterministic
2 It can be shown that this procedure of refining μ based on previously learned policies will never decrease performance.
policy while the final column gives the best theoretically achievable performance given
arbitrary memory. It is worthwhile to note that the PSDP computations are very fast in all
of these problems, taking well under a second in an interpreted language.
                 μ uniform   μ iterated   Optimal SD   Optimal
Hallway              21          21           ∞            18
McCallum             55          48           ∞            39
Sutton              412         412          416         ≥ 408
5.2 Robot walking
Our work is related in spirit to Atkeson and Morimoto [2], which describes a differential
dynamic programming (DDP) algorithm that learns quadratic value functions along trajectories. These trajectories, which serve as an analog of our μ distribution, are then refined using the resulting policies. A central difference is their use of value-function backups as opposed to policy backups. In tackling the control problem presented in [2] we
demonstrate ways in which PSDP extends that work.
[2] considers a planar biped robot that walks along a bar. The robot has two legs and a
motor that applies torque where they meet. As the robot lacks knees, it walks by essentially
brachiating (upside-down); a simple mechanism grabs the bar as a foot swings into position. The robot (excluding the horizontal position along the bar) can be described in a 5-dimensional state space using angles and angular velocities from the foot grasping the bar.
The control variable that needs to be determined is the hip-torque.
In [2], significant manual "cost-function engineering" or "shaping" of the rewards was
used to achieve walking at fixed speed. Much of this is due to the limitations of differential
dynamic programming in which cost functions must always be locally quadratic. This rules
out natural cost functions that directly penalize, for example, falling. As this limitation does
not apply to our algorithm, we used a cost function that rewards the robot for each timestep it remains upright. In addition, we quadratically penalize deviation from the nominal horizontal velocity of 0.4 m/s and the control effort applied.
Samples of μ are generated in the same way [2] generates initial trajectories, using a parametric policy search. For our policy we approximated the action-value function with a locally-weighted linear regression. PSDP's policy significantly improves performance over
the parametric policy search; while both keep the robot walking we note that PSDP incurs
31% less cost per step.
DDP makes strong, perhaps unrealistic assumptions about the observability of state variables. PSDP, in contrast, can learn policies with limited observability. By hiding state
variables from the algorithm, this control problem demonstrates PSDP's leveraging of non-stationarity and its ability to cope with partial observability. PSDP can make the robot walk without any observations; open-loop control is sufficient to propel the robot, albeit at a significant reduction in performance and robustness. In Figure 2 we see the signal generated by the learned open-loop controller. This complex torque signal would be identical for arbitrary initial conditions, modulo sign-reversals, as the applied torque at the hip is inverted from the control signal whenever the stance foot is switched.
5.3 Double-pole balancing
Our third problem, double pole balancing, is similar to the standard inverted pendulum
problem, except that two unactuated poles, rather than a single one, are attached to the
cart, and it is our task to simultaneously keep both of them balanced. This makes the task
significantly harder than the standard single pole problem.
Using the simulator provided by [3], we implemented PSDP for this problem. The state variables were the cart position x; the cart velocity ẋ; the two poles' angles θ_1 and θ_2; and the poles' angular velocities θ̇_1 and θ̇_2. The two actions are to accelerate left
Figure 2: (Left) Control signal from open-loop learned controller. (Right) Resulting angle
of one leg. The dashed line in each indicates which foot is grasping the bar at each time.
and to accelerate right. We used a linear policy class Π as described previously, with φ(s) = [x, ẋ, θ_1, θ̇_1, θ_2, θ̇_2]^T. By symmetry of the problem, a constant intercept term was unnecessary; leaving out an intercept enforces that if a_1 is the better action for some state s, then a_2 should be taken in the state −s.
The algorithm we used for the optimization step was logistic regression (see footnote 3). The baseline distribution μ that we chose was a zero-mean multivariate Gaussian distribution over all the state variables. Using a horizon of T = 2000 steps and 5000 Monte Carlo samples per
iteration of the PSDP algorithm, we are able to successfully balance both poles.
Acknowledgments. We thank Chris Atkeson and John Langford for helpful conversations. J. Bagnell is supported by an NSF graduate fellowship. This work was also supported by NASA, and by the Department of the Interior/DARPA under contract number
NBCH1020014.
References
[1] E. Amaldi and V. Kann. On the approximability of minimizing nonzero variables or
unsatisfied relations in linear systems. Theoretical Comp. Sci., 1998.
[2] C. Atkeson and J. Morimoto. Non-parametric representation of a policies and value
functions: A trajectory based approach. In NIPS 15, 2003.
[3] F. Gomez.
http://www.cs.utexas.edu/users/nn/pages/software/software.html.
[4] Sham Kakade. On the Sample Complexity of Reinforcement Learning. PhD thesis,
University College London, 2003.
[5] Sham Kakade and John Langford. Approximately optimal approximate reinforcement
learning. In Proc. 19th International Conference on Machine Learning, 2002.
[6] Michael Kearns, Yishay Mansour, and Andrew Y. Ng. Approximate planning in large
POMDPs via reusable trajectories. (extended version of paper in NIPS 12), 1999.
[7] M. Littman. Memoryless policies: theoretical limitations and practical results. In Proc.
3rd Conference on Simulation of Adaptive Behavior, 1994.
[8] Andrew Y. Ng and Michael I. Jordan. P EGASUS: A policy search method for large
MDPs and POMDPs. In Proc. 16th Conf. Uncertainty in Artificial Intelligence, 2000.
[9] Ronald J. Williams. Simple statistical gradient-following algorithms for connectionist
reinforcement learning. Machine Learning, 8:229-256, 1992.
3 In our setting, we use weighted logistic regression and minimize ℓ(θ) = −Σ_i w^{(i)} log p(y^{(i)} | s^{(i)}, θ), where p(y = 1 | s, θ) = 1/(1 + exp(−θ^T s)). It is straightforward to show that this is a (convex) upper bound on the objective function T(θ).
Sparse Representation and Its Applications in
Blind Source Separation
Yuanqing Li, Andrzej Cichocki, Shun-ichi Amari, Sergei Shishkin
RIKEN Brain Science Institute, Saitama, 3510198, Japan
Jianting Cao
Department of Electronic Engineering
Saitama Institute of Technology
Saitama, 3510198, Japan
Fanji Gu
Department of Physiology and Biophysics
Fudan University
Shanghai, China
Abstract
In this paper, sparse representation (factorization) of a data matrix is
first discussed. An overcomplete basis matrix is estimated by using the
K-means method. We have proved that for the estimated overcomplete basis matrix, the sparse solution (coefficient matrix) with minimum l1-norm is unique with probability of one, which can be obtained using a linear programming algorithm. The comparisons of the l1-norm solution and the l0-norm solution are also presented, which can be used in recoverability analysis of blind source separation (BSS). Next, we apply the sparse matrix factorization approach to BSS in the overcomplete
case. Generally, if the sources are not sufficiently sparse, we perform
blind separation in the time-frequency domain after preprocessing the
observed data using the wavelet packets transformation. Third, an EEG
experimental data analysis example is presented to illustrate the usefulness of the proposed approach and demonstrate its performance. Two
almost independent components obtained by the sparse representation
method are selected for phase synchronization analysis, and their periods of significant phase synchronization are found which are related to
tasks. Finally, concluding remarks review the approach and state areas
that require further study.
1 Introduction
Sparse representation or sparse coding of signals has received a great deal of attention in
recent years. For instance, sparse representation of signals using large-scale linear programming under given overcomplete bases (e.g., wavelets) was discussed in [1]. Also, in
[2], a sparse image coding approach using the wavelet pyramid architecture was presented.
Sparse representation can be used in blind source separation [3][4]. In [3], a two stage approach was proposed, that is, the first is to estimate the mixing matrix by using a clustering
algorithm, the second is to estimate the source matrix. In our opinion, there are still three
fundamental problems related to sparse representation of signals and BSS which need to be
further studied: 1) detailed recoverability analysis; 2) high dimensionality of the observed
data; 3) the overcomplete case in which the source number is unknown.
The present paper first considers sparse representation (factorization) of a data matrix based
on the following model
X = BS,   (1)
where X = [x(1), · · · , x(N)] ∈ R^{n×N} (N ≫ 1) is a known data matrix, B = [b_1, · · · , b_m] is an n × m basis matrix, and S = [s_1, · · · , s_N] = [s_{ij}]_{m×N} is a coefficient matrix, also called a solution corresponding to the basis matrix B. Generally, m > n, which
implies that the basis is overcomplete.
The discussion of this paper is under the following assumptions on (1).
Assumption 1: 1. The number of basis vectors m is assumed to be fixed in advance and
satisfies the condition n ≤ m < N. 2. All basis vectors are normalized to be unit vectors with their 2-norms equal to 1, and any n basis vectors are linearly independent.
The rest of this paper is organized as follows. Section 2 analyzes the sparse representation
of a data matrix. Section 3 presents the comparison of the l0 norm solution and l1 norm
solution. Section 4 discusses blind source separation via sparse representation. An EEG
data analysis example is given in Section 5. Concluding remarks in Section 6 summarize
the advantages of the proposed approach.
2 Sparse representation of data matrix
In this section, we discuss sparse representation of the data matrix X using the two-stage
approach proposed in [3]. First, we apply an algorithm based on the K-means clustering
method for finding a suboptimal basis matrix that is composed of the cluster centers of the
normalized, known data vectors as in [3]. With this kind of cluster-center basis matrix, the corresponding coefficient matrix estimated by the linear programming algorithm presented in
this section can become very sparse.
Algorithm outline 1: Step 1. Normalize the data vectors. Step 2. Run K-means clustering iterations, each followed by normalization, to estimate the suboptimal basis matrix. End
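A minimal sketch of Algorithm outline 1 follows, using scikit-learn's KMeans; note that it renormalizes only the final cluster centers, rather than after every K-means iteration as the outline suggests.

```python
import numpy as np
from sklearn.cluster import KMeans

def estimate_basis(X, m):
    """Cluster-center basis estimation (Algorithm outline 1).

    X : (n, N) data matrix whose columns are data vectors.
    Returns an n x m basis matrix with unit-norm columns.
    """
    cols = X / (np.linalg.norm(X, axis=0, keepdims=True) + 1e-12)  # Step 1
    km = KMeans(n_clusters=m, n_init=10).fit(cols.T)               # Step 2
    B = km.cluster_centers_.T
    return B / np.linalg.norm(B, axis=0, keepdims=True)
```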
Now we discuss the estimation of the coefficient matrix. For a given basis matrix B in (1),
the coefficient matrix can be found by solving the following optimization problem as in
many existing references (e.g., [3, 5]),
min Σ_{i=1}^{m} Σ_{j=1}^{N} |s_{ij}|,   subject to BS = X.   (2)
It is not difficult to prove that the linear programming problem (2) is equivalent to the
following set of N smaller scale linear programming problems:
min Σ_{i=1}^{m} |s_{ij}|,   subject to Bs_j = x(j),   j = 1, · · · , N.   (3)
By setting S = U − V, where U = [u_{ij}]_{m×N} ≥ 0, V = [v_{ij}]_{m×N} ≥ 0, (3) can
be converted to the following standard linear programming problems with non-negative
constraints,
min Σ_{i=1}^{m} (u_{ij} + v_{ij}),   subject to [B, −B][u_j^T, v_j^T]^T = x(j),   u_j ≥ 0, v_j ≥ 0,   (4)
where j = 1, · · · , N.
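Each program in (4) is a standard-form LP; a minimal sketch using scipy.optimize.linprog is given below (the solver choice is ours).

```python
import numpy as np
from scipy.optimize import linprog

def l1_coefficients(B, X):
    """Column-wise minimum-l1 coefficients: solves the N programs (4).

    B : (n, m) basis matrix;  X : (n, N) data matrix.
    Returns S = U - V of shape (m, N) satisfying B S = X.
    """
    n, m = B.shape
    A_eq = np.hstack([B, -B])                 # [B, -B] [u; v] = x(j)
    c = np.ones(2 * m)                        # sum_i (u_ij + v_ij)
    S = np.zeros((m, X.shape[1]))
    for j in range(X.shape[1]):
        res = linprog(c, A_eq=A_eq, b_eq=X[:, j],
                      bounds=[(0, None)] * (2 * m), method="highs")
        S[:, j] = res.x[:m] - res.x[m:]
    return S
```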
Theorem 1 For almost all bases B ∈ R^{n×m}, the sparse solution (l1-norm solution) of (1) is unique. That is, the set of bases B under which the sparse solution of (1) is not unique is of measure zero. And there are at most n nonzero entries of the solution.
It follows from Theorem 1 that for any given basis, there exists a unique sparse solution of
(2) with probability of one.
3 Comparison of the l0-norm solution and l1-norm solution
Usually, the l0 norm J_0(S) = Σ_{i=1}^{m} Σ_{j=1}^{N} |s_{ij}|^0 (the number of nonzero entries of S) is used as a
sparsity measure of S, since it ensures the sparsest solution. Under this measure, the sparse
solution is obtained by solving the problem
min Σ_{i=1}^{m} Σ_{j=1}^{N} |s_{ij}|^0,   subject to BS = X.   (5)
In [5], optimally sparse representation in general (non-orthogonal) dictionaries via l1-norm minimization is discussed, and two sufficient conditions are proposed on the number of nonzero entries of the l0-norm solution under which the equivalence between the l0-norm solution and the l1-norm solution holds precisely. However, these bounds are very small in real-world
situations generally, if the basis vectors are far away from orthogonality. For instance,
the bound is smaller than 1.5 in the simulation experiments shown in the next section.
This implies that the l0-norm solution allows only one nonzero entry in order for the equivalence to hold. In what follows, we also discuss the equivalence of the l0-norm solution and the l1-norm solution, but from the viewpoint of probability.
First, we introduce the two optimization problems:
(P0)   min Σ_{i=1}^{m} |s_i|^0,   subject to As = x;
(P1)   min Σ_{i=1}^{m} |s_i|,    subject to As = x,
where A ∈ R^{n×m} and x ∈ R^n are a known basis matrix and a data vector, respectively, and s ∈ R^m, n ≤ m. Suppose that s^{0*} is a solution of (P0), and s^{1*} is a solution of (P1).
Theorem 2 The solution of (P0 ) is not robust to additive noise of the model, while the
solution of (P1 ) is robust to additive noise, at least to some degree.
Although the problem (P0 ) provides the sparsest solution, it is not an efficient way to find
the solution by solving problem (P0). The reasons are: 1) if ||s^{0*}||_0 = n, then the solution of (P0) is generally not unique; 2) until now, no effective algorithm to solve the optimization problem (P0) exists (it has been proved that problem (P0) is NP-hard); 3) the solution of (P0) is not robust to noise. In contrast, the solution of (P1) is
unique with a probability of one according to Theorem 1. It is well known that there are
many efficient optimization tools to solve the problem (P1). From the above-mentioned facts a question naturally arises: what is the condition under which the solution of (P1) is
one of the sparsest solutions, that is, the solution has the same number of nonzero entries
as the solution of (P0 )? In the following, we will discuss the problem.
Lemma 1 Suppose that x ∈ R^n and A ∈ R^{n×m} are selected randomly. If x is represented by a linear combination of k column vectors of A, then k ≥ n generally, that is, the
probability that k < n is zero.
Theorem 3 For the optimization problems (P0) and (P1), suppose that A ∈ R^{n×m} is selected randomly, x ∈ R^n is generated by As*, l = ||s*||_0 < n, and that all nonzero entries of s* are also selected randomly. We have:
1. s* is the unique solution of (P0) with probability of one, that is, s^{0*} = s*. And if ||s^{1*}||_0 < n, then s^{1*} = s* with probability of one. 2. The probability P(s^{1*} = s*) ≥ (P(1, l, n, m))^l, where P(1, l, n, m) (1 ≤ l ≤ n) are n probabilities satisfying 1 = P(1, 1, n, m) ≥ P(1, 2, n, m) ≥ · · · ≥ P(1, n, n, m) (their explanations are omitted here due to the limit of space). 3. For given positive integers l_0 and n_0, if l ≤ l_0 and m − n ≤ n_0, then lim_{n→+∞} P(s^{1*} = s*) = 1.
Remarks 1: 1. From Theorem 3, if n and m are fixed, and l is sufficiently small, then
s^{1*} = s* with a high probability. 2. For fixed l and m − n, if n is sufficiently large, then s^{1*} = s* with a high probability. Theorem 3 will be used in the recoverability analysis of BSS.
4 Blind source separation based on sparse representation
In this section, we discuss blind source separation based on sparse representation of mixture
signals. The proposed approach is also suitable for the case in which the number of sensors
is less than or equal to the number of sources, while the number of sources is unknown. We
consider the following noise-free model,
x_i = A s_i,   i = 1, · · · , N,   (6)
where the mixing matrix A ∈ R^{n×m} is unknown, the matrix S = [s_1, · · · , s_N] ∈ R^{m×N} is composed of the m unknown sources, and the only observed data matrix X = [x_1, · · · , x_N] ∈ R^{n×N} has rows containing mixtures of the sources, n ≤ m. The
task of blind source separation is to recover the sources using only the observable data
matrix X.
We also use a two-step approach presented in [3] for BSS. The first step is to estimate the
mixing matrix using clustering Algorithm 1. If the mixing matrix is estimated correctly,
and a source vector s* satisfies ||s*||_0 = l < n, then by Theorem 3, s* is the l0-norm solution of (6) with probability one. And if the source vector is sufficiently sparse, e.g., l is sufficiently small compared with n, then it can be recovered by solving the linear programming problem (P1) with a high probability. Considering that the source number is generally unknown, we denote the estimated mixing matrix Ā = [Â, ΔA] ∈ R^{n×m'} (m' > m). We introduce the following optimization problem (P1') and denote its solution s̃ = [ŝ^T, Δs^T]^T ∈ R^{m'}:
(P1')   min Σ_{i=1}^{m'} |s_i|,   subject to Ā s̃ = x.
We can prove the following recoverability result.
Theorem 4 Suppose that the sub-matrix Â (of the estimated mixing matrix Ā) is sufficiently close to the true mixing matrix A, neglecting scaling and permutation ambiguities, and that a source vector is sufficiently sparse. Then the source vector can be recovered with a high probability (close to one) by solving (P1'). That is, ŝ is sufficiently close to the original source vector, and Δs is close to the zero vector.
To illustrate Theorem 4 partially, we have performed two simulation experiments in which
the mixing matrix is supposed to be estimated correctly. Fig. 1 shows the probabilities
that a source vector can be recovered correctly in different cases, estimated in the two
simulations. In the first simulation, n and m are fixed to be 10 and 15, respectively, l
denotes the number of nonzero entries of source vector and changes from 1 to 15. For
every fixed nonzero entry number l, the probability that the source vector is recovered correctly is estimated through 3000 independent repeated stochastic experiments, in which
the mixing matrix A and all nonzero entries of the source vector s0 are selected randomly
according to the uniform distribution. Fig. 1 (a) shows the probability curve. We can see
that the source can be estimated correctly when l = 1, 2, and the probability is greater than
0.95 when l ≤ 5.
In the second simulation experiment, all original source vectors have 5 nonzero entries,
that is, l = 5; and m = 15. The dimension n of the mixture vectors varies from 5 to
15. As in the first simulation, the probabilities for correctly estimated source vectors are
estimated through 3000 stochastic experiments and shown in Fig. 1(b). It is evident that
when n ≥ 10, the source can be estimated correctly with probability higher than 0.95.
Figure 1: (a) the probability curve that the source vectors are estimated correctly as a
function of l obtained in the first simulation; (b) the probability curve that the source vectors
are estimated correctly as a function of n obtained in the second simulation.
In order to estimate the mixing matrix correctly, the sources should be sufficiently sparse.
Thus sparseness of the sources plays an important role not only in estimating the sources
but also in estimating the mixing matrix. However, if the sources are not sufficiently sparse
in reality, we can apply a wavelet packets transformation as preprocessing. In the following, a
blind separation algorithm based on preprocessing is presented for dense sources.
Algorithm outline 2:
Step 1. Transform the n time-domain signals (the n rows of X) to time-frequency signals by a wavelet packets transformation, and make sure that the n wavelet packets trees have the same
structure.
Step 2. Select those nodes of the wavelet packets trees whose coefficients are as sparse as possible. The selected nodes of different trees should have the same indices. Based on these coefficient vectors, estimate the mixing matrix Ā ∈ R^{n×m'} using Algorithm 1 presented in Section 2.
Step 3. Based on the estimated mixing matrix Ā and the coefficients of all nodes obtained in step 1, estimate the coefficients of all the nodes of the wavelet packets trees of the sources
by solving the set of linear programming problems (4).
Step 4. Reconstruct sources using the inverse wavelet packets transformation. End
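The following is a minimal sketch of the whole pipeline using PyWavelets, reusing `estimate_basis` and `l1_coefficients` from the earlier sketches; the node-selection rule (ranking nodes by a crude sparsity score) is our own placeholder, since the outline leaves the selection heuristic open.

```python
import numpy as np
import pywt

def wp_nodes(x, wavelet="db4", level=7):
    """Leaf-node wavelet-packet coefficients of one signal (Step 1)."""
    wp = pywt.WaveletPacket(data=x, wavelet=wavelet, maxlevel=level)
    return {nd.path: nd.data for nd in wp.get_level(level, "natural")}

def separate(X, m, wavelet="db4", level=7):
    """Sketch of Algorithm outline 2, Steps 1-4."""
    trees = [wp_nodes(x, wavelet, level) for x in X]
    paths = sorted(trees[0])
    coeff = {p: np.array([t[p] for t in trees]) for p in paths}
    # Step 2: keep the quarter of nodes whose coefficients look sparsest
    # (small mean |c| relative to spread); placeholder heuristic.
    score = lambda p: np.abs(coeff[p]).mean() / (coeff[p].std() + 1e-12)
    chosen = sorted(paths, key=score)[: max(1, len(paths) // 4)]
    A_bar = estimate_basis(np.hstack([coeff[p] for p in chosen]), m)
    # Step 3: minimum-l1 node coefficients of the sources via (4).
    S_nodes = {p: l1_coefficients(A_bar, coeff[p]) for p in paths}
    # Step 4: inverse wavelet-packet transform, one source at a time.
    out = []
    for k in range(m):
        wp = pywt.WaveletPacket(data=None, wavelet=wavelet, maxlevel=level)
        for p in paths:
            wp[p] = S_nodes[p][k]
        out.append(wp.reconstruct(update=False))
    return np.array(out)
```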
We have successfully separated speech sources in a number of simulations in the overcomplete
case (e.g., 8 sources, 4 sensors) using Algorithm 2. In the next section, we will present an
EEG data analysis example.
Remark 2: A challenging problem in the algorithm above is to estimate the mixing matrix as precisely as possible. In our many simulations on BSS of speech mixtures, we use a 7-level wavelet packets transformation for preprocessing. When the K-means clustering method is
used for estimating the mixing matrix, the number of clusters (the number of columns of
the estimated mixing matrix) should be set to be greater than the source number even if the
source number is known. In this way, the estimated matrix will contain a submatrix very
close to the original mixing matrix. From Theorem 4, we can estimate the source using the
overestimated mixing matrix.
5 An example in EEG data analysis
The electroencephalogram (EEG) is a mixture of electrical signals coming from multiple
brain sources. This is why application of ICA to EEG has recently become popular, yielding new promising results (e.g., [6]). However, compared with ICA, sparse representation has two important advantages: 1) sources are not assumed to be mutually independent as in ICA, and need not even be stationary; 2) the source number can be larger than the number of
sensors. We believe that sparse representation is a complementary and very prospective
approach in the analysis of EEG.
Here we present the results of testing the usefulness of sparse representation in the analysis
of EEG data based on temporal synchronization between components. The analyzed 14-channel EEG was recorded in an experiment based on a modified Sternberg memory task.
Subjects were asked to memorize numbers successively presented at random positions on
the computer monitor. After a 2.5 s pause followed by a warning signal, a "test number"
was presented. If it was the same as one of the numbers in the memorized set, the subject
had to press the button. This cycle, also including a resting (waiting) period, was repeated
160 times (about 24 min). EEG was sampled at a 256 Hz rate. Here we describe, mainly, the analysis results of one subject's data.
EEG was filtered off-line in the 1-70 Hz range, trials with artifacts were rejected by visual inspection, and a data set including 20 trials with correct response and 20 trials with incorrect response was selected for analysis (1 trial = 2176 points). Thus we obtain a 14 × 87040-dimensional data matrix, denoted by X. Using the sparse representation algorithm proposed in this paper, we decomposed the EEG signals X into 20 components. Denote by S the 20 × 87040-dimensional components matrix, which contains 20 trials for correct response
and 20 trials for incorrect response, respectively.
At first, we calculated the correlation coefficient matrices of X and S, denoted by R^x and R^s, respectively. We found that R^x_{i,j} ∈ (0.18, 1] (the median of |R^x_{i,j}| is 0.5151). In the case of components, the correlation coefficients were considerably lower (the median of |R^s_{i,j}| is 0.2597). And there exist many pairs of components with small correlation coefficients, e.g., R^s_{2,11} = 0.0471, R^s_{8,13} = 0.0023, etc. Furthermore, we found that the higher-order correlation coefficients of these pairs are also very small (e.g., the median
of absolute value of 4th order correlation is 0.1742). We would like to emphasize that,
although the independence principle was not used, many pairs of components were almost
independent.
According to modern brain theories, dynamics of synchronization of rhythmic activities in
distinct neural networks plays a very important role in interactions between them. Thus,
phase synchronization in a pair of almost independent components (s_1, s_14) (R^s_{1,14} = 0.0085, fourth-order correlation coefficient 0.0026) was analyzed using the method described in [7]. The synchronization index is defined by SI(f, t) = max(SPLV(f, t) − S_sur, 0), where SPLV(f, t) is a single-trial phase-locking value at the frequency f and time t, which has been smoothed by a window with a length of 99, and S_sur is the 0.95 percentile of the distribution of 200 surrogates (the 200 pairs of surrogate data are Gaussian distributed).
Fig. 2 shows phase synchrony analysis results. The phase synchrony is observed mainly in the low-frequency band (1-15 Hz) and demonstrates a tendency for task-related variations. Though only ten of the 40 trials are presented due to page space, 32 of the 40 trials show similar characteristics.
In Fig. 3 (a), two averaged synchronization index curves are presented, which are obtained
by averaging synchronization index SI in the range 1-15 Hz and across 20 trials, separately
for correct and incorrect response. Note the time variations of the averaged synchronization index and its higher values for correct responses, especially in the beginning and the
end of the trial (preparation and response periods). To test the significance of the time
and correctness effects, the synchronization index was averaged again for each 128 time
points (0.5 s) for removing artificial correlation between neighboring points and submitted
to Friedman nonparametric ANOVA. The test showed significance of time (p=0.013) and
correctness (p=0.0017) effects. Thus, the phase synchronization between the two analyzed
components was sensitive both to changes in brain activity induced by time-varying task
demands and to correctness-related variations in the brain state. The higher synchronization for correct responses could be related to higher integration of brain systems required
for effective information processing. This kind of phenomenon has also been seen in the same analysis of EEG data from another subject (Fig. 3(b)).
A substantial part of the synchronization between raw EEG channels can be explained by volume conduction effects. Large cortical areas may work as stable unified oscillating systems, and this may account for another large part of the synchronization in raw EEG. This kind of strong synchronization may obscure synchronization that appears only for brief periods, which is of special interest in brain research. To study temporally appearing synchronization, components related to the activity of more or less unified brain sources should be separated from the EEG. Our first results of applying sparse representation to real EEG data support that it can help us to reveal brief periods of synchronization between brain "sources".
Figure 2: Time course of EEG synchrony in single trials. 1st row: time-frequency charts
for 5 single trials with correct response. Synchronization index values are shown for every
frequency and time sample point (f, k). 2nd row: mean synchronization index averaged
across frequencies in range 1-15 Hz, for the same trials as in the 1st row. 3d and 4th rows:
same for five trials with incorrect response. In each subplot, the first line refers to the
beginning of presentation of numbers to be memorized, the second line refers to the end of
the test number.
6 Concluding remarks
In this paper, sparse representation of data matrices and its application to blind source separation were analyzed based on the two-step approach presented in [3]. The l1 norm is used
Figure 3: Time course of EEG synchrony, averaged across trials. Left: same subject as in
previous figure; right: another subject. The curves show mean values of synchronization
index averaged in the range 1-15 Hz and across 20 trials. Black curves are for trials with
correct response, red dotted curves refer to trials with incorrect response. Solid vertical
lines: as in the previous figure.
as a sparsity measure, whereas the l0-norm sparsity measure is considered for comparison and recoverability analysis of BSS. From the equivalence analysis of the l1-norm solution and l0-norm solution presented in this paper, it is evident that if a data vector (observed vector) is generated from a sufficiently sparse source vector, then, with high probability, the l1-norm solution is equal to the l0-norm solution, and the former in turn is equal to the source vector; this can be used for recoverability analysis of blind sparse source separation.
This kind of construct that employs sparse representation can be used in BSS as in [3],
especially in cases in which fewer sensors exist than sources while the source number is
unknown, and sources are not completely independent. Lastly, an application example
for analysis of phase synchrony in real EEG data supports its validity and performance of
the proposed approach. Since the components separated by sparse representation are not
constrained by the condition of complete independence, they can be used in the analysis
of brain synchrony maybe more effectively than components separated by general ICA
algorithms based on independence principle.
References
[1] Chen, S., Donoho, D.L. & Saunders, M.A. (1998) Atomic decomposition by basis pursuit. SIAM Journal on Scientific Computing 20(1):33-61.
[2] Olshausen, B.A., Sallee, P. & Lewicki, M.S. (2001) Learning sparse image codes using a
wavelet pyramid architecture. Advances in Neural Information Processing Systems 13, pp. 887-893. Cambridge, MA: MIT Press.
[3] Zibulevsky M., Pearlmutter B. A., Boll P., & Kisilev P. (2000) Blind source separation by sparse
decomposition in a signal dictionary. In Roberts, S. J. and Everson, R. M. (Eds.), Independent
Components Analysis: Principles and Practice, Cambridge University Press.
[4] Lee, T.W., Lewicki, M.S., Girolami, M. & Sejnowski, T.J. (1999) Blind source separation of
more sources than mixtures using overcomplete representations. IEEE Signal Processing Letter
6(4):87-90.
[5] Donoho, D.L. & Elad, M. (2003) Maximal sparsity representation via l1 minimization. Proc. Nat. Acad. Sci. 100:2197-2202.
[6] Makeig, S., Westerfield, M., Jung, T.P., Enghoff, S., Townsend, J., Courchesne, E. & Sejnowski,
T.J. (2002) Dynamic brain sources of visual evoked responses. Science 295:690-694.
[7] Le Van Quyen, M., Foucher, J., Lachaux, J.P., Rodriguez, E., Lutz, A., Martinerie, J. & Varela,
F.J. (2001) Comparison of Hilbert transform and wavelet methods for the analysis of neuronal
synchrony. Journal of Neuroscience Methods 111:83-98.
| 2379 |@word trial:20 norm:24 nd:1 r:1 simulation:10 decomposition:2 p0:12 solid:1 contains:1 existing:1 recovered:4 bsj:1 si:9 sergei:1 additive:2 n0:2 stationary:1 selected:7 fewer:1 inspection:1 beginning:2 filtered:1 provides:1 node:4 si1:1 five:1 become:2 incorrect:5 prove:2 westerfield:1 introduce:2 ica:4 p1:8 brain:11 decomposed:1 window:1 considering:1 begin:1 estimating:3 fudan:1 what:1 kind:4 unified:2 finding:1 transformation:5 warning:1 temporal:1 every:2 makeig:1 rm:3 unit:1 positive:1 engineering:1 limit:1 black:1 china:1 studied:1 equivalence:4 evoked:1 factorization:3 range:4 averaged:10 unique:7 testing:1 practice:1 j0:1 area:2 asi:1 physiology:1 refers:3 close:5 equivalent:1 demonstrated:1 center:2 attention:1 courchesne:1 variation:3 suppose:4 play:2 programming:8 satisfying:1 observed:5 role:2 electrical:1 ensures:1 cycle:1 zibulevsky:1 mentioned:1 substantial:1 locking:1 asked:1 dynamic:2 solving:6 basis:16 gu:1 completely:1 represented:1 riken:1 separated:4 distinct:1 effective:2 describe:1 sejnowski:2 artificial:1 saunders:1 larger:1 solve:2 elad:1 amari:1 reconstruct:1 transform:2 advantage:2 interaction:1 coming:1 maximal:1 neighboring:1 cao:1 mixing:18 supposed:1 normalize:1 cluster:3 oscillating:1 help:1 illustrate:2 received:1 strong:1 implies:2 memorize:1 girolami:1 correct:7 stochastic:2 packet:8 opinion:1 memorized:2 shun:1 require:1 hold:2 sufficiently:10 considered:1 great:1 m0:2 dictionary:2 omitted:1 estimation:1 proc:1 sensitive:1 correctness:3 successfully:1 tool:1 minimization:2 mit:1 sensor:4 gaussian:1 modified:1 martinerie:1 varying:1 l0:14 mainly:2 contrast:1 uij:2 among:1 denoted:2 constrained:1 integration:1 special:1 equal:4 construct:1 np:1 employ:1 modern:1 randomly:4 composed:2 phase:8 friedman:1 interest:1 mixture:6 analyzed:4 yielding:1 neglecting:1 orthogonal:1 tree:4 overcomplete:8 instance:2 column:2 rxi:2 entry:10 sallee:1 saitama:3 usefulness:2 uniform:1 optimally:1 conduction:1 varies:1 considerably:1 st:4 fundamental:1 siam:1 overestimated:1 lee:1 off:1 again:1 ambiguity:1 recorded:1 successively:1 containing:1 li:1 japan:2 account:1 converted:1 coding:2 coefficient:14 blind:13 performed:1 aca:1 red:1 recover:1 synchrony:7 chart:1 characteristic:1 raw:2 submitted:1 ed:1 frequency:9 pp:1 naturally:1 sampled:1 proved:2 popular:1 lim:1 dimensionality:1 organized:1 hilbert:1 higher:5 response:13 though:1 furthermore:1 rejected:1 stage:2 lastly:1 until:1 correlation:7 rodriguez:1 artifact:1 reveal:1 scientific:1 believe:1 olshausen:1 effect:3 validity:1 normalized:2 true:1 contain:1 former:1 nonzero:10 deal:1 percentile:1 outline:2 evident:2 demonstrate:1 electroencephalogram:1 invisible:1 complete:1 l1:10 pearlmutter:1 image:2 recently:1 shanghai:1 volume:1 discussed:3 resting:1 significant:1 cambridge:2 had:1 stable:1 etc:1 base:3 recent:1 showed:2 p10:2 minimum:1 analyzes:1 greater:2 rs2:1 seen:1 subplot:1 period:5 signal:11 multiple:1 enghoff:1 biophysics:1 iteration:1 normalization:1 pyramid:2 whereas:1 separately:1 median:3 source:59 rest:1 sure:1 subject:13 hz:7 induced:1 rs1:1 integer:1 ciently:1 independence:3 architecture:2 suboptimal:2 rsi:1 kisilev:1 speech:2 remark:5 generally:6 detailed:1 maybe:1 nonparametric:1 band:1 ten:1 exist:3 dotted:1 estimated:18 neuroscience:1 correctly:10 waiting:1 ichi:1 varela:1 monitor:1 anova:1 button:1 year:1 inverse:1 letter:1 fourth:1 almost:4 electronic:1 separation:13 scaling:1 submatrix:1 bound:2 followed:1 activity:3 constraint:1 precisely:2 orthogonality:1 sternberg:1 min:8 
concluding:3 utj:1 department:2 according:3 combination:1 smaller:2 across:4 b:3 s1:9 explained:1 sij:5 vjt:1 mutually:1 discus:6 turn:1 end:4 pursuit:1 everson:1 apply:2 away:1 appearing:2 original:3 andrzej:1 clustering:5 denotes:1 uj:1 especially:2 surrogate:2 sci:1 prospective:1 considers:1 reason:1 yuanqing:1 length:1 code:1 index:9 difficult:1 robert:1 negative:1 lachaux:1 unknown:6 perform:1 vertical:1 quyen:1 situation:1 rn:10 smoothed:1 recoverability:6 boll:1 pair:5 required:1 usually:1 sparsity:4 summarize:1 challenge:1 including:2 memory:1 explanation:1 max:1 suitable:1 townsend:1 pause:1 technology:1 brief:2 temporally:1 cichocki:1 sn:2 lutz:1 review:1 synchronization:21 permutation:1 lv:2 degree:1 sufficient:1 s0:4 principle:3 viewpoint:1 vij:2 row:6 course:2 jung:1 free:1 institute:2 absolute:1 sparse:42 rhythmic:1 distributed:1 van:1 curve:7 bs:8 xn:1 world:1 dimension:1 calculated:1 cortical:1 preprocessing:4 bm:1 far:1 observable:1 emphasize:1 b1:1 assumed:2 xi:1 why:1 reality:1 promising:1 channel:2 robust:3 eeg:20 domain:2 vj:1 sp:2 significance:2 dense:1 linearly:1 noise:4 repeated:2 complementary:1 x1:1 neuronal:1 fig:6 sub:1 position:1 sparsest:3 third:1 wavelet:12 theorem:10 removing:1 exists:1 effectively:1 nat:1 sparseness:1 demand:1 chen:1 visual:2 partially:1 lewicki:2 satisfies:2 ma:1 presentation:1 donoho:2 hard:1 change:2 averaging:1 lemma:1 called:1 experimental:1 tendency:1 select:1 support:2 arises:1 preparation:1 phenomenon:1 |
Asymptotic Convergence of Backpropagation:
Numerical Experiments
Subutai Ahmad
ICSI
1947 Center St.
Berkeley, CA 94704
Gerald Tesauro
IBM Watson Labs.
P. O. Box 704
Yorktown Heights, NY
10598
Yu He
Dept. of Physics
Ohio State Univ.
Columbus, OH 43212
ABSTRACT
We have calculated, both analytically and in simulations, the rate
of convergence at long times in the backpropagation learning algorithm for networks with and without hidden units. Our basic
finding for units using the standard sigmoid transfer function is 1/t convergence of the error for large t, with at most logarithmic corrections for networks with hidden units. Other transfer functions may lead to a slower polynomial rate of convergence. Our analytic
calculations were presented in (Tesauro, He & Ahmad, 1989). Here
we focus in more detail on our empirical measurements of the convergence rate in numerical simulations, which confirm our analytic
results.
1 INTRODUCTION
Backpropagation is a popular learning algorithm for multilayer neural networks
which minimizes a global error function by gradient descent (Werbos, 1974; Parker,
1985; LeCun, 1985; Rumelhart, Hinton & Williams, 1986). In this paper, we examine the rate of convergence of backpropagation late in learning when all of the
errors are small. In this limit, the learning equations become more amenable to analytic study. By expanding in the small differences between the desired and actual
output states, and retaining only the dominant terms, one can explicitly solve for
the leading-order behavior of the weights as a function of time. This is true both for
single-layer networks, and for multilayer networks containing hidden units. We confirm our analysis by empirical measurements of the convergence rate in numerical
simulations.
In gradient-descent learning, one minimizes an error function E according to:
Δw = −ε ∂E/∂w,   (1)
where Δw is the change in the weight vector at each time step, and the learning rate ε is a small numerical constant. The convergence of equation 1 for single-layer
networks with general error functions and transfer functions is studied in section 2.
In section 3, we examine two standard modifications of gradient-descent: the use
of a "margin" variable for turning off the error backpropagation, and the inclusion
of a "momentum" term in the learning equation. In section 4 we consider networks
with hidden units, and in the final section we summarize our results and discuss
possible extensions in future work.
2 CONVERGENCE IN SINGLE-LAYER NETWORKS
The input-output relationship for single-layer networks takes the form:
y_p = g(w · x_p),   (2)
where x_p represents the state of the input units for pattern p, w is the real-valued weight vector of the network, g is the input-output transfer function (for the moment unspecified), and y_p is the output state for pattern p. We assume that the transfer function approaches 0 for large negative inputs and 1 for large positive inputs.
For convenience of analysis, we rewrite equation 1 for continuous time as:
ẇ = −ε Σ_p ∂E_p/∂w = −ε Σ_p (∂E_p/∂y_p)(∂y_p/∂w) = −ε Σ_p (∂E_p/∂y_p) g′(h_p) x_p,   (3)
where E_p is the individual error for pattern p, h_p = w · x_p is the total input activation
of the output unit for pattern p, and the summation over p is for an arbitrary subset
of the possible training patterns. Ep is a function of the difference between the
actual output y_p and the desired output d_p for pattern p. Examples of common error functions are the quadratic error E_p = (y_p − d_p)^2 and the "cross-entropy" error (Hinton, 1987) E_p = d_p log y_p + (1 − d_p) log(1 − y_p).
Instead of solving equation 3 for the weights directly, it is more convenient to work
with the outputs y_p. The outputs evolve according to:
ẏ_p = −ε g′(h_p) Σ_q (∂E_q/∂y_q) g′(h_q) x_q · x_p.   (4)
Let us now consider the situation late in learning when the output states are approaching the desired values. We define new variables η_p = y_p − d_p, and assume
Figure 1: Plots of ln(error) vs. ln(epochs) for single-layer networks learning the
majority function using standard backpropagation without momentum. Four different learning runs starting from different random initial weights are shown. In each
case, the asymptotic behavior is approximately E ∼ 1/t, as seen by comparison
with a reference line of slope -1.
that η_p is small for all p. For reasonable error functions, the individual errors E_p will go to zero as some power of η_p, i.e., E_p ∼ η_p^γ. (For the quadratic error, γ = 2, and for the cross-entropy error, γ = 1.) Similarly, the slope of the transfer function should approach zero as the output state approaches 1 or 0, and for reasonable transfer functions, this will again follow a power law, i.e., g′(h_p) ∼ η_p^β. Using the definitions of η, γ and β, equation 4 becomes:
η̇_p ∼ |η_p|^β Σ_q η_q^{γ−1} |η_q|^β x_q · x_p + higher order   (5)
The absolute value appears because g is a non-decreasing function. Let η_r be the slowest to approach zero among all the η_p's. We then have for η_r:
η̇_r ∼ −η_r^{2β+γ−1}.   (6)
Upon integrating we obtain
η_r ∼ t^{−1/(2β+γ−2)};   E ∼ η_r^γ ∼ t^{−γ/(2β+γ−2)}.   (7)
When β = 1, i.e., g′ ∼ η, the error function approaches zero like 1/t, independent of γ. Since β = 1 for the standard sigmoid function g(x) = (1 + e^{−x})^{−1}, one expects to see 1/t behavior in the error function in this case. This behavior was in fact first
seen in the numerical experiments of (Ahmad, 1988; Ahmad & Tesauro, 1988). The
behavior was obtained at relatively small t, about 20 cycles through the training
set. Figure 1 illustrates this behavior for single-layer networks learning a data set
containing 200 randomly chosen instances of the majority function. In each case,
the behavior at long times in this plot is approximately a straight line, indicating
power-law decrease of the error. The slopes are in each case within a few percent
of the theoretically predicted value of -1.
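This measurement is easy to reproduce. The sketch below trains a single sigmoid unit (with a bias input) on random majority-function data by batch gradient descent and fits the log-log slope of the error tail; the data size, learning rate, and run length are our own choices, so the fitted slope will only approximate -1.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_pat = 15, 200
X = rng.integers(0, 2, size=(n_pat, n_in)).astype(float)
X = np.hstack([X, np.ones((n_pat, 1))])                   # bias input
d = (X[:, :n_in].sum(axis=1) > n_in / 2).astype(float)    # majority targets
w = rng.normal(scale=0.1, size=n_in + 1)
eps, errs = 0.05, []
for _ in range(20000):
    y = 1.0 / (1.0 + np.exp(-X @ w))                      # sigmoid outputs
    errs.append(np.sum((y - d) ** 2))                     # quadratic error
    w -= eps * X.T @ (2 * (y - d) * y * (1 - y))          # dE/dw, g' = y(1-y)
t = np.arange(1, len(errs) + 1)
tail = slice(len(errs) // 2, None)
slope = np.polyfit(np.log(t[tail]), np.log(np.array(errs)[tail]), 1)[0]
print(f"log-log slope of the error tail: {slope:.2f}")    # expect ~ -1
```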
It turns out that $\beta = 1$ gives the fastest possible convergence of the error function.
This is because $\beta < 1$ yields transfer functions which do not saturate at finite values,
and thus are not allowed, while $\beta > 1$ yields slower convergence. For example, if
we take the transfer function to be $g(x) = 0.5\,[1 + (2/\pi) \tan^{-1} x]$, then $\beta = 2$. In
this case, the error function will go to zero as $E \sim t^{-\gamma/(\gamma + 2)}$. In particular, when
$\gamma = 2$, $E \sim 1/\sqrt{t}$.
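The exponents in equation 7 are easy to check by integrating the scalar model $\dot{\eta} = -\eta^{2\beta + \gamma - 1}$ directly. The following sketch is ours, not the authors'; the function name, step size and initial value are arbitrary choices. It fits the late-time slope of $\ln E$ against $\ln t$ and compares it with the predicted exponent $-\gamma/(2\beta + \gamma - 2)$:

```python
import numpy as np

def error_exponent(beta, gamma, eta0=0.5, dt=1e-3, steps=2_000_000):
    """Integrate eta' = -eta**(2*beta + gamma - 1) by Euler's method and
    fit the late-time slope of log E vs log t, where E = eta**gamma."""
    eta = eta0
    ts, es = [], []
    for i in range(1, steps + 1):
        eta -= dt * eta ** (2 * beta + gamma - 1)
        if i % 10_000 == 0:
            ts.append(i * dt)
            es.append(eta ** gamma)
    # fit the slope over the second half of the trajectory
    t, e = np.log(ts[len(ts) // 2:]), np.log(es[len(es) // 2:])
    slope = np.polyfit(t, e, 1)[0]
    return slope, -gamma / (2 * beta + gamma - 2)

print(error_exponent(beta=1, gamma=2))  # ~(-1.0, -1.0): sigmoid-like case
print(error_exponent(beta=2, gamma=2))  # ~(-0.5, -0.5): arctan-like case
```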
3 MODIFICATIONS OF GRADIENT DESCENT
One common modification to strict gradient-descent is the use of a "margin" variable
$\mu$ such that, if the difference between network output and teacher signal is smaller
than $\mu$, no error is backpropagated. This is meant to prevent the network from
devoting resources to making its output arbitrarily close to the teacher signal, which
is usually unnecessary. It is clear from the structure of equations 5, 6 that the margin
will not affect the basic $1/t$ error convergence, except in a rather trivial way. When
a margin is employed, certain driving terms on the right-hand side of equation 5
will be set to zero as soon as they become small enough. However, as long as some
non-zero driving terms are present, the basic polynomial solution of equation 7 will
be unaltered. Of course, when all the driving terms disappear because they are all
smaller than the margin, the network will stop learning, and the error will remain
constant at some positive value. Thus the predicted behavior is $1/t$ decrease in the
error followed eventually by a rapid transition to constant non-zero error. This
agrees with what is seen numerically in Figure 2.
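The predicted plateau is easy to reproduce in the same toy integration; the margin value, defaults and function name below are our own illustrative choices:

```python
def error_with_margin(mu, beta=1, gamma=2, eta0=0.5, dt=1e-3, steps=500_000):
    """Same scalar dynamics as before, but the driving term is switched
    off once the output error falls below the margin mu; the error then
    decays like 1/t and freezes at the constant value mu**gamma."""
    eta, trace = eta0, []
    for i in range(1, steps + 1):
        if abs(eta) > mu:                  # backpropagate only above margin
            eta -= dt * eta ** (2 * beta + gamma - 1)
        if i % 5_000 == 0:
            trace.append((i * dt, eta ** gamma))
    return trace
```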
Another popular generalization of equation 1 includes a "momentum" term:

$$\Delta w(t) \;=\; -\epsilon\, \frac{\partial E}{\partial w}(t) \;+\; \alpha\, \Delta w(t - 1) \qquad (8)$$
In continuous time, this takes the form:

$$\alpha\, \ddot{w} \;+\; (1 - \alpha)\, \dot{w} \;=\; -\epsilon\, \frac{\partial E}{\partial w} \qquad (9)$$
Turning this into an equation for the evolution of outputs gives:

$$\alpha\, \ddot{y}_p \;-\; \alpha\, g''(h_p) \left[\frac{\dot{y}_p}{g'(h_p)}\right]^2 \;+\; (1 - \alpha)\, \dot{y}_p \;=\; -\epsilon\, g'(h_p) \sum_q \frac{\partial E_q}{\partial y_q}\, g'(h_q)\; x_q \cdot x_p \qquad (10)$$
Once again, expanding $y_p$, $E_p$ and $g'$ in small $\eta_p$ yields a second-order differential
equation for $\eta_p$ in terms of a sum over other $\eta_q$. As in equation 6, the sum will be
[Figure 2 plot: curves for margin values 0 and 0.025 as indicated; axes as in Figure 1.]
Figure 2: Plot of ln(error) vs. ln(epochs) for various values of margin variable $\mu$
as indicated. In each case there is a $1/t$ decrease in the error followed by a sudden
transition to constant error. This transition occurs earlier for larger values of $\mu$.
controlled by some dominant term $\eta_r$, and the equation for this term is:

$$C_1\, \ddot{\eta}_r \;+\; C_2\, \frac{\dot{\eta}_r^2}{\eta_r} \;+\; C_3\, \dot{\eta}_r \;\sim\; \eta_r^{\,2\beta + \gamma - 1} \qquad (11)$$
where $C_1$, $C_2$ and $C_3$ are numerical constants. For polynomial solutions, $\eta_r \sim t^z$,
the first two terms are of order $t^{z-2}$, and can be neglected relative to the third term,
which is of order $t^{z-1}$. The resulting equation thus has exactly the same form as
in the zero momentum case of section 2, and therefore the rate of convergence is
the same as in equation 7. This is demonstrated numerically in Figure 3. We can
see that the error behaves as $1/t$ for large $t$ regardless of the value of the momentum
constant $\alpha$. Furthermore, although it is not required by the analytic theory, the
numerical prefactor appears to be the same in each case.
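As a rough numerical check (our own construction, not from the paper), one can integrate a scalar analogue of equation 9 with the power-law driving term, neglecting the subdominant $\dot{\eta}^2/\eta$ term of equation 11:

```python
def error_with_momentum(alpha, beta=1, gamma=2, eta0=0.5, dt=1e-3, steps=2_000_000):
    """Integrate alpha*eta'' + (1 - alpha)*eta' = -eta**(2*beta + gamma - 1),
    a scalar stand-in for equations 9-11, and sample (t, E) with E = eta**gamma.
    For 0 < alpha < 1 the log-log slope of the trace approaches -1 when
    beta = 1 and gamma = 2, matching the zero-momentum case."""
    eta, v, trace = eta0, 0.0, []          # v is d(eta)/dt
    for i in range(1, steps + 1):
        a = (-eta ** (2 * beta + gamma - 1) - (1 - alpha) * v) / alpha
        v += dt * a
        eta += dt * v
        if i % 10_000 == 0:
            trace.append((i * dt, eta ** gamma))
    return trace
```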
Finally, we have also considered the effect on convergence of schemes for adaptively
altering the learning rate constant $\epsilon$. It was shown analytically in (Tesauro, He &
Ahmad, 1989) that for the scheme proposed by Jacobs (1988), in which the learning
rate could in principle increase linearly with time, the error would decrease as $1/t^2$
for sigmoid units, instead of the $1/t$ result for fixed $\epsilon$.
4 CONVERGENCE IN NETWORKS WITH HIDDEN UNITS
We now consider networks with a single hidden layer. In (Tesauro, He & Ahmad,
1989), it was shown that if the hidden units saturate late in learning, then the
convergence rate is no different from the single-layer rate. This should be typical
Figure 3: Plot of ln(error) vs. ln(epochs) for single-layer networks learning the
majority function, with momentum constant $\alpha = 0, 0.25, 0.5, 0.75, 0.99$. Each run
starts from the same random initial weights. Asymptotic $1/t$ behavior is obtained
in each case, with the same numerical prefactor.
of what usually happens. However, assuming for purposes of argument that the
hidden units do not saturate, when one goes through a small $\eta$ expansion of the
learning equation, one obtains a coupled system of equations of the following form:

$$\dot{\eta} \;\sim\; \eta^{\,2\beta + \gamma - 1}\, \left[1 + \Omega^2\right] \qquad (12)$$

$$\dot{\Omega} \;\sim\; \eta^{\,\gamma + \beta - 1} \qquad (13)$$
where $\Omega$ represents the magnitude of the second layer weights, and for convenience
all indices have been suppressed and all terms of order 1 have been written simply
as 1.
For $\beta > 1$, this system has polynomial solutions of the form $\eta \sim t^z$, $\Omega \sim t^{\lambda}$, with
$z = -3/(3\gamma + 4\beta - 4)$ and $\lambda = z(\gamma + \beta - 1) + 1$. It is interesting to note that these
solutions converge slightly faster than in the single-layer case. For example, with
$\gamma = 2$ and $\beta = 2$, $\eta \sim t^{-3/10}$ in the multilayer case, but as shown previously, $\eta$ goes
to zero only as $t^{-1/4}$ in the single-layer case. We emphasize that this slight speed-up
will only be obtained when the hidden unit states do not saturate. To the extent
that the hidden units saturate and their slopes become small, the convergence rate
will return to the single-layer rate.
When $\beta = 1$ the above polynomial solution is not possible. Instead, one can verify
that the following is a self-consistent leading order solution to equations 12, 13:

$$\eta \;\sim\; \left[t\, (\ln t)^{2/3}\right]^{-1/\gamma}, \qquad \Omega \;\sim\; (\ln t)^{1/3} \qquad (14)$$
5."
2.5'
....
o Hidden Units
-2 . 51
-5."
3 Hidden Units
10 Hidden Units
50 Hidden Units
-7.51
-n ...
2
6
7
Figure 4: Plot of ln(error) vs. ln(epochs) for networks with varying numbers of
hidden units (as indicated) learning the majority function data set. Approximate $1/t$
behavior is obtained in each case.
$$E \;\sim\; \eta^{\gamma} \;\sim\; t^{-1}\, (\ln t)^{-2/3} \qquad (15)$$
Recall that in the single-layer case, $\eta \sim t^{-1/\gamma}$. Therefore, the effect of multiple layers
could provide at most only a logarithmic speed-up of convergence when the hidden
units do not saturate. For practical purposes, then, we expect the convergence of
networks with hidden units to be no different empirically from networks without
hidden units. This is in fact what our simulations find, as illustrated in Figure 4.
5 DISCUSSION
We have obtained results for the asymptotic convergence of gradient-descent learning which are valid for a wide variety of error functions and transfer functions. We
typically expect the same rate of convergence to be obtained regardless of whether
or not the network has hidden units. However, it may be possible to obtain a slight
polynomial speed-up when $\beta > 1$ or a logarithmic speed-up when $\beta = 1$. We point
out that in all cases, the sigmoid provides the maximum possible convergence rate,
and is therefore a "good" transfer function to use in that sense.
We have not attempted analysis of networks with multiple layers of hidden units;
however, the analysis of (Tesauro, He & Ahmad, 1989) suggests that, to the extent
that the hidden unit states saturate and the g' factors vanish, the rate of convergence
would be no different even in networks with arbitrary numbers of hidden layers.
Another important finding is that the expected rate of convergence does not depend
on the use of all $2^n$ input patterns in the training set. The same behavior should
be seen for general subsets of training data. This is also in agreement with our
numerical results, and with the results of (Ahmad, 1988; Ahmad & Tesauro, 1988).
In conclusion, a combination of analysis and numerical simulations has led to insight
into the late stages of gradient-descent learning. It might also be possible to extend
our approach to times earlier in the learning process, when not all of the errors
are small. One might also be able to analyze the numbers, sizes and shapes of the
basins of attraction for gradient-descent learning in feed-forward networks. Another
important issue is the behavior of the generalization performance, i.e., the error on
a set of test patterns not used in training, which was not addressed in this paper.
Finally, our analysis might provide insight into the development of new algorithms
which might scale more favorably than backpropagation.
References
S. Ahmad. (1988) A study of scaling and generalization in neural networks. Master's
Thesis, Univ. of Illinois at Urbana-Champaign, Dept. of Computer Science.
S. Ahmad & G. Tesauro. (1988) Scaling and generalization in neural networks: a
case study. In D. S. Touretzky et al. (eds.), Proceedings of the 1988 Connectionist
Models Summer School, 3-10. San Mateo, CA: Morgan Kaufmann.
G. E. Hinton. (1987) Connectionist learning procedures. Technical Report No.
CMU-CS-87-115, Dept. of Computer Science, Carnegie-Mellon University.
R. A. Jacobs. (1988) Increased rates of convergence through learning rate adaptation. Neural Networks 1:295-307.
Y. Le Cun. (1985) A learning procedure for asymmetric network. Proceedings of
Cognitiva (Paris) 85:599-604.
D. B. Parker. (1985) Learning-logic. Technical Report No. TR-47, MIT Center for
Computational Research in Economics and Management Science.
D. E. Rumelhart, G. E. Hinton, & R. J. Williams. (1986) Learning representations
by back-propagating errors. Nature 323:533-536.
G. Tesauro, Y. He & S. Ahmad. (1989) Asymptotic convergence of backpropagation.
Neural Computation 1:382-391.
P. Werbos. (1974) Ph. D. Thesis, Harvard University.
1,518 | 2,380 | Eye Movements for Reward Maximization
Nathan Sprague
Computer Science Department
University of Rochester
Rochester, NY 14627
[email protected]
Dana Ballard
Computer Science Department
University of Rochester
Rochester, NY 14627
[email protected]
Abstract
Recent eye tracking studies in natural tasks suggest that there is a tight
link between eye movements and goal directed motor actions. However,
most existing models of human eye movements provide a bottom up account that relates visual attention to attributes of the visual scene. The
purpose of this paper is to introduce a new model of human eye movements that directly ties eye movements to the ongoing demands of behavior. The basic idea is that eye movements serve to reduce uncertainty
about environmental variables that are task relevant. A value is assigned
to an eye movement by estimating the expected cost of the uncertainty
that will result if the movement is not made. If there are several candidate
eye movements, the one with the highest expected value is chosen. The
model is illustrated using a humanoid graphic figure that navigates on a
sidewalk in a virtual urban environment. Simulations show our protocol
is superior to a simple round robin scheduling mechanism.
1 Introduction
This paper introduces a new framework for understanding the scheduling of human eye
movements. The human eye is characterized by a small, high resolution fovea. The importance of foveal vision means that fast ballistic eye movements called saccades are made at
a rate of approximately three per second to direct gaze to relevant areas of the visual field.
Since the location of the fovea provides a powerful clue to what information the visual
system is processing, understanding the scheduling and targeting of eye movements is key
to understanding the organization of human vision.
The recent advent of portable eye-trackers has made it possible to study eye movements
in everyday behaviors. These studies show that behaviors such as driving [1, 2] or navigating a city sidewalk [3] show rapid alternating saccades to different targets indicative of
competing perceptual demands.
This paper introduces a model of how humans select visual targets in terms of the value of
the information obtained. Previous work has modeled the direction of the eyes to targets
primarily in terms of visual saliency [4]. Such models fail to incorporate the role of task
demands and do not address the problem of resource contention. In contrast, our underlying
premise is that much of routine human behavior can be understood in the framework of
reward maximization. In other words, humans choose actions by trading off the cost of the
actions versus their benefits. Experiments show that the extent to which humans can make
such trade-offs is very refined [5]. To keep track of the value of future real rewards such as
money or calories, humans use internal chemical rewards such as dopamine [6].
One obvious way of modeling eye movement selection is to use a reinforcement learning
strategy directly. However, standard reinforcement learning algorithms are are best suited
to handling actions that have direct consequences for a task. Actions such as eye movements are more difficult to put in a reinforcement learning framework because they have
indirect consequences: they do not change the state of the environment; they serve only
to obtain information. We show a way of overcoming this difficulty while preserving the
notion of reward maximization in the scheduling of eye movements. The basic idea is that
eye movements serve to reduce uncertainty about environmental variables that are relevant
to behavior. A value is assigned to an eye movement by estimating the expected cost of the
uncertainty that will result if the movement is not made. If there are several candidate eye
movements, the one with the highest potential loss is chosen.
We demonstrate these ideas through the example of a virtual human navigating through
a rendered environment. The agent is faced with multiple simultaneous goals including
walking along a sidewalk, picking up litter, and avoiding obstacles. He must schedule
simulated eye movements so as to maximize his reward across the set of goals. We model
eye movements as abstract sensory actions that serve to retrieve task relevant information
from the environment. Our focus is on temporal scheduling; we are not concerned with the
spatial targeting of eye movements. The purpose of this paper is to recast the question of
how eye movements are scheduled, and to propose a possible answer. Experiments on real
humans will be required to determine if this model accurately describes human behavior.
2 Learning Visually Guided Behaviors
Our model of visual control is built around the concept of visual behaviors. Here we borrow the usage of behavior from the robotics community to refer to a sensory-action control
module that is responsible for handling a single narrowly defined goal [7]. The key advantage of the behavior based approach is compositionality: complex control problems can be
solved by sequencing and combining simple behaviors. For the purpose of modeling human performance it is assumed that each behavior has the ability to direct the eye, perform
appropriate visual processing to retrieve the information necessary for performance of the
behavior?s task, and choose an appropriate course of action.
As long as only one goal is active at a time the behavior based approach is straightforward:
the appropriate behavior is put in control and has all the machinery necessary to pursue the
goal. However it is often the case that multiple goals must be addressed at once. In this
case there is need for arbitration mechanisms to distribute control among the set of active
behaviors. In the following sections we will describe how physical control is arbitrated,
and building on that framework, how eye movements are arbitrated.
Our approach to designing behaviors is to model each behavior?s task as a Markov decision
process and then find good policies using reinforcement learning. An MDP is described
by a 4-tuple $(S, A, T, R)$, where $S$ is the state space, $A$ is the action space, and $T(s, a, s')$
is the transition function that indicates the probability of arriving in state $s'$ when action $a$
is taken in state $s$. The reward function $R(s, a)$ denotes the expected one-step payoff for
taking action a in state s. The goal of reinforcement learning algorithms is to discover an
optimal policy ? ? (s) that maps states to actions so as to maximize discounted long term
reward. Generally, we do not assume prior knowledge of R and T .
One approach to finding optimal policies for MDPs is based on discovering the optimal
value function Q(s, a). This function denotes the expected discounted return if action a is
taken in state s and the optimal policy is followed thereafter. If Q(s, a) is known then the
learning agent can behave optimally by always choosing $\arg\max_a Q(s, a)$.
There are a number of algorithms for learning $Q(s, a)$ [8, 9]; the simplest is to take random
actions in the environment and use the Q-learning update rule:

$$Q(s, a) \;\leftarrow\; (1 - \alpha)\, Q(s, a) \;+\; \alpha \left( r + \gamma \max_{a'} Q(s', a') \right)$$

Here $\alpha$ is a learning rate parameter, and $\gamma$ is a term that determines how much to discount
future reward. As long as each state-action pair is visited infinitely often in the limit, this
update rule is guaranteed to converge to the optimal value function.
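As a concrete illustration (our own sketch, not code from the paper), the rule is a one-line tabular backup; the action names anticipate the shared action space of section 3:

```python
ACTIONS = ('left15', 'straight', 'right15')   # shared action space (section 3)

def q_learning_update(Q, s, a, r, s_next, alpha=0.1, gamma=0.9):
    """One tabular Q-learning backup. Q is a dict mapping
    (state, action) -> value; missing entries default to 0."""
    target = r + gamma * max(Q.get((s_next, b), 0.0) for b in ACTIONS)
    Q[(s, a)] = (1 - alpha) * Q.get((s, a), 0.0) + alpha * target
```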
A benefit of knowing the value function for each behavior is that the Q-values can be used
to handle the arbitration problem. Here we assume that the behaviors share an action space.
In order to choose a compromise action, it is assumed that the Q-function for the composite
task is approximately equal to the sum of the Q-functions for the component tasks:
$$Q(s, a) \;\approx\; \sum_{i=1}^{n} Q_i(s_i, a), \qquad (1)$$
where $Q_i(s_i, a)$ represents the Q-function for the $i$th active behavior. The idea of using
Q-values for multiple goal arbitration was independently introduced in [10] and [11].
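In code, this arbitration is just an argmax over summed Q-values. The sketch below is ours and assumes each behavior object exposes its Q-function and its own task-specific state estimate:

```python
def arbitrate(behaviors, actions):
    """Pick the action with the largest summed Q-value (equation 1).
    Each behavior carries a Q-function and its own task-specific state."""
    return max(actions, key=lambda a: sum(b.Q(b.state, a) for b in behaviors))
```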
The real world interactions that this model is meant to address are best expressed through
continuous rather than discrete state variables. The theoretical foundations of value based
continuous state reinforcement learning are not as well established as for the discrete state
case. However, empirical results suggest that good results can be obtained by using a function approximator such as a CMAC along with the Sarsa(0) learning rule [12]:

$$Q(s, a) \;\leftarrow\; (1 - \alpha)\, Q(s, a) \;+\; \alpha \left( r + \gamma\, Q(s', a') \right)$$
This rule is nearly identical to the Q-learning rule, except that the max action is replaced
by the action that is actually observed on the next step. The Q-functions used throughout
this paper are learned using this approach. For reasons of space this paper will not include
a complete description of the training procedure used to obtain the Q-functions for the
sidewalk task. More details can be found in [13] and [14].
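With a linear approximator such as a CMAC, $Q(s, a) = w_a \cdot \phi(s)$ and the same rule becomes a semi-gradient weight update. The sketch below is our own; $\phi$ stands in for whatever sparse tiling of the 2-D state space is used:

```python
import numpy as np

def sarsa0_update(w, phi, a, r, phi_next, a_next, alpha=0.1, gamma=0.9):
    """Semi-gradient Sarsa(0) with Q(s, a) = w[a] . phi(s). w maps each
    action to a weight vector; phi/phi_next are feature vectors for the
    current and next states; a_next is the action actually taken next."""
    delta = r + gamma * (w[a_next] @ phi_next) - (w[a] @ phi)
    w[a] += alpha * delta * phi
```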
3 A Composite Task: Sidewalk Navigation
The components of the sidewalk navigation task are to stay on the sidewalk, avoid obstacles, and pick up litter. This was chosen as a good example of a task with multiple goals
and conflicting demands.
Our sidewalk navigation model has three behaviors: sidewalk following, obstacle avoidance, and litter collection. These behaviors share an action space composed of three actions: 15° right turn, 15° left turn, and no turn (medium gray, dark gray, and light gray
arrows in Figure 1). During the sidewalk navigation task the virtual human walks forward
at a steady rate of 1.3 meters per second. Every 300ms a new action is selected according
to the action selection mechanism summarized in Equation (1).
Each of the three behaviors has a two dimensional state space. For obstacle avoidance
the state space is comprised of the distance and angle, relative to the agent, to the nearest
obstacle. The litter collection behavior uses the same parameterization for the nearest litter
item. For the sidewalk following behavior the state space is the angle of the center-line of
the sidewalk relative to the agent, as well as the signed distance to the center of the sidewalk,
where positive values indicate that the agent is to the left of the center, and negative values
indicate that the agent is to the right. All behaviors use the log of distance in order to
[Figure 1 panels a)-f): value surfaces and policies plotted over each behavior's (angle, distance) state space.]
Figure 1: Q-values and policies for the three behaviors. Figures a)-c) show $\max_a Q(s, a)$
for the three behaviors: a) obstacle avoidance, b) sidewalk following and c) litter collection. Figures d)-f) show the corresponding policies for the three behaviors. Empty regions
indicate areas that were not seen often enough during training to compute reliable values.
devote more of the state representation to areas near the agent. The agent receives two
units of reward for every item of litter collected, one unit for every time step he remains on
the sidewalk, and four units for every time step he does not collide with an obstacle. Figure
1 shows a representation of the Q-functions and policies for the three behaviors.
The behaviors use simple sensory routines to retrieve the relevant state information from
the environment. The sidewalk following behavior searches for pixels at the border of the
sidewalk and the grass, and finds the most prominent line using a hough transform. The
litter collection routine uses color based matching to find the location of litter items. The
obstacle avoidance routines refers to the world model directly to compute a rough depth
map of the area ahead, and from that extracts the position of the nearest obstacle.
4 Eye Movements and Internal Models
The discussion above assumed that the MDPs have perfect state information. In order to
model limited sensory capacity this assumption must be weakened. Without perfect information the component tasks are most accurately described as partially observable MDPs.
The Kalman filter [15] solves the problem of tracking a discrete time, continuous state variable in the face of noise in both measurements and in the underlying process being tracked.
It allows us to represent the consequences of not having the most recent information from
an eye movement. The Kalman filter has two properties that are important in this respect.
One is that it not only maintains an estimate of the state variable, it also maintains an estimate of the uncertainty. With this information the behaviors may treat their state estimates
as continuous random variables with known probability distributions. The other useful
property of the Kalman filter is that it is able to propagate state estimates in the absence of
sensory information. The state estimate is updated according to the system dynamics, and
the uncertainty in the estimate increases according to the known process noise.
In order to simulate the fact that only one area of the visual field may be foveated, only
one behavior is allowed access to perception during each 300ms time step. That behavior
updates its Kalman filter with a measurement, while the others propagate their estimates
and track the increase in uncertainty. In order to simulate noise in the estimator, the state
estimates are corrupted with zero-mean normally distributed noise at each time step.
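The two Kalman operations the behaviors rely on, propagate-without-measurement and update-with-measurement, look like the following sketch (ours; $A$, $Q$, $H$ and $R$ stand for dynamics and noise models that the paper does not spell out):

```python
import numpy as np

def kf_predict(mu, P, A, Q):
    """Propagate a state estimate one step without a measurement: the
    mean follows the dynamics, the covariance grows by the process
    noise Q (this is what the unattended behaviors do)."""
    return A @ mu, A @ P @ A.T + Q

def kf_update(mu, P, z, H, R):
    """Fold in a measurement z (the behavior that received the simulated
    eye movement); the uncertainty shrinks."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    mu = mu + K @ (z - H @ mu)
    P = (np.eye(len(mu)) - K @ H) @ P
    return mu, P
```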
Since the agent does not have perfectly up to date state information, he must select the
best action given his current estimates of the state. A reasonable way of selecting an action
under uncertainty is to select the action with the highest expected return. Building on
Equation (1) we have the following: $a_E = \arg\max_a E\left[\sum_{i=1}^{n} Q_i(s_i, a)\right]$, where the expectation
is computed over the state variables for the behaviors. By distributing the expectation, and
making a slight change to the notation, we can write this as:

$$a_E \;=\; \arg\max_a \sum_{i=1}^{n} Q_i^E(s_i, a), \qquad (2)$$

where $Q_i^E$ refers to the expected Q-value of the $i$th behavior. In practice we will estimate
expectations by sampling from the distributions provided by the Kalman filter.
Selecting the action with the highest expected return does not guarantee that the agent will
choose the best action for the true state of the environment. Whenever the agent chooses an
action that is sub-optimal for the true state of the environment, he can expect to lose some
return. We can estimate the expected loss as follows:

$$\text{loss} \;=\; E\left[\max_a \sum_i Q_i(s_i, a)\right] \;-\; E\left[\sum_i Q_i(s_i, a_E)\right]. \qquad (3)$$
The term on the left-hand side of the minus sign expresses the expected return that the agent
would receive if he were able to act with knowledge of the true state of the environment.
The term on the right expresses the expected return if the agent is forced to choose an action
based on his state estimate. The difference between the two can be thought of as the cost
of the agent?s current uncertainty. This value is guaranteed to be positive, and may be zero
if all possible states would result in the same action choice.
The total expected loss does not help to select which of the behaviors should be given
access to perception. To make this selection, the loss value needs to be broken down into
the losses associated with the uncertainty for each particular behavior $b$:

$$\text{loss}_b \;=\; E\left[\max_a \left( Q_b(s_b, a) + \sum_{i \in B,\, i \neq b} Q_i^E(s_i, a) \right)\right] \;-\; \sum_i Q_i^E(s_i, a_E). \qquad (4)$$
Here the expectation on the left is computed only over sb . The value on the left is the
expected return if sb were known, but the other state variables were not. The value on
the right is the expected return if none of the state variables are known. The difference is
interpreted as the cost of the uncertainty associated with sb .
Given that the Q functions are known, and that the Kalman filters provide distributions over
the state variables, it is straightforward to estimate lossb for each behavior b by sampling.
This value is then used to select which behavior will make an eye movement.
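A direct way to realize this is Monte-Carlo sampling from the Kalman posteriors. The sketch below is ours (helper names and sample counts are arbitrary), but it follows equations 2 and 4:

```python
import numpy as np

def expected_q(Q, mu, P, a, n=100):
    """Estimate Q^E(s, a) = E[Q(s, a)] by sampling the Kalman posterior."""
    return np.mean([Q(s, a) for s in np.random.multivariate_normal(mu, P, n)])

def behavior_loss(b, beliefs, Qs, actions, n=100):
    """Monte-Carlo estimate of equation 4. beliefs[i] = (mu_i, P_i) is
    behavior i's Kalman posterior over s_i; Qs[i] is its Q-function."""
    m = len(Qs)
    qe = {(i, a): expected_q(Q, mu, P, a, n)
          for i, (Q, (mu, P)) in enumerate(zip(Qs, beliefs)) for a in actions}
    a_E = max(actions, key=lambda a: sum(qe[i, a] for i in range(m)))
    mu_b, P_b = beliefs[b]
    known = np.mean([max(Qs[b](s_b, a) +
                         sum(qe[i, a] for i in range(m) if i != b)
                         for a in actions)
                     for s_b in np.random.multivariate_normal(mu_b, P_b, n)])
    return known - sum(qe[i, a_E] for i in range(m))
```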
Figure 2 gives an example of several steps of the sidewalk task, the associated eye movements, and the state estimates. The eye movements are allocated to reduce the uncertainty
where it has the greatest potential negative consequences for reward. For example, the
agent fixates the obstacle as he draws close to it, and shifts perception to the other two
behaviors when the obstacle has been safely passed.
It is important to recognize that the procedures outlined above for selecting actions and
allocating perception are only approximations. Since the Q-tables were trained under the
assumption of perfect state information, they will be somewhat inaccurate under conditions
of partial observability. Note also that the behaviors actually employ multiple Kalman
filters. For example if the obstacle avoidance behavior sees two obstacles it will initialize a
filter for each. However, only the single closest object is used to determine the state for the
purpose of action selection and scheduling eye movements.
Figure 2: a) An overhead view of the virtual agent during seven time steps of the sidewalk
navigation task. The two darker cubes are obstacles, and the lighter cube is litter. The
rays projecting from the agent represent eye movements; gray rays correspond to obstacle
avoidance, black rays correspond to sidewalk following, and white correspond to litter collection. b) State estimates during the same seven time steps. The top row shows the agent?s
estimates of the obstacle location. The axes here are the same as those presented in Figure
1. The light gray regions correspond to the 90% confidence bounds before any perception
has taken place. When present, the black regions correspond to the 90% confidence bounds
after an eye movement has been made. The second and third rows show the corresponding
information for the sidewalk following and litter collection tasks.
5 Results
In order to test the effectiveness of the loss minimization approach, we compare it to two
alternative scheduling mechanisms: round robin, which sequentially rotates through the
three behaviors, and random, which makes a uniform random selection on each time step.
Round robin might be expected to perform well in this task, because it is optimal in terms
of minimizing long waits across the three behaviors.
The three strategies are compared under three different conditions. In the default condition
exactly one behavior is given access to perception on each time step. The other two conditions investigate the performance of the system under increasing perceptual load. During
these trials 33% or 66% of steps are randomly selected to have no perceptual action at all.
For the default condition the average per-step reward is .034 higher for the loss minimization scheduling than for the round robin scheduling. Two factors make this difference more
substantial than it first appears. The first is that the reward scale for this task does not start
at zero: when taking completely random actions the agent receives an average of 4.06 units
of reward per step. Therefore the advantage of the loss minimization approach is a full
3.6% over round robin, relative to baseline performance.
The second factor to consider is the sheer number of eye movements that a human makes
over the course of a day: a conservative estimate is 150,000. The average benefit of properly
scheduling a single eye movement may be small, but the cumulative benefit is enormous. To
[Figure 3 plot: average reward (4.8-5.1) vs. percent eye movements blocked (0%, 33%, 66%) for loss-min, round robin and random scheduling.]
Figure 3: Comparison of loss minimization scheduling to round robin and random strategies. For each condition the agent is tested for 500 trials lasting 20 seconds each. In the
33% and 66% conditions the corresponding percentage of eye movements are randomly
blocked, and no sensory input is allowed. The error bars represent 95% confidence intervals. The dashed line at 5.037 indicates the average reward received when all three
behaviors are given access to perception at each time step. This can be seen as an upper
bound on the possible reward.
make this point more concrete, notice that over a period of one hour of sidewalk navigation
the agent will lose around 370 units of reward if he uses round robin instead of the loss
minimization approach. In the currency of reward this is equal to 92 additional collisions
with obstacles, 184 missed litter items, or two additional minutes spent off the sidewalk.
Under increasing perceptual load the loss minimization strategy begins to lose its advantage
over the other two techniques. This could be because the Q-tables become increasingly
inaccurate as the assumption of perfect state information becomes less valid.
6 Related Work
The action selection mechanism from Equation (2) is essentially a continuous state version
of the Q-MDP algorithm for finding approximate solutions to POMDPs [16]. Many discrete
POMDP solution and approximation techniques are built on the idea of maintaining a belief
state, which is a probability distribution over the unobserved state variables. The idea
behind the Q-MDP algorithm is to first solve the underlying MDP, and then choose actions
according to $\arg\max_a \sum_s \text{bel}(s)\, Q(s, a)$, where $\text{bel}(s)$ is the probability that the system
is in state $s$ and $Q(s, a)$ is the optimal value function for the underlying MDP. The main
drawback of the Q-MDP algorithm is that it does not specifically seek out actions that
reduce uncertainty. In this work the Kalman filters serve precisely the role of maintaining
a continuous belief state, and the problem of reducing uncertainty is handled through the
separate mechanism of choosing eye movements to minimize loss.
The gaze control system introduced in [17] also addresses the problem of perceptual arbitration in the face of multiple goals. The approach taken in that paper has many parallels to
the work presented here, although the focus is on robot control rather than human vision.
7 Discussion and Conclusions
Any system for controlling competing visuo-motor behaviors that all require access to a
sensor such as the human eye faces a resource allocation problem. Gaze cannot be two
places at once and therefore has to be shared among the concurrent tasks. Our model
resolves this difficulty by computing the cost of having inaccurate state information for
each active behavior. Reward can be maximized by allocating gaze to the behavior that
stands to lose the most. As the simulations show, the performance of the algorithm is
superior both to the round robin protocol and to a random allocation strategy.
It is possible for humans to examine locations in the visual scene without overt eye movements. In such cases our formalism would still be relevant to the covert allocation of visual
resources.
Finally, although the expected loss protocol is developed for eye movements, the computational strategy is very general and extends to any situation where there are multiple active
behaviors that must compete for information gathering sensors.
Acknowledgments
This material is based upon work supported by grant number P200A000306 from the Department of Education, grant number 5P41RR09283 from the National Institutes of Health
and a grant number E1A-0080124 from the National Science Foundation.
References
[1] M. F. Land and D. Lee. Where we look when we steer. Nature, 377, 1994.
[2] H. Shinoda, M. Hayhoe, and A. S Shrivastava. The coordination of eye, head, and hand movements in a natural task. Vision Research, 41, 2001.
[3] D. Ballard and N. Sprague. Attentional resource allocation in extended natural tasks [abstract].
Journal of Vision, 2(7):568a, 2002.
[4] L. Itti and C. Koch. Computational modeling of visual attention. Nature Reviews Neuroscience,
2(3):194?203, Mar 2001.
[5] L. Maloney and M. Landy. When uncertainty matters: the selection of rapid goal-directed
movements [abstract]. Journal of Vision, (to appear).
[6] P. Waelti, A. Dickinson, and W. Schultz. Dopamine responses comply with basic assumptions
of formal learning theory. Nature, 412, July 2001.
[7] Rodney A. Brooks. A robust layered control system for a mobile robot. IEEE Journal of
Robotics and Automation, RA-2(1):14?23, April 1986.
[8] Leslie P. Kaelbling, Michael L. Littman, and Andrew W. Moore. Reinforcement learning: A
survey. Journal of Artificial Intelligence Research, 4:237?285, 1996.
[9] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[10] M. Humphrys. Action selection methods using reinforcement learning. In Proceedings of the
Fourth International Conference on Simulation of Adaptive Behavior, 1996.
[11] J. Karlsson. Learning to Solve Multiple Goals. PhD thesis, University of Rochester, 1997.
[12] R. Sutton. Generalization in reinforcement learning: Successful examples using sparse coarse
coding. In Advances in Neural Information Processing Systems, volume 8, 1996.
[13] N. Sprague and D. Ballard. Multiple-goal reinforcement learning with modular sarsa(0). In
International Joint Conference on Artificial Intelligence, August 2003.
[14] N. Sprague and D. Ballard. Multiple goal learning for a virtual human. Technical Report 829,
University Of Rochester Computer Science Department, 2004.
[15] R. E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the
ASME?Journal of Basic Engineering, 82(Series D):35?45, 1960.
[16] A. Cassandra. Exact and approximate algorithms for partially observable Markov decision
processes. PhD thesis, Brown University, 1998.
[17] J. F. Seara, K. H. Strobl, E. Martin, and G. Schmidt. Task-oriented and situation-dependent
gaze control for vision guided autonomous walking. In Proceedings of the 3rd IEEE-RAS
International Conference on Humanoid Robots, 2003.
1,519 | 2,381 | A Sampled Texture Prior for Image
Super-Resolution
Lyndsey C. Pickup, Stephen J. Roberts and Andrew Zisserman
Robotics Research Group
Department of Engineering Science
University of Oxford
Parks Road, Oxford, OX1 3PJ
{elle,sjrob,az}@robots.ox.ac.uk
Abstract
Super-resolution aims to produce a high-resolution image from a set of
one or more low-resolution images by recovering or inventing plausible
high-frequency image content. Typical approaches try to reconstruct a
high-resolution image using the sub-pixel displacements of several lowresolution images, usually regularized by a generic smoothness prior over
the high-resolution image space. Other methods use training data to learn
low-to-high-resolution matches, and have been highly successful even
in the single-input-image case. Here we present a domain-specific image prior in the form of a p.d.f. based upon sampled images, and show
that for certain types of super-resolution problems, this sample-based
prior gives a significant improvement over other common multiple-image
super-resolution techniques.
1 Introduction
The aim of super-resolution is to take a set of one or more low-resolution input images of
a scene, and estimate a higher-resolution image. If there are several low resolution images
available with sub-pixel displacements, then the high frequency information of the superresolution image can be increased.
In the limiting case when the input set is just a single image, it is impossible to recover
any high-frequency information faithfully, but much success has been achieved by training models to learn patchwise correspondences between low-resolution and possible high-resolution information, and stitching patches together to form the super-resolution image [1]. A second approach uses an unsupervised technique where latent variables are
introduced to model the mean intensity of groups of surrounding pixels [2].
In cases where the high-frequency detail is recovered from image displacements, the
models tend to assume that each low-resolution image is a subsample from a true highresolution image or continuous scene. The generation of the low-resolution inputs can then
be expressed as a degradation of the super-resolution image, usually by applying an image
homography, convolving with blurring functions, and subsampling [3, 4, 5, 6, 7, 8, 9].
Unfortunately, the ML (maximum likelihood) super-resolution images obtained by revers-
ing the generative process above tend to be poorly conditioned and susceptible to highfrequency noise. Most approaches to multiple-image super-resolution use a MAP (maximum a-posteriori) approach to regularize the solution using a prior distribution over the
high-resolution space. Gaussian process priors [4], Gaussian MRFs (Markov Random
Fields) and Huber MRFs [3] have all been proposed as suitable candidates.
In this paper, we consider an image prior based upon samples taken from other images,
inspired by the use of non-parametric sampling methods in texture synthesis [10]. This
texture synthesis method outperformed many other complex parametric models for texture
representation, and produces perceptually correct-looking areas of texture given a sample
texture seed. It works by finding texture patches similar to the area around a pixel of
interest, and estimating the intensity of the central pixel from a histogram built up from
similar samples. We turn this approach around to produce an image prior by finding areas
in our sample set that are similar to patches in our super-resolution image, and evaluate
how well they match, building up a p.d.f. over the high-resolution image. In short, given
a set of low resolution images and example images of textures in the same class at the
higher resolution, our objective is to construct a super-resolution image using a prior that
is sampled from the example images.
Our method differs from the previous super-resolution methods of [1, 7] in two ways: first,
we use our training images to estimate a distribution rather than learn a discrete set of lowresolution to high-resolution matches from which we must build up our output image; second, since we are using more than one image, we naturally fold in the extra high-frequency
information available from the low-resolution image displacements.
We develop our model in section 2, and expand upon some of the implementation details
in section 3, as well as introducing the Huber prior model against which most of the comparisons in this paper are made. In section 4 we display results obtained with our method
on some simple images, and in section 5 we discuss these results and future improvements.
2 The model
In this section we develop the mathematical basis for our model. The main contribution
of this work is in the construction of the prior over the super-resolution image, but first we
will consider the generative model for the low-resolution image generation, which closely
follows the approaches of [3] and [4]. We have K low-resolution images y (k) , which we
assume are generated from the super-resolution image x by
$$\mathbf{y}^{(k)} \;=\; W^{(k)} \mathbf{x} \;+\; \boldsymbol{\epsilon}_G^{(k)} \qquad (1)$$

where $\boldsymbol{\epsilon}_G$ is a vector of i.i.d. Gaussians, $\boldsymbol{\epsilon}_G \sim \mathcal{N}(0, \beta_G^{-1})$, and $\beta_G$ is the noise precision.
The construction of W involves mapping each low-resolution pixel into the space of the
super-resolution image, and performing a convolution with a point spread function. The
constructions given in [3] and [4] are very similar, though the former uses bilinear interpolation to achieve a more accurate approximation.
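As a concrete illustration of this generative process, a stripped-down version can be written in a few lines; this sketch is ours, omits the registration/homography step, and assumes a Gaussian point spread function:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def degrade(x, zoom=2, psf_sigma=1.0, beta_G=1e4, rng=np.random):
    """Toy version of equation 1 with no homography: blur the
    high-resolution image x with a Gaussian point spread function,
    subsample by `zoom`, and add noise of precision beta_G."""
    low = gaussian_filter(x, psf_sigma)[::zoom, ::zoom]
    return low + rng.normal(0.0, beta_G ** -0.5, low.shape)
```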
We begin by assuming that the image registration parameters may be determined a priori,
so each input image has a corresponding set of registration parameters $\theta^{(k)}$. We may now
construct the likelihood function

$$p(\mathbf{y}^{(k)} | \mathbf{x}, \theta^{(k)}) \;=\; \left( \frac{\beta_G}{2\pi} \right)^{M/2} \exp\left[ -\frac{\beta_G}{2} \left\| \mathbf{y}^{(k)} - W^{(k)} \mathbf{x} \right\|^2 \right] \qquad (2)$$

where each input image is assumed to have $M$ pixels (and the super-resolution image $N$
pixels).
The ML solution for $\mathbf{x}$ can be found simply by maximizing equation 2 with respect to $\mathbf{x}$,
which is equivalent to minimizing the negative log likelihood

$$-\log p(\{\mathbf{y}^{(k)}\} | \mathbf{x}, \{\theta^{(k)}\}) \;\propto\; \sum_{k=1}^{K} \left\| \mathbf{y}^{(k)} - W^{(k)} \mathbf{x} \right\|^2, \qquad (3)$$
though super-resolved images recovered in this way tend to be dominated by a great deal
of high-frequency noise.
To address this problem, a prior over the super-resolution image is often used. In [4],
the authors restricted themselves to Gaussian process priors, which made their estimation
of the registration parameters $\theta$ tractable, but encouraged smoothness across $\mathbf{x}$ without
any special treatment to allow for edges. The Huber Prior was used successfully in [3]
to penalize image gradients while being less harsh on large image discontinuities than a
Gaussian prior. Details of the Huber prior are given in section 3.
If we assume a uniform prior over the input images, the posterior distribution over $\mathbf{x}$ is of
the form

$$p(\mathbf{x} | \{\mathbf{y}^{(k)}, \theta^{(k)}\}) \;\propto\; p(\mathbf{x}) \prod_{k=1}^{K} p(\mathbf{y}^{(k)} | \mathbf{x}, \theta^{(k)}). \qquad (4)$$
To build our expression for p(x), we adopt the philosophy of [10], and sample from other
example images rather than developing a parametric model. A similar philosophy was used
in [11] for image-based rendering. Given a small image patch around any particular pixel,
we can learn a distribution for the central pixel?s intensity value by examining the values
at the centres of similar patches from other images. Each pixel xi has a neighbourhood
region R(xi ) consisting of the pixels around it, but not including xi itself. For each R(xi ),
we find the closest neighbourhood patch in the set of sampled patches, and find the central
pixel associated with this nearest neighbour, LR (xi ). The intensity of our original pixel
is then assumed to be Gaussian distributed with mean equal to the intensity of this central
pixel, and with some precision $\beta_T$:

$$x_i \;\sim\; \mathcal{N}\!\left( L_R(x_i),\; \beta_T^{-1} \right) \qquad (5)$$
leading us to a prior of the form

$$p(\mathbf{x}) \;=\; \left( \frac{\beta_T}{2\pi} \right)^{N/2} \exp\left[ -\frac{\beta_T}{2} \left\| \mathbf{x} - L_R(\mathbf{x}) \right\|^2 \right]. \qquad (6)$$
Inserting this prior into equation 4, the posterior over $\mathbf{x}$, and taking the negative log, we
have

$$-\log p(\mathbf{x} | \{\mathbf{y}^{(k)}, \theta^{(k)}\}) \;\propto\; \nu \left\| \mathbf{x} - L_R(\mathbf{x}) \right\|^2 \;+\; \sum_{k=1}^{K} \left\| \mathbf{y}^{(k)} - W^{(k)} \mathbf{x} \right\|^2 \;+\; c, \qquad (7)$$
where the right-hand side has been scaled to leave a single unknown ratio $\nu$ between the
data error term and the prior term, and includes an arbitrary constant $c$. Our super-resolution
image is then just $\arg\min_{\mathbf{x}} L$, where

$$L \;=\; \nu \left\| \mathbf{x} - L_R(\mathbf{x}) \right\|^2 \;+\; \sum_{k=1}^{K} \left\| \mathbf{y}^{(k)} - W^{(k)} \mathbf{x} \right\|^2. \qquad (8)$$
3 Implementation details
We optimize the objective function of equation 8 using scaled conjugate gradients (SCG)
to obtain an approximation to our super-resolution image. This requires an expression for
the gradient of the function with respect to $\mathbf{x}$. For speed, we approximate this by

$$\frac{dL}{d\mathbf{x}} \;=\; 2\nu \left( \mathbf{x} - L_R(\mathbf{x}) \right) \;-\; \frac{2}{K} \sum_{k=1}^{K} W^{(k)T} \left( \mathbf{y}^{(k)} - W^{(k)} \mathbf{x} \right), \qquad (9)$$
which assumes that small perturbations in the neighbours of $\mathbf{x}$ will not change the value
returned by $L_R(\mathbf{x})$. This is obviously not necessarily the case, but leads to a more efficient
algorithm. The same $k$-nearest-neighbour variation introduced in [10] could be adopted to
smooth this response.
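Stated as code, the approximate gradient of equation 9 is straightforward. This sketch is ours, with $L_R$ passed in as a black-box nearest-neighbour map and the $W^{(k)}$ as (sparse) matrices:

```python
import numpy as np

def grad_L(x, Ws, ys, LR, nu):
    """Approximate gradient of equation 8 (i.e., equation 9), treating
    the nearest-neighbour map LR(x) as locally constant. Ws[k] is the
    k-th degradation matrix, ys[k] the k-th low-resolution image
    (flattened to a vector)."""
    g = 2.0 * nu * (x - LR(x))
    for W, y in zip(Ws, ys):
        g -= (2.0 / len(Ws)) * (W.T @ (y - W @ x))
    return g
```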
Our image patch regions $R(x_i)$ are square windows centred on $x_i$, and pixels near the edge
of the image are supported using the average image of [3] extending beyond the edge of the
super-resolution image. To compute the nearest region in the example images, patches are
normalized to sum to unity, and centre weighted as in [10] by a 2-dimensional Gaussian.
The width of the image patches used, and of the Gaussian weights, depends very much
upon the scales of the textures present in the image. Our image intensities were in the
range [0, 1], and all the work so far has been with grey-scale images.
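One plausible implementation of this matching step is sketched below; it is our own, the weighting follows the description above, and all names are invented:

```python
import numpy as np

def nearest_centre(patch, sample_patches, sample_centres, weights):
    """Return L_R(x_i): the centre pixel of the sample patch whose
    normalized, Gaussian-weighted neighbourhood best matches `patch`
    (the neighbourhood R(x_i), with the centre pixel excluded)."""
    p = weights * (patch / patch.sum())
    dists = [np.sum((p - weights * (q / q.sum())) ** 2) for q in sample_patches]
    return sample_centres[int(np.argmin(dists))]
```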
Most of our results with this sample-based prior are compared to super-resolution images
obtained using the Huber prior used in [3]. Other edge-preserving functions are discussed
in [12], though the Huber function performed better than these as a prior in this case. The
Huber potential function is given by

$$\rho(x) \;=\; \begin{cases} x^2, & \text{if } |x| \le \alpha \\ 2\alpha |x| - \alpha^2, & \text{otherwise.} \end{cases} \qquad (10)$$
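In code, the potential and its derivative (needed for the analytic gradient mentioned below) are simple. This is our own sketch:

```python
import numpy as np

def huber(x, alpha):
    """Huber potential of equation 10: quadratic near zero, linear in
    the tails, so large gradients (edges) are penalized less harshly."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= alpha, x ** 2, 2 * alpha * np.abs(x) - alpha ** 2)

def huber_grad(x, alpha):
    """Its derivative, used in the analytic gradient of equation 12."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) <= alpha, 2 * x, 2 * alpha * np.sign(x))
```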
If $G$ is a matrix which pre-multiplies $\mathbf{x}$ to give a vector of first-order approximations to the
magnitude of the image gradient in the horizontal, vertical, and two diagonal directions,
then the Huber prior we use is of the form:

$$p(\mathbf{x}) \;=\; \frac{1}{Z} \exp\left[ -\nu \sum_{i=1}^{4N} \rho\big( (G\mathbf{x})_i \big) \right] \qquad (11)$$

for some prior strength $\nu$, where $Z$ is the partition function, and $G\mathbf{x}$ is the $4N \times 1$ column
vector of approximate derivatives of $\mathbf{x}$ in the four directions mentioned above.
Plugging this into the posterior distribution of equation 4 leads to a Huber MAP image x H
which minimizes the negative log probability
LH = ?
4N
X
i=1
?((Gx)i ) +
K
X
||y (k) ? W (k) x||2 ,
(12)
k=1
where again the r.h.s. has been scaled so that ν is the single unknown ratio parameter. We
also optimize this by SCG, using the full analytic expression for dL_H/dx.
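In practice G need not be formed explicitly. One plausible realisation of (Gx), using forward differences with edge replication at the borders (a boundary-handling assumption the text does not specify), is:

```python
import numpy as np

def Gx(img):
    """Stack first-order difference approximations to the image gradient in
    the horizontal, vertical and two diagonal directions into a 4N-vector,
    as consumed by the Huber prior of eq. (11)."""
    p = np.pad(img, ((0, 1), (0, 1)), mode="edge")
    h = p[:-1, 1:] - img                              # right neighbour
    v = p[1:, :-1] - img                              # lower neighbour
    d1 = p[1:, 1:] - img                              # lower-right diagonal
    q = np.pad(img, ((0, 1), (1, 0)), mode="edge")
    d2 = q[1:, :-1] - img                             # lower-left diagonal
    return np.concatenate([h.ravel(), v.ravel(), d1.ravel(), d2.ravel()])
```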
4 Preliminary results
To test the performance of our texture-based prior, and compare it with that of the Huber
prior, we produced sets of input images by running the generative model of equation 1 in
the forward direction, introducing sub-pixel shifts in the x- and y-directions, and a small
rotation about the viewing axis. We added varying amounts of Gaussian noise (2/256,
6/256 and 12/256 grey levels) and took varying numbers of these images (2, 5, 10) to
produce nine separate sets of low-resolution inputs from each of our initial 'ground-truth'
high-resolution images. Figure 1 shows three 100 × 100 pixel ground truth images, each
accompanied by corresponding 40 × 40 pixel low-resolution images generated from the
ground truth images at half the resolution, with 6/256 levels of noise. Our aim was to
reconstruct the central 50 × 50 pixel section of the original ground truth image. Figure 2
shows the example images from which our texture sample patches were taken¹; note that
these do not overlap with the sections used to generate the low-resolution images.
Figure 1: Left to right: ground truth text, ground truth brick, ground truth beads, low-res
text, low-res brick and low-res beads.
Figure 2: Left: Text sample (150 × 200 pixels). Centre: Brick sample (200 × 200 pixels).
Right: Beads sample (60 × 60 pixels).
Figure 3 shows the difference in super-resolution image quality that can be obtained using
the sample-based prior rather than the Huber prior, given identical input sets as described above.
For each Huber super-resolution image, we ran a set of reconstructions, varying the Huber
parameter α and the prior strength parameter ν. The image shown for each input number/noise level pair is the one which gave the minimum RMS error when compared to the
ground-truth image; these are very close to the 'best' images chosen from the same sets by
a human subject.
The images shown for the sample-based prior are again the best (in the sense of having
minimal RMS error) of several runs per image. We varied the size of the sample patches
from 5 to 13 pixels in edge length; computational cost meant that larger patches were not
considered. Compared to the Huber images, we tried relatively few different patch size and
ν-value combinations for our sample-based prior; again, this was due to our method taking
longer to execute than the Huber method. Consequently, the Huber parameters are more
likely to lie close to their own optimal values than our sample-based prior parameters are.
We also present images recovered using a 'wrong' texture. We generated ten low-resolution images from a picture of a leaf, and used texture samples from a small black-and-white spiral in our reconstruction (Figure 4). A selection of results is shown in Figure 5,
where we varied the ν parameter governing the prior's contribution to the output image.
¹ Text grabbed from Greg Egan's novella Oceanic, published online at the author's website. Brick
image from the Brodatz texture set. Beads image from http://textures.forrest.cz/.
[Figure 3 plots: texture-based prior (left column) vs. HMAP/Huber prior (right column), one row per dataset; axes: Number of Images (2, 5, 10) against Noise in grey levels (2, 6, 12 for text and brick; 2, 12, 32 for beads)]
Figure 3: Recovering the super-resolution images at a zoom factor of 2, using the texture-based prior (left column of plots) and the Huber MRF prior (right column of plots). The text
and brick datasets contained 2, 6, 12 grey levels of noise, while the beads dataset used 2,
12 and 32 grey levels. Each image shown is the best of several attempts with varying prior
strengths, Huber parameter (for the Huber MRF prior images) and patch neighbourhood
sizes (for the texture-based prior images).
Using a low value gives an image not dissimilar to the ML solution; using a significantly
higher value makes the output follow the form of the prior much more closely, and here this
means that the grey values get lost as the evidence for them from the data term is swamped
by the black-and-white pattern of the prior.
Figure 4: The original 120 × 120 high-resolution image (left), and the 80 × 80 pixel 'wrong'
texture sample image (right).
[Figure 5 panel labels: beta = 0.01, 0.04, 0.16, 0.64]
Figure 5: Four 120 × 120 super-resolution images are shown on the lower row, reconstructed
using different values of the prior strength parameter ν: 0.01, 0.04, 0.16, 0.64, from left to
right.
5 Discussion and further considerations
The images of Figure 3 show that our prior offers a qualitative improvement over the
generic prior, especially when few input images are available.
Quantitatively, our method gives an RMS error of approximately 25 grey levels from only 2
input images with 2 grey levels of additive Gaussian noise on the text input images, whereas
the best Huber prior super-resolution image for that image set and noise level uses all 10
available input images, and still has an RMS error score of almost 30 grey levels.
Figure 6 plots the RMS errors from the Huber and sample-based priors against each other.
In all cases, the sample-based method fares better, with the difference most notable in the
text example.
In general, larger patch sizes (11 × 11 pixels) give smaller errors for the noisy inputs,
while small patches (5 × 5) are better for the less noisy images. Computational costs meant
we limited the patch size to no more than 13 × 13, and terminated the SCG optimization
algorithm after approximately 20 iterations.
In addition to improving the computational complexity of our algorithm implementation,
we can extend this work in several directions. Since in general the textures for the prior
will not be invariant to rotation and scaling, consideration of the registration of the input
images will be necessary. The optimal patch size will be a function of the image textures,
so learning this as a parameter of an extended model, in a similar way to how [4] learns the
point-spread function for a set of input images, is another direction of interest.
[Figure 6 plot: texture-based RMS against Huber RMS, both in grey levels (axes 10-60), with an equal-error line; legend: text dataset, brick dataset, bead dataset]
Figure 6: Comparison of RMS errors in reconstructing the text, brick and bead images
using the Huber and sample-based priors.
References
[1] W. T. Freeman, T. R. Jones, and E. C. Pasztor. Example-based super-resolution. IEEE Computer Graphics and Applications, 22(2):56-65, March/April 2002.
[2] A. J. Storkey. Dynamic structure super-resolution. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 1295-1302. MIT Press, Cambridge, MA, 2003.
[3] D. P. Capel. Image Mosaicing and Super-resolution. PhD thesis, University of Oxford, 2001.
[4] M. E. Tipping and C. M. Bishop. Bayesian image super-resolution. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, pages 1279-1286. MIT Press, Cambridge, MA, 2003.
[5] M. Irani and S. Peleg. Improving resolution by image registration. CVGIP: Graphical Models and Image Processing, 53:231-239, 1991.
[6] M. Irani and S. Peleg. Motion analysis for image enhancement: resolution, occlusion, and transparency. Journal of Visual Communication and Image Representation, 4:324-335, 1993.
[7] S. Baker and T. Kanade. Limits on super-resolution and how to break them. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(9):1167-1183, 2002.
[8] R. R. Schultz and R. L. Stevenson. Extraction of high-resolution frames from video sequences. IEEE Transactions on Image Processing, 5(6):996-1011, June 1996.
[9] P. Cheeseman, B. Kanefsky, R. Kraft, J. Stutz, and B. Hanson. Super-resolved surface reconstruction from multiple images. In Glenn R. Heidbreder, editor, Maximum Entropy and Bayesian Methods, pages 293-308. Kluwer Academic Publishers, Dordrecht, the Netherlands, 1996.
[10] A. A. Efros and T. K. Leung. Texture synthesis by non-parametric sampling. In IEEE International Conference on Computer Vision, pages 1033-1038, Corfu, Greece, September 1999.
[11] A. Fitzgibbon, Y. Wexler, and A. Zisserman. Image-based rendering using image-based priors. In Proceedings of the International Conference on Computer Vision, October 2003.
[12] M. J. Black, G. Sapiro, D. Marimont, and D. Heeger. Robust anisotropic diffusion. IEEE Trans. on Image Processing, 7(3):421-432, 1998.
Sub-Optimality
Maxim Likhachev, Geoff Gordon and Sebastian Thrun
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
{maxim+, ggordon, thrun}@cs.cmu.edu
Abstract
In real world planning problems, time for deliberation is often limited.
Anytime planners are well suited for these problems: they find a feasible solution quickly and then continually work on improving it until time
runs out. In this paper we propose an anytime heuristic search, ARA*,
which tunes its performance bound based on available search time. It
starts by finding a suboptimal solution quickly using a loose bound, then
tightens the bound progressively as time allows. Given enough time it
finds a provably optimal solution. While improving its bound, ARA*
reuses previous search efforts and, as a result, is significantly more efficient than other anytime search methods. In addition to our theoretical
analysis, we demonstrate the practical utility of ARA* with experiments
on a simulated robot kinematic arm and a dynamic path planning problem for an outdoor rover.
1 Introduction
Optimal search is often infeasible for real world problems, as we are given a limited amount
of time for deliberation and want to find the best solution given the time provided. In
these conditions anytime algorithms [9, 2] prove to be useful as they usually find a first,
possibly highly suboptimal, solution very fast and then continually work on improving
the solution until allocated time expires. Unfortunately, they can rarely provide bounds
on the sub-optimality of their solutions unless the cost of an optimal solution is already
known. Even less often can these algorithms control their sub-optimality. Providing suboptimality bounds is valuable, though: it allows one to judge the quality of the current
plan, decide whether to continue or preempt search based on the current sub-optimality, and
evaluate the quality of past planning episodes and allocate time for future planning episodes
accordingly. Control over the sub-optimality bounds helps in adjusting the tradeoff between
computation and plan quality.
A* search with inflated heuristics (actual heuristic values are multiplied by an inflation
factor ε > 1) is sub-optimal but proves to be fast for many domains [1, 5, 8] and also provides a bound on the sub-optimality, namely, the factor ε by which the heuristic is inflated [7].
To construct an anytime algorithm with sub-optimality bounds one could run a succession
of these A* searches with decreasing inflation factors. This naive approach results in a series of solutions, each one with a sub-optimality factor equal to the corresponding inflation
factor. This approach has control over the sub-optimality bound, but wastes a lot of computation since each search iteration duplicates most of the efforts of the previous searches.
One could try to employ incremental heuristic searches (e.g., [4]), but the sub-optimality
bounds for each search iteration would no longer be guaranteed.
To this end we propose the ARA* (Anytime Repairing A*) algorithm, which is an
efficient anytime heuristic search that also runs A* with inflated heuristics in succession
but reuses search efforts from previous executions in such a way that the sub-optimality
bounds are still satisfied. As a result, a substantial speedup is achieved by not re-computing
the state values that have been correctly computed in the previous iterations. We show the
efficiency of ARA* on two different domains. An evaluation of ARA* on a simulated robot
kinematic arm with six degrees of freedom shows up to 6-fold speedup over the succession
of A* searches. We also demonstrate ARA* on the problem of planning a path for a mobile
robot that takes into account the robot?s dynamics.
The only other anytime heuristic search known to us is Anytime A*, described in [8]. It
also first executes an A* with inflated heuristics and then continues to improve a solution.
However, the algorithm does not have control over its sub-optimality bound, except by
selecting the inflation factor of the first search. Our experiments show that ARA* is able
to decrease its bounds much more gradually and, moreover, does so significantly faster.
Another advantage of ARA* is that it guarantees to examine each state at most once during
its first search, unlike the algorithm of [8]. This property is important because it provides
a bound on the amount of time before ARA* produces its first plan. Nevertheless, as
mentioned later, [8] describes a number of very interesting ideas that are also applicable to
ARA*.
2 The ARA* Algorithm
2.1 A* with Weighted Heuristic
Normally, A* takes as input a heuristic h(s) which must be consistent. That is, h(s) ≤
c(s, s′) + h(s′) for any successor s′ of s if s ≠ sgoal, and h(s) = 0 if s = sgoal. Here
c(s, s′) denotes the cost of an edge from s to s′ and has to be positive. Consistency, in
its turn, guarantees that the heuristic is admissible: h(s) is never larger than the true cost
of reaching the goal from s. Inflating the heuristic (that is, using ε · h(s) for ε > 1)
often results in much fewer state expansions and consequently faster searches. However,
inflating the heuristic may also violate the admissibility property, and as a result, a solution
is no longer guaranteed to be optimal. The pseudocode of A* with inflated heuristic is
given in Figure 1 for easy comparison with our algorithm, ARA*, presented later.
A* maintains two functions from states to real numbers: g(s) is the cost of the current
path from the start node to s (it is assumed to be ∞ if no path to s has been found yet), and
f(s) = g(s) + ε · h(s) is an estimate of the total distance from start to goal going through s.
A* also maintains a priority queue, OPEN, of states which it plans to expand. The OPEN
queue is sorted by f (s), so that A* always expands next the state which appears to be on
the shortest path from start to goal. A* initializes the OPEN list with the start state, sstart
(line 02). Each time it expands a state s (lines 04-11), it removes s from OPEN. It then
updates the g-values of all of s's neighbors; if it decreases g(s′), it inserts s′ into OPEN.
A* terminates as soon as the goal state is expanded.
01 g(sstart) = 0; OPEN = ∅;
02 insert sstart into OPEN with f(sstart) = ε · h(sstart);
03 while (sgoal is not expanded)
04   remove s with the smallest f-value from OPEN;
05   for each successor s′ of s
06     if s′ was not visited before then
07       f(s′) = g(s′) = ∞;
08     if g(s′) > g(s) + c(s, s′)
09       g(s′) = g(s) + c(s, s′);
10       f(s′) = g(s′) + ε · h(s′);
11       insert s′ into OPEN with f(s′);

Figure 1: A* with heuristic weighted by ε ≥ 1
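A compact Python rendering of Figure 1 follows; stale heap entries are skipped lazily instead of being deleted, but, exactly as in the pseudocode, a state may be expanded more than once when its g-value later improves:

```python
import heapq

def weighted_astar(start, goal, succ, h, eps=2.5):
    """A* with the heuristic inflated by eps (Figure 1). succ(s) yields
    (s2, c(s, s2)) pairs; the returned cost is at most eps times optimal."""
    g = {start: 0.0}
    open_heap = [(eps * h(start), start)]       # entries are (f(s), s)
    while open_heap:
        f, s = heapq.heappop(open_heap)
        if s == goal:
            return g[goal]
        if f > g[s] + eps * h(s):               # stale entry: s was re-inserted
            continue
        for s2, c in succ(s):
            if g[s] + c < g.get(s2, float("inf")):
                g[s2] = g[s] + c
                heapq.heappush(open_heap, (g[s2] + eps * h(s2), s2))
    return float("inf")                         # goal unreachable
```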
[Figure 2 panel labels: ε = 2.5, ε = 1.5, ε = 1.0 for each group of three columns]
Figure 2: Left three columns: A* searches with decreasing ε. Right three columns: the corresponding
ARA* search iterations.
Setting ε to 1 results in standard A* with an uninflated heuristic; the resulting solution
is guaranteed to be optimal. For ε > 1 a solution can be sub-optimal, but the sub-optimality
is bounded by a factor of ε: the length of the found solution is no larger than ε times the
length of the optimal solution [7].
The left three columns in Figure 2 show the operation of the A* algorithm with a
heuristic inflated by ε = 2.5, ε = 1.5, and ε = 1 (no inflation) on a simple grid world. In
this example we use an eight-connected grid with black cells being obstacles. S denotes a
start state, while G denotes a goal state. The cost of moving from one cell to its neighbor
is one. The heuristic is the larger of the x and y distances from the cell to the goal. The
cells which were expanded are shown in grey. (A* can stop search as soon as it is about
to expand a goal state without actually expanding it. Thus, the goal state is not shown in
grey.) The paths found by these searches are shown with grey arrows. The A* searches with
inflated heuristics expand substantially fewer cells than A* with ε = 1, but their solution is
sub-optimal.
2.2 ARA*: Reuse of Search Results
ARA* works by executing A* multiple times, starting with a large ε and decreasing it prior
to each execution until ε = 1. As a result, after each search a solution is guaranteed to be
within a factor ε of optimal. Running A* search from scratch every time we decrease ε,
however, would be very expensive. We will now explain how ARA* reuses the results of
however, would be very expensive. We will now explain how ARA* reuses the results of
the previous searches to save computation. We first explain the ImprovePath function (left
column in Figure 3) that recomputes a path for a given ε. In the next section we explain the
Main function of ARA* (right column in Figure 3) that repetitively calls the ImprovePath
function with a series of decreasing values of ε.
Let us first introduce a notion of local inconsistency (we borrow this term from [4]). A
state is called locally inconsistent every time its g-value is decreased (line 09, Figure 1) and
until the next time the state is expanded. That is, suppose that state s is the best predecessor
for some state s′: that is, g(s′) = min_{s″ ∈ pred(s′)} (g(s″) + c(s″, s′)) = g(s) + c(s, s′). Then,
if g(s) decreases we get g(s′) > min_{s″ ∈ pred(s′)} (g(s″) + c(s″, s′)). In other words, the
decrease in g(s) introduces a local inconsistency between the g-value of s and the g-values
of its successors. Whenever s is expanded, on the other hand, the inconsistency of s is
corrected by re-evaluating the g-values of the successors of s (line 08-09, Figure 1). This
in turn makes the successors of s locally inconsistent. In this way the local inconsistency
is propagated to the children of s via a series of expansions. Eventually the children no
longer rely on s, none of their g-values are lowered, and none of them are inserted into
the OPEN list. Given this definition of local inconsistency it is clear that the OPEN list
consists of exactly all locally inconsistent states: every time a g-value is lowered the state
is inserted into OPEN, and every time a state is expanded it is removed from OPEN until
the next time its g-value is lowered. Thus, the OPEN list can be viewed as a set of states
from which we need to propagate local inconsistency.
A* with a consistent heuristic is guaranteed not to expand any state more than once.
Setting ε > 1, however, may violate consistency, and as a result A* search may re-expand
states multiple times. It turns out that if we restrict each state to be expanded no more
than once, then the sub-optimality bound of ε still holds. To implement this restriction we
check any state whose g-value is lowered and insert it into OPEN only if it has not been
previously expanded (line 10, Figure 3). The set of expanded states is maintained in the
CLOSED variable.
procedure fvalue(s)
01  return g(s) + ε · h(s);

procedure ImprovePath()
02  while (fvalue(sgoal) > min_{s ∈ OPEN}(fvalue(s)))
03    remove s with the smallest fvalue(s) from OPEN;
04    CLOSED = CLOSED ∪ {s};
05    for each successor s′ of s
06      if s′ was not visited before then
07        g(s′) = ∞;
08      if g(s′) > g(s) + c(s, s′)
09        g(s′) = g(s) + c(s, s′);
10        if s′ ∉ CLOSED
11          insert s′ into OPEN with fvalue(s′);
12        else
13          insert s′ into INCONS;

procedure Main()
01′  g(sgoal) = ∞; g(sstart) = 0;
02′  OPEN = CLOSED = INCONS = ∅;
03′  insert sstart into OPEN with fvalue(sstart);
04′  ImprovePath();
05′  ε′ = min(ε, g(sgoal) / min_{s ∈ OPEN ∪ INCONS}(g(s) + h(s)));
06′  publish current ε′-suboptimal solution;
07′  while ε′ > 1
08′    decrease ε;
09′    move states from INCONS into OPEN;
10′    update the priorities for all s ∈ OPEN according to fvalue(s);
11′    CLOSED = ∅;
12′    ImprovePath();
13′    ε′ = min(ε, g(sgoal) / min_{s ∈ OPEN ∪ INCONS}(g(s) + h(s)));
14′    publish current ε′-suboptimal solution;
Figure 3: ARA*
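The following Python sketch mirrors Figure 3. It favours readability over speed (OPEN is scanned linearly where a priority queue would normally be used), omits the backpointers needed to extract the actual path, and publishes solutions by yielding the current bound together with the goal's g-value; the step by which ε is decreased is an arbitrary choice here:

```python
import math

def ara_star(start, goal, succ, h, eps0=2.5, step=0.02):
    """ARA* sketch (Figure 3); assumes a solution exists. succ(s) yields
    (s2, c(s, s2)) pairs; yields (bound, cost) after each ImprovePath()."""
    INF = math.inf
    g = {goal: INF, start: 0.0}
    eps = eps0
    OPEN, INCONS, CLOSED = {start}, set(), set()

    def fvalue(s):
        return g[s] + eps * h(s)

    def improve_path():
        while OPEN and fvalue(goal) > min(fvalue(s) for s in OPEN):
            s = min(OPEN, key=fvalue)           # a heap in a real implementation
            OPEN.discard(s)
            CLOSED.add(s)                       # expand each state at most once
            for s2, c in succ(s):
                if g.get(s2, INF) > g[s] + c:
                    g[s2] = g[s] + c
                    if s2 not in CLOSED:
                        OPEN.add(s2)
                    else:
                        INCONS.add(s2)          # inconsistent but already expanded

    def bound():
        m = min((g[s] + h(s) for s in OPEN | INCONS), default=INF)
        if not math.isfinite(m):
            return 1.0                          # nothing inconsistent: optimal
        return min(eps, g[goal] / m)

    improve_path()
    yield bound(), g[goal]                      # publish current solution
    while bound() > 1.0:
        eps = max(1.0, eps - step)
        OPEN |= INCONS
        INCONS.clear()
        CLOSED.clear()
        improve_path()
        yield bound(), g[goal]
```

A caller can then interleave planning and execution, e.g. `for b, cost in ara_star(s0, sg, succ, h): ...`, acting on the best plan found so far while the bound keeps shrinking.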
With this restriction we will expand each state at most once, but OPEN may no longer
contain all the locally inconsistent states. In fact, it will only contain the locally inconsistent
states that have not yet been expanded. It is important, however, to keep track of all the
locally inconsistent states as they will be the starting points for inconsistency propagation
in the future search iterations. We do this by maintaining the set INCONS of all the locally
inconsistent states that are not in OPEN (lines 12-13, Figure 3). Thus, the union of INCONS
and OPEN is exactly the set of all locally inconsistent states, and can be used as a starting
point for inconsistency propagation before each new search iteration.
The only other difference between the ImprovePath function and A* is the termination
condition. Since the ImprovePath function reuses search efforts from the previous executions, sgoal may never become locally inconsistent and thus may never be inserted into
OPEN. As a result, the termination condition of A* becomes invalid. A* search, however,
can also stop as soon as f (sgoal ) is equal to the minimal f -value among all the states on
OPEN list. This is the condition that we use in the ImprovePath function (line 02, Figure 3). It also allows us to avoid expanding sgoal as well as possibly some other states
with the same f-value. (Note that ARA* no longer maintains f-values as variables, since ε
is changed in between the calls to the ImprovePath function and it would be prohibitively
expensive to update the f-values of all the states. Instead, the fvalue(s) function is called
to compute and return the f-values only for the states in OPEN and sgoal.)
2.3 ARA*: Iterative Execution of Searches
We now introduce the main function of ARA* (right column in Figure 3) which performs a
series of search iterations. It does initialization and then repetitively calls the ImprovePath
function with a series of decreasing values of ε. Before each call to the ImprovePath function a
new OPEN list is constructed by moving into it the contents of the set INCONS. Since the
OPEN list has to be sorted by the current f-values of states, it is also re-ordered (lines 09′-10′, Figure 3). Thus, after each call to the ImprovePath function we get a solution that is
sub-optimal by at most a factor of ε.
As suggested in [8] a sub-optimality bound can also be computed as the ratio between
g(sgoal ), which gives an upper bound on the cost of an optimal solution, and the minimum
un-weighted f -value of a locally inconsistent state, which gives a lower bound on the cost
of an optimal solution. (This is a valid sub-optimality bound as long as the ratio is larger
than or equal to one. Otherwise, g(sgoal ) is already equal to the cost of an optimal solution.)
Thus, the actual sub-optimality bound for ARA* is computed as the minimum between ε
and this ratio (lines 05′ and 13′, Figure 3). At first, one may also think of using this actual
sub-optimality bound in deciding how to decrease ε between search iterations (e.g., setting
ε to ε′ minus a small delta). Experiments, however, seem to suggest that decreasing ε in
small steps is still more beneficial. The reason is that a small decrease in ε often results
in the improvement of the solution, despite the fact that the actual sub-optimality bound of
the previous solution was already substantially less than the value of ε. A large decrease in
ε, on the other hand, may often result in the expansion of too many states during the next
search. (Another useful suggestion from [8], which we have not implemented in ARA*, is
to prune OPEN so that it never contains a state whose un-weighted f -value is larger than
or equal to g(sgoal ).)
Within each execution of the ImprovePath function we mainly save computation by
not re-expanding the states which were locally consistent and whose g-values were already
correct before the call to ImprovePath (Theorem 2 states this more precisely). For example,
the right three columns in Figure 2 show a series of calls to the ImprovePath function.
States that are locally inconsistent at the end of an iteration are shown with an asterisk.
While the first call (ε = 2.5) is identical to the A* call with the same ε, the second call
to the ImprovePath function (ε = 1.5) expands only 1 cell. This is in contrast to 15 cells
expanded by A* search with the same ε. For both searches the sub-optimality factor, ε,
decreases from 2.5 to 1.5. Finally, the third call to the ImprovePath function with ε set to
1 expands only 9 cells. The solution is now optimal, and the total number of expansions
is 23. Only 2 cells are expanded more than once across all three calls to the ImprovePath
function. Even a single optimal search from scratch expands 20 cells.
2.4 Theoretical Properties of the Algorithm
We now present some of the theoretical properties of ARA*. For the proofs of these and
other properties of the algorithm please refer to [6]. We use g*(s) to denote the cost of an
optimal path from sstart to s. Let us also define a greedy path from sstart to s as a path
that is computed by tracing it backward as follows: start at s, and at any state s_i pick a state
s_{i-1} = arg min_{s′ ∈ pred(s_i)} (g(s′) + c(s′, s_i)) until s_{i-1} = sstart.
Theorem 1 Whenever the ImprovePath function exits, for any state s with f(s) ≤
min_{s′ ∈ OPEN}(f(s′)), we have g*(s) ≤ g(s) ≤ ε · g*(s), and the cost of a greedy path
from sstart to s is no larger than g(s).
The correctness of ARA* follows from this theorem: each execution of the ImprovePath function terminates when f(sgoal) is no larger than the minimum f-value in
OPEN, which means that the greedy path from start to goal that we have found is within a
factor ε of optimal. Since ε is decreased before each iteration, and it, in its turn, is an upper
bound on ε′, ARA* gradually decreases the sub-optimality bound and finds new solutions
to satisfy the bound.
Theorem 2 Within each call to ImprovePath() a state is expanded at most once and only
if it was locally inconsistent before the call to ImprovePath() or its g-value was lowered
during the current execution of ImprovePath().
The second theorem formalizes where the computational savings for ARA* search
come from. Unlike A* search with an inflated heuristic, each search iteration in ARA*
is guaranteed not to expand states more than once. Moreover, it also does not expand states
whose g-values before a call to the ImprovePath function have already been correctly computed by some previous search iteration, unless they are in the set of locally inconsistent
states already and thus need to update their neighbors (propagate local inconsistency).
3 Experimental Study
3.1 Robotic Arm
We first evaluate the performance of ARA* on simulated 6 and 20 degree of freedom (DOF)
robotic arms (Figure 4). The base of the arm is fixed, and the task is to move its end-effector
to the goal while navigating around obstacles (indicated by grey rectangles). An action
is defined as a change of a global angle of any particular joint (i.e., the next joint further
along the arm rotates in the opposite direction to maintain the global angle of the remaining
joints.) We discretize the workspace into 50 by 50 cells and compute a distance from each
cell to the cell containing the goal while taking into account that some cells are occupied
by obstacles. This distance is our heuristic. In order for the heuristic not to overestimate
true costs, joint angles are discretized so as to never move the end-effector by more than
one cell in a single action. The resulting state-space is over 3 billion states for a 6 DOF
robot arm and over 10^26 states for a 20 DOF robot arm, and memory for states is allocated
on demand.
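As an illustration, the 2D heuristic just described can be precomputed by a breadth-first sweep from the goal cell over the free workspace; treating all eight neighbour moves as unit cost is a simplifying assumption of this sketch:

```python
from collections import deque

def distance_map(occupied, goal):
    """Distances from every free cell to the goal cell on an 8-connected
    grid, respecting obstacles. occupied[r][c] is True for blocked cells."""
    H, W = len(occupied), len(occupied[0])
    INF = float("inf")
    dist = [[INF] * W for _ in range(H)]
    dist[goal[0]][goal[1]] = 0
    q = deque([goal])
    while q:
        r, c = q.popleft()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < H and 0 <= nc < W
                        and not occupied[nr][nc] and dist[nr][nc] == INF):
                    dist[nr][nc] = dist[r][c] + 1
                    q.append((nr, nc))
    return dist
```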
(a) 6D arm trajectory for ε = 3; (b) uniform costs; (c) non-uniform costs; (d) both Anytime A* and A*, after 90 secs, cost = 682, ε′ = 15.5; (e) ARA*, after 90 secs, cost = 657, ε′ = 14.9; (f) non-uniform costs.
Figure 4: Top row: 6D robot arm experiments. Bottom row: 20D robot arm experiments (the
trajectories shown are downsampled by 6). Anytime A* is the algorithm in [8].
Figure 4a shows the planned trajectory of the robot arm after the initial search of ARA*
with ε = 3.0. This search takes about 0.05 secs. (By comparison, a search for an optimal
trajectory is infeasible as it runs out of memory very quickly.) The plot in Figure 4b shows
that ARA* improves both the quality of the solution and the bound on its sub-optimality
faster and in a more gradual manner than either a succession of A* searches or Anytime
A* [8]. In this experiment ε is initially set to 3.0 for all three algorithms. For all the experiments in this section ε is decreased in steps of 0.02 (2% sub-optimality) for ARA* and
a succession of A* searches. Anytime A* does not control ε, and in this experiment it
apparently performs a lot of computations that result in a large decrease of ε at the end. On
the other hand, it does reach the optimal solution first this way. To evaluate the expense of
the anytime property of ARA* we also ran ARA* and an optimal A* search in a slightly
simpler environment (for the optimal search to be feasible). Optimal A* search required
about 5.3 mins (2,202,666 states expanded) to find an optimal solution, while ARA* required about 5.5 mins (2,207,178 states expanded) to decrease ε in steps of 0.02 from 3.0
until a provably optimal solution was found (about 4% overhead).
While in the experiment for Figure 4b all the actions have the same cost, in the experiment for Figure 4c actions have non-uniform costs: changing a joint angle closer to the
base is more expensive than changing a higher joint angle. As a result of the non-uniform
costs our heuristic becomes less informative, and so search is much more expensive. In
this experiment we start with ε = 10, and run all algorithms for 30 minutes. At the end,
ARA* achieves a solution with a substantially smaller cost (200 vs. 220 for the succession
of A* searches and 223 for Anytime A*) and a better sub-optimality bound (3.92 vs. 4.46
for both the succession of A* searches and Anytime A*). Also, since ARA* controls ε, it
decreases the cost of the solution gradually. Reading the graph differently, ARA* reaches
a sub-optimality bound ε′ = 4.5 after about 59 thousand expansions and 11.7 secs, while
the succession of A* searches reaches the same bound after 12.5 million expansions and
27.4 minutes (about 140-fold speedup by ARA*) and Anytime A* reaches it after over 4
million expansions and 8.8 minutes (over 44-fold speedup by ARA*). Similar results hold
when comparing the amount of work each of the algorithms spend on obtaining a solution
of cost 225. While Figure 4 shows execution time, the comparison of states expanded (not
shown) is almost identical. Additionally, to demonstrate the advantage of ARA* expanding
each state no more than once per search iteration, we compare the first searches of ARA*
and Anytime A*: the first search of ARA* performed 6,378 expansions, while Anytime
A* performed 8,994 expansions, mainly because some of the states were expanded up to
(a) robot with laser scanner; (b) 3D map; (c) optimal 2D search; (d) optimal 4D search with A*, after 25 secs; (e) 4D search with ARA*, after 0.6 secs (ε = 2.5); (f) 4D search with ARA*, after 25 secs (ε = 1.0).
Figure 5: Outdoor robot navigation experiment (cross shows the position of the robot).
seven times before a first solution was found.
Figures 4d-f show the results of experiments done on a 20 DOF robot arm, with actions
that have non-uniform costs. All three algorithms start with ε = 30. Figures 4d and 4e
show that in 90 seconds of planning the cost of the trajectory found by ARA* and the suboptimality bound it can guarantee is substantially smaller than for the other algorithms. For
example, the trajectory in Figure 4d contains more steps and also makes one extra change
in the angle of the third joint from the base of the arm (despite the fact that changing lower
joint angles is very expensive) in comparison to the trajectory in Figure 4e. The graph in
Figure 4f compares the performance of the three algorithms on twenty randomized environments similar to the environment in Figure 4d. The environments had random goal locations, and the obstacles were slid to random locations along the outside walls. The graph
shows the additional time the other algorithms require to achieve the same sub-optimality
bound that ARA* does. To make the results from different environments comparable we
normalize the bound by dividing it by the maximum of the best bounds that the algorithms
achieve before they run out of memory. Averaging over all environments, the time for
ARA* to achieve the best bound was 10.1 secs. Thus, the difference of 40 seconds at the
end of the Anytime A* graph corresponds to an overhead of about a factor of 4.
3.2 Outdoor Robot Navigation
For us the motivation for this work was efficient path-planning for mobile robots in large
outdoor environments, where optimal trajectories involve fast motion and sweeping turns
at speed. In such environments it is particularly important to take advantage of the robot's
momentum and find dynamic rather than static plans. We use a 4D state space: xy position,
orientation, and velocity. High dimensionality and large environments result in very large
state-spaces for the planner and make it computationally infeasible for the robot to plan
optimally every time it discovers new obstacles or modelling errors. To solve this problem
we built a two-level planner: a 4D planner that uses ARA*, and a fast 2D (x, y) planner
that uses A* search and whose results serve as the heuristic for the 4D planner.¹
¹ To interleave search with the execution of the best plan so far we perform 4D search backward.
That is, the start of the search, sstart, is the actual goal state of the robot, while the goal of the search,
sgoal, is the current state of the robot. Thus, sstart does not change as the robot moves and the search
tree remains valid in between search iterations. Since heuristics estimate the distances to sgoal (the
robot position) we have to recompute them during the reorder operation (line 10′, Figure 3).
In Figure 5 we show the robot we used for navigation and a 3D laser scan [3] constructed by the robot of the environment we tested our system in. The scan is converted
into a map of the environment (Figure 5c, obstacles shown in black). The size of the environment is 91.2 by 94.4 meters, and the map is discretized into cells of 0.4 by 0.4 meters.
Thus, the 2D state-space consists of 53808 states. The 4D state space has over 20 million
states. The robot?s initial state is the upper circle, while its goal is the lower circle. To
ensure safe operation we created a buffer zone with high costs around each obstacle. The
squares in the upper-right corners of the figures show a magnified fragment of the map with
grayscale proportional to cost. The 2D plan (Figure 5c) makes sharp 45 degree turns when
going around the obstacles, requiring the robot to come to complete stops. The optimal
4D plan results in a wider turn, and the velocity of the robot remains high throughout the
whole trajectory. In the first plan computed by ARA* starting at ε = 2.5 (Figure 5e) the
trajectory is much better than the 2D plan, but somewhat worse than the optimal 4D plan.
The time required for the optimal 4D planner was 11.196 secs, whereas the time for
the 4D ARA* planner to generate the plan in Figure 5e was 556ms. As a result, the robot
that runs ARA* can start executing its plan much earlier. A robot running the optimal
4D planner would still be near the beginning of its path 25 seconds after receiving a goal
location (Figure 5d). In contrast, in the same amount of time the robot running ARA* has
advanced much further (Figure 5f), and its plan by now has converged to optimal (ε has
decreased to 1).
4 Conclusions
We have presented the first anytime heuristic search that works by continually decreasing
a sub-optimality bound on its solution and finding new solutions that satisfy the bound on
the way. It executes a series of searches with decreasing sub-optimality bounds, and each
search tries to reuse as much as possible of the results from previous searches. The experiments show that our algorithm is much more efficient than any of the previous anytime
searches, and can successfully solve large robotic planning problems.
Acknowledgments
This work was supported by AFRL contract F30602-01-C-0219 and DARPA's MICA program.
References
[1] B. Bonet and H. Geffner. Planning as heuristic search. Artificial Intelligence, 129(1-2):5-33, 2001.
[2] T. L. Dean and M. Boddy. An analysis of time-dependent planning. In Proc. of the
National Conference on Artificial Intelligence (AAAI), 1988.
[3] D. Haehnel. Personal communication, 2003.
[4] S. Koenig and M. Likhachev. Incremental A*. In Advances in Neural Information
Processing Systems (NIPS) 14. Cambridge, MA: MIT Press, 2002.
[5] R. E. Korf. Linear-space best-first search. Artificial Intelligence, 62:41-78, 1993.
[6] M. Likhachev, G. Gordon, and S. Thrun. ARA*: Formal Analysis. Tech. Rep. CMUCS-03-148, Carnegie Mellon University, Pittsburgh, PA, 2003.
[7] J. Pearl. Heuristics: Intelligent Search Strategies for Computer Problem Solving.
Addison-Wesley, 1984.
[8] R. Zhou and E. A. Hansen. Multiple sequence alignment using A*. In Proc. of the
National Conference on Artificial Intelligence (AAAI), 2002. Student abstract.
[9] S. Zilberstein and S. Russell. Approximate reasoning using anytime algorithms. In
Imprecise and Approximate Computation. Kluwer Academic Publishers, 1995.
to Compositional Semantics:
a connectionist model and robot experiments
Yuuya Sugita
BSI, RIKEN
Hirosawa 2-1, Wako-shi
Saitama 3510198 JAPAN
[email protected]
Jun Tani
BSI, RIKEN
Hirosawa 2-1, Wako-shi
Saitama 3510198 JAPAN
[email protected]
Abstract
We present a novel connectionist model for acquiring the semantics of a
simple language through the behavioral experiences of a real robot. We
focus on the "compositionality" of semantics, a fundamental characteristic of human language, which is the ability to understand the meaning
of a sentence as a combination of the meanings of words. We also pay
much attention to the "embodiment" of a robot, which means that the
robot should acquire semantics which matches its body, or sensory-motor
system. The essential claim is that an embodied compositional semantic
representation can be self-organized from generalized correspondences
between sentences and behavioral patterns. This claim is examined and
confirmed through simple experiments in which a robot generates corresponding behaviors from unlearned sentences by analogy with the correspondences between learned sentences and behaviors.
1 Introduction
Implementing language acquisition systems is one of the most difficult problems, since
not only the complexity of the syntactical structure, but also the diversity in the domain
of meaning make this problem complicated and intractable. In particular, how linguistic
meaning can be represented in the system is crucial, and this problem has been investigated
for many years.
In this paper, we introduce a connectionist model to acquire the semantics of language with
respect to the behavioral patterns of a real robot. An essential question is how embodied compositional semantics can be acquired in the proposed connectionist model without
providing any representations of the meaning of a word or behavior routines a priori. By
?compositionality?, we refer to the fundamental human ability to understand a sentence
from (1) the meanings of its constituents, and (2) the way in which they are put together.
It is possible for a language acquisition system that acquires compositional semantics to
derive the meaning of an unknown sentence from the meanings of known sentences. Consider the unknown sentence: "John likes birds." It could be understood by learning these
three sentences: "John likes cats."; "Mary likes birds."; and "Mary likes cats." That is to
say, generalization of meaning can be achieved through compositional semantics.
From the point of view of compositionality, the symbolic representation of word meaning
has much affinity with processing the linguistic meaning of sentences [4]. Following this
observation, various learning models have been proposed to acquire the embodied semantics of language. For example, some models learn semantics in the form of correspondences
between sentences and non-linguistic objects, i.e., visual images [10] or the sensory-motor
patterns of a robot [7, 13].
In these works, the syntactic aspect of language was acquired through a pre-acquired lexicon. This means that the meanings of words (i.e., the lexicon) are acquired independently of
the usages of words in sentences (i.e., syntax). Although this separated learning approach
seems to be plausible from the requirements of compositionality, it causes inevitable difficulties in representing the meaning of a sentence. A priori separation of lexicon and
syntax requires a pre-defined manner of combining word meanings into the meaning of a
sentence. In Iwahashi's model, the class of a word is assumed to be given prior to learning its meaning because different acquisition algorithms are required for nouns and verbs
(c.f., [12]). Moreover, the meaning of a sentence is obtained by filling a pre-defined template with meanings of words. Roy's model does not require a priori knowledge of word
classes, but requires the strong assumption that the meaning of a word can be assigned
to some pre-defined attributes of non-linguistic objects. This assumption is not realistic
in more complex cases, such as when the meaning of a word needs to be extracted from
non-linguistic spatio-temporal patterns, as in case of learning verbs.
In this paper, we discuss an essential mechanism for self-organizing embodied compositional semantic representations, in which separate treatments of words and syntax are not
required. Our model implements compositional semantics by utilizing the generalization
capability of an RNN, where the meaning of each word cannot exist independently, but
emerges from the relations with others (c.f., reverse compositionality, [3]). In this situation, a sort of generalization can be expected, such that the meanings of novel sentences
can be inferred by analogy with learned ones.
The experiments were conducted using a real mobile robot with an arm and with various
sensors, including a vision system. A finite set of two-word sentences consisting of a verb
followed by a noun was considered. Our analysis will clarify what sorts of internal neural
structures should be self-organized for achieving compositional semantics grounded to a
robot?s behavioral experiences. Although our experimental design is limited, the current
study will suggest an essential mechanism for acquiring grounded compositional semantics, with the minimal combinatorial structure of this finite language [2].
2 Task Design
The aim of our experimental task is to discuss an essential mechanism for self-organizing
compositional semantics based on the behavior of a robot. In the training phase, our robot
learns the relationships between sentences and the corresponding behavioral sensory-motor
sequences of a robot in a supervised manner. It is then tested to generate behavioral sequences from a given sentence. We regard compositional semantics as being acquired if
appropriate behavioral sequences can be generated from unlearned sentences by analogy
with learned data.
Our mobile robot has three actuators, with two wheels and a joint on the arm; a colored
vision sensor; and two torque sensors, on the wheel and the arm (Figure 1a). The robot
operates in an environment where three colored objects (red, blue, and green) are placed
on the floor (Figure 1b). The positions of these objects can be varied so long as the robot
sees the red object on the left side of its field of view, the green object in the middle, and
the blue object on the right at the start of every trial of behavioral sequences. The robot
thus learns nine categories of behavioral patterns, consisting of pointing at, pushing, and
hitting each of the three objects, in a supervised manner. These categories are denoted as
POINT-R, POINT-B, POINT-G, PUSH-R, PUSH-B, PUSH-G, HIT-R, HIT-B, and HIT-G
(Figure 1c-e).
The robot also learns sentences which consist of one of 3 verbs (point, push, hit)
[Figure 1 sketch: robot at its starting position facing the Red (left), Green (middle), and Blue (right) objects; panels (a)-(e)]
Figure 1: The mobile robot (a) starts from a fixed position in the environment and (b) ends
each behavior by (c) pointing at, (d) pushing, or (e) hitting an object.
[Figure 2 table: "point red"/"point left" → POINT-R; "point blue"/"point center" → POINT-B; "point green"/"point right" → POINT-G; "push red"/"push left" → PUSH-R; "push blue"/"push center" → PUSH-B; "push green"/"push right" → PUSH-G; "hit red"/"hit left" → HIT-R; "hit blue"/"hit center" → HIT-B; "hit green"/"hit right" → HIT-G]
Figure 2: The correspondence between sentences and behavioral categories. Each
behavioral category has two corresponding sentences.
followed by one of 6 nouns (red, left, blue, center, green, right). The meanings of
these 18 possible sentences are given in terms of fixed correspondences with the 9 behavioral categories (Figure 2). For example, "point red" and "point left" correspond to
POINT-R, "point blue" and "point center" to POINT-B, and so on. In these correspondences, "left," "center," and "right" have exactly the same meaning as "red,"
"blue," and "green" respectively. These synonyms are introduced to observe how the
behavioral similarity affects the acquired linguistic semantic structure.
3 Proposed Model
Our model employs two RNNs with parametric bias nodes (RNNPBs) [15] in order to
implement a linguistic module and a behavioral module (Figure 3). The RNNPB, like the
conventional Jordan-type RNN [8], is a connectionist model to learn time sequences. The
linguistic module learns the above sentences represented as time sequences of words [1],
while the behavioral module learns the behavioral sensory-motor sequences of the robot.
To acquire the correspondences between the sentences and behavioral sequences, these two
modules are connected to each other by using the parametric bias binding method. Before
discussing this binding method in detail, we introduce the overall architecture of RNNPB.
[Figure 3: Our model is composed of two RNNs with parametric bias nodes (RNNPBs), one for a linguistic module and the other for a behavioral module. Both modules interact with each other during the learning process via the parametric bias binding method introduced in the text. The linguistic module has word input nodes, parametric bias nodes, context nodes, and word prediction output nodes; the behavioral module has sensory-motor input nodes, parametric bias nodes, context nodes, and sensory-motor prediction output nodes.]
3.1 RNNPB
The RNNPB has the same neural architecture as the Jordan-type RNN except for the PB nodes in the input layer (cf. each module of Figure 3). Unlike the other input nodes, these
PB nodes take a specific constant vector throughout each time sequence, and are employed
to implement a mapping between fixed-length vectors and time sequences.
Like the conventional Jordan-type RNN, the RNNPB learns time sequences in a supervised
manner. The di?erence is that in the RNNPB, the vectors that encode the time sequences
are self-organized in PB nodes during the learning process. The common structural properties of all the training time sequences are acquired as connection weight values by using the
back-propagation through time (BPTT) algorithm, as used also in the conventional RNN
[8, 11]. Meanwhile, the specific properties of each individual time sequence are simultaneously encoded as PB vectors (c.f., [9]). As a result, the RNNPB self-organizes a mapping
between the PB vectors and the time sequences.
The learning algorithm for the PB vectors is a variant of the BPTT algorithm. For each of $n$ training time sequences of real-numbered vectors $x_0, \ldots, x_{n-1}$, the back-propagated errors with respect to the PB nodes are accumulated over all time steps to update the PB vectors. Formally, the update rule for the PB vector $p_{x_i}$ encoding the $i$-th training time sequence $x_i$ is given as follows:

$$\delta^2 p_{x_i} = \frac{1}{l_i} \sum_{t=0}^{l_i - 1} \mathrm{error}_{p_{x_i}}(t) \qquad (1)$$

$$\Delta p_{x_i} = \gamma \cdot \delta^2 p_{x_i} + \eta \cdot \Delta p_{x_i}^{\mathrm{old}} \qquad (2)$$

$$p_{x_i} = p_{x_i}^{\mathrm{old}} + \Delta p_{x_i} \qquad (3)$$
In equation (1), the update of the PB vector, $\delta^2 p_{x_i}$, is obtained from the average back-propagated error with respect to a PB node, $\mathrm{error}_{p_{x_i}}(t)$, over all time steps from $t = 0$ to $l_i - 1$, where $l_i$ is the length of $x_i$. In equation (2), this update is low-pass filtered to inhibit frequent rapid changes in the PB vectors.
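As a rough sketch of how the update rules (1)-(3) might be implemented (the function name and the coefficients gamma and eta are illustrative choices, not taken from the paper):

```python
import numpy as np

def update_pb_vector(p, delta_p_old, pb_errors, gamma=0.1, eta=0.9):
    """One PB update for a single training sequence, following Eqs. (1)-(3).

    p           -- current PB vector p_{x_i}
    delta_p_old -- previous update Delta p (the low-pass / momentum term)
    pb_errors   -- (l_i x n_pb) back-propagated errors at the PB nodes,
                   one row per time step t = 0, ..., l_i - 1
    """
    delta2_p = pb_errors.mean(axis=0)               # Eq. (1): average BPTT error
    delta_p = gamma * delta2_p + eta * delta_p_old  # Eq. (2): low-pass filtering
    return p + delta_p, delta_p                     # Eq. (3): apply the update
```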
After successfully learning the time sequences, the RNNPB can generate a time sequence $x_i$ from its corresponding PB vector $p_{x_i}$. The actual generation of a time sequence $x_i$ is implemented by iteratively running the RNNPB with the corresponding PB vector $p_{x_i}$, a fixed initial context vector, and input vectors for each time step. Depending on the required functionality, both external information (e.g., sensory information) and internal predictions (e.g., motor commands) are employed as input vectors.
Here, we introduce an abstracted operational notation for the RNNPB to facilitate the later explanation of our proposed method of binding language and behavior. Using an operator RNNPB, the generation of $x_i$ from $p_{x_i}$ is described as follows:

$$\mathrm{RNNPB}(p_{x_i}) \rightarrow x_i, \qquad i = 0, \ldots, n-1. \qquad (4)$$
Furthermore, the RNNPB can be used not only for sequence generation but also for recognition. For a given sequence $x_i$, the corresponding PB vector $p_{x_i}$ can be obtained by using the update rules for the PB vectors (equations (1) to (3)), without updating the connection weight values. This inverse operation of generation is regarded as recognition, and is hence denoted as follows:

$$\mathrm{RNNPB}^{-1}(x_i) \rightarrow p_{x_i}, \qquad i = 0, \ldots, n-1. \qquad (5)$$
Another important characteristic of the RNNPB is that the relational structure among the training time sequences can be acquired in the PB space through the learning process. This generalization capability of the RNNPB can be employed to generate and recognize unseen time sequences without any additional learning. For instance, after learning several cyclic time sequences of different frequencies, novel time sequences of intermediate frequency can be generated [6].
3.2 Binding
In the proposed model, corresponding sentences and behavioral sequences are constrained
to have the same PB vectors in both modules. Under this condition, corresponding behavioral sequences can be generated naturally from sentences. When a sentence $s_i$ and its corresponding behavioral sequence $b_i$ have the same PB vector, we can obtain $b_i$ from $s_i$ as follows:

$$\mathrm{RNNPB}_B(\mathrm{RNNPB}_L^{-1}(s_i)) \rightarrow b_i \qquad (6)$$
where $\mathrm{RNNPB}_L$ and $\mathrm{RNNPB}_B$ are the abstracted operators for the linguistic module and the behavioral module, respectively.
The PB vector $p_{s_i}$ is obtained by recognizing the sentence $s_i$. Because of the constraint that corresponding sentences and behavioral sequences must have the same PB vectors, $p_{b_i}$ is equal to $p_{s_i}$. Therefore, we can obtain the corresponding behavioral sequence $b_i$ by running the behavioral module with $p_{b_i}$.
The binding constraint is implemented by introducing an interaction term into part of the
update rule for the PB vectors (equation (3)).
$$p_{s_i} = p_{s_i}^{\mathrm{old}} + \Delta p_{s_i} + \varepsilon_L \cdot (p_{b_i}^{\mathrm{old}} - p_{s_i}^{\mathrm{old}}) \qquad (7)$$

$$p_{b_i} = p_{b_i}^{\mathrm{old}} + \Delta p_{b_i} + \varepsilon_B \cdot (p_{s_i}^{\mathrm{old}} - p_{b_i}^{\mathrm{old}}) \qquad (8)$$
where $\varepsilon_L$ and $\varepsilon_B$ are positive coefficients that determine the strength of the binding. Equations (7) and (8) are the constrained update rules for the linguistic module and the behavioral module, respectively. Under these rules, the PB vectors of a corresponding sentence $s_i$ and behavioral sequence $b_i$ attract each other. Actually, the corresponding PB vectors $p_{s_i}$ and $p_{b_i}$ need not be completely equalized to learn a correspondence: epsilon-small differences between the PB vectors can be neglected because of the continuity of the PB spaces.
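A minimal sketch of one binding step under Eqs. (7) and (8); the function name and the default binding strengths are hypothetical:

```python
import numpy as np

def bind_pb_vectors(p_s, p_b, delta_p_s, delta_p_b, eps_L=0.05, eps_B=0.05):
    """Constrained PB updates of Eqs. (7)-(8): the PB vectors of a sentence
    s_i and its behavioral sequence b_i attract each other. p_s and p_b are
    the 'old' vectors; delta_p_s and delta_p_b come from the ordinary PB
    update rule (Eqs. (1)-(3))."""
    p_s_new = p_s + delta_p_s + eps_L * (p_b - p_s)  # Eq. (7)
    p_b_new = p_b + delta_p_b + eps_B * (p_s - p_b)  # Eq. (8)
    return p_s_new, p_b_new
```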
3.3 Generalization of Correspondences
As noted above, our model enables a robot to understand a sentence by means of a generated behavior, as if the meaning of the sentence were composed of the meanings of its constituents. That is to say, the robot can generate appropriate behavioral sequences from all sentences without learning all the correspondences. To achieve this, an unlearned sentence and its corresponding behavioral sequences must have the same PB vector. Nevertheless, the PB binding method only equalizes the PB vectors for given corresponding sentences and behavioral sequences (cf. equations (7) and (8)).
Implicit binding, or in other words, inter-module generalization of correspondences, is achieved by dynamic coordination between the PB binding method and the intra-module generalization of each module. The local effect of the PB binding method spreads over the whole PB space, because each individual PB vector depends on the others in order to self-organize PB structures reflecting the relationships among the training data. Thus, the PB structures of both modules densely interact via the PB binding method. Finally, both PB structures converge to a common PB structure, and therefore all corresponding sentences and behavioral sequences then share the same PB vectors automatically.
4 Experiments
In the learning phase, the robot learned 14 of the 18 correspondences between sentences and behavioral patterns (cf. Figure 2). It was then tested on generating behavioral sequences from each of the remaining 4 sentences ("point green", "point right", "push red", and "push left").
To enable the robot to learn correspondences robustly, five corresponding sentences and behavioral sequences were associated by using the PB binding method for each of the 14 training correspondences. Thus, the linguistic module learned 70 sentences with PB binding. Meanwhile, the behavioral module learned the behavioral sequences of all 9 categories, including the 2 categories which had no corresponding sentences in the training set. The behavioral module learned 10 different sensory-motor sequences for each behavioral category. It therefore learned 70 behavioral sequences corresponding to the training sentences with PB binding, and the remaining 20 sequences independently. In addition, the behavioral module learned the same 90 behavioral sequences without binding.
A sentence is represented as a time sequence of words, which starts with a fixed starting
symbol. Each word is locally represented, such that each input node of the module corresponds to a specific word. A single input node takes a value of 1.0 while the others take 0.0 [1]. The linguistic module has 10 input nodes, one for each of the 9 words and the starting symbol. The module also has 6 parametric bias nodes, 4 context nodes, 50 hidden nodes, and 10 prediction output nodes. Thus, no a priori knowledge about the meanings of words is pre-programmed.
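For illustration, the local word encoding could be written as below; the vocabulary ordering and the helper name are assumptions, as the paper does not prescribe an implementation:

```python
import numpy as np

# Assumed ordering: the 9 words of the language plus the starting symbol.
VOCAB = ["<start>", "point", "push", "hit",
         "red", "left", "blue", "center", "green", "right"]

def encode_sentence(words):
    """Local (one-hot) encoding: a sentence becomes a time sequence of
    10-dim vectors, with exactly one node at 1.0 per time step."""
    tokens = ["<start>"] + list(words)
    seq = np.zeros((len(tokens), len(VOCAB)))
    for t, w in enumerate(tokens):
        seq[t, VOCAB.index(w)] = 1.0
    return seq

# e.g. encode_sentence(["point", "red"]) yields an array of shape (3, 10)
```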
A training behavioral sequence was created by sampling three sensory-motor vectors per
second during a trial of the robot's human-guided behavior. For robust learning of behavior, each training behavioral sequence was generated in a slightly different environment in which the object positions were varied. The variation was at most 20 percent of the distance between the starting position of the robot and the original position of each object, in every direction (cf. Figure 1b). Typical behavioral sequences are about 5 to 25 seconds long, and therefore comprise about 15 to 75 sensory-motor vectors. A sensory-motor vector is a real-numbered 26-dimensional vector consisting of 3 motor values (for the 2 wheels and the arm), 2
values from torque sensors (of the wheels and the arm), and 21 values encoding the visual
image. The visual field is divided vertically into 7 regions, and each region is represented
by (1) the fraction of the region covered by the object, (2) the dominant hue of the object
in the region, and (3) the bottom border of the object in the region, which is proportional
to the distance of the object from the camera. The behavioral module had 26 input nodes
for sensory-motor input, 6 parametric bias nodes, 6 context nodes, 70 hidden nodes, and 6
output nodes for motor commands and partial prediction of the sensory image at the next
time step.
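As an illustration of how such a 26-dimensional vector might be assembled (all names here are hypothetical; the paper gives no implementation details):

```python
import numpy as np

def sensory_motor_vector(motors, torques, regions):
    """Assemble one 26-dim sensory-motor vector: 3 motor values, 2 torque
    values, and 7 visual regions x (coverage fraction, dominant hue,
    bottom border of the object)."""
    assert len(motors) == 3 and len(torques) == 2 and len(regions) == 7
    visual = [v for region in regions for v in region]  # flatten 7 x 3 = 21
    return np.asarray(list(motors) + list(torques) + visual)  # shape (26,)
```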
5 Results and Analysis
In this section, we analyze the results of the experiment presented in the previous section.
The analysis reveals that the inter-module generalization realized by the PB binding method
could fill an essential role in self-organizing the compositional semantics of the simple
language through the behavioral experiences of the robot. As mentioned in the previous
section, the training data for this experiment did not include all the correspondences. As
a result, although the behavioral module was trained with the behavioral sequences of all
behavioral categories, those in two of the categories, whose corresponding sentences were
not in the linguistic training set, could not be bound.
The most important result was that these dangling behavioral sequences could be bound
with appropriate sentences. The robot could properly recognize four unseen sentences, and
generate the corresponding behaviors. This means that both modules share the common
PB structure successfully.
Comparing the PB spaces of both modules shows that they indeed shared a common structure as a result of binding. The linguistic PB vectors are computed by recognizing all
the possible 18 sentences including 4 unseen ones (Figure 4a), and the behavioral PB
vectors are computed in the learning phase for all 90 corresponding behavioral sequences in the training data (Figure 4b). The acquired correspondences between sentences and behavioral sequences can be examined according to equation (6). In particular, the implicit binding of the four unlearned correspondences ("point green"→POINT-G, "point right"→POINT-G, "push red"→PUSH-R, and "push left"→PUSH-R)
demonstrates acquisition of the underlying semantics, or the generalized correspondences.
The acquired common structure has two striking characteristics: (1) the combinatorial
structure originated from the linguistic module, and (2) the metric based on the behavioral similarity originated from the behavioral module. The interaction between modules
enabled both PB spaces to simultaneously acquire these two structural properties.
We can find three congruent sub-structures for each verb, and six congruent sub-structures
for each noun in the linguistic PB space. This congruency represents the underlying syntax structure of the training sentences.

[Figure 4: Plots of the bound linguistic module (a) and the bound behavioral module (b). Both plots are projections of the PB spaces onto the same surface determined by the PCA method; the axes are the first and second principal components. Here, the accumulated contribution rate is about 73%. Unlearned sentences and their corresponding behavioral categories are underlined.]

For example, it is possible to estimate the PB vector of "point green" from the relationships among the PB vectors of "point blue", "hit blue" and "hit green." This predictable geometric regularity could be acquired by independent learning of the linguistic module. However, it could not be acquired by independent learning of the behavioral module, because behavioral sequences cannot be decomposed into plausible primitives, unlike sentences, which can be broken down into words.
We can also see a metric reflecting the similarity of behavioral sequences, not only in the behavioral module but also in the linguistic module. The PB vectors of sentences that correspond to the same behavioral category take similar values. For example, the two sentences corresponding to POINT-R ("point red" and "point left") are encoded as similar PB vectors. Such a metric structure could not be observed in independent learning of the linguistic module, in which all nouns were plotted symmetrically in the PB space by means of the syntactic constraints.
The above observations thus confirm that embodied compositional semantics was self-organized through the unification of both modules, which was implemented by the PB binding method. We also ran experiments with different test sentences, and confirmed that similar results could be obtained.
6 Discussion and Summary
Our simple experiments showed that the minimal grounded compositional semantics of our
language can be acquired by generalizing the correspondences between sentences and the
behavioral sensory-motor sequences of a robot. Our experiments could not examine strong
systematicity [4], but could address the combinatorial nature of sentences.
That is to say, the robot could understand relatively simple sentences in a systematic way,
and could understand novel sentences. Therefore, our results can elucidate some important
issues about the compositional semantic representation.
We claim that the acquisition of word meaning and syntax cannot be separated, from the standpoint of the symbol grounding problem [5]. The meanings of words depend on each other to compose the meanings of sentences [16]. Consider the meaning of the word "red." The meaning of "red" must be something which combines with the meaning of "point", "push" or "hit" to form the grounded meanings of sentences. Therefore, an a priori definition of the meaning of "red" substantially affects the organization of the other parts of the system, and often results in further pre-programming. This means that it is inevitably difficult to explicitly extract the meaning of a word from the meaning of a sentence.
Our model avoids this difficulty by implementing the grounded meaning of a word implicitly, in terms of the relationships among the meanings of sentences based on behavioral experiences. Our model does not require any pre-programming of syntactic information, such as a symbolic representation of word meaning, a predefined combinatorial structure in the semantic domain, or behavior routines. Instead, the essential structures accounting for compositionality are fully self-organized in the iterative dynamics of the RNN, through the structural interactions between language and behavior using the PB binding method. Thus, the robot can understand "red" through its behavioral interactions in the designed tasks in a bottom-up way [14]. A similar argument holds true for verbs. For example, the robot understands "point" through pointing at red, blue, and green objects.
To summarize, the current study has shown the importance of generalization of the correspondences between sentences and behavioral patterns in the acquisition of an embodied language. In future studies, we plan to apply our model to larger language sets. In the current experiment, the training set covers a large fraction of the legal input space compared with related work. Such a large training set is needed because our model has no a priori knowledge of syntax and composition rules. However, we expect that our model would require a relatively smaller fraction of sentences to learn a larger language set, for a given degree of syntactic complexity.
References
[1] J. L. Elman. Finding structure in time. Cognitive Science, 14:179-211, 1990.
[2] G. Evans. Semantic Theory and Tacit Knowledge. In S. Holzman and C. Leich, editors, Wittgenstein: To Follow a Rule. London: Routledge and Kegan Paul, 1981.
[3] J. Fodor. Why Compositionality Won't Go Away: Reflections on Horwich's "Deflationary" Theory. Technical Report 46, Rutgers University, 1999.
[4] R. F. Hadley. Systematicity revisited: reply to Christiansen and Chater and Niklasson and van Gelder. Mind and Language, 9:431-444, 1994.
[5] S. Harnad. The symbol grounding problem. Physica D, 42:335-346, 1990.
[6] M. Ito and J. Tani. Generalization and Diversity in Dynamic Pattern Learning and Generation by Distributed Representation Architecture. Technical Report 3, Lab. for BDC, Brain Science Institute, RIKEN, 2003.
[7] N. Iwahashi. Language acquisition by robots: Towards a new paradigm of language processing. Journal of Japanese Society for Artificial Intelligence, 18(1):49-58, 2003.
[8] M. I. Jordan and D. E. Rumelhart. Forward models: supervised learning with a distal teacher. Cognitive Science, 16:307-354, 1992.
[9] R. Miikkulainen. Subsymbolic Natural Language Processing: An Integrated Model of Scripts, Lexicon, and Memory. MIT Press, 1993.
[10] D. K. Roy. Learning visually grounded words and syntax for a scene description task. Computer Speech and Language, 16, 2002.
[11] D. E. Rumelhart, G. E. Hinton, and R. J. Williams. Learning internal representations by error propagation. In D. E. Rumelhart and J. L. McClelland, editors, Parallel Distributed Processing. Cambridge, MA: MIT Press, 1986.
[12] J. M. Siskind. Grounding the Lexical Semantics of Verbs in Visual Perception using Force Dynamics and Event Logic. Journal of Artificial Intelligence Research, 15:31-90, 2001.
[13] L. Steels. The Emergence of Grammar in Communicating Autonomous Robotic Agents. In W. Horn, editor, Proceedings of the European Conference on Artificial Intelligence, pages 764-769. IOS Press, 2000.
[14] J. Tani. Model-Based Learning for Mobile Robot Navigation from the Dynamical Systems Perspective. IEEE Trans. on SMC (B), 26(3):421-436, 1996.
[15] J. Tani. Learning to generate articulated behavior through the bottom-up and the top-down interaction process. Neural Networks, 16:11-23, 2003.
[16] T. Winograd. Understanding natural language. Cognitive Psychology, 3(1):1-191, 1972.
Increase information transfer rates in BCI
by CSP extension to multi-class
Guido Dornhege1, Benjamin Blankertz1, Gabriel Curio2, Klaus-Robert Müller1,3
1 Fraunhofer FIRST.IDA, Kekuléstr. 7, 12489 Berlin, Germany
2 Neurophysics Group, Dept. of Neurology, Klinikum Benjamin Franklin,
Freie Universität Berlin, Hindenburgdamm 30, 12203 Berlin, Germany
3 University of Potsdam, August-Bebel-Str. 89, 14482 Potsdam, Germany
{dornhege,blanker,klaus}@first.fraunhofer.de,
[email protected]
Abstract
Brain-Computer Interfaces (BCI) are an interesting emerging technology
that is driven by the motivation to develop an effective communication interface translating human intentions into a control signal for devices like
computers or neuroprostheses. If this can be done bypassing the usual human output pathways like peripheral nerves and muscles it can ultimately
become a valuable tool for paralyzed patients. Most activity in BCI research is devoted to finding suitable features and algorithms to increase
information transfer rates (ITRs). The present paper studies the implications of using more classes, e.g., left vs. right hand vs. foot, for operating
a BCI. We contribute by (1) a theoretical study showing under some mild
assumptions that it is practically not useful to employ more than three
or four classes, (2) two extensions of the common spatial pattern (CSP)
algorithm, one interestingly based on simultaneous diagonalization, and
(3) controlled EEG experiments that underline our theoretical findings
and show excellent improved ITRs.
1 Introduction
The goal of a Brain-Computer Interface (BCI) is to establish a communication channel for
translating human intentions ? reflected by suitable brain signals ? into a control signal for,
e.g., a computer application or a neuroprosthesis (cf. [1]). If the brain signal is measured
non-invasively by an electroencephalogram (EEG), if short training and preparation times
are feasible and if it is possible to achieve high information transfer rates (ITRs), this interface can become a useful tool for disabled patients or an interesting gadget in the context
of computer games. Recently, some approaches have been presented (cf. [1, 2]) which are
good candidates for successfully implementing such an interface.
In a BCI system a subject tries to convey her/his intentions by behaving according to well-defined paradigms, like imagination of specific movements. An effective discrimination
of different brain states is important in order to implement a suitable system for human
subjects. Therefore appropriate features have to be chosen by signal processing techniques
according to the selected paradigm. These features are translated into a control signal,
either by simple threshold criteria (cf. [1]), or by machine learning techniques where the
computer learns a decision function from some training data [1, 3, 4, 5, 6].
For non-invasive BCI systems that are based on discrimination of voluntarily induced brain
states, three approaches are characteristic. (1) The Tübingen Thought Translation Device
(TTD) [7] enables subjects to learn self-regulation of slow cortical potentials (SCP), i.e.,
electrocortical positivity and negativity. After some training in experiments with vertical
cursor movement as feedback navigated by the SCP from central scalp position, patients
are able to generate binary decisions in a 4-6 second pace with an accuracy of up to 85 %.
(2) Users of the Albany BCI system [8] are able to control a cursor movement by their oscillatory brain activity into one of two or four possible targets on the computer screen and to
achieve over 90 % hit rates after adapting to the system during many feedback sessions with
a selection rate of 4 to 5 seconds in the binary decision problem. And (3), based on event-related modulations of the pericentral µ- and/or β-rhythms of sensorimotor cortices (with
a focus on motor preparation and imagination) the Graz BCI system [9] obtains accuracies
of over 96 % in a ternary classification task with a trial duration of 8 seconds by evaluation
of adaptive auto-regressive models (AAR). Note that there are other BCI systems which
rely on stimulus/response paradigms, e.g. P300, see [1] for an overview.
In [10] an approach called Common Spatial Patterns (CSP) was suggested for use in a
BCI context. This algorithm extracts event-related desynchronization (ERD) effects, i.e.,
event-related attenuations in some frequency bands, e.g., the µ/β-rhythm. However, the CSP
algorithm can be used more generally, e.g., in [11] a suitable modification to movementrelated potentials was presented. Further in [12] a first multi-class extension of CSP is
presented which is based on pairwise classification and voting. In this paper we present
further ways to extend this approach to many classes and compare to prior work.
By extending a BCI system to many classes a gain in performance can be obtained since
the ITR can increase even if the percentage of correct classifications decreases. In [13] a
first study for increasing the number of classes is demonstrated based on a hidden markov
model approach. The authors conclude to use three classes which attains the highest ITR.
We are focussing here on the same problem but using CSP extracted features and arrive at
similar results. However, in a theoretical part we show that using more classes can be worth
the effort if a suitable accuracy of all pairwise classifications is available. Consequently,
extensions to multi-class settings are worthwhile for a BCI system, if and only if a suitable
number of effectivly separable human brain states can be assigned.
2 How many brain states should be chosen?
Out of many different brain states (classes) our task is to find a subset of classes which is
most profitable for the user of a BCI system. In this part we only focus on the information
theoretical perspective. Using more classes holds the potential to increase ITR, although
the rate of correct classifications decreases. For the subsequent theoretical considerations
we assume Gaussian distributions with equal covariance matrices for all classes, which is a reasonable assumption for a wide range of EEG features (see Section 4.3). Furthermore we assume equal priors for all classes. For three classes and equal pairwise classification error err, bounds for the expected classification error can be calculated in the following way: Let $(X, Y) \in \mathbb{R}^n \times \mathcal{Y}$ with $\mathcal{Y} = \{1, 2, 3\}$ be random variables and $P \sim \mathcal{N}(\mu_{1,2,3}, \Sigma)$ the probability distribution. Scaling appropriately, we can assume $\Sigma = I$. We define the optimal classifier by $f^* : \mathbb{R}^n \to \mathcal{Y}$ with $f^* = \operatorname{argmin}_{f \in F} P(f(X) \neq Y)$, where $F$ is some class of functions1. Similarly, $f^*_{i,j}$ describes the optimal classifier between classes $i$ and $j$. Directly we get $\mathrm{err} := P(f^*_{i,j}(X) \neq Y) = G(\|\mu_i - \mu_j\|/2)$ for $i \neq j$, with $G(x) := \frac{1}{\sqrt{2\pi}} \int_x^{\infty} \exp(-\xi^2/2)\, d\xi$.
1 For the moment we pay no attention to whether such a function exists. In the current set-up F is usually the space of all linear classifiers, and under the probability assumptions mentioned above such a minimum exists.
[Figure 1: The figure on the left visualizes a method to estimate bounds for the ITR depending on the expected pairwise misclassification risk for three classes, using regions A, B, C_l, C_u, D, E with R = A + B + C_l + D and C_u = C_l + D + E. The figure on the right shows the ITR [bits per decision] depending on the classification error [%] for simulated data for different numbers of classes (3-6 sim) and, for 2 classes, the real values (2 calc). Additionally the expected range (see (1)) (3 range) for three classes is visualized.]
Therefore we get $\|\mu_j - \mu_i\| = \rho$ for all $i \neq j$ with some $\rho > 0$, and finally, due to symmetry and equal priors, $P(f^*(X) \neq Y) = Q\left(\|X\|^2 \geq \min_{j=2,3} \|X - \mu_j + \mu_1\|^2\right)$ where $Q \sim \mathcal{N}(0, I)$. Since the evaluation of probabilities of polyhedra in Gaussian space is hard, we only estimate lower and upper bounds. We can directly reduce the problem to a 2-dimensional space by shifting and rotating and by Fubini's theorem. Since $\|\mu_j - \mu_i\| = \rho$ for all $i \neq j$, the means lie at the corners of an equilateral triangle (see Figure 1). We define $R := \{x \in \mathbb{R}^2 \mid \|x\|^2 \leq \|x - \mu_j + \mu_1\|^2,\ j = 2, 3\}$ and we can see, after some calculation or from Figure 1 (left) with the sets defined there, that $A \cup B \cup C_l \subseteq R \subseteq A \cup B \cup C_u$. Due to the symmetry, the equilateral triangle and a polar coordinate transformation, we finally get

$$\mathrm{err} + \frac{\exp(-\rho^2/6)}{6} \;\leq\; P(f^*(X) \neq Y) \;\leq\; \mathrm{err} + \frac{\exp(-\rho^2/8)}{6}. \qquad (1)$$
To compare classification performances involving different numbers of classes, we use the ITR quantified as bit rate per decision $I$, as defined via Shannon's theorem: $I := \log_2 N + p \log_2(p) + (1-p) \log_2((1-p)/(N-1))$ per decision, with number of classes $N$ and classification accuracy $p$ (cf. [14]). Figure 1 (right) shows the bounds in (1) for the ITR
as a function of the expected pairwise misclassification errors. Additionally the same values on simulated data (100000 data points for each class) under the assumptions described
above (equal pairwise performance, Gaussian distributed ...) are visualized for N = 2, ..., 6
classes. First of all, the figure confirms our estimated bounds. Furthermore, the figure shows that under these strong assumptions extensions to multi-class are worthwhile. However, the gain from using more than 4 classes is tiny if the pairwise classification error is about 10 % or more. Under more realistic assumptions, i.e., when additional classes have increasing pairwise classification error compared to a wisely chosen subset, it is improbable that the bit rate can be increased by raising the number of classes beyond three or four. However, this depends strongly on the pairwise errors. If there is a suitable number of different brain states that can be discriminated well, then extensions to more classes are indeed useful.
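To make these quantities concrete, the following sketch evaluates Shannon's formula and the bounds (1); the function names are ours, and SciPy's Gaussian tail inverse is assumed for solving err = G(rho/2) for rho:

```python
import numpy as np
from scipy.stats import norm

def itr_per_decision(N, p):
    """Shannon ITR [bits per decision] for N classes, accuracy p (0 < p < 1)."""
    return np.log2(N) + p * np.log2(p) + (1 - p) * np.log2((1 - p) / (N - 1))

def three_class_bounds(err):
    """Bounds (1) on the 3-class error, given a common pairwise error err.
    G is the upper Gaussian tail, so err = G(rho/2) gives rho directly."""
    rho = 2.0 * norm.isf(err)
    lower = err + np.exp(-rho**2 / 6) / 6
    upper = err + np.exp(-rho**2 / 8) / 6
    return lower, upper

# e.g. compare itr_per_decision(3, 0.85) with itr_per_decision(2, 0.90)
```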
3 CSP and some multi-class extensions
The CSP algorithm in its original form can be utilized for brain states that are characterized
by a decrease or increase of a cortical rhythm with a characteristic topographic pattern.
3.1 CSP in a binary problem
Let $\Sigma_{1,2}$ be the centered covariance matrices, calculated in the standard way from a trial-concatenated vector of dimension [channels × concatenated time points] belonging to the respective label. The computation of $\Sigma_{1,2}$ needs to be adapted to the paradigm, e.g., for slow cortical features such as the lateralized readiness potential (cf. [11]). The original CSP algorithm calculates a matrix $R$ and a diagonal matrix $D$ with elements in $[0, 1]$ with

$$R \Sigma_1 R^T = D \quad \text{and} \quad R \Sigma_2 R^T = I - D \qquad (2)$$
which can easily be obtained by whitening and the spectral theorem. Only a few projections are selected, namely those whose eigenvalue ratios are most extreme (lowest and highest). Intuitively, the CSP projections provide the scalp patterns which are most discriminative (see e.g. Figure 4).
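One possible realization of Eq. (2), assuming NumPy/SciPy: solving the generalized eigenvalue problem $\Sigma_1 w = \lambda(\Sigma_1 + \Sigma_2) w$ yields exactly this decomposition, with eigenvalues in [0, 1]:

```python
import numpy as np
from scipy.linalg import eigh

def csp_binary(Sigma1, Sigma2, n_patterns=2):
    """Binary CSP as in Eq. (2). eigh solves Sigma1 w = lambda (Sigma1+Sigma2) w
    with ascending eigenvalues in [0, 1]; the rows of R are spatial filters with
    R Sigma1 R^T = D and R Sigma2 R^T = I - D. Keep n_patterns filters from
    each end of the spectrum (the most discriminative ones)."""
    evals, evecs = eigh(Sigma1, Sigma1 + Sigma2)
    R = evecs.T
    sel = np.r_[np.arange(n_patterns),
                np.arange(len(evals) - n_patterns, len(evals))]
    return R[sel], evals[sel]
```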
3.2 Multi-class extensions
Using CSP within the classifier (IN): This algorithm reduces the multi-class problem to several binary problems (cf. [15]) and was suggested in [12] for CSP in a BCI context. For all combinations of two different classes the CSP patterns are calculated as described in Eq. (2).
The variances of the projections to CSP of every channel are used as input for an LDAclassifier for each 2-class combination. New trials are projected on these CSP patterns and
are assigned to the class for which most classifiers are voting.
One-versus-rest CSP (OVR): We suggest a subtle modification of the approach above which permits computing the CSP patterns before classification. We compute spatial patterns for each class against all others2. Then we project the EEG signals on all these CSP patterns, calculate the variances as before, and then perform an LDA multi-class classification. The approach OVR appears rather similar to the approach IN, but there is in fact a large practical difference (in addition to the one-versus-rest strategy as opposed to pairwise binary subproblems): in the approach IN, classification is done only binarily on the CSP patterns according to the binary choice, whereas OVR does multi-class classification on all projected signals.
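A sketch of the OVR pattern computation, reusing the hypothetical csp_binary from the sketch above:

```python
import numpy as np

def csp_ovr(covs, n_per_side=2):
    """One-versus-rest CSP: binary CSP of each class covariance against the
    sum of the remaining ones; the stacked filters are applied to all trials,
    and the resulting variances go into a multi-class LDA."""
    filters = []
    for i in range(len(covs)):
        Sigma_rest = sum(covs[j] for j in range(len(covs)) if j != i)
        R, _ = csp_binary(covs[i], Sigma_rest, n_patterns=n_per_side)
        filters.append(R)
    return np.vstack(filters)
```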
Simultaneous diagonalization (SIM): The main trick in the binary case is that the CSP
algorithm finds a simultaneous diagonalization of both covariance matrices whose eigenvalues sum to one. Thus a possible extension to many classes, i.e., many covariances
$(\Sigma_i)_{i=1,\ldots,N}$, is to find a matrix $R$ and diagonal matrices $(D_i)_{i=1,\ldots,N}$ with elements in $[0, 1]$ and with $R \Sigma_i R^T = D_i$ for all $i = 1, \ldots, N$ and $\sum_{i=1}^{N} D_i = I$. Such a decomposition can only
be approximated for N > 2. There are several algorithms for approximate simultaneous
diagonalization (cf. [16, 17]) and we are using the algorithm described in [18] due to its
speed and reliability. As opposed to the two class problem there is no canonical way to
choose the relevant CSP patterns. We explored several options such as using the highest
or lowest eigenvalues. Finally, the best strategy was based on the assumption that two
different eigenvalues for the same pattern have the same effect if their ratios to the mean
of the eigenvalues of the other classes are multiplicatively inverse to each other, i.e., their
product is 1. Thus all eigenvalues ? are mapped to max(? , (1 ? ? )/(1 ? ? + (N ? 1) 2 ? ))
and a specified number m of highest eigenvalues for each class are used as CSP patterns.
It should be mentioned that each pattern is only used once, namely for the class which has
the highest modified eigenvalue. If a second class would choose this pattern it is left out
for this class and the next one is chosen. Finally variances are computed on the projected
trials as before and conventional LDA multi-class classification is done.
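The eigenvalue mapping and the one-pattern-per-class selection could look as follows (a greedy assignment; the approximate simultaneous diagonalization itself, e.g. via [18], is assumed to have been computed already):

```python
import numpy as np

def sim_select_patterns(lam, m=4):
    """lam: (N_classes x N_channels) array with lam[i] = diag(R Sigma_i R^T)
    after approximate simultaneous diagonalization. Each eigenvalue is mapped
    so that multiplicatively inverse ratios score equally; every pattern is
    then given to the class with the highest mapped value, m per class."""
    N, C = lam.shape
    score = np.maximum(lam, (1 - lam) / (1 - lam + (N - 1) ** 2 * lam))
    chosen, used = {i: [] for i in range(N)}, set()
    for s, i, c in sorted(((score[i, c], i, c) for i in range(N)
                           for c in range(C)), reverse=True):
        if c not in used and len(chosen[i]) < m:
            chosen[i].append(c)   # each pattern is used once, for class i only
            used.add(c)
    return chosen
```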
4 Data acquisition and analysis methods
4.1 Experiments
We recorded brain activity from 4 subjects (codes aa, af, ak and ar) with multi-channel
EEG amplifiers using 64 (128 for aa) channels band-pass filtered between 0.05 and 200 Hz
and sampled at 1000 Hz. For offline analysis all signals were downsampled to 100 Hz.
Surface EMG at both forearms and one leg, as well as horizontal and vertical EOG signals,
were recorded to check for muscle activation and eye movements, but no trial was rejected.
2 Note that this can be done similarly with pairwise patterns, but in our studies no substantial difference was observable, and therefore one-versus-rest is favourable, since it chooses fewer patterns.
The subjects in this experiment were sitting in a comfortable chair with arms lying relaxed
on the armrests. Every 4.5 seconds one of 6 different letters appeared on the computer screen for 3 seconds. During this period the subject was to imagine one of 6 different actions according to the displayed letter: imagination of left or right hand or foot movement, or imagination of a visual, auditory or tactile sensation. Subject aa took part only in an
experiment with the 3 classes l, r and f. 200 (resp. 160 for aa) trials for each class were
recorded.
The aim of classification in these experiments is to discriminate trials of different classes
using the whole period of imagination. A further reasonable objective, detecting a new brain state as fast as possible, was not an object of this particular study. Note that the classes
v, a and t were originally not intended to be BCI paradigms. Rather, these experiments were
included to explore multi-class single-trial detection for brain states related to different
sensory modalities for which it can reasonably be assumed that the regional activations can
be well differentiated at a macroscopic scale of several centimeters.
4.2 Feature Extraction
Due to the fact that we focus on desynchronization effects (here of the µ-rhythm), we first apply a causal frequency filter of 8-15 Hz to the signals. Further, each trial consists of a
two second window starting 500 ms after the visual stimulus. Then, the CSP algorithm is
applied and finally variances of the projected trials were calculated to acquire the feature
vectors. Alternatively, to see how effective the CSP algorithm is, the projection is left out
for the binary classification task and we use instead techniques like Laplace filtering or
common average reference (CAR) with a regularized LDA classifier on the variances.
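A sketch of this feature-extraction pipeline; the Butterworth filter order and the array layout are assumptions:

```python
import numpy as np
from scipy.signal import butter, lfilter

def extract_features(trials, filters, fs=100, band=(8.0, 15.0), t0=0.5, win=2.0):
    """trials: (n_trials x n_channels x n_samples); filters: (n_csp x n_channels).
    Causal 8-15 Hz band-pass, a 2 s window starting 500 ms after the stimulus,
    projection onto the CSP filters, then the variance per projected channel."""
    b, a = butter(5, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    X = lfilter(b, a, trials, axis=-1)           # causal IIR filtering
    i0 = int(t0 * fs)
    X = X[:, :, i0:i0 + int(win * fs)]           # imagination window
    proj = np.einsum("pc,ncs->nps", filters, X)  # apply spatial filters
    return proj.var(axis=-1)                     # variance feature vectors
```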
The frequency band and the time period should be chosen individually by closer analysis
of each data set. However, we are not focussing on this effect here, therefore we choose a
setting which works well for all subjects. The number of chosen CSP patterns is a further
variable. Extended search for different values can be done, but is omitted here. To have
similar numbers of patterns for each algorithm, we choose for IN 2 patterns from each side in each pairwise classification (resulting in 2N(N-1) patterns), for OVR 2 patterns from each side in each one-versus-rest choice, and for SIM 4 patterns for each class (both resulting in 4N patterns).
4.3 Classification and Validation
According to our studies the assumption that the features we are using are Gaussian distributed with equal covariance matrices holds well [2]. In this case Linear Discriminant
Analysis (LDA) is optimal for classification in the sense that it minimizes the risk of misclassifications. Due to the low dimensionality of the CSP features regularization is not
required.
To assess the classification performance, the generalization error was estimated by 10×10-fold cross-validation. Since the CSP algorithm depends on the class labels, the calculation
of this projection is done in the cross-validation on each training set. Doing it on the whole
data set beforehand can result in overfitting, i.e., underestimating the generalization error.
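A sketch of such a validation loop, keeping the CSP estimation strictly inside each training fold (scikit-learn is assumed; fit_csp and extract stand for the routines sketched above):

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import StratifiedKFold

def cv_error(trials, labels, fit_csp, extract, n_rep=10, n_folds=10):
    """10x10-fold CV; CSP filters are re-estimated per training fold only,
    since fitting them on the whole data set would underestimate the error."""
    errs = []
    for rep in range(n_rep):
        cv = StratifiedKFold(n_folds, shuffle=True, random_state=rep)
        for tr, te in cv.split(trials, labels):
            W = fit_csp(trials[tr], labels[tr])    # label-dependent step
            clf = LinearDiscriminantAnalysis().fit(extract(trials[tr], W),
                                                   labels[tr])
            errs.append(np.mean(clf.predict(extract(trials[te], W))
                                != labels[te]))
    return float(np.mean(errs))
```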
For the purpose of this paper the best configuration of classes should be found. The most sophisticated way in a BCI context would have been to run many experiments with different sets of classes. Unfortunately this is very time-consuming and not of interest for the BCI user. A more useful way is to perform, in a preliminary step, experiments with many classes and to choose in an offline analysis which subset is best by testing all combinations. With the best chosen class configuration, the experiment should then be repeated to confirm the results. However, in this paper we present results of this simpler kind of experiment, in fact following the setting in [13].
5 Results

[Figure 2: In the scatter plot the ITRs [bits per decision] for all 2-class combinations for all subjects obtained by CSP are shown on the x-axis, while those by LAPLACE (dark points) resp. CAR (light points) are on the y-axis. That means that for marks below the diagonal CSP outperforms LAPLACE resp. CAR.]

In Figure 2 the bit rates for all binary combinations of two classes and for all subjects are shown. The results for the CSP algorithm are contrasted in the plot with the results of LAPLACE/CAR in such a way that for points below the diagonal CSP is better and for points above the other algorithms are better. We can conclude that it is usually advantageous to use the CSP algorithm. Furthermore it is observable that the pairwise classification performances differ strongly. According to our theoretical considerations we should therefore assume that in the multi-class case a configuration with 3 classes will perform best.

Figure 3 shows the ITRs for all multi-class configurations (N = 3, ..., 6) for different subjects. Results for the baseline method IN are compared to the new methods SIM and OVR. The latter methods are superior for those
configurations whose results are below the diagonal in the scatter plot. For an overview, the upper plots show histograms of the differences in ITR between SIM/OVR and IN and a Gaussian approximation. We can conclude from these figures that no algorithm is generally the best. SIM shows the best mean performance for subjects ak and ar, but the performance falls off for subject af. Since for aa only one three-class combination is available, we omit a visualization. However, SIM again performs best for this subject.
Statistical tests of significance are omitted since the classification results are generally not independent; e.g., the classifications of {l,r,f} and {l,a,t} are dependent since the trials of class l are involved in both. For a given number of classes, Figure 4 shows the ITR obtained for the optimal subset of brain states by the best of the presented algorithms. As conjectured from the fluctuations in pairwise discriminability, the bit rates decrease when using more than three classes. In three out of four subjects the peak ITR is obtained with three classes; only for subject aa is pairwise classification better. Here one further strategy is helpful: in addition to the variance, autoregressive parameters can be calculated on the projections
on the CSP patterns, filtered here at 7-30 Hz, and used for classification. In this case the pairwise classification errors are more balanced, such that we finally acquire an ITR of 0.76 bits per decision, whereas the best binary combination yields 0.6 bits per decision.

[Figure 3: In the scatter plot the ITRs [bits per decision] obtained by the baseline method IN are shown on the y-axis while those by SIM and OVR are on the x-axis. That means that for marks below the diagonal SIM resp. OVR outperforms IN. For an overall overview the upper plots show histograms of the differences in ITR between SIM/OVR and IN together with a Gaussian approximation of them. Here positive values correspond to good performances of SIM and OVR.]

[Figure 4: The figure on the left shows the ITR per trial for different numbers of classes with the best algorithm described above. The figure on the right visualizes the first pattern chosen by SIM for each class (left, right, foot) for aa.]
The benefit of using AR parameters for this subject arises because the discriminative information lies in different frequency bands. For the other subjects, similar gains could not be observed by using AR parameters.
Finally, the CSP algorithm offers a further feature, namely that the spatial patterns can be plotted as scalp topographies. In Figure 4 the first pattern chosen by algorithm SIM for each class is shown for subject aa. Evidently, this algorithm can reproduce neurophysiological prior knowledge about the location of ERD effects, because for each activated limb the appropriate region of the motor cortex is activated, e.g., a left (right) lateral site for the right (left) hand and an area closer to the central midline for the foot.
Psychological perspective. In principle, multi-class decisions can be derived from a decision space natural to human subjects. In a BCI context such a set of decisions will be performed most "intuitively", i.e., without a need for prolonged training, if the differential brain states are naturally related to a set of intended actions. This is the case, e.g., for movements of different body parts, which have a somatotopically ordered lay-out in the primary motor cortex, resulting in spatially discriminable patterns of EEG signals, such as readiness potentials or event-related desynchronizations specific for finger, elbow or shoulder movement intentions. In contrast, having to imagine a tune in order to move a cursor upwards vs. imagining a visual scene to induce a downward movement will produce spatially discriminable patterns of EEG signals related to either auditory or visual imagery, but its action-effect contingency would be counter-intuitive. While humans are able to adapt and to learn such complex tasks, this could take weeks of training before the tasks would be performed fast, reliably and "automatically". Another important aspect of multi-class settings is that using more classes, which the BCI device discriminates only at lower accuracy, is likely to confuse the user.
6 Concluding discussion
Current BCI research strives for enhanced information transfer rates. Several options are
available: (1) training of the BCI users, which can be somewhat tedious if up to 300 hours
of training would be necessary, (2) invasive BCI techniques, which we consider not applicable for healthy human test subjects, (3) improved machine learning and signal processing
methods where, e.g., new filtering, feature extraction and sophisticated classifiers are constantly tuned and improved3, (4) faster trial speeds, and finally (5) more classes among which the BCI user is choosing. This work analysed the theoretical and practical implications of using more than two classes, and psychological issues were also discussed briefly. In essence we found that a higher ITR is achieved with three classes; however, it seems unlikely that it can be increased by moving above four classes. This finding is confirmed in
EEG experiments. As a further, more algorithmic, contribution we suggested two modifications of the CSP method for the multi-class case. As a side remark: our multi-class CSP
algorithms also allow a significant speed-up in a real-time feedback experiment, as
filtering operations only need to be performed on very few CSP components (as opposed
to on all channels). Since this corresponds to an implicit dimensionality reduction, good
3 See the 1st and 2nd BCI competitions: http://ida.first.fraunhofer.de/~blanker/competition/
results can be also achieved with CSP using less patterns/trials.
Comparing the results of SIM, OVR and IN, we find that for most of the subjects SIM or OVR provides better results. Reassuringly, the algorithms SIM, OVR and IN allow us to extract scalp patterns for the classification that match well with neurophysiological textbook knowledge (cf. Figure 4). In this paper the beneficial role of a third class was confirmed by an offline analysis. Future studies will therefore target online experiments with more than two classes; first experimental results are promising. Another line of study will explore information from complementary neurophysiological effects in the spirit of [19], in combination with multi-class paradigms.
Finally, it would be useful to explore configurations with more than two classes which are more natural and also more user-friendly from the psychological perspective discussed above.
Acknowledgments. We thank S. Harmeling, M. Kawanabe, A. Ziehe, G. Rätsch, S. Mika, P. Laskov, D. Tax, M. Kirsch, C. Schäfer and T. Zander for helpful discussions. The studies were supported by BMBF grants FKZ 01IBB02A and FKZ 01IBB02B.
References
[1] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control", Clin. Neurophysiol., 113: 767-791, 2002.
[2] B. Blankertz, G. Dornhege, C. Schäfer, R. Krepki, J. Kohlmorgen, K.-R. Müller, V. Kunzmann, F. Losch, and G. Curio, "Boosting Bit Rates and Error Detection for the Classification of Fast-Paced Motor Commands Based on Single-Trial EEG Analysis", IEEE Trans. Neural Sys. Rehab. Eng., 11(2): 127-131, 2003.
[3] B. Blankertz, G. Curio, and K.-R. Müller, "Classifying Single Trial EEG: Towards Brain Computer Interfacing", in: T. G. Dietterich, S. Becker, and Z. Ghahramani, eds., Advances in Neural Inf. Proc. Systems (NIPS 01), vol. 14, 157-164, 2002.
[4] L. Trejo, K. Wheeler, C. Jorgensen, R. Rosipal, S. Clanton, B. Matthews, A. Hibbs, R. Matthews, and M. Krupka, "Multimodal Neuroelectric Interface Development", IEEE Trans. Neural Sys. Rehab. Eng., 2003, accepted.
[5] L. Parra, C. Alvino, A. C. Tang, B. A. Pearlmutter, N. Yeung, A. Osman, and P. Sajda, "Linear spatial integration for single trial detection in encephalography", NeuroImage, 2002, to appear.
[6] W. D. Penny, S. J. Roberts, E. A. Curran, and M. J. Stokes, "EEG-Based Communication: A Pattern Recognition Approach", IEEE Trans. Rehab. Eng., 8(2): 214-215, 2000.
[7] N. Birbaumer, N. Ghanayim, T. Hinterberger, I. Iversen, B. Kotchoubey, A. Kübler, J. Perelmouter, E. Taub, and H. Flor, "A spelling device for the paralysed", Nature, 398: 297-298, 1999.
[8] J. R. Wolpaw, D. J. McFarland, and T. M. Vaughan, "Brain-Computer Interface Research at the Wadsworth Center", IEEE Trans. Rehab. Eng., 8(2): 222-226, 2000.
[9] B. O. Peters, G. Pfurtscheller, and H. Flyvbjerg, "Automatic Differentiation of Multichannel EEG Signals", IEEE Trans. Biomed. Eng., 48(1): 111-116, 2001.
[10] H. Ramoser, J. Müller-Gerking, and G. Pfurtscheller, "Optimal spatial filtering of single trial EEG during imagined hand movement", IEEE Trans. Rehab. Eng., 8(4): 441-446, 2000.
[11] G. Dornhege, B. Blankertz, and G. Curio, "Speeding up classification of multi-channel Brain-Computer Interfaces: Common spatial patterns for slow cortical potentials", in: Proceedings of the 1st International IEEE EMBS Conference on Neural Engineering, Capri 2003, 591-594, 2003.
[12] J. Müller-Gerking, G. Pfurtscheller, and H. Flyvbjerg, "Designing optimal spatial filters for single-trial EEG classification in a movement task", Clin. Neurophysiol., 110: 787-798, 1999.
[13] B. Obermaier, C. Neuper, C. Guger, and G. Pfurtscheller, "Information Transfer Rate in a Five-Classes Brain-Computer Interface", IEEE Trans. Neural Sys. Rehab. Eng., 9(3): 283-288, 2001.
[14] J. R. Wolpaw, N. Birbaumer, W. J. Heetderks, D. J. McFarland, P. H. Peckham, G. Schalk, E. Donchin, L. A. Quatrano, C. J. Robinson, and T. M. Vaughan, "Brain-Computer Interface Technology: A review of the First International Meeting", IEEE Trans. Rehab. Eng., 8(2): 164-173, 2000.
[15] E. Allwein, R. Schapire, and Y. Singer, "Reducing multiclass to binary: A unifying approach for margin classifiers", Journal of Machine Learning Research, 1: 113-141, 2000.
[16] J.-F. Cardoso and A. Souloumiac, "Jacobi angles for simultaneous diagonalization", SIAM J. Mat. Anal. Appl., 17(1): 161 ff., 1996.
[17] D.-T. Pham, "Joint Approximate Diagonalization of Positive Definite Matrices", SIAM J. on Matrix Anal. and Appl., 22(4): 1136-1152, 2001.
[18] A. Ziehe, P. Laskov, K.-R. Müller, and G. Nolte, "A Linear Least-Squares Algorithm for Joint Diagonalization", in: Proc. 4th International Symposium on Independent Component Analysis and Blind Signal Separation (ICA2003), 469-474, Nara, Japan, 2003.
[19] G. Dornhege, B. Blankertz, G. Curio, and K.-R. Müller, "Combining Features for BCI", in: S. Becker, S. Thrun, and K. Obermayer, eds., Advances in Neural Inf. Proc. Systems (NIPS 02), vol. 15, MIT Press: Cambridge, MA, 2003.
| 2384 |@word blankertz1:1 mild:1 cu:2 trial:18 advantageous:1 seems:1 underline:1 nd:1 tedious:1 confirms:1 covariance:5 decomposition:1 eng:8 reduction:1 moment:1 configuration:5 contains:1 tuned:1 interestingly:1 franklin:1 outperforms:1 err:4 current:2 ida:2 comparing:1 analysed:1 activation:2 scatter:3 dx:1 subsequent:1 realistic:1 enables:1 motor:4 plot:6 v:3 discrimination:2 selected:2 device:4 sys:3 short:1 underestimating:1 filtered:2 regressive:1 boosting:1 contribute:1 location:1 simpler:1 five:1 become:2 differential:1 symposium:1 welldefined:1 consists:1 pathway:1 pairwise:16 indeed:1 expected:4 multi:20 brain:22 automatically:1 prolonged:1 kohlmorgen:1 str:2 window:1 increasing:3 somatotopically:1 ller1:1 project:1 elbow:1 ibb02b:1 lowest:2 argmin:1 minimizes:1 emerging:1 textbook:1 finding:3 transformation:1 differentiation:1 jorgensen:1 freie:1 dornhege:4 every:1 attenuation:1 voting:2 universit:1 classifier:8 hit:1 control:5 grant:1 omit:1 appear:1 comfortable:1 before:4 positive:2 engineering:1 krupka:1 ak:5 fluctuation:1 modulation:1 mika:1 diettrich:1 discriminability:1 quantified:1 appl:2 range:4 practical:2 acknowledgment:1 harmeling:1 testing:1 ternary:1 pericentral:1 implement:1 definite:1 wolpaw:3 ovr:17 wheeler:1 area:1 oot:1 thought:1 adapting:1 projection:6 osman:1 intention:4 induce:1 downsampled:1 suggest:1 get:3 ir2:1 selection:1 context:5 risk:2 vaughan:3 conventional:1 demonstrated:1 center:1 attention:1 starting:1 duration:1 his:1 blanker:2 coordinate:1 laplace:5 profitable:1 resp:4 target:2 imagine:2 enhanced:1 user:6 guido:1 curran:1 designing:1 trick:1 element:2 approximated:1 recognition:1 utilized:1 lay:1 observed:1 role:1 calculate:1 graz:1 region:1 decrease:4 movement:10 highest:6 valuable:1 thermore:1 voluntarily:1 benjamin:2 mentioned:2 balanced:1 counter:1 tsch:1 ration:1 ultimately:1 triangle:2 translated:1 neurophysiol:2 easily:1 multimodal:1 joint:2 finger:1 equilateral:2 sajda:1 fast:3 effective:3 neurophysics:1 klaus:2 choosing:1 whose:2 bci:26 alvino:1 topographic:1 bler:1 online:1 eigenvalue:8 evidently:1 took:1 product:1 fer:2 rehab:7 relevant:1 combining:1 p300:1 achieve:2 tax:1 kunzmann:1 intuitive:1 competition:2 guger:1 extending:1 produce:1 object:1 depending:2 develop:1 measured:1 eq:1 sim:24 strong:1 differ:1 foot:3 sensation:1 correct:2 filter:2 centered:1 human:8 translating:2 implementing:1 generalization:2 preliminary:1 parra:1 extension:9 bypassing:1 hold:2 practically:1 lying:1 pham:1 exp:3 algorithmic:1 week:1 matthew:2 omitted:2 purpose:1 polar:1 albany:1 proc:3 applicable:1 label:2 healthy:1 curio2:1 individually:1 successfully:1 tool:2 mit:1 interfacing:1 gaussian:6 aim:1 csp:40 rather:2 modified:1 allwein:1 command:1 eventrelated:1 derived:1 focus:3 polyhedron:1 check:1 contrast:1 attains:1 baseline:1 lateralized:1 detect:1 sense:1 bebel:1 helpful:2 dependent:1 unlikely:1 her:1 hidden:1 ical:1 irn:2 reproduce:1 germany:3 biomed:1 issue:1 overall:1 classification:33 among:1 development:1 spatial:8 integration:1 wadsworth:1 equal:6 once:1 extraction:2 functions1:1 having:1 future:1 stimulus:2 employ:1 few:2 midline:1 intended:2 amplifier:1 detection:3 interest:1 evaluation:2 light:1 activated:2 devoted:1 implication:2 beforehand:1 calc:2 fu:1 closer:2 paralysed:1 necessary:1 improbable:1 respective:1 rotating:1 plotted:1 causal:1 theoretical:6 psychological:3 increased:1 ar:7 kekul:1 subset:4 perelmouter:1 emg:1 discriminable:2 chooses:1 st:2 ibb02a:1 peak:1 gerking:2 international:3 siam:2 ghanayim:1 off:1 again:1 
central:2 recorded:3 imagery:1 opposed:3 choose:5 obermaier:1 positivity:1 hinterberger:1 corner:1 imagination:5 japan:1 potential:6 de:3 caused:1 depends:2 blind:1 performed:3 try:1 doing:2 hindenburgdamm:1 option:2 encephalography:1 contribution:1 ass:1 square:1 ni:1 accuracy:5 variance:6 characteristic:2 sitting:1 kotchoubey:1 worth:2 confirmed:2 clanton:1 visualizes:2 simultaneous:5 oscillatory:1 ed:2 against:1 sensorimotor:1 frequency:4 acquisition:1 invasive:2 involved:1 naturally:1 di:3 jacobi:1 gain:4 sampled:1 auditory:2 knowledge:2 car:5 dimensionality:2 subtle:1 sophisticated:2 nerve:1 appears:1 higher:2 fubini:1 originally:1 reflected:1 response:1 improved:2 erd:2 done:6 strongly:2 furthermore:2 rejected:1 implicit:1 hand:4 horizontal:1 readiness:2 lda:4 disabled:1 effect:7 consisted:1 regularization:1 assigned:2 spatially:2 game:1 self:1 during:3 essence:1 rhythm:4 criterion:1 m:1 electroencephalogram:1 pearlmutter:1 performs:1 interface:11 upwards:1 consideration:2 recently:1 fi:1 common:4 superior:1 discriminated:1 overview:3 birbaumer:3 imagined:1 extend:1 belong:1 discussed:2 significant:1 taub:1 cambridge:1 automatic:1 session:1 similarly:2 reliability:1 moving:1 cortex:3 operating:1 behaving:1 whitening:1 surface:1 perspective:3 conjectured:1 inf:2 driven:1 binary:12 meeting:1 muscle:2 minimum:1 relaxed:1 somewhat:1 paradigm:6 focussing:2 period:3 signal:16 paralyzed:1 ller:6 reduces:1 faster:1 characterized:1 calculation:2 af:5 cross:2 adapt:1 match:1 nara:1 controlled:1 calculates:1 patient:3 yeung:1 histogram:2 achieved:2 whereas:1 embs:1 modality:1 appropriately:1 macroscopic:1 rest:4 regional:1 sch:2 flor:1 subject:22 induced:1 hz:5 spirit:1 neuroelectric:1 misclassifications:1 nolte:1 fkz:2 reduce:1 itr:14 multiclass:1 whether:1 becker:2 effort:1 tactile:1 peter:1 bingen:1 hibbs:1 action:3 remark:1 gabriel:1 useful:5 generally:3 cardoso:1 tune:1 dark:1 band:4 visualized:2 multichannel:1 generate:1 http:1 schapire:1 percentage:1 exist:1 wisely:1 canonical:1 estimated:2 per:8 pace:1 mat:1 vol:2 group:1 four:5 threshold:1 navigated:1 imaging:1 sum:1 inverse:1 letter:2 angle:1 arrive:1 reasonable:2 separation:1 decision:13 scaling:1 kirsch:1 bit:9 bound:5 pay:1 laskov:2 paced:1 fold:1 activity:3 scalp:4 adapted:1 x2:1 scene:1 aspect:1 speed:3 ttd:1 min:1 chair:1 concluding:1 separable:1 according:6 peripheral:1 combination:8 belonging:1 describes:1 strives:1 beneficial:1 modification:3 leg:1 intuitively:2 visualization:1 singer:1 krepki:1 available:3 operation:1 permit:1 apply:1 limb:1 worthwhile:2 kawanabe:1 appropriate:2 spectral:1 differentiated:1 appearing:1 shortly:1 original:2 cf:8 log2:3 schalk:1 clin:2 iversen:1 unifying:1 concatenated:1 ghahramani:1 establish:1 objective:1 move:1 strategy:3 primary:1 rt:3 usual:1 diagonal:6 spelling:1 obermayer:1 klinikum:1 thank:1 mapped:1 berlin:4 simulated:2 lateral:1 thrun:1 discriminant:1 code:1 multiplicatively:1 ratio:3 acquire:2 regulation:1 unfortunately:1 robert:2 subproblems:1 anal:2 reliably:1 perform:2 upper:3 vertical:2 forearm:1 markov:1 displayed:1 extended:1 communication:4 shoulder:1 stokes:1 august:1 neuroprosthesis:1 namely:2 required:1 specified:1 potsdam:2 hour:1 nip:2 trans:8 robinson:1 able:3 suggested:3 mcfarland:3 usually:2 pattern:33 below:4 rosipal:1 max:1 shifting:1 suitable:7 event:3 misclassification:2 rely:1 regularized:1 natural:2 arm:1 blankertz:4 technology:2 eye:1 axis:4 fraunhofer:3 negativity:1 auto:1 extract:2 eog:1 speeding:1 dornhege1:1 prior:4 review:1 topography:1 interesting:2 
filtering:4 versus:4 validation:3 contingency:1 principle:1 tiny:1 classifying:1 translation:1 centimeter:1 supported:1 offline:3 side:3 allow:2 wide:1 fall:1 penny:1 distributed:2 feedback:3 calculated:5 cortical:4 dimension:1 souloumiac:1 autoregressive:1 sensory:1 author:1 adaptive:1 projected:4 approximate:2 obtains:1 observable:2 confirm:1 overfitting:1 conclude:3 assumed:1 consuming:1 discriminative:2 neurology:1 alternatively:1 search:1 flyvbjerg:2 additionally:4 promising:1 nature:1 channel:7 transfer:5 learn:2 reasonably:1 symmetry:2 eeg:14 excellent:1 cl:4 complex:1 ramoser:1 significance:1 main:1 motivation:1 whole:2 repeated:1 complementary:1 convey:1 gadget:1 body:1 site:1 ff:1 screen:2 slow:3 bmbf:1 pfurtscheller:5 neuroimage:1 position:1 timepoints:1 candidate:1 lie:1 third:1 learns:1 aar:1 tang:1 theorem:2 specific:2 showing:1 invasively:1 desynchronization:2 explored:1 favourable:1 exists:1 curio:5 donchin:1 diagonalization:7 trejo:1 downward:1 confuse:1 cursor:3 margin:1 explore:3 likely:1 neurophysiological:3 visual:4 ordered:1 aa:10 corresponds:1 constantly:1 extracted:1 ma:1 goal:1 consequently:1 losch:1 towards:1 feasible:1 hard:1 included:1 contrasted:1 reducing:1 called:1 pas:1 discriminate:1 experimental:1 accepted:1 shannon:1 neuper:1 ziehe:2 scp:2 mark:2 latter:1 preparation:2 dept:1 |
1,523 | 2,385 | Online Classification on a Budget
Koby Crammer
Computer Sci. & Eng.
Hebrew University
Jerusalem 91904, Israel
Jaz Kandola
Royal Holloway,
University of London
Egham, UK
Yoram Singer
Computer Sci. & Eng.
Hebrew University
Jerusalem 91904, Israel
[email protected]
[email protected]
[email protected]
Abstract
Online algorithms for classification often require vast amounts of memory and computation time when employed in conjunction with kernel
functions. In this paper we describe and analyze a simple approach for an
on-the-fly reduction of the number of past examples used for prediction.
Experiments performed with real datasets show that using the proposed
algorithmic approach with a single epoch is competitive with the support vector machine (SVM) although the latter, being a batch algorithm,
accesses each training example multiple times.
1 Introduction and Motivation
Kernel-based methods are widely being used for data modeling and prediction because of
their conceptual simplicity and outstanding performance on many real-world tasks. The
support vector machine (SVM) is a well known algorithm for finding kernel-based linear
classifiers with maximal margin [7]. The kernel trick can be used to provide an effective
method to deal with very high dimensional feature spaces as well as to model complex input phenomena via embedding into inner product spaces. However, despite generalization
error being upper bounded by a function of the margin of a linear classifier, it is notoriously
difficult to implement such classifiers efficiently. Empirically this often translates into very
long training times. A number of alternative algorithms exist for finding a maximal margin
hyperplane many of which have been inspired by Rosenblatt?s Perceptron algorithm [6]
which is an on-line learning algorithm for linear classifiers. The work on SVMs has inspired a number of modifications and enhancements to the original Perceptron algorithm.
These incorporate the notion of margin to the learning and prediction processes whilst exhibiting good empirical performance in practice. Examples of such algorithms include the
Relaxed Online Maximum Margin Algorithm (ROMMA) [4], the Approximate Maximal
Margin Classification Algorithm (ALMA) [2], and the Margin Infused Relaxed Algorithm
(MIRA) [1] which can be used in conjunction with kernel functions.
A notable limitation of kernel-based methods is their computational complexity, since the
amount of computer memory that they require to store the so-called support patterns grows
linearly with the number of prediction errors. A number of attempts have been made to speed
up the training and testing of SVM?s by enforcing a sparsity condition. In this paper we
devise an online algorithm that is not only sparse but also generalizes well. To achieve
this goal our algorithm employs an insertion and deletion process. Informally, it can be
thought of as revising the weight vector after each example on which a prediction mistake
has been made. Once such an event occurs the algorithm adds the new erroneous example
(the insertion phase), and then immediately searches for past examples that appear to be
redundant given the recent addition (the deletion phase). As we describe later, making this
adjustment to the algorithm allows us to modify the standard online proof techniques so as
to provide a bound on the total number of examples the algorithm keeps.
This paper is organized as follows. In Sec. 2 we formalize the problem setting and provide
a brief outline of our method for obtaining a sparse set of support patterns in an online
setting. In Sec. 3 we present both theoretical and algorithmic details of our approach and
provide a bound on the number of support patterns that constitute the cache. Sec. 4 provides
experimental details, evaluated on three real world datasets, to illustrate the performance
and merits of our sparse online algorithm. We end the paper with conclusions and ideas for
future work.
2
Problem Setting and Algorithms
This work focuses on online additive algorithms for classification tasks. In such problems
we are typically given a stream of instance-label pairs (x_1, y_1), . . . , (x_t, y_t), . . .. We assume
that each instance is a vector x_t ∈ R^n and each label belongs to a finite set Y. In this
and the next section we assume that Y = {−1, +1} but relax this assumption in Sec. 4
where we describe experiments with datasets consisting of more than two labels. When
dealing with the task of predicting new labels, thresholded linear classifiers of the form
h(x) = sign(w · x) are commonly employed. The vector w is typically represented as
a weighted linear combination of the examples, namely w = Σ_t α_t y_t x_t where α_t ≥ 0.
The instances for which α_t > 0 are referred to as support patterns. Under this assumption,
the output of the classifier solely depends on inner-products of the form x · x_t, so kernel
functions can easily be employed simply by replacing the standard scalar product
with a function K(·, ·) which satisfies Mercer conditions [7]. The resulting classification
rule takes the form h(x) = sign(w · x) = sign(Σ_t α_t y_t K(x, x_t)).
The majority of additive online algorithms for classification, for example the well known
Perceptron [6], share a common algorithmic structure. These online algorithms typically
work in rounds. On the t-th round, an online algorithm receives an instance x_t, computes
the inner-products s_t = Σ_{i<t} α_i y_i K(x_i, x_t) and sets the predicted label to be sign(s_t).
The algorithm then receives the correct label y_t and evaluates whether y_t s_t ≤ β_t. The exact
value of the parameter β_t depends on the specific algorithm being used for classification. If the
result of this test is negative, the algorithm does not modify w_t and thus α_t is implicitly
set to 0. Otherwise, the algorithm modifies its classifier using a predetermined update
rule. Informally we can consider this update to be decomposed into three stages. Firstly, the
algorithm chooses a non-negative value for α_t (again the exact choice of the parameter α_t is
algorithm dependent). Secondly, the prediction vector is replaced with a linear combination
of the current vector w_t and the example, w_{t+1} = w_t + α_t y_t x_t. In a third, optional stage
(see for example [4]), the norm of the newly updated weight vector is scaled, w_{t+1} ←
c_t w_{t+1} for some c_t > 0. The various online algorithms differ in the way the values of the
parameters α_t, β_t and c_t are set. A notable example of an online algorithm is the Perceptron
algorithm [6], for which we set β_t = 0, α_t = 1 and c_t = 1. More recent algorithms
such as the Relaxed Online Maximum Margin Algorithm (ROMMA) [4], the Approximate
Maximal Margin Classification Algorithm (ALMA) [2] and the Margin Infused Relaxed
Algorithm (MIRA) [1] can also be described in this framework although the constants
α_t, β_t and c_t are not as simple as the ones employed by the Perceptron algorithm.
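To make the shared round structure concrete, here is a minimal Python sketch (ours, not taken from any of the cited papers) of a generic additive online kernel classifier; the specific choices shown, a constant β_t = β and α_t = 1, correspond to an aggressive Perceptron, while ROMMA, ALMA and MIRA would plug in different rules:

    # Minimal sketch of the shared structure of additive online algorithms.
    # Algorithm-specific choices here: beta_t = beta (a constant) and
    # alpha_t = 1, i.e. an aggressive Perceptron.

    def linear_kernel(x, z):
        return sum(xi * zi for xi, zi in zip(x, z))

    def run_online(stream, kernel=linear_kernel, beta=0.0):
        support = []                              # (alpha_i, y_i, x_i) triples
        for x_t, y_t in stream:
            s_t = sum(a * y * kernel(x, x_t) for a, y, x in support)
            if y_t * s_t <= beta:                 # margin error: update
                support.append((1.0, y_t, x_t))   # alpha_t = 1
        return support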
An important computational consideration needs to be made when employing kernel functions for machine learning tasks. This is because the amount of memory required to
store the so-called support patterns grows linearly with the number of prediction errors.
Input: Tolerance β.
Initialize: Set ∀t α_t = 0, w_0 = 0, C_0 = ∅.
Loop: For t = 1, 2, . . . , T
• Get a new instance x_t ∈ R^n.
• Predict ŷ_t = sign(x_t · w_{t−1}).
• Get a new label y_t.
• If y_t (x_t · w_{t−1}) ≤ β update:
1. Insert C_t ← C_{t−1} ∪ {t}.
2. Set α_t = 1.
3. Compute w_t ← w_{t−1} + y_t α_t x_t.
4. DistillCache(C_t, w_t, (α_1, . . . , α_t)).
Output: H(x) = sign(w_T · x).
Figure 1: The aggressive Perceptron algorithm with a variable-size cache.
In this paper we shift the focus to the problem of devising online algorithms which are
budget-conscious as they attempt to keep the number of support patterns small. The
approach is attractive for at least two reasons. Firstly, both the training time and classification time can be reduced significantly if we store only a fraction of the potential
support patterns. Secondly, a classifier with a small number of support patterns is intuitively "simpler", and hence is likely to exhibit better generalization properties than a
complex classifier with a large number of support patterns. (See for instance [7]
for formal results connecting the number of support patterns to the generalization error.)
Input: C, w, (α_1, . . . , α_t).
Loop:
• Choose i ∈ C such that β ≤ y_i ((w − α_i y_i x_i) · x_i).
• If no such i exists then return.
• Remove the example i:
1. w ← w − α_i y_i x_i.
2. α_i = 0.
3. C ← C \ {i}.
Return: C, w, (α_1, . . . , α_t).
Figure 2: DistillCache
In Sec. 3 we present a formal analysis and the algorithmic details of our approach.
Let us now provide a general overview of how to restrict the number of support
patterns in an online setting. Denote by C_t the indices of patterns which constitute
the classification vector w_t. That is, i ∈ C_t if and only if α_i > 0 on round
t when x_t is received. The online classification algorithms discussed above keep
enlarging C_t: once an example is added to C_t it will never be deleted. However,
as the online algorithm receives more examples, the performance of the classifier
improves, and some of the past examples may have become redundant and hence
can be removed. Put another way, old examples may have been inserted into the cache simply due to the lack of support patterns in early rounds. As more examples are observed,
old examples may be replaced with new examples whose location is closer to the decision
boundary induced by the online classifier. We thus add a new stage to the online algorithm
in which we discard a few old examples from the cache C_t. We suggest a modification of
the online algorithm structure as follows. Whenever y_t Σ_{i<t} α_i y_i K(x_t, x_i) ≤ β_t, then
after adding x_t to w_t and inserting t into C_t, we scan the cache C_t for seemingly
redundant examples by examining the margin conditions of old examples in C_t. If such
an example is found, we discard it from both the classifier and the cache by updating
w_t ← w_t − α_i y_i x_i and setting C_t ← C_t \ {i}. The pseudocode for this "budget-conscious"
version of the aggressive Perceptron algorithm [3] is given in Fig. 1. We say that the
algorithm employs a variable-size cache since we do not explicitly limit the number of support
patterns, though we do attempt to discard as many patterns as possible from the cache. A
similar modification to that described for the aggressive Perceptron can be made to all of the
online classification algorithms outlined above. In particular, we use a modification of the
MIRA [1] algorithm in our experiments.
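As an illustration, the following is a minimal linear-kernel Python sketch of the procedure of Figs. 1 and 2; it is our own simplified rendering for exposition (a kernelized version would keep the coefficients α_i instead of an explicit w), not a tuned implementation:

    import numpy as np

    def distill_cache(cache, w, alpha, X, y, beta):
        # Fig. 2: repeatedly remove any cached example i that still attains
        # margin >= beta after subtracting its own contribution from w.
        removed = True
        while removed:
            removed = False
            for i in list(cache):
                if y[i] * np.dot(w - alpha[i] * y[i] * X[i], X[i]) >= beta:
                    w = w - alpha[i] * y[i] * X[i]
                    alpha[i] = 0.0
                    cache.discard(i)
                    removed = True
        return w

    def budget_perceptron(X, y, beta=0.0):
        # Fig. 1: aggressive Perceptron with a variable-size cache.
        n, d = X.shape
        w, alpha, cache = np.zeros(d), np.zeros(n), set()
        for t in range(n):
            if y[t] * np.dot(X[t], w) <= beta:     # margin error: insert t
                cache.add(t)
                alpha[t] = 1.0
                w = w + y[t] * alpha[t] * X[t]
                w = distill_cache(cache, w, alpha, X, y, beta)
        return w, cache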
3 Analysis
In this section we provide our main formal result for the algorithm described in the previous
section. Informally, the theorem below states that the actual size of the cache that the algorithm builds is inversely proportional to the square of the best margin that can be achieved
on the data. This form of bound is common to numerous online learning algorithms for
classification. However, here the bound is on the size of the cache whereas in common settings the corresponding bounds are on the number of prediction mistakes. The bound also
depends on β, the margin used by the algorithm to check whether a new example should be
added to the cache and to discard old examples attaining a large margin. Clearly, the larger
the value of β the more often we add examples to the cache.
Theorem 1 Let (x_1, y_1), . . . , (x_T, y_T) be an input sequence for the algorithm given in
Fig. 1, where x_t ∈ R^n and y_t ∈ {−1, +1}. Denote by R = max_t ‖x_t‖. Assume that there
exists a vector u of unit norm (‖u‖ = 1) which classifies the entire sequence correctly with
a margin γ = min_t y_t (u · x_t) > 0. Then the number of support patterns constituting the
cache is at most S ≤ (R^2 + 2β)/γ^2.
Proof: The proof of the theorem is based on the mistake bound of the Perceptron algorithm [5]. To prove the theorem we bound ‖w_T‖^2 from above and below and compare the
bounds. Denote by α_i^t the weight of the i-th example at the end of round t (after stage 4 of
the algorithm). Similarly, we denote by α̃_i^t the weight of the i-th example on round
t after stage 3, before calling the DistillCache (Fig. 2) procedure. We analogously
denote by w_t and w̃_t the corresponding instantaneous classifiers. First, we derive a lower
bound on ‖w_T‖ by bounding the term w_T · u from below in a recursive manner.
w_T · u = Σ_{t∈C_T} α_t^T y_t (x_t · u) ≥ γ Σ_{t∈C_T} α_t^T = γ S .    (1)
We now turn to upper bound ‖w_T‖^2. Recall that each example may be added to the cache
and removed from the cache a single time. Let us write ‖w_T‖^2 as a telescopic sum,
‖w_T‖^2 = (‖w_T‖^2 − ‖w̃_T‖^2) + (‖w̃_T‖^2 − ‖w_{T−1}‖^2) + . . . + (‖w̃_1‖^2 − ‖w_0‖^2) .    (2)
We now consider three different scenarios that may occur for each new example. The
first case is when we did not insert the t-th example into the cache at all. In this case,
(‖w̃_t‖^2 − ‖w_{t−1}‖^2) = 0. The second scenario is when an example is inserted into the
cache and is never discarded in future rounds, thus,
‖w̃_t‖^2 = ‖w_{t−1} + y_t x_t‖^2 = ‖w_{t−1}‖^2 + 2 y_t (w_{t−1} · x_t) + ‖x_t‖^2 .
Since we inserted (x_t, y_t), the condition y_t (w_{t−1} · x_t) ≤ β must hold. Combining this
with the assumption that the examples are enclosed in a ball of radius R we get, (‖w̃_t‖^2 −
‖w_{t−1}‖^2) ≤ 2β + R^2. The last scenario occurs when an example is inserted into the cache
on some round t, and is then later on removed from the cache on round t + p for p > 0. As
in the previous case we can bound the value of the summands in Equ. (2) as follows:
Input: Tolerance β, Cache Limit n.
Initialize: Set ∀t α_t = 0, w_0 = 0, C_0 = ∅.
Loop: For t = 1, 2, . . . , T
• Get a new instance x_t ∈ R^n.
• Predict ŷ_t = sign(x_t · w_{t−1}).
• Get a new label y_t.
• If y_t (x_t · w_{t−1}) ≤ β update:
1. If |C_t| = n remove one example:
(a) Find i = arg max_{j∈C_t} { y_j ((w_{t−1} − α_j y_j x_j) · x_j) }.
(b) Update w_{t−1} ← w_{t−1} − α_i y_i x_i.
(c) Remove C_{t−1} ← C_{t−1} \ {i}.
2. Insert C_t ← C_{t−1} ∪ {t}.
3. Set α_t = 1.
4. Compute w_t ← w_{t−1} + y_t α_t x_t.
Output: H(x) = sign(w_T · x).
Figure 3: The aggressive Perceptron algorithm with a fixed-size cache.

(‖w̃_t‖^2 − ‖w_{t−1}‖^2) + (‖w_{t+p}‖^2 − ‖w̃_{t+p}‖^2)
= 2 y_t (w_{t−1} · x_t) + ‖x_t‖^2 − 2 y_t (w̃_{t+p} · x_t) + ‖x_t‖^2
= 2 [ y_t (w_{t−1} · x_t) − y_t ((w̃_{t+p} − y_t x_t) · x_t) ]
≤ 2 [ β − y_t ((w̃_{t+p} − y_t x_t) · x_t) ] .
Based on the form of the cache update we know that y_t ((w̃_{t+p} − y_t x_t) · x_t) ≥ β, and
thus,
(‖w̃_t‖^2 − ‖w_{t−1}‖^2) + (‖w_{t+p}‖^2 − ‖w̃_{t+p}‖^2) ≤ 0 .
Summarizing all three cases, we see that only the examples which persist in the cache
contribute a factor of R^2 + 2β each to the bound of the telescopic sum of Equ. (2), and
the rest of the examples do not contribute anything to the bound. Hence, we can bound the
norm of w_T as follows,
‖w_T‖^2 ≤ S (R^2 + 2β) .    (3)
We finish up the proof by applying the Cauchy-Schwarz inequality and the assumption
‖u‖ = 1. Combining Equ. (1) and Equ. (3) we get,
γ^2 S^2 ≤ (w_T · u)^2 ≤ ‖w_T‖^2 ‖u‖^2 ≤ S (2β + R^2) ,
which gives the desired bound.
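As a quick sanity check of the theorem, one can compare the final cache size with the bound on synthetic separable data (our own experiment, reusing the budget_perceptron sketch from Sec. 2; the data-generation choices are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    beta = 0.0
    u = np.array([1.0, 0.0, 0.0, 0.0, 0.0])        # unit-norm separator
    X = rng.uniform(-1, 1, size=(2000, 5))
    X = X[np.abs(X @ u) > 0.2]                     # enforce margin gamma >= 0.2
    y = np.sign(X @ u)

    w, cache = budget_perceptron(X, y, beta)
    R = np.max(np.linalg.norm(X, axis=1))
    gamma = np.min(y * (X @ u))
    print(len(cache), "<=", (R**2 + 2 * beta) / gamma**2)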
4 Experiments
In this section we describe the experimental methods that were used to compare the performance of standard online algorithms with the new algorithm described above. We also
briefly describe another variant that sets a hard limit on the number of support patterns.
The experiments were designed with the aim of answering the following questions.
First, what is the effect of the number of support patterns on the generalization error (measured in terms of classification accuracy on unseen data)? Second, can the algorithm
described in Fig. 2 find a cache size that achieves the best
generalization performance? To examine each question separately we used a modified version of the algorithm described by Fig. 2 in which we restricted ourselves to a fixed,
bounded cache.
Name     No. of Training Examples   No. of Test Examples   No. of Classes   No. of Attributes
mnist    60000                      10000                  10               784
letter   16000                      4000                   26               16
usps     7291                       2007                   10               256
Table 1: Description of the datasets used in experiments.
This modified algorithm (which we refer to as the fixed budget Perceptron) simulates the original Perceptron algorithm with one notable difference. When the number of support patterns exceeds a pre-determined limit, it chooses a support pattern from
the cache and discards it. With this modification the number of support patterns can never
exceed the pre-determined limit. This modified algorithm is described in Fig. 3. The algorithm deletes the example which seemingly attains the highest margin after the removal of
the example itself (line 1(a) in Fig. 3).
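In code, the eviction rule of line 1(a) could look like the sketch below, reusing the arrays of the earlier variable-size sketch (our illustration, not the authors' code):

    import numpy as np

    def evict_one(cache, w, alpha, X, y):
        # Fig. 3, line 1(a): evict the cached example with the largest margin
        # once its own contribution is removed from w.
        i = max(cache,
                key=lambda j: y[j] * np.dot(w - alpha[j] * y[j] * X[j], X[j]))
        w = w - alpha[i] * y[i] * X[i]
        alpha[i] = 0.0
        cache.discard(i)
        return w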
Despite the simplicity of the original Perceptron algorithm [6], its good generalization performance on many datasets is remarkable. During the last few years a number of other additive online algorithms have been developed [4, 2, 1] that have shown better performance on
a number of tasks. In this paper, we have preferred to embed these ideas into another online
algorithm and start with a higher baseline performance. We have chosen to use the Margin
Infused Relaxed Algorithm (MIRA) as our baseline algorithm since it has exhibited good
generalization performance in previous experiments [1] and has the additional advantage
that it is designed to solve the multiclass classification problem directly without any recourse
to performing reductions.
The algorithms were evaluated on three natural datasets: mnist1 , usps2 and letter3 .
The characteristics of these datasets have been summarized in Table 1. A comprehensive
overview of the performance of various algorithms on these datasets can be found in a
recent paper [2]. Since all of the algorithms that we have evaluated are online, it is not
implausible for the specific ordering of examples to affect the generalization performance.
We thus report results averaged over 11 random permutations for usps and letter and
over 5 random permutations for mnist. No free parameter optimization was carried out
and instead we simply used the values reported in [1]. More specifically, the margin parameter was set to β = 0.01 for all algorithms and for all datasets. A homogeneous polynomial
kernel of degree 9 was used when training on the mnist and usps data sets, and an RBF
kernel for the letter data set. (The variance of the RBF kernel was identical to the one used
in [1].)
We evaluated the performance of four algorithms in total. The first algorithm was the
standard MIRA online algorithm, which does not incorporate any budget constraints. The
second algorithm is the version of MIRA described in Fig. 3 which uses a fixed limited
budget. Here we enumerated the cache size limit in each experiment we performed. The
different sizes that we tested are dataset dependent but for each dataset we evaluated at
least 10 different sizes. We would like to note that such an enumeration cannot be done in
an online fashion, and the goal of employing the algorithm with a fixed-size cache is to
underscore the merit of the truly adaptive algorithm. The third algorithm is the version of
MIRA described in Fig. 2 that adapts the cache size during the run of the algorithm.
We also report additional results for a multiclass version of the SVM [1]. Whilst this
algorithm is not online and during the training process it considers all the examples at once,
this algorithm serves as our gold-standard algorithm against which we want to compare
1 Available from http://www.research.att.com/~yann
2 Available from ftp.kyb.tuebingen.mpg.de
3 Available from http://www.ics.uci.edu/~mlearn/MLRepository.html
Figure 4: Results on three data sets - mnist (left), usps (center) and letter (right). Each
point in a plot designates the test error (y-axis) vs. the number of support patterns used
(x-axis). Four algorithms are compared - SVM, MIRA, MIRA with a fixed cache size and
MIRA with a variable cache size.
performance. Note that for the multiclass SVM we report the results using the best set of
parameters, which does not coincide with the set of parameters used for the online training.
The results are summarized in Fig 4. This figure is composed of three different plots organized in columns. Each of these plots corresponds to a different dataset - mnist (left),
usps (center) and letter (right). In each of the three plots the x-axis designates number of
support patterns the algorithm uses. The results for the fixed-size cache are connected with
a line to emphasize the performance dependency on the size of the cache.
The top row of the three columns shows the generalization error. Thus the y-axis designates
the test error of an algorithm on unseen data at the end of the training. Looking at the error
of the algorithm with a fixed-size cache reveals that there is a broad range of cache sizes
where the algorithm exhibits good performance. In fact, for MNIST and USPS there are
sizes for which the test error of the algorithm is better than SVM's test error. Naturally, we
cannot fix the correct size in hindsight so the question is whether the algorithm with variable
cache size is a viable automatic size-selection method. Analyzing each of the datasets in
turn reveals that this is indeed the case: the algorithm obtains a very similar number
of support patterns and test error when compared to the SVM method. The results are
somewhat less impressive for the letter dataset, which contains fewer examples per class. One
possible explanation is that the algorithm had fewer chances to modify and distill the cache.
Nonetheless, overall the results are remarkable given that all the online algorithms make a
single pass through the data, and the variable-size method finds a very good cache size while
remaining comparable to the SVM in terms of performance. The MIRA algorithm,
which does not incorporate any form of example insertion or deletion in its algorithmic
structure, obtains the poorest level of performance not only in terms of generalization error
but also in terms of number of support patterns.
The plot of online training error against the number of support patterns, in row 2 of Fig 4,
can be considered to be a good on-the-fly validation of generalization performance. As the
plots indicate, for the fixed and adaptive versions of the algorithm, on all the datasets, a
low online training error translates into good generalization performance. Comparing the
test error plots with the online error plots we see a nice similarity between the qualitative
behavior of the two errors. Hence, one can use the online error, which is easy to evaluate,
to choose a good cache size for the fixed-size algorithm.
The third row gives the online training margin errors, which translate directly into the number
of insertions into the cache. Here we see that the good test error and compactness of the
algorithm with a variable cache size come at a price. Namely, the algorithm makes
significantly more insertions into the cache than the fixed-size version of the algorithm.
However, as the upper two sets of plots indicate, the surplus in insertions is later taken care
of by excess deletions and the end result is very good overall performance. In summary, the
online algorithm with a variable cache and SVM obtains similar levels of generalization and
also number of support patterns. While the SVM is still somewhat better in both aspects
for the letter dataset, the online algorithm is much simpler to implement and performs a
single sweep through the training data.
5 Summary
We have described and analyzed a new sparse online algorithm that attempts to deal with
the computational problems implicit in classification algorithms such as the SVM. The
proposed method was empirically tested, and its performance, both in the size of the resulting
classifier and in its error rate, is comparable to the SVM. There are a few possible extensions and
enhancements. We are currently looking at alternative criteria for the deletion of examples
from the cache. For instance, the weight of examples might relay information about their
importance for accurate classification. Incorporating prior knowledge into the insertion and
deletion scheme might also prove important. We hope that such enhancements would make
deletion scheme might also prove important. We hope that such enhancements would make
the proposed approach a viable alternative to SVM and other batch algorithms.
Acknowledgements: The authors would like to thank John Shawe-Taylor for many helpful
comments and discussions. This research was partially funded by the EU project KerMIT
No. IST-2000-25341.
References
[1] K. Crammer and Y. Singer. Ultraconservative online algorithms for multiclass problems. Journal
of Machine Learning Research, 3:951-991, 2003.
[2] C. Gentile. A new approximate maximal margin classification algorithm. Journal of Machine
Learning Research, 2:213-242, 2001.
[3] W. Krauth and M. Mézard. Learning algorithms with optimal stability in neural networks. Journal of
Physics A, 20:745, 1987.
[4] Y. Li and P. M. Long. The relaxed online maximum margin algorithm. Machine Learning,
46(1-3):361-387, 2002.
[5] A. B. J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the Symposium on
the Mathematical Theory of Automata, volume XII, pages 615-622, 1962.
[6] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in
the brain. Psychological Review, 65:386-407, 1958. (Reprinted in Neurocomputing (MIT Press,
1988).)
[7] V. N. Vapnik. Statistical Learning Theory. Wiley, 1998.
1,524 | 2,386 | Using the Forest to See the Trees: A Graphical
Model Relating Features, Objects, and Scenes
Kevin Murphy
MIT AI lab
Cambridge, MA 02139
[email protected]
Antonio Torralba
MIT AI lab
Cambridge, MA 02139
[email protected]
William T. Freeman
MIT AI lab
Cambridge, MA 02139
[email protected]
Abstract
Standard approaches to object detection focus on local patches of the
image, and try to classify them as background or not. We propose to
use the scene context (image as a whole) as an extra source of (global)
information, to help resolve local ambiguities. We present a conditional
random field for jointly solving the tasks of object detection and scene
classification.
1 Introduction
Standard approaches to object detection (e.g., [24, 15]) usually look at local pieces of the
image in isolation when deciding if the object is present or not at a particular location/
scale. However, this approach may fail if the image is of low quality (e.g., [23]), or the
object is too small, or the object is partly occluded, etc. In this paper we propose to use the
image as a whole as an extra global feature, to help overcome local ambiguities.
There is some psychological evidence that people perform rapid global scene analysis before conducting more detailed local object analysis [4, 2]. The key computational question
is how to represent the whole image in a compact, yet informative, form. [21] suggests
a representation, the "gist" of the image, based on PCA of a set of spatially averaged filter-bank outputs. The gist acts as a holistic, low-dimensional representation of the
whole image. They show that this is sufficient to provide a useful prior for what types of
objects may appear in the image, and at which locations/scale.
We extend [21] by combining the prior suggested by the gist with the outputs of bottom-up,
local object detectors, which are trained using boosting (see Section 2). Note that this is
quite different from approaches that use joint spatial constraints between the locations of
objects, such as [11, 20, 19, 8]. In our case, the spatial constraints come from the image as
a whole, not from other objects. This is computationally much simpler.
Another task of interest is detecting if the object is present anywhere in the image, regardless of location. (This can be useful for object-based image retrieval.) In principle, this
is straightforward: we declare the object is present iff the detector fires (at least once) at
any location/scale. However, this means that a single false positive at the patch level can
cause a 100% error rate at the image level. As we will see in Section 4, even very good
detectors can perform poorly at this task. The gist, however, is able to perform quite well at
suggesting the presence of types of objects, without using a detector at all. In fact, we can
use the gist to decide if it is even ?worth? running a detector, although we do not explore
this here.
Often, the presence of certain types of objects is correlated, e.g., if you see a keyboard,
you expect to see a screen. Rather than model this correlation directly, we introduce a
hidden common cause/factor, which we call the "scene". In Section 5, we show how
we can reliably determine the type of scene (e.g., office, corridor or street) using the
gist. Scenes can also be defined in terms of the objects which are present in the image. Hence we combine the tasks of scene classification and object-presence detection
using a tree-structured graphical model: see Section 6. We perform top-down inference
(scenes to objects) and bottom-up inference (objects to scenes) in this model. Finally,
we conclude in Section 7. (Note: there is a longer, online version of this paper available at www.ai.mit.edu/~murphyk/Papers/nips2003_long.pdf, which has
more details and experimental results than could fit into 8 pages.)
2 Object detection and localization
For object detection there are at least three families of approaches: parts-based (an object is
defined as a specific spatial arrangement of small parts e.g., [6]), patch-based (we classify
each rectangular image region as object or background), and region-based (a region of the
image is segmented from the background and is described by a set of features that provide
texture and shape information e.g., [5]).
Here we use a patch-based approach. For objects with rigid, well-defined shapes (screens,
keyboards, people, cars), a patch usually contains the full object and a small portion of
the background. For the rest of the objects (desks, bookshelves, buildings), rectangular
patches may contain only a piece of the object. In that case, the region covered by a
number of patches defines the object. In such a case, the object detector will rely mostly
on the textural properties of the patch.
The main advantage of the patch-based approach is that object detection can be reduced to
a binary classification problem. Specifically, we compute P(O_i^c = 1 | v_i^c) for each class c
and patch i (ranging over location and scale), where O_i^c = 1 if patch i contains (part of) an
instance of class c, and O_i^c = 0 otherwise; v_i^c is the feature vector (to be described below)
for patch i computed for class c.
To detect an object, we slide our detector across the image pyramid and classify all the
patches at each location and scale (20% increments of size and every other pixel in location). After performing non-maximal suppression [1], we report as detections all locations
for which P(O_i^c | v_i^c) is above a threshold, chosen to give a desired trade-off between false
positives and missed detections.
2.1 Features for objects and scenes
We would like to use the same set of features for detecting a variety of object types, as
well as for classifying scenes. Hence we will create a large set of features and use a feature
selection algorithm (Section 2.2) to select the most discriminative subset.
We compute a single feature k for image patch i in three steps, as follows. First we convolve
the (monochrome) patch I_i(x) with a filter g_k(x), chosen from the set of 13 (zero-mean)
filters shown in Figure 1(a). This set includes oriented edges, a Laplacian filter, corner
detectors and long edge detectors. These features can be computed efficiently: the filters
used can be obtained by convolution of 1D filters (for instance, the long edge filters are
obtained by the convolution of the two filters [−1 0 1]^T and [1 1 1 1 1 1]) or as linear
combinations of the other filter outputs (e.g., the first six filters are steerable).
We can summarize the response of the patch convolved with the filter, |I_i(x) ∗ g_k(x)|, using
a histogram. For natural images, we can further summarize this histogram using just two
statistics, the variance and the kurtosis [7]. Hence in step two, we compute |I_i(x) ∗ g_k(x)|^{γ_k},
for γ_k ∈ {2, 4}. (The kurtosis is useful for characterizing texture-like regions.)
Often we are only interested in the response of the filter within a certain region of the patch.
Hence we can apply one of 30 different spatial templates, which are shown in Figure 1(b).
The use of a spatial template provides a crude encoding of "shape" inside the rectangular
patch. We use rectangular masks because we can efficiently compute the average response
of a filter within each region using the integral image [24].1
Summarizing, we can compute feature k for patch i as follows:
f_i(k) = Σ_x w_k(x) (|I(x) ∗ g_k(x)|^{γ_k})_i .
(To achieve some illumination invariance, we also standardize each feature vector on a per-patch basis.) The feature vector has size 13 × 30 × 2 =
780 (the factor of 2 arises because we consider γ_k = 2 or 4).
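A minimal sketch of this feature computation (our own illustration; `filters` and `masks` stand for the dictionaries of Figure 1, and we use a direct convolution rather than the integral-image speed-up):

    import numpy as np
    from scipy.signal import convolve2d

    def patch_features(patch, filters, masks, gammas=(2, 4)):
        # filters: 13 zero-mean 2-D kernels (Fig. 1a)
        # masks:   30 binary spatial templates, same shape as patch (Fig. 1b)
        # Returns the 13 * 2 * 30 = 780 features f_i(k) for one patch.
        feats = []
        for g in filters:
            response = np.abs(convolve2d(patch, g, mode='same'))
            for gamma in gammas:
                powered = response ** gamma
                for w in masks:
                    feats.append(np.sum(w * powered))
        v = np.array(feats)
        return (v - v.mean()) / (v.std() + 1e-8)   # per-patch standardization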
Figure 2 shows some of the features selected by the learning algorithm for different kinds
of objects. For example, we see that computer monitor screens are characterized by long
horizontal or vertical lines on the edges of the patch, whereas buildings, seen from the
outside, are characterized by cross-like texture, due to the repetitive pattern of windows.
(a) Dictionary of 13 filters, g(x).
(b) Dictionary of 30 spatial templates, w(x).
Figure 1: (a) Dictionary of filters. Filter 1 is a delta function, 2-7 are 3x3 Gaussian derivatives, 8 is
a 3x3 Laplacian, 9 is a 5x5 corner detector, 10-13 are long edge detectors (of size 3x5, 3x7, 5x3 and
7x3). (b) Dictionary of 30 spatial templates. Template 1 is the whole patch, 2-7 are all sub-patches
of size 1/2, 8-30 are all sub-patches of size 1/3.
Figure 2: Some of the features chosen after 100 rounds of boosting for recognizing screens, pedestrians and buildings. Features are sorted in order of decreasing weight, which is a rough indication of
importance. "Energy" means γ_k = 2 and "Kurt" (kurtosis) means γ_k = 4.
1
The Viola and Jones [24] feature set is equivalent to using these masks plus a delta function filter;
the result is like a Haar wavelet basis. This has the advantage that objects of any size can be detected
without needing an image pyramid, making the system very fast. By contrast, since our filters have
fixed spatial support, we need to down-sample the image to detect large objects.
2.2 Classifier
Following [24], our detectors are based on a classifier trained using boosting. There are
many variants of boosting [10, 9, 17], which differ in the loss function they are trying to
optimize, and in the gradient directions which they follow. We, and others [14], have found
that GentleBoost [10] gives higher performance than AdaBoost [17], and requires fewer
iterations to train, so this is the version we shall (briefly) present below.
The boosting procedure learns a (possibly weighted) combination of base classifiers, or
"weak learners": H(v) = Σ_t α_t h_t(v), where v is the feature vector of the patch, h_t is the
base classifier used at round t, and α_t is its corresponding weight. (GentleBoost, unlike
AdaBoost, does not weight the outputs of the weak learners, so α_t = 1.) For the weak
classifiers we use regression stumps of the form h(v) = a[v_f > θ] + b, where [v_f > θ] = 1
iff component f of the feature vector v is above threshold θ. For most of the objects we
used about 100 rounds of boosting. (We use a hold-out set to monitor overfitting.) See
Figure 2 for some examples of the selected features.
The output of a boosted classifier is a "confidence-rated prediction", H. We convert this to
a probability using logistic regression: P(O_i^c = 1 | H(v_i^c)) = σ(w^T [1 H]), where σ(x) =
1/(1 + exp(−x)) is the sigmoid function [16]. We can then change the hit rate/false alarm
rate of the detector by varying the threshold on P(O = 1 | H).
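The following is a compact sketch of GentleBoost with regression stumps in the spirit of [10]; the exhaustive threshold search over observed feature values is our implementation choice, not a detail taken from the paper:

    import numpy as np

    def fit_stump(X, y, w):
        # Weighted least-squares regression stump h(v) = a*[v_f > theta] + b.
        best, best_err = None, np.inf
        for f in range(X.shape[1]):
            for theta in np.unique(X[:, f])[:-1]:     # both sides non-empty
                idx = X[:, f] > theta
                b = np.average(y[~idx], weights=w[~idx])
                ab = np.average(y[idx], weights=w[idx])
                err = np.sum(w * (y - np.where(idx, ab, b)) ** 2)
                if err < best_err:
                    best, best_err = (f, theta, ab - b, b), err
        return best

    def gentleboost(X, y, rounds=100):
        # y in {-1, +1}. Returns the stump ensemble defining H(v).
        w = np.ones(len(y)) / len(y)
        ensemble = []
        for _ in range(rounds):
            f, theta, a, b = fit_stump(X, y, w)
            h = a * (X[:, f] > theta) + b
            w *= np.exp(-y * h)                       # GentleBoost reweighting
            w /= w.sum()
            ensemble.append((f, theta, a, b))
        return ensemble

    def H(v, ensemble):
        return sum(a * (v[f] > theta) + b for f, theta, a, b in ensemble)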
Figure 3 summarizes the performances of the detectors for a set of objects on isolated
patches (not whole images) taken from the test set. The results vary in quality since some
objects are harder to recognize than others, and because some objects have less training
data. When we trained and tested our detector on the training/testing sets of side-views of
cars from UIUC2 , we outperformed the detector of [1] at every point on the precision-recall
curve (results not shown), suggesting that our base-line detectors can match state-of-the-art
detectors when given enough training data.
[Figure 3a plot: detection rate vs. false alarms (from 2000 distractors) for nine detectors: screen, bookshelf, pedestrian, desk, streetlight, building, coffee machine, car, and steps.]
Figure 3: a) ROC curves for 9 objects; we plot hit rate vs number of false alarms, when the detectors
are run on isolated test patches. b) Example of the detector output on one of the test set images,
before non-maximal suppression. c) Example of the detector output on a line drawing of a typical
office scene. The system correctly detects the screen, the desk and the bookshelf.
3 Improving object localization by using the gist
One way to improve the speed and accuracy of a detector is to reduce the search space, by
only running the detector at locations/scales where we expect to find the object. The expected
location/scale can be computed on a per-image basis using the gist, as we explain below.
(Thus our approach is more sophisticated than having a fixed prior, such as "keyboards
always occur in the bottom half of an image".)
If we only run our detectors in a predicted region, we risk missing objects. Instead, we
run our detectors everywhere, but we penalize detections that are far from the predicted
2 http://l2r.cs.uiuc.edu/~cogcomp/Data/Car/
location/scale. Thus objects in unusual locations have to be particularly salient (strong
local detection score) in order to be detected, which accords with psychophysical results of
human observers.
We define the gist as a feature vector summarizing the whole image, and denote it by vG .
One way to compute this is to treat the whole image as a single patch, and to compute
a feature vector for it as described in Section 2.1. If we use 4 image scales and 7 spatial
masks, the gist will have size 13 × 7 × 2 × 4 = 728. Even this is too large for some methods,
so we consider another variant that reduces dimensionality further by using PCA on the
gist-minus-kurtosis vectors. Following [22, 21], we take the first 80 principal components;
we call this the PCA-gist.
We can predict the expected location/scale of objects of class c given the gist, E[X^c | v_G],
by using a regression procedure. We have tried linear regression, boosted regression [9],
and cluster-weighted regression [21]; all approaches work about equally well.
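For instance, the linear variant could be implemented as ridge regression from the PCA-gist to the object's expected location/scale (a hypothetical sketch, not the paper's exact procedure; the regularizer `lam` is an assumed free parameter):

    import numpy as np

    def fit_location_regressor(G, Xc, lam=1e-3):
        # G:  n x 80 matrix of PCA-gist vectors (one per training image)
        # Xc: n x 2 matrix of (height, scale) of class-c objects in those images
        A = np.hstack([G, np.ones((len(G), 1))])              # bias column
        W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ Xc)
        return W

    def predict_location(g, W):
        return np.append(g, 1.0) @ W                          # E[X^c | v_G]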
Using the gist it is easy to distinguish long-distance from close-up shots (since the overall
structure of the image looks quite different), and hence we might predict that the object is
small or large respectively. We can also predict the expected height. However, we cannot
predict the expected horizontal location, since this is typically unconstrained by the scene.
To combine the local and global sources of information, we construct a feature vector
f which combines the output of the boosted detector, H(v_i^c), and the vector between
the location of the patch and the predicted location for objects of this class, x_i^c − x̂^c.
We then train another classifier to compute P(O_i^c = 1 | f(H(v_i^c), x_i^c, x̂^c)) using either
boosting or logistic regression. In Figure 4, we compare localization performance using
just the detectors, P(O_i^c = 1 | H(v_i^c)), and using the detectors and the predicted location,
P(O_i^c = 1 | f(H(v_i^c), x_i^c, x̂^c)). For keyboards (which are hard to detect) we see that using the predicted location helps a lot, whereas for screens (which are easy to detect), the
location information does not help.
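As a sketch (ours, assuming 2-D patch locations and a weight vector w fit by ordinary logistic regression on held-out detections), the combined score could be computed as:

    import numpy as np

    def combined_score(conf, x_patch, x_pred, w):
        # f = [1, conf, dx, dy]: detector confidence plus the offset between
        # the patch location and the gist-predicted location for this class.
        f = np.concatenate([[1.0, conf], x_patch - x_pred])
        return 1.0 / (1.0 + np.exp(-w @ f))                   # P(O = 1 | f)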
Figure 4: ROC curves for detecting the location of objects in the image: (a) keyboard, (b) screen,
(c) person. The green circles are the local detectors alone, and the blue squares are the detectors and
predicted location.
4 Object presence detection
We can compute the probability that the object exists anywhere in the image (which can be
used for, e.g., object-based image retrieval) by taking the OR of all the detectors:

$P(E^c = 1 | v^c_{1:N}) = \vee_i P(O^c_i = 1 | v^c_i).$
Unfortunately, this leads to massive overconfidence, since the patches are not independent.
As a simple approximation, we can use

$P(E^c = 1 | v^c_{1:N}) \approx \max_i P(O^c_i = 1 | v^c_i) = P(E^c = 1 | \max_i \sigma_i(v^c_i)) = P(E^c = 1 | \sigma^c_{\max}).$
Unfortunately, even for good detectors, this can give poor results: the probability of error
at the image level is $1 - \prod_i (1 - q_i) = 1 - (1-q)^N$, where q is the probability of error at
the patch level and N is the number of patches. For a detector with a reasonably low false
alarm rate, say q = 10^{-4}, and N = 5000 patches, this gives a 40% false detection rate at
the image level! For example, see the reduced performance at the image level of the screen
detector (Figure 5(a)), which performs very well at the patch level (Figure 4(a)).
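Checking this arithmetic directly:

```python
q, N = 1e-4, 5000
print(1 - (1 - q) ** N)   # ~0.39, i.e. roughly a 40% false detection rate
```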
An alternative approach is to use the gist to predict the presence of the object, without using
a detector at all. This is possible because the overall structure of the image can suggest
what kind of scene this is (see Section 5), and this in turn suggests what kinds of objects
are present (see Section 6). We trained another boosted classifier to predict P(E^c = 1 | v^G);
results are shown in Figure 5. For poor detectors, such as keyboards, the gist does a much
better job than the detectors, whereas for good detectors, such as screens, the results are
comparable. Finally, we can combine both approaches by constructing a feature vector
from the output of the global and local boosted classifiers and using logistic regression:

$P(E^c = 1 | v^G, v^c_{1:N}) = \sigma(w^T [1\ \ \sigma(v^G)\ \ \sigma^c_{\max}]).$
However, this seems to offer little improvement over the gist alone (see Figure 5), presumably because our detectors are not very good.
[Figure 5 here: three ROC panels (detection rate vs. false alarm rate) for keyboard, screen, and person, each comparing P(E|vG), P(E|vlocal), and P(E|joint).]
Figure 5: ROC curves for detecting the presence of object classes in the image: (a) keyboard, (b)
screen, (c) person. The green circles use the gist alone, the blue squares use the detectors alone, and
the red stars use the joint model, which uses the gist and all the detectors from all the object classes.
5 Scene classification
As mentioned in the introduction, the presence of many types of objects is correlated.
Rather than model this correlation directly, we introduce a latent common "cause", which
we call the "scene". We assume that object presence is conditionally independent given
the scene, as explained in Section 6. But first we explain how we recognize the scene type,
which in this paper can be office, corridor or street.
The approach we take to scene classification is simple. We train a one-vs-all binary classifier for recognizing each type of scene using boosting applied to the gist.^3
3 An alternative would be to use the multi-class LogitBoost algorithm [10]. However, training
separate one-vs-all classifiers allows them to have different internal structure (e.g., number of rounds).
[Figure 6 here: (a) the graphical model, with gist node vG, scene node S, object-presence nodes E^{c_1}, ..., E^{c_n}, and per-patch nodes O_i^c with features v_i^c; (b) precision-recall curves for the 3 scene categories, comparing Boost, Maxent, Joint, and a Baseline.]
Figure 6: (a) Graphical model for scene and object recognition. n = 6 is the number of object
classes, Nc ≈ 5000 is the number of patches for class c. Other terms are defined in the text. (b)
Precision-recall curve for scene classification.
Then we normalize the results: $P(S = s | v^G) = P(S^s = 1 | v^G) / \sum_{s'} P(S^{s'} = 1 | v^G)$, where
P(S^s = 1 | v^G) is the output of the s-vs-other classifier.^4
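This normalization is a simple rescaling of the one-vs-all outputs so that they sum to one; a minimal sketch:

```python
import numpy as np

def scene_posterior(one_vs_all):
    """Rescale per-scene outputs P(S^s = 1 | v^G) into P(S = s | v^G)."""
    p = np.asarray(one_vs_all, dtype=float)
    return p / p.sum()

print(scene_posterior([0.7, 0.2, 0.4]))   # e.g. office, corridor, street
```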
6 Joint scene classification and object-presence detection
We now discuss how we can use scene classification to facilitate object-presence detection, and vice versa. The approach is based on the tree-structured graphical model^5 in
Figure 6(a), which encodes our assumption that the objects are conditionally independent
given the scene.
This graphical model encodes the following conditional joint density:

$P(S, E^{1:n}, O^{c_1}_{1:N}, \ldots, O^{c_n}_{1:N} \mid v) = \frac{1}{Z} P(S|v^G) \prod_c \psi(E^c, S) \prod_i P(O^c_i \mid E^c, v^c_i)$

where v^G and v_i^c are deterministic functions of the image v, and Z is a normalizing constant,
called the partition function (which is tractable to compute, since the graph is a tree). By
conditioning on the observations as opposed to generating them, we are free to incorporate
arbitrary, possibly overlapping features (local and global), without having to make strong
independence assumptions, c.f. [13, 12].
We now define the individual terms in this expression. P(S|v^G) is the output of boosting
as described in Section 5. ψ(E^c, S) is essentially a table which counts the number of times
object type c occurs in scene type S. Finally, we define
$P(O^c_i = 1 \mid E^c = e, v^c_i) = \begin{cases} \sigma(w^T[1\ \ \sigma(v^c_i)]) & \text{if } e = 1 \\ 0 & \text{if } e = 0 \end{cases}$
This means that if we know the object is absent in the image (E^c = 0), then all the local
detectors should be turned off (O_i^c = 0); but if the object is present (E^c = 1), we do not
know where, so we allow the local evidence, v_i^c, to decide which detectors should turn on.
We can find the maximum likelihood estimates of the parameters of this model by training
it jointly using a gradient procedure; see the long version of this paper for details.
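As an illustration of how the ψ(E^c, S) table above can be estimated from labeled data, a minimal counting sketch (the add-one smoothing is our own choice, not from the paper):

```python
import numpy as np

def count_table(scene_labels, presence, n_scenes):
    """psi[e, s]: count of images with object presence E^c = e in scene type s."""
    psi = np.zeros((2, n_scenes))
    for s, e in zip(scene_labels, presence):
        psi[int(e), int(s)] += 1
    return psi + 1.0   # add-one smoothing (our choice) avoids zero entries

scenes = [0, 0, 1, 2, 1, 0]       # office, office, corridor, street, ...
has_screen = [1, 1, 0, 0, 0, 1]   # E^screen for each image
print(count_table(scenes, has_screen, n_scenes=3))
```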
In Figure 5, we see that we can reliably detect the presence of the object in an image
without using the gist directly, provided we know what the scene type is (the red curve,
derived from the joint model in this section, is basically the same as the green curve,
derived from the gist model in Section 4). The importance of this is that it is easy to label
images with their scene type, and hence to train P(S|v^G), but it is much more time
consuming to annotate objects, which is required to train P(E^c|v^G).^6
4 For scenes, it is arguably more natural to allow multiple labels, as in [3], rather than forcing each
scene into a single category; this can be handled with a simple modification of boosting [18].
5 The graph is a tree once we remove the observed nodes.
7 Conclusions and future work
We have shown how to combine global and local image features to solve the tasks of object
detection and scene recognition. In the future, we plan to try a larger number of object
classes. Also, we would like to investigate methods for choosing which order to run the
detectors. For example, one can imagine a scenario in which we run the screen detector
first (since it is very reliable); if we discover a screen, we conclude we are in an office, and
then decide to look for keyboards and chairs; but if we don't discover a screen, we might
be in a corridor or a street, so we choose to run another detector to disambiguate our belief
state. This corresponds to a dynamic message passing protocol on the graphical model.
References
[1] S. Agarwal and D. Roth. Learning a sparse representation for object detection. In Proc. European Conf. on Computer Vision, 2002.
[2] I. Biederman. On the semantics of a glance at a scene. In M. Kubovy and J. Pomerantz, editors, Perceptual organization, pages 213–253. Erlbaum, 1981.
[3] M. Boutell, X. Shen, J. Luo, and C. Brown. Multi-label semantic scene classification. Technical report, Dept. Comp. Sci. U. Rochester, 2003.
[4] D. Navon. Forest before the trees: the precedence of global features in visual perception. Cognitive Psychology, 9:353–383, 1977.
[5] Pinar Duygulu, Kobus Barnard, Nando de Freitas, David Forsyth, and Michael I. Jordan. Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary. In Proc. European Conf. on Computer Vision, 2002.
[6] R. Fergus, P. Perona, and A. Zisserman. Object class recognition by unsupervised scale-invariant learning. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2003.
[7] D. Field. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am., A4:2379–2394, 1987.
[8] M. Fink and P. Perona. Mutual boosting for contextual influence. In Advances in Neural Info. Proc. Systems, 2003.
[9] J. Friedman. Greedy function approximation: a gradient boosting machine. Annals of Statistics, 29:1189–1232, 2001.
[10] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: a statistical view of boosting. Annals of Statistics, 28(2):337–374, 2000.
[11] R. Haralick. Decision making in context. IEEE Trans. on Pattern Analysis and Machine Intelligence, 5:417–428, 1983.
[12] Sanjiv Kumar and Martial Hebert. Discriminative random fields: A discriminative framework for contextual interaction in classification. In IEEE Conf. on Computer Vision and Pattern Recognition, 2003.
[13] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Intl. Conf. on Machine Learning, 2001.
[14] R. Lienhart, A. Kuranov, and V. Pisarevsky. Empirical analysis of detection cascades of boosted classifiers for rapid object detection. In DAGM 25th Pattern Recognition Symposium, 2003.
[15] C. Papageorgiou and T. Poggio. A trainable system for object detection. Intl. J. Computer Vision, 38(1):15–33, 2000.
[16] J. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In A. Smola, P. Bartlett, B. Schoelkopf, and D. Schuurmans, editors, Advances in Large Margin Classifiers. MIT Press, 1999.
[17] R. Schapire. The boosting approach to machine learning: An overview. In MSRI Workshop on Nonlinear Estimation and Classification, 2001.
[18] Robert E. Schapire and Yoram Singer. BoosTexter: A boosting-based system for text categorization. Machine Learning, 39(2/3):135–168, 2000.
[19] A. Singhal, Jiebo Luo, and Weiyu Zhu. Probabilistic spatial context models for scene content understanding. In Proc. IEEE Conf. Computer Vision and Pattern Recognition, 2003.
[20] T. M. Strat and M. A. Fischler. Context-based vision: recognizing objects using information from both 2-D and 3-D imagery. IEEE Trans. on Pattern Analysis and Machine Intelligence, 13(10):1050–1065, 1991.
[21] A. Torralba. Contextual priming for object detection. Intl. J. Computer Vision, 53(2):153–167, 2003.
[22] A. Torralba, K. Murphy, W. Freeman, and M. Rubin. Context-based vision system for place and object recognition. In Intl. Conf. Computer Vision, 2003.
[23] A. Torralba and P. Sinha. Detecting faces in impoverished images. Technical Report 028, MIT AI Lab, 2001.
[24] Paul Viola and Michael Jones. Robust real-time object detection. International Journal of Computer Vision, to appear, 2002.
6 Since we do not need to know the location of the object in the image in order to train P(E^c|v^G),
we can use partially annotated data such as image captions, as used in [5].
PAC-Bayesian Generic Chaining
Jean-Yves Audibert*
Université Paris 6
Laboratoire de Probabilités et Modèles aléatoires
175 rue du Chevaleret
75013 Paris - France
[email protected]
Olivier Bousquet
Max Planck Institute for Biological Cybernetics
Spemannstrasse 38
D-72076 Tübingen - Germany
[email protected]
Abstract
There exist many different generalization error bounds for classification.
Each of these bounds contains an improvement over the others for certain situations. Our goal is to combine these different improvements into
a single bound. In particular we combine the PAC-Bayes approach introduced by McAllester [1], which is interesting for averaging classifiers,
with the optimal union bound provided by the generic chaining technique
developed by Fernique and Talagrand [2]. This combination is quite natural since the generic chaining is based on the notion of majorizing measures, which can be considered as priors on the set of classifiers, and such
priors also arise in the PAC-bayesian setting.
1 Introduction
Since the first results of Vapnik and Chervonenkis on uniform laws of large numbers for
classes of {0, 1}-valued functions, there has been a considerable amount of work aiming
at obtaining generalizations and refinements of these bounds. This work has been carried
out by different communities. On the one hand, people developing empirical processes theory like Dudley and Talagrand (among others) obtained very interesting results concerning
the behaviour of the suprema of empirical processes. On the other hand, people exploring learning theory tried to obtain refinements for specific algorithms with an emphasis on
data-dependent bounds.
One crucial aspect of all the generalization error bounds is that they aim at controlling the
behaviour of the function that is returned by the algorithm. This function is data-dependent
and thus unknown before seeing the data. As a consequence, if one wants to make statements about its behaviour (e.g. the difference between its empirical error and true error),
one has to be able to predict which function is likely to be chosen by the algorithm. But
* Secondary affiliation: CREST, ENSAE, Laboratoire de Finance et Assurance, Malakoff, France
since this cannot be done exactly, there is a need to provide guarantees that hold simultaneously for several candidate functions. This is known as the union bound. The way to
perform this union bound optimally is now well mastered in the empirical processes community.
In the learning theory setting, one is interested in bounds that are as algorithm and data
dependent as possible. This particular focus has made concentration inequalities (see e.g.
[3]) popular as they allow to obtain data-dependent results in an effortless way. Another
aspect that is of interest for learning is the case where the classifiers are randomized or
averaged. McAllester [1, 4] has proposed a new type of bound that takes the randomization
into account in a clever way.
Our goal is to combine several of these improvements, bringing together the power of
the majorizing measures as an optimal union bound technique and the power of the PAC-Bayesian bounds that handle randomized predictions efficiently, and obtain a generalization
of both that is suited for learning applications.
The paper is structured as follows. Next section introduces the notation and reviews the
previous improved bounds that have been proposed. Then we give our main result and
discuss its applications, showing in particular how to recover previously known results.
Finally we give the proof of the presented results.
2 Previous results
We first introduce the notation and then give an overview of existing generalization error
bounds. We consider an input space X , an output space Y and a probability distribution
P on the product space Z , X ? Y. Let Z , (X, Y ) denote a pair of random variables
distributed according to P and for a given integer n, let Z1 , . . . , Zn and Z10 , . . . , Zn0 be two
independent samples of n independent copies of Z. We denote by Pn , Pn0 and P2n the
empirical measures associated respectively to the first, the second and the union of both
samples.
To each function g : X ? Y we associate the corresponding loss function f : Z ?
R defined by f (z) = L[g(x), y] where L is a loss function. In classification, the loss
function is L = Ig(x)6=y where I denotes the indicator function. F will denote a set of
such functions. For such functions, we denote their
Pn expectation under P by P f and their
empirical expectation by Pn f (i.e. Pn f = n?1 i=1 f (Zi )). En , E0n and E2n denote the
expectation with respect to the first, second and union of both training samples.
We consider the pseudo-distances d2 (f1 , f2 ) = P (f1 ? f2 )2 and similarly dn , d0n and d2n .
We define the covering number N (F, , d) as the minimum number of balls of radius
needed to cover F in the pseudo-distance d.
We denote by ? and ? two probability measures on the space F, so that ?P f will actually
mean the expectation of P f when f is sampled according to the probability measure ?.
For two such measures, K(?, ?) will denote their Kullback-Leibler divergence (K(?, ?) =
d?
? log d?
when ? is absolutely continuous with respect to ? and K(?, ?) = +? otherwise).
Also, ? denotes some positive real number while C is some positive constant (whose value
may differ from line to line) and M1+ (F) is the set of probability measures on F. We
assume that the functions in F have range in [a, b].
Generalization error bounds give an upper bound on the difference between the true and
empirical error of functions in a given class, which holds with high probability with respect
to the sampling of the training set.
Single function. By Hoeffding's inequality one easily gets that for each fixed f ∈ F, with
probability at least 1 − ε,

$P f - P_n f \le C \sqrt{\frac{\log 1/\varepsilon}{n}}. \qquad (1)$
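A quick simulation of the single-function bound (1), using Hoeffding's explicit constant in place of the unspecified C (the Bernoulli loss and all numbers here are our own test setup):

```python
import numpy as np

rng = np.random.default_rng(2)
n, eps, trials = 1000, 0.05, 10000
samples = rng.binomial(1, 0.3, size=(trials, n))   # f(Z) ~ Bernoulli, P f = 0.3
deviation = 0.3 - samples.mean(axis=1)             # P f - P_n f
bound = np.sqrt(np.log(1 / eps) / (2 * n))         # Hoeffding's constant
print((deviation > bound).mean())                  # stays below eps = 0.05
```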
Finite union bound. It is easy to convert the above statement into one which is valid
simultaneously for a finite set of functions F. The simplest form of the union bound gives
that with probability at least 1 − ε,

$\forall f \in \mathcal{F}, \quad P f - P_n f \le C \sqrt{\frac{\log|\mathcal{F}| + \log 1/\varepsilon}{n}}. \qquad (2)$
Symmetrization. When F is infinite, the trick is to introduce the second sample
Z'_1, ..., Z'_n and to consider the set of vectors formed by the values of each function in
F on the double sample. When the functions have values in {0, 1}, this is a finite set and
the above union bound applies. This idea was first used by Vapnik and Chervonenkis [5] to
obtain that with probability at least 1 − ε,

$\forall f \in \mathcal{F}, \quad P f - P_n f \le C \sqrt{\frac{\log \mathbb{E}_{2n} N(\mathcal{F}, 1/n, d_{2n}) + \log 1/\varepsilon}{n}}. \qquad (3)$
Weighted union bound and localization. The finite union bound can be directly extended
to the countable case by introducing a probability distribution π over F which weights each
function, and gives that with probability at least 1 − ε,

$\forall f \in \mathcal{F}, \quad P f - P_n f \le C \sqrt{\frac{\log 1/\pi(f) + \log 1/\varepsilon}{n}}. \qquad (4)$
It is interesting to notice that now the bound depends on the actual function f being considered and not just on the set F. This can thus be called a localized bound.
Variance. Since the deviations between P f and P_n f for a given function f actually depend
on its variance (which is upper bounded by P f²/n, or P f/n when the functions are
in [0, 1]), one can refine (1) into

$P f - P_n f \le C \left( \sqrt{\frac{P f^2 \log 1/\varepsilon}{n}} + \frac{\log 1/\varepsilon}{n} \right), \qquad (5)$
and combine this improvement with the above union bounds. This was done by Vapnik and
Chervonenkis [5] (for functions in {0, 1}).
Averaging. Consider a probability distribution ρ defined on a countable F, take the expectation
of (4) with respect to ρ and use Jensen's inequality. This gives, with probability at
least 1 − ε,

$\forall \rho, \quad \rho(P f - P_n f) \le C \sqrt{\frac{K(\rho,\pi) + H(\rho) + \log 1/\varepsilon}{n}},$

where H(ρ) is the Shannon entropy. The l.h.s. is the difference between true and empirical
error of a randomized classifier which uses ρ as weights for choosing the decision function
(independently of the data). The PAC-Bayes bound [1] is a refined version of the above
bound since it has the form (for possibly uncountable F)

$\forall \rho, \quad \rho(P f - P_n f) \le C \sqrt{\frac{K(\rho,\pi) + \log n + \log 1/\varepsilon}{n}}. \qquad (6)$

To some extent, one can consider that the PAC-Bayes bound is a refined union bound where
the gain happens when ρ is not concentrated on a single function (or more precisely when ρ has
entropy larger than log n).
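For a finite class, every quantity in (6) can be evaluated directly; a schematic sketch with a Gibbs-style posterior (the constant C is set to 1 and the temperature is an arbitrary choice of ours):

```python
import numpy as np

def kl(rho, pi):
    """Kullback-Leibler divergence K(rho, pi) for discrete distributions."""
    mask = rho > 0
    return float(np.sum(rho[mask] * np.log(rho[mask] / pi[mask])))

rng = np.random.default_rng(3)
n, m = 1000, 50                       # sample size and |F|
emp_err = rng.uniform(0.2, 0.6, m)    # empirical errors P_n f
pi = np.full(m, 1.0 / m)              # uniform prior
rho = np.exp(-n * emp_err / 2)        # Gibbs posterior, arbitrary temperature
rho /= rho.sum()

eps = 0.05
gap = np.sqrt((kl(rho, pi) + np.log(n) + np.log(1 / eps)) / n)
print(rho @ emp_err + gap)            # schematic bound on rho P f (C = 1)
```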
Rademacher averages. The quantity $\mathbb{E}_n \mathbb{E}_\sigma \sup_{f\in\mathcal{F}} \frac{1}{n}\sum_i \sigma_i f(Z_i)$, where the σ_i are
independent random signs (+1, −1 with probability 1/2), called the Rademacher average for
F, is, up to a constant, equal to $\mathbb{E}_n \sup_{f\in\mathcal{F}} (P f - P_n f)$, which means that it best captures the
complexity of F. One has, with probability 1 − ε,

$\forall f \in \mathcal{F}, \quad P f - P_n f \le C \left( \mathbb{E}_n \mathbb{E}_\sigma \sup_{f\in\mathcal{F}} \frac{1}{n}\sum_i \sigma_i f(Z_i) + \sqrt{\frac{\log 1/\varepsilon}{n}} \right). \qquad (7)$
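The Rademacher average is easy to estimate by Monte Carlo once the values f(Z_i) are tabulated; a sketch for a finite class (the random 0/1 table stands in for real function values):

```python
import numpy as np

def rademacher_average(values, n_draws=1000, seed=0):
    """values[k, i] = f_k(Z_i); estimates E_sigma sup_f (1/n) sum_i sigma_i f(Z_i)."""
    rng = np.random.default_rng(seed)
    n = values.shape[1]
    sigma = rng.choice([-1.0, 1.0], size=(n_draws, n))
    return (sigma @ values.T / n).max(axis=1).mean()

vals = np.random.default_rng(4).integers(0, 2, size=(30, 500)).astype(float)
print(rademacher_average(vals))
```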
Chaining. Another direction in which the union bound can be refined is by considering
finite covers of the set of functions at different scales. This is called the chaining technique,
pioneered by Dudley (see e.g. [6]), since one constructs a chain of functions that approximate
a given function more and more closely. The results involve the Koltchinskii-Pollard
entropy integral as, for example in [7], with probability 1 − ε,

$\forall f \in \mathcal{F}, \quad P f - P_n f \le C \left( \frac{1}{\sqrt{n}}\, \mathbb{E}_n \int_0^\infty \sqrt{\log N(\mathcal{F}, u, d_n)}\, du + \sqrt{\frac{\log 1/\varepsilon}{n}} \right). \qquad (8)$
Generic chaining. It has been noticed by Fernique and Talagrand that it is possible to
capture the complexity in a better way than using minimal covers, by considering majorizing
measures (essentially optimal for Gaussian processes). Let r > 0 and (A_j)_{j≥1} be partitions
of F of diameter r^{-j} w.r.t. the distance d_n such that A_{j+1} refines A_j. Using (7) and
techniques from [2] we obtain that, with probability 1 − ε, for all f ∈ F,

$P f - P_n f \le C \left( \frac{1}{\sqrt{n}}\, \mathbb{E}_n \inf_{\pi \in \mathcal{M}_1^+(\mathcal{F})} \sup_{f\in\mathcal{F}} \sum_{j\ge 1} r^{-j} \sqrt{\log 1/\pi[A_j(f)]} + \sqrt{\frac{\log 1/\varepsilon}{n}} \right). \qquad (9)$
If one takes partitions induced by minimal covers of F at radii r^{-j}, one recovers (8) up to
a constant.
Concentration. Using concentration inequalities as in [3] for example, one can get rid of
the expectation appearing in the r.h.s. of (3), (8), (7) or (9) and thus obtain a bound that
can be computed from the data.
Refining the bound (7) is possible as one can localize it (see e.g. [8]) by computing the
Rademacher average only on a small ball around the function of interest. So this comes
close to combining all improvements. However it has not been combined with the PACBayes improvement. Our goal is to try and combine all the above improvements.
3 Main results
Let F be as defined in Section 2 with a = 0, b = 1, and let π ∈ M₁⁺(F). Instead of using
partitions as in (9) we use approximating sets (which also induce partitions but are easier
to handle here). Consider a sequence S_j of embedded finite subsets of F:
$\{f_0\} \triangleq S_0 \subset \cdots \subset S_{j-1} \subset S_j \subset \cdots$.

Let p_j : F → S_j be maps (which can be thought of as projections) satisfying p_j(f) = f
for f ∈ S_j and $p_{j-1} \circ p_j = p_{j-1}$.

The quantities π, S_j and p_j are allowed to depend on $X_1^{2n}$ in an exchangeable way (i.e.,
exchanging $X_i$ and $X'_i$ does not affect their value). For a probability distribution ρ on
F, define its j-th projection as $\rho_j = \sum_{f' \in S_j} \rho\{f : p_j(f) = f'\}\, \delta_{f'}$, where δ_{f'} denotes
the Dirac measure on f'. To shorten notations, we denote the average distance between
two successive "projections" by $\rho d_j^2 \triangleq \rho\, d_{2n}^2[p_j(f), p_{j-1}(f)]$. Finally, let
$\Delta_{n,j}(f) \triangleq P'_n[f - p_j(f)] - P_n[f - p_j(f)]$.
Theorem 1 If the following condition holds

$\lim_{j\to+\infty} \sup_{f\in\mathcal{F}} \Delta_{n,j}(f) = 0, \quad a.s. \qquad (10)$

then for any 0 < ε < 1/2, with probability at least 1 − ε, for any distribution ρ, we have

$\rho P'_n f - P'_n f_0 \le \rho P_n f - P_n f_0 + 5 \sum_{j=1}^{+\infty} \sqrt{\frac{\rho d_j^2\, K(\rho_j, \pi_j)}{n}} + \frac{1}{\sqrt{n}} \sum_{j=1}^{+\infty} \chi_j(\rho d_j^2),$

where $\chi_j(x) = 4\sqrt{x \log\big(4 j^2 \varepsilon^{-1} \log(e^2/x)\big)}$.
Remark 1 Assumption (10) is not very restrictive. For instance, it is satisfied when F is
finite, or when $\lim_{j\to+\infty} \sup_{f\in\mathcal{F}} |f - p_j(f)| = 0$ almost surely, or also when the empirical
process $f \mapsto P f - P_n f$ is uniformly continuous (which happens for classes with finite
VC dimension in particular) and $\lim_{j\to+\infty} \sup_{f\in\mathcal{F}} d_{2n}(f, p_j(f)) = 0$.
Remark 2 Let G be a model (i.e., a set of prediction functions). Let g̃ be a reference
function (not necessarily in G). Consider the class of functions
$\mathcal{F} = \{z \mapsto L[g(x), y] : g \in \mathcal{G} \cup \{\tilde{g}\}\}$. Let $f_0 = L[\tilde{g}(x), y]$. The previous theorem compares the risk on the second
sample of any (randomized) estimator with the risk on the second sample of the reference
function g̃.
Now let us give a version of the previous theorem in which the second sample does not
appear.
Theorem 2 If the following condition holds

$\lim_{j\to+\infty} \sup_{f\in\mathcal{F}} \mathbb{E}'_n \Delta_{n,j}(f) = 0, \quad a.s. \qquad (11)$

then for any 0 < ε < 1/2, with probability at least 1 − ε, for any distribution ρ, we have

$\rho P f - P f_0 \le \rho P_n f - P_n f_0 + 5 \sum_{j=1}^{+\infty} \sqrt{\frac{\mathbb{E}'_n[\rho d_j^2]\, \mathbb{E}'_n[K(\rho_j, \pi_j)]}{n}} + \frac{1}{\sqrt{n}} \sum_{j=1}^{+\infty} \chi_j\big(\mathbb{E}'_n[\rho d_j^2]\big).$
4 Discussion
We now discuss in which sense the result presented above combines several previous improvements in a single bound.
Notice that our bound is localized in the sense that it depends on the function of interest (or rather on the averaging distribution ρ) and does not involve a supremum over the class.

Also, the union bound is performed in an optimal way since, if one plugs in a distribution ρ
concentrated on a single function, takes a supremum over F in the r.h.s., and upper bounds
the squared distance by the diameter of the partition, one recovers a result similar to (9),
up to logarithmic factors, but which is localized. Also, when two successive projections
are identical, they do not enter in the bound (which comes from the fact that the variance
weights the complexity terms). Moreover, Theorem 1 also includes the PAC-Bayesian
improvement for averaging classifiers since, if one considers the set S_1 = F, one recovers
a result similar to McAllester's (6), which in addition contains the variance improvement
such as in [9].
such as in [9].
Finally due to the power of the generic chaining, it is possible to upper bound our result by
Rademacher averages, up to logarithmic factors (using the results of [10] and [11]).
As a remark, the choice of the sequence of sets Sj can generally be done by taking successive covers of the hypothesis space with geometrically decreasing radii.
However, the obtained bound is not completely empirical since it involves the expectation
with respect to an extra sample. In the transduction setting, this is not an issue, it is even
an advantage as one can use the unlabeled data in the computation of the bound. However,
in the induction setting, this is a drawback. Future work will focus on using concentration
inequalities to give a fully empirical bound.
5 Proofs
Proof of Theorem 1: The proof is inspired by previous works on PAC-bayesian bounds
[12, 13] and on the generic chaining [2]. We first prove the following lemma.
Lemma 1 For any λ > 0, ε > 0, j ∈ N* and any exchangeable function π : X^{2n} →
M₁⁺(F), with probability at least 1 − ε, for any probability distribution ρ ∈ M₁⁺(F), we
have

$\rho\big\{ P'_n[p_j(f) - p_{j-1}(f)] - P_n[p_j(f) - p_{j-1}(f)] \big\} \le \frac{2\lambda}{n}\, \rho\, d_{2n}^2[p_j(f), p_{j-1}(f)] + \frac{K(\rho,\pi) + \log(\varepsilon^{-1})}{\lambda}.$
Proof Let λ > 0 and let π : X^{2n} → M₁⁺(F) be an exchangeable function. Introduce the
quantity $\Delta_i \triangleq p_j(f)(Z_{n+i}) - p_{j-1}(f)(Z_{n+i}) + p_{j-1}(f)(Z_i) - p_j(f)(Z_i)$ and

$h \triangleq \lambda P'_n\big[p_j(f) - p_{j-1}(f)\big] - \lambda P_n\big[p_j(f) - p_{j-1}(f)\big] - \frac{2\lambda^2}{n}\, d_{2n}^2[p_j(f), p_{j-1}(f)]. \qquad (12)$

By using the exchangeability of π, for any σ ∈ {−1; +1}^n, we have

$\mathbb{E}_{2n}\, \pi e^h = \mathbb{E}_{2n}\, \pi e^{-\frac{2\lambda^2}{n} d_{2n}^2[p_j(f), p_{j-1}(f)] + \frac{\lambda}{n}\sum_{i=1}^n \Delta_i} = \mathbb{E}_{2n}\, \pi e^{-\frac{2\lambda^2}{n} d_{2n}^2[p_j(f), p_{j-1}(f)] + \frac{\lambda}{n}\sum_{i=1}^n \sigma_i \Delta_i}.$

Now take the expectation wrt σ, where σ is an n-dimensional vector of Rademacher variables.
We obtain

$\mathbb{E}_{2n}\, \pi e^h = \mathbb{E}_{2n}\, \pi e^{-\frac{2\lambda^2}{n} d_{2n}^2[p_j(f), p_{j-1}(f)]} \prod_{i=1}^n \cosh\frac{\lambda \Delta_i}{n} \le \mathbb{E}_{2n}\, \pi e^{-\frac{2\lambda^2}{n} d_{2n}^2[p_j(f), p_{j-1}(f)]}\, e^{\frac{\lambda^2}{2n^2}\sum_{i=1}^n \Delta_i^2},$

where at the last step we use that cosh s ≤ e^{s²/2}. Since

$\Delta_i^2 \le 2\big(p_j(f)(Z_{n+i}) - p_{j-1}(f)(Z_{n+i})\big)^2 + 2\big(p_j(f)(Z_i) - p_{j-1}(f)(Z_i)\big)^2,$

we obtain that for any λ > 0, $\mathbb{E}_{2n}\, \pi e^h \le 1$. Therefore, for any ε > 0, we have

$\mathbb{E}_{2n}\, \mathbb{1}_{\log \pi e^{h+\log\varepsilon} > 0} = \mathbb{E}_{2n}\, \mathbb{1}_{\pi e^{h+\log\varepsilon} > 1} \le \mathbb{E}_{2n}\, \pi e^{h+\log\varepsilon} \le \varepsilon. \qquad (13)$

On the event $\{\log \pi e^{h+\log\varepsilon} \le 0\}$, by the Legendre transform, for any probability distribution ρ ∈ M₁⁺(F), we have

$\rho h + \log\varepsilon \le \log \pi e^{h+\log\varepsilon} + K(\rho, \pi) \le K(\rho, \pi), \qquad (14)$

which proves the lemma.
Now let us apply this result to the projected measures ρ_j and π_j. Since, by definition, π, S_j
and p_j are exchangeable, π_j is also exchangeable. Since p_j(f) = f for any f ∈ S_j, with
probability at least 1 − ε, uniformly in ρ, we have

$\rho_j\big\{ P'_n[f - p_{j-1}(f)] - P_n[f - p_{j-1}(f)] \big\} \le \frac{2\lambda}{n}\, \rho_j d_{2n}^2[f, p_{j-1}(f)] + \frac{K'_j}{\lambda},$

where $K'_j \triangleq K(\rho_j, \pi_j) + \log(\varepsilon^{-1})$. By definition of ρ_j, it implies that

$\rho\big\{ P'_n[p_j(f) - p_{j-1}(f)] - P_n[p_j(f) - p_{j-1}(f)] \big\} \le \frac{2\lambda}{n}\, \rho d_{2n}^2[p_j(f), p_{j-1}(f)] + \frac{K'_j}{\lambda}. \qquad (15)$

To shorten notations, define $\rho d_j^2 \triangleq \rho d_{2n}^2[p_j(f), p_{j-1}(f)]$ and
$\rho \Delta_j \triangleq \rho\big( P'_n[p_j(f) - p_{j-1}(f)] - P_n[p_j(f) - p_{j-1}(f)] \big)$. The parameter λ minimizing the RHS of the previous equation depends on ρ. Therefore, we need to get a version of this inequality which
holds uniformly in λ.
First let us note that when ρd_j² = 0, we have ρΔ_j = 0. When ρd_j² > 0, let $m \triangleq \sqrt{\frac{n\log 2}{2}}$ and
$\lambda_k = m e^{k/2}$, and let b be a function from R⁺ to (0, 1] such that $\sum_{k\ge 1} b(\lambda_k) \le 1$. From the
previous lemma and a union bound, we obtain that for any ε > 0 and any integer j, with
probability at least 1 − ε, for any k ∈ N* and any distribution ρ, we have

$\rho\Delta_j \le \frac{2\lambda_k}{n}\,\rho d_j^2 + \frac{K(\rho_j,\pi_j) + \log\big([b(\lambda_k)]^{-1}\varepsilon^{-1}\big)}{\lambda_k}.$

Let us take the function b such that $\lambda \mapsto \frac{\log([b(\lambda)]^{-1})}{\lambda}$ is continuous and decreasing.
Then there exists a parameter λ* > 0 such that $\frac{2\lambda^*}{n}\,\rho d_j^2 = \frac{K(\rho_j,\pi_j) + \log([b(\lambda^*)]^{-1}\varepsilon^{-1})}{\lambda^*}$. For
any ε < 1/2, we have $(\lambda^*)^2 \rho d_j^2 \ge \frac{\log 2}{2}\, n$, hence λ* ≥ m. So there exists an integer
k ∈ N* such that $\lambda_k e^{-1/2} \le \lambda^* \le \lambda_k$. Then we have

$\rho\Delta_j \le \frac{2\lambda^* e}{n}\,\rho d_j^2 + \frac{K(\rho_j,\pi_j) + \log([b(\lambda^*)]^{-1}\varepsilon^{-1})}{\lambda^*} = (1+e)\sqrt{\frac{2}{n}\,\rho d_j^2\,\big[K(\rho_j,\pi_j) + \log([b(\lambda^*)]^{-1}\varepsilon^{-1})\big]}. \qquad (16)$
To have an explicit bound, it remains to find an upper bound on [b(λ*)]^{-1}. When b is
decreasing, this comes down to upper bounding λ*. Let us choose $b(\lambda) = \frac{1}{[\log(e^2\lambda/m)]^2}$ when
λ ≥ m and b(λ) = 1/4 otherwise. Since $b(\lambda_k) = \frac{4}{(k+4)^2}$, we have $\sum_{k\ge 1} b(\lambda_k) \le 1$.
Tedious computations give $\lambda^* \le 7m\sqrt{K'_j/\rho d_j^2}$, which combined with (16) yields

$\rho\Delta_j \le 5\sqrt{\frac{\rho d_j^2\, K(\rho_j,\pi_j)}{n}} + 3.75\sqrt{\frac{\rho d_j^2}{n}\,\log\Big(2\varepsilon^{-1}\log\frac{e^2}{\rho d_j^2}\Big)}.$
By simply using a union bound with weights taken proportional to 1/j², we have that
the previous inequality holds uniformly in j ∈ N*, provided that ε^{-1} is replaced with
$\frac{\pi^2}{6} j^2 \varepsilon^{-1}$, since $\sum_{j\in\mathbb{N}^*} 1/j^2 = \pi^2/6 \approx 1.64$. Notice that

$P'_n f - P'_n f_0 + P_n f_0 - P_n f = \Delta_{n,J}(f) + \sum_{j=1}^{J} \big[ (P'_n - P_n)\,p_j(f) - (P'_n - P_n)\,p_{j-1}(f) \big]$

because $p_{j-1} = p_{j-1} \circ p_j$. So, with probability at least 1 − ε, for any distribution ρ, we
have

$\rho P'_n f - P'_n f_0 + P_n f_0 - \rho P_n f \le \sup_{\mathcal{F}} \Delta_{n,J} + 5\sum_{j=1}^{J}\sqrt{\frac{\rho d_j^2\, K(\rho_j,\pi_j)}{n}} + 3.75\sum_{j=1}^{J}\sqrt{\frac{\rho d_j^2}{n}\,\log\Big(3.3\, j^2 \varepsilon^{-1}\log\frac{e^2}{\rho d_j^2}\Big)}.$

Making J → +∞, we obtain Theorem 1.
Proof of Theorem 2 It suffices to modify slightly the proof of Theorem 1. Introduce
$U \triangleq \sup_\rho \big\{ \rho h + \log\varepsilon - K(\rho,\pi) \big\}$, where h is still defined as in equation (12). Inequality (14)
implies that $\mathbb{E}_{2n} e^U \le \varepsilon$. By Jensen's inequality, we get $\mathbb{E}_n e^{\mathbb{E}'_n U} \le \varepsilon$, hence with probability
at least 1 − ε we have $\mathbb{E}'_n U \le 0$. So with probability at least 1 − ε, we have
$\sup_\rho \mathbb{E}'_n\big\{ \rho h + \log\varepsilon - K(\rho,\pi) \big\} \le \mathbb{E}'_n U \le 0$.
6 Conclusion
We have obtained a generalization error bound for randomized classifiers which combines
several previous improvements. It contains an optimal union bound, both in the sense of
optimally taking into account the metric structure of the set of functions (via the majorizing
measure approach) and in the sense of taking into account the averaging distribution. We
believe that this is a very natural way of combining these two aspects as the result relies
on the comparison of a majorizing measure which can be thought of as a prior probability
distribution and a randomization distribution which can be considered as a posterior distribution.
Future work will focus on giving a totally empirical bound (in the induction setting) and
investigating possible constructions for the approximating sets Sj .
References
[1] D. A. McAllester. Some PAC-Bayesian theorems. In Proceedings of the 11th Annual Conference on Computational Learning Theory, pages 230–234. ACM Press, 1998.
[2] M. Talagrand. Majorizing measures: The generic chaining. Annals of Probability, 24(3):1049–1103, 1996.
[3] S. Boucheron, G. Lugosi, and S. Massart. A sharp concentration inequality with applications. Random Structures and Algorithms, 16:277–292, 2000.
[4] D. A. McAllester. PAC-Bayesian model averaging. In Proceedings of the 12th Annual Conference on Computational Learning Theory. ACM Press, 1999.
[5] V. Vapnik and A. Chervonenkis. Theory of Pattern Recognition [in Russian]. Nauka, Moscow, 1974. (German translation: W. Wapnik & A. Tscherwonenkis, Theorie der Zeichenerkennung, Akademie-Verlag, Berlin, 1979).
[6] R. M. Dudley. A course on empirical processes. Lecture Notes in Mathematics, 1097:2–142, 1984.
[7] L. Devroye and G. Lugosi. Combinatorial Methods in Density Estimation. Springer Series in Statistics. Springer-Verlag, New York, 2001.
[8] P. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Preprint, 2003.
[9] D. A. McAllester. Simplified PAC-Bayesian margin bounds. In Proceedings of Computational Learning Theory (COLT), 2003.
[10] M. Ledoux and M. Talagrand. Probability in Banach spaces. Springer-Verlag, Berlin, 1991.
[11] M. Talagrand. The Glivenko-Cantelli problem. Annals of Probability, 6:837–870, 1987.
[12] O. Catoni. Localized empirical complexity bounds and randomized estimators, 2003. Preprint.
[13] J.-Y. Audibert. Data-dependent generalization error bounds for (noisy) classification: a PAC-Bayesian approach. 2003. Work in progress.
Learning Spectral Clustering
Francis R. Bach
Computer Science
University of California
Berkeley, CA 94720
[email protected]
Michael I. Jordan
Computer Science and Statistics
University of California
Berkeley, CA 94720
[email protected]
Abstract
Spectral clustering refers to a class of techniques which rely on the eigenstructure of a similarity matrix to partition points into disjoint clusters
with points in the same cluster having high similarity and points in different clusters having low similarity. In this paper, we derive a new cost
function for spectral clustering based on a measure of error between a
given partition and a solution of the spectral relaxation of a minimum
normalized cut problem. Minimizing this cost function with respect to
the partition leads to a new spectral clustering algorithm. Minimizing
with respect to the similarity matrix leads to an algorithm for learning
the similarity matrix. We develop a tractable approximation of our cost
function that is based on the power method of computing eigenvectors.
1
Introduction
Spectral clustering has many applications in machine learning, exploratory data analysis,
computer vision and speech processing. Most techniques explicitly or implicitly assume
a metric or a similarity structure over the space of configurations, which is then used by
clustering algorithms. The success of such algorithms depends heavily on the choice of
the metric, but this choice is generally not treated as part of the learning problem. Thus,
time-consuming manual feature selection and weighting is often a necessary precursor to
the use of spectral methods.
Several recent papers have considered ways to alleviate this burden by incorporating prior
knowledge into the metric, either in the setting of K-means clustering [1, 2] or spectral
clustering [3, 4]. In this paper, we consider a complementary approach, providing a general
framework for learning the similarity matrix for spectral clustering from examples. We assume that we are given sample data with known partitions and are asked to build similarity
matrices that will lead to these partitions when spectral clustering is performed. This problem is motivated by the availability of such datasets for at least two domains of application:
in vision and image segmentation, a hand-segmented dataset is now available [5], while
for the blind separation of speech signals via partitioning of the time-frequency plane [6],
training examples can be created by mixing previously captured signals.
Another important motivation for our work is the need to develop spectral clustering methods that are robust to irrelevant features. Indeed, as we show in Section 4.2, the performance of current spectral methods can degrade dramatically in the presence of such irrelevant features. By using our learning algorithm to learn a diagonally-scaled Gaussian kernel
for generating the affinity matrix, we obtain an algorithm that is significantly more robust.
Our work is based on a new cost function J(W, e) that characterizes how close the eigenstructure of a similarity matrix W is to a partition e. We derive this cost function in Section 2. As we show in Section 2.3, minimizing J with respect to e leads to a new clustering
algorithm that takes the form of a weighted K-means algorithm. Minimizing J with respect to W yields an algorithm for learning the similarity matrix, as we show in Section 4.
Section 3 provides foundational material on the approximation of the eigensubspace of a
symmetric matrix that is needed for Section 4.
2 Spectral clustering and normalized cuts
Given a dataset I of P points in a space X and a P × P "similarity matrix" (or "affinity
matrix") W that measures the similarity between the P points (W_{pp'} is large when points
indexed by p and p' are likely to be in the same cluster), the goal of clustering is to organize
the dataset into disjoint subsets with high intra-cluster similarity and low inter-cluster
similarity. Throughout this paper we always assume that the elements of W are non-negative
(W ≥ 0) and that W is symmetric (W = W^⊤).

Let D denote the diagonal matrix whose i-th diagonal element is the sum of the elements in
the i-th row of W, i.e., D = diag(W1), where 1 is defined as the vector in R^P composed of
ones. There are different variants of spectral clustering. In this paper we focus on the task of
minimizing "normalized cuts." The classical relaxation of this NP-hard problem [7, 8, 9]
leads to an eigenvalue problem. In this section we show that the problem of finding a
solution to the original problem that is closest to the relaxed solution can be solved by a
weighted K-means algorithm.
2.1 Normalized cut and graph partitioning
The clustering problem is usually defined in terms of a complete graph with vertices
V = {1, ..., P} and an affinity matrix with weights W_{pp'}, for p, p' ∈ V. We wish to find R
disjoint clusters A = (A_r)_{r∈{1,...,R}}, where $\bigcup_r A_r = V$, that optimize a certain cost
function. An example of such a function is the R-way normalized cut, defined as follows [7, 10]:

$C(A, W) = \sum_{r=1}^{R} \frac{\sum_{i \in A_r,\, j \in V \setminus A_r} W_{ij}}{\sum_{i \in A_r,\, j \in V} W_{ij}}.$

Let e_r be the indicator vector in R^P for the r-th cluster, i.e., e_r ∈ {0, 1}^P is such that e_r
has a nonzero component exactly at points in the r-th cluster. Knowledge of e = (e_r) is
equivalent to knowledge of A = (A_r) and, when referring to partitions, we will use the two
formulations interchangeably. A short calculation reveals that the normalized cut is then
equal to $C(e, W) = \sum_{r=1}^{R} e_r^\top (D - W) e_r / (e_r^\top D e_r)$.
2.2 Spectral relaxation and rounding
The following proposition, which extends a result of Shi and Malik [7] for two clusters
to an arbitrary number of clusters, gives an alternative description of the clustering task,
which will lead to a spectral relaxation:

Proposition 1 The R-way normalized cut is equal to $R - \operatorname{tr} Y^\top D^{-1/2} W D^{-1/2} Y$ for any
matrix Y ∈ R^{P×R} such that (a) the columns of D^{-1/2}Y are piecewise constant with
respect to the clusters and (b) Y has orthonormal columns (Y^⊤Y = I).
Proof The constraint (a) is equivalent to the existence of a matrix Λ ∈ R^{R×R} such
that D^{-1/2}Y = (e_1, ..., e_R)Λ = EΛ. The constraint (b) is thus written as I = Y^⊤Y =
Λ^⊤E^⊤DEΛ. The matrix E^⊤DE is diagonal, with elements e_r^⊤De_r, and is thus positive
and invertible. This immediately implies that ΛΛ^⊤ = (E^⊤DE)^{-1}. This in turn implies that
$\operatorname{tr} Y^\top D^{-1/2} W D^{-1/2} Y = \operatorname{tr} \Lambda^\top E^\top W E \Lambda = \operatorname{tr} E^\top W E \Lambda\Lambda^\top = \operatorname{tr} E^\top W E (E^\top D E)^{-1}$,
which is exactly the normalized cut (up to an additive constant).
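Proposition 1 is easy to sanity-check numerically; the following sketch (ours, with a deterministic toy partition) compares the direct normalized cut with the trace expression, using the admissible choice Y = D^{1/2}E(E^⊤DE)^{-1/2}:

```python
import numpy as np

rng = np.random.default_rng(5)
P, R = 12, 3
W = rng.uniform(size=(P, P)); W = W + W.T        # symmetric, nonnegative
D = np.diag(W.sum(axis=1))
labels = np.arange(P) % R                         # toy partition, no empty cluster
E = np.eye(R)[labels]                             # column r is e_r

# Direct R-way normalized cut: sum_r e_r^T (D - W) e_r / (e_r^T D e_r)
cut = sum(E[:, r] @ (D - W) @ E[:, r] / (E[:, r] @ D @ E[:, r]) for r in range(R))

# Proposition 1 with Y = D^{1/2} E (E^T D E)^{-1/2}
Y = np.sqrt(D) @ E @ np.diag((E.T @ D @ E).diagonal() ** -0.5)
D_isqrt = np.diag(np.diag(D) ** -0.5)
print(cut, R - np.trace(Y.T @ D_isqrt @ W @ D_isqrt @ Y))  # the two agree
```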
By removing the constraint (a), we obtain a relaxed optimization problem, whose solutions
involve the eigenstructure of D^{-1/2}WD^{-1/2} and which leads to the classical lower bound
on the optimal normalized cut [8, 9]. The following proposition gives the solution obtained
from the relaxation (for the proof, see [11]):

Proposition 2 The maximum of $\operatorname{tr} Y^\top D^{-1/2} W D^{-1/2} Y$ over matrices Y ∈ R^{P×R} such
that Y^⊤Y = I is the sum of the R largest eigenvalues of D^{-1/2}WD^{-1/2}. It is attained
at all Y of the form Y = UB₁ where U ∈ R^{P×R} is any orthonormal basis of the R-th
principal subspace of D^{-1/2}WD^{-1/2} and B₁ is an arbitrary rotation matrix in R^{R×R}.
The solutions found by this relaxation will not in general be piecewise constant. In order
to obtain a piecewise constant solution, we wish to find a piecewise constant matrix
that is as close as possible to one of the possible Y obtained from the eigendecomposition.
Since such matrices are defined up to a rotation matrix, it makes sense to compare the
subspaces spanned by their columns. A common way to compare subspaces is to
compare the orthogonal projection operators on those subspaces [12], that is, to compute
the Frobenius norm between UU^⊤ and $\Pi_0 = \Pi_0(W, e) \triangleq \sum_r D^{1/2} e_r e_r^\top D^{1/2} / (e_r^\top D e_r)$
(Π₀ is the orthogonal projection operator on the subspace spanned by the columns of
D^{1/2}E = D^{1/2}(e_1, ..., e_R), from Proposition 1). We thus define the following cost
function:

$J(W, e) = \tfrac{1}{2} \| U U^\top - \Pi_0 \|_F^2. \qquad (1)$
Using the fact that both UU^⊤ and Π₀ are orthogonal projection operators on linear
subspaces of dimension R, a short calculation reveals that the cost function J(W, e) is equal
to $R - \operatorname{tr} U U^\top \Pi_0 = R - \sum_r e_r^\top D^{1/2} U U^\top D^{1/2} e_r / (e_r^\top D e_r)$. This cost function
characterizes the ability of the matrix W to produce the partition e when using its eigenvectors.
Minimizing with respect to e leads to a new clustering algorithm that we now present.
Minimizing with respect to the matrix for a given partition e leads to the learning of the
similarity matrix, as we show in Section 4.
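The cost J(W, e) follows directly from this expression; a minimal numpy sketch (we take the R algebraically largest eigenvalues of the normalized matrix, as in Proposition 2):

```python
import numpy as np

def cost_J(W, labels, R):
    """J(W, e) = R - sum_r e_r^T D^{1/2} U U^T D^{1/2} e_r / (e_r^T D e_r)."""
    labels = np.asarray(labels)
    d = W.sum(axis=1)
    L = W / np.sqrt(np.outer(d, d))      # D^{-1/2} W D^{-1/2}
    _, V = np.linalg.eigh(L)
    U = V[:, -R:]                        # R largest eigenvalues (ascending order)
    J = float(R)
    for r in range(R):
        e = (labels == r).astype(float)
        v = np.sqrt(d) * e               # D^{1/2} e_r
        J -= (v @ U) @ (U.T @ v) / (e @ (d * e))
    return J
```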
2.3 Minimizing with respect to the partition
In this section, we show that minimizing J(W, e) is equivalent to a weighted K-means
algorithm. The following theorem, inspired by the spectral relaxation of K-means presented
in [8], shows that the cost function can be interpreted as a weighted distortion measure¹:

Theorem 1 Let W be an affinity matrix and let U = (u_1, ..., u_P), where u_p ∈ R^R, be
an orthonormal basis of the R-th principal subspace of D^{-1/2}WD^{-1/2}. For any partition
e ≡ A, we have

$J(W, e) = \min_{(\mu_1,\ldots,\mu_R) \in \mathbb{R}^{R \times R}} \sum_r \sum_{p \in A_r} d_p \| u_p d_p^{-1/2} - \mu_r \|^2.$
Proof Let $D(\mu, A) = \sum_r \sum_{p \in A_r} d_p \| u_p d_p^{-1/2} - \mu_r \|^2$. Minimizing D(μ, A) with respect
to μ is a decoupled least-squares problem and we get:

$\min_\mu D(\mu, A) = \sum_r \sum_{p \in A_r} u_p^\top u_p - \sum_r \Big\| \sum_{p \in A_r} d_p^{1/2} u_p \Big\|^2 \Big/ \Big( \sum_{p \in A_r} d_p \Big)$
1 Note that a similar equivalence holds between normalized cuts and weighted K-means for
positive semidefinite similarity matrices, which can be factorized as W = GG^⊤; this leads
to an approximation algorithm for minimizing normalized cuts; i.e., we have:
$C(W, e) = \min_{(\mu_1,\ldots,\mu_R) \in \mathbb{R}^{R\times R}} \sum_r \sum_{p \in A_r} d_p \| g_p d_p^{-1} - \mu_r \|^2 + R - \operatorname{tr} D^{-1/2} W D^{-1/2}$.
Input: Similarity matrix W ∈ R^{P×P}.
Algorithm:
1. Compute first R eigenvectors U of D^{-1/2}WD^{-1/2}, where D = diag(W1).
2. Let U = (u_1, ..., u_P) ∈ R^{R×P} and d_p = D_pp.
3. Weighted K-means: while partition A is not stationary,
   a. For all r, $\mu_r = \sum_{p \in A_r} d_p^{1/2} u_p / \sum_{p \in A_r} d_p$
   b. For all p, assign p to A_r where $r = \arg\min_{r'} \| u_p d_p^{-1/2} - \mu_{r'} \|$
Output: partition A, distortion measure $\sum_r \sum_{p \in A_r} d_p \| u_p d_p^{-1/2} - \mu_r \|^2$
Figure 1: Spectral clustering algorithm.
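A direct transcription of Figure 1 as a sketch (initialization and stopping details are our own choices; the sketch assumes no cluster becomes empty during the iterations):

```python
import numpy as np

def spectral_clustering(W, R, n_iter=100, seed=0):
    """Weighted K-means rounding of the spectral relaxation (Figure 1)."""
    rng = np.random.default_rng(seed)
    d = W.sum(axis=1)
    L = W / np.sqrt(np.outer(d, d))            # D^{-1/2} W D^{-1/2}
    _, V = np.linalg.eigh(L)
    Z = V[:, -R:] / np.sqrt(d)[:, None]        # row p is u_p d_p^{-1/2}
    A = rng.integers(0, R, size=len(d))        # random initial partition
    for _ in range(n_iter):
        # Step a: mu_r is the d-weighted mean of the rows of Z in cluster r,
        # which equals sum_{p in A_r} d_p^{1/2} u_p / sum_{p in A_r} d_p.
        mu = np.array([np.average(Z[A == r], axis=0, weights=d[A == r])
                       for r in range(R)])
        # Step b: reassign each point to the nearest center.
        new_A = ((Z[:, None, :] - mu[None]) ** 2).sum(-1).argmin(axis=1)
        if np.array_equal(new_A, A):
            break                              # partition is stationary
        A = new_A
    dist = sum(d[A == r] @ ((Z[A == r] - mu[r]) ** 2).sum(-1) for r in range(R))
    return A, dist
```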
$= \sum_p u_p^\top u_p - \sum_r \sum_{p,p' \in A_r} d_p^{1/2} d_{p'}^{1/2}\, u_p^\top u_{p'} / (e_r^\top D e_r) = R - \sum_r e_r^\top D^{1/2} U U^\top D^{1/2} e_r / (e_r^\top D e_r) = J(W, e).$
This theorem has an immediate algorithmic implication: to minimize the cost function
J(W, e) with respect to the partition e, we can use a weighted K-means algorithm. The
resulting algorithm is presented in Figure 1. While K-means is often used heuristically as
a post-processor for spectral clustering [13], our approach provides a mathematical
foundation for the use of K-means, and yields a specific weighted form of K-means that is
appropriate for the problem.
2.4 Minimizing with respect to the similarity matrix
When the partition e is given, we can consider minimization with respect to W . As we
have suggested, intuitively this has the effect of yielding a matrix W such that the result of
spectral clustering with that W is as close as possible to e. We now make this notion precise, by showing that the cost function J(W, e) is an upper bound on the distance between
the partition e and the result of spectral clustering using the similarity matrix W .
The metric between two partitions e = (e_r) and f = (f_s), with R and S clusters respectively,
is taken to be [14]:

$d(e, f) = \frac{1}{2} \Big\| \sum_r \frac{e_r e_r^\top}{e_r^\top e_r} - \sum_s \frac{f_s f_s^\top}{f_s^\top f_s} \Big\|_F^2 = \frac{R+S}{2} - \sum_{r,s} \frac{(e_r^\top f_s)^2}{(e_r^\top e_r)(f_s^\top f_s)}. \qquad (2)$

This measure is always between zero and (R + S)/2 − 1, and is equal to zero if and only if e ≡ f.
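The distance (2) is straightforward to compute from label vectors; a minimal sketch:

```python
import numpy as np

def partition_distance(e_labels, f_labels):
    """d(e, f) from Eq. (2), with partitions given as label vectors."""
    e_labels, f_labels = np.asarray(e_labels), np.asarray(f_labels)
    E = np.eye(e_labels.max() + 1)[e_labels]   # column r is e_r
    F = np.eye(f_labels.max() + 1)[f_labels]
    R, S = E.shape[1], F.shape[1]
    cross = (E.T @ F) ** 2 / np.outer(E.sum(0), F.sum(0))
    return (R + S) / 2 - cross.sum()

print(partition_distance([0, 0, 1, 1], [0, 0, 1, 1]))  # 0.0 (identical)
print(partition_distance([0, 0, 1, 1], [0, 1, 0, 1]))  # 1.0 (maximally crossed)
```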
The following theorem shows that if we can perform weighted K-means exactly, we obtain
a bound on the performance of our spectral clustering algorithm (for a proof, see [11]):
Theorem 2 Let $\eta = \max_p D_{pp} / \min_p D_{pp} \ge 1$. If $e(W) = \arg\min_e J(W, e)$, then for all
partitions e, we have $d(e, e(W)) \le 4\eta\, J(W, e)$.
3 Approximation of the cost function
In order to minimize the cost function J(W, e) with respect to W , which is the topic of
Section 4, we need to optimize a function of the R-th principal subspace of the matrix
D^{-1/2}WD^{-1/2}. In this section, we show how we can compute a differentiable approximation of the projection operator on this subspace.
3.1 Approximation of eigensubspace

Let X ∈ R^{P×P} be a real symmetric matrix. We assume that its eigenvalues are ordered by magnitude: |λ_1| ≥ |λ_2| ≥ ··· ≥ |λ_P|. We assume that |λ_R| > |λ_{R+1}| so that the R-th principal subspace E_R is well defined, with orthogonal projection Π_R.
Our approximations are based on the power method to compute eigenvectors. It is well known that for almost all vectors v, the ratio X^q v / ||X^q v|| converges to an eigenvector corresponding to the largest eigenvalue [12]. The same method can be generalized to the computation of dominant eigensubspaces: if V is a matrix in R^{P×R}, the subspace generated by the R columns of X^q V will tend to the principal eigensubspace of X. Note that since we are interested only in subspaces, and in particular the orthogonal projection operators on those subspaces, we can choose any method for finding an orthonormal basis of range(X^q V). The QR decomposition is fast and stable and is usually the method used to compute such a basis (the algorithm is usually referred to as "orthogonal iteration" [12]). However this does not lead to a differentiable function. We develop a different approach which does yield a differentiable function, as made precise in the following proposition (for a proof, see [11]):
Proposition 3 Let V ∈ R^{P×R} such that $\eta = \max_{u \in E_R(X)^\perp,\, v \in \mathrm{range}(V)} \cos(u, v) < 1$. Then the function $Y \mapsto \tilde{\Pi}_R(Y) = M (M^\top M)^{-1} M^\top$, where $M = Y^q V$, is $C^\infty$ in a neighborhood of X, and we have
$$\big\| \tilde{\Pi}_R(X) - \Pi_R \big\|_2 \leq \frac{\eta}{(1 - \eta^2)^{1/2}} \left( |\lambda_{R+1}| / |\lambda_R| \right)^q .$$
This proposition shows that as q tends to infinity, the range of X^q V will tend to the principal eigensubspace. The rate of convergence is determined by the (multiplicative) eigengap |λ_{R+1}|/|λ_R| < 1: it is usually hard to compute the principal subspace of matrices with eigengap close to one. Note that taking powers of matrices without care can lead to disastrous results [12]. By using successive QR iterations, the computations can be made stable and the same technique can be used for the computation of the derivatives.
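A minimal sketch of this construction (ours, not the authors' code): we form M = X^q V by repeated multiplication with a scalar rescaling for numerical safety (rescaling M leaves range(M), and hence the projection, unchanged), then return M(MᵀM)⁻¹Mᵀ.

import numpy as np

def approx_projection(X, V, q):
    """Differentiable approximation Pi~_R(X) = M (M^T M)^{-1} M^T with M = X^q V.

    Sketch only: plain repeated multiplication can over/underflow for large q;
    the scalar rescaling below controls magnitudes, while the paper's QR-based
    orthogonal iterations are the fully stable variant.
    """
    M = V.copy()
    for _ in range(q):
        M = X @ M
        M /= np.linalg.norm(M)   # global rescaling leaves range(M) unchanged
    # M (M^T M)^{-1} M^T via a linear solve rather than an explicit inverse.
    return M @ np.linalg.solve(M.T @ M, M.T)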
3.2 Potentially hard eigenvalue problems
In most of the literature on spectral clustering, it is taken for granted that the eigenvalue problem is easy to solve. It turns out that in many situations, the (multiplicative) eigengap is very close to one, making the eigenvector computation difficult (examples are given in the next section). We acknowledge this potential problem by averaging over several initializations of the original subspace V. More precisely, let (V_m), m = 1, ..., M, be M subspaces of dimension R. Let $B_m = \Pi(\mathrm{range}((D^{-1/2} W D^{-1/2})^q V_m))$ be the approximations of the projections on the R-th principal subspace² of D^{-1/2} W D^{-1/2}. The cost function that we use is the average error $F(W, \Pi_0(e)) = \frac{1}{2M} \sum_{m=1}^{M} \| B_m - \Pi_0 \|_F^2$. This cost function can be rewritten as the distance between the average of the B_m and Π_0 plus the variance of the approximations, thus explicitly penalizing the non-convergence of the power iterations. We choose V_m to be equal to D^{1/2} times a set of R indicator vectors corresponding to subsets of each cluster. In simulations, we used q = 128, M = R², and subsets containing 2/(log₂ q + 1) times the number of original points in the clusters.
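Reusing approx_projection from the sketch above, the averaged cost of this section can be written as follows (again a sketch with our own names; Pi0 stands for the target projection Π_0(e) built from the known partition as in the paper):

import numpy as np

def averaged_cost(W, Pi0, V_list, q=128):
    """F(W, Pi0) = 1/(2M) * sum_m ||B_m - Pi0||_F^2 over the M initializations V_m."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))                  # D^{-1/2} W D^{-1/2}
    Bs = [approx_projection(S, V, q) for V in V_list]
    return sum(np.linalg.norm(B - Pi0, 'fro') ** 2 for B in Bs) / (2 * len(Bs))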
3.3 Empirical comparisons
In this section, we study the ability of various cost functions to track the gold standard error measure in Eq. (2) as we vary the parameter α in the similarity matrix $W_{pp'} = \exp(-\alpha \| x_p - x_{p'} \|^2)$. We study the cost function J(W, e), its approximation based on the power method presented in Section 3, and two existing approaches, one based on a Markov chain interpretation of spectral clustering [15] and one based on the alignment [16] of D^{-1/2} W D^{-1/2} and Π_0. We carry out this experiment for the simple clustering example shown in Figure 2(a).

²The matrix D^{-1/2} W D^{-1/2} always has the same largest eigenvalue 1, with eigenvector D^{1/2} 1, and we could consider instead the (R−1)-st principal subspace of $D^{-1/2} W D^{-1/2} - D^{1/2} \mathbf{1} \mathbf{1}^\top D^{1/2} / (\mathbf{1}^\top D \mathbf{1})$.
Figure 2: Empirical comparison of cost functions. (a) Data. (b) Eigengap of the similarity matrix as a function of α. (c) Gold standard clustering error (solid), spectral cost function J (dotted), and its approximation based on the power method (dashed). (d) Gold standard clustering error (solid), the alignment (dashed), and a Markov-chain-based cost, divided by 16 (dotted).
This apparently simple toy example captures much of the core difficulty of spectral clustering: nonlinear separability and thinness/sparsity of clusters (any point has very few near neighbors belonging to the same cluster, so that the weighted graph is sparse). In particular, in Figure 2(b) we plot the eigengap of the similarity matrix as a function of α, noting that at the optimum this gap is very close to one, and thus the eigenvalue problem is hard to solve.

In Figure 2(c) and (d), we plot the four cost functions against the gold standard. The gold standard curve shows that the optimal α lies near 2.5 on a log scale, and as seen in Figure 2(c), the minima of the new cost function and its approximation lie near this value. As seen in Figure 2(d), on the other hand, the other two cost functions show a poor match to the gold standard, and yield minima far from the optimum.
The problem with the alignment and Markov-chain-based cost functions is that these functions essentially measure the distance between the similarity matrix W (or a normalized version of W) and a matrix T which (after permutation) is block-diagonal with constant blocks. Unfortunately, in examples like the one in Figure 2, the optimal similarity matrix is very far from being block-diagonal with constant blocks. Rather, given that data points that lie in the same ring are in general far apart, the blocks are very sparse, not constant and full. Methods that try to find constant blocks cannot find the optimal matrices in these cases. In the language of spectral graph partitioning, where we have a weighted graph with weights W, each cluster is a connected but very sparse graph. The power W^q corresponds to the q-th power of the graph; i.e., the graph in which two vertices are linked by an edge if and only if they are linked by a path of length no more than q in the original graph. Thus taking powers can be interpreted as "thickening" the graph to make the clusters more apparent, while not changing the eigenstructure of the matrix (taking powers of symmetric matrices only changes the eigenvalues, not the eigenvectors).
4 Learning the similarity matrix

We now turn to the problem of learning the similarity matrix from data. We assume that we are given one or more sets of data for which the desired clustering is known. The goal is to design a "similarity map," that is, a mapping from datasets of elements in X to the space of symmetric matrices with nonnegative elements. To turn this into a parametric learning problem, we focus on similarity matrices that are obtained as Gram matrices of a kernel function k(x, y) defined on X × X. In particular, for concreteness and simplicity, we restrict ourselves in this paper to the case of Euclidean data (X = R^F) and a diagonally-scaled Gaussian kernel $k_\alpha(x, y) = \exp(-(x - y)^\top \mathrm{diag}(\alpha)(x - y))$, where α ∈ R^F, while noting that our methods apply more generally.
4.1 Learning algorithm

We assume that we are given N datasets D_n, n ∈ {1, ..., N}, of points in R^F. Each dataset D_n is composed of P_n points x_{np}, p ∈ {1, ..., P_n}. Each dataset is segmented, that is, for each n we know the partition e_n, so that the "target" matrix Π_0(e_n, α) can be computed for each dataset. For each n, we have a similarity matrix W_n(α). The cost function that we use is $H(\alpha) = \frac{1}{N} \sum_n F(W_n(\alpha), \Pi_0(e_n, \alpha)) + C \|\alpha\|_1$. The ℓ1 penalty serves as a feature selection term, tending to make the solution sparse. The learning algorithm is the minimization of H(α) with respect to α ∈ R^F_+, using the method of conjugate gradient with line search.
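A minimal sketch of this objective follows (our own names; cost_F stands for the subspace cost F of Section 3, e.g. the averaged_cost sketch above, and the conjugate-gradient minimization itself is omitted):

import numpy as np

def similarity(Xn, alpha):
    """Diagonally scaled Gaussian kernel W_pp' = exp(-(x_p - x_p')^T diag(alpha) (x_p - x_p'))."""
    diff = Xn[:, None, :] - Xn[None, :, :]      # (P, P, F) pairwise differences
    return np.exp(-(diff ** 2 @ alpha))          # contract the feature axis with alpha

def H(alpha, datasets, targets, C, cost_F):
    """H(alpha) = (1/N) sum_n F(W_n(alpha), Pi0(e_n)) + C * ||alpha||_1."""
    vals = [cost_F(similarity(Xn, alpha), Pi0) for Xn, Pi0 in zip(datasets, targets)]
    return np.mean(vals) + C * np.abs(alpha).sum()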
Since the complexity of the cost function increases with q, we start the minimization with small q and gradually increase q up to its maximum value. We have observed that for small q, the function to optimize is smoother and thus easier to optimize; in particular, the long plateaus of constant values are less pronounced.
Testing. The output of the learning algorithm is a vector α ∈ R^F. In order to cluster previously unseen datasets, we compute the similarity matrix W and use the algorithm of Figure 1. In order to further enhance performance, we can also adopt an idea due to [13]: we hold the direction of α fixed but perform a line search on its norm. This yields the real number λ such that the weighted distortion obtained after application of the spectral clustering algorithm of Figure 1, with the similarity matrices defined by λα, is minimum.³
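This tuning step can be sketched as a one-dimensional scan over the scale (our own names: cluster_distortion stands for a routine that runs the algorithm of Figure 1 and returns its weighted distortion, similarity is the kernel sketch above, and the grid of candidate scales is an assumption):

import numpy as np

def tune_norm(alpha, Xtest, cluster_distortion, scales=np.logspace(-2, 2, 41)):
    """Hold the direction of alpha fixed; pick the scale minimizing the weighted
    distortion of spectral clustering with the similarity matrix W(scale * alpha)."""
    scores = [cluster_distortion(similarity(Xtest, s * alpha)) for s in scales]
    return scales[int(np.argmin(scores))]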
4.2 Simulations

We performed simulations on synthetic datasets in two dimensions, where we consider datasets similar to the one in Figure 2, with two rings whose relative distance is constant across samples (but whose relative orientation has a random direction). We add D irrelevant dimensions of the same magnitude as the two relevant variables. The goal is thus to learn the diagonal scale α ∈ R^{D+2} of a Gaussian kernel that leads to the best clustering on unseen data. We learn α from N sample datasets (N = 1 or 10), and compute the clustering error of our algorithm with and without adaptive tuning of the norm of α during testing (as described in Section 4.1) on ten previously unseen datasets. We compare to an approach that does not use the training data: α is taken to be the vector of all ones and we again search over the best possible norm during testing (we refer to this method as "no learning"). We report results in Table 1. Without feature selection, the performance of spectral clustering degrades very rapidly when the number of irrelevant features increases, while our learning approach is very robust, even with only one training dataset.
5 Conclusion

We have presented two algorithms: one for spectral clustering and one for learning the similarity matrix. These algorithms can be derived as the minimization of a single cost function with respect to its two arguments. This cost function depends directly on the eigenstructure of the similarity matrix. We have shown that it can be approximated efficiently using the power method, yielding a method for learning similarity matrices that can cluster effectively in cases in which non-adaptive approaches fail. Note in particular that our new approach yields a spectral clustering method that is significantly more robust to irrelevant features than current methods.
We are currently applying our algorithm to problems in speech separation and image segmentation, in particular with the objective of selecting features from among the numerous features that are available in these domains [6, 7]. The number of points in such datasets can be very large and we have developed efficient implementations of both learning and clustering based on sparsity and low-rank approximations [11].

³In [13], this procedure is used to learn one parameter of the similarity matrix with no training data; it cannot be used directly here to learn a more complex similarity matrix with more parameters, because it would lead to overfitting.
Table 1: Performance on synthetic datasets: clustering errors (multiplied by 100) for the method without learning (but with tuning) and for our learning method with and without tuning, with N = 1 or 10 training datasets; D is the number of irrelevant features.

  D | no learning | learning w/o tuning | learning with tuning
    |             |   N=1       N=10    |   N=1       N=10
  0 |      0      |   15.5      10.5    |    0          0
  1 |    60.8     |   37.7       9.5    |    0          0
  2 |    79.8     |   36.9       9.5    |    0          0
  4 |    99.8     |   37.8       9.7    |    0.4        0
  8 |    99.8     |   37.0      10.7    |    0          0
 16 |    99.7     |   38.8      10.9    |   14          0
 32 |    99.9     |   38.9      15.1    |   14.6        6.1
Acknowledgments

We would like to acknowledge support from NSF grant IIS-9988642, MURI ONR N00014-01-1-0890, and a grant from Intel Corporation.
References
[1] K. Wagstaff, C. Cardie, S. Rogers, and S. Schrödl. Constrained K-means clustering with background knowledge. In ICML, 2001.
[2] E. P. Xing, A. Y. Ng, M. I. Jordan, and S. Russell. Distance metric learning, with application to clustering with side-information. In NIPS 15, 2003.
[3] S. X. Yu and J. Shi. Grouping with bias. In NIPS 14, 2002.
[4] S. D. Kamvar, D. Klein, and C. D. Manning. Spectral learning. In IJCAI, 2003.
[5] D. Martin, C. Fowlkes, D. Tal, and J. Malik. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In ICCV, 2001.
[6] G. J. Brown and M. P. Cooke. Computational auditory scene analysis. Computer Speech and Language, 8:297-333, 1994.
[7] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Trans. PAMI, 22(8):888-905, 2000.
[8] H. Zha, C. Ding, M. Gu, X. He, and H. Simon. Spectral relaxation for K-means clustering. In NIPS 14, 2002.
[9] P. K. Chan, M. D. F. Schlag, and J. Y. Zien. Spectral K-way ratio-cut partitioning and clustering. IEEE Trans. CAD, 13(9):1088-1096, 1994.
[10] M. Gu, H. Zha, C. Ding, X. He, and H. Simon. Spectral relaxation models and structure analysis for K-way graph clustering and bi-clustering. Technical report, Penn. State Univ., Computer Science and Engineering, 2001.
[11] F. R. Bach and M. I. Jordan. Learning spectral clustering. Technical report, UC Berkeley, available at www.cs.berkeley.edu/~fbach, 2003.
[12] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, 1996.
[13] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. In NIPS 14, 2001.
[14] L. J. Hubert and P. Arabie. Comparing partitions. Journal of Classification, 2:193-218, 1985.
[15] M. Meila and J. Shi. Learning segmentation by random walks. In NIPS 13, 2002.
[16] N. Cristianini, J. Shawe-Taylor, and J. Kandola. Spectral kernel methods for clustering. In NIPS 14, 2002.
1,527 | 2,389 | Information Dynamics and Emergent Computation in Recurrent Circuits of Spiking Neurons

Thomas Natschläger, Wolfgang Maass
Institute for Theoretical Computer Science
Technische Universität Graz
A-8010 Graz, Austria
{tnatschl, maass}@igi.tugraz.at

Abstract

We employ an efficient method using Bayesian and linear classifiers for analyzing the dynamics of information in high-dimensional states of generic cortical microcircuit models. It is shown that such recurrent circuits of spiking neurons have an inherent capability to carry out rapid computations on complex spike patterns, merging information contained in the order of spike arrival with previously acquired context information.

1 Introduction
Common analytical tools of computational complexity theory cannot be applied to recurrent circuits with complex dynamic components, such as biologically realistic neuron models and dynamic synapses. In this article we explore the capability of information-theoretic concepts to throw light on emergent computations in recurrent circuits of spiking neurons. This approach is attractive since it may potentially provide a solid mathematical basis for understanding such computations. But it is methodologically difficult because of systematic errors caused by under-sampling problems that are ubiquitous even in extensive computer simulations of relatively small circuits. Previous work on these methodological problems had focused on estimating the information in spike trains, i.e. temporally extended protocols of the activity of one or a few neurons. In contrast to that, this paper addresses methods for estimating the information that is instantly available to a neuron that has synaptic connections to a large number of neurons.

We will define the specific circuit model used for our study in section 2 (although the methods that we apply appear to be useful for a much wider class of analog and digital recurrent circuits). The combination of information-theoretic methods with methods from machine learning that we employ is discussed in section 3. The results of applications of these methods to the analysis of the distribution and dynamics of information in a generic recurrent circuit of spiking neurons are presented in section 4. Applications of these methods to the analysis of emergent computations are discussed in section 5.
Figure 1: Input distribution used throughout the paper. Each input consists of 5 spike trains of length 800 ms generated from 4 segments of length 200 ms each. A For each segment 2 templates 0 and 1 were generated randomly (Poisson spike trains with a frequency of 20 Hz). B The actual input spike trains were generated by choosing randomly for each segment i, i = 1, ..., 4, one of the two associated templates (s_i = 0 or s_i = 1), and then generating a noisy version by moving each spike by an amount drawn from a Gaussian distribution with mean 0 and SD 4 ms.
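This input distribution is straightforward to reproduce; the following sketch (our own parameter names) draws the Poisson templates once and then jitters each spike per trial:

import numpy as np

def make_templates(rng, n_channels=5, n_segments=4, seg_len=0.2, rate=20.0):
    """Two Poisson templates (0 and 1) per segment, for each input channel."""
    return [[[np.sort(rng.uniform(0, seg_len, rng.poisson(rate * seg_len)))
              for _ in range(n_channels)]
             for _ in range(2)]              # templates 0 and 1
            for _ in range(n_segments)]

def make_input(rng, templates, bits, jitter=0.004, seg_len=0.2):
    """Concatenate the chosen templates (bits[i] in {0, 1}) and jitter each spike
    by Gaussian noise with SD `jitter` (4 ms). Returns one array per channel."""
    trains = [[] for _ in range(len(templates[0][0]))]
    for i, b in enumerate(bits):
        for ch, spikes in enumerate(templates[i][b]):
            trains[ch].extend(spikes + i * seg_len + rng.normal(0, jitter, spikes.size))
    return [np.sort(np.asarray(t)) for t in trains]

# Example usage: the input of Fig. 1B corresponds to bits = [1, 0, 1, 0].
rng = np.random.default_rng(0)
spike_trains = make_input(rng, make_templates(rng), bits=[1, 0, 1, 0])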
2 Our study case: A Generic Neural Microcircuit Model
As our study case for analyzing information in high-dimensional circuit states we used a randomly connected circuit with sparse, primarily local connectivity consisting of 800 leaky integrate-and-fire (I&F) neurons, 20% of which were randomly chosen to be inhibitory. The 800 neurons of the circuit were arranged on two 20 × 20 layers L1 and L2. Circuit inputs consisting of 5 spike trains were injected into a randomly chosen subset of neurons in layer L1 (the connection probability was set to 0.25 for each of the 5 input channels and each neuron in layer L1). We modeled the (short term) dynamics of synapses according to the model proposed in [1], with the synaptic parameters U (use), D (time constant for depression), F (time constant for facilitation) randomly chosen from Gaussian distributions that model empirical data for such connections. Parameters of neurons and synapses were chosen as in [2] to fit data from microcircuits in rat somatosensory cortex (based on [3] and [1]).

Since neural microcircuits in the nervous system often receive salient input in the form of spatio-temporal firing patterns (e.g. from arrays of sensory neurons, or from other brain areas), we have concentrated on circuit inputs of this type. Such a firing pattern could for example represent visual information received during a saccade, or the neural representation of a phoneme or syllable in auditory cortex. Information dynamics and emergent computation in recurrent circuits of spiking neurons were investigated for input streams over 800 ms consisting of sequences of noisy versions of 4 such firing patterns. We restricted our analysis to the case where in each of the four 200 ms segments one of two template patterns is possible, see Fig. 1. In the following we write s_i = 1 (s_i = 0) if a noisy version of template 1 (0) is used in the i-th time segment of the circuit input.
Fig. 2 shows the response of a circuit of spiking neurons (drawn from the distribution specified above) to the input stream exhibited in Fig. 1B. Each frame in Fig. 2 shows the current firing activity of one layer of the circuit at a particular point t in time. Since in such a rather small circuit (compared for example with the estimated 10⁵ neurons below a mm² of cortical surface) very few neurons fire at any given ms, we have replaced each spike by a pulse whose amplitude decays exponentially with a time constant of 30 ms. This models the impact of a spike on the membrane potential of a generic postsynaptic neuron. The resulting vector r(t) = ⟨r_1(t), ..., r_800(t)⟩ consisting of 800 analog values from the
Figure 2: Snapshots of the first 400 components of the circuit state r(t) (corresponding to the neurons in the layer L1) at times t = 280, 290, 300, and 310 ms for the input shown at the bottom of Fig. 1. Black denotes high activity, white no activity. A spike at time t_s ≤ t adds a value of exp(−(t − t_s)/(30 ms)) to the corresponding component of the state r(t).
800 neurons in the circuit is exactly the "liquid state" of the circuit at time t in the context of the abstract computational model introduced in [2]. In the subsequent sections we will analyze the temporal dynamics of the information contained in these momentary circuit states r(t).¹
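The state r(t) can be computed directly from the recorded spike times; a minimal sketch (our own helper name):

import numpy as np

def liquid_state(spike_times, t, tau=0.030):
    """r_i(t) = sum over spikes t_s <= t of exp(-(t - t_s)/tau), with tau = 30 ms.

    spike_times: list (one entry per neuron) of arrays of spike times in seconds.
    Returns the state vector r(t) as a NumPy array."""
    return np.array([np.exp(-(t - ts[ts <= t]) / tau).sum() for ts in spike_times])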
3 Methods for Analyzing the Information contained in Circuit States
The mutual information MI(X, R) between two random variables X and R can be defined by MI(X, R) = H(X) − H(X|R), where $H(X) = -\sum_{x \in \mathrm{Range}(X)} p(x) \log p(x)$ is the entropy of X, and H(X|R) is the expected value (with regard to R) of the conditional entropy of X given R, see e.g. [4]. It is well known that empirical estimates of the entropy tend to underestimate the true entropy of a random variable (see e.g. [5, 6]). Hence in situations where the true value of H(X) is known (as is typically the case in neuroscience applications where X represents the stimulus, whose distribution is controlled by the experimentalist), the generic underestimate of H(X|R) yields a generic overestimate of the mutual information MI(X, R) = H(X) − H(X|R) for finite sample sizes. This undersampling effect has been addressed in a number of studies (see e.g. [7], [8] and [9] and the references therein), and has turned out to be a serious obstacle for a widespread application of information-theoretic methods to the analysis of neural computation. The seriousness of this problem becomes obvious from results achieved for our study case of a generic neural microcircuit, shown in Fig. 3A. The dashed line shows the dependence of "raw" estimates MI_raw of the mutual information MI(s_2, R) on the sample size² N, which ranges here from 10³ to 2 × 10⁵. The raw estimate of MI(s_2, R) results from a direct application of the definition of MI to the observed occupancy frequencies for a discrete set of bins³, where R consists here of just d = 5 or d = 10 components of the 800-dimensional circuit state r(t) for t = 660 ms, and s_2 is the bit encoded by the second input segment. For more components d of the current circuit state r(t), e.g. for estimating the mutual information MI(s_2, R) between the preceding circuit input s_2 and the current firing activity in a subcircuit consisting of d = 20 or more neurons, even sample sizes beyond 10⁶ are likely to severely overestimate this mutual information.
¹One should note that these circuit states do not reflect the complete current state of the underlying dynamical system, only those parts of the state of the dynamical system that are in principle "visible" for neurons outside the circuit. The current values of the membrane potential of neurons in the circuit and the current values of internal variables of dynamic synapses of the circuit are not visible in this sense.
²In our case the sample size N refers to the number of computer simulations of the circuit response to new drawings of circuit inputs, with new drawings of temporal jitter in the input spike trains and initial conditions of the neurons in the circuit.
³For direct estimates of the MI the analog value of each component of the circuit state r(t) has to be divided into discrete bins. We first linearly transformed each component of r(t) such that it has zero mean and variance σ² = 1.0. The transformed components are then binned with a resolution of ∆ = 0.5. This means that there are four bins in the range ±σ.
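For reference, the "raw" plug-in estimate described above amounts to counting occupancy frequencies of the binned states; a minimal sketch (our own helper names), using MI = H(X) + H(R) − H(X, R):

import numpy as np
from collections import Counter

def entropy(counts):
    """Plug-in entropy (in bits) from a Counter of occupancy frequencies."""
    p = np.array(list(counts.values()), dtype=float)
    p /= p.sum()
    return -(p * np.log2(p)).sum()

def mi_raw(x, r_binned):
    """Plug-in ('raw') MI estimate from N paired samples; systematically biased
    upward for small N, as discussed above.

    x        : length-N sequence of discrete stimulus labels (e.g. the bit s_2).
    r_binned : (N, d) array of discretized circuit-state components."""
    rk = [tuple(row) for row in np.asarray(r_binned)]
    return (entropy(Counter(x)) + entropy(Counter(rk))
            - entropy(Counter(zip(x, rk))))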
[Figure 3 plots, panels (A)-(F): (A) corrected MI (d=5, s_2), showing MI_raw, MI_naive, MI_infinity; (B) lower bounds (d=5, s_2) for the Bayes and linear classifiers; (C) lower bounds (d=10, s_2); (D) entropy of states (d=5), showing H(R), H(R|X) and their Ma-bounds; (E) lower bounds (d=5, s_3); (F) lower bounds (d=10, s_3); x-axes: sample size, 10³ to 10⁵.]
Figure 3: Estimated mutual information depends on sample size. In all panels d denotes the number of components of the circuit state r(t) at time t = 660 ms (or equivalently the number of neurons considered). A Dependence of the "raw" estimate MI_raw and two corrected estimates MI_naive and MI_infinity of the mutual information MI(s_2, R) (see text). B Lower bounds MI(s_2, h(R)) for the mutual information obtained via classifiers h which are trained to predict the actual value of s_2 given the circuit state r(t). Results are shown for a) an empirical Bayes classifier (discretization ∆ = 0.5, see footnotes 3 and 5), b) a linear classifier trained on the discrete (∆ = 0.5) data, and c) a linear classifier trained on the analog data (∆ = 0). In the case of the Bayes classifier MI(s_2, h(R)) was estimated by employing a leave-one-out procedure (which is computationally efficient for a Bayes classifier), whereas for the linear classifiers a test set of size 5 × 10⁴ was used (hence no results beyond a sample size of 1.5 × 10⁵). C Same as B but for d = 10. D Estimates of the entropies H(R) and H(R|X). The "raw" estimates are compared with the corresponding Ma-bounds (see text). The filled triangle marks the sample size from which on the Ma-bound is below the raw estimate. E Same as B but for MI(s_3, h(R)). F Same as E but for d = 10.
Several methods for correcting this bias towards overestimation of MI have been suggested in the literature. In section 3.1 of [7] it is proposed to subtract one of two possible bias correction terms, B_naive and B_full, from the raw estimate MI_raw of the mutual information. The effect of subtracting B_naive is shown for d = 5 components of r(t) in Fig. 3A. This correction is too optimistic for these applications, since the corrected estimate MI_naive = MI_raw − B_naive at small sample sizes (e.g. 10⁴) is still substantially larger than the raw estimate MI_raw at large sample sizes (e.g. 10⁵). The subtraction of the second proposed term B_full is not applicable in our situation because it yields for MI_full = MI_raw − B_full values lower than zero for all considered sample sizes. The reason is that B_full is proportional to the quotient "number of possible response bins" / N, and the number of possible response bins is in the order of 30¹⁰ in this example. Another way to correct MI_raw is proposed in [10]. This approach is based on a series expansion of MI in 1/N [6] and is effectively a method to get an empirical estimate MI_infinity of the mutual information for infinite sample size (N → ∞). It can be seen in Fig. 3A that for moderate sample sizes MI_infinity also yields too optimistic estimates for MI.

Another method for dealing with generic overestimates of MI has been proposed in [10]. This method is based on the equation MI(X, R) = H(R) − H(R|X) and compares the raw estimates of H(R) and H(R|X) with the so-called Ma-bounds, and suggests to judge raw estimates of H(R) and H(R|X), and hence raw estimates of MI(X, R) = H(R) − H(R|X), as being trustworthy as soon as the sample size is so large that the corresponding Ma-bounds (which are conjectured to be less affected by undersampling) assume values below the raw estimates of H(R) and H(R|X). According to this criterion a sample size of 9 × 10³ would be sufficient in the case of 5-neuron subcircuits (i.e., d = 5 components of r(t)), cf. Fig. 3D.⁴ However, Fig. 3A shows that the raw estimate MI_raw is still too high for N = 9 × 10³, since MI_raw assumes a substantially smaller value at N = 2 × 10⁵.
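As an illustration of such first-order corrections, the following sketch implements the classical Miller-Madow entropy correction, which adds (m − 1)/(2N) (m = number of occupied bins) to each plug-in entropy term before forming the MI. We do not claim this coincides exactly with the B_naive of [7]; it merely shows the flavor of this family of corrections.

import numpy as np
from collections import Counter

def entropy_mm(labels, base=2.0):
    """Miller-Madow corrected entropy: plug-in estimate plus (m - 1)/(2N) nats,
    converted to the requested base (bits by default)."""
    counts = np.array(list(Counter(labels).values()), dtype=float)
    n = counts.sum()
    p = counts / n
    h_nats = -(p * np.log(p)).sum() + (len(counts) - 1) / (2.0 * n)
    return h_nats / np.log(base)

# Corrected MI, by analogy with mi_raw above:
# mi_mm = entropy_mm(x) + entropy_mm(rk) - entropy_mm(list(zip(x, rk)))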
In view of this unreliability of even corrected estimates for the mutual information, we have employed standard methods from machine learning in order to derive lower bounds for the MI (see for example [8] and [9] for references to preceding related work). This method is computationally feasible and yields, with not too large sample sizes, reliable lower bounds for the MI even for large numbers of components of the circuit state. In fact, we will apply it in sections 4 and 5 even to the full 800-component circuit state r(t). This method is quite simple. According to the data processing inequality [4] one has MI(X, R) ≥ MI(X, h(R)) for any function h. Obviously MI(X, h(R)) is easier to estimate than MI(X, R) if the dimension of h(R) is substantially lower than that of R, especially if h(R) assumes just a few discrete values. Furthermore the difference between MI(X, R) and MI(X, h(R)) is minimal if h(R) throws away only that information in R that is not relevant for predicting the value of X. Hence it makes sense to use as h a predictor or classifier that has been trained to predict the current value of X. Similar approaches for estimating a lower bound were motivated by the idea of predicting the stimulus (X) given the neural response (R) (see [8], [9] and the references therein). To get an unbiased estimate for MI(X, h(R)) one has to make sure that MI(X, h(R)) is estimated on data which have not been used for the training of h. To make the best use of the data one can alternatively use cross-validation or even leave-one-out (see [11]) to estimate MI(X, h(R)). Fig. 3B, 3C, 3E, and 3F show for 3 different predictors h how the resulting lower bounds for the MI depend on the sample size N.
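A sketch of this lower-bound procedure (our own names; any linear model would do, here scikit-learn's logistic regression is used for h): train on one half of the trials, then estimate MI(X, h(R)) from the held-out (X, h(R)) pairs, which involves only a small contingency table.

import numpy as np
from collections import Counter
from sklearn.linear_model import LogisticRegression

def mi_lower_bound(states, x, train_frac=0.5, seed=0):
    """Estimate MI(X, h(R)) <= MI(X, R) with a linear classifier h.

    states : (N, d) circuit states r(t);  x : (N,) discrete labels (e.g. s_2).
    h is trained on a split disjoint from the one used to count (X, h(R)) pairs,
    as required for an unbiased estimate."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(x))
    n_tr = int(train_frac * len(x))
    tr, te = idx[:n_tr], idx[n_tr:]
    h = LogisticRegression(max_iter=1000).fit(states[tr], x[tr])
    pred = h.predict(states[te])
    def H(labels):
        p = np.array(list(Counter(labels).values()), dtype=float)
        p /= p.sum()
        return -(p * np.log2(p)).sum()
    # MI(X, h(R)) = H(X) + H(h(R)) - H(X, h(R)) on the held-out data.
    return H(x[te]) + H(pred) - H(list(zip(x[te], pred)))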
It is noteworthy that the lower bounds MI(X, h(R)) derived with the empirical Bayes classifier⁵ increase significantly with the sample size⁶ and converge quite well to the upper bounds MI_raw(X, R). This reflects the fact that the estimated joint probability density between X and R gets more and more accurate. Furthermore the computationally less demanding⁷ use of linear classifiers h also yields significant lower bounds for MI(X, R), especially if the true value of MI(X, R) is not too small. In our application this does not even require high numerical precision, since a coarse binning (see footnote 3) of the analog components of r(t) suffices; see Fig. 3B, C, E, F. All estimates of MI(X, R) in the subsequent sections are lower bounds MI(X, h(R)) computed via linear classifiers h.
⁴These kinds of results depend on a division of the space of circuit states into subspaces, which is required for the calculation of the Ma-bound. In our case we have chosen the subspaces such that the frequency counts of any two circuit states in the same subspace differ by at most 1.
⁵The empirical Bayes classifier operates as follows: given observed (and discretized) d components r^(d)(t) of the state r(t), it predicts the input which was observed most frequently for the given state components r^(d)(t) (maximum a posteriori classification, see e.g. [11]). If r^(d)(t) was not observed so far, a random guess about the input is made.
⁶In fact, in the limit N → ∞ the Bayes classifier is the optimal classifier for the discretized data in the sense that it would yield the lowest classification error, and hence the highest lower bound on mutual information, over all possible classifiers.
⁷In contrast to the Bayes classifier, the linear classifiers (both for analog and discrete data) yield already for relatively small sample sizes N good results which do not improve much with increasing N.
[Figure 4 plots: six panels of mutual information (0-1 bit) over time (0-0.8 s, with the four input segments s_1 to s_4 marked); the panels correspond to subset configurations a × d = 1×5, 5×5, 1×10, 1×20, 5×160, and 1×800.]
Figure 4: Information in subsets of neurons. Shown are lower bounds for the mutual information MI(s_i, h(R)) obtained with a linear classifier h operating on d components of the circuit state r(t). The numbers a × d to the right of each panel specify the number of components d used by the linear classifier, and for how many different choices a of such subsets of size d the results are plotted in that panel.
These types of lower bounds for MI(X, R) are of particular interest from the point of view of neural computation, since a linear classifier can in principle be approximated by a neuron that is trained (for example by a suitable variation of the perceptron learning rule) to extract information about X from the current circuit state R. Hence a high value of a lower bound MI(X, h(R)) for such h shows not only that information about X is present in the current circuit state R, but also that this information is in principle accessible for other neurons.
4 Distribution and Dynamics of Information in Circuit States
We have applied the method of estimating lower bounds for mutual information via linear classifiers described in the preceding section to analyze the spatial distribution and temporal dynamics of information for our study case described in section 2. Fig. 4 shows the temporal dynamics of information (estimated every 20 ms as described in section 3) about input bits s_i (encoded as described in section 2) for different components of the circuit state r(t), corresponding to different randomly drawn subsets of neurons in the circuit. One sees that even subsets of just 5 neurons absorb substantial information about the input bits s_i, however with a rather slow onset of the information uptake at the beginning of a segment and little memory retention when this information is overwritten by the next input segment. By merging the information from different subsets of neurons the uptake of new information gets faster and the memory retention grows. Note that for large sets of neurons (160 and 800) the information about each input bit s_i jumps up to its maximal value right at the beginning of the corresponding i-th segment of the input trains.
[Figure 5 plots: (A) MI about the input bits s_1 to s_4 in % of H(s); (B) and (C) MI for Boolean functions of s_1 and s_2 in % of H(f), including XOR(s_1, s_2) in panel (C); (D) parity(s_1, s_2, s_3) and parity(s_2, s_3, s_4); x-axes: time, 0-0.8 s.]
Figure 5: Emergent computations. A Dynamics of information about input bits as in the bottom row of Fig. 4. H(s) denotes the entropy of a segment s_i (which is 1 bit for i = 1, 2, 3, 4). B, C, D Lower bounds for the mutual information MI(f, h(R)) for various Boolean functions f(s_1, ..., s_4) obtained with a linear classifier h operating on the full 800-component circuit state R = r(t). H(f) denotes the entropy of a Boolean function f(s_1, ..., s_4) if the s_i are independently and uniformly drawn from {0, 1}.
5 Emergent Computation in Recurrent Circuits of Spiking Neurons
In this section we apply the same method to analyze the mutual information between the current circuit state and the target outputs of various computations on the information contained in the sequence of spatio-temporal spike patterns in the input stream to the circuit. This provides an interesting new method for analyzing neural computation, rather than just neural communication and coding. There exist 16 different Boolean functions f(s_1, s_2) that depend just on the first two of the 4 bits s_1, ..., s_4. Fig. 5B,C shows that all these Boolean functions f are autonomously computed by the circuit, in the sense that the current circuit state contains high mutual information with the target output f(s_1, s_2) of this function f. Furthermore the information about the result f(s_1, s_2) of this computation can be extracted linearly from the current circuit state r(t) (in spite of the fact that the computation of f(s_1, s_2) from the spike patterns in the input requires highly nonlinear computational operations). This is shown in Fig. 5B and 5C for those 5 Boolean functions of 2 variables that are nontrivial in the sense that their output really depends on both input variables. There exist 5 other Boolean functions which are nontrivial in this sense, which are just the negations of the 5 Boolean functions shown (and for which the mutual information analysis therefore yields exactly the same result). In Fig. 5D corresponding results are shown for parity functions that depend on three of the 4 bits s_1, s_2, s_3, s_4. These Boolean functions are the most difficult ones to compute in the sense that knowledge of just 1 or 2 of their input bits does not give any advantage in guessing the output bit.

One noteworthy feature in all these emergent computations is that information about the result of the computation is already present in the current circuit state long before the complete spatio-temporal input patterns that encode the relevant input bits have been received by the circuit. In fact, the computation of f(s_1, s_2) automatically uses just the temporal order of the first spikes in the pattern encoding s_2, and merges information contained in the order of these spikes with the "context" defined by the preceding input pattern. In this way the circuit automatically completes an ultra-rapid computation within just 20 ms of the beginning of the second pattern s_2. The existence of such ultra-rapid neural computations has previously already been inferred [12], but models that could explain the possibility of such ultra-rapid computations on the basis of generic models for recurrent neural microcircuits have been missing.
6 Discussion
We have analyzed the dynamics of information in high-dimensional circuit states of a generic neural microcircuit model. We have focused on that information which can be extracted by a linear classifier (a linear classifier may be viewed as a coarse model for the classification capability of a biological neuron). This approach also has the advantage that significant lower bounds for the information content of high-dimensional circuit states can already be achieved for relatively small sample sizes. Our results show that information about current and preceding circuit inputs is spread throughout the circuit in a rather uniform manner. Furthermore our results show that a generic neural microcircuit model has inherent capabilities to process new input in the context of other information that arrived several hundred ms ago, and that information about the outputs of numerous potentially interesting target functions automatically accumulates in the current circuit state. Such emergent computation in circuits of spiking neurons is extremely fast, and therefore provides an interesting alternative to models based on special-purpose constructions for explaining empirically observed [12] ultra-rapid computations in neural systems.

The method for analyzing information contained in high-dimensional circuit states that we have explored in this article for a generic neural microcircuit model should also be applicable to biological data from multi-unit recordings, fMRI, etc., since significant lower bounds for mutual information were achieved in our study case already for sample sizes in the range of a few hundred (see Fig. 3). In this way one could get insight into the dynamics of information and emergent computations in biological neural systems.
Acknowledgement: We would like to thank Henry Markram for inspiring discussions.
This research was partially supported by the Austrian Science Fund (FWF), project #
P15386.
References
[1] H. Markram, Y. Wang, and M. Tsodyks. Differential signaling via the same axon of neocortical pyramidal neurons. Proc. Natl. Acad. Sci., 95:5323-5328, 1998.
[2] W. Maass, T. Natschläger, and H. Markram. Real-time computing without stable states: A new framework for neural computation based on perturbations. Neural Computation, 14(11):2531-2560, 2002.
[3] A. Gupta, Y. Wang, and H. Markram. Organizing principles for a diversity of GABAergic interneurons and synapses in the neocortex. Science, 287:273-278, 2000.
[4] T. M. Cover and J. A. Thomas. Elements of Information Theory. Wiley, New York, 1991.
[5] M. S. Roulston. Estimating the errors on measured entropy and mutual information. Physica D, 125:285-294, 1999.
[6] S. Panzeri and A. Treves. Analytical estimates of limited sampling biases in different information measures. Network: Computation in Neural Systems, 7:87-107, 1996.
[7] G. Pola, S. R. Schultz, R. S. Petersen, and S. Panzeri. A practical guide to information analysis of spike trains. In R. Kötter, editor, Neuroscience Databases. A Practical Guide, chapter 10, pages 139-153. Kluwer Academic Publishers (Boston), 2003.
[8] L. Paninski. Estimation of entropy and mutual information. Neural Computation, 15:1191-1253, 2003.
[9] J. Hertz. Reading the information in the outcome of neural computation. Online available via http://www.nordita.dk/~hertz/papers/infit.ps.gz.
[10] S. P. Strong, R. Koberle, R. R. de Ruyter van Steveninck, and W. Bialek. Entropy and information in neural spike trains. Physical Review Letters, 80(1):197-200, 1998.
[11] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley & Sons, 2nd edition, 2001.
[12] S. Thorpe, D. Fize, and C. Marlot. Speed of processing in the human visual system. Nature, 381:520-522, 1996.
1,528 | 239 | Digital-Analog Hybrid Synapse Chips for Electronic Neural Networks

A. Moopenn, T. Duong, and A. P. Thakoor
Center for Space Microelectronics Technology
Jet Propulsion Laboratory / California Institute of Technology
Pasadena, CA 91109
ABSTRACT
Cascadable, CMOS synapse chips containing a cross-bar array of 32x32 (1024) programmable synapses have been fabricated as "building blocks" for fully parallel implementation of neural networks. The synapses are based on a hybrid digital-analog design which utilizes on-chip 7-bit data latches to store quantized weights and two-quadrant multiplying DAC's to compute weighted outputs. The synapses exhibit 6-bit resolution and excellent monotonicity and consistency in their transfer characteristics. A 64-neuron hardware incorporating four synapse chips has been fabricated to investigate the performance of feedback networks in optimization problem solving. In this study, a 7x7 one-to-one assignment net and the Hopfield-Tank 8-city traveling salesman problem net have been implemented in the hardware. The network's ability to obtain optimum or near-optimum solutions in real time has been demonstrated.
1 INTRODUCTION
A large number of electrically modifiable synapses is often required for fully parallel analog neural network hardware. Electronic synapses based on CMOS, EEPROM, as well as thin film technologies are actively being developed [1-5]. One preferred approach is based on a hybrid digital-analog design which can easily be implemented in CMOS with simple interface and analog circuitry. The hybrid design utilizes digital memories to store the synaptic weights and digital-to-analog converters to perform analog multiplication. A variety of synaptic chips based on such hybrid designs have been developed and used as "building blocks" in larger neural network hardware systems fabricated at JPL.
In this paper, the design and operational characteristics of the hybrid synapse chips
are described. The development of a 64-neuron hardware incorporating several of
the synapse chips is also discussed. Finally, a hardware implementation of two
global optimization nets, namely, the one-to-one assignment optimization net and
the Hopfield-Tank traveling salesman net [6], and their performance based on our
64-neuron hardware are discussed.
2 CHIP DESIGN AND ELECTRICAL CHARACTERISTICS
The basic design and operational characteristics of the hybrid digital-analog synapse chips are described in this section. A simplified block diagram of the chips is shown in Fig. 1. The chips consist of an address/data de-multiplexer, row and column address decoders, 64 analog input/output lines, and 1024 synapse cells arranged in the form of a 32x32 cross-bar matrix. The synapse cells along the i-th row have a common output, x_i, and similarly, synapses along the j-th column have a common input, y_j. The synapse input/output lines are brought off-chip for multi-chip expansion to a larger synaptic matrix. The synapse cell, based on a hybrid digital-analog design, essentially consists of a 7-bit static latch and a 7-bit, two-quadrant multiplying DAC.
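A behavioral sketch of one synapse cell (idealized, with our own names; the measured, non-ideal transfer curves are those of Fig. 3): the 7-bit word holds a sign bit and 6 magnitude bits, and the DAC scales a unit current derived from the synapse input.

def synapse_output(weight, unit_current):
    """Idealized 7-bit hybrid synapse: weight in [-63, +63] (sign + 6 magnitude
    bits) scales the binary-weighted current sources (two-quadrant multiply)."""
    assert -63 <= weight <= 63
    sign = -1 if weight < 0 else 1
    magnitude = abs(weight)          # realized by 6 binary-weighted current sources
    return sign * magnitude * unit_current

# Example: weight +31 with a 100 nA unit current gives a +3.1 uA output.
i_out = synapse_output(+31, 100e-9)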
Figure 1: Simplified block diagram of hybrid 32x32x7-bit synapse chip.
A circuit diagram of the 7-bit DAC is shown in Fig. 2. The DAC consists of a current input circuit, a set of binary-weighted current sources, and a current steering circuit. The current in the input circuit is mirrored by the binary-weighted current sources for all synapses along a column. In one version of the chips, a single long-channel FET is used to convert the synapse input voltage to a current. In addition, the gate of the transistor is connected internally to the gates of other long-channel transistors. This common gate is accessible off-chip and provides a means for controlling the overall "gain" of the synapses in the chip. In a second chip version, an external resistor is employed to perform input voltage to current conversion when a high linearity in the synapse transfer characteristics is desired.
Hybrid 32x32x7-bit synapse chips with and without long channel transistors were
fabricated through MOSIS using a 2-micron, n-well CMOS process. Typical
measured synapse response (I-V) curves from these chips are shown in Figs. 3a and
3b for weight values of 0, +/- 1, 3, 7, 15, 31, and 63. The curves in Fig. 3a were
obtained for a synapse incorporating an on-chip long-channel FET with a gate bias
of 5 volts. The non-linear synapse response is evident and can be seen to be
similar to that of a "threshold" current source. The non-linear behavior is mainly
attributed to the nonlinear drain characteristics of the long channel transistor. It
should be pointed out that synapses with such characteristics are especially suited
for neural networks with neurons operating in the high gain limit, in which case,
the nonlinearity may even be desirable. The set of curves in Fig. 3b were obtained
using an external 10-megaohm resistor for the V-I conversion. For input voltages greater than about twice the transistor's threshold voltage (~0.8 V), the synapse's current output is a highly linear function of the input voltage. The linear
characteristics achieved with the use of external resistors would be applicable in
feedforward nets with learning capabilities.
Figure 2: Circuit diagram of 7-bit multiplying DAC.
Figure 4 shows the measured output of the synapse as the weight is incremented
from -60 to +60. The synapse exhibits excellent monotonicity and step size
consistency. Based on a random sampling of synapses from several chips, the step
size standard deviation due to mismatched transistor characteristics is typically less
than 25 percent.
3 64-NEURON HARDWARE
The hybrid synapse chips are ideally suited for hardware implementation of
feedback neural networks for combinatorial global optimization problem solving
or associative recall where the synaptic weights are known a priori. For example,
in a Hopfield-type feedback net [7], the weights can be calculated directly from a
set of cost parameters or a set of stored vectors. The desired weights are
quantized and downloaded into the memories of the synapse chips. On the other
hand, in supervised learning applications, learning can be performed off-line, taking
into consideration the operating characteristics of the synapses, and the new
updated weights are simply reprogrammed into the synaptic hardware during each
training cycle.
[Figure 3 plots, panels (a) and (b): synapse output current versus input voltage (0-10 V).]
Figure 3: Transfer characteristics of a 7-bit synapse for weight values of 0, +/- 1,
3, 7, 15, 31, 63, (a) with long channel transistors for voltage to current conversion
(Vgg= 5.0 volts) and (b) with external 10 mega-ohm resistor.
[Figure 4 plot: synapse output current versus weight value (-75 to +75).]
Figure 4: Synapse output as weight value is incremented from -60 to +60 (Vgg = Vin = 5.0 volts).
A 64-neuron breadboard system incorporating several of the hybrid synapse chips
has been fabricated to demonstrate the utility of these building block chips, and to
investigate the dynamical properties, global optimization problem solving abilities,
and application potential of neural networks. The system consists of an array of
64 discrete neurons and four hybrid synapse chips connected to form a 64x64 cross-bar synapse matrix. Each neuron is an operational amplifier operating as a current summing amplifier. A circuit model of a neuron with some synapses is shown in Fig. 5. The system dynamical equations are given by:

$\tau_f \frac{dV_i}{dt} = -V_i + R_f \Big( \sum_j T_{ij} V_j + I_i \Big)$

where $V_i$ is the output of neuron i, $T_{ij}$ is the synaptic weight from neuron j to neuron i, $R_f$ and $C_f$ are the feedback resistance and capacitance of the neuron, $\tau_f = R_f C_f$, and $I_i$ is the external input current. For our system, $R_f$ was about 50 kilo-ohms, and $C_f$ was about 10 pF, a value large enough to ensure stability against oscillations. The system was interfaced to a microcomputer which allows
downloading of the synaptic weight data and analog readout of the neuron states.
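A minimal numerical sketch of these dynamics follows, using forward Euler integration and assuming the first-order equation stated above; the component values match those quoted, while the supply-rail clipping that stands in for amplifier saturation is an added assumption.

import numpy as np

def settle(T, I, R_f=50e3, C_f=10e-12, dt=1e-8, steps=20000, v_rail=10.0):
    """Euler integration of tau_f dV/dt = -V + R_f (T V + I)."""
    tau = R_f * C_f                      # about 0.5 microseconds here
    V = np.zeros(len(I))
    for _ in range(steps):
        V += (dt / tau) * (-V + R_f * (T @ V + I))
        V = np.clip(V, -v_rail, v_rail)  # crude stand-in for op-amp saturation
    return V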
Figure 5: Electronic circuit model of neuron and synapses.
4 GLOBAL OPTIMIZATION NEURAL NETS
Two combinatorial global optimization problems, namely, the one-to-one
assignment problem and the traveling salesman problem, were selected for our
neural net hardware implementation study.
Of particular interest is the
performance of the optimization network in terms of the quality and speed of
solutions in light of hardware limitations.
In the one-to-one assignment problem, given two sets of N elements and a cost
assignment matrix, the objective is to assign each element in one set to an element
in the second set so as to minimize the total assignment cost. In our neural net
implementation, the network is a Hopfield-type feedback net consisting of an NxN
array of assignment neurons. In this representation, a permissible set of one-to-one assignments corresponds to a permutation matrix. Thus, lateral inhibition
between assignment neurons is employed to ensure that there is only one active
neuron in each row and in each column of the neuron array. To force the network
to favor assignment sets with low total assignment cost, each assignment neuron
is also given an analog prompt, that is, a fixed analog excitation proportional to
a positive constant minus its assignment cost.
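A sketch of how such weights and prompts could be assembled is shown below; the inhibition strength and prompt scaling are illustrative assumptions, not the values programmed into the JPL hardware.

import numpy as np

def assignment_net(C, inhibit=1.0, bias_scale=1.0):
    """Weights and analog prompts for an NxN one-to-one assignment net.

    C is the cost matrix; neuron (i, j) represents assigning element i of
    one set to element j of the other. Lateral inhibition enforces one
    active neuron per row and column; the prompt is a fixed excitation
    proportional to a positive constant minus the assignment cost.
    """
    N = C.shape[0]
    T = np.zeros((N * N, N * N))
    for i in range(N):
        for j in range(N):
            for k in range(N):
                if k != j:
                    T[i * N + j, i * N + k] -= inhibit  # same-row rival
                if k != i:
                    T[i * N + j, k * N + j] -= inhibit  # same-column rival
    prompt = bias_scale * (C.max() - C).reshape(N * N)
    return T, prompt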
In an energy function description, all valid assignment sets correspond to energy minima of equal depth located at corners of the NxN dimensional hypercube (in the large neuron gain limit). The analog prompt term in the energy function has the effect of "tilting" the energy surface toward the hypercube corners with low total assignment cost. Thus, the assignment net may be described as a first-order global optimization net because the analog cost parameters appear only in the linear term of the energy function, i.e., the analog information simply appears as fixed biases and the interaction between neurons is of a binary nature. Since the energy surface contains a large number of local energy minima (~N!), there is a strong possibility that the network will get trapped in a local minimum, depending on its initial state. Simulated annealing can be used to reduce this likelihood. One approach is to start with very low neuron gain and increase it slowly as the network evolves to a stable state. An alternative but similar approach, which can easily be implemented with the current hybrid synapse chips, is to gradually increase the synapse gain.
A 7x7 one-to-one assignment problem was implemented in the 64-neuron hardware to investigate the performance of the assignment optimization net. An additional neuron was used to provide the analog biases (quantized to 6 bits) to the assignment neurons. Convergence statistics were obtained from 100 randomly generated cost assignment matrices. For each cost matrix, the synapse gain and annealing time were optimized and the solution obtained by the hardware was recorded. The network generally performed well with a large synapse gain (common gate bias of 7 volts) and an annealing time of about 10 neuron time constants (~500 usec). The unusually large anneal time observed emphasizes the importance of suppressing the quadratic energy term while maintaining the analog prompt in the initial course of the network's state trajectory. Solution distributions for each cost matrix were also obtained from a computer search for the purpose of rating the hardware solutions. The performance of the assignment net is summarized in Fig. 6. In all cases, the network obtained solutions which were in the best 1%. Moreover, the best solutions were obtained in 40% of the cases, and the first, second, or third best in 75% of the cases. These results are very encouraging in spite of the limited resolution of the analog biases and the fact that the analog biases also vary in time with the synapse gain.
The Hopfield-Tank traveling salesman problem (TSP) network [6] was also investigated in the 64-neuron hardware. In this implementation, the analog cost information (i.e., the inter-city distances) is encoded in the connection strengths of the synapses. Lateral inhibition is provided via binary synapses to ensure a valid city tour. However, the intercity distance provides additional interaction between neurons via excitatory synapses with strength proportional to a positive constant minus the distance. Thus the TSP net, considerably more complex than the assignment net, may be described as a second-order global optimization net.
An 8-city Hopfield-Tank TSP net was implemented in the 64-neuron hardware. Convergence statistics were similarly obtained from 100 randomly generated 8-city positions. The network was observed to give good solutions using a large synapse gain (common gate bias = 7 volts) and an annealing time of about one neuron time constant (~50 usec). As shown in Fig. 6b, the TSP net found tours which were in the best 6%. It gave the best tours in 11% of the cases and the first to third best tours in 31% of the cases. Although these results are quite good, the performance of the TSP net compares less favorably with the assignment net. This can be expected due to the increased complexity of the TSP net. Furthermore, since the initial state is arbitrary, the TSP net is more likely to settle into a local minimum before reaching the global minimum. On the other hand, in the assignment net, the analog prompt helps to establish an initial state which is close to the global minimum, thereby increasing its likelihood of converging to the optimum solution.
[Figure 6 histograms, panels (a) and (b): percent of cases versus fraction of the best solutions (0 to ~0.10).]
Figure 6: Performance statistics for (a) 7x7 assignment problem and (b) 8-city
traveling salesman problem.
5 CONCLUSIONS
CMOS synapse chips based on a hybrid analog-digital design are ideally suited as
building blocks for the development of fully parallel and analog neural net
hardware. The chips described in this paper feature 1024 synapses arranged in a
32x32 cross-bar matrix with 120 programmable weight levels for each synapse.
Although limited by the process variation in the chip fabrication, a 6-bit weight
resolution is achieved with our design. A 64-neuron hardware incorporating several
of the synapse chips is fabricated to investigate the performance of feedback
networks in optimization problem solving. The ability of such networks to provide
optimum or near optimum solutions to the one-to-one assignment problem and the
traveling salesman problem is demonstrated in hardware. The neural hardware is
capable of providing real time solutions with settling times in the 50-500 usec range, which can be further reduced to 1-10 usec with the incorporation of on-chip neurons.
Acknowledgements
The work described in this paper was performed by the Center for Space
Microelectronics Technology, Jet Propulsion Laboratory, California Institute of
Technology, and was sponsored in part by the Joint Tactical Fusion Program Office
and the Defense Advanced Research Projects Agency, through an agreement with
the National Aeronautics and Space Administration. The authors thank John
Lambe and Assad Abidi for many useful discussions, and Tim Shaw for his valuable
assistance in the Chip-layout design.
References
1. S. Eberhardt, T. Duong, and A. Thakoor, "A VLSI Analog Synapse 'Building Block' Chip for Hardware Neural Network Implementations," Proc. IEEE 3rd Annual Parallel Processing Symp., Fullerton, ed. L.H. Canter, vol. 1, pp. 257-267, Mar. 29-31, 1989.
2. A. Moopenn, A.P. Moopenn, and T. Duong, "Digital-Analog-Hybrid Neural Simulator: A Design Aid for Custom VLSI Neurochips," Proc. SPIE Conf. High Speed Computing, Los Angeles, ed. Keith Bromley, vol. 1058, pp. 147-157, Jan. 17-18, 1989.
3. M. Holler, S. Tam, H. Castro, and R. Benson, "An Electrically Trainable Artificial Neural Network (ETANN) with 10240 'Floating Gate' Synapses," Proc. IJCNN, Wash. D.C., vol. 2, pp. 191-196, June 18-22, 1989.
4. A.P. Thakoor, A. Moopenn, J. Lambe, and S.K. Khanna, "Electronic Hardware Implementations of Neural Networks," Appl. Optics, vol. 26, no. 23, 1987, pp. 5085-5092.
5. S. Thakoor, A. Moopenn, T. Daud, and A.P. Thakoor, "Solid State Thin Film Memistor for Electronic Neural Networks," J. Appl. Phys., 1990 (in press).
6. J.J. Hopfield and D.W. Tank, "Neural Computation of Decisions in Optimization Problems," Biol. Cybern., vol. 52, pp. 141-152, 1985.
7. J.J. Hopfield, "Neurons with Graded Response Have Collective Computational Properties Like Those of Two-State Neurons," Proc. Nat'l Acad. Sci., vol. 81, 1984, pp. 3088-3092.
1,529 | 2,390 | Learning Near-Pareto-Optimal Conventions in
Polynomial Time
Tuomas Sandholm
CS Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Xiaofeng Wang
ECE Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
We study how to learn to play a Pareto-optimal strict Nash equilibrium
when there exist multiple equilibria and agents may have different preferences among the equilibria. We focus on repeated coordination games
of non-identical interest where agents do not know the game structure
up front and receive noisy payoffs. We design efficient near-optimal algorithms for both the perfect monitoring and the imperfect monitoring
setting (where the agents only observe their own payoffs and the joint
actions).
1
Introduction
Recent years have witnessed a rapid development of multiagent learning theory. In particular, the use of reinforcement learning (RL) and game theory has attracted great attention.
However, research on multiagent RL (MARL) is still facing some rudimentary problems.
Most importantly, what is the goal of a MARL algorithm? In a multiagent system, a learning agent generally cannot achieve its goal independent of other agents, which in turn tend
to pursue their own goals. This questions the definition of optimality: No silver bullet
guarantees maximization of each agent?s payoff.
In the setting of self play (where all agents use the same algorithm), most existing MARL
algorithms seek to learn to play a Nash equilibrium. It is the fixed point of the agents' best-response process, that is, each agent maximizes its payoff given the other's strategy.
An equilibrium can be viewed as a convention that the learning agents reach for playing the
unknown game. A key difficulty here is that a game usually contains multiple equilibria,
and the agents need to coordinate on which one to play. Furthermore, the agents may have
different preferences among the equilibria. Most prior work has avoided this problem by
focusing on games with a unique equilibrium or games in which the agents have common
interests.
In this paper, we advocate Pareto-optimal Nash equilibria as the equilibria that a MARL
algorithm should drive agents to. This is a natural goal: Pareto-optimal equilibria are
equilibria for which no other equilibrium exists where both agents are better off. We further
design efficient algorithms for learning agents to achieve this goal in polynomial time.
2
Definitions and background
We study a repeated 2-agent game where the agents do not know the game up front, and
try to learn how to play based on the experiences in the previous rounds of the game. As
usual, we assume that the agents observe each others' actions. We allow for the possibility
that the agents receive noisy but bounded payoffs (as is the case in many real-world MARL
settings); this complicates the game because the joint action does not determine the agents'
payoffs deterministically. Furthermore, the agents may prefer different outcomes of the
game. In the next subsection we discuss the (stage) game that is repeated over and over.
2.1
Coordination games (of potentially non-identical interest)
We consider two agents, 1 and 2. The set of actions that agent i can choose from is denoted by $A_i$. We denote the other agent by $-i$. Agents choose their individual actions $a_i \in A_i$ independently and concurrently. The results of their joint action can be represented in matrix form: the rows correspond to agent 1's actions and the columns correspond to agent 2's actions. Each cell $\{a_1, a_2\}$ in the matrix has the payoffs $u_1(\{a_1, a_2\})$, $u_2(\{a_1, a_2\})$. The agents may receive noisy payoffs. In this case, the $u_i$ functions are expected payoffs.
A strategy for agent i is a distribution $\sigma_i$ over its action set $A_i$. A pure strategy deterministically chooses one of the agent's individual actions. A Nash equilibrium (NE) is a strategy profile $\sigma = \{\sigma_i, \sigma_{-i}\}$ in which no agent can improve its payoff by unilaterally deviating to a different strategy: $u_i(\{\sigma_i, \sigma_{-i}\}) \ge u_i(\{\sigma'_i, \sigma_{-i}\})$ for both agents ($i = 1, 2$) and any strategy $\sigma'_i$. We call a NE a pure strategy NE if the individuals' strategies in it are pure. Otherwise, we call it a mixed strategy NE. The NE is strict if we can replace "$\ge$" with "$>$".
We focus on the important and widely studied class of games called coordination games: 1
Definition 1 [Coordination game] A 2-agent coordination game G is an $N \times N$ matrix game with N strict Nash equilibria (called conventions). (It follows that there are no other pure-strategy equilibria.)
A coordination game captures the notion that agents have the common interest of being
coordinated (they both get higher payoffs by playing equilibria than other strategy profiles),
but at the same time there are potentially non-identical interests (each agent may prefer
different equilibria). The following small games illustrates this:
             |  OPT OUT  | LARGE DEMAND | SMALL DEMAND
OPT OUT      |   0, 0    |   0, -0.1    |   0, -0.1
SMALL DEMAND |  -0.1, 0  |   0.3, 0.5   |   0.3, 0.3
LARGE DEMAND |  -0.1, 0  |  -0.1, -0.1  |   0.5, 0.3

Table 1: Two agents negotiate to split a coin (rows: agent 1; columns: agent 2). Each one can demand a small share (0.4) or a large share (0.6). There is a cost for bargaining (0.1). If the agents' demands add to at most 1, each one gets its demand. In this game, though the agents favor different conventions, they would rather have a deal than opt out. The convention where both agents opt out is Pareto-dominated and the other two conventions are Pareto-optimal.
Definition 2 [Pareto-optimality] A convention $\{a_1, a_2\}$ is Pareto-dominated if there exists at least one other convention $\{a'_1, a'_2\}$ such that $u_i(\{a_1, a_2\}) < u_i(\{a'_1, a'_2\})$ and $u_{-i}(\{a_1, a_2\}) \le u_{-i}(\{a'_1, a'_2\})$. If the inequality is strict, the Pareto domination is strict. Otherwise, it is weak. A convention is Pareto-optimal (PO) if and only if it is not Pareto-dominated.
1 The term "coordination game" has sometimes been used to refer to special cases of coordination games, such as identical-interest games where agents have the same preferences [2], and minimum-effort games that have strict Nash equilibria on the diagonal where both agents prefer equilibria further to the top left. Our definition is the most general (except that some have even called games that have weak Nash equilibria coordination games).
A Pareto-dominated convention is unpreferable because there is another convention that
makes both agents better off. Therefore, we advocate that a MARL algorithm should at
least cause agents to learn a PO convention.
In the rest of the paper we assume, without loss of generality, that the game is normalized
so that all payoffs are strictly positive. We do this so that we can set artificial payoffs of
zero (as described later) and be guaranteed that they are lower than any real payoffs. This is
merely for ease of exposition; in reality we can set the artificial payoffs to a negative value
below any real payoff.
2.2
Learning in game theory: Necessary background
Learning in game theory [6] studies repeated interactions of agents, usually with the goal
of having the agents learn to play Nash equilibrium. There are key differences between
learning in game theory and MARL. In the former, the agents are usually assumed to know
the game before play, while in MARL the agents have to learn the game structure in addition
to learning how to play. Second, the former has paid little attention to the efficiency of
learning, a central issue in MARL. Despite the differences, the theory of learning in games
has provided important principles for MARL.
One of the most widely used learning models is fictitious play (FP). Basic FP is not guaranteed to converge in coordination games, while its variant, adaptive play (AP) [17], is.
Therefore, we take AP as a building block for our MARL algorithms.
2.2.1
Adaptive play (AP)
The learning process of AP is as follows. Learning agents are assumed to have a memory to keep a record of the recent m plays of the game. Let $a^t \in A$ be the joint action played at time t. Fix integers k and m such that $1 \le k \le m$. When $t \le m$, each agent i randomly chooses its actions. Starting from $t = m+1$, each agent looks back at the m most recent plays $h_t = (a^{t-m}, a^{t-m+1}, \ldots, a^{t-1})$ and randomly (without replacement) selects k samples from $h_t$. Let $K_t(a_{-i})$ be the number of times that an action $a_{-i} \in A_{-i}$ appears in the k samples at time t. Agent i calculates its expected payoff w.r.t. its individual action $a_i$ as $EP(a_i) = \sum_{a_{-i} \in A_{-i}} u_i(\{a_i, a_{-i}\}) \frac{K_t(a_{-i})}{k}$, and then randomly chooses an action from the set of best responses: $BR_i^t = \{a_i \mid a_i = \arg\max_{a'_i \in A_i} EP(a'_i)\}$.
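A minimal sketch of this rule in Python follows; the payoff function stands in for agent i's (possibly estimated) payoffs and is an assumed interface, and the history is taken to be the other agent's components of the m most recent joint actions.

import random

def ap_action(own_actions, history, payoff, k):
    """One adaptive-play move: sample k of the m most recent opponent
    actions (without replacement), estimate EP for each own action as the
    sample average of payoffs, and best-respond, breaking ties randomly."""
    sample = random.sample(history, k)
    def ep(a):
        return sum(payoff(a, b) for b in sample) / k
    best = max(ep(a) for a in own_actions)
    return random.choice([a for a in own_actions if ep(a) == best])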
The learning process of AP can be modeled as a Markov chain. We take the initial history $h_m = (a^1, a^2, \ldots, a^m)$ as the initial state of the Markov chain. The definition of the other states is inductive: a successor of state h is any state h' obtained by deleting the left-most element of h and appending a new right-most element. Let h' be a successor of h, and let $a' = \{a'_1, a'_2\}$ be the new element (joint action) that was appended to the right of h to get h'. Let $p_{h,h'}$ be the transition probability from h to h'. Now, $p_{h,h'} > 0$ if and only if, for each agent i, there exists a sample of size k in h to which $a'_i$ is i's best response. Because agent i chooses such a sample with probability independent of time t, the Markov chain is stationary. In the Markov chain model, each state $h = (a, \ldots, a)$ with a being a convention is an absorbing state. According to Theorem 1 in [17], AP in coordination games converges to such an absorbing state with probability 1 if $m \ge 4k$.
2.2.2
Adaptive play with persistent noise
AP does not choose a particular convention. However, Young showed that if there is small constant noise in action selection, AP usually selects a particular convention. Young studied the problem under an independent random tremble model: suppose that instead of always taking a best-response action, with a small probability $\eta$ the agent chooses a random action. This yields an irreducible and aperiodic perturbed process of the original Markov chain (the unperturbed process). Young showed that with sufficiently small $\eta$, the perturbed process converges to a stationary distribution in which the probability of playing so-called stochastically stable convention(s) is at least $1 - C\eta$, where C is a positive constant (Theorem 4 and its proof in [17]).
The stochastically stable conventions of a game can be identified by considering the mistakes
being made during state transitions. We say an agent made a mistake if it chose an action
that is not a best response to any sample, of size k, taken from the m most recent steps
of history. Call the absorbing states in the unperturbed process convention states in the
perturbed process. For each convention state h, we construct an h-tree $\tau_h$ (with each node being a convention state) such that there is a unique directed path from every other convention state to h. Label the directed edges $(v, v')$ in $\tau_h$ with the number of mistakes $r_{v,v'}$ needed to make the transition from convention state v to convention state v'. The resistance of the h-tree is $r(\tau_h) = \sum_{(v,v') \in \tau_h} r_{v,v'}$. The stochastic potential of the convention state h is the least resistance among all possible h-trees $\tau_h$. Young proved that the stochastically stable states are the states with the minimal stochastic potentials.
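For very small convention-state sets, the stochastic potential can be computed directly from the definition by enumerating all h-trees; the brute-force sketch below (exponential in the number of states) is only meant to make the definition concrete.

from itertools import product

def stochastic_potential(h, states, resistance):
    """Least resistance over all h-trees.

    resistance[(v, w)] is the number of mistakes needed to move from
    convention state v to convention state w; an h-tree gives each state
    v != h one outgoing edge such that every path leads to h."""
    others = [s for s in states if s != h]
    best = float("inf")
    for targets in product(states, repeat=len(others)):
        tree = dict(zip(others, targets))
        if any(v == w for v, w in tree.items()):
            continue  # no self-loops
        def reaches_h(v, seen=()):
            if v == h:
                return True
            if v in seen:
                return False
            return reaches_h(tree[v], seen + (v,))
        if all(reaches_h(v) for v in others):
            best = min(best, sum(resistance[(v, w)] for v, w in tree.items()))
    return best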
2.3
Reinforcement learning
Reinforcement learning offers an effective way for agents to estimate the expected payoffs associated with individual actions based on previous experience, without knowing the game structure. A simple and well-understood algorithm for single-agent RL is Q-learning [9]. The general form of Q-learning is for learning in a Markov decision process; that is more than we need here. In our single-state setting, we take a simplified form of the algorithm, with the Q-value $Q^i_t(a)$ recording the estimate of the expected payoff $u_i(a)$ for agent i at time t. The agent updates its Q-values based on the sample of the payoff $R_t$ and the observed action a:

$Q^i_{t+1}(a) = Q^i_t(a) + \alpha (R_t - Q^i_t(a))$    (1)

In single-agent RL, if each action is sampled infinitely often and the learning rate $\alpha$ is decreased over time fast enough but not too fast, the Q-values will converge to agent i's expected payoff $u_i$. In our setting, we set $\alpha = \frac{1}{\nu_t(a)}$, where $\nu_t(a)$ is the number of times that action a has been taken.
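In code, this simplified update is just an incremental sample average over the payoffs observed for each joint action; the dictionary-based bookkeeping below is an implementation convenience.

from collections import defaultdict

class QTable:
    """Single-state Q-learning with alpha = 1/nu_t(a), as in Equation 1."""
    def __init__(self):
        self.q = defaultdict(float)   # payoff estimates
        self.n = defaultdict(int)     # visit counts nu_t(a)
    def update(self, a, reward):
        self.n[a] += 1
        alpha = 1.0 / self.n[a]
        self.q[a] += alpha * (reward - self.q[a])  # running mean of payoffs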
Most early literature on RL was about asymptotic convergence to the optimum. Extensions of the convergence results to MARL include minimax-Q [11], Nash-Q [8], friend-or-foe-Q [12], and correlated-Q [7]. Recently, significant attention has been paid to efficiency results: near-optimal polynomial-time learning algorithms. Important results include Fiechter's algorithm [5], Kearns and Singh's $E^3$ [10], Brafman and Tennenholtz's R-max [3], and Pivazyan and Shoham's efficient algorithms for learning a near-optimal policy [14]. These algorithms aim at efficiency, accumulating a provably close-to-optimal average payoff in polynomial running time with large probability. The equilibrium-selection problem in MARL has also been explored in the form of team games, a very restricted version of coordination games [4, 16].
In this paper, we develop efficient MARL algorithms for learning a PO convention in an unknown coordination game. We consider both the perfect monitoring setting where agents observe each others' payoffs, and the imperfect monitoring setting where agents do not observe each others' payoffs (and do not want to tell each other their payoffs). In the latter setting, our agents learn to play PO conventions without learning each others' preferences over conventions. Formally, the objectives of our MARL algorithms are:
Efficiency: Let $0 < \delta < 1$ and $\epsilon > 0$ be constants. Then with probability at least $1 - \delta$, agents will start to play a joint policy a within a number of steps polynomial in $\frac{1}{\epsilon}$, $\frac{1}{\delta}$, and N, such that there exists no convention a' that satisfies $u_1(a) + \epsilon < u_1(a')$ and $u_2(a) + \epsilon < u_2(a')$. We call such a policy an $\epsilon$-PO convention.
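Given payoff estimates for the conventions, checking this condition is a direct scan; the sketch below assumes the conventions and both agents' (estimated) expected payoffs are already known.

def eps_po_conventions(conventions, u1, u2, eps):
    """Return conventions a for which no convention b satisfies
    u1(a) + eps < u1(b) and u2(a) + eps < u2(b)."""
    keep = []
    for a in conventions:
        dominated = any(u1(a) + eps < u1(b) and u2(a) + eps < u2(b)
                        for b in conventions)
        if not dominated:
            keep.append(a)
    return keep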
3
An efficient algorithm for the perfect monitoring setting
In order to play an $\epsilon$-PO convention, agents need to find all such conventions first. Existing efficient algorithms employ random sampling to learn the game G before coordination. However, these approaches fall short of the goal: even when the game structure estimated from samples is within $\epsilon$ of G, its PO conventions might still be far from those of G. Here we present a new algorithm to identify $\epsilon$-PO conventions efficiently.
Learning game structure (perfect monitoring setting)
1. Choose $\epsilon > 0$ and $0.5 > \delta > 0$. Set $w = 1$.
2. Compute the number of samples $M(\frac{\epsilon}{w}, \frac{\delta}{2^{w-1}})$ by using the Chernoff/Hoeffding bound [14], such that $\Pr\{\max_{a,i} |Q^i_M(a) - u_i(a)| \ge \frac{\epsilon}{w}\} \le \frac{\delta}{2^{w-1}}$.
3. Starting from $t = 0$, randomly try M actions with uniform distribution and update the Q-values using Equation 1.
4. If (1) $G_M = (Q^1_M, Q^2_M)$ has N conventions, and (2) for every convention $\{a_i, a_{-i}\}$ in $G_M$ and every agent i, $Q^i_M(\{a_i, a_{-i}\}) > Q^i_M(\{a'_i, a_{-i}\}) + \frac{2\epsilon}{w}$ for every $a'_i \ne a_i$, then stop; else set $w \leftarrow w + 1$ and go to Step 2.
In Step 2 and Step 3, agent i samples the coordination game G sufficiently often that the game $G_M = (Q^1_M, Q^2_M)$ formed from M samples is within $\frac{\epsilon}{w}$ of G with probability at least $1 - \frac{\delta}{2^{w-1}}$. This is possible because the agent can observe the other's payoffs. In Step 4, if Conditions (1) and (2) are met and $G_M$ is within $\frac{\epsilon}{w}$ of G, we know that $G_M$ has the same set of conventions as G. So, any convention not strictly Pareto-dominated in $G_M$ is a $2\epsilon$-PO convention in G by definition. The loop from Step 2 to Step 4 searches for a sufficiently small $\frac{\epsilon}{w}$ for which Conditions (1) and (2) are met. Throughout the learning, the probability that $G_M$ always stays within $\frac{\epsilon}{w}$ of G after Step 3 is at least $1 - \sum_w \frac{\delta}{2^{w-1}} > 1 - 2\delta$. This implies that the algorithm will identify all the conventions of G with probability at least $1 - 2\delta$. The total number of samples drawn is polynomial in $(N, \frac{1}{\delta}, \frac{1}{\epsilon})$ according to the Chernoff bound [14].
After learning the game, the agents will further learn how to play, that is, determine which PO convention in $G_M$ to choose. A simple solution is to let the two agents randomize their action selection until they arrive at a PO convention in $G_M$. However, this treatment is problematic because each agent may have different preferences over the conventions and thus will not randomly choose an action unless it believes the action is a best response to the other's strategy. In this paper, we consider learning agents which use adaptive play to negotiate the convention they should play. In game theory, AP was suggested as a simple learning model for bargaining [18], where each agent dynamically adjusts its offer w.r.t. its belief about the other's strategy. Here we further propose a new algorithm called k-step adaptive play (KSAP) whose expected running time is polynomial in m and k.
Learning how to play (perfect monitoring setting)
1. Let $VG_M = (Q^1_M, Q^2_M)$. Now, set to zero those entries in $VG_M$ that do not correspond to PO conventions.
2. Starting from a random initial state, sample the memory only every k steps. Specifically, with probability 0.5 sample the most recent k plays; otherwise, randomly draw k samples from the earlier $m - k$ observations without replacement.
3. Choose an action against $VG_M$ as in adaptive play, except that when there exist multiple best-response actions that correspond to some conventions in the game, choose an action that belongs to a convention that offers the greatest payoff (breaking remaining ties randomly).
4. Play that action k times.
5. Once it is observed that the last k steps are composed of the same strict NE, play that NE forever.
In Step 1, agents construct a virtual game $VG_M$ from the game $G_M = (Q^1_M, Q^2_M)$ by setting the payoffs of all actions except PO conventions to zero. This eliminates all Pareto-dominated conventions in $G_M$. Steps 2 to 5 constitute KSAP. Compared with AP, KSAP lets an agent sample the experience to update its opponent model every k steps. This makes the expected number of steps to reach an absorbing state polynomial in k. A KSAP agent pays more attention to the most recent k observations and will freeze its action once coordinated. This further enhances the performance of the learning algorithm.
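Step 1 amounts to masking the estimated payoff matrices; the sketch below assumes the PO conventions have already been identified (e.g., by a scan such as the one earlier) and relies on the normalization that all real payoffs are strictly positive, so zero is below any of them.

import numpy as np

def virtual_game(Q1, Q2, po_conventions):
    """Zero every entry of (Q1, Q2) except the PO conventions, giving VG_M."""
    V1, V2 = np.zeros_like(Q1), np.zeros_like(Q2)
    for (i, j) in po_conventions:
        V1[i, j], V2[i, j] = Q1[i, j], Q2[i, j]
    return V1, V2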
Theorem 1 In any unknown 2-agent coordination game with perfect monitoring, if $m \ge 4k$, agents that use the above algorithm will learn a $2\epsilon$-PO policy with probability at least $1 - 2\delta$ in time $\mathrm{poly}(N, \frac{1}{\delta}, \frac{1}{\epsilon}, m, k)$.
Due to limited space, we present all proofs in a longer version of this paper [15].
4
An efficient algorithm for the imperfect monitoring setting
In this section, we present an efficient MARL algorithm for the imperfect monitoring setting where the agents do not observe each others' payoffs during learning. Actually, since
agents can observe joint actions, they may explicitly signal to each other their preferences
over conventions through actions. This reduces the learning problem to that in the perfect
monitoring setting. Here we assume that agents are not willing to explicitly signal each
other their preferences over conventions, even part of such information (e.g., their most
preferable conventions).2 We study how to achieve optimal coordination without relying
on such preference information.
Because each agent is unable to observe the other's payoffs and because there is noise in
payoffs received, it is difficult for the agent to determine when enough samples have been
taken to identify all conventions. We address this by allowing agents to demonstrate to each
other their understanding of game structure (where the conventions are) after sampling.
Learning the game structure (imperfect monitoring setting)
1. Each agent plays its actions in order, with wrap around, until both agents have just wrapped around.3 The agents name each other's actions 1, 2, ... according to the order of first appearance in play.
2. Given $\epsilon$ and $\delta$, the agents randomly sample the game until every joint action has been visited at least $M(\frac{\epsilon}{w}, \frac{\delta}{2^{w-1}})$ times (with $w = 1$), updating their Q-values using Equation 1 along the way.
3. Starting at the same time, each agent i goes through the other's N individual actions $a_{-i}$ in order, playing the action $a_i$ such that $Q^i_M(\{a_i, a_{-i}\}) > \frac{2\epsilon}{w} + Q^i_M(\{a'_i, a_{-i}\})$ for any $a'_i \ne a_i$. (If such an action $a_i$ does not exist for some $a_{-i}$, then agent i plays action 1 throughout this demonstration phase.)
4. Each agent determines whether the agents hold the same view of the N strict Nash equilibria. If not, they set $w \leftarrow w + 1$ and go to Step 2.
After learning the game, the agents start to learn how to play. The difficulty is that, without knowing the other's preferences over conventions, agents cannot explicitly eliminate Pareto-dominated conventions in $G_M$. A straightforward approach is to allow each agent to choose its most preferable convention, and break ties randomly. This, however, requires disclosing the preference information to the other agent, thereby violating our assumption. Moreover, such a treatment limits the negotiation to only two solutions. Thus, even if there exists a better convention in which one agent compromises a little but the other is better off greatly, it will not be chosen. The intriguing question here is whether agents can learn to play a PO convention without knowing the other's preferences at all.
Adaptive play with persistent noise in action selection (see Section 2.2.2) causes agents to choose "stochastically stable" conventions most of the time. This provides a potential solution to the above problem. Specifically, over $Q^i_M$, each agent i first constructs a best-response set by including, for each possible action of the other agent $a_{-i}$, the joint action $\{a^*_i, a_{-i}\}$ where $a^*_i$ is i's best response to $a_{-i}$. Then, agent i forms a virtual Q-function $VQ^i_M$ which equals $Q^i_M$, except that the values of the joint actions not in the best-response set are zero. We have proved that in the virtual game $(VQ^1_M, VQ^2_M)$, conventions that are strictly Pareto-dominated are not stochastically stable [15]. This implies that using AP with persistent noise, agents will play $2\epsilon$-PO conventions most of the time even without knowing the other's preferences. Therefore, if the agents can stop using noise in action selection at some point (and will thus play a particular convention from then on), there is a high probability that they end up playing a $2\epsilon$-PO convention. The rest of this section presents our algorithm in more detail.
We first adapt KSAP (see Section 3) to a learning model with persistent noise. After choosing the best-response action suggested by KSAP, each agent checks whether the current state (containing the m most recent joint actions) is a convention state. If it is not, the agent plays KSAP as usual (i.e., k plays of the selected action). If it is, then in each of the following k steps, the agent has probability $\eta$ to choose an action randomly and independently, and probability $1 - \eta$ to play the best-response action. We call this algorithm $\eta$-KSAP.
2 Agents may prefer to hide such information to avoid giving others some advantage in future interactions.
3 In an N x N game this occurs for both agents at the same time, but the technique also works for games with a different number of actions per agent.
We can model this learning process as a Markov chain, with the state space including all and only convention states. Let $s_t$ be the state at time t and $s^c_t$ be the first convention state the agents reach after time t. The transition probability is $p^\eta_{h,h'} = \Pr\{s^c_t = h' \mid s_t = h\}$, and it depends only on h, not t (for a fixed $\eta$). Therefore, the Markov chain is stationary. It is also irreducible and aperiodic, because with $\eta > 0$ all actions have positive probability of being chosen in a convention state. Therefore, Theorem 4 in [17] applies and thus the chain has a unique stationary distribution circling around the stochastically stable conventions of $(VQ^1, VQ^2)$. These conventions are $2\epsilon$-PO (Lemma 5 in [15]) with probability $1 - 2\delta$. The proof of Lemma 1 in [17] further characterizes the support of the limit distribution. With $0 < \eta < 1$, it is easy to obtain from the proof of Lemma 1 in [17] that the probability of playing $2\epsilon$-PO conventions is at least $1 - C\eta$, where $C > 0$ is a constant.
Our algorithm intends to let agents stop taking noisy actions at some point and stick to a particular convention. This amounts to sampling from the stationary distribution of the Markov chain. If the sampling is unbiased, the agents have probability at least $1 - C\eta$ of learning a $2\epsilon$-PO convention. The issue is how to make the sampling unbiased. We address this by applying a simple and efficient Markov chain Monte Carlo algorithm proposed by Lovász and Winkler [13]. The algorithm first randomly selects a state h and randomly walks along the chain until all states have been visited. During the walk, it generates a function $A_h: S \setminus \{h\} \to S$, where S is the set of all convention states. $A_h$ can be represented as a directed graph with a directed edge from each h' to $A_h(h')$. After the walk, if the agents find that $A_h$ defines an h-tree (see Section 2.2.2), h becomes the convention the agents play forever. Otherwise, agents take another random sample from S and repeat the random walk, and so on. Lovász and Winkler proved that the algorithm produces an exact sample from the Markov chain and that its expected running time is $O(\bar{h}^3 \log N)$, where $\bar{h}$ is the maximum expected time to move from one convention state to another. In our setting, we know that the probability of transiting from one convention state to another is polynomial in $\eta$ (the probability of making mistakes in convention states). So, $\bar{h}$ is polynomial in $\frac{1}{\eta}$. In addition, recall that our Markov chain is constructed on the convention states instead of all states. The expected time for making a transition in this chain is upper-bounded by the expected convergence time of KSAP, which is polynomial in m and k.
Recall that Lovász and Winkler's algorithm needs uniform random experiments when choosing h and constructing $A_h$. In our setting, the individual agents generate random numbers independently. Without knowing each other's random numbers, agents cannot commit to a convention together. If one of our learning agents commits to the final action before the other, the other may never commit because it is unable to complete the random walk. It is nontrivial to coordinate a joint commitment time between the agents because the agents cannot communicate (except via actions). We solve this problem by making the agents use the same random numbers (without requiring communication). We accomplish this via a random hash function technique, an idea common in cryptography [1]. Formally, a
random hash function is a mapping from a pre-image space to an image space. Denote the random hash function with image space X by $\phi_X$. It has two properties: (1) for any input, $\phi_X$ randomly, with uniform distribution, draws an image from X as an output; (2) with the same input, $\phi_X$ gives the same output. Such functions are easy to construct (e.g., standard hash functions like MD5 and SHA can be converted to random hash functions by truncating their output [1]). In our learning setting, the agents share the same observations of previous plays. Therefore, we take the pre-image to be the most recent m joint actions appended by the number of steps played so far. Our learning agents have the same random hash function $\phi_X$. Whenever an agent should make a call to a random number generator, it instead inputs to $\phi_X$ the m most recent joint actions and the total number of steps played so far, and uses the output of $\phi_X$ as the random number.4 This way the agents see the same uniform random numbers, and because the agents use the same algorithms, they will reach commitment to the final action at the same step.
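A minimal construction of such a shared "random" draw, using truncated MD5 as suggested above, might look as follows; the exact encoding of the pre-image is an assumption.

import hashlib

def random_hash(history, t, image):
    """Deterministic pseudo-random choice from image, shared by both agents.

    history is the m most recent joint actions and t the total number of
    steps played; both agents hash the same pre-image and therefore draw
    the same element."""
    key = (repr(history) + "|" + str(t)).encode()
    r = int.from_bytes(hashlib.md5(key).digest()[:8], "big")
    return image[r % len(image)]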
Learning how to play (imperfect monitoring setting)
1. Construct a virtual Q-function $VQ^i$ from $Q^i_t$.
2. For steps = 1, 2, 4, 8, ... do5
3.   For j = 1, 2, 3, ..., 3N do
4.     $h = \phi_S(h_t, t)$ (use the random hash function $\phi_S$ to choose a convention state h uniformly from S)
5.     $U = \{h\}$
6.     Do until $U = S$
       (a) Play $\eta$-KSAP until a convention state $h' \notin U$ is reached
       (b) $y = \phi_{\{1,\ldots,steps\}}(h_{t'}, t')$
       (c) Play $\eta$-KSAP until convention states have been visited y times (counting duplicates). Denote the most recent convention state by $A_h(h')$
       (d) $U = U \cup \{h'\}$
7.   If $A_h$ defines an h-tree, play h forever
8.   Endfor
9. Endfor
Theorem 2 In any unknown 2-agent coordination game with imperfect monitoring, for $0 < \eta < 1$ and some constant $C > 0$, if $m \ge 4k$, using the above algorithm, the agents learn a $2\epsilon$-PO deterministic policy with probability at least $1 - 2\delta - C\eta$ in time $\mathrm{poly}(N, \frac{1}{\delta}, \frac{1}{\epsilon}, \frac{1}{\eta}, m, k)$.
5
Conclusions and future research
In this paper, we studied how to learn to play a Pareto-optimal strict Nash equilibrium when
there exist multiple equilibria and agents may have different preferences among the equilibria. We focused on 2-agent repeated coordination games of non-identical interest where
the agents do not know the game structure up front and receive noisy payoffs. We designed
efficient near-optimal algorithms for both the perfect monitoring and the imperfect monitoring setting (where the agents only observe their own payoffs and the joint actions). In a
longer version of the paper [15], we also present the convergence algorithms. In the future
work, we plan to extend all these results to n-agent and multistage coordination games.
References
[1] Bellare and Rogaway. Random oracles are practical: A paradigm for designing efficient protocols. In Proceedings of the First ACM Annual Conference on Computer and Communication Security, 93.
[2] Boutilier. Planning, learning and coordination in multi-agent decision processes. In TARK, 96.
[3] Brafman and Tennenholtz. R-max: A general polynomial time algorithm for near-optimal reinforcement learning. In IJCAI,
01.
[4] Claus and Boutilier. The dynamics of reinforcement learning in cooperative multi-agent systems. In AAAI, 98.
[5] Fiechter. Efficient reinforcement learning. In COLT, 94.
[6] Fudenberg and Levine. The theory of learning in games. MIT Press, 98.
[7] Greenwald and Hall. Correlated-q learning. In AAAI Spring Symposium, 02.
[8] Hu and Wellman. Multiagent reinforcement learning: theoretical framework and an algorithm. In ICML, 98.
[9] Kaelbling, Littman, and Moore. Reinforcement learning: A survey. JAIR, 96.
[10] Kearns and Singh. Near-optimal reinforcement learning in polynomial time. In ICML, 98.
[11] Littman. Value-function reinforcement learning in Markov games. J. of Cognitive System Research, 2:55-66, 00.
[12] Littman. Friend-or-foe Q-learning in general-sum games. In ICML, 01.
[13] Lovász and Winkler. Exact mixing in an unknown Markov chain. Electronic Journal of Combinatorics, 95.
[14] Pivazyan and Shoham. Polynomial-time reinforcement learning of near-optimal policies. In AAAI, 02.
[15] Wang and Sandholm. Learning to play Pareto-optimal equilibria: Convergence and efficiency. www.cs.cmu.edu/~xiaofeng/LearnPOC.ps.
[16] Wang and Sandholm. Reinforcement learning to play an optimal Nash equilibrium in team Markov games. In NIPS, 02.
[17] Young. The evolution of conventions. Econometrica, 61:57-84, 93.
[18] Young. An evolutionary model of bargaining. Journal of Economic Theory, 59, 93.
4 Recall that agents have established the same numbering of actions. This allows them to encode their joint actions for inputting into $\phi$ in the same way.
5 The pattern of the for-loops is from the Lovász-Winkler algorithm [13].
1,530 | 2,391 | Inferring State Sequences for Non-linear
Systems with Embedded Hidden Markov Models
Radford M. Neal, Matthew J. Beal, and Sam T. Roweis
Department of Computer Science
University of Toronto
Toronto, Ontario, Canada M5S 3G3
{radford,beal,roweis}@cs.utoronto.ca
Abstract
We describe a Markov chain method for sampling from the distribution
of the hidden state sequence in a non-linear dynamical system, given a
sequence of observations. This method updates all states in the sequence
simultaneously using an embedded Hidden Markov Model (HMM). An
update begins with the creation of ?pools? of candidate states at each
time. We then define an embedded HMM whose states are indexes within
these pools. Using a forward-backward dynamic programming algorithm, we can efficiently choose a state sequence with the appropriate
probabilities from the exponentially large number of state sequences that
pass through states in these pools. We illustrate the method in a simple
one-dimensional example, and in an example showing how an embedded HMM can be used to in effect discretize the state space without any
discretization error. We also compare the embedded HMM to a particle
smoother on a more substantial problem of inferring human motion from
2D traces of markers.
1
Introduction
Consider a dynamical model in which a sequence of hidden states, $x = (x_0, \ldots, x_{n-1})$, is generated according to some stochastic transition model. We observe $y = (y_0, \ldots, y_{n-1})$, with each $y_t$ being generated from the corresponding $x_t$ according to some stochastic observation process. Both the $x_t$ and the $y_t$ could be multidimensional. We wish to randomly sample hidden state sequences from the conditional distribution for the state sequence given the observations, which we can then use to make Monte Carlo inferences about this posterior distribution for the state sequence. We suppose in this paper that we know the dynamics
of hidden states and the observation process, but if these aspects of the model are unknown,
the method we describe will be useful as part of a maximum likelihood learning algorithm
such as EM, or a Bayesian learning algorithm using Markov chain Monte Carlo.
If the state space is finite, of size K, so that this is a Hidden Markov Model (HMM), a
hidden state sequence can be sampled by a forward-backwards dynamic programming algorithm in time proportional to nK² (see [5] for a review of this and related algorithms).
If the state space is ℝ^p and the dynamics and observation process are linear, with Gaussian
noise, an analogous adaptation of the Kalman filter can be used. For more general models,
or for finite state space models in which K is large, one might use Markov chain sampling
(see [3] for a review). For instance, one could perform Gibbs sampling or Metropolis updates for each xt in turn. Such simple Markov chain updates may be very slow to converge,
however, if the states at nearby times are highly dependent. A popular recent approach is
to use a particle smoother, such as the one described by Doucet, Godsill, and West [2], but
this approach can fail when the set of particles doesn't adequately cover the space, or when
particles are eliminated prematurely.
In this paper, we present a Markov chain sampling method for a model with an arbitrary
state space, X , in which efficient sampling is facilitated by using updates that are based
on temporarily embedding an HMM whose finite state space is a subset of X , and then
applying the efficient HMM sampling procedure. We illustrate the method on a simple
one-dimensional example. We also show how it can be used to in effect discretize the state
space without producing any discretization error. Finally, we demonstrate the embedded
HMM on a problem of tracking human motion in 3D based on the 2D projections of marker
positions, and compare it with a particle smoother.
2 The Embedded HMM Algorithm
In our description of the algorithm, model probabilities will be denoted by P , which
will denote probabilities or probability densities without distinction, as appropriate for
the state space, X, and observation space, Y. The model's initial state distribution is given by P(x_0), transition probabilities are given by P(x_t | x_{t−1}), and observation probabilities are given by P(y_t | x_t). Our goal is to sample from the conditional distribution P(x_0, . . . , x_{n−1} | y_0, . . . , y_{n−1}), which we will abbreviate to π(x_0, . . . , x_{n−1}), or π(x).
To accomplish this, we will simulate a Markov chain whose state space is X^n, i.e., a state of this chain is an entire sequence of hidden states. We will arrange for the equilibrium distribution of this Markov chain to be π(x_0, . . . , x_{n−1}), so that simulating the chain for a suitably long time will produce a state sequence from the desired distribution. The state at iteration i of this chain will be written as x^(i) = (x_0^(i), . . . , x_{n−1}^(i)). The transition probabilities for this Markov chain will be denoted using Q. In particular, we will use some initial distribution for the state of the chain, Q(x^(0)), and will simulate the chain according to the transition probabilities Q(x^(i) | x^(i−1)). For validity of the sampling method, we need these transitions to leave π invariant:

π(x') = Σ_{x ∈ X^n} π(x) Q(x' | x), for all x' in X^n    (1)

(If X is continuous, the sum is replaced by an integral.) This is implied by the detailed balance condition:

π(x) Q(x' | x) = π(x') Q(x | x'), for all x and x' in X^n    (2)
The transition Q(x^(i) | x^(i−1)) is defined in terms of 'pools' of states for each time. The current state at time t is always part of the pool for time t. Other states in the pool are produced using a pool distribution, ρ_t, which is designed so that points drawn from ρ_t are plausible alternatives to the current state at time t. The simplest way to generate these additional pool states is to draw points independently from ρ_t. This may not be feasible, however, or may not be desirable, in which case we can instead simulate an 'inner' Markov chain defined by transition probabilities written as R_t(· | ·), which leave the pool distribution, ρ_t, invariant. The transitions for the reversal of this chain with respect to ρ_t will be denoted by R̃_t(· | ·), and are defined so as to satisfy the following condition:

ρ_t(x_t) R_t(x'_t | x_t) = ρ_t(x'_t) R̃_t(x_t | x'_t), for all x_t and x'_t in X    (3)

If the transitions R_t satisfy detailed balance with respect to ρ_t, R̃_t will be the same as R_t. To generate pool states by drawing from ρ_t independently, we can let R_t(x' | x) = R̃_t(x' | x) = ρ_t(x'). For the proof of correctness below, we must not choose ρ_t or R_t based on the current state, x^(i), but we may choose them based on the observations, y.
To perform a transition Q to a new state sequence, we begin by at each time, t, producing a pool of K states, C_t. One of the states in C_t is the current state, x_t^(i−1); the others are produced using R_t and R̃_t. The new state sequence, x^(i), is then randomly selected from among all sequences whose states at each time t are in C_t, using a form of the forward-backward procedure.
In detail, the pool of candidate states for time t is found as follows:
1) Pick an integer J_t uniformly from {0, . . . , K−1}.
2) Let x_t^[0] = x_t^(i−1). (So the current state is always in the pool.)
3) For j from 1 to J_t, randomly pick x_t^[j] according to the transition probabilities R_t(x_t^[j] | x_t^[j−1]).
4) For j from −1 down to −K+J_t+1, randomly pick x_t^[j] according to the reversed transition probabilities, R̃_t(x_t^[j] | x_t^[j+1]).
5) Let C_t be the pool consisting of x_t^[j], for j ∈ {−K+J_t+1, . . . , 0, . . . , J_t}. If some of the x_t^[j] are the same, they will be present in the pool more than once.
Once the pools of candidate states have been found, a new state sequence, x^(i), is picked from among all sequences, x, for which every x_t is in C_t. The probability of picking x^(i) = x is proportional to π(x) / ∏_{t=0}^{n−1} ρ_t(x_t), which is proportional to

P(x_0) ∏_{t=1}^{n−1} P(x_t | x_{t−1}) ∏_{t=0}^{n−1} P(y_t | x_t) / ∏_{t=0}^{n−1} ρ_t(x_t)    (4)

The division by ∏_{t=0}^{n−1} ρ_t(x_t) is needed to compensate for the pool states having been drawn from the ρ_t distributions. If duplicate states occur in some of the pools, they are treated as if they were distinct when picking a sequence in this way. In effect, we pick indexes of states in these pools, with probabilities as above, rather than states themselves.
The distribution of these sequences of indexes can be regarded as the posterior distribution for a hidden Markov model, with the transition probability from state j at time t−1 to state k at time t being proportional to P(x_t^[k] | x_{t−1}^[j]), and the probabilities of the hypothetical observed symbols being proportional to P(y_t | x_t^[k]) / ρ_t(x_t^[k]). Crucially, using the forward-backward technique, it is possible to randomly pick a new state sequence from this distribution in time growing linearly with n, even though the number of possible sequences grows as K^n. After the above procedure has been used to produce the pool states, x_t^[j] for t = 0 to n−1 and j = −K+J_t+1 to J_t, this algorithm operates as follows (see [5]):
1) For t = 0 to n−1 and for j = −K+J_t+1 to J_t, let u_{t,j} = P(y_t | x_t^[j]) / ρ_t(x_t^[j]).
2) For j = −K+J_0+1 to J_0, let w_{0,j} = u_{0,j} P(X_0 = x_0^[j]).
3) For t = 1 to n−1 and for j = −K+J_t+1 to J_t, let w_{t,j} = u_{t,j} Σ_k w_{t−1,k} P(x_t^[j] | x_{t−1}^[k]).
4) Randomly pick s_{n−1} from {−K+J_{n−1}+1, . . . , J_{n−1}}, picking the value j with probability proportional to w_{n−1,j}.
5) For t = n−1 down to 1, randomly pick s_{t−1} from {−K+J_{t−1}+1, . . . , J_{t−1}}, picking the value j with probability proportional to w_{t−1,j} P(x_t^[s_t] | x_{t−1}^[j]).
Note that when implementing this algorithm, one must take some measure to avoid floating-point underflow, such as representing the w_{t,j} by their logarithms.
Finally, the embedded HMM transition is completed by letting the new state sequence, x^(i), be equal to (x_0^[s_0], x_1^[s_1], . . . , x_{n−1}^[s_{n−1}]).
3 Proof of Correctness
To show that a Markov chain with these transitions will converge to π, we need to show that it leaves π invariant, and that the chain is ergodic. Ergodicity need not always hold, and proving that it does hold may require considering the particulars of the model. However, it is easy to see that the chain will be ergodic if all possible state sequences have non-zero probability density under π, the pool distributions, ρ_t, have non-zero density everywhere, and the transitions R_t are ergodic. This probably covers most problems that arise in practice.
To show that the transitions Q(· | ·) leave π invariant, it suffices to show that they satisfy detailed balance with respect to π. This will follow from the stronger condition that the probability of moving from x to x' (starting from a state picked from π) with given values for the J_t and given pools of candidate states, C_t, is the same as the corresponding probability of moving from x' to x with the same pools of candidate states and with values J'_t defined by J'_t = J_t − h_t, where h_t is the index (from −K+J_t+1 to J_t) of x'_t in the candidate pool.
The probability of such a move from x to x' is the product of several factors. First, there is the probability of starting from x under π, which is π(x). Then, for each time t, there is the probability of picking J_t, which is 1/K, and of then producing the states in the candidate pool using the transitions R_t and R̃_t, which is

∏_{j=1}^{J_t} R_t(x_t^[j] | x_t^[j−1]) × ∏_{j=−K+J_t+1}^{−1} R̃_t(x_t^[j] | x_t^[j+1])    (5)

= ∏_{j=0}^{J_t−1} R_t(x_t^[j+1] | x_t^[j]) × ∏_{j=−K+J_t+1}^{−1} (ρ_t(x_t^[j]) / ρ_t(x_t^[j+1])) R_t(x_t^[j+1] | x_t^[j])

= (ρ_t(x_t^[−K+J_t+1]) / ρ_t(x_t^[0])) ∏_{j=−K+J_t+1}^{J_t−1} R_t(x_t^[j+1] | x_t^[j])    (6)
Finally, there is the probability of picking x' from among all the sequences with states from the pools, C_t, which is proportional to π(x') / ∏_t ρ_t(x'_t). The product of all these factors is

π(x) × (1/K^n) × (π(x') / ∏_{t=0}^{n−1} ρ_t(x'_t)) × ∏_{t=0}^{n−1} [ (ρ_t(x_t^[−K+J_t+1]) / ρ_t(x_t^[0])) ∏_{j=−K+J_t+1}^{J_t−1} R_t(x_t^[j+1] | x_t^[j]) ]

= (1/K^n) × (π(x) π(x') / ∏_{t=0}^{n−1} ρ_t(x_t) ρ_t(x'_t)) × ∏_{t=0}^{n−1} [ ρ_t(x_t^[−K+J_t+1]) ∏_{j=−K+J_t+1}^{J_t−1} R_t(x_t^[j+1] | x_t^[j]) ]    (7)
We can now see that the corresponding expression for a move from x' to x is identical, apart from a relabelling of candidate state x_t^[j] as x_t^[j−h_t].
4 A simple demonstration
The following simple example illustrates the operation of the embedded HMM. The state space X and the observation space, Y, are both ℝ, and each observation is simply the state plus Gaussian noise of standard deviation σ, i.e., P(y_t | x_t) = N(y_t | x_t, σ²). The state transitions are defined by P(x_t | x_{t−1}) = N(x_t | tanh(η x_{t−1}), τ²), for some constant expansion factor η and transition noise standard deviation τ.
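For concreteness, a minimal sketch (ours, not from the paper) of sampling data from this model; the choice of initial-state distribution is our own assumption, since it is not specified here.

    import numpy as np

    def simulate_tanh_model(n, eta=2.5, sigma=2.5, tau=0.4, seed=0):
        rng = np.random.default_rng(seed)
        x = np.empty(n)
        x[0] = tau * rng.standard_normal()       # assumed initial distribution
        for t in range(1, n):                    # x_t ~ N(tanh(eta * x_{t-1}), tau^2)
            x[t] = np.tanh(eta * x[t-1]) + tau * rng.standard_normal()
        y = x + sigma * rng.standard_normal(n)   # y_t ~ N(x_t, sigma^2)
        return x, y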
Figure 1 shows a hidden state sequence, x_0, . . . , x_{n−1}, and observation sequence, y_0, . . . , y_{n−1}, generated by this model using η = 2.5, σ = 2.5, and τ = 0.4, with n = 1000. The state sequence stays in the vicinity of +1 or −1 for long periods, with rare switches between these regions. Because of the large observation noise, there is considerable uncertainty regarding the state sequence given the observation sequence, with the posterior distribution assigning fairly high probability to sequences that contain short-term switches between the +1 and −1 regions that are not present in the actual state sequence, or that lack some of the short-term switches that are actually present.
We sampled from this distribution over state sequences using an embedded HMM in which
the pool distributions, ρ_t, were normal with mean zero and standard deviation one, and the
pool transitions simply sampled independently from this distribution (ignoring the current
pool state). Figure 2 shows that after only two updates using pools of ten states, embedded
HMM sampling produces a state sequence with roughly the correct characteristics. Figure 3
demonstrates how a single embedded HMM update can make a large change to the state
sequence. It shows a portion of the state sequence after 99 updates, the pools of states
produced for the next update, and the state sequence found by the embedded HMM using
these pools. A large change is made to the state sequence in the region from time 840 to 870, with states in this region switching from the vicinity of −1 to the vicinity of +1.
This example is explored in more detail in [4], where it is shown that the embedded HMM
is superior to simple Metropolis methods that update one hidden state at a time.
5 Discretization without discretization error
A simple way to handle a model with a continuous state space is to discretize the space
by laying down a regular grid, after transforming to make the space bounded if necessary.
An HMM with grid points as states can then be built that approximates the original model.
Inference using this HMM is only approximate, however, due to the discretization error
involved in replacing the continuous space by a grid of points.
The embedded HMM can use a similar grid as a deterministic method of creating pools of
states, aligning the grid so that the current state lies on a grid point. This is a special case of
the general procedure for creating pools, in which ?t is uniform, Rt moves to the next grid
? t moves to the previous grid point, with both wrapping around when the first or
point and R
last grid point is reached. If the number of pool states is set equal to the number of points
in a grid, every pool will consist of a complete grid aligned to include the current state.
On their own, such embedded HMM updates will never change the alignments of the grids.
However, we can alternately apply such an embedded HMM update and some other MCMC
update (eg, Metropolis) which is capable of making small changes to the state. These small
changes will change the alignment of the new grids, since each grid is aligned to include the
current state. The combined chain will be ergodic, and sample (asymptotically) from the
correct distribution. This method uses a grid, but nevertheless has no discretization error.
We have tried this method on the example described above, laying the grid over the transformed state tanh(x_t), with suitably transformed transition densities. With K = 10, the grid method samples more efficiently than when using N(0, 1) pool distributions, as above.
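A sketch (ours) of such a deterministic pool: K evenly spaced points on a bounded interval, with the grid phase set by the current state so that the state itself is one of the points. For the example above, the construction would be applied to the transformed state tanh(x_t).

    import numpy as np

    def aligned_grid_pool(x_cur, K, lo=-1.0, hi=1.0):
        """K evenly spaced points on [lo, hi), aligned so x_cur lies on the grid."""
        step = (hi - lo) / K
        offset = (x_cur - lo) % step   # grid phase fixed by the current state
        return lo + offset + step * np.arange(K)

Because the offset is a deterministic function of the current state, these pools alone never change the grid alignment; alternating with a small Metropolis move, as described above, restores ergodicity.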
Figure 1: A state sequence (black dots) and observation sequence (gray dots) of length 1000 produced by the model with η = 2.5, σ = 2.5, and τ = 0.4.

Figure 2: The state sequence (black dots) produced after two embedded HMM updates, starting with the states set equal to the data points (gray dots), as in the figure above.

Figure 3: Closeup of an embedded HMM update. The true state sequence is shown by black dots and the observation sequence by gray dots. The current state sequence is shown by the dark line. The pools of ten states at each time used for the update are shown as small dots, and the new state sequence picked by the embedded HMM by the light line.
Figure 4: The four-second motion sequence used for the experiment, shown in three snapshots with streamers showing earlier motion. The left plot shows frames 1-59, the middle plot frames 59-91, and the right plot frames 91-121. There were 30 frames per second. The orthographic projection in these plots is the one seen by the model. (These plots were produced using Hertzmann and Brand's mosey program.)
6 Tracking human motion
We have applied the embedded HMM to the more challenging problem of tracking 3D
human motion from 2D observations of markers attached to certain body points. We constructed this example using real motion-capture data, consisting of the 3D positions at each
time frame of a set of identified markers. We chose one subject, and selected six markers
(on left and right feet, left and right hands, lower back, and neck). These markers were
projected to a 2D viewing plane, with the viewing direction being known to the model.
Figure 4 shows the four-second sequence used for the experiment.1
Our goal was to recover the 3D motion of the six markers, by using the embedded HMM
to generate samples from the posterior distribution over 3D positions at each time (the
hidden states of the model), given the 2D observations. To do this, we need some model
of human dynamics. As a crude approximation, we used Langevin dynamics with respect
to a simple hand-designed energy function that penalizes unrealistic body positions. In
Langevin dynamics, a gradient descent step in the energy is followed by the addition of
Gaussian noise, with variance related to the step size. The equilibrium distribution for this
dynamics is the Boltzmann distribution for the energy function. The energy function we
used contains terms pertaining to the pairwise distances between the six markers and to the
heights of the markers above the plane of the floor, as well as a term that penalizes bending
the torso far backwards while the legs are vertical. We chose the step size for the Langevin
dynamics to roughly match the characteristics of the actual data.
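A single Langevin update is one line of code; the sketch below (ours, with grad_energy standing in for the gradient of the hand-designed energy) uses the standard pairing of step size and noise variance under which, for small steps, the dynamics approximately equilibrate to the Boltzmann distribution exp(−E(x)).

    import numpy as np

    def langevin_step(x, grad_energy, eps, rng):
        # Gradient descent on the energy plus matched Gaussian noise (variance 2*eps).
        return x - eps * grad_energy(x) + np.sqrt(2.0 * eps) * rng.standard_normal(x.shape)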
The embedded HMM was initialized by setting the state at all times to a single frame of
the subject in a typical stance, taken from a different trial. As the pool distribution at time
t, we used the posterior distribution when using the Boltzmann distribution for the energy
as the prior and the single observation at time t. The pool transitions used were Langevin
updates with respect to this pool distribution.
For comparison, we also tried solving this problem with the particle smoother of [2], in
which a particle filter is applied to the data in time order, after which a state sequence is
selected at random in a backwards pass. We used a stratified resampling method to reduce
variance. The initial particle set was created by drawing frames randomly from sequences
other than the sequence being tested, and translating the markers in each frame so that their
centre of mass was at the same point as the centre of mass in the test sequence.
Both programs were implemented in M ATLAB. The particle smoother was run with 5000
particles, taking 7 hours of compute time. The resulting sampled trajectories roughly fit the
2D observations, but were rather unrealistic ? for instance, the subject?s feet often floated
above the floor. We ran the embedded HMM using five pool states for 300 iterations,
taking 1.7 hours of compute time. The resulting sampled trajectories were more realistic
1 Data from the graphics lab of Jessica Hodgins, at http://mocap.cs.cmu.edu. We chose
markers 167, 72, 62, 63, 31, 38, downsampled to 30 frames per second. The experiments reported
here use frames 400-520 of trial 20 for subject 14. The elevation of the view direction was 45 degrees,
and the azimuth was 45 degrees away from a front view of the person in the first frame.
than those produced by the particle smoother, and were quantitatively better with respect to
likelihood and dynamical transition probabilities. However, the distribution of trajectories
found did not overlap the true trajectory. The embedded HMM updates appeared to be
sampling from the correct posterior distribution, but moving rather slowly among those
trajectories that are plausible given the observations.
7 Conclusions
We have shown that the embedded HMM can work very well for a non-linear model with
a low-dimensional state. For the higher-dimensional motion tracking example, the embedded HMM has some difficulties exploring the full posterior distribution, due, we think,
to the difficulty of creating pool distributions with a dense enough sampling of states to
allow linking of new states at adjacent times. However, the particle smoother was even
more severely affected by the high dimensionality of this problem. The embedded HMM
therefore appears to be a promising alternative to particle smoothers in such contexts.
The idea behind the embedded HMM should also be applicable to more general tree-structured graphical models. A pool of values would be created for each variable in the
tree (which would include the current value for the variable). The fast sampling algorithm
possible for such an ?embedded tree? (a generalization of the sampling algorithm used for
the embedded HMM) would then be used to sample a new set of values for all variables,
choosing from all combinations of values from the pools.
Finally, while much of the elaboration in this paper is designed to create a Markov chain
whose equilibrium distribution is exactly the correct posterior, π(x), the embedded HMM idea can be also used as a simple search technique, to find a state sequence, x, which maximizes π(x). For this application, any method is acceptable for proposing pool states (though some proposals will be more useful than others), and the selection of a new state sequence from the resulting embedded HMM is done using a Viterbi-style dynamic programming algorithm that selects the trajectory through pool states that maximizes π(x). If the current state at each time is always included in the pool, this Viterbi procedure will always either find a new x that increases π(x), or return the current x again. This embedded
HMM optimizer has been successfully used to infer segment boundaries in a segmental
model for voicing detection and pitch tracking in speech signals [1], as well as in other
applications such as robot localization from sensor logs.
Acknowledgments. This research was supported by grants from the Natural Sciences and
Engineering Research Council of Canada, and by an Ontario Premier?s Research Excellence Award. Computing resources were provided by a CFI grant to Geoffrey Hinton.
References
[1] Achan, K., Roweis, S. T., and Frey, B. J. (2004) "A Segmental HMM for Speech Waveforms", Technical Report UTML-TR-2004-001, University of Toronto, January 2004.
[2] Doucet, A., Godsill, S. J., and West, M. (2000) "Monte Carlo filtering and smoothing with application to time-varying spectral estimation", Proc. IEEE International Conference on Acoustics, Speech and Signal Processing, 2000, volume II, pages 701-704.
[3] Neal, R. M. (1993) Probabilistic Inference Using Markov Chain Monte Carlo Methods, Technical Report CRG-TR-93-1, Dept. of Computer Science, University of Toronto, 144 pages. Available from http://www.cs.utoronto.ca/~radford.
[4] Neal, R. M. (2003) "Markov chain sampling for non-linear state space models using embedded hidden Markov models", Technical Report No. 0304, Dept. of Statistics, University of Toronto, 9 pages. Available from http://www.cs.utoronto.ca/~radford.
[5] Scott, S. L. (2002) "Bayesian methods for hidden Markov models: Recursive computing in the 21st century", Journal of the American Statistical Association, vol. 97, pp. 337-351.
1,531 | 2,392 | Plasticity Kernels and Temporal Statistics
Peter Dayan1, Michael Hausser2, Michael London1,2
1GCNU, 2WIBR, Dept of Physiology
UCL, Gower Street, London
[email protected]
{m.hausser,m.london}@ucl.ac.uk
Abstract
Computational mysteries surround the kernels relating the
magnitude and sign of changes in efficacy as a function of
the time difference between pre- and post-synaptic activity at
a synapse. One important idea34 is that kernels result from filtering, ie an attempt by synapses to eliminate noise corrupting
learning. This idea has hitherto been applied to trace learning
rules; we apply it to experimentally-defined kernels, using it to
reverse-engineer assumed signal statistics. We also extend it to
consider the additional goal for filtering of weighting learning
according to statistical surprise, as in the Z-score transform.
This provides a fresh view of observed kernels and can lead to
different, and more natural, signal statistics.
1 Introduction
Speculation and data that the rules governing synaptic plasticity should include a very special role for time 1,7,13,17,20,21,23,24,26,27,31,32,35 was spectacularly confirmed by a set of highly influential experiments 4,5,11,16,25 showing that the precise relative timing of pre-synaptic and post-synaptic action potentials governs the magnitude and sign of the resulting plasticity. These experimentally-determined rules (usually called spike-time dependent plasticity or STDP rules), which are constantly being refined, 18,30 have inspired substantial further theoretical work on their modeling and interpretation. 2,9,10,22,28,29,33 Figure 1(D1-G1)* depicts some of the main STDP findings, of which the best-investigated are shown in figure 1(D1;E1), and are variants of a 'standard' STDP rule. Earlier work considered rate-based rather than spike-based temporal rules, and so we adopt the broader term 'time dependent plasticity' or TDP. Note the strong temporal asymmetry in both the standard rules.
Although the theoretical studies have provided us with excellent tools for modeling the detailed consequences of different time-dependent rules, and understanding characteristics such as long-run stability and the relationship with
non-temporal learning rules such as BCM, 6 specifically computational ideas
about TDP are rather thinner on the ground. Two main qualitative notions
explored in various of the works cited above are that the temporal asymmetries in TDP rules are associated with causality or prediction. However, looking specifically at the standard STDP rules, models interested in prediction concentrate mostly on the LTP component and have difficulty explaining the precisely-timed nature of the LTD. Why should it be particularly detrimental to the weight of a synapse that the pre-synaptic action potential comes just after a post-synaptic action-potential, rather than 200ms later, for instance? In the case of time-difference or temporal difference rules, 29,32 why might the LTD component be so different from the mirror reflection of the LTP component (figure 1, column 1), at least short of being tied to some particular biophysical characteristic of the post-synaptic cell? We seek alternative computationally-sound interpretations.

*We refer to graphs in this figure by row and column.
Wallis & Baddeley34 formalized the intuition underlying one class of TDP rules
(the so-called trace-based rules, figure 1(A1)) in terms of temporal filtering. In
their model, the actual output is a noisy version of a 'true' underlying signal.
They suggested, and showed in an example, that learning proceeds more proficiently if the output is filtered by an optimal noise-removal filter (in their case,
a Wiener filter) before entering into the learning rule. This is like using a prior
over the signal, and performing learning based on the (mean) of the posterior
over the signal given the observations (ie the output). If objects in the world
normally persist for substantial periods, then, under some reasonable assumptions about noise, it turns out to be appropriate to apply a low-pass filter to
the output. One version of this leads to a trace-like learning rule.
Of course, as seen in column 1 of figure 1, TDP rules are generally not trace-like. Here, we extend the Wallis-Baddeley (WB) treatment to rate-based versions of the actual rules shown in the figure. We consider two possibilities, which infer optimal signal models from the rules, based on two different assumptions about their computational role. One continues to regard them as Wiener filters. The other, which is closely related to recent work on adaptation and modulation, 3,8,15,36 has the kernel normalize frequency components according to their standard deviations, as well as removing noise. Under this interpretation, the learning signal is a Z-score-transformed version of the output.
In section 2, we describe the WB model. In section 3, we extend this model to
the case of the observed rules for synaptic plasticity.
2 Filtering
Consider a set of pre-synaptic inputs i ∈ {1 . . . n} with firing rates x_i(t) at time t to a neuron with output rate y(t). A general TDP plasticity rule suggests that synaptic weight w_i should change according to the correlation between input x_i(t) and output y(t), through the medium of a temporal filter φ(s):

Δw_i ∝ ∫dt x_i(t) {∫dt' y(t') φ(t−t')} = ∫dt' y(t') {∫dt x_i(t) φ(t−t')}    (1)
Provided the temporal filters for each synapse on a single post-synaptic cell
are the same, equation 1 indicates that pre-synaptic and post-synaptic filtering
have essentially the same effect.
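Since equation 1 is just a correlation of each input with a filtered copy of the output, it reduces to a convolution followed by a dot product on a discrete time grid. A minimal numpy sketch (ours, not from the paper), assuming the kernel array phi is sampled on the same grid and centred at lag zero:

    import numpy as np

    def delta_w(x, y, phi, dt):
        """Weight changes under equation 1, up to its proportionality constant.

        x:   (n_inputs, T) pre-synaptic rates x_i(t)
        y:   (T,) post-synaptic rate y(t)
        phi: (L,) plasticity kernel, centred at lag 0 (L odd is simplest)
        """
        filtered = np.convolve(y, phi, mode="same") * dt  # ~ integral dt' y(t') phi(t - t')
        return dt * (x @ filtered)                        # Delta w_i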
WB 34 consider the case that the output can be decomposed as y(t) = s(t) + n(t), where s(t) is a 'true' underlying signal and n(t) is noise corrupting the signal. They suggest defining the filter so that ŝ(t) = ∫dt' y(t') φ(t−t') is the optimal least-squares estimate of the signal. Thus, learning would be based on the best available information about the signal s(t). If signal and noise are statistically stationary signals, with power spectra |S(ω)|² and |N(ω)|² respectively at (temporal) frequency ω, then the magnitude of the Fourier transform
of the (Wiener) filter is

|Φ(ω)| = |S(ω)|² / (|S(ω)|² + |N(ω)|²)    (2)

Figure 1: Time-dependent plasticity rules. The rows are for various suggested rules (A; 17 B; 23 D; 25 E; 16 F; 2 G; 14 from Abbott & Nelson 2); the columns show: (1) the kernels in time t; (2) their temporal power spectra as a function of frequency ω; (3) signal power S(ω) as a function of ω assuming the kernels are derived from the underlying Wiener filter; (4) signal power S(ω) assuming the kernels are derived from the noise-removal and whitening filter. Different kernels also have different phase spectra. See text for more details. The ordinates of the plots have been individually normalized; but the abscissae for all the temporal (t) plots and, separately, all the spectral (ω) plots, are the same, for the purposes of comparison. Numerical scales are omitted to focus on structural characteristics. In the text, we refer to individual graphs in this figure by their row letter (A-G) and column number (1-4).
Any filter with this spectral tuning will eliminate noise as best as possible;
the remaining freedom lies in choosing the phases of the various frequency
components. Following Foldiak, 17 WB suggest using a causal filter for y(t), with φ(t−t') = 0 for t < t'. This means that the input x_i(t) at time t is correlated with weighted values of y(t') for times t' ≤ t only. In fact, WB derive the optimal acausal filter and take its causal half, which is not necessarily the same thing. Interestingly, the forms of TDP that have commonly been used in reinforcement learning 23,31,32 consider purely acausal filters for y(t) (such that x_i(t) is correlated with future values of the output), and therefore use exactly the opposite condition on the filter, namely that φ(t−t') = 0 for t > t'.
In the context of input coming from visually presented objects, WB suggest using white noise, N(ω) = N ∀ω, and consider two possibilities for S(ω), based on the assumptions that objects persist for either fixed, or randomly variable, lengths of time. We summarize their main result in the first three rows of figure 1. Figure 1(A3) shows the assumed, scale-free, magnitude spectrum |S(ω)| = 1/ω for the signal. Figure 1(A1) shows the (truly optimal) purely causal version of the filter that results - it can be shown to involve exactly an exponential decay, with a rate constant which depends on the level of the noise N. In WB's self-supervised setting, it is rather unclear a priori whether the assumption of white noise is valid; WB's experiments bore it out to a rough approximation, and showed that the filter of figure 1(A1) worked well on a task involving digit representation and recognition.
Figure 1(B1;B3) repeat the analysis, with the same signal spectrum, but for the optimal purely acausal filter as used in reinforcement learning's synaptic eligibility traces. Of course, the true TDP kernels (shown in figure 1(D1-G1)) are neither purely causal nor acausal; figure 1(C1) shows the normal low-pass filter that results from assuming phase 0 for all frequency components.
Although the WB filter of figure 1(C1) somewhat resembles a Hebbian version of the anti-Hebbian rule for layer IV spiny stellate cells shown in figure 1(G1),
it is clearly not a good match for the standard forms of TDP. One might also
question the relationship between the time constants of the kernels and the
signal spectrum that comes from object persistence. The next section considers two alternative possibilities for interpreting TDP kernels.
3 Signalling and Whitening
in synaptic plasticity with the actual forms of the kernels that have been revealed in the experiments. Under two different models for the computational
goal of filtering, we work back from the experimental kernels to the implied
forms of the statistics of the signals. The first method employs WB's Wiener
filtering idea. The second method can be seen as using a more stringent definition of statistical significance.
Figure 2: Kernel manipulation. A) The phase spectrum (ie kernel phase as a function of frequency) for the kernel (shown in figure 1(E1)) with asymmetric LTP and LTD. 16 B) The kernel that results from the power spectrum of figure 1(E2) but constant phase −π/2. This kernel has symmetric LTP and LTD, with an intermediate time constant. C) Plasticity kernel that is exactly a difference of two Gaussians (DoG; compare figure 1(F1)). D) White (solid; from equation 4) and Wiener (dashed; from equation 3) signal spectra derived from the DoG kernel in (C). Here, the signal spectrum in the case of whitening has been vertically displaced so it is clearer. Both signal spectra show clear periodicities.
3.1 Reverse engineering signals from Wiener filtering
Accepting equation 2 as the form of the filter (note that this implies that |Φ(ω)| ≤ 1), and, with WB, making the assumption that the noise is white, so |N(ω)| = N ∀ω, the assumed amplitude spectrum of the signal process s(t) is

|S(ω)| = N √( |Φ(ω)| / (1 − |Φ(ω)|) ).    (3)
Importantly, the assumed power of the noise does not affect the form of the
signal power, it only scales it.
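Numerically, equation 3 amounts to taking the Fourier magnitude of a sampled kernel, fixing its arbitrary overall scale (0.5 at the peak in the sketch below, matching the choice made in the next paragraph), and reading off |S(ω)|. A sketch (ours, not from the paper):

    import numpy as np

    def wiener_signal_spectrum(phi, dt, peak=0.5, N=1.0):
        """|S(w)| implied by kernel phi under the Wiener reading (equation 3)."""
        Phi = np.abs(np.fft.rfft(phi)) * dt
        Phi *= peak / Phi.max()               # fix the arbitrary scale of |Phi(w)|
        S = N * np.sqrt(Phi / (1.0 - Phi))    # equation 3; N only scales the result
        return np.fft.rfftfreq(len(phi), d=dt), S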
Figure 1(D2-G2) shows the magnitude of the Fourier transform of the experimental kernels (which are shown in figure 1(D1-G1)), and figure 1(D3-G3) shows the implied signal spectra. Since there is no natural data that specifies the absolute scale of the kernels (ie the maximum value of |Φ(ω)|), we set it arbitrarily to 0.5. Any value less than about 0.9 leads to similar predictions for the signal spectra. We can relate figure 1(D3-G3) to the heuristic criteria mentioned above for the signal power spectrum. In two cases (D3;F3), the clear peaks in the signal power spectra imply strong periodicities. For layer V pyramids (D3), the time constant for the kernel is about 20ms, implying a peak frequency of ω = 50Hz in the γ band. In the hippocampal case, the frequency may be a little lower. Certainly, the signal power spectra underlying the different kernels have quite different forms.
3.2 Reverse engineering signals from whitening
WB's suggestion that the underlying signal s(t) should be extracted from the
output y (t) far from exhausts the possibilities for filtering. In particular, there
have been various suggestions 36 that learning should be licensed by statistical
surprise, ie according to how components of the output differ from expectations. A simple form of this that has gained recent currency is the Z-score
transformation, 8,15,36 which implies considering components of the signal in
units of (ie normalized by) their standard deviations. Mechanistically, this is
closely related to whitening in the face of input noise, but with a rather different computational rationale.
A simple formulation of a noise-sensitive Z-score is Dong & Atick's 12 whitening
filter. Under the same formulation as WB (equation 2), this suggests multiplying the Wiener filter by 1/|S(ω)|, giving

|Φ(ω)| = |S(ω)| / (|S(ω)|² + N(ω)²).    (4)
As in equation 3, it is possible to solve for the signal power spectra implied by the various kernels. The 4th column of figure 1 shows the result of doing this for the experimental kernels. In particular, it shows that the clear spectral peaks suggested by the Wiener filter (in the 3rd column) may be artefactual: they can arise from a form of whitening. Unlike the case of Wiener filtering, the signal statistics derived from the assumption of whitening have the common characteristic of monotonically decreasing signal powers as a function of frequency ω, which is a common finding for natural scene statistics, for instance.
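Solving equation 4 for |S(ω)| gives a quadratic, |Φ| |S|² − |S| + |Φ| N² = 0, with roots |S| = (1 ± √(1 − 4|Φ|²N²)) / (2|Φ|); which root applies is ambiguous frequency by frequency. The sketch below is ours, under two assumptions not stated in the text: the kernel is rescaled so that its spectral peak equals 1/(2N), the largest value equation 4 permits, and the larger root is taken below the peak and the smaller one above it, so that |S(ω)| decreases with frequency.

    import numpy as np

    def whitening_signal_spectrum(phi, dt, N=1.0):
        """|S(w)| implied by kernel phi under the whitening reading (equation 4)."""
        Phi = np.abs(np.fft.rfft(phi)) * dt
        Phi *= 1.0 / (2.0 * N * Phi.max())    # scale so max |Phi(w)| = 1/(2N)
        Phi = np.maximum(Phi, 1e-12)          # avoid division by zero at w = 0
        disc = np.sqrt(np.maximum(1.0 - 4.0 * Phi**2 * N**2, 0.0))
        big = (1.0 + disc) / (2.0 * Phi)      # |S| > N branch
        small = (1.0 - disc) / (2.0 * Phi)    # |S| < N branch
        k = Phi.argmax()                      # switch branches at the spectral peak
        S = np.concatenate([big[:k], small[k:]])
        return np.fft.rfftfreq(len(phi), d=dt), S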
The case of the layer V pyramids 25 (row D in figure 1) is particularly clear. If the time constants of potentiation (LTP) and depression (LTD) are τ, and LTP and LTD are matched, then the Fourier transform of the plasticity kernel is

Φ(ω) = (1/√(2π)) (1/(iω + 1/τ) + 1/(iω − 1/τ)) = −i √(2/π) ω/(ω² + 1/τ²) = −i √(2/π) ωτ²/(1 + ω²τ²)    (5)

which is exactly the form of equation 4 for S(ω) = 1/ω (which is duly shown in figure 1(D4)). Note the factor of −i in Φ(ω). This is determined by the phases of the frequency components, and comes from the anti-symmetry of the kernel. The phase of the components (∠Φ(ω) = −π/2, by one convention) implies the predictive nature of the kernel: x_i(t) is being correlated with led (ie future) values of noise-filtered, significance-normalized, outputs.
The other cases in figure 1 follow in a similar vein. Row E, from cortical layer II/III, with its asymmetry between LTP and LTD, has similar signal statistics, but with an extra falloff constant ω_0, making S(ω) = 1/(ω + ω_0). Also, it has a phase spectrum ∠Φ(ω) which is not constant with ω (see figure 2A). Row F, from hippocampal GABAergic cells in culture, has a form that can arise from an exponentially decreasing signal power and little assumed noise (small N(ω)). Conversely, row G, in cortical layer IV spiny-stellate cells, arises from the same signal statistics, but with a large noise term N(ω). Unlike the case of the Wiener filter (equation 3), the form of the signal statistics, and not just their magnitude, depends on the amount of assumed noise.
Figure 2B-C show various aspects of how these results change with the parameters or forms of the kernels. Figure 2B shows that coupling the power spectrum (of figure 1(E2)) for the rule with asymmetric LTP and LTD with a constant phase spectrum (−π/2) leads to a rule with the same filtering characteristic, but with symmetric LTP and LTD. The phase spectrum concerns the predictive relationship between pre- and post-synaptic frequency components; it will be interesting to consider the kernels that result from other temporal relationships between pre- and post-synaptic activities. Figure 2C shows the kernel generated as a difference of two Gaussians (DoG). Although this kernel resembles that of figure 1(F1), the signal spectra (figure 2D) calculated on the basis of whitening (solid; vertically displaced) or Wiener filtering (dashed) are similar to each other, and both involve strong periodicity near the spectral peak of the kernel.
4 Discussion
theoretical treatments. We followed the suggestion that the kernels indicate
that learning is not based on simple correlation between pre- and post -synaptic
activity, but rather involves filtering in the light of prior information, either to
remove noise from the signals (Wiener filtering), or to remove noise and boost
components of the signals according to their statistical significance.
Adopting this view leads to new conclusions about the kernels, for instance revealing how the phase spectrum differentiates rules with symmetric and asymmetric potentiation and depression components (compare figures 1(E1); 2B).
Making some further assumptions about the characteristics of the assumed
noise, it permits us to reverse engineer the assumed statistics of the signals, ie
to give a window onto the priors at synapses or cells (columns 3;4 of figure 1).
Structural features in these signal statistics, such as strong periodicities, may
be related to experimentally observable characteristics such as oscillatory activity in relevant brain regions. Most importantly, on this view, the detailed
characteristics of the filtering might be expected to adapt in the light of patterns of activity. This suggests the straightforward experimental test of manipulating the input and/or output statistics and recording the consequences.
Various characteristics of the rules bear comment. Since we wanted to focus on
structural features of the rules, the graphs in the figures all lack precise time
or frequency scales. In some cases we know the time constants of the kernels,
and they are usually quite fast (on the order of tens of milliseconds). This can
suggest high frequency spectral peaks in assumed signal statistics. However, it
also hints at the potential inadequacy of our rate-based treatment that we have given, and suggests the importance of a spike-based treatment. 22,30 Recent
evidence that successive pairs of pre- and post-synaptic spikes do not interact
additively in determining the magnitude and direction of plasticity 18 makes the
averaging inherent in the rate-based approximation less appealing. Further,
we commented at the outset that pre- and post-synaptic filtering have similar
effects, provided that all the filters on one post-synaptic cell are the same. If
they are different, then synapses might well be treated as individual filters,
ascertaining important signals for learning. In our framework, it is interesting
to speculate about the role of (pre-)synaptic depression itself as a form of noise
filter (since noise should be filtered before it can affect the activity of the postsynaptic cell, rather than just its plasticity); leaving the kernel as a significance
filter, as in the whitening treatment. Finally, largely because of the separate
roles of signal and noise, we have been unable to think of a simple experiment
that would test between Wiener and whitening filtering. However, it is a quite
critical issue in further exploring computational accounts of plasticity.
Acknowledgements
We are very grateful to Odelia Schwartz for helpful discussions. Funding was
from the Gatsby Charitable Foundation, the Wellcome Trust (MH) and an HFSP
Long Term Fellowship (ML).
References
[1] Abbott, LF & Blum, KI (1996) Functional significance of long-term potentiation for sequence learning
and prediction. Cerebral Cortex 6:406-416.
[2] Abbott, LF & Nelson, SB (2000) Synaptic plasticity: taming the beast. Nature Neuroscience 3:1178-1183.
[3] Atick, JJ, Li, Z & Redlich, AN (1992) Understanding retinal color coding from first principles. Neural
Computation 4:559-572.
[4] Bell, CC, Han, VZ, Sugawara, Y & Grant K (1997) Synaptic plasticity in a cerebellum-like structure
depends on temporal order. Nature 387:278-81.
[5] Bi, GQ & Poo, MM (1998) Synaptic modifications in cultured hippocampal neurons: dependence on
spike timing, synaptic strength, and postsynaptic cell type. Journal of Neuroscience 18:10464-10472.
[6] Bienenstock, EL, Cooper, IN, & Munro, PW (1982) Theory for the development of neuron selectivity:
Orientation specificity and binocular interaction in visual cortex. Journal of Neuroscience 2:32-48.
[7] Blum, KI, & Abbott, LF (1996) A model of spatial map formation in the hippocampus of the rat. Neural
Computation 8:85-93.
[8] Buiatti, M & Van Vreeswijk, C (2003) Variance normalisation: a key mechanism for temporal adaptation
in natural vision? Vision Research, in press.
[9] Cateau, H & Fukai, T (2003) A stochastic method to predict the consequence of arbitrary forms of
spike-timing-dependent plasticity. Neural Computation 15:597-620.
[10] Chechik, G (2003) Spike time dependent plasticity and information maximization. Neural Computation
in press.
[11] Debanne, D, Gahwiler, BH & Thompson, SM (1998) Long-term synaptic plasticity between pairs of
individual CA3 pyramidal cells in rat hippocampal slice cultures. Journal of Physiology 507:237-247.
[12] Dong, DW, & Atick, JJ (1995) Temporal decorrelation: A theory of lagged and nonlagged responses in
the lateral geniculate nucleus. Network: Computation in Neural Systems 6:159-178.
[13] Edelman, S & Weinshall, D (1991) A self-organizing multiple-view representation of 3D objects. Biological Cybernetics 64:209-219.
[14] Egger, V, Feldmeyer, D & Sakmann, B (1999) Coincidence detection and changes of synaptic efficacy in
spiny stellate neurons in rat barrel cortex. Nature Neuroscience 2:1098-1105.
[15] Fairhall, AL, Lewen, GD, Bialek, W & de Ruyter Van Steveninck, RR (2001) Efficiency and ambiguity in
an adaptive neural code. Nature 412:787-792.
[16] Feldman, DE (2000) Timing-based LTP and LTD at vertical inputs to layer II/III pyramidal cells in rat
barrel cortex. Neuron 27:45-56.
[17] Foldiak, P (1991) Learning invariance from transformed sequences. Neural Computation 3:194-200.
[18] Froemke, RC & Dan, Y (2002) Spike-timing-dependent synaptic modification induced by natural spike
trains. Nature 416:433-438.
[19] Ganguly K, Kiss, L & Poo, M (2000) Enhancement of presynaptic neuronal excitability by correlated
presynaptic and postsynaptic spiking. Nature Neuroscience 3:1018-1026.
[20] Gerstner, W & Abbott, LF (1997) Learning navigational maps through potentiation and modulation of
hippocampal place cells. Journal of Computational Neuroscience 4:79-94.
[21] Gerstner, W, Kempter, R, van Hemmen, JL & Wagner, H (1996) A neuronal learning rule for submillisecond temporal coding. Nature 383:76-81.
[22] Gerstner, W & Kistler, WM (2002) Mathematical formulations of Hebbian learning. Biological Cybernetics 87:404-415.
[23] Hull, CL (1943) Principles of Behavior. New York, NY: Appleton-Century.
[24] Levy, WB & Steward, O (1983) Temporal contiguity requirements for long-term associative potentiation/depression in the hippocampus. Neuroscience 8:791-797.
[25] Markram, H, Lubke, J, Frotscher, M & Sakmann, B (1997) Regulation of synaptic efficacy by coincidence
of postsynaptic APs and EPSPs. Science 275:213-215.
[26] Minai, AA, & Levy, WB (1993) Sequence learning in a single trial. International Neural Network Society
World Congress of Neural Networks II. Portland, OR: International Neural Network Society, 505-508.
[27] Pavlov, IP (1927) Conditioned Reflexes. Oxford, England: OUP.
[28] Porr, B & Worgotter, F (2003) Isotropic sequence order learning. Neural Computation 15:831-864.
[29] Rao, RP & Sejnowski, TJ (2001) Spike-timing-dependent Hebbian plasticity as temporal difference learning. Neural Computation 13:2221-2237.
[30] Sjostrom, PJ, Turrigiano, GG & Nelson, SB (2001) Rate, timing, and cooperativity jointly determine
cortical synaptic plasticity. Neuron 32:1149-1164.
[31] Sutton, RS (1988) Learning to predict by the methods of temporal difference. Machine Learning 3:9-44.
[32] Sutton, RS & Barto, AG (1981) Toward a modern theory of adaptive networks: Expectation and prediction. Psychological Review 88:135-170.
[33] van Rossum, MC, Bi, GQ & Turrigiano, GG (2000) Stable Hebbian learning from spike timing-dependent plasticity. Journal of Neuroscience 20:8812-8821.
[34] Wallis, G & Baddeley, R (1997) Optimal, unsupervised learning in invariant object recognition. Neural
Computation 9:883-894.
[35] Wallis, G & Rolls, ET (1997) Invariant face and object recognition in the visual system. Progress in Neurobiology 51:167-194.
[36] Yu, AJ & Dayan, P (2003) Expected and unexpected uncertainty: ACh & NE in the neocortex. In NIPS
2002 Cambridge, MA: MIT Press.
1,532 | 2,393 | Perspectives on Sparse Bayesian Learning
David Wipf, Jason Palmer, and Bhaskar Rao
Department of Electrical and Computer Engineering
University of California, San Diego, CA 92092
dwipf,[email protected], [email protected]
Abstract
Recently, relevance vector machines (RVM) have been fashioned from a
sparse Bayesian learning (SBL) framework to perform supervised learning using a weight prior that encourages sparsity of representation. The
methodology incorporates an additional set of hyperparameters governing the prior, one for each weight, and then adopts a specific approximation to the full marginalization over all weights and hyperparameters.
Despite its empirical success however, no rigorous motivation for this
particular approximation is currently available. To address this issue, we
demonstrate that SBL can be recast as the application of a rigorous variational approximation to the full model by expressing the prior in a dual
form. This formulation obviates the necessity of assuming any hyperpriors and leads to natural, intuitive explanations of why sparsity is achieved
in practice.
1
Introduction
In an archetypical regression situation, we are presented with a collection of N regressor/target pairs {?i ? <M , ti ? <}N
i=1 and the goal is to find a vector of weights w such
that, in some sense,
ti ? ?Ti w, ?i or t ? ?w,
(1)
T
where t , [t1 , . . . , tN ] and ? , [?1 , . . . , ?N ]T ? <N ?M . Ideally, we would like to
learn this relationship such that, given a new training vector ?? , we can make accurate
predictions of t? , i.e., we would like to avoid overfitting. In practice, this requires some
form of regularization, or a penalty on overly complex models.
Recently, a sparse Bayesian learning (SBL) framework has been derived to find robust solutions to (1) [3, 7]. The key feature of this development is the incorporation of a prior on the
weights that encourages sparsity in representation, i.e., few non-zero weights. When ? is
square and formed from a positive-definite kernel function, we obtain the relevance vector
machine (RVM), a Bayesian competitor of SVMs with several significant advantages.
1.1
Sparse Bayesian Learning
Given a new regressor vector ?? , the full Bayesian treatment of (1) involves finding the
predictive distribution p(t? |t).1 We typically compute this distribution by marginalizing
1
For simplicity, we omit explicit conditioning on ? and ?? , i.e., p(t? |t) ? p(t? |t, ?, ?? ).
over the model weights, i.e.,
Z
1
p(t? |w)p(w, t)dw,
(2)
p(t)
where the joint density p(w, t) = p(t|w)p(w) combines all relevant information from the
training data (likelihood principle) with our prior beliefs about the model weights. The
likelihood term p(t|w) is assumed to be Gaussian,
?
?
1
(3)
p(t|w) = (2?? 2 )?N/2 exp ? 2 kt ? ?wk2 ,
2?
p(t? |t) =
where for now we assume that the noise variance ? 2 is known. For sparse priors p(w)
(possibly improper), the required integrations, including the computation of the normalizing term p(t), are typically intractable, and we are forced to accept some form of approximation to p(w, t).
Sparse Bayesian learning addresses this issue by introducing a set of hyperparameters into
the specification of the problematic weight prior p(w) before adopting a particular approximation. The key assumption is that p(w) can be expressed as
M
M Z
Y
Y
p(w) =
p(wi ) =
p(wi |?i )p(?i )d?i ,
(4)
i=1
i=1
where ? = [?1 , . . . , ?M ]T represents a vector of hyperparameters, (one for each weight).
The implicit SBL derivation presented in [7] can then be reformulated as follows,
Z
1
p(t? |t) =
p(t? |w)p(t|w)p(w)dw
p(t)
Z Z
1
=
p(t? |w)p(t|w)p(w|?)p(?)dwd?.
(5)
p(t)
Proceeding further, by applying Bayes? rule to this expression, we can exploit the plugin
rule [2] via,
Z Z
p(?|t)
dwd?
p(t? |t) =
p(t? |w)p(t|w)p(w|?)
p(t|?)
Z Z
?(?M AP )
dwd?
?
p(t? |w)p(t|w)p(w|?)
p(t|?)
Z
1
=
p(t? |w)p(w, t; ?M AP )dw.
(6)
p(t; ?M AP )
The essential difference from (2) is that we have replaced p(w, t) with the approximate
distribution
p(w, t; ?M AP ) = p(t|w)p(w; ?M AP ). Also, the normalizing term becomes
R
p(w, t; ?M AP )dw and we assume that all required integrations can now be handled in
closed form. Of course the question remains, how do we structure this new set of parameters ? to accomplish this goal? The answer is that the hyperparameters enter as weight
prior variances of the form,
p(wi |?i ) = N (0, ?i ).
(7)
The hyperpriors are given by,
p(?i?1 ) ? ?i1?a exp(?b/?i ),
(8)
where a, b > 0 are constants. The crux of the actual learning procedure presented in [7]
is to find some MAP estimate of ? (or more accurately, a function of ?). In practice, we
find that many of the estimated ?i ?s converge to zero, leading to sparse solutions since
the corresponding weights, and therefore columns of ?, can effectively be pruned from
the model. The Gaussian assumptions, both on p(t|w) and p(w; ?), then facilitate direct,
analytic computation of (6).
1.2
Ambiguities in Current SBL Derivation
Modern Bayesian analysis is primarily concerned with finding distributions and locations of
significant probability mass, not just modes of distributions, which can be very misleading
in many cases [6]. With SBL, the justification for the additional level of sophistication
(i.e., the inclusion of hyperparameters) is that the adoption of the plugin rule (i.e., the
approximation p(w, t) ? p(w, t; ?M AP )) is reflective of the true mass, at least sufficiently
so for predictive purposes. However, no rigorous motivation for this particular claim is
currently available nor is it immediately obvious exactly how the mass of this approximate
distribution relates to the true mass.
A more subtle difficulty arises because MAP estimation, and hence the plugin rule, is not
invariant under a change in parameterization. Specifically, for an invertible function f (?),
[f (?)]M AP 6= f (?M AP ).
(9)
Different transformations lead to different modes and ultimately, different approximations
to p(w, t) and therefore p(t? |t). So how do we decide which one to use? The canonical
form of SBL, and the one that has displayed remarkable success in the literature, does not
in fact find a mode of p(?|t), but a mode of p(? log ?|t). But again, why should this mode
necessarily be more reflective of the desired mass than any other?
As already mentioned, SBL often leads to sparse results in practice, namely, the approximation p(w, t; ?M AP ) is typically nonzero only on a small subspace of M -dimensional w
space. The question remains, however, why should an approximation to the full Bayesian
treatment necessarily lead to sparse results in practice?
To address all of these ambiguities, we will herein demonstrate that the sparse Bayesian
learning procedure outlined above can be recast as the application of a rigorous variational
approximation to the distribution p(w, t).2 This will allow us to quantify the exact relationship between the true mass and the approximate mass of this distribution. In effect, we
will demonstrate that SBL is attempting to directly capture significant portions of the probability mass of p(w, t), while still allowing us to perform the required integrations. This
framework also obviates the necessity of assuming any hyperprior p(?) and is independent
of the (subjective) parameterization (e.g., ? or ? log ?, etc.). Moreover, this perspective
leads to natural, intuitive explanations of why sparsity is observed in practice and why, in
general, this need not be the case.
2
A Variational Interpretation of Sparse Bayesian Learning
To begin, we review that the ultimate goal of this analysis is to find a well-motivated approximation to the distribution
Z
Z
p(t? |t; H) ? p(t? |w)p(w, t; H)dw = p(t? |w)p(t|w)p(w; H)dw,
(10)
where we have explicitly noted the hypothesis of a model with a sparsity inducing (possibly
improper) weight prior by H. As already mentioned, the integration required by this form is
analytically intractable and we must resort to some form of approximation. To accomplish
this, we appeal to variational methods to find a viable approximation to p(w, t; H) [5].
We may then substitute this approximation into (10), leading to tractable integrations and
analytic posterior distributions. To find a class of suitable approximations, we first express
p(w; H) in its dual form by introducing a set of variational parameters. This is similar to a
procedure outlined in [4] in the context of independent component analysis.
2
We note that the analysis in this paper is different from [1], which derives an alternative SBL
algorithm based on variational methods.
2.1
Dual Form Representation of p(w; H)
At the heart of this methodology is the ability to represent a convex function in its dual
form. For example, given a convex function f (y) : < ? <, the dual form is given by
f (y) = sup [?y ? f ? (?)] ,
(11)
?
where f ? (?) denotes the conjugate function. Geometrically, this can be interpreted as
representing f (x) as the upper envelope or supremum of a set of lines parameterized by ?.
The selection of f ? (?) as the intercept term ensures that each line is tangent to f (y). If we
drop the maximization in (11), we obtain the bound
f (y) ? ?y ? f ? (?).
(12)
Thus, for any given ?, we have a lower bound on f (y); we may then optimize over ? to
find the optimal or tightest bound in a region of interest.
To apply this theory to the problem at hand, we specify the form for our sparse prior
QM
p(w; H) = i=1 p(wi ; H). Using (7) and (8), we obtain the prior
??(a+1/2)
?
Z
w2
,
(13)
p(wi ; H) = p(wi |?i )p(?i )d?i = C b + i
2
which for a, b > 0 is proportional to a Student-t density. The constant C is not chosen to
enforce proper normalization; rather, it is chosen to facilitate the variational analysis below.
Also, this density function can be seen to encourage sparsity since it has heavy tails and a
sharp peak at zero. Clearly p(wi ; H) is not convex in wi ; however, if we let yi , wi2 as
suggested in [5] and define
?
yi ?
,
(14)
f (yi ) , log p(wi ; H) = ?(a + 1/2) log C b +
2
we see that we now have a convex function in yi amenable to dual representation. By
computing the conjugate function f ? (yi ), constructing the dual, and then transforming
back to p(wi ; H), we obtain the representation (see Appendix for details)
?
?
?
?
?
?
b
w2
?i?a .
(15)
p(wi ; H) = max (2??i )?1/2 exp ? i exp ?
?i ?0
2?i
?i
As a, b ? 0, it is readily apparent from (15) that what were straight lines in the yi domain
are now Gaussian functions with variance ?i in the wi domain. Figure 1 illustrates this
connection. When we drop the maximization, we obtain a lower bound on p(wi ; H) of the
form
?
?
?
?
2
? , (2??i )?1/2 exp ? wi exp ? b ? ?a ,
p(wi ; H) ? p(wi ; H)
(16)
i
2?i
?i
which serves as our approximate prior to p(w; H). From this relationship, we see that
? does not integrate to one, except in the special case when a, b ? 0. We will now
p(wi ; H)
? or more accurately H(?),
?
incorporate these results into an algorithm for finding a good H,
since each candidate hypothesis is characterized by a different set of variational parameters.
2.2
Variational Approximation to p(w, t; H)
So now that we have a variational approximation to the problematic weight prior, we must
return to our original problem of estimating p(t? |t; H). Since the integration is intractable
? using p(w, t; H)
? =
under model hypothesis H, we will instead compute p(t? |t; H)
?
?
p(t|w)p(w; H), with p(w; H) defined as in (16). How do we choose this approximate
1
0.5
0.9
0
0.8
p (wi ; H)
Density
Log Density
0.7
?0.5
?1
?1.5
0.6
lower
? bounds
?
0.5
?
p wi ; H
0.4
0.3
0.2
?2
0.1
?2.5
0
0.5
1
1.5
yi
2
2.5
0
?5
3
?4
?3
?2
?1
0
wi
1
2
3
4
5
Figure 1: Variational approximation example in both yi space and wi space for a, b ? 0.
Left: Dual forms in yi space. The solid line represents the plot of f (yi ) while the dotted
lines represent variational lower bounds in the dual representation for three different values
of ?i . Right: Dual forms in wi space. The solid line represents the plot of p(wi ; H) while
the dotted lines represent Gaussian distributions with three different variances.
? are distinguished by a different set of varimodel? In other words, given that different H
ational parameters ?, how do we choose the most appropriate ?? Consistent with modern
Bayesian analysis, we concern ourselves not with matching modes of distributions, but
? we would
with aligning regions of significant probability mass. In choosing p(w, t; H),
therefore like to match, where possible, significant regions of probability mass in the true
? by minimizing
model p(w, t; H). For a given t, an obvious way to do this is to select H
the sum of the misaligned mass, i.e.,
Z ?
?
?
?
? ?? dw
H
=
arg min
?p(w, t; H) ? p(w, t; H)
?
H
Z
?
=
arg max
p(t|w)p(w; H)dw,
(17)
?
H
where the variational assumptions have allowed us to remove the absolute value (since
the argument must always be positive). Also, we note that (17) is tantamount to selecting
the variational approximation with maximal Bayesian evidence [6]. In other words, we
? out of a class of variational approximations to H, that most probably
are selecting the H,
explains the training data t, marginalized over the weights.
From an implementational standpoint, (17) can be reexpressed using (16) as,
Z
M
?
?
Y
? i ) dw
p wi ; H(?
? = arg max log p(t|w)
?
=
i=1
?
M ?
? X
b
1?
T ?1
? ? a log ?i ,
arg max ? log |?t | + t ?t t +
?
2
?i
i=1
(18)
where ?t , ? 2 I +?diag(?)?T . This is the same cost function as in [7] only without terms
resulting from a prior on ? 2 , which we will address later. Thus, the end result of this analysis is an evidence maximization procedure equivalent to the one in [7]. The difference is
that, where before we were optimizing over a somewhat arbitrary model parameterization,
now we see that it is actually optimization over the space of variational approximations to
a model with a sparse, regularizing prior. Also, we know from (17) that this procedure is
?
effectively matching, as much as possible, the mass of the full model p(w, t; H).
3
Analysis
While the variational perspective is interesting, two pertinent questions still remain:
1. Why should it be that approximating a sparse prior p(w; H) leads to sparse representations in practice?
2. How do we extend these results to handle an unknown, random variance ? 2 ?
We first treat Question (1). In Figure 2 below, we have illustrated a 2D example of evidence
maximization within the context of variational approximations to the sparse prior p(w; H).
For now, we will assume a, b ? 0, which from (13), implies that p(wi ; H) ? 1/|wi | for
each i. On the left, the shaded area represents the region of w space where both p(w; H)
and p(t|w) (and therefore p(w, t; H)) have significant probability mass. Maximization of
? with a substantial percentage
(17) involves finding an approximate distribution p(w, t; H)
of its mass in this region.
8
8
p (t|w1 , w2 )
4
4
2
2
0
w2
?2
?2
?4
p (t|w1 , w2 )
6
w2
6
?
?a
p w1 , w2 ; H
?
0
p (w1 , w2 ; H)
?4
?6
?
?
?b
p w1 , w2 ; H
?6
?8
?8
?8
?6
?4
?2
0
w1
2
4
6
8
?8
?6
?4
?2
variational
constraint
w0 1
2
4
6
8
Figure 2: Comparison between full model and approximate models with a, b ? 0. Left:
Contours of equiprobability density for p(w; H) and constant likelihood p(t|w); the prominent density and likelihood lie within each region respectively. The shaded region represents the area where both have significant mass. Right: Here we have added the contours
? for two different values of ?, i.e., two approximate hypotheses denoted H
? a and
of p(w; H)
? b . The shaded region represents the area where both the likelihood and the approximate
H
? a have significant mass. Note that by the variational bound, each p(w; H)
? must lie
prior H
within the contours of p(w; H).
In the plot on the right, we have graphed two approximate priors that satisfy the variational
bounds, i.e., they must lie within the contours of p(w; H). We see that the narrow prior that
aligns with the horizontal spine of p(w; H) places the largest percentage of its mass (and
? a )) in the shaded region. This corresponds with a prior of
therefore the mass of p(w, t; H
? a ) = p(w1 , w2 ; ?1 ? 0, ?2 ? 0).
p(w; H
(19)
This creates a long narrow prior since there is minimal variance along the w2 axis. In fact,
it can be shown that owing to the infinite density of the variational constraint along each
axis (which is allowed as a and b go to zero), the maximum evidence is obtained when
?2 is strictly equal to zero, giving the approximate prior infinite density along this axis as
well. This implies that w2 also equals zero and can be pruned from the model. In contrast,
? b , is hampered because it cannot
a model with significant prior variance along both axes, H
extend directly out (due to the dotted variational boundary) along the spine to penetrate the
likelihood.
Similar effective weight pruning occurs in higher dimensional problems as evidenced by
simulation studies and the analysis in [3]. In higher dimensions, the algorithm only retains
those weights associated with the prior spines that span a subspace penetrating the most
prominent portion of the likelihood mass (i.e., a higher-dimensional analog to the shaded
? navigates the variational constraints, placing
region already mentioned). The prior p(w; H)
as much as possible of its mass in this region, driving many of the ?i ?s to zero.
In contrast, when a, b > 0, the situation is somewhat different. It is not difficult to show
that, assuming a noise variance ? 2 > 0, the variational approximation to p(w, t; H) with
maximal evidence cannot have any ?i = wi = 0. Intuitively, this occurs because the now
finite spines of the prior p(w; H), which bound the variational approximation, do not allow
us to place infinite prior density in any region of weight space (as occurred previously
when any ?i ? 0). Consequently, if any ?i goes to zero with a, b > 0, the associated
approximate prior mass, and therefore the approximate evidence, must also fall to zero by
(16). As such, models with all non-zero weights will be now be favored when we form the
variational approximation. We therefore cannot assume an approximation to a sparse prior
will necessarily give us sparse results in practice.
We now address Question (2). Thus far, we have considered a known, fixed noise variance
? 2 ; however, what if ? 2 is unknown? SBL assumes it is unknown and random with prior
distribution p(1/? 2 ) ? (? 2 )1?c exp(?d/? 2 ), and c, d > 0. After integrating out the
unknown ? 2 , we arrive at the implicit likelihood equation,
p(t|w) =
Z
p(t|w, ? 2 )p(? 2 )d? 2 ?
?
1
d + kt ? ?wk2
2
??(?c+1/2)
,
(20)
where c? , c + (N ? 1)/2. We may then form a variational approximation to the likelihood
in a similar manner as before (with wi being replaced by kt ? ?wk) giving us,
?
?
?
?
1
d
p(t|w) ? (2?)?N/2 (? 2 )?1/2 exp ? 2 kt ? ?wk2 exp ? 2 (? 2 )??c
2?
?
?
?
?
?
d
1
(21)
= (2?? 2 )?N/2 exp ? 2 kt ? ?wk2 exp ? 2 (? 2 )?c ,
2?
?
where the second step follows by substituting back in for c?. By replacing p(t|w) with the
lower bound from (21), we then maximize over the variational parameters ? and ? 2 via
?
M ?
? X
1?
b
d
T ?1
?, ? = arg max
? log |?t | + t ?t t +
? ? a log ?i ? 2 ?c log ? 2 , (22)
2
?
?
?,? 2
i
i=1
2
the exact SBL optimization procedure. Thus, we see that the entire SBL framework, including noise variance estimation, can be seen in variational terms.
4
Conclusions
The end result of this analysis is an evidence maximization procedure that is equivalent to
the one originally formulated in [7]. The difference is that, where before we were optimizing over a somewhat arbitrary model parameterization, we now see that SBL is actually
searching a space of variational approximations to find an alternative distribution that captures the significant mass of the full model. Moreover, from the vantage point afforded
by this new perspective, we can better understand the sparsity properties of SBL and the
relationship between sparse priors and approximations to sparse priors.
Appendix: Derivation of the Dual Form of p(wi ; H)
To accommodate the variational analysis of Sec. 2.1, we require the dual representation of
p(wi ; H). As an intermediate step, we must find the dual representation of f (yi ), where
? ?
?
yi , wi2 and
yi ??(a+1/2)
f (yi ) , log p(wi ; H) = log C b +
.
(23)
2
To accomplish this, we find the conjugate function f ? (?i ) using the duality relation
?
?
?
?
?
yi ?
1
?
log b +
. (24)
f (?i ) = max [?i yi ? f (yi )] = max ?i yi ? log C + a +
yi
yi
2
2
To find the maximizing yi , we take the gradient of the left side and set it to zero, giving us,
1
a
? 2b.
(25)
yimax = ? ?
?i
2?i
Substituting this value into the expression for f ? (?i ) and selecting
? ?
?? ?
?(a+1/2)
1
1
C = (2?)?1/2 exp ? a +
a+
,
(26)
2
2
we arrive at
?
?
?
?
1
?1
1
f ? (?i ) = a +
log
+ log 2? ? 2b?i .
(27)
2
2?i
2
We are now ready to represent f (yi ) in its dual form, observing first that we only need
consider maximization over ?i ? 0 since f (yi ) is a monotonically decreasing function
(i.e., all tangent lines will have negative slope). Proceeding forward, we have
?
?
?
?
1
1
b
?yi
?
? a+
log ?i ? log 2? ?
, (28)
f (yi ) = max [?i yi ? f (?i )] = max
?i ?0 2?i
?i ?0
2
2
?i
where we have used the monotonically increasing transformation ?i = ?1/(2?i ), ?i ? 0.
The attendant dual representation of p(wi ; H) can then be obtained by exponentiating both
sides of (28) and substituting yi = wi2 ,
?
?
?
?
?
?
b
w2
1
exp ? i exp ?
?i?a .
(29)
p(wi ; H) = max ?
?i ?0
2?i
?i
2??i
Acknowledgments
This research was supported by DiMI grant #22-8376 sponsored by Nissan.
References
[1] C. Bishop and M. Tipping, ?Variational relevance vector machines,? Proc. 16th Conf. Uncertainty in Artificial Intelligence, pp. 46?53, 2000.
[2] R. Duda, P. Hart, and D. Stork, Pattern Classification, Wiley, Inc., New York, 2nd ed., 2001.
[3] A.C. Faul and M.E. Tipping, ?Analysis of sparse Bayesian learning,? Advances in Neural
Information Processing Systems 14, pp. 383?389, 2002.
[4] M. Girolami, ?A variational method for learning sparse and overcomplete representations,? Neural Computation, vol. 13, no. 11, pp. 2517?2532, 2001.
[5] M.I. Jordan, Z. Ghahramani, T. Jaakkola, and L.K. Saul, ?An introduction to variational methods
for graphical models,? Machine Learning, vol. 37, no. 2, pp. 183?233, 1999.
[6] D.J.C. MacKay, ?Bayesian interpolation,? Neural Comp., vol. 4, no. 3, pp. 415?447, 1992.
[7] M.E. Tipping, ?Sparse Bayesian learning and the relevance vector machine,? Journal of Machine
Learning, vol. 1, pp. 211?244, 2001.
| 2393 |@word duda:1 nd:1 simulation:1 solid:2 accommodate:1 necessity:2 selecting:3 subjective:1 current:1 must:7 readily:1 analytic:2 pertinent:1 remove:1 drop:2 plot:3 sponsored:1 intelligence:1 parameterization:4 location:1 along:5 direct:1 viable:1 combine:1 manner:1 spine:4 nor:1 decreasing:1 actual:1 increasing:1 becomes:1 begin:1 estimating:1 moreover:2 mass:22 what:2 interpreted:1 finding:4 transformation:2 ti:3 exactly:1 qm:1 grant:1 omit:1 t1:1 positive:2 engineering:1 before:4 treat:1 despite:1 plugin:3 interpolation:1 ap:10 wk2:4 shaded:5 misaligned:1 palmer:1 adoption:1 acknowledgment:1 practice:8 definite:1 procedure:7 area:3 empirical:1 matching:2 word:2 integrating:1 vantage:1 cannot:3 selection:1 context:2 applying:1 intercept:1 optimize:1 equivalent:2 map:2 maximizing:1 go:2 convex:4 simplicity:1 penetrate:1 immediately:1 rule:4 dw:9 handle:1 searching:1 justification:1 diego:1 target:1 exact:2 hypothesis:4 observed:1 electrical:1 capture:2 region:12 ensures:1 improper:2 mentioned:3 substantial:1 transforming:1 ideally:1 ultimately:1 predictive:2 creates:1 joint:1 derivation:3 forced:1 effective:1 artificial:1 choosing:1 apparent:1 ability:1 advantage:1 maximal:2 relevant:1 japalmer:1 intuitive:2 inducing:1 involves:2 implies:2 faul:1 quantify:1 girolami:1 owing:1 explains:1 require:1 crux:1 strictly:1 sufficiently:1 considered:1 exp:14 claim:1 substituting:3 driving:1 purpose:1 estimation:2 proc:1 currently:2 rvm:2 largest:1 equiprobability:1 clearly:1 gaussian:4 always:1 rather:1 avoid:1 jaakkola:1 derived:1 ax:1 likelihood:9 contrast:2 rigorous:4 sense:1 typically:3 entire:1 accept:1 relation:1 i1:1 issue:2 dual:15 arg:5 classification:1 denoted:1 favored:1 development:1 integration:6 special:1 mackay:1 equal:2 represents:6 placing:1 wipf:1 few:1 primarily:1 modern:2 replaced:2 ourselves:1 interest:1 amenable:1 accurate:1 kt:5 encourage:1 hyperprior:1 desired:1 overcomplete:1 minimal:1 column:1 rao:1 implementational:1 retains:1 maximization:7 cost:1 introducing:2 answer:1 accomplish:3 density:10 peak:1 invertible:1 regressor:2 w1:7 again:1 ambiguity:2 choose:2 possibly:2 conf:1 resort:1 leading:2 return:1 student:1 wk:1 sec:1 inc:1 satisfy:1 explicitly:1 later:1 jason:1 closed:1 observing:1 sup:1 portion:2 bayes:1 reexpressed:1 slope:1 square:1 formed:1 variance:10 bayesian:16 accurately:2 dimi:1 comp:1 straight:1 aligns:1 ed:1 competitor:1 pp:6 obvious:2 associated:2 treatment:2 subtle:1 actually:2 back:2 higher:3 originally:1 supervised:1 tipping:3 methodology:2 specify:1 formulation:1 governing:1 implicit:2 just:1 fashioned:1 hand:1 horizontal:1 replacing:1 mode:6 graphed:1 facilitate:2 effect:1 true:4 regularization:1 hence:1 analytically:1 nonzero:1 illustrated:1 encourages:2 noted:1 prominent:2 penetrating:1 demonstrate:3 tn:1 variational:35 sbl:15 recently:2 stork:1 conditioning:1 extend:2 tail:1 interpretation:1 analog:1 occurred:1 expressing:1 significant:10 enter:1 outlined:2 inclusion:1 specification:1 etc:1 aligning:1 navigates:1 posterior:1 perspective:4 optimizing:2 success:2 yi:27 seen:2 additional:2 somewhat:3 converge:1 maximize:1 monotonically:2 relates:1 full:7 dwd:3 match:1 characterized:1 long:1 hart:1 prediction:1 regression:1 kernel:1 adopting:1 represent:4 normalization:1 achieved:1 standpoint:1 envelope:1 w2:13 probably:1 incorporates:1 bhaskar:1 jordan:1 reflective:2 intermediate:1 concerned:1 marginalization:1 motivated:1 expression:2 handled:1 ultimate:1 penalty:1 reformulated:1 york:1 svms:1 percentage:2 problematic:2 canonical:1 dotted:3 
estimated:1 overly:1 vol:4 express:1 key:2 geometrically:1 sum:1 parameterized:1 uncertainty:1 place:2 arrive:2 decide:1 appendix:2 bound:10 incorporation:1 constraint:3 afforded:1 argument:1 min:1 span:1 pruned:2 attempting:1 department:1 conjugate:3 remain:1 wi:33 intuitively:1 invariant:1 heart:1 equation:1 remains:2 previously:1 know:1 tractable:1 serf:1 end:2 available:2 tightest:1 apply:1 hyperpriors:2 enforce:1 appropriate:1 distinguished:1 alternative:2 substitute:1 obviates:2 denotes:1 original:1 hampered:1 assumes:1 graphical:1 marginalized:1 exploit:1 giving:3 ghahramani:1 approximating:1 question:5 already:3 added:1 occurs:2 gradient:1 subspace:2 w0:1 assuming:3 relationship:4 minimizing:1 difficult:1 nissan:1 negative:1 proper:1 unknown:4 perform:2 allowing:1 upper:1 finite:1 displayed:1 situation:2 ucsd:2 sharp:1 arbitrary:2 david:1 evidenced:1 pair:1 required:4 namely:1 connection:1 california:1 herein:1 narrow:2 address:5 suggested:1 below:2 pattern:1 wi2:3 sparsity:7 recast:2 including:2 max:10 explanation:2 belief:1 suitable:1 natural:2 difficulty:1 dwipf:1 representing:1 misleading:1 axis:3 ready:1 prior:33 literature:1 review:1 tangent:2 marginalizing:1 tantamount:1 interesting:1 proportional:1 remarkable:1 integrate:1 consistent:1 principle:1 heavy:1 course:1 supported:1 side:2 allow:2 understand:1 fall:1 saul:1 absolute:1 sparse:23 boundary:1 dimension:1 attendant:1 contour:4 adopts:1 collection:1 forward:1 san:1 exponentiating:1 far:1 approximate:13 pruning:1 supremum:1 overfitting:1 assumed:1 why:6 learn:1 robust:1 ca:1 complex:1 necessarily:3 brao:1 constructing:1 domain:2 diag:1 motivation:2 noise:4 hyperparameters:6 allowed:2 wiley:1 ational:1 explicit:1 archetypical:1 candidate:1 lie:3 specific:1 bishop:1 appeal:1 normalizing:2 derives:1 intractable:3 essential:1 concern:1 evidence:7 effectively:2 illustrates:1 sophistication:1 expressed:1 corresponds:1 goal:3 formulated:1 consequently:1 change:1 specifically:1 except:1 infinite:3 ece:1 duality:1 select:1 arises:1 relevance:4 incorporate:1 regularizing:1 |
1,533 | 2,394 | Maximum Likelihood Estimation of a Stochastic
Integrate-and-Fire Neural Model?
Jonathan W. Pillow, Liam Paninski, and Eero P. Simoncelli
Howard Hughes Medical Institute
Center for Neural Science
New York University
{pillow, liam, eero}@cns.nyu.edu
Abstract
Recent work has examined the estimation of models of stimulus-driven
neural activity in which some linear filtering process is followed by
a nonlinear, probabilistic spiking stage. We analyze the estimation
of one such model for which this nonlinear step is implemented by a
noisy, leaky, integrate-and-fire mechanism with a spike-dependent aftercurrent. This model is a biophysically plausible alternative to models
with Poisson (memory-less) spiking, and has been shown to effectively
reproduce various spiking statistics of neurons in vivo. However, the
problem of estimating the model from extracellular spike train data has
not been examined in depth. We formulate the problem in terms of maximum likelihood estimation, and show that the computational problem
of maximizing the likelihood is tractable. Our main contribution is an
algorithm and a proof that this algorithm is guaranteed to find the global
optimum with reasonable speed. We demonstrate the effectiveness of our
estimator with numerical simulations.
A central issue in computational neuroscience is the characterization of the functional relationship between sensory stimuli and neural spike trains. A common model for this relationship consists of linear filtering of the stimulus, followed by a nonlinear, probabilistic
spike generation process. The linear filter is typically interpreted as the neuron?s ?receptive
field,? while the spiking mechanism accounts for simple nonlinearities like rectification
and response saturation. Given a set of stimuli and (extracellularly) recorded spike times,
the characterization problem consists of estimating both the linear filter and the parameters
governing the spiking mechanism.
One widely used model of this type is the Linear-Nonlinear-Poisson (LNP) cascade model,
in which spikes are generated according to an inhomogeneous Poisson process, with rate
determined by an instantaneous (?memoryless?) nonlinear function of the filtered input.
This model has a number of desirable features, including conceptual simplicity and computational tractability. Additionally, reverse correlation analysis provides a simple unbiased estimator for the linear filter [5], and the properties of estimators (for both the linear
filter and static nonlinearity) have been thoroughly analyzed, even for the case of highly
non-symmetric or ?naturalistic? stimuli [12]. One important drawback of the LNP model,
* JWP and LP contributed equally to this work. We thank E.J. Chichilnisky for helpful discussions.
L?NLIF model
LNP model
)ekips(P
0
50
Figure 1: Simulated responses of LNLIF and LNP models to 20 repetitions of a fixed 100-ms stimulus segment of temporal white noise.
Top: Raster of responses of L-NLIF
model, where ?noise /?signal = 0.5
and g gives a membrane time constant of 15 ms. The top row shows
the fixed (deterministic) response of
the model with ?noise set to zero.
Middle: Raster of responses of LNP
model, with parameters fit with standard methods from a long run of
the L-NLIF model responses to nonrepeating stimuli. Bottom: (Black
line) Post-stimulus time histogram
(PSTH) of the simulated L-NLIF response. (Gray line) PSTH of the
LNP model. Note that the LNP
model fails to preserve the fine temporal structure of the spike trains,
100 relative to the L-NLIF model.
time (ms)
however, is that Poisson processes do not accurately capture the statistics of neural spike
trains [2, 9, 16, 1]. In particular, the probability of observing a spike is not a functional of
the stimulus only; it is also strongly affected by the recent history of spiking.
The leaky integrate-and-fire (LIF) model provides a biophysically more realistic spike
mechanism with a simple form of spike-history dependence. This model is simple, wellunderstood, and has dynamics that are entirely linear except for a nonlinear ?reset? of the
membrane potential following a spike. Although this model?s overriding linearity is often
emphasized (due to the approximately linear relationship between input current and firing
rate, and lack of active conductances), the nonlinear reset has significant functional importance for the model?s response properties. In previous work, we have shown that standard
reverse correlation analysis fails when applied to a neuron with deterministic (noise-free)
LIF spike generation; we developed a new estimator for this model, and demonstrated that a
change in leakiness of such a mechanism might underlie nonlinear effects of contrast adaptation in macaque retinal ganglion cells [15]. We and others have explored other ?adaptive?
properties of the LIF model [17, 13, 19].
In this paper, we consider a model consisting of a linear filter followed by noisy LIF spike
generation with a spike-dependent after-current; this is essentially the standard LIF model
driven by a noisy, filtered version of the stimulus, with an additional current waveform
injected following each spike. We will refer to this as the the ?L-NLIF? model. The probabilistic nature of this model provides several important advantages over the deterministic
version we have considered previously. First, an explicit noise model allows us to couch
the problem in the terms of classical estimation theory. This, in turn, provides a natural
?cost function? (likelihood) for model assessment and leads to more efficient estimation of
the model parameters. Second, noise allows us to explicitly model neural firing statistics,
and could provide a rigorous basis for a metric distance between spike trains, useful in
other contexts [18]. Finally, noise influences the behavior of the model itself, giving rise to
phenomena not observed in the purely deterministic model [11].
Our main contribution here is to show that the maximum likelihood estimator (MLE) for
the L-NLIF model is computationally tractable. Specifically, we describe an algorithm
for computing the likelihood function, and prove that this likelihood function contains no
non-global maxima, implying that the MLE can be computed efficiently using standard
ascent techniques. The desirable statistical properties of this estimator (e.g. consistency,
efficiency) are all inherited ?for free? from classical estimation theory. Thus, we have a
compact and powerful model for the neural code, and a well-motivated, efficient way to
estimate the parameters of this model from extracellular data.
The Model
We consider a model for which the (dimensionless) subthreshold voltage variable V evolves
according to
i?1
X
~
dV = ? gV (t) + k ? ~x(t) +
h(t ? tj ) dt + ?Nt ,
(1)
j=0
and resets to Vr whenever V = 1. Here, g denotes the leak conductance, ~k ? ~x(t) the
projection of the input signal ~x(t) onto the linear kernel ~k, h is an ?afterpotential,? a current
waveform of fixed amplitude and shape whose value depends only on the time since the last
spike ti?1 , and Nt is an unobserved (hidden) noise process with scale parameter ?. Without
loss of generality, the ?leak? and ?threshold? potential are set at 0 and 1, respectively, so the
cell spikes whenever V = 1, and V decays back to 0 with time constant 1/g in the absence
of input. Note that the nonlinear behavior of the model is completely determined by only
a few parameters, namely {g, ?, Vr }, and h (where the function h is allowed to take values
in some low-dimensional vector space). The dynamical properties of this type of ?spike
response model? have been extensively studied [7]; for example, it is known that this class
of models can effectively capture much of the behavior of apparently more biophysically
realistic models (e.g. Hodgkin-Huxley).
Figures 1 and 2 show several simple comparisons of the L-NLIF and LNP models. In
1, note the fine structure of spike timing in the responses of the L-NLIF model, which is
qualitatively similar to in vivo experimental observations [2, 16, 9]). The LNP model fails
to capture this fine temporal reproducibility. At the same time, the L-NLIF model is much
more flexible and representationally powerful, as demonstrated in Fig. 2: by varying V r
or h, for example, we can match a wide variety of dynamical behaviors (e.g. adaptation,
bursting, bistability) known to exist in biological neurons.
The Estimation Problem
Our problem now is to estimate the model parameters {~k, ?, g, Vr , h} from a sufficiently
rich, dynamic input sequence ~x(t) together with spike times {ti }. A natural choice is
the maximum likelihood estimator (MLE), which is easily proven to be consistent and
statistically efficient here. To compute the MLE, we need to compute the likelihood and
develop an algorithm for maximizing it.
The tractability of the likelihood function for this model arises directly from the linearity
of the subthreshold dynamics of voltage V (t) during an interspike interval. In the noiseless case [15], the voltage trace during an interspike interval t ? [ti?1 , ti ] is given by the
solution to equation (1) with ? = 0:
?
?
Z t
i?1
X
?~k ? ~x(s) +
V0 (t) = Vr e?gt +
h(s ? tj )? e?g(t?s) ds,
(2)
ti?1
j=0
stimulus
A
responses
h current
0
0
0
0
t
0.2 0
t (sec)
1
stimulus
c
B
x
responses
c=1
h current
0
c=2
0
t
0.2
c=5
0
t (sec)
1
stimulus
C
0
h current
responses
0
0
0
t
.05
0
t (sec)
1
Figure 2: Illustration of diverse behaviors
of L-NLIF model.
A: Firing rate adaptation. A positive
DC current (top) was injected into three
model cells differing only in their h currents (shown on left: top, h = 0; middle, h depolarizing; bottom, h hyperpolarizing). Voltage traces of each cell?s response (right, with spikes superimposed)
exhibit rate facilitation for depolarizing h
(middle), and rate adaptation for hyperpolarizing h (bottom).
B: Bursting. The response of a model cell
with a biphasic h current (left) is shown as
a function of the three different levels of
DC current. For small current levels (top),
the cell responds rhythmically. For larger
currents (middle and bottom), the cell responds with regular bursts of spikes.
C: Bistability. The stimulus (top) is a
positive followed by a negative current
pulse. Although a cell with no h current
(middle) responds transiently to the positive pulse, a cell with biphasic h (bottom)
exhibits a bistable response: the positive
pulse puts it into a stable firing regime
which persists until the arrival of a negative pulse.
which is simply a linear convolution of the input current with a negative exponential. It
is easy to see that adding Gaussian noise to the voltage during each time step induces a
Gaussian density over V (t), since linear dynamics preserve Gaussianity [8]. This density is
uniquely characterized by its first two moments; the mean is given by (2), and its covariance
is ? 2 Eg EgT , where Eg is the convolution operator corresponding to e?gt . Note that this
density is highly correlated for nearby points in time, since noise is integrated by the linear
dynamics. Intuitively, smaller leak conductance g leads to stronger correlation in V (t)
at nearby time points. We denote this Gaussian density G(~xi , ~k, ?, g, Vr , h), where index
i indicates the ith spike and the corresponding stimulus chunk ~xi (i.e. the stimuli that
influence V (t) during the ith interspike interval).
Now, on any interspike interval t ? [ti?1 , ti ], the only information we have is that V (t)
is less than threshold for all times before ti , and exceeds threshold during the time bin
containing ti . This translates to a set of linear constraints on V (t), expressed in terms of
the set
\
Ci =
V (t) < 1 ? V (ti ) ? 1 .
ti?1 ?t<ti
Therefore, the likelihood that the neuron first spikes at time ti , given a spike at time ti?1 ,
is the probability of the event V (t) ? Ci , which is given by
Z
~
L~xi ,ti (k, ?, g, Vr , h) =
G(~xi , ~k, ?, g, Vr , h),
Ci
the integral of the Gaussian density G(~xi , ~k, ?, g, Vr , h) over the set Ci .
Figure 3: Behavior of the L-NLIF model
during a single interspike interval, for
a single (repeated) input current (top).
Top middle: Ten simulated voltage traces
V (t), evaluated up to the first threshold
crossing, conditional on a spike at time
zero (Vr = 0). Note the strong correlation between neighboring time points,
and the sparsening of the plot as traces are
eliminated by spiking. Bottom Middle:
Time evolution of P (V ). Each column
represents the conditional distribution of
V at the corresponding time (i.e. for all
traces that have not yet crossed threshold). Bottom: Probability density of the
interspike interval (isi) corresponding to
this particular input. Note that probability
mass is concentrated at the points where
input drives V0 (t) close to threshold.
stimulus
Vthr
V traces
0
Vthr
P(V)
0
P(isi)
0
0
100
200
t (msec)
Spiking resets V to Vr , meaning that the noise contribution to V in different interspike
intervals is independent. This ?renewal? property, in turn, implies that the density over V (t)
for an entire experiment factorizes into a product of conditionally independent terms, where
each of these terms is one of the Gaussian integrals derived above for a single interspike
interval. The likelihood for the entire spike train is therefore the product of these terms
over all observed spikes. Putting all the pieces together, then, the full likelihood is
YZ
~
L{~xi ,ti } (k, ?, g, Vr , h) =
G(~xi , ~k, ?, g, Vr , h),
i
Ci
where the product, again, is over all observed spike times {ti } and corresponding stimulus
chunks {~xi }.
Now that we have an expression for the likelihood, we need to be able to maximize it. Our
main result now states, basically, that we can use simple ascent algorithms to compute the
MLE without getting stuck in local maxima.
Theorem 1. The likelihood L{~xi ,ti } (~k, ?, g, Vr , h) has no non-global extrema in the parameters (~k, ?, g, Vr , h), for any data {~xi , ti }.
The proof [14] is based on the log-concavity of L{~xi ,ti } (~k, ?, g, Vr , h) under a certain
parametrization of (~k, ?, g, Vr , h). The classical approach for establishing the nonexistence
of non-global maxima of a given function uses concavity, which corresponds roughly to the
function having everywhere non-positive second derivatives. However, the basic idea can
be extended with the use of any invertible function: if f has no non-global extrema, neither
will g(f ), for any strictly increasing real function g. The logarithm is a natural choice for
g in any probabilistic context in which independence plays a role, since sums are easier
to work with than products. Moreover, concavity of a function f is strictly stronger than
logconcavity, so logconcavity can be a powerful tool even in situations for which concavity
is useless (the Gaussian density is logconcave but not concave, for example). Our proof
relies on a particular theorem [3] establishing the logconcavity of integrals of logconcave
functions, and proceeds by making a correspondence between this type of integral and the
integrals that appear in the definition of the L-NLIF likelihood above.
We should also note that the proof extends without difficulty to some other noise processes which generate logconcave densities (where white noise has the standard Gaussian
density); for example, the proof is nearly identical if Nt is allowed to be colored or nonGaussian noise, with possibly nonzero drift.
Computational methods and numerical results
Theorem 1 tells us that we can ascend the likelihood surface without fear of getting stuck
in local maxima. Now how do we actually compute the likelihood? This is a nontrivial
problem: we need to be able to quickly compute (or at least approximate, in a rational way)
integrals of multivariate Gaussian densities G over simple but high-dimensional orthants
Ci . We discuss two ways to compute these integrals; each has its own advantages.
The first technique can be termed ?density evolution? [10, 13]. The method is based on the
following well-known fact from the theory of stochastic differential equations [8]: given
the data (~xi , ti?1 ), the probability density of the voltage process V (t) up to the next spike
ti satisfies the following partial differential (Fokker-Planck) equation:
?P (V, t)
?2 ? 2 P
?[(V ? Veq (t))P ]
=
,
+g
2
?t
2 ?V
?V
under the boundary conditions
(3)
P (V, ti?1 ) = ?(V ? Vr ),
P (Vth , t) = 0;
where Veq (t) is the instantaneous equilibrium potential:
?
?
i?1
X
1 ?~
Veq (t) =
h(t ? tj )? .
k ? ~x(t) +
g
j=0
Moreover, the conditional firing rate f (t) satisfies
Z t
Z
f (s)ds = 1 ? P (V, t)dV.
ti?1
Thus standard techniques for solving the drift-diffusion evolution equation (3) lead to
a fast method for computing f (t) (as illustrated in Fig. 2). Finally, the likelihood
L~xi ,ti (~k, ?, g, Vr , h) is simply f (ti ).
While elegant and efficient, this density evolution technique turns out to be slightly more
powerful than what we need for the MLE: recall that we do not need to compute the conditional rate function f at all times t, but rather just at the set of spike times {ti }, and thus
we can turn to more specialized techniques for faster performance. We employ a rapid
technique for computing the likelihood using an algorithm due to Genz [6], designed to
compute exactly the kinds of multidimensional Gaussian probability integrals considered
here. This algorithm works well when the orthants Ci are defined by fewer than ? 10 linear
constraints on V (t). The number of actual constraints on V (t) during an interspike interval
(ti+1 ? ti ) grows linearly in the length of the interval: thus, to use this algorithm in typical
data situations, we adopt a strategy proposed in our work on the deterministic form of the
model [15], in which we discard all but a small subset of the constraints. The key point
is that, due to strong correlations in the noise and the fact that the constraints only figure
significantly when the V (t) is driven close to threshold, a small number of constraints often
suffice to approximate the true likelihood to a high degree of precision.
true h
estim h
true K
STA
estim K
0
0
-200
-100
t (msec before spike)
0
0
30
t (msec after spike)
60
Figure 4: Demonstration of the estimator?s performance on simulated data. Dashed lines
show the true kernel ~k and aftercurrent h; ~k is a 12-sample function chosen to resemble the
biphasic temporal impulse response of a macaque retinal ganglion cell, while h is function
specified in a five-dimensional vector space, whose shape induces a slight degree of burstiness in the model?s spike responses. The L-NLIF model was stimulated with parameters
g = 0.05 (corresponding to a membrane time constant of 20 time-samples), ? noise = 0.5,
and Vr = 0. The stimulus was 30,000 time samples of white Gaussian noise with a standard
deviation of 0.5. With only 600 spikes of output, the estimator is able to retrieve an estimate of ~k (gray curve) which closely matches the true kernel. Note that the spike-triggered
average (black curve), which is an unbiased estimator for the kernel of an LNP neuron [5],
differs significantly from this true kernel (see also [15]).
The accuracy of this approach improves with the number of constraints considered, but
performance is fastest with fewer constraints. Therefore, because ascending the likelihood
function requires evaluating the likelihood at many different points, we can make this ascent process much quicker by applying a version of the coarse-to-fine idea. Let L k denote
the approximation to the likelihood given by allowing only k constraints in the above algorithm. Then we know, by a proof identical to that of Theorem 1, that Lk has no local
maxima; in addition, by the above logic, Lk ? L as k grows. It takes little additional effort
to prove that
argmax Lk ? argmax L;
thus, we can efficiently ascend the true likelihood surface by ascending the ?coarse? approximants Lk , then gradually ?refining? our approximation by letting k increase.
An application of this algorithm to simulated data is shown in Fig. 4. Further applications
to both simulated and real data will be presented elsewhere.
Discussion
We have shown here that the L-NLIF model, which couples a linear filtering stage to a
biophysically plausible and flexible model of neuronal spiking, can be efficiently estimated
from extracellular physiological data using maximum likelihood. Moreover, this model
lends itself directly to analysis via tools from the modern theory of point processes. For
example, once we have obtained our estimate of the parameters (~k, ?, g, Vr , h), how do we
verify that the resulting model provides an adequate description of the data? This important
?model validation? question has been the focus of some recent elegant research, under the
rubric of ?time rescaling? techniques [4]. While we lack the room here to review these
methods in detail, we can note that they depend essentially on knowledge of the conditional
firing rate function f (t). Recall that we showed how to efficiently compute this function
in the last section and examined some of its qualitative properties in the L-NLIF context in
Figs. 2 and 3.
We are currently in the process of applying the model to physiological data recorded both
in vivo and in vitro, in order to assess whether it accurately accounts for the stimulus preferences and spiking statistics of real neurons. One long-term goal of this research is to
elucidate the different roles of stimulus-driven and stimulus-independent activity on the
spiking patterns of both single cells and multineuronal ensembles.
References
[1] B. Aguera y Arcas and A. Fairhall. What causes a neuron to spike?
15:1789?1807, 2003.
Neral Computation,
[2] M. Berry and M. Meister. Refractoriness and neural precision. Journal of Neuroscience,
18:2200?2211, 1998.
[3] V. Bogachev. Gaussian Measures. AMS, New York, 1998.
[4] E. Brown, R. Barbieri, V. Ventura, R. Kass, and L. Frank. The time-rescaling theorem and its
application to neural spike train data analysis. Neural Computation, 14:325?346, 2002.
[5] E. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12:199?213, 2001.
[6] A. Genz. Numerical computation of multivariate normal probabilities. Journal of Computational and Graphical Statistics, 1:141?149, 1992.
[7] W. Gerstner and W. Kistler. Spiking Neuron Models: Single Neurons, Populations, Plasticity.
Cambridge University Press, 2002.
[8] S. Karlin and H. Taylor. A Second Course in Stochastic Processes. Academic Press, New York,
1981.
[9] J. Keat, P. Reinagel, R. Reid, and M. Meister. Predicting every spike: a model for the responses
of visual neurons. Neuron, 30:803?817, 2001.
[10] B. Knight, A. Omurtag, and L. Sirovich. The approach of a neuron population firing rate to a
new equilibrium: an exact theoretical result. Neural Computation, 12:1045?1055, 2000.
[11] J. Levin and J. Miller. Broadband neural encoding in the cricket cercal sensory system enhanced
by stochastic resonance. Nature, 380:165?168, 1996.
[12] L. Paninski. Convergence properties of some spike-triggered analysis techniques. Network:
Computation in Neural Systems, 14:437?464, 2003.
[13] L. Paninski, B. Lau, and A. Reyes. Noise-driven adaptation: in vitro and mathematical analysis.
Neurocomputing, 52:877?883, 2003.
[14] L. Paninski, J. Pillow, and E. Simoncelli. Maximum likelihood estimation of a stochastic
integrate-and-fire neural encoding model. submitted manuscript (cns.nyu.edu/?liam), 2004.
[15] J. Pillow and E. Simoncelli. Biases in white noise analysis due to non-poisson spike generation.
Neurocomputing, 52:109?115, 2003.
[16] D. Reich, J. Victor, and B. Knight. The power ratio and the interval map: Spiking models and
extracellular recordings. The Journal of Neuroscience, 18:10090?10104, 1998.
[17] M. Rudd and L. Brown. Noise adaptation in integrate-and-fire neurons. Neural Computation,
9:1047?1069, 1997.
[18] J. Victor. How the brain uses time to represent and process visual information. Brain Research,
886:33?46, 2000.
[19] Y. Yu and T. Lee. Dynamical mechanisms underlying contrast gain control in sing le neurons.
1,534 | 2,395 | Mechanism of neural interference
by transcranial magnetic stimulation:
network or single neuron?
Yoichi Miyawaki
RIKEN Brain Science Institute
Wako, Saitama 351-0198, JAPAN
yoichi [email protected]
Masato Okada
RIKEN Brain Science Institute
PRESTO, JST
Wako, Saitama 351-0198, JAPAN
[email protected]
Abstract
This paper proposes neural mechanisms underlying the effects of transcranial magnetic stimulation (TMS). TMS can stimulate the brain non-invasively through a
brief magnetic pulse delivered by a coil placed on the scalp, interfering
with specific cortical functions at high temporal resolution. Due to
these advantages, TMS has been a popular experimental tool in various
neuroscience fields. However, the neural mechanisms underlying TMS-induced interference are still unknown; a theoretical basis for TMS has
not been developed. This paper provides computational evidence that inhibitory interactions in a neural population, not an isolated single neuron,
play a critical role in yielding the neural interference induced by TMS.
1 Introduction
Transcranial magnetic stimulation (TMS) is an experimental tool for stimulating neurons
via brief magnetic pulses delivered by a coil placed on the scalp. TMS can non-invasively
interfere with neural functions related to a target cortical area with high temporal accuracy.
Because of these unique and powerful features, TMS has been popular in various fields,
including cognitive neuroscience and clinical application. However, despite its utility, the
mechanisms of how TMS stimulates neurons and interferes with neural functions are still
unknown. Although several studies have modeled spike initiation and inhibition with a
brief magnetic pulse imposed on an isolated single neuron [1][2], it is rather more plausible
to assume that a large number of neurons are stimulated massively and simultaneously
because the spatial extent of the induced magnetic field under the coil is large enough for
this to happen.
In this paper, we computationally analyze TMS-induced effects both on a neural population
level and on a single neuron level. Firstly, we demonstrate that the dynamics of a simple
excitatory-inhibitory balanced network well explains the temporal properties of visual percept suppression induced by a single pulse TMS. Secondly, we demonstrate that sustained
inhibitory effect by a subthreshold TMS is reproduced by the network model, but not by an
isolated single neuron model. Finally, we propose plausible neural mechanisms underlying
TMS-induced interference with coordinated neural activities in the cortical network.
Figure 1: A) The network architecture. TMS was delivered to all neurons uniformly
and simultaneously. B) The bistability in the network. The afferent input, consisting of
a suprathreshold transient and a subthreshold sustained component, leads the network into
the bistable regime. The parameters used here are $\epsilon = 0.1$, $\beta = 0.25$, $J_0 = 73$, $J_2 = 110$, and $T = 1$.
2 Methods
2.1 TMS on neural population
2.1.1 Network model for feature selectivity
We employed a simple excitatory-inhibitory balanced network model that is well analyzed
as a model for a sensory feature detector system [3] (Fig. 1A):
$$\tau_m \frac{d}{dt} m(\theta, t) = -m(\theta, t) + g[h(\theta, t)] \tag{1}$$

$$h(\theta, t) = \int_{-\pi/2}^{\pi/2} \frac{d\theta'}{\pi} \, J(\theta - \theta')\, m(\theta', t) + h_{\mathrm{ext}}(\theta, t) \tag{2}$$

$$J(\theta - \theta') = -J_0 + J_2 \cos 2(\theta - \theta') \tag{3}$$

$$h_{\mathrm{ext}}(\theta, t) = c(t)\left[1 - \epsilon + \epsilon \cos 2(\theta - \theta_0)\right] \tag{4}$$
Here, $m(\theta, t)$ is the activity of neuron $\theta$ and $\tau_m$ is the microscopic characteristic time analogous to the membrane time constant of a neuron (here we set $\tau_m = 10$ ms). $g[h]$ is a quasi-linear output function,
$$g[h] = \begin{cases} 0 & (h < T) \\ \beta (h - T) & (T \le h < T + 1/\beta) \\ 1 & (h \ge T + 1/\beta) \end{cases} \tag{5}$$
where $T$ is the threshold of the neuron, $\beta$ is the gain factor, and $h(\theta, t)$ is the input to neuron $\theta$. For simplicity, we assume that $m(\theta, t)$ has a periodic boundary condition ($-\pi/2 \le \theta \le \pi/2$), and the connections of each neuron are limited to this periodic range.
$\theta_0$ is a stimulus feature to be detected, and the afferent input, $h_{\mathrm{ext}}(\theta, t)$, has its maximal amplitude $c(t)$ at $\theta = \theta_0$. We assume a static visual stimulus so that $\theta_0$ is constant during the stimulation (hereafter we set $\theta_0 = 0$). $\epsilon$ is an afferent tuning coefficient, describing how far the afferent input to the target population has already been localized around $\theta_0$ ($0 \le \epsilon \le 1/2$).
The synaptic weight from neuron $\theta'$ to $\theta$, $J(\theta - \theta')$, consists of the uniform inhibition $J_0$ and a feature-specific interaction $J_2$. $J_0$ increases the effective threshold and regulates the whole network activity through all-to-all inhibition. $J_2$ facilitates neurons neighboring in the feature space and suppresses distant ones through a cosine-type connection weight. Through these recurrent interactions, the activity profile of the network evolves and sharpens after the afferent stimulus onset.
The most intuitive and widely accepted example representable by this model is the orientation tuning function of the primary visual cortex [3][4][5]. Assuming that the coded feature is the orientation of a stimulus, we can regard $\theta$ as a neuron responding to angle $\theta$, $h_{\mathrm{ext}}$ as an input from the lateral geniculate nucleus (LGN), and $J$ as a recurrent interaction in the primary visual cortex (V1).
Because the synaptic weight and afferent input have only the 0th and 2nd Fourier components, the network state can be fully described by the two order parameters $m_0$ and $m_2$, which are the 0th- and 2nd-order Fourier coefficients of $m(\theta, t)$. The macroscopic dynamics of the network is thus derived by Fourier transformation of $m(\theta, t)$:
$$\tau_m \frac{d}{dt} m_0(t) = -m_0(t) + \int_{-\pi/2}^{\pi/2} \frac{d\theta}{\pi} \, g[h(\theta, t)] \tag{6}$$

$$\tau_m \frac{d}{dt} m_2(t) = -m_2(t) + \int_{-\pi/2}^{\pi/2} \frac{d\theta}{\pi} \, g[h(\theta, t)] \cos 2\theta \tag{7}$$
where $m_0(t)$ represents the mean activity of the entire network and $m_2(t)$ represents the degree of modulation of the activity profile of the network. $h(\theta, t)$ is also described by the order parameters:
$$h(\theta, t) = -J_0 m_0(t) + c(t)(1 - \epsilon) + \left( \epsilon\, c(t) + J_2 m_2(t) \right) \cos 2\theta \tag{8}$$
Substituting Eq. 8 into Eqs. 6 and 7, the network dynamics can be calculated numerically.
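As an illustration, the order-parameter dynamics of Eqs. 6-8 can be integrated with a simple Euler scheme as sketched below. This is a minimal sketch, not the authors' code: the step size, grid resolution, and function names are our own choices, and the optional `i_tms` argument anticipates the uniform perturbation introduced in Sec. 2.1.2.

```python
import numpy as np

# tau_m = 10 ms is stated in the text; the remaining values follow Fig. 1.
TAU_M, BETA, EPS = 10.0, 0.25, 0.1
J0, J2, T = 73.0, 110.0, 1.0

def g(h):
    """Quasi-linear output function of Eq. 5: zero below T, slope beta, capped at 1."""
    return np.clip(BETA * (h - T), 0.0, 1.0)

def step(m0, m2, c, i_tms=0.0, dt=0.1, n_theta=181):
    """One Euler step of Eqs. 6-7, using the input field of Eq. 8 plus an
    optional uniform TMS term."""
    theta = np.linspace(-np.pi / 2, np.pi / 2, n_theta)
    h = -J0 * m0 + c * (1 - EPS) + (EPS * c + J2 * m2) * np.cos(2 * theta) + i_tms
    r = g(h)
    m0 = m0 + dt / TAU_M * (-m0 + np.trapz(r, theta) / np.pi)
    m2 = m2 + dt / TAU_M * (-m2 + np.trapz(r * np.cos(2 * theta), theta) / np.pi)
    return m0, m2
```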
2.1.2 TMS induction
We assumed that the TMS perturbation would be constant for all neurons in the network
because the spatial extent of the neural population that we were dealing with is small compared with the spatial gradient of the induced electric field. Thus we modified the input
function as $\tilde{h}(\theta, t) = h(\theta, t) + I_{\mathrm{TMS}}(t)$. Eqs. 6 to 8 were also modified accordingly by replacing $h$ with $\tilde{h}$. Here we employ a simple rectangular input (amplitude: $I_{\mathrm{TMS}}$, duration: $D_{\mathrm{TMS}}$) as a TMS-like perturbation (see the middle graph of Fig. 2A).
2.1.3 Bistability and afferent input model
TMS applied to the occipital area after visual stimulus presentation typically suppresses its
visual percept [6][7][8]. To determine whether the network model produces suppression
similar to the experimental data, we applied a TMS-like perturbation at various timings
after the afferent onset and examined whether the final state was suppressed or not. For this
purpose, the network must hold two equilibria for the same afferent input condition and
reach one of them depending on the specific timing and intensity of TMS. We thus chose
proper sets of $\beta$, $J_0$, and $J_2$ that operated the network in the non-linear regime. In addition,
we employed an afferent input model consisting of suprathreshold transient (amplitude:
At > T , duration: Dt ) and subthreshold sustained (amplitude: As < T ) components (see
the bottom graph of Fig. 2A). This is the simplest input model to lead the network into
the bistable range (Fig. 1B), yet it still captures the common properties of neural signals in
brain areas such as the LGN and visual cortex.
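A minimal driver for this bistability experiment, reusing `step()` from the sketch above, might look as follows; the amplitudes and durations are illustrative placeholders, and the final attractor test is a simple heuristic rather than the paper's procedure.

```python
# Hypothetical input parameters: a_t/a_s are the transient and sustained
# afferent amplitudes, d_t the transient duration, d_tms the TMS pulse width.
def run_trial(tms_onset, i_tms_amp, a_t=1.5, a_s=0.6, d_t=5.0,
              d_tms=1.0, t_end=300.0, dt=0.1):
    m0 = m2 = 0.0
    for k in range(int(t_end / dt)):
        t = k * dt
        c = a_t if t < d_t else a_s                      # transient then sustained input
        tms = i_tms_amp if tms_onset <= t < tms_onset + d_tms else 0.0
        m0, m2 = step(m0, m2, c, tms, dt)
    return m0 > 1e-3   # True: active attractor survived; False: network suppressed
```

Sweeping `tms_onset` and `i_tms_amp` over a grid would trace out a suppressive latency range analogous to Fig. 3A.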
2.2 TMS on single neuron
2.2.1 Compartment model of cortical neuron
We also examined the effect of TMS on an isolated single neuron by using a compartment
model of a neocortical neuron analyzed by Mainen and Sejnowski [9]. The model included
the following membrane ion channels: a low density of Na+ channels in soma and dendrites and a high density in the axon hillock and the initial segment, fast K+ channels in soma but not in dendrites, slow calcium- and voltage-dependent K+ channels in soma and dendrites, and high-threshold Ca2+ channels in soma and dendrites. We examined several types of cellular morphology as in Mainen's report but excluded axonal compartments in order to evaluate the effect of induced current only from dendritic arborization. We injected a constant somatic current and observed a specific spiking pattern depending on morphology (Fig. 5).

Figure 2: A) The time course of the order parameters, the perturbation, and the afferent input. B) The network state in the order-parameter plane. The network bifurcates depending on the induction timing of the perturbation and converges to either of the attractors. Two examples of TMS induction timing (10 and 20 ms after the afferent onset) are shown here. The dotted lines indicate the control condition without the perturbation in both graphs.
2.2.2 TMS induction
There have been several reports on theoretically estimating the intracellular current induced by TMS [1][2][10]. Here we briefly describe a simple expression for the axial and
transmembrane current induced by TMS. The electric field E induced by a brief magnetic pulse is given by the temporal derivative of the magnetic vector potential A, i.e.,
$\mathbf{E}(s, t) = -\partial \mathbf{A}(s, t)/\partial t$. Suppose the spatial gradient of the induced magnetic field is so
small compared to a single cellular dimension that E can be approximated to be constant
over all compartments. The simplest case is that one compartment has one distal and one
proximal connection, in which the transmembrane current can be defined as the difference
between the axial current going into and coming out of the adjacent compartment. The
axial current between adjacent compartments can be uniquely determined by the distance and axial conductance between them (Fig. 5B):
$$I_a^{\mathrm{TMS}}(j, k) = G_{jk} \int_{s_j}^{s_k} \mathbf{E}(s) \cdot d\mathbf{s} = G_{jk}\, \mathbf{E} \cdot \mathbf{s}_{jk}. \tag{9}$$
Hence the transmembrane current in the k-th compartment is,
$$I_m^{\mathrm{TMS}}(k) = I_a^{\mathrm{TMS}}(j, k) - I_a^{\mathrm{TMS}}(k, l) = \mathbf{E} \cdot \left( G_{jk} \mathbf{s}_{jk} - G_{kl} \mathbf{s}_{kl} \right). \tag{10}$$
Now we see that the important factors to produce a change in local membrane potential by
TMS are the differences in axial conductance and position between adjacent compartments.
As Nagarajan and Kamitani pointed out [1][2], if the cellular size is small, the heterogeneity
of the local cellular properties (e.g. branching, ending, bending of dendrites, and change in
dendrite diameter) could be crucial in inducing an intracellular current by TMS. A multiple
branching formulation is easily obtained from Eq. 10. For simplicity, the induced electric field was approximated as a rectangular pulse. The pulse's duration was set to be 1 ms, as in the network model, and the amplitude was varied within a physically valid range according to the numerical experiment's conditions.
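For an unbranched chain of compartments, Eqs. 9-10 can be sketched as below; the field, positions, and conductances are hypothetical inputs for illustration and are not taken from Mainen's model.

```python
import numpy as np

def transmembrane_tms_current(E, pos, g_axial):
    """E: (3,) induced electric field; pos: (n, 3) compartment centers;
    g_axial: (n-1,) axial conductances between neighbors.
    Returns the (n,) vector of transmembrane currents of Eq. 10."""
    s = np.diff(pos, axis=0)        # displacement vectors s_jk between neighbors
    i_ax = g_axial * (s @ E)        # Eq. 9: I_a(j, k) = G_jk * (E . s_jk)
    i_m = np.zeros(len(pos))
    i_m[1:] += i_ax                 # axial current entering compartment k from j
    i_m[:-1] -= i_ax                # minus the axial current leaving k toward l
    return i_m
```

Branch points would simply contribute one such term per neighbor, following the multiple-branching extension of Eq. 10 noted above.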
Figure 3: A) The minimum intensity of the suppressive perturbation in our model (solid
line for single- and dashed line for paired-pulse). The width of each curve indicates the
suppressive latency range for a particular intensity of the perturbation (e.g. if At = 1.5 and
ITMS = 12, the network is suppressed during -35.5 to 64.2 ms for a single pulse case; thus
the suppressive latency range is 99.7 ms.) B) Experimental data of suppressive effect on
a character recognition task replotted and modified from [7] and [11]. Both graph A and
B equivalently indicate the susceptibility to TMS at the particular timing. To compare the
absolute timing, the model results must be biased with the proper amount of delay in neural
signal transmission given to the target neural population because these are measured from
the timing of afferent signal arrival, not from the onset of the visual stimulus presentation.
3 Results
3.1 Temporally selective suppression of neural population
The time course of the order parameters is illustrated in Fig. 2A. The network state can also be depicted as a point on a two-dimensional plane of the order parameters (Fig. 2B).
Because TMS was modeled as a uniform perturbation, the mean activity, m0 , was transiently increased just after the onset of the perturbation and was followed by a decrease
of both m0 and m2 . This result was obtained regardless of the onset timing of the perturbation. The final state of the network, however, critically depended on the onset timing
of the perturbation. It converged to either of the bistable states; the silent state in which
the network activity is zero or the active state in which the network holds a local excitation. When the perturbation was applied temporally close to the afferent onset, the network
was completely suppressed and converged to the silent state. On the other hand, when the
perturbation was too early or too late from the afferent onset, the network was transiently
perturbed but finally converged to the active state.
We could thus find the latency range during which the perturbation could suppress the
network activity (Fig. 3A). The width of suppressive latency range increased with the amplitude of the perturbation and reached over 100 ms, which is comparable to typical experimental data of suppression of visual percepts by occipital TMS [6][7]. When we supplied a
strong afferent input to the network, equivalent to a contrast increase in the visual stimulus,
the suppressive latency range narrowed and shifted upward, and consequently, it became
difficult to suppress the network activity without a strict timing control and larger amplitude of the perturbation. These results also agree with experiments using visual stimuli of
various contrasts or visibilities [8][13]. The suppressive latency range consistently had a
bell shape with the bottom at the afferent onset regardless of parameter changes, indicating
that TMS works most suppressively at the timing when the afferent signal reaches the target neural population.

Figure 4: Threshold reduction by paired pulses in the steady state. A) Network model and B) experimental data of the phosphene threshold replotted from [12]. The dashed line indicates the threshold for a single-pulse TMS.
3.2 Sustained inhibition of neural population by subthreshold pulse
Multiple TMS pulses within a short interval, or repetitive TMS (rTMS), can evoke
phosphene or visual deficits even though each single pulse fails to elicit any perceptible
effect. This experimental fact suggests that a TMS pulse, even if it is a subthreshold one,
induces a certain sustained inhibitory effect and reduces the next pulse's threshold to elicit
perceptible interference.
We considered the effect of paired pulses on a neural population and determined the duration of the threshold reduction by a subthreshold TMS. Here we set the subthreshold level
at the upper limit of intensity which could not suppress the network at the induction timing.
For the steady state, the initial subthreshold perturbation significantly reduced the suppressive threshold for the subsequent perturbation; the original threshold level was restored only more than 100 ms after the initial TMS (Fig. 4A). The threshold slightly increased when
the pulse interval was shorter than $\tau_m$. These results agree with experimental data of occipital TMS examining the relationship between phosphene threshold and the paired-pulse
TMS interval [12] (Fig. 4B).
For the transient state, we also observed that the initial subthreshold perturbation, indicated
by the arrow in Fig. 3A, significantly reduced the suppressive threshold for the subsequent
perturbation, and consequently, the suppressive latency range was extended up to 60 ms
(Fig. 3A). These results are consistent with Amassian's experimental results demonstrating
that a preceding subthreshold TMS to the occipital cortex increased the suppressive latency
range in a character recognition task [11] (Fig. 3B).
3.3 Transient inhibition of single neuron by subthreshold pulse
Next, we focus on the effect of TMS on a single neuron. Results from a layer V pyramidal cell are illustrated in Fig. 5. An intense perturbation could inhibit the spike train for
over 100 ms after a brief spike burst (Fig. 5C1). This sustained spike inhibition might be
caused by mechanisms similar to after-hyperpolarization or adaptation because the intracellular concentration of Ca2+ rapidly increased during the bursting period. These results
are basically the same as Kamitani's report [1] using Poisson synapses as current inputs to
the neuron. We tried several types of morphology and found that it was difficult to suppress their original spike patterns when the size of the neuron was small (e.g. stellate cell)
or when the neuron initially showed spike bursts (e.g. pyramidal cell with more bushy
dendritic arbors).
Figure 5: A) Layer V pyramidal cell. B) Compartment model of the neuron and the transmembrane current induced by TMS. C1, C2) The spike train perturbed by a suprathreshold
and subthreshold TMS. C3) The temporal variation of the TMS threshold for inducing the
spike inhibition. Thin lines in C1-C3 indicate the control condition without TMS.
Using a morphology whose spike train was most easily suppressed (i.e. a pyramidal cell in
Fig. 5A), we determined whether a preceding subthreshold pulse could induce the sustained
inhibitory effect. Here, the suppressive threshold was defined as the lowest intensity of the
perturbation yielding a spike inhibitory period whose duration was more than 100 ms. The
perturbation below the suppressive threshold caused the spike timing shift as illustrated in
Fig. 5C2. In the single cell's case, the suppressive threshold highly depended on the relative
timing within the spike interval and repeated its pattern periodically. In the initial spike
interval from the subthreshold perturbation to the next spike, the suppressive threshold
decreased but it recovered to the original level immediately after the next spike initiation
(Fig. 5C3). This fast recovery of the suppressive threshold occurred regardless of the
induction timing of the subthreshold perturbation, indicating that the sustained inhibitory
effect by the preceding subthreshold perturbation lasted on the order of one (or two at most)
spike interval, even with the most suppressible neuron model. The result is incomparably
shorter than the experimental data as noted in Sec. 3.2, suggesting that it is impossible to
attribute the neural substrates of the threshold reduction caused by the subthreshold pulse
to only the membrane dynamics of a single neuron.
4 Discussion
This paper focused on the dichotomy to determine what is essential for TMS-induced suppression: a network or a single neuron? Our current answer is that the network is essential, because the temporal properties of suppression observed in the neural population
model were totally consistent with the experimental data. In a single neuron model, we
can actually observe a spike inhibition whose duration is comparable to the silent period
of the electromyogram induced by TMS on the motor cortex [14]; however, the degree of
suppression is highly dependent on the property of the high-threshold Ca2+ channel and
is also very selective about the cellular morphology. In addition, the most critical point is
that the sustained inhibitory effect of a subthreshold pulse cannot be explained by only the
membrane mechanisms of a single neuron. These results indicate that TMS can induce a
spike inhibition or a spike timing shift at the single-neuron level, which nevertheless seems insufficient to explain the whole body of experimental data.
As Walsh pointed out [15], TMS is highly unlikely to evoke a coordinated activity pattern
or to stimulate a specific functional structure with a fine spatial resolution in the target cortical area. Rather, TMS seems to induce a random activity irrespective of the existing neural
activity pattern. This paper simply modeled TMS as a uniform perturbation simultaneously
applied to all neurons in the network. Walsh's idea and our model are basically equivalent
in that TMS gives a neural stimulation irrespective of the existing cortical activity evoked
by the afferent input. Thus inactive parts of the network, or opponent neurons far from
$\theta_0$, can also be activated by the perturbation if it is strong enough to raise such inactive
neurons above the activation threshold, resulting in suppression of the original local excitation through lateral inhibitory connections. To suppress the network activity, TMS needs
to be applied before the local excitation is built up and the inactive neurons are strongly
suppressed. In the paired-pulse case, even though each TMS pulse was not strong enough
to activate the suppressed neurons, the pre-activation by the preceding TMS can facilitate
the subsequent TMS's effect if it is applied before the network restores its original activity
pattern. These are the basic mechanisms of TMS-induced suppression in our model, by
which the computational results are consistent with the various experimental data. In addition to our computational evidence, recent neuropharmacological studies demonstrated that
GABAergic drugs [16] and a hyperventilation environment [17] could modulate the TMS effect, suggesting that transsynaptic inhibition via inhibitory interneurons might be involved in
TMS-induced effects. All these facts indicate that TMS-induced neural interference is mediated by a transsynaptic network, not only by single neuron properties, and that inhibitory
interactions in a neural population play a critical role in yielding neural interference and its
temporal properties.
Acknowledgments
We greatly appreciate our fruitful discussions with Dr. Yukiyasu Kamitani.
References
[1] Y. Kamitani, V. Bhalodi, Y. Kubota, and S. Shimojo, Neurocomputing 38-40, 697 (2001).
[2] S. Nagarajan, D. Durand, and E. Warman, IEEE Trans Biomed Eng 40, 1175 (1993).
[3] R. Ben-Yishai, R. Bar-Or, and H. Sompolinsky, Proc Natl Acad Sci USA 92, 3844 (1995).
[4] H. Sompolinsky and R. Shapley, Curr Opin Neurobiol 7, 514 (1997).
[5] D. Somers, S. Nelson, and M. Sur, J Neurosci 15, 5448 (1995).
[6] Y. Kamitani and S. Shimojo, Nat Neurosci 2, 767 (1999).
[7] V. Amassian, R. Cracco, P. Maccabee, J. Cracco, A. Rudell, and L. Eberle, Electroencephalogr Clin Neurophysiol 74, 458 (1989).
[8] T. Kammer and H. Nusseck, Neuropsychologia 36, 1161 (1998).
[9] Z. Mainen and T. Sejnowski, Nature 382, 363 (1996).
[10] B. Roth and P. Basser, IEEE Trans Biomed Eng 37, 588 (1990).
[11] V. Amassian, P. Maccabee, R. Cracco, J. Cracco, A. Rudell, and L. Eberle, Brain Res 605, 317 (1993).
[12] P. Ray, K. Meador, C. Epstein, D. Loring, and L. Day, J Clin Neurophysiol 15, 351 (1998).
[13] V. Amassian, R. Cracco, P. Maccabee, and J. Cracco, Handbook of Transcranial Magnetic Stimulation (Arnold Publisher, 2002), chap. 30, pp. 323-34.
[14] M. Inghilleri, A. Berardelli, G. Cruccu, and M. Manfredi, J Physiol 466, 521 (1993).
[15] V. Walsh and A. Cowey, Nat Rev Neurosci 1, 73 (2000).
[16] U. Ziemann, J. Rothwell, and M. Ridding, J Physiol 496.3, 873 (1996).
[17] A. Priori, A. Berardelli, B. Mercuri, M. Inghilleri, and M. Manfredi, Electroencephalogr Clin
Neurophysiol 97, 69 (1995).
1,535 | 2,396 | ICA-Based Clustering of Genes from
Microarray Expression Data
Su-In Lee* and Serafim Batzoglou†
*Department of Electrical Engineering
†Department of Computer Science
Stanford University, Stanford, CA 94305
[email protected], [email protected]
Abstract
We propose an unsupervised methodology using independent
component analysis (ICA) to cluster genes from DNA microarray
data. Based on an ICA mixture model of genomic expression
patterns, linear and nonlinear ICA finds components that are specific
to certain biological processes. Genes that exhibit significant
up-regulation or down-regulation within each component are
grouped into clusters. We test the statistical significance of
enrichment of gene annotations within each cluster. ICA-based
clustering outperformed other leading methods in constructing
functionally coherent clusters on various datasets. This result
supports our model of genomic expression data as a composite effect
of independent biological processes. Comparison of clustering
performance among various ICA algorithms including a
kernel-based nonlinear ICA algorithm shows that nonlinear ICA
performed the best for small datasets, and natural-gradient
maximum-likelihood estimation worked well for all the datasets.
1 Introduction
Microarray technology has enabled genome-wide expression profiling, promising to
provide insight into the underlying biological mechanisms involved in gene regulation. To
aid such discoveries, mathematical tools that are versatile enough to capture the
underlying biology and simple enough to be applied efficiently on large datasets are
needed. Analysis tools based on novel data mining techniques have been proposed
[1]-[6]. When applying mathematical models and tools to microarray analysis,
clustering genes that have similar biological properties is an important step for
three reasons: reduction of data complexity, prediction of gene function, and
evaluation of the analysis approach by measuring the statistical significance of
biological coherence of gene clusters.
Independent component analysis (ICA) linearly decomposes each of N vectors into M
common component vectors ($N \ge M$) so that each component is statistically as
independent from the others as possible. One of the main applications of ICA is blind
source separation (BSS) that aims to separate source signals from their mixtures.
There have been a few attempts to apply ICA to the microarray expression data to
extract meaningful signals each corresponding to independent biological process
[5]-[6]. In this paper, we provide the first evidence that ICA is a superior
mathematical model and clustering tool for microarray analysis, compared to the most
widely used methods namely PCA and k-means clustering. We also introduce the
application of nonlinear ICA to microarray analysis, and show that it outperforms
linear ICA on some datasets.
We apply ICA to microarray data to decompose the input data into statistically
independent components. Then, genes are clustered in an unsupervised fashion into
non-mutually exclusive clusters. Each independent component is assigned a putative
biological meaning based on functional annotations of genes that are predominant
within the component. We systematically evaluate the clustering performance of
several ICA algorithms on four expression datasets and show that ICA-based
clustering is superior to other leading methods that have been applied to analyze the
same datasets. We also propose a kernel-based nonlinear ICA algorithm for dealing
with a more realistic mixture model. Among the different ICA algorithms,
including six linear and one nonlinear ICA algorithm, the natural-gradient
maximum-likelihood estimation method (NMLE) [7]-[8] performs well in all the
datasets. The kernel-based nonlinear ICA method worked better for the three small datasets.
2 Mathematical model of genome-wide expression
Several distinct biological processes take place simultaneously inside a cell; each
biological process has its own expression program to up-regulate or down-regulate the
level of expression of specific sets of genes. We model a genome-wide expression
pattern in a given condition (measured by a microarray assay) as a mixture of signals
generated by statistically independent biological processes with different activation
levels. We design two kinds of models for genomic expression pattern: a linear and
nonlinear mixture model.
Suppose that a cell is governed by $M$ independent biological processes $S = (s_1, \ldots, s_M)^T$, each of which is a vector of $K$ gene expression levels, and that we measure the levels of expression of all genes in $N$ conditions, resulting in a microarray expression matrix $X = (x_1, \ldots, x_N)^T$. The expression level at each different condition $j$ can be expressed as a linear combination of the $M$ biological processes: $x_j = a_{j1} s_1 + \cdots + a_{jM} s_M$.
We can express this idea concisely in matrix notation as follows.
$$X = AS, \qquad \begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix} = \begin{pmatrix} a_{11} & \cdots & a_{1M} \\ \vdots & & \vdots \\ a_{N1} & \cdots & a_{NM} \end{pmatrix} \begin{pmatrix} s_1 \\ \vdots \\ s_M \end{pmatrix} \tag{1}$$
More generally, we can express $X = (x_1, \ldots, x_N)^T$ as a post-nonlinear mixture of the underlying independent processes as follows, where $f(\cdot)$ is a nonlinear mapping from $N$- to $N$-dimensional space.
$$X = f(AS), \qquad \begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix} = f\!\left( \begin{pmatrix} a_{11} & \cdots & a_{1M} \\ \vdots & & \vdots \\ a_{N1} & \cdots & a_{NM} \end{pmatrix} \begin{pmatrix} s_1 \\ \vdots \\ s_M \end{pmatrix} \right) \tag{2}$$
3 Independent component analysis
In the models described above, since we assume that the underlying biological processes are independent, we suggest that the vectors $S = (s_1, \ldots, s_M)$ are statistically independent, and so ICA can recover $S$ from the observed microarray data $X$. For linear ICA, we apply the natural-gradient maximum-likelihood estimation (NMLE) method, which was proposed in [7] and made more efficient by the natural gradient method in [8].
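A minimal sketch of this natural-gradient update is given below; the tanh score function assumes super-Gaussian sources (consistent with the observation in Sec. 6), and the learning rate and iteration count are illustrative, not tuned values.

```python
import numpy as np

def nmle_ica(X, n_iter=2000, lr=1e-3):
    """X: (N, K) mixture matrix (rows = conditions). Returns an unmixing
    matrix W with S ~ W @ X, via the rule W <- W + lr (I - phi(Y) Y^T) W."""
    N, K = X.shape
    W = np.eye(N)
    for _ in range(n_iter):
        Y = W @ X                                    # current source estimates
        W += lr * (np.eye(N) - np.tanh(Y) @ Y.T / K) @ W
    return W
```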
We also apply nonlinear ICA using reproducing kernel Hilbert spaces (RKHS) based on [9], as follows:
1. We map the $N$-dimensional input data $x_i$ to $\Phi(x_i)$ in the feature space by using the kernel trick. The feature space is defined by the relationship $\Phi(x_i)^T \Phi(x_j) = k(x_i, x_j)$; that is, the inner product of mapped data is determined by a kernel function $k(\cdot,\cdot)$ in the input space. We used a Gaussian radial basis function (RBF) kernel, $k(x,y) = \exp(-|x-y|^2)$, and a polynomial kernel of degree 2, $k(x,y) = (x^T y + 1)^2$. To perform the mapping, we found orthonormal bases of the feature space by randomly sampling $L$ input data $v = \{v_1, \ldots, v_L\}$ 1000 times and choosing the set minimizing the condition number of $\Phi_v = (\Phi(v_1), \ldots, \Phi(v_L))$. Then, a set of orthonormal bases of the feature space is determined by the selected $L$ images of input data in $v$ as $\Psi = \Phi_v (\Phi_v^T \Phi_v)^{-1/2}$. We map all input data $x_1, \ldots, x_K$, each corresponding to a gene, to their coordinates $\Psi^T \Phi(x_1), \ldots, \Psi^T \Phi(x_K)$ with respect to the basis $\Psi$, as follows (a code sketch of this mapping appears after the list):
$$\Psi^T \Phi(x_i) = (\Phi_v^T \Phi_v)^{-1/2} \Phi_v^T \Phi(x_i) = \begin{pmatrix} k(v_1, v_1) & \cdots & k(v_1, v_L) \\ \vdots & & \vdots \\ k(v_L, v_1) & \cdots & k(v_L, v_L) \end{pmatrix}^{-1/2} \begin{pmatrix} k(v_1, x_i) \\ \vdots \\ k(v_L, x_i) \end{pmatrix} \quad (1 \le i \le K) \tag{3}$$
2. We linearly decompose the mapped data $\tilde{\Phi} = [\Psi^T\Phi(x_1), \ldots, \Psi^T\Phi(x_K)] \in \mathbb{R}^{L \times K}$ into statistically independent components using NMLE.
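A compact sketch of the mapping of Eq. 3 with the Gaussian RBF kernel is shown below; the eigendecomposition route to the inverse square root, and the assumption that the basis points `V` have already been selected by the condition-number search, are our own implementation choices.

```python
import numpy as np

def rbf_kernel(A, B):
    """Gaussian RBF kernel matrix, entries exp(-|a - b|^2) for rows of A and B."""
    return np.exp(-((A[:, None, :] - B[None, :, :]) ** 2).sum(-1))

def kernel_map(X, V):
    """Eq. 3: map the K rows of X onto the orthonormal feature-space basis
    built from the L sampled points V; returns an (L, K) matrix."""
    Kvv = rbf_kernel(V, V)                         # (L, L) Gram matrix of the basis
    w, U = np.linalg.eigh(Kvv)                     # (Kvv)^(-1/2) via eigenvectors
    inv_sqrt = U @ np.diag(1.0 / np.sqrt(np.maximum(w, 1e-12))) @ U.T
    return inv_sqrt @ rbf_kernel(V, X)             # columns are the mapped genes
```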
4 Proposed approach
The microarray dataset we are given is in matrix form where each element xij
corresponds to the level of expression of the jth gene in the ith experimental condition.
Missing values are imputed by KNNImpute [10], an algorithm based on k nearest
neighbors that is widely used in microarray analysis. Given the expression matrix X of
$N$ experiments by $K$ genes, we perform the following steps.
1. Apply ICA to decompose $X$ into independent components $y_1, \ldots, y_M$ as in Equations (1) and (2). Prior to applying ICA, remove any rows that make the expression matrix $X$ singular. After ICA, each component $y_i$ is a vector comprising $K$ loads, one per gene, i.e., $y_i = (y_{i1}, \ldots, y_{iK})$. We chose to maximize the number of components $M$, which then equals the number of microarray experiments $N$, because the maximum for $N$ in our datasets was 250,
which is smaller than the number of biological processes we hypothesize to act
within a cell.
2. For each component, cluster genes according to their relative loads yij/mean(yi).
Based on our ICA model, each component is a putative genomic expression
program of an independent biological process. Thus, our hypothesis is that genes
showing relatively high or low expression level within the component are the most
important for the process. We create two clusters for each component: one cluster
containing genes with expression level higher than a threshold, and one cluster
containing genes with expression level lower than a threshold.
$$\mathrm{Cluster}_{i,1} = \{\, \text{gene } j \mid y_{ij} > \mathrm{mean}(y_i) + c \cdot \mathrm{std}(y_i) \,\}$$
$$\mathrm{Cluster}_{i,2} = \{\, \text{gene } j \mid y_{ij} < \mathrm{mean}(y_i) - c \cdot \mathrm{std}(y_i) \,\} \tag{4}$$
Here, mean(yi) is the average, std(yi) is the standard deviation of yi; and c is an
adjustable coefficient. The value of the coefficient c was varied from 1.0 to 2.0 and
the result for c=1.25 was presented in this paper. The results for other values of c
are similar, and are presented on the website www.stanford.edu/~silee/ICA/.
3. For each cluster, measure the enrichment of each cluster with genes of known
functional annotations. Using the Gene Ontology (GO) [11] and KEGG [12] gene
annotation databases, we calculate the p-value for each cluster with every gene
annotation, which is the probability that the cluster contains the observed number
of genes with the annotation by chance assuming the hypergeometric distribution
(details in [4]). For each gene annotation, the minimum p-value that is smaller than
$10^{-7}$ obtained from any cluster was collected. If no p-value smaller than $10^{-7}$ is
found, we consider the gene annotation not to be detected by the approach. As a
result, we can assign biological meaning to each cluster and the corresponding
independent component and we can evaluate the clustering performance by
comparing the collected minimum p-value for each gene annotation with that from other clustering approaches.
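Steps 2 and 3 above can be sketched as follows; `Y` stands for the M x K component matrix produced by ICA, `annotated` for the indices of genes carrying a given GO/KEGG annotation, and all names are our own. The enrichment p-value is the hypergeometric tail probability described in [4].

```python
import numpy as np
from scipy.stats import hypergeom

def ica_clusters(Y, c=1.25):
    """Eq. 4: for each component, one up- and one down-regulated gene cluster."""
    clusters = []
    for y in Y:
        mu, sd = y.mean(), y.std()
        clusters.append(np.where(y > mu + c * sd)[0])
        clusters.append(np.where(y < mu - c * sd)[0])
    return clusters

def enrichment_pvalue(cluster, annotated, n_genes):
    """P(overlap >= observed) under the hypergeometric null over n_genes genes."""
    k = len(set(cluster) & set(annotated))
    return hypergeom.sf(k - 1, n_genes, len(annotated), len(cluster))
```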
5 Performance evaluation
We applied ICA-based clustering to four expression datasets (D1-D4), described in Table 1.
Table 1: The four datasets used in our analysis
|    | ARRAY TYPE      | DESCRIPTION                                                           | # OF GENES (K) | # OF EXPS (N) |
|----|-----------------|-----------------------------------------------------------------------|-------|-----|
| D1 | Spotted         | Budding yeast during cell cycle and CLB2/CLN3 overactive strain [13]  | 4579  | 22  |
| D2 | Oligonucleotide | Budding yeast during cell cycle [14]                                   | 6616  | 17  |
| D3 | Spotted         | C. elegans in various conditions [3]                                   | 17817 | 553 |
| D4 | Oligonucleotide | Normal human tissue including 19 kinds of tissues [15]                 | 7070  | 59  |
For D1 and D4, we compared the biological coherence of ICA components with that
of PCA applied in the same datasets in [1] and [2], respectively. For D2 and D3, we
compared with k-means clustering and the topomap method, applied in the same
datasets in [4] and [3], respectively. We applied nonlinear ICA to D1, D2 and D4.
Dataset D3 is very large and makes the nonlinear algorithm unstable.
D1 was preprocessed to contain log-ratios xij=log2(Rij/Gij) between red and green
intensities. In [1], principal components, referred to as eigenarrays, were
hypothesized to be genomic expression programs of distinct biological processes. We
compared the biological coherence of independent components with that of principal
components found by [1]. Comparison was done in two ways: (1) For each
component, we grouped genes within top x% of significant up-regulation and
down-regulation (as measured by the load of the gene in the component) into two
clusters with x adjusted from 5% to 45%. For each value of x, statistical significance
was measured for clusters from independent components and compared with that from
principal components based on the minimum p-value for each gene annotation, as
described in Section 4. We made a scatter plot to compare the negative log of the
collected best p-values for each gene annotation when x is fixed to be 15%, shown in
Figure 1 (a). (2) Same as before, except we did not fix the value of x; instead, we
collected the minimum p-value from each method for each GO and KEGG gene
annotation category and compared the collected p-values (Figure 1 (b)). For both
cases, in the majority of the gene annotation categories ICA produced significantly
lower p-values than PCA did, especially for gene annotation for which both ICA and
PCA showed high significance.
Figure 1. Comparison of linear ICA (NMLE) to PCA on dataset D1 (a) when x is fixed
to be 15%; (b) when x is not fixed. (c) Three independent components of dataset D4.
Each gene is mapped to a point based on the value assigned to the gene in three
independent components, which are enriched with liver- (red), Muscle- (orange) and
vulva-specific (green) genes, respectively.
The expression levels of genes in D4 were normalized across the 59 experiments, and
the logarithms of the resulting values were taken. Experiments 57, 58, and 59 were
removed because they made the expression matrix nearly singular. In [2], a clustering
approach based on PCA and subsequent visual inspection was applied to an earlier
version of this dataset, containing 50 of the 59 samples. After we performed ICA, the
most significant independent components were enriched for liver-specific,
muscle-specific and vulva-specific genes with p-values of $10^{-133}$, $10^{-124}$ and $10^{-117}$,
respectively. In the ICA liver cluster, 198 genes were liver specific (out of a total of
244), as compared with the 23 liver-specific genes identified in [2] using PCA. The
ICA muscle cluster of 235 genes contains 199 muscle specific genes compared to 19
muscle-specific genes identified in [2]. We generated a 3-dimensional scatter plot of
the load expression levels of all genes annotated in [15] on these significant ICA
components in Figure 1 (c). We can see that the liver-specific, muscle-specific and
vulva-specific genes are strongly biased to lie on the x-, y-, and z- axis, respectively.
We applied nonlinear ICA on this dataset and the first four most significant clusters
from nonlinear ICA with Gaussian RBF kernel were muscle-specific, liver-specific,
vulva-specific and brain-specific with p-values of $10^{-158}$, $10^{-127}$, $10^{-112}$ and $10^{-70}$,
respectively, showing considerable improvement over the linear ICA clusters.
For D2, variance-normalization was applied to the 3000 most variant genes as in [4].
The 17th experiment, which made the expression matrix close to singular, was
removed. We measured the statistical significance of clusters as described in Section
4 and compared the smallest p-value of each gene annotation from our approach to
that from k-means clustering applied to the same dataset [4]. We made a scatter plot
for comparing the negative log of the smallest p-value (y-axis) from ICA clusters with
that from k-means clustering (x-axis). The coefficient c is varied from 1.0 to 2.0 and
the superiority of ICA-based clustering to k-means clustering does not change. In
many practical settings, estimation of the best c is not needed; we can adjust c to get a
desired size of the cluster unless our focus is to blindly find the size of clusters. Figure
2 (a) (b) (c) shows for c=1.25 a comparison of the performance of linear ICA
(NMLE), nonlinear ICA with Gaussian RBF kernel (NICA gauss), and k-means
clustering (k-means).
For D3, first we removed experiments that contained more than 7000 missing values,
because ICA does not perform properly when the dataset contains many missing
values. The 250 remaining experiments were used, containing expression levels for
17817 genes preprocessed to be log-ratios xij=log2(Rij/Gij) between red and green
intensities. We compared the biological coherence of clusters by our approach with
that of topomap-based approach applied to the same dataset in [3]. The result when
c=1.25 is plotted in the Figure 2 (d). We observe that the two methods perform very
similarly, with most categories having roughly the same p-value in ICA and in the
topomap clusters. The topomap clustering approach performs slightly better in a
larger fraction of the categories. Still, we consider this performance a confirmation
that ICA is a widely applicable method that requires minimal training: in this case the
missing values and high diversity of the data make clustering especially challenging,
while the topomap approach was specifically designed and manually trained for this
dataset as described in [3].
Finally, we compared different ICA algorithms in terms of clustering performance.
We tested six linear ICA methods: Natural Gradient Maximum Likelihood Estimation
(NMLE) [7][8], Joint Approximate Diagonalization of Eigenmatrices [16], Fast Fixed
Point ICA with three different measures of non-Gaussianity [17], and Extended
Information Maximization (Infomax) [18]. We also tested two kernels for nonlinear
ICA: Gaussian RBF kernel, and polynomial kernel (NICA ploy). For each dataset, we
compared the biological coherence of clusters generated by each method. Among the
six linear ICA algorithms, NMLE was the best in all datasets. Among both linear and
nonlinear methods, the Gaussian kernel nonlinear ICA method was the best in
Datasets D1, D2 and D4, the polynomial kernel nonlinear ICA method was best in
Dataset D4, and NMLE was best in the large datasets (D3 and D4). In Figure 3, we
compare the NMLE method with three other ICA methods for the dataset D2. Overall,
the NMLE algorithm consistently performed well in all datasets. The nonlinear ICA
algorithms performed best in the small datasets, but were unstable in the two largest
datasets. More comparison results are demonstrated in the website
www.stanford.edu/~silee/ICA/.
Figure 2: Comparison of (a) linear ICA (NMLE) with k-means clustering, (b)
nonlinear ICA with Gaussian RBF kernel to linear ICA (NMLE), and (c) nonlinear
ICA with Gaussian RBF kernel to k-means clustering on the dataset D2. (d)
Comparison of linear ICA (NMLE) to topomap-based approach on the dataset D3.
Figure 3: Comparison of linear ICA (NMLE) to (a) Extended Infomax ICA algorithm,
(b) Fast ICA with symmetric orthogonalization and tanh nonlinearity and (c)
Nonlinear ICA with polynomial kernel of degree 2, on the dataset D2.
6 Discussion
ICA is a powerful statistical method for separating mixed independent signals. We
proposed applying ICA to decompose microarray data into independent gene
expression patterns of underlying biological processes, and to group genes into
clusters that are mutually non-exclusive with statistically significant functional
coherence. Our clustering method outperformed several leading methods on a variety
of datasets, with the added advantage that it requires setting only one parameter,
namely the fraction c of standard deviations beyond which a gene is considered to be
associated with a component's cluster. We observed that performance was not very
sensitive to that parameter, suggesting that ICA is robust enough to be used for
clustering with little human intervention.
The empirical performance of ICA in our tests supports the hypothesis that statistical
independence is a good criterion for separating mixed biological signals in microarray
data. The Extended Infomax ICA algorithm proposed in [18] can automatically
determine whether the distribution of each source signal is super-Gaussian or
sub-Gaussian. Interestingly, the application of Extended Infomax ICA to all the
expression datasets uncovered no source signal with sub-Gaussian distribution. A
likely explanation is that global gene expression profiles are mixtures of
super-Gaussian sources rather than of sub-Gaussian sources. This finding is consistent
with the following intuition: underlying biological processes are super-Gaussian,
because they affect sharply the relevant genes, typically a small fraction of all genes,
and leave the majority of genes relatively unaffected.
Acknowledgments
We thank Te-Won Lee for helpful feedback. We thank Relly Brandman, Chuong Do,
and Yueyi Liu for edits to the manuscript.
References
[1] Alter O, Brown PO, Botstein D. Proc. Natl. Acad. Sci. USA 97(18):10101-10106, 2000.
[2] Misra J, Schmitt W, et al. Genome Research 12:1112-1120, 2002.
[3] Kim SK, Lund J, et al. Science 293:2087-2092, 2001.
[4] Tavazoie S, Hughes JD, et al. Nature Genetics 22(3):281-285, 1999.
[5] Hori G, Inoue M, et al. Proc. 3rd Int. Workshop on Independent Component Analysis and
Blind Signal Separation, Helsinki, Finland, pp. 151-155, 2000.
[6] Liebermeister W. Bioinformatics 18(1):51-60, 2002.
[7] Bell AJ. and Sejnowski TJ. Neural Computation, 7:1129-1159, 1995.
[8] Amari S, Cichocki A, et al. In Advances in Neural Information Processing Systems 8, pp.
757-763. Cambridge, MA: MIT Press, 1996.
[9] Harmeling S, Ziehe A, et al. In Advances in Neural Information Processing Systems 8, pp. 757-763. Cambridge, MA: MIT Press.
[10] Troyanskaya O., Cantor M, et al. Bioinformatics 17:520-525, 2001.
[11] The Gene Ontology Consortium. Genome Research 11:1425-1433, 2001.
[12] Kanehisa M., Goto S. In Current Topics in Computational Molecular Biology, pp.
301-315. MIT-Press, Cambridge, MA, 2002.
[13] Spellman PT, Sherlock G, et al. Mol. Biol. Cell 9:3273-3297, 1998.
[14] Cho RJ, Campell MJ, et al. Molecular Cell 2:65-73, 1998.
[15] Hsiao L, Dangond F, et al. Physiol. Genomics 7:97-104, 2001.
[16] Cardoso JF, Neural Computation 11(1):157-192, 1999.
[17] Hyvarinen A. IEEE Transactions on Neural Networks 10(3):626-634, 1999.
[18] Lee TW, Girolami M, et al. Neural Computation 11:417-441, 1999.
Max-Margin Markov Networks
Ben Taskar Carlos Guestrin Daphne Koller
{btaskar,guestrin,koller}@cs.stanford.edu
Stanford University
Abstract
In typical classification tasks, we seek a function which assigns a label to a single object. Kernel-based approaches, such as support vector machines (SVMs),
which maximize the margin of confidence of the classifier, are the method of
choice for many such tasks. Their popularity stems both from the ability to
use high-dimensional feature spaces, and from their strong theoretical guarantees. However, many real-world tasks involve sequential, spatial, or structured
data, where multiple labels must be assigned. Existing kernel-based methods ignore structure in the problem, assigning labels independently to each object, losing much useful information. Conversely, probabilistic graphical models, such
as Markov networks, can represent correlations between labels, by exploiting
problem structure, but cannot handle high-dimensional feature spaces, and lack
strong theoretical generalization guarantees. In this paper, we present a new
framework that combines the advantages of both approaches: Maximum margin Markov (M3 ) networks incorporate both kernels, which efficiently deal with
high-dimensional features, and the ability to capture correlations in structured
data. We present an efficient algorithm for learning M3 networks based on a
compact quadratic program formulation. We provide a new theoretical bound
for generalization in structured domains. Experiments on the task of handwritten character recognition and collective hypertext classification demonstrate very
significant gains over previous approaches.
1
Introduction
In supervised classification, our goal is to classify instances into some set of discrete categories. Recently, support vector machines (SVMs) have demonstrated impressive successes on a broad range of tasks, including document categorization, character recognition,
image classification, and many more. SVMs owe a great part of their success to their
ability to use kernels, allowing the classifier to exploit a very high-dimensional (possibly
even infinite-dimensional) feature space. In addition to their empirical success, SVMs are
also appealing due to the existence of strong generalization guarantees, derived from the
margin-maximizing properties of the learning algorithm.
However, many supervised learning tasks exhibit much richer structure than a simple categorization of instances into one of a small number of classes. In some cases, we might
need to label a set of inter-related instances. For example: optical character recognition
(OCR) or part-of-speech tagging both involve labeling an entire sequence of elements into
some number of classes; image segmentation involves labeling all of the pixels in an image; and collective webpage classification involves labeling an entire set of interlinked
webpages. In other cases, we might want to label an instance (e.g., a news article) with
multiple non-exclusive labels. In both of these cases, we need to assign multiple labels simultaneously, leading to a classification problem that has an exponentially large set of joint
labels. A common solution is to treat such problems as a set of independent classification
tasks, dealing with each instance in isolation. However, it is well-known that this approach
fails to exploit significant amounts of correlation information [7].
An alternative approach is offered by the probabilistic framework, and specifically by
probabilistic graphical models. In this case, we can define and learn a joint probabilistic
model over the set of label variables. For example, we can learn a hidden Markov model,
or a conditional random field (CRF) [7] over the labels and features of a sequence, and
then use a probabilistic inference algorithm (such as the Viterbi algorithm) to classify these
instances collectively, finding the most likely joint assignment to all of the labels simultaneously. This approach has the advantage of exploiting the correlations between the different
labels, often resulting in significant improvements in accuracy over approaches that classify
instances independently [7, 10]. The use of graphical models also allows problem structure
to be exploited very effectively. Unfortunately, even probabilistic graphical models that are
trained discriminatively do not usually achieve the same level of generalization accuracy
as SVMs, especially when kernel features are used. Moreover, they are not (yet) associated
with generalization bounds comparable to those of margin-based classifiers.
Clearly, the frameworks of kernel-based and probabilistic classifiers offer complementary strengths and weaknesses. In this paper, we present maximum margin Markov (M3)
networks, which unify the two frameworks, and combine the advantages of both. Our approach defines a log-linear Markov network over a set of label variables (e.g., the labels
of the letters in an OCR problem); this network allows us to represent the correlations between these label variables. We then define a margin-based optimization problem for the
parameters of this model. For Markov networks that can be triangulated tractably, the resulting quadratic program (QP) has an equivalent polynomial-size formulation (e.g., linear
for sequences) that allows a very effective solution. By contrast, previous margin-based
formulations for sequence labeling [3, 1] require an exponential number of constraints. For
non-triangulated networks, we provide an approximate reformulation based on the relaxation used by belief propagation algorithms [8, 12]. Importantly, the resulting QP supports
the same kernel trick as do SVMs, allowing probabilistic graphical models to inherit the
important benefits of kernels. We also show a generalization bound for such margin-based
classifiers. Unlike previous results [3], our bound grows logarithmically rather than linearly with the number of label variables. Our experimental results on character recognition
and on hypertext classification, demonstrate dramatic improvements in accuracy over both
kernel-based instance-by-instance classification and probabilistic models.
2
Structure in classification problems
In supervised classification, the task is to learn a function h : X → Y from a set of m i.i.d. instances S = {(x^(i), y^(i) = t(x^(i)))}_{i=1}^m, drawn from a fixed distribution D_{X×Y}. The classification function h is typically selected from some parametric family H. A common
choice is the linear family: given n real-valued basis functions f_j : X × Y → R, a hypothesis h_w ∈ H is defined by a set of n coefficients w_j such that:

    h_w(x) = arg max_y Σ_{j=1}^n w_j f_j(x, y) = arg max_y w^⊤ f(x, y),    (1)
where the f (x, y) are features or basis functions.
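To make Eq. (1) concrete, the sketch below (our illustration, not code from the paper; the feature map f and the label sets are placeholders) evaluates the linear hypothesis by exhaustive search over Y. This is feasible only for a handful of labels, which is precisely the bottleneck that the factored formulations of Sec. 4 avoid.

import numpy as np
from itertools import product

def h_w(w, f, x, label_sets):
    """Eq. (1): h_w(x) = arg max_y w^T f(x, y), by brute force over
    Y = Y_1 x ... x Y_l. f(x, y) must return an n-vector."""
    best_y, best_score = None, -np.inf
    for y in product(*label_sets):
        score = w @ f(x, y)
        if score > best_score:
            best_y, best_score = y, score
    return best_y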
The most common classification setting, single-label classification, takes Y = {y_1, ..., y_k}. In this paper, we consider the much more general setting of multi-label classification, where Y = Y_1 × ... × Y_l with Y_i = {y_1, ..., y_k}. In an OCR task, for
example, each Yi is a character, while Y is a full word. In a webpage collective classification task [10], each Yi is a webpage label, whereas Y is a joint label for an entire website.
In these cases, the number of possible assignments to Y is exponential in the number of
labels l. Thus, both representing the basis functions fj (x, y) in (1) and computing the
maximization arg maxy are infeasible.
An alternative approach is based on the framework of probabilistic graphical models. In
this case, the model defines (directly or indirectly) a conditional distribution P(Y | X). We can then select the label arg max_y P(y | x). The advantage of the probabilistic framework is that it can exploit sparseness in the correlations between labels Y_i. For example, in the OCR task, we might use a Markov model, where Y_i is conditionally independent of the rest of the labels given Y_{i−1}, Y_{i+1}.
We can encode this structure using a Markov network. In this paper, purely for simplicity of presentation, we focus on the case of pairwise interactions between labels. We
emphasize that our results extend easily to the general case. A pairwise Markov network
is defined as a graph G = (Y, E), where each edge (i, j) is associated with a potential
function ψ_ij(x, y_i, y_j). The network encodes a joint conditional probability distribution as P(y | x) ∝ Π_{(i,j)∈E} ψ_ij(x, y_i, y_j). These networks exploit the interaction structure to
parameterize a classifier very compactly. In many cases (e.g., tree-structured networks),
we can use effective dynamic programming algorithms (such as the Viterbi algorithm) to
find the highest probability label y; in others, we can use approximate inference algorithms
that also exploit the structure [12].
The Markov network distribution is simply a log-linear model, with the pairwise potential
ψ_ij(x, y_i, y_j) representing (in log-space) a sum of basis functions over x, y_i, y_j. We can therefore parameterize such a model using a set of pairwise basis functions f(x, y_i, y_j) for (i, j) ∈ E. We assume for simplicity of notation that all edges in the graph denote the same type of interaction, so that we can define a set of features

    f_k(x, y) = Σ_{(i,j)∈E} f_k(x, y_i, y_j).    (2)

The network potentials are then ψ_ij(x, y_i, y_j) = exp[Σ_{k=1}^n w_k f_k(x, y_i, y_j)] = exp[w^⊤ f(x, y_i, y_j)].
The parameters w in a log-linear model can be trained to fit the data, typically by maximizing the likelihood or conditional likelihood (e.g., [7, 10]). This paper presents an algorithm for selecting w that maximizes the margin, gaining all of the advantages of SVMs.
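For chain-structured networks, the arg max over w^⊤ f decomposes along the edges and can be computed by the dynamic programming (Viterbi) recursion mentioned above. A minimal sketch (our illustration; the node and edge score arrays are assumed to be precomputed from w and the basis functions):

import numpy as np

def viterbi(node_scores, edge_scores):
    """node_scores: (l, k) array of per-position label scores;
    edge_scores: (l-1, k, k) array of scores for adjacent label pairs.
    Returns the jointly highest-scoring label sequence."""
    l, k = node_scores.shape
    delta = node_scores[0].copy()            # best score ending in each label
    back = np.zeros((l - 1, k), dtype=int)   # backpointers
    for i in range(1, l):
        cand = delta[:, None] + edge_scores[i - 1]   # (k, k): previous x current
        back[i - 1] = cand.argmax(axis=0)
        delta = cand.max(axis=0) + node_scores[i]
    y = [int(delta.argmax())]
    for i in range(l - 2, -1, -1):
        y.append(int(back[i, y[-1]]))
    return y[::-1]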
3
Margin-based structured classification
For a single-label binary classification problem, support vector machines (SVMs) [11] provide an effective method of learning a maximum-margin decision boundary. For single-label multi-class classification, Crammer and Singer [5] provide a natural extension of this framework by maximizing the margin γ subject to constraints:

    maximize γ   s.t.   ‖w‖ ≤ 1;   w^⊤ Δf_x(y) ≥ γ,   ∀x ∈ S, ∀y ≠ t(x);    (3)

where Δf_x(y) = f(x, t(x)) − f(x, y). The constraints in this formulation ensure that arg max_y w^⊤ f(x, y) = t(x). Maximizing γ magnifies the difference between the value of the true label and the best runner-up, increasing the "confidence" of the classification.
In structured problems, where we are predicting multiple labels, the loss function is usually not simple 0-1 loss I(arg max_y w^⊤ f_x(y) ≠ t(x)), but per-label loss, such as the proportion of incorrect labels predicted. In order to extend the margin-based framework to the multi-label setting, we must generalize the notion of margin to take into account the number of labels in y that are misclassified. In particular, we would like the margin between t(x) and y to scale linearly with the number of wrong labels in y, Δt_x(y):

    maximize γ   s.t.   ‖w‖ ≤ 1;   w^⊤ Δf_x(y) ≥ γ Δt_x(y),   ∀x ∈ S, ∀y;    (4)

where Δt_x(y) = Σ_{i=1}^l Δt_x(y_i) and Δt_x(y_i) ≡ I(y_i ≠ (t(x))_i). Now, using a standard transformation to eliminate γ, we get a quadratic program (QP):

    minimize (1/2)‖w‖^2   s.t.   w^⊤ Δf_x(y) ≥ Δt_x(y),   ∀x ∈ S, ∀y.    (5)
Unfortunately, the data is often not separable by a hyperplane defined over the space of
the given set of features. In such cases, we need to introduce slack variables ξ_x to allow
some constraints to be violated. We can now present the complete form of our optimization
problem, as well as the equivalent dual problem [2]:
Primal formulation:

    min  (1/2)‖w‖^2 + C Σ_x ξ_x    s.t.   w^⊤ Δf_x(y) ≥ Δt_x(y) − ξ_x,   ∀x, y.    (6)

Dual formulation:

    max  Σ_{x,y} α_x(y) Δt_x(y) − (1/2) ‖ Σ_{x,y} α_x(y) Δf_x(y) ‖^2
    s.t.  Σ_y α_x(y) = C, ∀x;    α_x(y) ≥ 0, ∀x, y.    (7)
(Note: for each x, we add an extra dual variable α_x(t(x)), with no effect on the solution.)
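Sections 4 and 5 develop an exact solution of this QP in factored form. Purely as a point of reference, the primal objective (6) can also be attacked by stochastic subgradient descent on the structured hinge loss using loss-augmented inference; this is a generic technique, not the algorithm proposed in this paper, and the oracle below is an assumption (e.g., a Viterbi pass with the per-label loss folded into the node scores).

import numpy as np

def subgradient_train(examples, feat, loss_aug_argmax, n, C, epochs=10, lr=0.01):
    """examples: list of (x, y_true); feat(x, y) -> n-vector;
    loss_aug_argmax(w, x, y_true) -> arg max_y [w^T f(x, y) + loss(y, y_true)]."""
    w = np.zeros(n)
    m = len(examples)
    for _ in range(epochs):
        for x, y_true in examples:
            y_hat = loss_aug_argmax(w, x, y_true)
            # subgradient of (1/(2Cm))||w||^2 plus the per-example hinge term
            g = w / (C * m) + feat(x, y_hat) - feat(x, y_true)
            w -= lr * g
    return w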
4
Exploiting structure in M3 networks
Unfortunately, both the number of constraints in the primal QP in (6), and the number of
variables in the dual QP in (7) are exponential in the number of labels l. In this section, we
present an equivalent, polynomially-sized, formulation.
Our main insight is that the variables α_x(y) in the dual formulation (7) can be interpreted as a density function over y conditional on x, as Σ_y α_x(y) = C and α_x(y) ≥ 0. The dual objective is a function of expectations of Δt_x(y) and Δf_x(y) with respect to α_x(y). Since both Δt_x(y) = Σ_i Δt_x(y_i) and Δf_x(y) = Σ_{(i,j)} Δf_x(y_i, y_j) are sums of functions over nodes and edges, we only need node and edge marginals of the measure α_x(y) to compute their expectations. We define the marginal dual variables as follows:

    μ_x(y_i, y_j) = Σ_{y′ ∼ [y_i, y_j]} α_x(y′),   ∀(i, j) ∈ E, ∀y_i, y_j, ∀x;
    μ_x(y_i)      = Σ_{y′ ∼ [y_i]} α_x(y′),        ∀i, ∀y_i, ∀x;    (8)

where y′ ∼ [y_i, y_j] denotes a full assignment y′ consistent with the partial assignment y_i, y_j.
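Eq. (8) is easy to check numerically on a tiny network by treating α_x as an explicit table over joint assignments and summing out all variables except those of interest; a brute-force sketch (our illustration, exponential by construction):

import numpy as np
from itertools import product

def marginals(alpha, k, l, edges):
    """alpha: dict mapping each full assignment y (tuple of length l,
    labels in 0..k-1) to its dual weight. Returns the node and edge
    marginals of Eq. (8)."""
    mu_node = np.zeros((l, k))
    mu_edge = {e: np.zeros((k, k)) for e in edges}
    for y in product(range(k), repeat=l):
        a = alpha.get(y, 0.0)
        for i in range(l):
            mu_node[i, y[i]] += a
        for (i, j) in edges:
            mu_edge[(i, j)][y[i], y[j]] += a
    return mu_node, mu_edge

The consistency constraints introduced in Eq. (9) below then amount to mu_edge[(i, j)].sum(axis=0) matching mu_node[j].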
Now we can reformulate our entire QP (7) in terms of these dual variables. Consider, for example, the first term in the objective function:

    Σ_y α_x(y) Δt_x(y) = Σ_y α_x(y) Σ_i Δt_x(y_i) = Σ_{i,y_i} Δt_x(y_i) Σ_{y′ ∼ [y_i]} α_x(y′) = Σ_{i,y_i} μ_x(y_i) Δt_x(y_i).

The decomposition of the second term in the objective uses edge marginals μ_x(y_i, y_j).
In order to produce an equivalent QP, however, we must also ensure that the dual variables μ_x(y_i, y_j), μ_x(y_i) are the marginals resulting from a legal density α(y); that is, that they belong to the marginal polytope [4]. In particular, we must enforce consistency between the pairwise and singleton marginals (and hence between overlapping pairwise marginals):

    Σ_{y_i} μ_x(y_i, y_j) = μ_x(y_j),   ∀y_j, ∀(i, j) ∈ E, ∀x.    (9)
If the Markov network for our basis functions is a forest (singly connected), these constraints are equivalent to the requirement that the μ variables arise from a density. Therefore, the following factored dual QP is equivalent to the original dual QP:

    max  Σ_x Σ_{i,y_i} μ_x(y_i) Δt_x(y_i) − (1/2) Σ_{x,x̄} Σ_{(i,j),(r,s)} Σ_{y_i,y_j} Σ_{y_r,y_s} μ_x(y_i, y_j) μ_x̄(y_r, y_s) f_x(y_i, y_j)^⊤ f_x̄(y_r, y_s);
    s.t.  Σ_{y_i} μ_x(y_i, y_j) = μ_x(y_j);    Σ_{y_i} μ_x(y_i) = C;    μ_x(y_i, y_j) ≥ 0.    (10)
Similarly, the original primal can be factored as follows:

    min  (1/2)‖w‖^2 + C Σ_x Σ_i ξ_{x,i} + C Σ_x Σ_{(i,j)} ξ_{x,ij};
    s.t.  w^⊤ Δf_x(y_i, y_j) + Σ_{(i′,j): i′≠i} m_{x,i′}(y_j) + Σ_{(j′,i): j′≠j} m_{x,j′}(y_i) ≥ −ξ_{x,ij};
          Σ_{(i,j)} m_{x,j}(y_i) ≥ Δt_x(y_i) − ξ_{x,i};
          ξ_{x,ij} ≥ 0,   ξ_{x,i} ≥ 0.    (11)
The solution to the factored dual gives us: w = Σ_x Σ_{(i,j)} Σ_{y_i,y_j} μ_x(y_i, y_j) Δf_x(y_i, y_j).
Theorem 4.1 If for each x the edges E form a forest, then a set of weights w will be
optimal for the QP in (6) if and only if it is optimal for the factored QP in (11).
If the underlying Markov net is not a forest, then the constraints in (9) are not sufficient to enforce the fact that the μ's are in the marginal polytope. We can address this problem by triangulating the graph, and introducing new μ LP variables that now span larger subsets of the Y_i's. For example, if our graph is a 4-cycle Y_1–Y_2–Y_3–Y_4–Y_1, we might triangulate the graph by adding an arc Y_1–Y_3, and introducing μ variables over joint instantiations of the cliques Y_1, Y_2, Y_3 and Y_1, Y_3, Y_4. These new μ variables are used in linear equalities that constrain the original μ variables to be consistent with a density. The μ variables appear
only in the constraints; they do not add any new basis functions nor change the objective
function. The number of constraints introduced is exponential in the number of variables
in the new cliques. Nevertheless, in many classification problems, such as sequences and
other graphs with low tree-width [4], the extended QP can be solved efficiently.
Unfortunately, triangulation is not feasible in highly connected problems. However, we
can still solve the QP in (10) defined by an untriangulated graph with loops. Such a procedure, which enforces only local consistency of marginals, optimizes our objective only over
a relaxation of the marginal polytope. In this way, our approximation is analogous to the
approximate belief propagation (BP) algorithm for inference in graphical models [8]. In
fact, BP makes an additional approximation, using not only the relaxed marginal polytope
but also an approximate objective (Bethe free-energy) [12]. Although the approximate QP
does not offer the theoretical guarantee in Theorem 4.1, the solutions are often very accurate in practice, as we demonstrate below.
As with SVMs [11], the factored dual formulation in (10) uses only dot products between basis functions. This allows us to use a kernel to define a very large (and even infinite) set of features. In particular, we define our basis functions by f_x(y_i, y_j) = ρ(y_i, y_j) φ_ij(x), i.e., the product of a selector function ρ(y_i, y_j) with a possibly infinite feature vector φ_ij(x). For example, in the OCR task, ρ(y_i, y_j) could be an indicator function over the class of two adjacent characters i and j, and φ_ij(x) could be an RBF kernel on the images of these two characters. The operation f_x(y_i, y_j)^⊤ f_x̄(y_r, y_s) used in the objective function of the factored dual QP is now ρ(y_i, y_j) ρ(y_r, y_s) K_φ(x, i, j, x̄, r, s), where K_φ(x, i, j, x̄, r, s) = φ_ij(x) · φ_rs(x̄) is the kernel function for the feature φ. Even for some very complex functions φ, the dot-product required to compute K_φ can be executed efficiently [11].
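As a concrete (hypothetical) instance of this construction for the OCR chain, suppose the pairwise feature vector factors as an outer product of per-character features, so that K_φ becomes a product of per-position image kernels; the function names and the RBF choice below are ours, not the paper's:

import numpy as np

def rbf(u, v, gamma=0.1):
    return np.exp(-gamma * np.sum((u - v) ** 2))

def pair_kernel(x, i, j, xb, r, s, yi, yj, yr, ys, gamma=0.1):
    """rho(yi, yj) rho(yr, ys) K_phi(x, i, j, xb, r, s), with rho an
    indicator of label-pair equality and K_phi assumed to factor over
    the two character images."""
    rho = 1.0 if (yi, yj) == (yr, ys) else 0.0
    return rho * rbf(x[i], xb[r], gamma) * rbf(x[j], xb[s], gamma)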
5
SMO learning of M3 networks
Although the number of variables and constraints in the factored dual in (10) is polynomial
in the size of the data, the number of coefficients in the quadratic term (kernel matrix) in the
objective is quadratic in the number of examples and edges in the network. Unfortunately,
this matrix is often too large for standard QP solvers. Instead, we use a coordinate descent
method analogous to the sequential minimal optimization (SMO) used for SVMs [9].
Let us begin by considering the original dual problem (7). The SMO approach solves this QP by analytically optimizing two-variable subproblems. Recall that Σ_y α_x(y) = C. We can therefore take any two variables α_x(y^1), α_x(y^2) and "move weight" from one to the other, keeping the values of all other variables fixed. More precisely, we optimize for α′_x(y^1), α′_x(y^2) such that α′_x(y^1) + α′_x(y^2) = α_x(y^1) + α_x(y^2).
Clearly, however, we cannot perform this optimization in terms of the original dual, which is exponentially large. Fortunately, we can perform precisely the same optimization in terms of the marginal dual variables. Let λ = α′_x(y^1) − α_x(y^1) = α_x(y^2) − α′_x(y^2). Consider a dual variable μ_x(y_i, y_j). It is easy to see that a change from α_x(y^1), α_x(y^2) to α′_x(y^1), α′_x(y^2) has the following effect on μ_x(y_i, y_j):

    μ′_x(y_i, y_j) = μ_x(y_i, y_j) + λ I(y_i = y^1_i, y_j = y^1_j) − λ I(y_i = y^2_i, y_j = y^2_j).    (12)
We can solve the one-variable quadratic subproblem in λ analytically and update the appropriate μ variables. We use inference in the network to test for optimality of the current solution (the KKT conditions [2]) and use violations from optimality as a heuristic to select the next pair y^1, y^2. We omit details for lack of space.
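The marginal bookkeeping of Eq. (12) itself is simple; a sketch (our illustration; the analytic solution for λ and the KKT-based pair selection are omitted here, as in the text):

def smo_pair_update(mu_node, mu_edge, y1, y2, lam):
    """Shift weight lam from joint assignment y2 to y1 and propagate the
    change to all node and edge marginals, per Eq. (12).
    mu_node: (l, k) array; mu_edge: dict (i, j) -> (k, k) array."""
    for i in range(len(y1)):
        mu_node[i, y1[i]] += lam
        mu_node[i, y2[i]] -= lam
    for (i, j), table in mu_edge.items():
        table[y1[i], y1[j]] += lam
        table[y2[i], y2[j]] -= lam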
6
Generalization bound
In this section, we show a generalization bound for the task of multi-label classification
that allows us to relate the error rate on the training set to the generalization error. As we
shall see, this bound is significantly stronger than previous bounds for this problem.
Our goal in multi-label classification is to maximize the number of correctly classified labels. Thus an appropriate error function is the average per-label loss L(w, x) = (1/l) Δt_x(arg max_y w^⊤ f_x(y)). As in other generalization bounds for margin-based classifiers, we relate the generalization error to the margin of the classifier. In Sec. 3, we define the notion of per-label margin, which grows with the number of mistakes between the correct assignment and the best runner-up. We can now define a γ-margin per-label loss:

    L^γ(w, x) = sup_{z : |z(y) − w^⊤ f_x(y)| ≤ γ Δt_x(y), ∀y}  (1/l) Δt_x(arg max_y z(y)).
This loss function measures the worst per-label loss on x made by any classifier z which is perturbed from w^⊤ f_x by at most a γ-margin per-label. We can now prove that the generalization accuracy of any classifier is bounded by its expected γ-margin per-label loss on the training data, plus a term that grows inversely with the margin. Intuitively, the first term corresponds to the "bias", as margin γ decreases the complexity of our hypothesis class by considering a γ-per-label margin ball around w^⊤ f_x and selecting one (the worst) classifier within this ball. As γ shrinks, our hypothesis class becomes more complex, and the first term becomes smaller, but at the cost of increasing the second term, which intuitively corresponds to the "variance". Thus, the result provides a bound to the generalization error that trades off the effective complexity of the hypothesis space with the training error.
Theorem 6.1 If the edge features have bounded 2-norm, max_{(i,j),y_i,y_j} ‖f_x(y_i, y_j)‖_2 ≤ R_edge, then for a family of hyperplanes parameterized by w, and any δ > 0, there exists a constant K such that for any γ > 0 per-label margin, and m > 1 samples, the per-label loss is bounded by:

    E_x L(w, x) ≤ E_S L^γ(w, x) + √( (K/m) [ (R_edge^2 ‖w‖_2^2 q^2 / γ^2) (ln m + ln l + ln q + ln k) + ln(1/δ) ] );

with probability at least 1 − δ, where q = max_i |{(i, j) ∈ E}| is the maximum edge degree in the network, k is the number of classes in a label, and l is the number of labels.
Unfortunately, we omit the proof due to lack of space. (See a longer version of the paper at http://cs.stanford.edu/~btaskar/.) The proof uses a covering number argument analogous to previous results in SVMs [13]. However, we propose a novel method for covering structured problems by constructing a cover to the loss function from a cover of the individual edge basis function differences Δf_x(y_i, y_j). This new type of cover is
polynomial in the number of edges, yielding significant improvements in the bound.
Specifically, our bound has a logarithmic dependence on the number of labels (ln l) and depends only on the 2-norm of the basis functions per-edge (R_edge). This is a significant gain over the previous result of Collins [3], which has linear dependence on the number of labels (l), and depends on the joint 2-norm of all of the features (which is ∼ l R_edge, unless each sequence is normalized separately, which is often ineffective in practice). Finally, note that if l/m = O(1) (for example, in OCR, if the number of instances is at least a constant times the length of a word), then our bound is independent of the number of labels l. Such
a result was, until now, an open problem for margin-based sequence classification [3].
7
Experiments
We evaluate our approach on two very different tasks: a sequence model for handwriting
recognition and an arbitrary topology Markov network for hypertext classification.
Figure 1: (a) 3 example words from the OCR data set; (b) OCR: average per-character test error for logistic regression, CRFs, multiclass SVMs, and M3Ns, using linear, quadratic, and cubic kernels; (c) Hypertext: test error for multiclass SVMs, RMNs and M3Ns, by school and average.
Handwriting Recognition. We selected a subset of ≈ 6100 handwritten words, with average length of ≈ 8 characters, from 150 human subjects, from the data set collected by
Kassel [6]. Each word was divided into characters, each character was rasterized into an
image of 16 by 8 binary pixels. (See Fig. 1(a).) In our framework, the image for each word
corresponds to x, a label of an individual character to Yi , and a labeling for a complete
word to Y. Each label Yi takes values from one of 26 classes {a, . . . , z}.
The data set is divided into 10 folds of ≈ 600 training and ≈ 5500 testing examples.
The accuracy results, summarized in Fig. 1(b), are averages over the 10 folds. We implemented a selection of state-of-the-art classification algorithms: independent label approaches, which do not consider the correlation between neighboring characters (logistic regression, multi-class SVMs as described in (3), and one-against-all SVMs, whose performance was slightly lower than multi-class SVMs); and sequence approaches (CRFs, and our proposed M3 networks). Logistic regression and CRFs are both trained by maximizing the conditional likelihood of the labels given the features, using a zero-mean diagonal Gaussian prior over the parameters, with a standard deviation between 0.1 and 1. The other methods are trained by margin maximization. Our features for each label Y_i are the corresponding image of the ith character. For the sequence approaches (CRFs and M3), we used an indicator basis function to represent the correlation between Y_i and Y_{i+1}. For margin-based methods (SVMs and M3), we were able to use kernels (both quadratic and cubic were evaluated) to increase the dimensionality of the feature space. Using these high-dimensional
feature spaces in CRFs is not feasible because of the enormous number of parameters.
Fig. 1(b) shows two types of gains in accuracy: First, by using kernels, margin-based
methods achieve a very significant gain over the respective likelihood maximizing methods.
Second, by using sequences, we obtain another significant gain in accuracy. Interestingly,
the error rate of our method using linear features is 16% lower than that of CRFs, and
about the same as multi-class SVMs with cubic kernels. Once we use cubic kernels our
error rate is 45% lower than CRFs and about 33% lower than the best previous approach.
For comparison, the previously published results, although using a different setup (e.g., a
larger training set), are about comparable to those of multiclass SVMs.
Hypertext. We also tested our approach on collective hypertext classification, using the
data set in [10], which contains web pages from four different Computer Science departments. Each page is labeled as one of course, faculty, student, project, other. In all of our
experiments, we learn a model from three schools, and test on the remaining school. The
text content of the web page and anchor text of incoming links is represented using a set
of binary attributes that indicate the presence of different words. The baseline model is a
simple linear multi-class SVM that uses only words to predict the category of the page. The
second model is a relational Markov network (RMN) of Taskar et al. [10], which in addition to word-label dependence, has an edge with a potential over the labels of two pages
that are hyper-linked to each other. This model defines a Markov network over each web
site that was trained to maximize the conditional probability of the labels given the words
and the links. The third model is a M3 net with the same features but trained by maximizing
the margin using the relaxed dual formulation and loopy BP for inference.
Fig. 1(c) shows a gain in accuracy from SVMs to RMNs by using the correlations between
labels of linked web pages, and a very significant additional gain by using maximum margin
training. The error rate of M3Ns is 40% lower than that of RMNs, and 51% lower than
multi-class SVMs.
8
Discussion
We present a discriminative framework for labeling and segmentation of structured data
such as sequences, images, etc. Our approach seamlessly integrates state-of-the-art kernel
methods developed for classification of independent instances with the rich language of
graphical models that can exploit the structure of complex data. In our experiments with
the OCR task, for example, our sequence model significantly outperforms other approaches
by incorporating high-dimensional decision boundaries of polynomial kernels over character images while capturing correlations between consecutive characters. We construct our
models by solving a convex quadratic program that maximizes the per-label margin. Although the number of variables and constraints of our QP formulation is polynomial in the
example size (e.g., sequence length), we also address its quadratic growth using an effective optimization procedure inspired by SMO. We provide theoretical guarantees on the
average per-label generalization error of our models in terms of the training set margin.
Our generalization bound significantly tightens previous results of Collins [3] and suggests
possibilities for analyzing per-label generalization properties of graphical models.
For brevity, we simplified our presentation of graphical models to only pairwise Markov
networks. Our formulation and generalization bound easily extend to interaction patterns
involving more than two labels (e.g., higher-order Markov models). Overall, we believe
that M3 networks will significantly further the applicability of high accuracy margin-based
methods to real-world structured data.
Acknowledgments.
This work was supported by ONR Contract F3060-01-2-0564P00002 under DARPA's EELD program.
References
[1] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In Proc. ICML, 2003.
[2] D. Bertsekas. Nonlinear Programming. Athena Scientific, Belmont, MA, 1999.
[3] M. Collins. Parameter estimation for statistical parsing models: Theory and practice of distribution-free methods. In IWPT, 2001.
[4] R.G. Cowell, A.P. Dawid, S.L. Lauritzen, and D.J. Spiegelhalter. Probabilistic Networks and Expert Systems. Springer, New York, 1999.
[5] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. Journal of Machine Learning Research, 2(5):265-292, 2001.
[6] R. Kassel. A Comparison of Approaches to On-line Handwritten Character Recognition. PhD thesis, MIT Spoken Language Systems Group, 1995.
[7] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. ICML, 2001.
[8] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[9] J. Platt. Using sparseness and analytic QP to speed training of support vector machines. In NIPS, 1999.
[10] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In Proc. UAI, Edmonton, Canada, 2002.
[11] V.N. Vapnik. The Nature of Statistical Learning Theory. Springer-Verlag, New York, 1995.
[12] J. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In NIPS, 2000.
[13] T. Zhang. Covering number bounds of certain regularized linear function classes. Journal of Machine Learning Research, 2:527-550, 2002.
Local Phase Coherence
and the Perception of Blur
Zhou Wang and Eero P. Simoncelli
Howard Hughes Medical Institute
Center for Neural Science and Courant Institute of Mathematical Sciences
New York University, New York, NY 10003
[email protected], [email protected]
Humans are able to detect blurring of visual images, but the mechanism
by which they do so is not clear. A traditional view is that a blurred
image looks "unnatural" because of the reduction in energy (either globally or locally) at high frequencies. In this paper, we propose that the
disruption of local phase can provide an alternative explanation for blur
perception. We show that precisely localized features such as step edges
result in strong local phase coherence structures across scale and space in
the complex wavelet transform domain, and blurring causes loss of such
phase coherence. We propose a technique for coarse-to-fine phase prediction of wavelet coefficients, and observe that (1) such predictions are
highly effective in natural images, (2) phase coherence increases with the
strength of image features, and (3) blurring disrupts the phase coherence
relationship in images. We thus lay the groundwork for a new theory of
perceptual blur estimation, as well as a variety of algorithms for restoration and manipulation of photographic images.
1
Introduction
Blur is one of the most common forms of image distortion. It can arise from a variety
of sources, such as atmospheric scatter, lens defocus, optical aberrations of the lens, and
spatial and temporal sensor integration. Human observers are bothered by blur, and our
visual systems are quite good at reporting whether an image appears blurred (or sharpened)
[1, 2]. However, the mechanism by which this is accomplished is not well understood.
Clearly, detection of blur requires some model of what constitutes an unblurred image. In
recent years, there has been a surge of interest in the modelling of natural images, both for
purposes of improving the performance of image processing and computer vision systems,
and also for furthering our understanding of biological visual systems. Early statistical
models were almost exclusively based on a description of global Fourier power spectra.
Specifically, image spectra are found to follow a power law [3-5]. This model leads to
an obvious method of detecting and compensating for blur. Specifically, blurring usually
reduces the energy of high frequency components, and thus the power spectrum of a blurry
image should fall faster than a typical natural image. The standard formulation of the
"deblurring" problem, due to Wiener [6], aims to restore those high frequency components
to their original amplitude. But this proposal is problematic, since individual images show
significant variability in their Fourier amplitudes, both in their shape and in the rate at which
they fall [1]. In particular, simply reducing the number of sharp features (e.g., edges) in
an image can lead to a steeper falloff in global amplitude spectrum, even though the image
will still appear sharp [7]. Nevertheless, the visual system seems to be able to compensate
for this when estimating blur [1, 2, 7].
Over the past two decades, researchers from many communities have converged on a view
that images are better represented using bases of multi-scale bandpass oriented filters.
These representations, loosely referred to as ?wavelets?, are effective at decoupling the
high-order statistical features of natural images. In addition, they provide the most basic
model for neurons in the primary visual cortex of mammals, which are presumably adapted
to efficiently represent the visually relevant features of images. Many recent statistical image models in the wavelet domain are based on the amplitudes of the coefficients, and the
relationship between the amplitudes of coefficients in local neighborhoods or across different scales [e.g. 8]. In both human and computer vision, the amplitudes of complex wavelets
have been widely used as a mechanism for localizing/representing features [e.g. 9?13]. It
has also been shown that the relative wavelet amplitude as a function of scale can be used
to explain a number of subjective experiments on the perception of blur [7].
In this paper, we propose the disruption of local phase as an alternative and effective measure for the detection of blur. This seems counterintuitive, because when an image is
blurred through convolution with a symmetric linear filter, the phase information in the
(global) Fourier transform domain does not change at all. But we show that this is not true
for local phase information.
In previous work, Fourier phase has been found to carry important information about image
structures and features [14] and higher-order Fourier statistics have been used to examine
the phase structure in natural images [15]. It has been pointed out that at the points of
isolated even and odd symmetric features such as lines and step edges, the arrival phases
of all Fourier harmonics are identical [11, 16]. Phase congruency [11, 17] provides a quantitative measure for the agreement of such phase alignment pattern. It has also been shown
that maximum phase congruency feature detection is equivalent to maximum local energy
model [18]. Local phase has been used in a number of machine vision and image processing applications, such as estimation of image motion [19] and disparity [20], description
of image textures [21], and recognition of persons using iris patterns [22]. However, the
behaviors of local phase at different scales in the vicinity of image features, and the means
by which blur affects such behaviors have not been deeply investigated.
2
Local Phase Coherence of Isolated Features
Wavelet transforms provide a convenient framework for localized representation of signals
simultaneously in space and frequency. The wavelets are dilated/contracted and translated
versions of a ?mother wavelet? w(x). In this paper, we consider symmetric (linear phase)
wavelets whose mother wavelets may be written as a modulation of a low-pass filter:
    w(x) = g(x) e^{jω_c x},    (1)
where ω_c is the center frequency of the modulated band-pass filter, and g(x) is a slowly varying and symmetric function. The family of wavelets derived from the mother wavelet are then

    w_{s,p}(x) = (1/√s) w((x − p)/s) = (1/√s) g((x − p)/s) e^{jω_c(x−p)/s},    (2)
where s ∈ R^+ is the scale factor, and p ∈ R is the translation factor. Considering the fact that g(−x) = g(x), the wavelet transform of a given real signal f(x) can be written as

    F(s, p) = ∫_{−∞}^{∞} f(x) w*_{s,p}(x) dx = [ f(x) * (1/√s) g(x/s) e^{jω_c x/s} ]_{x=p}.    (3)
Now assume that the signal f(x) being analyzed is localized near the position x_0, and we rewrite it into a function f_0(x) that satisfies f(x) = f_0(x − x_0). Using the convolution theorem and the shifting and scaling properties of the Fourier transform, we can write

    F(s, p) = (1/2π) ∫_{−∞}^{∞} F(ω) √s G(sω − ω_c) e^{jωp} dω
            = (1/2π) ∫_{−∞}^{∞} F_0(ω) √s G(sω − ω_c) e^{jω(p−x_0)} dω
            = (1/(2π√s)) ∫_{−∞}^{∞} F_0(ω/s) G(ω − ω_c) e^{jω(p−x_0)/s} dω,    (4)

where F(ω), F_0(ω) and G(ω) are the Fourier transforms of f(x), f_0(x) and g(x), respectively.
We now examine how the phase of F(s, p) evolves across space p and scale s. From Eq. (4), we see that the phase of F(s, p) highly depends on the nature of F_0(ω). If F_0(ω) is scale-invariant, meaning that

    F_0(ω/s) = K(s) F_0(ω),    (5)

where K(s) is a real function of only s, but independent of ω, then from Eq. (4) and Eq. (5) we obtain

    F(s, p) = (K(s)/(2π√s)) ∫_{−∞}^{∞} F_0(ω) G(ω − ω_c) e^{jω(p−x_0)/s} dω = (K(s)/√s) F(1, x_0 + (p − x_0)/s).    (6)

Since both K(s) and s are real, we can write the phase as:

    Φ(F(s, p)) = Φ(F(1, x_0 + (p − x_0)/s)).    (7)
This equation suggests a strong phase coherence relationship across scale and space. An
illustration is shown in Fig. 1(a), where it can be seen that equal-phase contours in the (s, p)
plane form straight lines defined by

    x_0 + (p − x_0)/s = C,    (8)
where C can be any real constant. Further, all these straight lines converge exactly at the
location of the feature x0 . More generally, the phase at any given scale may be computed
from the phase at any other scale by simply rescaling the position axis.
This phase coherence relationship relies on the scale-invariance property of Eq. (5) of the
signal. Analytically, the only type of continuous spectrum signal that satisfies Eq. (5)
follows a power law:

    F_0(ω) = K_0 ω^P.    (9)

In the spatial domain, the functions f_0(x) that satisfy this scale-invariance condition include the step function f_0(x) = K(u(x) − 1/2) (where K is a constant and F_0(ω) = K/jω) and its derivatives, such as the delta function f_0(x) = Kδ(x) (where K is a constant and F_0(ω) = K). Notice that both functions f_0(x) are precisely localized in space.
Figure 1(b) shows that this precisely convergent phase behavior is disrupted by blurring.
Specifically, if we convolve a sharp feature (e.g., a step edge) with a low-pass filter, the
resulting signal will no longer satisfy the scale-invariant property of Eq. (5) and the phase
coherence relationship of Eq. (7). Thus, a measure of phase coherence can be used to detect
blur. Note that the phase congruency relationship [11, 17], which expresses the alignment
of phase at the location of a feature, corresponds to the center (vertical) contour of Fig. 1,
which remains intact after blurring. Thus, phase congruency measures [11, 17] provide no
information about blur.
Fig. 1: Local phase coherence of precisely localized (scale-invariant) features, and the disruption of this coherence in the presence of blur. (a) precisely localized features; (b) blurred features.
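This disruption is easy to reproduce numerically: filter a step edge with complex Gabor filters (one plausible instance of the wavelets in Eq. (2); the parameters below are our choices) at two dyadic scales, and compare phases at positions matched according to Eq. (7), before and after blurring. The wrapped mismatch should be small around the edge for the sharp signal, and noticeably larger after blurring.

import numpy as np

def gabor(scale, wc=np.pi / 2, half=32):
    # Gaussian envelope modulated to center frequency wc/scale
    x = np.arange(-half, half + 1)
    g = np.exp(-x**2 / (2.0 * (2.0 * scale) ** 2))
    return g * np.exp(1j * wc * x / scale)

def local_phase(signal, scale):
    return np.angle(np.convolve(signal, gabor(scale), mode="same"))

n, x0 = 256, 128
step = (np.arange(n) >= x0).astype(float)               # step edge at x0
k = np.exp(-np.arange(-8, 9) ** 2 / (2 * 2.0 ** 2))
blurred = np.convolve(step, k / k.sum(), mode="same")   # Gaussian blur

offsets = np.arange(-8, 9, 2)              # even offsets map to integer positions
pc, pf = x0 + offsets, x0 + offsets // 2   # Eq. (7): p at s=2 maps to x0+(p-x0)/2

def mismatch(sig):
    d = local_phase(sig, 2)[pc] - local_phase(sig, 1)[pf]
    return np.abs(np.angle(np.exp(1j * d))).mean()      # wrapped phase error

print("sharp:", mismatch(step))
print("blurred:", mismatch(blurred))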
3
Phase Prediction in Natural Images
In this section, we show that if the local image features are precisely localized (such as the
delta and the step functions), then in the discrete wavelet transform domain, the phase of
nearby fine-scale coefficients can be well predicted from their coarser-scale parent coefficients. We then examine these phase predictions in both sharp and blurred natural images.
3.1
Coarse-to-fine Phase Prediction
From Eq. (3), it is straightforward to prove that for f_0(x) = Kδ(x),

    Φ(F(1, p)) = −ω_c (p − x_0) + n_1 π,    (10)

where n_1 is an integer whose value depends on the value range of ω_c(p − x_0) and the sign of K g(p − x_0). Using the phase coherence relation of Eq. (7), we have

    Φ(F(s, p)) = −ω_c (p − x_0)/s + n_1 π.    (11)

It can also be shown that for a step function f_0(x) = K[u(x) − 1/2], when g(x) is slowly varying and p is located near the feature location x_0,

    Φ(F(s, p)) ≈ −ω_c (p − x_0)/s − π/2 + n_2 π.    (12)

Similarly, n_2 is an integer.
The discrete wavelet transform corresponds to a discrete sampling of the continuous
wavelet transform F (s, p). A typical sampling grid is illustrated in Fig. 2(a), where between every two adjacent scales, the scale factor s doubles and the spatial sampling rate
is halved. Now we consider three consecutive scales and group the neighboring coefficients {a, b1 , b2 , c1 , c2 , c3 , c4 } as shown in Fig. 2(a), then it can be shown that the phases
Fig. 2: Discrete wavelet transform sampling grid in the continuous wavelet transform domain. (a) 1-D sampling; (b) 2-D sampling.
of the finest scale coefficients {c_1, c_2, c_3, c_4} can be well predicted from the coarser scale coefficients {a, b_1, b_2}, provided the local phase satisfies the phase coherence relationship. Specifically, the estimated phase Φ̂ for {c_1, c_2, c_3, c_4} can be expressed as

    Φ̂([c_1, c_2, c_3, c_4]^⊤) = Φ( (a*)^2 [b_1^3, b_1^2 b_2, b_1 b_2^2, b_2^3]^⊤ ),    (13)

where Φ(·) and the product with (a*)^2 apply elementwise.
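In code, the predictor of Eq. (13) is a one-liner over complex coefficients (a minimal sketch; the variable names are ours):

import numpy as np

def predict_fine_phases(a, b1, b2):
    """Eq. (13): predicted phases of {c1, c2, c3, c4} from the grandparent
    coefficient a and the parent coefficients b1, b2 (all complex)."""
    products = np.conj(a) ** 2 * np.array([b1**3, b1**2 * b2, b1 * b2**2, b2**3])
    return np.angle(products)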
We can develop a similar technique for the two dimensional case. As shown in Fig. 2(b),
the phase prediction expression from the coarser scale coefficients {a, b11 , b12 , b21 , b22 } to
the group of finest scale coefficients {cij } is as follows:
    Φ̂({c_ij}) = Φ( (a*)^2 M ),    where

    M = [ b_11^3        b_11^2 b_12     b_11 b_12^2      b_12^3
          b_11^2 b_21   b_11^2 b_22     b_11 b_12 b_22   b_12^2 b_22
          b_11 b_21^2   b_11 b_21 b_22  b_11 b_22^2      b_12 b_22^2
          b_21^3        b_21^2 b_22     b_21 b_22^2      b_22^3 ].    (14)
3.2
Image Statistics
We decompose the images using the "steerable pyramid" [23], a multi-scale wavelet decomposition whose basis functions are spatially localized, oriented, and roughly one octave
in bandwidth. A 3-scale 8-orientation pyramid is calculated for each image, resulting in 26
subbands (24 oriented, plus highpass and lowpass residuals). Using Eq. (14), the phase
of each coefficient in the 8 oriented finest-scale subbands is predicted from the phases of
its coarser-scale parent and grandparent coefficients as illustrated in Fig. 2(b). We applied
such a phase prediction method to a dataset of 1000 high-resolution sharp images as well as
their blurred versions, and then examined the errors between the predicted and true phases
at the fine scale.
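Given the true fine-scale coefficients and the phases predicted via Eq. (14), the statistic behind Fig. 3 is a wrapped phase difference conditioned on coefficient magnitude; a sketch of that final step (our illustration; the pyramid construction itself is omitted):

import numpy as np

def phase_prediction_error(c_true, phi_hat):
    """c_true: complex finest-scale coefficients; phi_hat: predicted phases.
    Returns errors wrapped to (-pi, pi] and the magnitudes used to condition
    the histograms."""
    err = np.angle(c_true * np.exp(-1j * phi_hat))
    return err, np.abs(c_true)

Binning the (magnitude, error) pairs with, e.g., np.histogram2d reproduces the layout of the conditional histograms in Fig. 3(d)-(f).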
The summary histograms are shown in Fig. 3. In order to demonstrate how blurring affects
the phase prediction accuracy, in all these conditional histograms, the magnitude axis corresponds to the coefficient magnitudes of the original image, so that the same column in the
three histograms correspond to the same set of coefficients in spatial location. From Fig.
Fig. 3: Local phase coherence statistics in sharp and blurred images. (a),(b),(c): example natural, blurred and highly blurred images taken from the test image database of 1000 (512×512, 8 bits/pixel, gray-scale) natural images with a wide variety of contents (humans, animals, plants, landscapes, man-made objects, etc.). Images are cropped to 200×200 for visibility; (d),(e),(f): conditional histograms of phase prediction error as a function of the original coefficient magnitude for the three types of images. Each column of the histograms is scaled individually, such that the largest value of each column is mapped to white; (g) phase prediction error histogram of significant coefficients (magnitude greater than 20).
3, we observe that phase coherence is highly effective in natural images and the phase prediction error decreases as the coefficient magnitude increases. Larger coefficients imply stronger local phase coherence. Furthermore, as expected, the blurring process clearly reduces the phase prediction accuracy. We thus hypothesize that it is perhaps this disruption of local phase coherence that the visual system senses as being "unnatural".
4
Discussion
This paper proposes a new view of image blur based on the observation that blur induces
distortion of local phase, in addition to the widely noted loss of high-frequency energy.
We have shown that isolated precisely localized features create strong local phase coherence, and that blurring disrupts this phase coherence. We have also developed a particular
measure of phase coherence based on coarse-to-fine phase prediction, and shown that this
measure can serve as an indication of blur in natural images. In the future, it remains to
be seen whether the visual systems detect blur by comparing the relative amplitude of localized filters at different scales [7], or alternatively, comparing the relative spread of local
phase across scale and space.
The coarse-to-fine phase prediction method was developed in order to facilitate examination of phase coherence in real images, but the computations involved bear some resemblance to the behaviors of neurons in the primary visual cortex (area V1) of mammals.
First, phase information is measured using pairs of localized bandpass filters in quadrature, as are widely used to describe the receptive field properties of neurons in mammalian
primary visual cortex (area V1) [24]. Second, the responses of these filters must be ex-
ponentiated for comparison across different scales. Many recent models of V1 response
incorporate such exponentiation [25]. Finally, responses are seen to be normalized by the
magnitudes of neighboring filter responses. Similar ?divisive normalization? mechanisms
have been successfully used to account for many nonlinear behaviors of neurons in both
visual and auditory neurons [26, 27]. Thus, it seems that mammalian visual systems are
equipped with the basic computational building blocks that can be used to process local
phase coherence.
The importance of local phase coherence in blur perception seems intuitively sensible from
the perspective of visual function. In particular, the accurate localization of image features
is critical to a variety of visual capabilities, including various forms of hyperacuity, stereopsis, and motion estimation. Since the localization of image features depends critically
on phase coherence, and blurring disrupts phase coherence, blur would seem to be a particularly disturbing artifact. This perhaps explains the subjective feeling of frustration when
confronted with a blurred image that cannot be corrected by visual accommodation.
For purposes of machine vision and image processing applications, we view the results of
this paper as an important step towards the incorporation of phase properties into statistical
models for images. We believe this is likely to lead to substantial improvements in a variety
of applications, such as deblurring or sharpening by phase restoration, denoising by phase
restoration, image compression, image quality assessment, and a variety of more creative
photographic applications, such as image blending or compositing, reduction of dynamic
range, or post-exposure adjustments of depth-of-field.
Furthermore, if we would like to detect the position of an isolated precisely localized feature from phase samples measured above a certain allowable scale, then infinite precision
can be achieved using the phase convergence property illustrated in Fig. 1(a), provided
the phase measurement is perfect. In other words, the detection precision is limited by
the accuracy of phase measurement, rather than the highest spatial sampling density. This
provides a workable mechanism of "seeing beyond the Nyquist limit" [28], which could explain a number of visual hyperacuity phenomena [29, 30], and may be used for the design
of super-precision signal detection devices.
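Continuing the toy example above, this precision claim can be made concrete: near the feature, the measured phase at a single coarse sample pins down the feature position to sub-sample accuracy. The snippet below is a sketch under the assumptions of a perfect phase measurement, an isolated feature, and |x - x0| small enough to avoid phase wrapping.

```python
# Position recovery from one coarse phase sample: near the feature the phase
# is w * (x - x0), so x0 = x - phi / w, independent of the sampling density.
w_c = np.pi / 8
x_meas = 260                              # a sample 4 positions away from the feature
x0_hat = x_meas - phi_c[x_meas] / w_c     # recovers ~256.0 in the noiseless case
```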
References
[1] Y. Tadmor and D. J. Tolhurst, ?Discrimination of changes in the second-order statistics of natural and synthetic images,? Vis Res, vol. 34, no. 4, pp. 541?554, 1994.
[2] M. A. Webster, M. A. Georgeson, and S. M. Webster, ?Neural adjustments to image
blur,? Nature Neuroscience, vol. 5, no. 9, pp. 839?840, 2002.
[3] E. R. Kretzmer, ?The statistics of television signals,? Bell System Tech. J., vol. 31,
pp. 751?763, 1952.
[4] D. J. Field, ?Relations between the statistics of natural images and the response properties of cortical cells,? J. Opt. Soc. America, vol. 4, pp. 2379?2394, 1987.
[5] D. L. Ruderman, ?The statistics of natural images,? Network: Computation in Neural
Systems, vol. 5, pp. 517?548, 1996.
[6] N. Wiener, Nonlinear Problems in Random Theory. New York: John Wiley and Sons,
1958.
[7] D. J. Field and N. Brady, ?Visual sensitivity, blur and the sources of variability in the
amplitude spectra of natural scenes,? Vis Res, vol. 37, no. 23, pp. 3367?3383, 1997.
[8] E. P. Simoncelli, ?Statistical models for images: Compression, restoration and synthesis,? in Proc 31st Asilomar Conf on Signals, Systems and Computers, (Pacific Grove,
CA), pp. 673?678, Nov 1997.
[9] E. H. Adelson and J. R. Bergen, ?Spatiotemporal energy models for the perception of
motion,? J Optical Society, vol. 2, pp. 284?299, Feb 1985.
[10] J. R. Bergen and E. H. Adelson, ?Early vision and texture perception,? Nature,
vol. 333, pp. 363?364, 1988.
[11] M. C. Morrone and R. A. Owens, ?Feature detection from local energy,? Pattern
Recognition Letters, vol. 6, pp. 303?313, 1987.
[12] N. Graham, Visual pattern analyzers. New York: Oxford University Press, 1989.
[13] P. Perona and J. Malik, ?Detecting and localizing edges composed of steps, peaks and
roofs,? in Proc. 3rd Int?l Conf Comp Vision, (Osaka), pp. 52?57, 1990.
[14] A. V. Oppenheim and J. S. Lim, ?The importance of phase in signals,? Proc. of the
IEEE, vol. 69, pp. 529?541, 1981.
[15] M. G. A. Thomson, ?Visual coding and the phase structure of natural scenes,? Network: Comput. Neural Syst., no. 10, pp. 123?132, 1999.
[16] M. C. Morrone and D. C. Burr, ?Feature detection in human vision: A phasedependent energy model,? Proc. R. Soc. Lond. B, vol. 235, pp. 221?245, 1988.
[17] P. Kovesi, ?Phase congruency: A low-level image invariant,? Psych. Research, vol. 64,
pp. 136?148, 2000.
[18] S. Venkatesh and R. A. Owens, ?An energy feature detection scheme,? Int?l Conf on
Image Processing, pp. 553?557, 1989.
[19] D. J. Fleet and A. D. Jepson, ?Computation of component image velocity from local
phase information,? Int?l J Computer Vision, no. 5, pp. 77?104, 1990.
[20] D. J. Fleet, ?Phase-based disparity measurement,? CVGIP: Image Understanding,
no. 53, pp. 198?210, 1991.
[21] J. Portilla and E. P. Simoncelli, ?A parametric texture model based on joint statistics
of complex wavelet coefficients,? Int?l J Computer Vision, vol. 40, pp. 49?71, 2000.
[22] J. Daugman, ?Statistical richness of visual phase information: update on recognizing
persons by iris patterns,? Int?l J Computer Vision, no. 45, pp. 25?38, 2001.
[23] E. P. Simoncelli, W. T. Freeman, E. H. Adelson, and D. J. Heeger, ?Shiftable multiscale transforms,? IEEE Trans Information Theory, vol. 38, pp. 587?607, Mar 1992.
[24] D. A. Pollen and S. F. Ronner, ?Phase relationships between adjacent simple cells in
the cat,? Science, no. 212, pp. 1409?1411, 1981.
[25] D. J. Heeger, ?Half-squaring in responses of cat striate cells,? Visual Neuroscience,
no. 9, pp. 427?443, 1992.
[26] D. J. Heeger, ?Normalization of cell responses in cat striate cortex,? Visual Neuroscience, no. 9, pp. 181?197, 1992.
[27] O. Schwartz and E. P. Simoncelli, ?Natural signal statistics and sensory gain control,?
Nature Neuroscience, no. 4, pp. 819?825, 2001.
[28] D. L. Ruderman and W. Bialek, ?Seeing beyond the Nyquist limit,? Neural Comp.,
no. 4, pp. 682?690, 1992.
[29] G. Westheimer and S. P. McKee, "Spatial configurations for visual hyperacuity," Vision
Res., no. 17, pp. 941-947, 1977.
[30] W. S. Geisler, "Physical limits of acuity and hyperacuity," J. Opt. Soc. America, no. 1,
pp. 775-782, 1984.
Optimal Manifold Representation of Data:
An Information Theoretic Approach
Denis Chigirev and William Bialek
Department of Physics and the Lewis-Sigler Institute for Integrative Genomics
Princeton University, Princeton, New Jersey 08544
chigirev,[email protected]
Abstract
We introduce an information theoretic method for nonparametric, nonlinear dimensionality reduction, based on the infinite cluster limit of rate
distortion theory. By constraining the information available to manifold
coordinates, a natural probabilistic map emerges that assigns original
data to corresponding points on a lower dimensional manifold. With
only the information-distortion trade off as a parameter, our method determines the shape of the manifold, its dimensionality, the probabilistic
map and the prior that provide optimal description of the data.
1
A simple example
Some data sets may not be as complicated as they appear. Consider the set of points on a
plane in Figure 1. As a two dimensional set, it requires a two dimensional density ρ(x, y)
for its description. Since the data are sparse the density will be almost singular. We may
use a smoothing kernel, but then the data set will be described by a complicated combination of troughs and peaks with no obvious pattern and hence no ability to generalize. We
intuitively, however, see a strong one dimensional structure (a curve) underlying the data.
In this paper we attempt to capture this intuition formally, through the use of the infinite
cluster limit of rate distortion theory.
Any set of points can be embedded in a hypersurface of any intrinsic dimensionality if we
allow that hypersurface to be highly "folded." For example, in Figure 1, any curve that
goes through all the points gives a one dimensional representation. We would like to avoid
such solutions, since they do not help us discover structure in the data. Looking for a
simpler description one may choose to penalize the curvature term [1]. The problem with
this approach is that it is not easily generalized to multiple dimensions, and requires the
dimensionality of the solution as an input.
An alternative approach is to allow curves of all shapes and sizes, but to send the reduced
coordinates through an information bottleneck. With a fixed number of bits, position along
a highly convoluted curve becomes uncertain. This will penalize curves that follow the data
too closely (see Figure 1). There are several advantages to this approach. First, it removes
the artificiality introduced by Hastie [2] of adding to the cost function only orthogonal errors. If we believe that data points fall out of the manifold due to noise, there is no reason to
treat the projection onto the manifold as exact. Second, it does not require the dimension-
Figure 1: Rate distortion curve for a data set of 25 points (red). We used 1000 points to represent the curve, which were initialized by scattering them uniformly on the plane. Note that the produced curve is well defined, one dimensional and smooth.
ality of the solution manifold as an input. By adding extra dimensions, one quickly loses
the precision with which manifold points are specified (due to the fixed information bottleneck). Hence, the optimal dimension emerges naturally. This also means that the method
works well in many dimensions with no adjustments. Third, the method handles sparse
data well. This is important since in high dimensional spaces all data sets are sparse, i.e.
they look like points in Figure 1, and the density estimation becomes impossible. Luckily,
if the data are truly generated by a lower dimensional process, then density estimation in
the data space is not important (from the viewpoint of prediction or any other). What is
critical is the density of the data along the manifold (known in latent variable modeling as
a prior), and our algorithm finds it naturally.
2
Latent variable models and dimensionality reduction
Recently, the problem of reducing the dimensionality of a data set has received renewed
attention [3,4]. The underlying idea, due to Hotelling [5], is that most of the variation in
many high dimensional data sets can often be explained by a few latent variables. Alternatively, we say that rather than filling the whole space, the data lie on a lower dimensional
manifold. The dimensionality of this manifold is the dimensionality of the latent space and
the coordinate system on this manifold provides the latent variables.
Traditional tools of principal component analysis (PCA) and factor analysis (FA) are still
the most widely used methods in data analysis. They project the data onto a hyperplane, so
the reduced coordinates are easy to interpret. However, these methods are unable to deal
with nonlinear correlations in a data set. To accommodate nonlinearity in a data set, one
has to relax the assumption that the data is modeled by a hyperplane, and allow a general
low dimensional manifold of unknown shape and dimensionality. The same questions that
we asked in the previous section apply here. What do we mean by requiring that "the
manifold models the data well"? In the next section, we formalize this notion by defining
the manifold description of data as a doublet (the shape of the manifold and the projection
map). Note that we do not require the probability distribution over the manifold (known
for generative models [6,7] as a prior distribution over the latent variables and postulated a
priori). It is completely determined by the doublet.
Nonlinear correlations in data can also be accommodated implicitly, without constructing
an actual low dimensional manifold. By mapping the data from the original space to an
even higher dimensional feature space, we may hope that the correlations will become
linearized and PCA will apply. Kernel methods [8] allow us to do this without actually
constructing an explicit map to feature space. They introduce nonlinearity through an a
priori nonlinear kernel. Alternatively, autoassociative neural networks [9] force the data
through a bottleneck (with an internal layer of desired dimensionality) to produce a reduced
description. One of the disadvantages of these methods is that the results are not easy to
interpret.
Recent attempts to describe a data set with a low dimensional representation generally fall into two categories: spectral methods and density modeling methods. Spectral methods
(LLE [3], ISOMAP [4], Laplacian eigenmaps [10]) give reduced coordinates of an a priori dimensionality by introducing a quadratic cost function in reduced coordinates (hence
eigenvectors are solutions) that mimics the relationships between points in the original data
space (geodesic distance for ISOMAP, linear reconstruction for LLE). Density modeling
methods (GTM [6], GMM [7]) are generative models that try to reproduce the data with
fewer variables. They require a prior and a parametric generative model to be introduced a
priori and then find optimal parameters via maximum likelihood.
The approach that we will take is inspired by the work of Kramer [9] and others who tried
to formulate dimensionality reduction as a compression problem. They tried to solve the
problem by building an explicit neural network encoder-decoder system which restricted
the information implicitly by limiting the number of nodes in the bottleneck layer. Extending their intuition with the tools of information theory, we recast dimensionality reduction
as a compression problem where the bottleneck is the information available to manifold
coordinates. This allows us to define the optimal manifold description as that which produces the best reconstruction of the original data set, given that the coordinates can only be
transmitted through a channel of fixed capacity.
3
Dimensionality reduction as compression
Suppose that we have a data set X in a high dimensional state space R^D described by a
density function ρ(x). We would like to find a "simplified" description of this data set.
One may do so by visualizing a lower dimensional manifold M that "almost" describes
the data. If we have a manifold M and a stochastic map P_M : x → P_M(φ|x) to points
φ on the manifold, we will say that they provide a manifold description of the data set X.
Note that the stochastic map here is well justified: if a data point does not lie exactly on
the manifold then we should expect some uncertainty in the estimation of the value of its
latent variables. Also note that we do not need to specify the inverse (generative) map:
M → R^D; it can be obtained by Bayes' rule.
The manifold description (M, PM ) is a less than faithful representation of the data. To
formalize this notion we will introduce the distortion measure D(M, P_M, ρ):

$$D(\mathcal{M}, P_{\mathcal{M}}, \rho) = \int_{x \in \mathbb{R}^D} \int_{\phi \in \mathcal{M}} \rho(x)\, P_{\mathcal{M}}(\phi \mid x)\, \|x - \phi\|^2 \, d^D x \, d\phi. \qquad (1)$$
Here we have assumed the Euclidean distance function for simplicity.
The stochastic map, P_M(φ|x), together with the density, ρ(x), define a joint probability
function P(M, X) that allows us to calculate the mutual information between the data and
its manifold representation:

$$I(X, \mathcal{M}) = \int_{x \in X} \int_{\phi \in \mathcal{M}} P(x, \phi) \log \frac{P(x, \phi)}{\rho(x)\, P_{\mathcal{M}}(\phi)} \, d^D x \, d\phi. \qquad (2)$$
This quantity tells us how many bits (on average) are required to encode x into φ. If we
view the manifold representation of X as a compression scheme, then I(X, M) tells us the
necessary capacity of the channel needed to transmit the compressed data.
Ideally, we would like to obtain a manifold description {M, P_M(M|X)} of the data set
X that provides both a low distortion D(M, P_M, ρ) and a good compression (i.e. small
I(X, M)). The more bits we are willing to provide for the description of the data, the more
detailed a manifold that can be constructed. So there is a trade off between how faithful a
manifold representation can be and how much information is required for its description.
To formalize this notion we introduce the concept of an optimal manifold.
DEFINITION. Given a data set X and a channel capacity I, a manifold description
(M, PM (M|X)) that minimizes the distortion D(M, PM , X), and requires only information I for representing an element of X, will be called an optimal manifold M(I, X).
Note that another way to define an optimal manifold is to require that the information
I(M, X) is minimized while the average distortion is fixed at value D. The shape and the
dimensionality of optimal manifold depends on our information resolution (or the description length that we are willing to allow). This dependence captures our intuition that for
real world, multi-scale data, a proper manifold representation must reflect the compression
level we are trying to achieve.
To find the optimal manifold (M(I), P_{M(I)}) for a given data set X, we must solve a
constrained optimization problem. Let us introduce a Lagrange multiplier λ that represents
the trade off between information and distortion. Then the optimal manifold M(I) minimizes
the functional:

$$\mathcal{F}(\mathcal{M}, P_{\mathcal{M}}) = D + \lambda I. \qquad (3)$$
Let us parametrize the manifold M by t (presumably t ∈ R^d for some d ≤ D). The
function γ(t) : t → M maps the points from the parameter space onto the manifold and
therefore describes the manifold. Our equations become:

$$D = \int\!\!\int d^D x \, d^d t \, \rho(x)\, P(t|x)\, \|x - \gamma(t)\|^2, \qquad (4)$$

$$I = \int\!\!\int d^D x \, d^d t \, \rho(x)\, P(t|x) \log \frac{P(t|x)}{P(t)}, \qquad (5)$$

$$\mathcal{F}(\gamma(t), P(t|x)) = D + \lambda I. \qquad (6)$$
Note that both information and distortion measures are properties of the manifold description doublet {M, P_M(M|X)} and are invariant under reparametrization. We require the variations of the functional to vanish for optimal manifolds, ∂F/∂γ(t) = 0 and
∂F/∂P(t|x) = 0, to obtain the following set of self-consistent equations:

$$P(t) = \int d^D x \, \rho(x)\, P(t|x), \qquad (7)$$

$$\gamma(t) = \frac{1}{P(t)} \int d^D x \, x\, \rho(x)\, P(t|x), \qquad (8)$$

$$P(t|x) = \frac{P(t)}{\sigma(x)}\, e^{-\frac{1}{\lambda}\|x - \gamma(t)\|^2}, \qquad (9)$$

$$\sigma(x) = \int d^d t \, P(t)\, e^{-\frac{1}{\lambda}\|x - \gamma(t)\|^2}, \qquad (10)$$

where σ(x) is the normalization.
In practice we do not have the full density ρ(x), but only a discrete number of samples,
so we have to approximate ρ(x) = (1/N) Σ_i δ(x − x_i), where N is the number of samples,
i is the sample label, and x_i is the multidimensional vector describing the ith sample.
Similarly, instead of using a continuous variable t we use a discrete set t ∈ {t_1, t_2, ..., t_K}
of K points to model the manifold. Note that in (7 – 10) the variable t appears only as an
argument for other functions, so we can replace the integral over t by a sum over k = 1..K.
Then P(t|x) becomes P_k(x_i), γ(t) is now γ_k, and P(t) is P_k. The solution to the resulting
set of equations in discrete variables (11 – 14) can be found by an iterative Blahut-Arimoto
procedure [11] with an additional EM-like step. Here (n) denotes the iteration step, and μ
is a coordinate index in R^D. The iteration scheme becomes:
$$P_k^{(n)} = \frac{1}{N} \sum_{i=1}^{N} P_k^{(n)}(x_i), \qquad (11)$$

$$\gamma_{k,\mu}^{(n)} = \frac{1}{P_k^{(n)} N} \sum_{i=1}^{N} x_{i,\mu}\, P_k^{(n)}(x_i), \qquad \mu = 1, \ldots, D, \qquad (12)$$

$$\sigma^{(n)}(x_i) = \sum_{k=1}^{K} P_k^{(n)}\, e^{-\frac{1}{\lambda}\|x_i - \gamma_k^{(n)}\|^2}, \qquad (13)$$

$$P_k^{(n+1)}(x_i) = \frac{P_k^{(n)}}{\sigma^{(n)}(x_i)}\, e^{-\frac{1}{\lambda}\|x_i - \gamma_k^{(n)}\|^2}. \qquad (14)$$
One can initialize γ_k^0 and P_k^0(x_i) by choosing K points at random from the data set,
letting γ_k^0 = x_{i(k)} and P_k^0 = 1/K, then using equations (13) and (14) to initialize the
association map P_k^0(x_i). The iteration procedure (11 – 14) is terminated once

$$\max_k |\gamma_k^{(n)} - \gamma_k^{(n-1)}| < \varepsilon, \qquad (15)$$

where ε determines the precision with which the manifold points are located. The above
algorithm requires the information distortion cost λ = −∂D/∂I as a parameter. If we want
to find the manifold description (M, P(M|X)) for a particular value of information I,
we can plot the curve I(λ) and, because it's monotonic, we can easily find the solution
iteratively, arbitrarily close to a given value of I.
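The iteration (11)-(14) is compact enough to state in code. The sketch below is a minimal NumPy rendering of it, not the authors' implementation; the function name, the stopping rule via Eq. (15), and the tiny numerical floors on the normalizers are our own choices.

```python
import numpy as np

def optimal_manifold(X, K=100, lam=8.0, eps=0.1, max_iter=1000, rng=None):
    """X: (N, D) data. Returns manifold points gamma (K, D), prior P (K,),
    and soft assignments P_kx (K, N), following Eqs. (11)-(15)."""
    rng = np.random.default_rng(rng)
    N, D = X.shape
    gamma = X[rng.choice(N, size=K, replace=False)].copy()   # gamma_k^0 = x_i(k)
    P = np.full(K, 1.0 / K)                                  # P_k^0 = 1/K
    for _ in range(max_iter):
        # Eqs. (13)-(14): soft assignment of each sample to the manifold points
        d2 = ((X[None, :, :] - gamma[:, None, :]) ** 2).sum(-1)   # (K, N)
        w = P[:, None] * np.exp(-d2 / lam)
        P_kx = w / (w.sum(axis=0, keepdims=True) + 1e-300)        # P_k(x_i)
        # Eqs. (11)-(12): update the prior and the manifold points
        P = np.maximum(P_kx.mean(axis=1), 1e-12)                  # floor: our addition
        gamma_new = (P_kx @ X) / (N * P)[:, None]
        if np.abs(gamma_new - gamma).max() < eps:                 # Eq. (15)
            gamma = gamma_new
            break
        gamma = gamma_new
    return gamma, P, P_kx
```

On the semicircle data of Section 5.1, calling optimal_manifold(X, K=100, lam=8.0, eps=0.1) mirrors the reported setup.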
4
Evaluating the solution
The result of our algorithm is a collection of K manifold points, γ_k ∈ M ⊂ R^D, and
a stochastic projection map, P_k(x_i), which maps the points from the data space onto the
manifold. Presumably, the manifold M has a well defined intrinsic dimensionality d. If
we imagine a little ball of radius r centered at some point on the manifold of intrinsic
dimensionality d, and then we begin to grow the ball, the number of points on the manifold
that fall inside will scale as r^d. On the other hand, this will not be necessarily true for the
original data set, since it is more spread out and resembles locally the whole embedding
space R^D. The Grassberger-Procaccia algorithm [12] captures this intuition by calculating
the correlation dimension. First, calculate the correlation integral:
$$C(r) = \frac{2}{N(N-1)} \sum_{i=1}^{N} \sum_{j>i} H(r - |x_i - x_j|), \qquad (16)$$
where H(x) is a step function with H(x) = 1 for x > 0 and H(x) = 0 for x < 0. This
measures the probability that any two points fall within the ball of radius r. Then define
Figure 2: The semicircle. (a) N = 3150 points randomly scattered around a semicircle of
radius R = 20 by a normal process with σ = 1 and the final positions of 100 manifold
points. (b) Log log plot of C(r) vs r for both the manifold points (squares) and the original
data set (circles).
the correlation dimension at length scale r as the slope on the log log plot.
$$d_{\mathrm{corr}}(r) = \frac{d \log C(r)}{d \log r}. \qquad (17)$$
For points lying on a manifold the slope remains constant and the dimensionality is fixed,
while the correlation dimension of the original data set quickly approaches that of the
embedding space as we decrease the length scale. Note that the slope at large length scales
always tends to decrease due to finite span of the data and curvature effects and therefore
does not provide a reliable estimator of intrinsic dimensionality.
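A direct way to compute Eqs. (16)-(17) from a point set is sketched below. The helper name, the use of SciPy's pairwise distances, and the finite-difference slope estimate are our choices rather than anything specified in the paper.

```python
import numpy as np
from scipy.spatial.distance import pdist

def correlation_dimension(X, radii):
    """Correlation integral C(r) of Eq. (16) and its local log-log slope, Eq. (17)."""
    d = pdist(X)                                     # all N(N-1)/2 pairwise distances
    C = np.array([(d < r).mean() for r in radii])    # fraction of pairs within radius r
    mask = C > 0                                     # avoid log(0) at very small r
    slope = np.gradient(np.log(C[mask]), np.log(radii[mask]))
    return C, mask, slope
```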
5
5.1
Examples
Semi-Circle
We have randomly generated N = 3150 data points scattered by a normal distribution with
σ = 1 around a semi-circle of radius R = 20 (Figure 2a). Then we ran the algorithm with
K = 100 and λ = 8, and terminated the iterative algorithm once the precision ε = 0.1 had
been reached. The resulting manifold is depicted in red.
To test the quality of our solution, we calculated the correlation dimension as a function of
spatial scale for both the manifold points and the original data set (Figure 2b). As one can
see, the manifold solution is of fixed dimensionality (the slope remains constant), while the
original data set exhibits varying dimensionality. One should also note that the manifold
points have d_corr(r) = 1 well into the territory where the original data set becomes two
dimensional. This is what we should expect: at a given information level (in this case,
I = 2.8 bits), the information about the second (local) degree of freedom is lost, and the
resulting structure is one dimensional.
A note about the parameters. Letting K → ∞ does not alter the solution. The information
I and distortion D remain the same, and the additional points γ_k also fall on the semi-circle
and are simple interpolations between the original manifold points. This allows us to claim
that what we have found is a manifold, and not an agglomeration of clustering centers.
Second, varying λ changes the information resolution I(λ): for small λ (high information
rate) the local structure becomes important. At high information rate the solution undergoes
Figure 3: S-shaped sheet in 3D. (a) N = 2000 random points on a surface of an S-shaped
sheet in 3D. (b) Normal noise added. XY-plane projection of the data. (c) Optimal manifold
points in 3D, projected onto an XY plane for easy visualization.
a phase transition, and the resulting manifold becomes two dimensional to take into account
the local structure. Alternatively, if we take ? ? ?, the cost of information rate becomes
very high and the whole manifold collapses to a single point (becomes zero dimensional).
5.2
S-surface
Here we took N = 2000 points covering an S-shaped sheet in three dimensions (Figure
3a), and then scattered the position of each point by adding Gaussian noise. The resulting
manifold is difficult to visualize in three dimensions, so we provided its projection onto
an XY plane for an illustrative purpose (Figure 3b). After running our algorithm we have
recovered the original structure of the manifold (Figure 3c).
6
Discussion
The problem of finding low dimensional manifolds in high dimensional data requires regularization to avoid hgihly folded, Peano curve like solutions which are low dimensional
in the mathematical sense but fail to capture our geometric intuition. Rather than constraining geometrical features of the manifold (e.g., the curvature) we have constrained the
mutual information between positions on the manifold and positions in the original data
space, and this is invariant to all invertible coordinate transformations in either space. This
approach enforces ?smoothness? of the manifold only implicitly, but nonetheless seems
to work. Our information theoretic approach has considerable generality relative to methods based on specific smoothing criteria, but requires a separate algorithm, such as LLE, to
give the manifold points curvilinear coordinates. For data points not in the original data set,
equations (9-10) and (13-14) provide the mapping onto the manifold. Eqn. (7) gives the
probability distribution over the latent variable, known in the density modeling literature as
?the prior.?
The running time of the algorithm is linear in N . This compares favorably with other methods and makes it particularly attractive for very large data sets. The number of manifold
points K usually is chosen as large as possible, given the computational constraints, to have
a dense sampling of the manifold. However, a value of K << N is often sufficient, since
D(?, K) ? D(?) and I(?, K) ? I(?) approach their limits rather quickly (the convergence improves for large ? and deteriorates for small ?). In the example of a semi-circle,
the value of K = 30 was sufficient at the compression level of I = 2.8 bits. In general, the
threshold value for K scales exponentially with the latent dimensionality (rather than with
the dimensionality of the embedding space).
The choice of ? depends on the desired information resolution, since I depends on ?.
Ideally, one should plot the function I(?) and then choose the region of interest. I(?)
is a monotonically decreasing function, with the kinks corresponding to phase transitions
where the optimal manifold abruptly changes its dimensionality. In practice, we may want
to run the algorithm only for a few choices of ?, and we would like to start with values
that are most likely to correspond to a low dimensional latent variable representation. In
this case, as a rule
p of thumb, we choose ? smaller, but on the order of the largest linear
dimension (i.e.
?/2 ? Lmax ). The dependence of the optimal manifold M(I) on
information resolution reflects the multi-scale nature of the data and should not be taken as
a shortcoming.
References
[1] Bregler, C. & Omohundro, S. (1995) Nonlinear image interpolation using manifold learning.
Advances in Neural Information Processing Systems 7. MIT Press.
[2] Hastie, T. & Stuetzle, W. (1989) Principal curves. Journal of the American Statistical Association,
84(406), 502-516.
[3] Roweis, S. & Saul, L. (2000) Nonlinear dimensionality reduction by locally linear embedding.
Science, 290, 2323?2326.
[4] Tenenbaum, J., de Silva, V., & Langford, J. (2000) A global geometric framework for nonlinear
dimensionality reduction. Science, 290 , 2319?2323.
[5] Hotelling, H. (1933) Analysis of a complex of statistical variables into principal components.
Journal of Educational Psychology, 24:417-441,498-520.
[6] Bishop, C., Svensen, M. & Williams, C. (1998) GTM: The generative topographic mapping.
Neural Computation,10, 215?234.
[7] Brand, M. (2003) Charting a manifold. Advances in Neural Information Processing Systems 15.
MIT Press.
[8] Scholkopf, B., Smola, A. & Muller K-R. (1998) Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10, 1299-1319.
[9] Kramer, M. (1991) Nonlinear principal component analysis using autoassociative neural networks. AIChE Journal, 37, 233-243.
[10] Belkin M. & Niyogi P. (2003) Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6), 1373-1396.
[11] Blahut, R. (1972) Computation of channel capacity and rate distortion function. IEEE Trans.
Inform. Theory, IT-18, 460-473.
[12] Grassberger, P., & Procaccia, I. (1983) Characterization of strange attractors. Physical Review
Letters, 50, 346-349.
| 2399 |@word compression:7 seems:1 integrative:1 willing:2 linearized:1 tried:2 ality:1 accommodate:1 reduction:8 renewed:1 recovered:1 must:2 grassberger:2 shape:5 remove:1 plot:4 v:1 generative:5 fewer:1 plane:5 ith:1 provides:2 characterization:1 node:1 denis:1 simpler:1 mathematical:1 along:2 constructed:1 become:2 scholkopf:1 inside:1 introduce:5 multi:2 inspired:1 decreasing:1 actual:1 little:1 becomes:9 project:1 discover:1 underlying:2 begin:1 xx:1 provided:1 what:4 minimizes:2 loos:1 finding:1 transformation:1 multidimensional:1 xd:2 exactly:1 k2:3 appear:1 t1:1 local:3 treat:1 tends:1 limit:3 interpolation:2 resembles:1 collapse:1 faithful:2 enforces:1 practice:2 lost:1 procedure:2 stuetzle:1 semicircle:2 projection:5 onto:7 close:1 sheet:3 impossible:1 map:12 center:1 send:1 go:1 attention:1 pk0:3 educational:1 williams:1 formulate:1 resolution:4 simplicity:1 assigns:1 rule:2 estimator:1 embedding:4 handle:1 notion:3 coordinate:11 variation:2 limiting:1 transmit:1 imagine:1 suppose:1 exact:1 element:1 particularly:1 located:1 capture:4 calculate:2 region:1 trade:3 decrease:2 ran:1 intuition:5 asked:1 ideally:2 sigler:1 geodesic:1 completely:1 easily:2 joint:1 k0:1 jersey:1 gtm:2 describe:1 shortcoming:1 wbialek:1 tell:2 choosing:1 widely:1 solve:2 distortion:13 say:2 relax:1 compressed:1 encoder:1 ability:1 niyogi:1 topographic:1 final:1 advantage:1 eigenvalue:1 took:1 reconstruction:2 achieve:1 roweis:1 description:16 convoluted:1 curvilinear:1 kink:1 convergence:1 cluster:2 extending:1 produce:2 tk:1 help:1 svensen:1 received:1 strong:1 radius:4 closely:1 stochastic:4 luckily:1 centered:1 require:5 bregler:1 lying:1 around:2 normal:3 presumably:2 mapping:3 visualize:1 claim:1 purpose:1 estimation:3 label:1 largest:1 tool:2 reflects:1 hope:1 mit:2 always:1 gaussian:1 rather:4 avoid:2 varying:2 encode:1 likelihood:1 sense:1 reproduce:1 priori:4 smoothing:2 constrained:2 initialize:2 mutual:2 spatial:1 once:2 shaped:3 sampling:1 dcorr:2 represents:1 look:1 filling:1 alter:1 mimic:1 minimized:1 others:1 t2:1 few:2 belkin:1 randomly:2 phase:2 blahut:2 william:1 n1:1 attempt:2 freedom:1 attractor:1 interest:1 highly:2 truly:1 integral:2 necessary:1 xy:3 orthogonal:1 euclidean:1 accommodated:1 initialized:1 desired:2 circle:5 uncertain:1 modeling:4 disadvantage:1 cost:4 introducing:1 eigenmaps:2 too:1 kn:2 kxi:2 density:11 peak:1 probabilistic:2 off:3 physic:1 invertible:1 together:1 quickly:3 reflect:1 choose:3 american:1 account:1 de:1 trough:1 postulated:1 depends:3 try:1 view:1 red:2 reached:1 bayes:1 start:1 complicated:2 reparametrization:1 slope:4 square:1 who:1 correspond:1 generalize:1 territory:1 thumb:1 produced:1 inform:1 definition:1 nonetheless:1 obvious:1 naturally:2 emerges:2 dimensionality:27 improves:1 formalize:3 actually:1 appears:1 scattering:1 higher:1 follow:2 specify:1 generality:1 smola:1 correlation:8 langford:1 hand:1 eqn:1 nonlinear:9 undergoes:1 chigirev:2 quality:1 believe:1 building:1 effect:1 requiring:1 concept:1 isomap:2 multiplier:1 true:1 hence:3 regularization:1 iteratively:1 deal:1 attractive:1 visualizing:1 self:1 covering:1 illustrative:1 criterion:1 generalized:1 trying:1 theoretic:3 omohundro:1 silva:1 geometrical:1 image:1 recently:1 agglomeration:1 functional:2 physical:1 arimoto:1 exponentially:1 association:2 interpret:2 smoothness:1 rd:8 pm:15 similarly:1 nonlinearity:2 had:1 surface:2 curvature:3 aiche:1 recent:1 arbitrarily:1 muller:1 transmitted:1 additional:2 monotonically:1 semi:4 multiple:1 full:1 smooth:1 doublet:3 laplacian:2 
prediction:1 iteration:3 kernel:4 represent:1 penalize:2 justified:1 want:2 singular:1 grow:1 extra:1 constraining:2 easy:3 xj:1 psychology:1 hastie:2 idea:1 bottleneck:5 pca:2 abruptly:1 autoassociative:2 generally:1 detailed:1 eigenvectors:1 nonparametric:1 locally:2 tenenbaum:1 category:1 reduced:5 deteriorates:1 discrete:3 threshold:1 gmm:1 sum:1 run:1 inverse:1 letter:1 uncertainty:1 almost:2 strange:1 bit:5 layer:2 quadratic:1 constraint:1 argument:1 span:1 department:1 combination:1 ball:3 describes:2 remain:1 em:1 smaller:1 intuitively:1 explained:1 restricted:1 invariant:2 taken:1 ln:2 equation:5 visualization:1 remains:2 describing:1 fail:1 needed:1 letting:2 available:2 parametrize:1 apply:2 spectral:2 hotelling:2 alternative:1 original:15 denotes:1 clustering:1 running:2 calculating:1 question:1 quantity:1 added:1 fa:1 parametric:1 dependence:2 traditional:1 bialek:1 exhibit:1 distance:2 unable:1 separate:1 capacity:4 decoder:1 manifold:79 reason:1 charting:1 length:4 modeled:1 relationship:1 index:1 difficult:1 favorably:1 proper:1 unknown:1 finite:1 defining:1 looking:1 introduced:2 required:2 specified:1 trans:1 usually:1 pattern:1 recast:1 max:1 reliable:1 critical:1 natural:1 force:1 representing:1 scheme:2 genomics:1 prior:5 geometric:2 literature:1 review:1 relative:1 embedded:1 expect:2 degree:1 sufficient:2 consistent:1 dd:9 viewpoint:1 lmax:1 allow:5 lle:3 institute:1 fall:4 saul:1 sparse:3 curve:11 dimension:12 calculated:1 world:1 evaluating:1 transition:2 collection:1 projected:1 simplified:1 hypersurface:2 approximate:1 implicitly:3 global:1 assumed:1 xi:14 alternatively:3 continuous:1 latent:10 iterative:2 channel:4 nature:1 necessarily:1 complex:1 constructing:2 pk:8 spread:1 dense:1 terminated:2 whole:3 noise:3 scattered:3 precision:3 position:5 explicit:2 lie:2 vanish:1 third:1 specific:1 bishop:1 intrinsic:4 adding:3 kx:4 depicted:1 likely:1 lagrange:1 adjustment:1 monotonic:1 determines:2 lewis:1 kramer:2 replace:1 considerable:1 change:2 determined:1 infinite:2 folded:2 uniformly:1 reducing:1 hyperplane:2 principal:4 called:1 brand:1 formally:1 procaccia:2 internal:1 princeton:3 |
OPTIMAL NEURAL SPIKE CLASSIFICATION
Amir F. Atiya(*) and James M. Bower(**)
(*) Dept. of Electrical Engineering
(**) Division of Biology
California Institute of Technology
Ca 91125
Abstract
Being able to record the electrical activities of a number of neurons simultaneously is likely
to be important in the study of the functional organization of networks of real neurons. Using
one extracellular microelectrode to record from several neurons is one approach to studying
the response properties of sets of adjacent and therefore likely related neurons. However, to
do this, it is necessary to correctly classify the signals generated by these different neurons.
This paper considers this problem of classifying the signals in such an extracellular recording,
based upon their shapes, and specifically considers the classification of signals in the case when
spikes overlap temporally.
Introduction
How single neurons in a network of neurons interact when processing information is likely
to be a fundamental question central to understanding how real neural networks compute.
In the mammalian nervous system we know that spatially adjacent neurons are, in general,
more likely to interact, as well as receive common inputs. Thus neurobiologists are interested
in devising techniques that allow adjacent groups of neurons to be sampled simultaneously.
Unfortunately, the small scale of real neural networks makes inserting one recording electrode
per cell impractical. Therefore, one is forced to use single electrodes designed to sample neural signals evoked by several cells at once. While this approach provides the multi-neuron
recordings being sought, it also presents a rather serious waveform classification problem because the actual temporal sequence of action potentials in each individual neuron must be
deciphered. This paper describes a method for classifying the activities of several individual
neurons recorded simultaneously using a single electrode.
Description of the Problem
Over the last two decades considerable attention [1-8] has been devoted to the problem of
classification of action potentials in multi-neuron recordings. These action potentials (also
referred to as "spikes") are the extracellularly recorded signal produced by a single neuron
when it is passing information to other neurons (Fig. 1). Fortunately, spikes recorded from the
same cell are more or less similar in shape, while spikes coming from different neurons usually
have somewhat different shapes, depending on the neuron type, electrode characteristics, the
distance between the electrode and the neuron, and the intervening medium. Fig. 1 illustrates
some representative variations in spike shapes. It is our objective to detect and classify different
spikes based on their shapes. However, relying entirely on the shape of the spikes presents
difficulties. For example spikes from different neurons can overlap temporally producing novel
waveforms (see Fig. 2 for an example of an overlap). To deal with these overlaps, one has first
to detect the occurrence of an overlap, and then estimate the constituent spikes. Unfortunately,
only a few of the available spike separation algorithms consider these events, even though they
are potentially very important in understanding neural networks. Those few tend to rely
© American Institute of Physics 1988
on heuristic rules and subtractive methods to resolve overlap cases. No currently published
method we are aware of attempts to use knowledge of the likelihood of overlap events for
detecting them, which is at the basis of the method we will describe.
Fig. 1
An example of a multi-neuron recording
overlapping spikes
Fig. 2
An example of a temporal overlap of action potentials
General Approach
The first step in classifying neural waveforms is obviously to identify the typical spike
shapes occurring in a particular recording. To do this we have applied a learning algorithm
on the beginning portion of the recording, which in an unsupervised fashion (i.e. without the
intervention of a human operator) estimates the shapes. After the learning stage we have
the classification stage, which is applied on the remaining portion of the recording. A new
classification method is proposed, which gives minimum probability of error, even in case of the
occurrence of overlapping spikes. Both the learning and the classification algorithms require
a preprocessing step to detect the position of the spike candidate in the data record.
Detection: For the first task of detection most researchers use a simple level detecting
algorithm, that signals a spike when recorded voltage levels cross a certain voltage threshold.
However, variations in recording position due to natural brain movements during recording
(e.g. respiration) can cause changes in relative height of the positive to the negative peak.
Thus, a level detector (using either a positive or a negative threshold) can miss some spikes.
Alternatively, we have chosen to detect an event by sliding a window of fixed length until a
time when the peak to peak value within the window exceeds a certain threshold.
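A minimal sketch of this detector follows; the window length, threshold, and refractory skip are assumed parameters, not values taken from the paper.

```python
import numpy as np

def detect_spikes(v, W, thresh, refractory=None):
    """v: sampled voltage trace. Flags an event when the peak-to-peak value
    inside the sliding window v[t:t+W] first exceeds thresh."""
    refractory = refractory if refractory is not None else W
    events, t = [], 0
    while t + W <= len(v):
        win = v[t:t + W]
        if win.max() - win.min() > thresh:           # peak-to-peak test
            events.append(t + int(np.argmax(win)))   # record the peak instant
            t += refractory                          # skip past this event
        else:
            t += 1
    return events
```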
Learning: Learning is performed on the beginning portion of the sampled data using
the Isodata clustering algorithm [9]. The task is to estimate the number of neurons n whose
spikes are represented in the waveform and learn the different shapes of the spikes of the
various neurons. For that purpose we apply the clustering algorithm choosing only one feature
from the spike, the peak to peak value which we have found to be quite an effective feature.
Note that using the peak to peak value in the learning stage does not necessitate using it for
classification (one might need additional or different features, especially for tackling the case
of spike overlap).
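As an illustration of this learning stage, the sketch below clusters the one-dimensional peak-to-peak feature and averages each cluster into a template. It substitutes plain k-means for the Isodata procedure [9] (so the number of classes must be supplied rather than discovered by split/merge rules); everything in it is our own construction.

```python
import numpy as np

def learn_templates(spikes, n_classes, n_iter=50):
    """spikes: (S, M) array of detected waveforms. Clusters on the peak-to-peak
    value, then averages each cluster into a template waveform."""
    p2p = spikes.max(axis=1) - spikes.min(axis=1)          # 1-D feature
    centers = np.quantile(p2p, np.linspace(0.1, 0.9, n_classes))
    for _ in range(n_iter):                                # k-means on p2p
        labels = np.abs(p2p[:, None] - centers[None, :]).argmin(axis=1)
        centers = np.array([p2p[labels == k].mean() if np.any(labels == k)
                            else centers[k] for k in range(n_classes)])
    templates = np.array([spikes[labels == k].mean(axis=0)
                          for k in range(n_classes)])
    return templates, labels
```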
The Optimal Classification Rule: Once we have identified the number of different events
present, the classification stage is concerned with estimating the identities of the spikes in the
recording, based on the typical spike shapes obtained in the learning stage. In our classification
scheme we make use of the information given by the shape of the detected spike as well
as the firing rates of the different neurons. Although the shape plays in general the most
important role in the classification, the rates become a more significant factor when dealing
with overlapping events. This is because in general overlap is considerably less frequent than
single spikes. The shape information is given by a set of features extracted from the waveform.
Let x be the feature vector of the detected spike (e.g. the samples of the spike waveform). Let
N_1, ..., N_n represent the different neurons. The detection algorithm tells us only that at least
one spike occurred in the narrow interval (t − T_1, t + T_2) (call it I), where t is the instant of
the peak of the detected spike, and T_1 and T_2 are constants chosen subjectively according to the
smallest possible time separation between two consecutive spikes identifiable as two separate
(nonoverlapping) spikes. By definition, if more than one spike occurs in the interval I, then
we have an overlap. As a matter of convention, the instant of the occurrence of a spike is
taken to be that of the spike peak. For simplicity, we will consider the case of two possibly
overlapping spikes, though the method can be extended easily to more. The classification rule
which results in minimum probability of error is the one which chooses the neuron (or pair of
neurons in case of overlap) which has the maximum likelihood. We have therefore to compare
the P_i's and the P_lj's, defined as

$$P_i = P(N_i \text{ fired in } I \mid x, A), \qquad i = 1, \ldots, n,$$

$$P_{lj} = P(N_l \text{ and } N_j \text{ fired in } I \mid x, A), \qquad l, j = 1, \ldots, n, \; j < l,$$
where A represents the event that one or two spikes occurred in the interval I. In other words,
P_i is the probability that what has been detected is a single spike from neuron i, whereas P_lj
is the probability that we have two overlapping spikes from neurons l and j (note that spikes
from the same neuron never overlap). Henceforth we will use f to denote probability density.
For the purpose of abbreviation let B_i(t) mean "neuron N_i fired at t". The classification
problem can be reduced to comparing the following likelihood functions:

$$L_i = f(B_i(t)) \int_{t-T_1}^{t+T_2} f(x \mid B_i(t_1))\, dt_1, \qquad i = 1, \ldots, n, \qquad (1a)$$

$$L_{lj} = f(B_l(t))\, f(B_j(t)) \int_{t-T_1}^{t+T_2}\!\!\int_{t-T_1}^{t+T_2} f(x \mid B_l(t_1), B_j(t_2))\, dt_1\, dt_2, \qquad l, j = 1, \ldots, n, \; j < l \qquad (1b)$$
(for a derivation refer to the Appendix). Let f_i be the density of the inter-spike interval and τ_i be
the most recent firing instant of neuron N_i. If we are given the fact that neuron N_i has been
idle for at least a period of duration t − τ_i, we get

$$f(B_i(t)) = \frac{f_i(t - \tau_i)}{\int_{t-\tau_i}^{\infty} f_i(s)\, ds}. \qquad (2)$$

A disadvantage of using (2) is that the available f_i's and τ_i's are only estimates, which depend
on the previous classification results. Further, for reliable estimation of the densities f_i, one
needs a large number of spikes and therefore a long learning period since we are estimating a
whole function. Therefore, we have not used this form, but instead have used the following two
schemes. In the first one, we ignore the knowledge about the previous firing pattern except
for the estimated firing rates λ_1, ..., λ_n of the different neurons N_1, ..., N_n respectively. Then
the probability of a spike coming from neuron N_i in an interval of duration dt is simply λ_i dt.
Hence

$$f(B_i(t)) = \lambda_i. \qquad (3)$$

In the second scheme we do not use any previous knowledge except for the total firing rate (of
all neurons), say α. Then

$$f(B_i(t)) = \alpha / n. \qquad (4)$$
Although the second scheme does not use as much of the information about the firing
pattern as the first scheme does, it has the advantage of obtaining and using a more reliable
estimate of the firing rate, because in general the overall firing rate changes less with time than
the individual rates and because the estimate of α does not depend on previous classification
results. However, it is useful mostly when the firing rates of the different neurons do not vary
much, otherwise the first scheme is preferred.
In real recording situations, sometimes one encounters voltage signals which are much
different than any of the previously learned typical spike shapes or their pairwise overlaps.
This can happen for example due to a falsely detected noise event, a spike from a class not
encountered in the learning stage, or to the overlap of three or more spikes. To cope with
these cases we use the reject option. This means that we refuse to classify the detected spike
because of the unlikeliness of the assumed event A. The reject option is therefore employed
whenever P(A|x) is smaller than a certain threshold. We know that

$$P(A \mid x) = \frac{f(A, x)}{f(A, x) + f(A^c, x)},$$

where A^c is the complement of the event A. The density f(A^c, x) can be approximated as
uniform (over the possible values of x) because a large variety of cases are covered by the event
A^c. It follows that one can just compare f(A, x) to a threshold. Hence the decision strategy
becomes finally: Reject if the sum of the likelihood functions is less than a threshold. Otherwise
choose the neuron (or pair of neurons) corresponding to the largest likelihood functions. Note
that the sum of the likelihood functions equals J(A,x) (refer to Appendix).
Now, let us evaluate the integrals in (1). Overlapping spikes are assumed to add linearly.
Since we intend to handle the overlap case, we have to use a set of features x_m which obeys
the following: given the features of two of the waveforms, one can compute those of their
overlap. A good such candidate is the set of the samples of the spike (or possibly also just
part of the samples). The added noise, partly thermal noise from the electrode and partly
due to firings from distant neurons, can usually be approximated as white Gaussian. Let the
variance be σ². The integrals in the likelihood functions can be approximated as summations
(note in fact that we have samples available, not a continuous waveform). Let y^i represent the
typical feature vector (template) associated with neuron N_i, with the mth component being
y^i_m. Then

$$f(x \mid B_l(k_1), B_j(k_2)) = \frac{1}{(2\pi)^{M/2} \sigma^M} \exp\Big[-\frac{1}{2\sigma^2} \sum_{m=1}^{M} \big(x_m - y^l_{m-k_1} - y^j_{m-k_2}\big)^2\Big],$$
where x_m is the mth component of x, and M is the dimension of x. This leads to the following
likelihood functions:

$$L_i = f(B_i(k)) \sum_{k_1=-M_1}^{M_2} \exp\Big[-\frac{1}{2\sigma^2} \sum_{m=1}^{M} \big(x_m - y^i_{m-k_1}\big)^2\Big],$$

$$L_{lj} = f(B_l(k))\, f(B_j(k)) \sum_{k_1=-M_1}^{M_2} \sum_{k_2=-M_1}^{M_2} \exp\Big[-\frac{1}{2\sigma^2} \sum_{m=1}^{M} \big(x_m - y^l_{m-k_1} - y^j_{m-k_2}\big)^2\Big],$$
where k is the spike instant, and the interval from −M_1 to M_2 corresponds to the interval I
defined at the beginning of the Section.
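The decision rule of this section can be assembled directly from the summation forms of L_i and L_lj together with the reject option. The sketch below is our own rendering, not the authors' code: rates[i] stands in for f(B_i(k)) (λ_i in the first scheme), the parameter names are assumptions, and out-of-range template samples are treated as zero.

```python
import numpy as np

def classify_segment(x, templates, rates, sigma, M1, M2, reject_thresh):
    """x: (M,) samples around the detected peak; templates: (n, M).
    Returns ('reject', None), ('single', i), or ('overlap', (l, j))."""
    n, M = templates.shape

    def shifted(y, k):                      # template shifted by k samples
        out = np.zeros(M)
        src = np.arange(M) - k
        ok = (src >= 0) & (src < M)         # out-of-range samples left at zero
        out[ok] = y[src[ok]]
        return out

    shifts = range(-M1, M2 + 1)
    L_single = {i: rates[i] * sum(
        np.exp(-((x - shifted(templates[i], k)) ** 2).sum() / (2 * sigma ** 2))
        for k in shifts) for i in range(n)}
    L_pair = {}
    for l in range(n):
        for j in range(l):                  # j < l, spikes may shift independently
            s = sum(np.exp(-((x - shifted(templates[l], k1)
                                - shifted(templates[j], k2)) ** 2).sum()
                           / (2 * sigma ** 2))
                    for k1 in shifts for k2 in shifts)
            L_pair[(l, j)] = rates[l] * rates[j] * s
    if sum(L_single.values()) + sum(L_pair.values()) < reject_thresh:
        return ('reject', None)             # sum of likelihoods too small
    best_s = max(L_single, key=L_single.get)
    best_p = max(L_pair, key=L_pair.get) if L_pair else None
    if best_p is not None and L_pair[best_p] > L_single[best_s]:
        return ('overlap', best_p)
    return ('single', best_s)
```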
Implementation
The techniques we have just described were tested in the following way. For the first
experiment we identified two spike classes in a recording from the rat cerebellum. A signal
is created, composed of a number of spikes from the two classes at random instants, plus
noise. To make the situation as realistic as possible, the added noise is taken from idle periods
(i.e. non-spiking) of a real recording. The reason for using such an artificially generated
signal is to be able to know the class identities of the spikes, in order to test our approach
quantitatively. We implement the detection and classification techniques on the obtained
signal, with various values of noise amplitude. In our case the ratio of the peak to peak values
of the templates turns out to be 1.375. Also, the spike rate of one of the classes is twice that of
the other class. Fig. 3a shows the results of applying the first scheme (i.e. using Eq. 3). The
overall percentage correct classification for all spikes (solid curve) and the percentage correct
classification for overlapping spikes (dashed curve) are plotted versus the standard deviation
of the noise σ, normalized with respect to the peak h of the large template. Notice that the
overall classification accuracy is near 100% for σ/h less than 0.15, which is actually the range
of noise amplitudes we mostly encountered in our work with real recordings. Observe also
the good results for classifying overlapping events. We have applied also the second scheme
(i.e. using Eq. 4) and obtained similar results. We wish to mention that the thresholds for
detection and for the reject option are set up so as to obtain no more than 3% falsely detected
spikes.
A similar experiment is performed with three waveforms (three classes), where two of the
waveforms are the same as those used in the first experiment . The third is the average of
the first two. All the three neurons have the same spike rate (i.e. λ_1 = λ_2 = λ_3). Hence
both classification schemes are equivalent in this case. Fig. 3b shows the overall as well as
the sub-category of overlap classification results. One observes that the results are worse than
those for the two-class case. This is because the spacings between the templates are in general
smaller. Notice also that the accuracy in resolving overlapping events is now tangibly less
than the overall accuracy. However, one can say that the results are acceptable in the range
of (]"
less than 0.1. The following experiment is also performed using the same data. We
would like to investigate the importance of the information given by the (overall) firing rate on
the problem of classifying overlapping events. In our method the summation in the likelihood
functions for single spikes is multiplied by α/n, while that for overlapping spikes is multiplied
by (α/n)². Usually α/n is considerably less than one. Hence we have a factor which gives less
weight for overlapping events. Now, consider the case of ignoring completely the information
given by the firing rate and relying solely on shape information. We assume that overlapping
spikes from any two given classes represent a "new" class of waveforms and that each of these
overlap classes has the same rate as that of a single-spike class. In that case we can obtain
expressions for the likelihood functions as consisting just the summations, i.e. free of the rate
Fig. 3
a) Overall (solid curve) and overlap (dashed curve)
classification accuracy for a two class case
b) Overall (solid curve) and overlap (dashed curve)
classification accuracy for a three class case
c) Percent of incorrect classification of single spikes as overlap
solid curve: scheme utilizing the spike rate
dashed curve: scheme not utilizing the spike rate
factor α/n (refer to the Appendix). An experiment is performed using that scheme (on the same
three class data). One observes that the method classifies a number of single spikes wrongly
as overlaps, much more than our original scheme does (see Fig. 3c), especially for the large
noise case. On the other hand, the number of overlaps which are classified wrongly as single
spikes is near zero for both schemes.
Finally, in the last experiment the techniques are implemented on real recordings from the
rat cerebellum. The recorded signal is band-pass-filtered in the frequency range 300 Hz - 10
KHz, then sampled with a rate of 20KHz. For classification, we take 20 samples per spike as
features. Fig. 4 shows the results of the proposed method, using the first scheme (Eq. 3). The
number of neurons whose spikes are represented in the waveform is estimated to be four. The
detection threshold is set up so that spikes which are too small are disregarded, because they
come from several neurons far away from the electrode and are hard to distinguish. Notice
the overlap of classes 1 and 2, which was detected. We used the second scheme also on the
same portion and it gave similar results as those of the first scheme (only one of the spikes is
classified differently). Overall, the discrepancies between classifications done by the proposed
method and an experienced human observer were found to be small.
Fig. 4
Classification results for a recording from the rat cerebellum. The detected spikes are labeled, in order: 3, 2, 3, 3, 4, 1, 2, 3, 3, 1+2 (overlap of classes 1 and 2), 2, 3, 1, 3, 1, 3, 2, 4.
Conclusion
Many researchers have considered the problem of spike classification in multi-neuron
recordings, but only a few have tackled the case of spike overlap, which can occur frequently,
particularly if the group of neurons under study is stimulated. In this work we propose a
method for spike classification, which can also aid in detecting and classifying overlapping
spikes. By taking into account the statistical properties of the discharges of the neurons sampled, this method minimizes the probability of classification error. The application of the
method to artificial as well as real recordings confirms its effectiveness.
Appendix
Consider first P_ij. We can write

$$P_{ij} = \int_{t-T_1}^{t+T_2} \int_{t-T_1}^{t+T_2} f\big(B_i(t_1), B_j(t_2) \mid x, A\big)\, dt_1\, dt_2 .$$

We can also obtain

$$P_{ij} = \frac{1}{f(x, A)} \int_{t-T_1}^{t+T_2} \int_{t-T_1}^{t+T_2} f\big(x, A \mid B_i(t_1), B_j(t_2)\big)\, f\big(B_i(t_1), B_j(t_2)\big)\, dt_1\, dt_2 .$$

Now, consider the two events B_i(t_1) and B_j(t_2). In the absence of any information about their
dependence, we assume that they are independent. We get

$$f\big(B_i(t_1), B_j(t_2)\big) = f\big(B_i(t_1)\big)\, f\big(B_j(t_2)\big) .$$

Within the interval I, both f(B_i(t_1)) and f(B_j(t_2)) hardly vary, because the duration of
I is very small compared to a typical inter-spike interval. Therefore we get the following
approximation:

$$f\big(B_i(t_1)\big) \approx f\big(B_i(t)\big), \qquad f\big(B_j(t_2)\big) \approx f\big(B_j(t)\big) .$$

The expression for P_ij becomes

$$P_{ij} \approx \frac{f\big(B_i(t)\big)\, f\big(B_j(t)\big)}{f(x, A)} \int_{t-T_1}^{t+T_2} \int_{t-T_1}^{t+T_2} f\big(x \mid B_i(t_1), B_j(t_2)\big)\, dt_1\, dt_2 .$$

Notice that the term A was omitted from the argument of the density inside the integral,
because the occurrence of two spikes at t_1 and t_2 in I implies the occurrence of A. A similar
derivation for P_i results in

$$P_i \approx \frac{f\big(B_i(t)\big)}{f(x, A)} \int_{t-T_1}^{t+T_2} f\big(x \mid B_i(t_1)\big)\, dt_1 .$$

The term f(x, A) is common to all the P_ij's and the P_i's. Hence one can simply compare the
following likelihood functions:

$$L_i = f\big(B_i(t)\big) \int_{t-T_1}^{t+T_2} f\big(x \mid B_i(t_1)\big)\, dt_1, \qquad
L_{ij} = f\big(B_i(t)\big)\, f\big(B_j(t)\big) \int_{t-T_1}^{t+T_2} \int_{t-T_1}^{t+T_2} f\big(x \mid B_i(t_1), B_j(t_2)\big)\, dt_1\, dt_2 .$$
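To make the comparison concrete, here is a rough numerical sketch of the two likelihood functions, discretizing the integrals over the interval I. The Gaussian noise model and all names (templates, noise_std, offsets, rate factors) are our illustrative assumptions, not part of the original method.

```python
import numpy as np

def f_x_given_spikes(x, times, templates, noise_std):
    # Illustrative assumption: waveform = sum of shifted templates plus white
    # Gaussian noise, so f(x | spikes) is an unnormalized Gaussian density.
    mean = np.zeros_like(x)
    for t, tmpl in zip(times, templates):
        mean += np.roll(tmpl, t)          # crude placement of the template at lag t
    r = x - mean
    return np.exp(-0.5 * np.dot(r, r) / noise_std**2)

def likelihood_single(x, tmpl_i, rate_i, offsets, noise_std):
    # L_i = f(B_i(t)) * sum_{t1} f(x | B_i(t1)); rate_i stands in for f(B_i(t))
    return rate_i * sum(f_x_given_spikes(x, [t1], [tmpl_i], noise_std)
                        for t1 in offsets)

def likelihood_overlap(x, tmpl_i, tmpl_j, rate_i, rate_j, offsets, noise_std):
    # L_ij = f(B_i(t)) f(B_j(t)) * sum_{t1,t2} f(x | B_i(t1), B_j(t2))
    return rate_i * rate_j * sum(
        f_x_given_spikes(x, [t1, t2], [tmpl_i, tmpl_j], noise_std)
        for t1 in offsets for t2 in offsets)
```

The class or class pair with the largest likelihood would be chosen; dropping the rate factors rate_i and rate_j recovers the shape-only variant discussed above.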
Acknowledgement
Our thanks to Dr. Yaser Abu-Mostafa for his assistance with this work. This project was
supported by the Caltech Program of Advanced Technology (sponsored by Aerojet, GM, GTE,
and TRW), and the Joseph Drown Foundation.
1,540 | 240 |
A self-organizing multiple-view representation
of 3D objects
Daphna Weinshall
Center for Biological
Information Processing
MIT E25-201
Cambridge, MA 02139
Shimon Edelman
Center for Biological
Information Processing
MIT E25-201
Cambridge, MA 02139
Heinrich H. Bülthoff
Dept. of Cognitive and
Linguistic Sciences
Brown University
Providence, RI 02912
ABSTRACT
We demonstrate the ability of a two-layer network of thresholded
summation units to support representation of 3D objects in which
several distinct 2D views are stored for each object. Using unsupervised Hebbian relaxation, the network learned to recognize ten
objects from different viewpoints. The training process led to the
emergence of compact representations of the specific input views.
When tested on novel views of the same objects, the network exhibited a substantial generalization capability. In simulated psychophysical experiments, the network's behavior was qualitatively
similar to that of human subjects.
1 Background
Model-based object recognition involves, by definition, a comparison between the
input image and models of different objects that are internal to the recognition
system. The form in which these models are best stored depends on the kind of
information available in the input, and on the trade-off between the amount of
memory allocated for the storage and the degree of sophistication required of the
recognition process.
In computer vision, a distinction can be made between representation schemes that
use 3D object-centered coordinate systems and schemes that store viewpoint-specific
information such as 2D views of objects. In principle, storing enough 2D views would
allow the system to use simple recognition techniques such as template matching.
If only a few views of each object are remembered, the system must have the capability to normalize the appearance of an input object, by carrying out appropriate
geometrical transformations, before it can be directly compared to the stored representations.
What representation strategy is employed by the human visual system? The notion
that objects are represented in viewpoint-dependent fashion is supported by the
finding that commonplace objects are more readily recognized from certain so-called
canonical vantage points than from other, random viewpoints (Palmer et al. 1981).
Namely, canonical views are identified more quickly (and more accurately) than
others, with response times decreasing monotonically with increasing subjective
goodness.¹
The monotonic increase in the recognition latency with misorientation of the object
relative to a canonical view prompts the interpretation of the recognition process in
terms of a mechanism related to mental rotation. In the classical mental rotation
task (see Shepard & Cooper 1982), the subject is required to decide whether two
simultaneously presented images are two views of the same 3D object. The average
latency of correct response in this task is linearly dependent on the difference in
the 3D attitude of the object in the two images. This dependence is commonly
accounted for by postulating a process that attempts to rotate the 3D shapes perceived in the two images into congruence before making the identity decision. The
rotation process is sometimes claimed to be analog, in the sense that the representation of the object appears to pass through intermediate orientation stages as the
rotation progresses (Shepard & Cooper 1982).
Psychological findings seem to support the involvement of some kind of mental
rotation in recognition by demonstrating the dependence of recognition latency for
an unfamiliar view of an object on the distance to its closest familiar view. There
is, however, an important qualification. Practice with specific objects appears to
cause this strategy to be abandoned in favor of a more memory-intensive, less time-consuming direct comparison strategy. Under direct comparison, many views of the
objects are stored and recognition proceeds in essentially constant time, provided
that the presented views are sufficiently close to one of the stored views (Tarr &
Pinker 1989, Edelman et al. 1989).
From the preceding outline, it appears that a faithful model of object representation in the human visual system should provide both for the ability to "rotate"
3D objects and for the fast direct-comparison strategy that supersedes mental rotation for highly familiar objects. Surprisingly, it turns out that mental rotation
in recognition can be replicated by a self-organizing memory-intensive model based
on direct comparison. The rest of the present paper describes such a model, called
CLF (conjunctions of localized features; see Edelman & Weinshall 1989).
¹ Canonical views of objects can be reliably identified in subjective judgement as well as in
recognition tasks. For example, when asked to form a mental image of an object, people usually
imagine it as seen from a canonical perspective.
Figure 1: The network consists of two layers, F (input, or feature, layer) and
R (representation layer). Only a small part of the projections from F to R are
shown. The network encodes input patterns by making units in the R-layer respond
selectively to conjunctions of features localized in the F-layer. The curve connecting
the representations of the different views of the same object in the R-layer symbolizes
the association that builds up between these views as a result of practice.
2 The model
The structure of the model appears in Figure 1 (see Edelman & Weinshall 1989 for
details). The first (input, or feature) layer of the network is a feature map. In our
experiments, vertices of wire-frame objects served as the input features. Every unit
in the (feature) F-layer is connected to all units in the second (representation) R-layer. The initial strength of a "vertical" (V) connection between an F-unit and an
R-unit decreases monotonically with the "horizontal" distance between the units,
according to an inverse square law (which may be considered a first approximation
to a Gaussian distribution). In our simulations the size of the F-layer was 64 × 64
units and the size of the R-layer 16 × 16 units. Let (x, y) be the coordinates of an
F-unit and (i, j) the coordinates of an R-unit. The initial weight between these
two units is

$$w_{xy,ij}\big|_{t=0} = \left(\sigma\left[1 + (x - 4i)^2 + (y - 4j)^2\right]\right)^{-1},$$

where σ = 50 and (4i, 4j) is the point in the F-layer that is directly "above" the R-unit (i, j).
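In code, tabulating these initial V-connection strengths is straightforward; the sketch below follows the formula above, with the layer sizes from the text and all function and variable names being our own illustrative choices.

```python
import numpy as np

def initial_v_weights(f_size=64, r_size=16, sigma=50.0):
    """Initial F-to-R strengths: inverse-square falloff of the horizontal
    distance between F-unit (x, y) and the point (4i, 4j) above R-unit (i, j)."""
    w = np.empty((f_size, f_size, r_size, r_size))
    for x in range(f_size):
        for y in range(f_size):
            for i in range(r_size):
                for j in range(r_size):
                    w[x, y, i, j] = 1.0 / (sigma * (1 + (x - 4 * i) ** 2
                                                      + (y - 4 * j) ** 2))
    return w

weights = initial_v_weights()   # shape (64, 64, 16, 16)
```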
The R-units in the representation layer are connected among themselves by lateral
(L) connections, whose initial strength is zero. Whereas the V-connections form the
representations of individual views of an object, the L-connections form associations
among different views of the same object.
2.1 Operation
During training, the input to the model is a sequence of appearances of an object,
encoded by the 2D locations of concrete sensory features (vertices) rather than a list
of abstract features. At the first presentation of a stimulus several representation
units are active, all with different strengths (due to the initial distribution of vertical
connection strengths).
2.1.1 Winner Take All
We employ a simple winner-take-all (WTA) mechanism to identify for each view
of the input object a few most active R-units, which subsequently are recruited to
represent that view. The WTA mechanism works as follows. The net activities
of the R-units are uniformly thresholded. Initially, the threshold is high enough to
ensure that all activity in the R-layer is suppressed. The threshold is then gradually
decreased, by a fixed (multiplicative) amount, until some activity appears in the
R-layer. If the decrease rate of the threshold is slow enough, only a few units will
remain active at the end of the WTA process. In our implementation, the decrease
rate was 0.95. In most cases, only one winner emerged.
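The stepwise WTA procedure can be sketched as follows; the 0.95 decay rate is from the text, while the starting threshold and the assumption of positive activities are ours.

```python
import numpy as np

def winner_take_all(activities, decay=0.95):
    """Lower a uniform threshold until some R-units exceed it; return winners.
    Assumes positive net activities (multiplicative decay needs this)."""
    threshold = activities.max() + 1.0    # assumed start: suppress all activity
    while not np.any(activities > threshold):
        threshold *= decay
    return np.flatnonzero(activities > threshold)
```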
Note that although the WTA can be obtained by a simple computation, we prefer
the stepwise algorithm above because it has a natural interpretation in biological
terms. Such an interpretation requires postulating two mechanisms that operate in
parallel. The first mechanism, which looks at the activity of the R-layer, may be
thought of as a high fan-in OR gate. The second mechanism, which performs uniform
adjustable thresholding on all the R-units, is similar to a global bias. Together, they
resemble feedback-regulated global arousal networks that are thought to be present,
e.g., in the medulla and in the limbic system of the brain (Kandel & Schwartz 1985).²
2.1.2 Adjustment of weights and thresholds
In the next stage, two changes of weights and thresholds occur that make the
currently active R-units (the winners of the WTA stage) selectively responsive to
the present view of the input object. First, there is an enhancement of the V-connections from the active (input) F-units to the active R-units (the winners).
At the same time, the thresholds of the active R-units are raised, so that at the
presentation of a different input these units will be less likely to respond and to be
recruited anew. We employ Hebbian relaxation to enhance the V-connections from
the input layer to the active R-unit (or units). The connection strength w_ab from
F-unit a to R-unit b = (i, j) changes by

$$\Delta w_{ab} = \alpha\, A_{ij}\, (w_{\max} - w_{ab}), \qquad (1)$$

where A_ij is the activation of the R-unit (i, j) after WTA, w_max is an upper bound
on a connection strength and α is a parameter controlling the rate of convergence.
The threshold Θ_b of a winner R-unit is increased by

$$\Delta \Theta_b = \delta\, A_{ij}, \qquad (2)$$

where δ < 1. This rule keeps the thresholded activity level of the unit growing
while the unit becomes more input specific. As a result, the unit encodes the
spatial structure of a specific view, responding selectively to that view after only a
few (two or three) presentations.

² The relationship of this approach to other WTA algorithms is discussed in Edelman & Weinshall 1989.
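A compact sketch of the two update rules follows; since the exact bodies of equations 1 and 2 survive only partially in the source, treat this as one consistent reading rather than the authors' exact rule.

```python
def recruit_winner(w, theta, b, active_f_units, activation,
                   alpha=0.1, w_max=1.0, delta=0.5):
    """Make R-unit b selective for the current view.
    w: dict (f_unit, r_unit) -> V-connection strength
    theta: dict r_unit -> threshold; activation: A_ij after WTA."""
    for a in active_f_units:
        w[a, b] += alpha * activation * (w_max - w[a, b])   # eq. (1), as reconstructed
    theta[b] += delta * activation                          # eq. (2), as reconstructed
```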
2.1.3 Between-views association
The principle by which specific views of the same object are grouped is that of
temporal association. New views of the object appear in a natural order, corresponding to their succession during an arbitrary rotation of the object. The lateral
(L) connections in the representation layer are modified by a time-delay Hebbian relaxation. The L-connection w_bc between R-units b = (i, j) and c = (l, m) that represent
successive views is enhanced in proportion to the closeness of their peak activations
in time, up to a certain time difference K:

$$\Delta w_{bc} = AM(b, c) \sum_{|k| \le K} \gamma_k\, A_b(t)\, A_c(t + k). \qquad (3)$$

The strength of the association between two views is made proportional to a coefficient, AM(b, c), that measures the strength of the apparent motion effect that
would ensue if the two views were presented in succession to a human subject (see
Edelman & Weinshall 1989).
2.1.4 Multiple-view representation
The appearance of a new object is explicitly signalled to the network, so that two
different objects do not become associated by this mechanism. The parameter γ_k
decreases with |k| so that the association is stronger for units whose activation is
closer in time. In this manner, a footprint of temporally associated view-specific representations is formed in the second layer for each object. Together, the view-specific
representations form a distributed multiple-view representation of the object.
3 Testing the model
We have subjected the CLF network to simulated experiments, modeled after the
experiments of (Edelman et al. 1989). Some of the results of the real and simulated
experiments appear in Figures 2 and 3. In the experiments, each of ten novel 3D
wire-frame objects served in turn as target. The task was to distinguish between
the target and the other nine, non-target, objects. The network was first trained
on a set of projections of the target's vertices from 16 evenly spaced viewpoints.
After learning the target using Hebbian relaxation as described above, the network
[Figure 3 plots: RT (left) and CORR (right) vs. Δ dist. from best view (deg), for two sessions.]
Figure 3: Another comparison of human performance (left panel) with that of the CLF model (right panel). Define the best view for each object as the view with the shortest RT (highest CORR). If recognition involves rotation to the best (canonical) view, RT or CORR should depend monotonically on D = D(target, view), the distance between the best view and the actually shown view. (The decrease in RT or CORR at D = 180° is due to the fact that for the wire-frame objects used in the experiments the view diametrically opposite the best one is also easily recognized.) For both human subjects and the model, the dependence is clear for the first session of the experiment (upper curves), but disappears with practice (second session, lower curves).
We note that blurring the input prior to its application to the F-layer can significantly extend the generalization ability of the CLF model. Performing autoassociation on a dot pattern blurred with a Gaussian is computationally equivalent to
correlating the input with a set of templates, realized as Gaussian receptive fields.
This, in turn, appears to be related to interpolation with Radial Basis Functions
(Moody & Darken 1989, Poggio & Girosi 1989, Poggio & Edelman 1989).
4 Summary
We have described a two-layer network of thresholded summation units which is capable of developing multiple-view representations of 3D objects in an unsupervised
fashion, using fast Hebbian learning. Using this network to model the performance
of human subjects on similar stimuli, we replicated psychophysical experiments that
investigated the phenomena of canonical views and mental rotation. The model's
performance closely parallels that of the human subjects, even though the network
has no a priori mechanism for "rotating" object representations. In the model, a
semblance of rotation is created by progressive activation of object footprints (chains
of representation units created through association during training). Practice causes
the footprints to lose their linear structure through the creation of secondary association links between random representation units, leading to the disappearance
of orientation effects. Our results may indicate that a different interpretation of
findings that are usually taken to signify mental rotation is possible.
Figure 2: Performance of five human subjects (left panel) and of the CLF model
(right panel). The variation of the performance measure (for human subjects, response time RT; for the model, correlation CORR between the input and a stored
representation) over different views of an object serves as an estimate of the strength
of the canonical views phenomenon. In both human subjects and the model, practice appears to reduce the strength of this phenomenon.
was tested on a sequence of inputs, half of which consisted of familiar views of the
target, and half of views of other, not necessarily familiar, objects.
The presentation of an input to the F-layer activated units in the representation
layer. The activation then spread to other R-units via the L-connections. After a
fixed number of lateral activation cycles, we correlated the resulting pattern of activity with footprints of objects learned so far. The object whose footprint yielded
the highest correlation was recognized, by definition. In the beginning of the testing stage, this correlation, which served as an analog of response time,³ exhibited
strong dependence on object orientation, replicating the effect of mental rotation
in recognition. During testing, successive activation of R-units through association
strengthened the L-connection between them, leading to an obliteration of the linear
structure of R-unit sequences responsible for mental rotation effects.
3.1 Generalization to novel views
The usefulness of a recognition scheme based on multiple-view representation depends on its ability to classify correctly novel views of familiar objects. To assess
the generalization ability of the CLF network, we have tested it on views obtained
by rotating the objects away from learned views by as much as 23° (see Figure 4).
The classification rate was better than chance for the entire range of rotation. For
rotations of up to 4° it was close to perfect, decreasing to 30% at 23° (chance level
was 10% because we used ten objects). One may compare this result with
the finding (Rock & DiVita 1987) that people have difficulties in recognizing or
imagining wire-frame objects in a novel orientation that differs by more than 30°
from a familiar one.
³ The justification for this use of correlation appears in Edelman & Weinshall 1989.
[Figure 4 plot: classification rate vs. distance from learned position (deg).]
Figure 4: Performance of the network on novel orientations of familiar objects
(mean of 10 objects, bars denote the variance).
The footprints formed in the representation layer in our model provide a hint as to what the
substrate upon which the mental rotation phenomena are based may look like.
References
[1] S. Edelman, H. Bülthoff, and D. Weinshall. Stimulus familiarity determines recognition strategy for novel 3D objects. MIT A.I. Memo No. 1138, 1989.
[2] S. Edelman and D. Weinshall. A self-organizing multiple-view representation of 3D objects. MIT A.I. Memo No. 1146, 1989.
[3] E. R. Kandel and J. H. Schwartz. Principles of Neural Science. Elsevier, 1985.
[4] J. Moody and C. Darken. Fast learning in networks of locally tuned processing units. Neural Computation, 1:281-289, 1989.
[5] S. Palmer, E. Rosch, and P. Chase. Canonical perspective and the perception of objects. In J. Long and A. Baddeley, eds., Attention and Performance IX, 135-151. Erlbaum, 1981.
[6] T. Poggio and S. Edelman. A network that learns to recognize 3D objects. Nature, 1989, in press.
[7] T. Poggio and F. Girosi. A theory of networks for approximation and learning. MIT A.I. Memo No. 1140, 1989.
[8] I. Rock and J. DiVita. A case of viewer-centered object perception. Cognitive Psychology, 19:280-293, 1987.
[9] R. N. Shepard and L. A. Cooper. Mental Images and Their Transformations. MIT Press, 1982.
[10] M. Tarr and S. Pinker. Mental rotation and orientation-dependence in shape recognition. Cognitive Psychology, 21, 1989.
1,541 | 2,400 | Wormholes Improve Contrastive Divergence
Geoffrey Hinton, Max Welling and Andriy Mnih
Department of Computer Science, University of Toronto
10 King's College Road, Toronto, M5S 3G5 Canada
{hinton,welling,amnih}@cs.toronto.edu
Abstract
In models that define probabilities via energies, maximum likelihood
learning typically involves using Markov Chain Monte Carlo to sample
from the model's distribution. If the Markov chain is started at the data
distribution, learning often works well even if the chain is only run for a
few time steps [3]. But if the data distribution contains modes separated
by regions of very low density, brief MCMC will not ensure that different
modes have the correct relative energies because it cannot move particles
from one mode to another. We show how to improve brief MCMC by
allowing long-range moves that are suggested by the data distribution.
If the model is approximately correct, these long-range moves have a
reasonable acceptance rate.
1 Introduction
One way to model the density of high-dimensional data is to use a set of parameters, θ, to
deterministically assign an energy, E(x|θ), to each possible datavector, x [2]:

$$p(x|\theta) = \frac{e^{-E(x|\theta)}}{\int e^{-E(y|\theta)}\, dy} \qquad (1)$$
The obvious way to fit such an energy-based model to a set of training data is to follow the
gradient of the likelihood. The contribution of a training case, x, to the gradient is:

$$\frac{\partial \log p(x|\theta)}{\partial \theta_j} = -\frac{\partial E(x|\theta)}{\partial \theta_j} + \int p(y|\theta)\, \frac{\partial E(y|\theta)}{\partial \theta_j}\, dy \qquad (2)$$
The last term in equation 2 is an integral over all possible datavectors and is usually intractable, but it can be approximated by running a Markov chain to get samples from the
Boltzmann distribution defined by the model's current parameters. The main problem with
this approach is the time that it takes for the Markov chain to approach its stationary distribution. Fortunately, in [3] it was shown that if the chain is started at the data distribution,
running the chain for just a few steps is often sufficient to provide a signal for learning.
The way in which the data distribution gets distorted by the model in the first few steps of
the Markov chain provides enough information about how the model differs from reality to
allow the parameters of the model to be improved by lowering the energy of the data and
raising the energy of the "confabulations" produced by a few steps of the Markov chain.
So the steepest ascent learning algorithm implied by equation 2 becomes

$$\Delta \theta_j \propto -\left\langle \frac{\partial E(\cdot|\theta)}{\partial \theta_j} \right\rangle_{data} + \left\langle \frac{\partial E(\cdot|\theta)}{\partial \theta_j} \right\rangle_{confabulations} \qquad (3)$$
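As an illustration of equation 3 in code, here is a minimal contrastive-divergence-style update; the helper names (energy_grad, mcmc_step) and the learning-rate handling are our assumptions, not the authors' implementation.

```python
import numpy as np

def cd_update(theta, data, energy_grad, mcmc_step, lr=0.01, k=1):
    """One CD-k style parameter update for an energy-based model.
    energy_grad(x, theta) -> dE/dtheta at x; mcmc_step(x, theta) -> one transition."""
    confabs = [x for x in data]
    for _ in range(k):                       # brief chain started at the data
        confabs = [mcmc_step(x, theta) for x in confabs]
    g_data = np.mean([energy_grad(x, theta) for x in data], axis=0)
    g_conf = np.mean([energy_grad(x, theta) for x in confabs], axis=0)
    return theta + lr * (-g_data + g_conf)   # equation (3)
```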
Figure 1: a) shows a two-dimensional data distribution that has four well-separated modes. b) shows
a feedforward neural network that is used to assign an energy to a two-dimensional input vector. Each
hidden unit takes a weighted sum of its inputs, adds a learned bias, and puts this sum through a logistic
non-linearity to produce an output that is sent to the next layer. Each hidden unit makes a contribution
to the global energy that is equal to its output times a learned scale factor. There are 20 units in the
first hidden layer and 3 in the top layer.
where the angle brackets denote expected values under the distribution specified by the
subscript.
If we use a Markov chain that obeys detailed balance, it is clear that when the training data
is dense and the model is perfect, the learning procedure in equation 3 will leave the parameters unchanged because the Markov chain will already be at its stationary distribution
so the confabulations will have the same distribution as the training data.
Unfortunately, real training sets may have modes that are separated by regions of very
low density, and running the Markov chain for only a few steps may not allow it to move
between these modes even when there is a lot of data. As a result, the relative energies
of data points in different modes can be completely wrong without affecting the learning
signal given by equation 3. The point of this paper is to show that, in the context of model-fitting, there are ways to use the known training data to introduce extra mode-hopping
moves into the Markov chain. We rely on the observation that after some initial training,
the training data itself provides useful suggestions about where the modes of the model are
and how much probability mass there is in each mode.
2 A simple example of wormholes
Figure 1a shows some two-dimensional training data and a model that was used to model
the density of the training data. The model is an unsupervised deterministic feedforward
neural network with two hidden layers of logistic units. The parameters of the model are
the weights and biases of the hidden units and one additional scale parameter per hidden
unit which is used to convert the output of the hidden unit into an additive contribution to
the global energy. By using backpropagation through the model, it is easy to compute the
derivatives of the global energy assigned to an input vector w.r.t. the parameters (needed in
equation 3), and it is also easy to compute the gradient of the energy w.r.t. each component
of the input vector (i.e. the slope of the energy surface at that point in dataspace). The latter
gradient is needed for the "Hybrid Monte Carlo" sampler that we discuss next.
The model is trained on 1024 datapoints for 1000 parameter updates using equation 3. To
produce the confabulations we start at the datapoints and use a Markov chain that is a
Figure 2: (a) shows the probabilities learned by the network without using wormholes, displayed on
a 32 × 32 grid in the dataspace. Some modes have much too little probability mass. (b) shows that
the probability mass in the different minima matches the data distribution after 10 parameter updates
using point-to-point wormholes defined by the vector differences between pairs of training points.
The mode-hopping allowed by the wormholes increases the number of confabulations that end up in
the deeper minima which causes the learning algorithm to raise the energy of these minima.
simplified version of Hybrid Monte Carlo. Each datapoint is treated as a particle on the energy
surface. The particle is given a random initial momentum chosen from a unit-variance
isotropic Gaussian and its deterministic trajectory along the energy surface is then simulated for 10 time steps. If this simulation has no numerical errors the increase, ΔE, in
the combined potential and kinetic energy will be zero. If ΔE is positive, the particle is
returned to its initial position with a probability of 1 − exp(−ΔE). The step size is adapted
after each batch of trajectories so that only about 10% of the trajectories get rejected. Numerical errors up to second order are eliminated by using a "leapfrog" method [5] which
uses the potential energy gradient at time t to compute the velocity increment between time
t − 1/2 and t + 1/2 and uses the velocity at time t + 1/2 to compute the position increment
between time t and t + 1.
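A minimal sketch of this simplified sampler, with step-size adaptation omitted and grad_E assumed to return the energy gradient:

```python
import numpy as np

def hmc_move(x0, energy, grad_E, step, n_steps=10, rng=np.random):
    """One simplified HMC move: leapfrog for n_steps, then accept/reject."""
    x = x0.copy()
    p = rng.standard_normal(x.shape)             # unit-variance isotropic momentum
    e0 = energy(x) + 0.5 * np.dot(p, p)
    p -= 0.5 * step * grad_E(x)                  # half-step for the velocity
    for _ in range(n_steps - 1):
        x += step * p
        p -= step * grad_E(x)
    x += step * p
    p -= 0.5 * step * grad_E(x)                  # final half-step
    e1 = energy(x) + 0.5 * np.dot(p, p)
    if np.log(rng.uniform()) < e0 - e1:          # reject with prob 1 - exp(-dE)
        return x
    return x0
```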
Figure 2a shows the probability density over the two-dimensional space. Notice that the
model assigns much more probability mass to some minima than to others. It is clear that
the learning procedure in equation 3 would correct this imbalance if the confabulations
were generated by a time-consuming Markov chain that was able to concentrate the confabulations in the deepest minima,¹ but we want to make use of the data distribution to
achieve the same goal much faster.
Figure 2b shows how the probability density is corrected by 10 parameter updates using a
Markov chain that has been modified by adding an optional long-range jump at the end of
each accepted trajectory. The candidate jump is simply the vector difference between two
randomly selected training points. The jump is always accepted if it lowers the energy. If it
raises the energy it is accepted with a probability of exp(−ΔE). Since the probability that
point A in the space will be offered a jump to point B is the same as the probability that
B will be offered a jump to A, the jumps do not affect detailed balance. One way to think
about the jumps is to imagine that every point in the dataspace is connected by wormholes
to n(n − 1) other points so that it can move to any of these points in a single step.
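The jump proposal itself takes only a few lines; here is a hedged sketch, with the training set X, the energy function, and the random generator assumed given:

```python
import numpy as np

def point_wormhole_jump(x, X, energy, rng=np.random):
    """Propose a long-range move x -> x + (x_a - x_b) for two randomly chosen
    training points, accepted with the usual Metropolis rule."""
    a, b = rng.choice(len(X), size=2, replace=False)
    y = x + X[a] - X[b]
    dE = energy(y) - energy(x)
    if dE <= 0 or rng.uniform() < np.exp(-dE):
        return y
    return x
```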
To understand how the long-range moves deal with the trade-off between energy and entropy, consider a proposed move that is based on the vector offset between a training point
¹ Note that depending on the height of the energy barrier between the modes this may take too
long for practical purposes.
that lies in a deep narrow energy minimum and a training point that lies in a broad shallow
minimum. If the move is applied to a random point in the deep minimum, it stands a good
chance of moving to a point within the broad shallow minimum, but it will probably be rejected because the energy has increased. If the opposite move is applied to a random point
in the broad minimum, the resulting point is unlikely to fall within the narrow minimum,
though if it does it is very likely to be accepted. If the two minima have the same free
energy, these two effects exactly balance.
Jumps generated by random pairs of datapoints work well if the minima are all the same
shape, but in a high-dimensional space it is very unlikely that such a jump will be accepted
if different energy minima are strongly elongated in different directions.
3 A local optimization-based method
In high dimensions the simple wormhole method will have a low acceptance rate because
most jumps will land in high-energy regions. One way to avoid that is to use local optimization: after a jump has been made, descend into a nearby low-energy region. The obvious
difficulty with this approach is that care must be taken to preserve detailed balance. We
use a variation on the method proposed in [7]. It fits Gaussians to the detected low-energy
regions in order to account for their volume.
A Gaussian is fitted using the following procedure. Given a point x, let m_x be the point
found by running a minimization algorithm on E(x) for a few steps (or until convergence)
starting at x. Let H_x be the Hessian of E(x) at m_x, adjusted to ensure that it is positive
definite by adding a multiple of the identity matrix to it. Let Σ_x be the inverse of H_x. A
Gaussian density g_x(y) is then defined by the mean m_x and the covariance matrix Σ_x.
To generate a jump proposal, we make a forward jump by adding the vector difference d
between two randomly selected data points to the initial point x₀, obtaining x. Then we
compute m_x and Σ_x, and sample a proposed jump destination y from g_x(y). Then we
make a backward jump by adding −d to y to obtain z, and compute m_z and Σ_z, specifying
g_z(x). Finally, we accept the proposal y with probability

$$p = \min\left(1,\; \frac{\exp(-E(y))\, g_z(x_0)}{\exp(-E(x_0))\, g_x(y)}\right).$$
Our implementation of the algorithm executes 20 steps of steepest descent to find mx and
mz . To save time, instead of computing the full Hessian, we compute a diagonal approximation to the Hessian using the method proposed in [1].
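A rough sketch of this proposal follows; the finite-difference diagonal Hessian, the fixed descent schedule, and all names are our simplifications rather than the paper's exact implementation:

```python
import numpy as np

def fit_gaussian(x, energy, grad_E, lr=0.1, n_steps=20, eps=1e-4, floor=1e-3):
    """Descend from x for n_steps, then fit a Gaussian at the minimum using a
    diagonal finite-difference Hessian, floored to keep it positive."""
    m = x.copy()
    for _ in range(n_steps):
        m -= lr * grad_E(m)
    d = len(m)
    h = np.empty(d)
    e_m = energy(m)
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        h[i] = (energy(m + e) - 2.0 * e_m + energy(m - e)) / eps**2
    var = 1.0 / np.maximum(h, floor)               # diagonal covariance
    return m, var

def gauss_logpdf(y, mean, var):
    return -0.5 * np.sum((y - mean) ** 2 / var + np.log(2.0 * np.pi * var))

def optimized_jump(x0, X, energy, grad_E, rng=np.random):
    a, b = rng.choice(len(X), size=2, replace=False)
    d = X[a] - X[b]                                # offset from two training points
    m_x, v_x = fit_gaussian(x0 + d, energy, grad_E)   # forward jump, then descend
    y = m_x + np.sqrt(v_x) * rng.standard_normal(len(x0))
    m_z, v_z = fit_gaussian(y - d, energy, grad_E)    # backward jump, then descend
    log_ratio = (-energy(y) + gauss_logpdf(x0, m_z, v_z)
                 + energy(x0) - gauss_logpdf(y, m_x, v_x))
    return y if np.log(rng.uniform()) < log_ratio else x0
```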
4 Gaping wormholes
In this section we describe a third method based on "darting MCMC" [8] to jump between
the modes of a distribution. The idea of this technique is to define spherical regions on the
modes of the distribution and to jump only between corresponding points in those regions.
When we consider a long-range move we check whether or not we are inside a wormhole. When inside a wormhole we initiate a jump to some other wormhole (e.g. chosen
uniformly); when outside we stay put in order to maintain detailed balance. If we make a
jump we must also use the usual Metropolis rejection rule to decide whether to accept the
jump.
In high dimensional spaces this procedure may still lead to unacceptably high rejection
rates because the modes will likely decay sharply in at least a few directions. Since these
ridges of probability are likely to be uncorrelated across the modes, the proposed target
location of the jump will most of the time have very low probability, resulting in almost
certain rejection. To deal with this problem, we propose a generalization of the described
method, where the wormholes can have arbitrary shapes and volumes. As before, when
we are considering a long-range move we first check our position, and if we are located
inside a wormhole we initiate a jump (which may be rejected) while if we are located
outside a wormhole we stay put. To maintain detailed balance between wormholes we
need to compensate for their potentially different volume factors. To that end, we impose
the constraint
Vi Pi?j = Vj Pj?i
(4)
on all pairs of wormholes, where Pi?j is a transition probability and Vi and Vj are the
volumes of the wormholes i and j respectively. This in fact defines a separate Markov
chain between the wormholes with equilibrium distribution,
Vi
PiEQ = P
j Vj
(5)
The simplest method² to compensate for the different volume factors is therefore to sample a target wormhole from this distribution P^EQ. When the target wormhole has been
determined we can either sample a point uniformly within its volume or design some deterministic mapping (see also [4]). Finally, once the arrival point has been determined we
need to compensate for the fact that the probability of the point of departure is likely to be
different from the probability of the point of arrival. The usual Metropolis rule applies in
this case,

$$P_{accept} = \min\left(1,\; \frac{P_{arrive}}{P_{depart}}\right) \qquad (6)$$
This combined set of rules ensures that detailed balance holds and that the samples will
eventually come from the correct probability distribution. One way of employing this sampler in conjunction with contrastive divergence learning is to fit a "mixture of Gaussians"
model to the data distribution in a preprocessing step. The region inside an iso-probability
contour of each Gaussian mixture component defines an elliptical wormhole with volume

$$V_{ellipse} = \frac{\pi^{d/2}\, \beta^d \prod_{i=1}^d \sigma_i}{\Gamma(1 + \frac{d}{2})} \qquad (7)$$

where Γ(x) is the gamma function, σ_i is the standard deviation of the i-th eigen-direction of
the covariance matrix and β is a free parameter controlling the size of the wormhole. These
regions provide good jump points during CD-learning because it is expected that the valleys
in the energy landscape correspond to the regions where the data cluster. To minimize the
rejection rate we map points in one ellipse to "corresponding" points in another ellipse as
follows. Let Σ_depart and Σ_arrive be the covariance matrices of the wormholes in question,
and let Σ = U S U^T be an eigenvalue decomposition. The following transformation maps
iso-probability contours in one wormhole to iso-probability contours in another,

$$x_{arrive} - \mu_{arrive} = -U_{arrive}\, S_{arrive}^{1/2}\, S_{depart}^{-1/2}\, U_{depart}^T\, (x_{depart} - \mu_{depart}) \qquad (8)$$

with μ the center location of the ellipse. The negative sign in front of the transformation
is to promote better exploration when the target wormhole turns out to be the same as
the wormhole from which the jump is initiated. It is important to realize that although
the mapping is one-to-one, we still need to satisfy the constraint in equation 4 because a
volume element dx will change under the mapping. Thus, wormholes are sampled from
P^EQ and proposed moves are accepted according to equation 6.
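Putting equations 5-8 together, one region-to-region move could be sketched as below; the mixture fit is assumed already done, β = 10 follows the experiment in Section 5, and all names are ours:

```python
import numpy as np

def region_wormhole_move(x, means, covs, energy, beta=10.0, rng=np.random):
    """One region-to-region jump between elliptical wormholes fitted to the data."""
    eigs = [np.linalg.eigh(c) for c in covs]            # per component: (s, U)
    vols = np.array([np.prod(beta * np.sqrt(s)) for s, u in eigs])
    p_eq = vols / vols.sum()                            # eq. (5); shared constants cancel

    def inside(k):                                      # beta-contour membership test
        s, u = eigs[k]
        v = u.T @ (x - means[k])
        return np.sum(v ** 2 / s) <= beta ** 2

    i = next((k for k in range(len(means)) if inside(k)), None)
    if i is None:
        return x                                        # outside all wormholes: stay put
    j = rng.choice(len(means), p=p_eq)                  # target sampled from P^EQ
    s_i, u_i = eigs[i]
    s_j, u_j = eigs[j]
    v = u_i.T @ (x - means[i])
    y = means[j] - u_j @ (np.sqrt(s_j / s_i) * v)       # deterministic map, eq. (8)
    dE = energy(y) - energy(x)                          # eq. (6) with p proportional to e^{-E}
    return y if np.log(rng.uniform()) < -dE else x
```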
For both the deterministic and the stochastic moves we may also want to consider regions
that overlap. For instance, if we generate wormholes by fitting a mixture of Gaussians it
² Other methods that respect the constraint 4 are possible but are suboptimal in the sense that they mix more slowly to the equilibrium distribution.
Figure 3: (a) Dataset of 1024 cases uniformly distributed on 2 orthogonal narrow rectangles. (b)
Probability density of the model learned with contrastive divergence. The size of each square indicates the probability mass at the corresponding location.
is very hard to check whether these regions overlap somewhere in space. Fortunately, we
can adapt the sampling procedure to deal with this case as well. First define n_arrive as
the total number of regions that contain the point x_arrive, and similarly for n_depart. Detailed
balance can still be maintained for both deterministic and stochastic moves if we adapt the
Metropolis acceptance rule as follows,

$$P_{accept} = \min\left(1,\; \frac{n_{depart}\, P_{arrive}}{n_{arrive}\, P_{depart}}\right) \qquad (9)$$

Further details can be found in [6].
5 An experimental comparison of the three methods
To highlight the difference between the point and the region wormhole sampler, we sampled
1024 data points along two very narrow orthogonal ridges (see figure 3a), with half of
the cases in each mode. A model with the same architecture as depicted in figure 1 was
learned using contrastive divergence, but with "Cauchy" nonlinearities of the form f(x) =
log(1 + x²) instead of the logistic function. The probability density of the model that
resulted is shown in figure 3b. Clearly, the lack of mixing between the modes has resulted
in one mode being much stronger than the other one. Subsequently, learning was resumed
using a Markov chain that proposed a long-range jump for all confabulations after each
brief HMC run. The regions in the region wormhole sampler were generated by fitting
a mixture of two Gaussians to the data using EM, and setting β = 10. Both the point
wormhole method and the region wormhole method were able to correct the asymmetry in
the solution but the region method does so much faster as shown in figure 4b. The reason
is that a much smaller fraction of the confabulations succeed in making a long-range jump
as shown in figure 4a.
We then compared all three wormhole algorithms on a family of datasets of varying dimensionality. Each dataset contained 1024 n-dimensional points, where n was one of 2,
4, 8, 16, or 32. The first two components of each point were sampled uniformly from two
axis-aligned narrow orthogonal ridges and then rotated by 45° around the origin to ensure
that the diagonal approximation to the Hessian, used by the local optimization-based algorithm, was not unfairly accurate. The remaining n − 2 components of each data point were
sampled independently from a sharp univariate Gaussian with mean 0 and std. 0.02.
[Figure 4 plots: accepted long-range jumps (a) and log-odds (b) vs. parameter updates.]
Figure 4: (a) Number of successful jumps between the modes for point wormhole MCMC (dashed
line) and region wormhole MCMC (solid line). (b) Log-odds of the probability masses contained
in small volumes surrounding the two modes for the point wormhole method (dashed line) and the
region wormhole method (solid line). The log-odds is zero when the probability mass is equal in both
modes.
The networks used for comparison had architectures identical to the one depicted in Figure
1 in all respects except for the number and the type of units used. The second hidden
layer consisted of Cauchy units, while the first hidden layer consisted of some Cauchy and
some sigmoid units. The networks were trained for 2000 parameter updates using HMC
without wormholes. To speed up the training, an adaptive learning rate and a momentum
of 0.95 were used. We also used a weight decay rate of 0.0001 for weights and 0.000001
for scales. Gaussian noise was added to the last n − 2 components of each data point. The
std. of the noise started at 0.2 and was gradually decreased to zero as training progressed.
This prevented HMC from being slowed down by the narrow energy ravines resulting from
the tight constraints on the last n − 2 components.
After the model was trained (without wormholes), we compared the performance of the
three jump samplers by allowing each sampler to make a proposal for each training case and
then comparing the acceptance rates. This was repeated 25 times to improve the estimate
of the acceptance rate. In each sampler, HMC was run for 10 steps before offering points
an opportunity to jump.
The average number of successful jumps between modes per iteration is shown in the table
below.
Dimensionality    | Network architecture | Simple wormholes | Optimization-based | Region wormholes
2                 | 10+10, 2             | 10               | 15                 | 372
4                 | 20+10, 4             | 6                | 17                 | 407
8                 | 20+10, 6             | 3                | 19                 | 397
16                | 40+10, 8             | 1                | 13                 | 338
32                | 50+10, 10            | 1                | 9                  | 295
Relative run time |                      | 1                | 2.6                | 1
The network architecture column shows the number of units in the hidden layers with each
entry giving the number of Cauchy units plus the number of sigmoid units in the first hidden
layer and the number of Cauchy units in the second hidden layer.
6 Summary
Maximum likelihood learning of energy-based models is hard because the gradient of the
log probability of the data with respect to the parameters depends on the distribution defined
by the model and it is computationally expensive to even get samples from this distribution.
Minimizing contrastive divergence is much easier than maximizing likelihood but the brief
Markov chain does not have time to mix between separated modes in the distribution.³
The result is that the local structure around each data cluster is modelled well, but the
relative masses of different cluster are not. In this paper we proposed three algorithms
to deal with this phenomenon. Their success relies on the fact that the data distribution
provides valuable suggestions about the location of the modes of a good model. Since the
probability of the model distribution is expected to be substantial in these regions they can
be successfully used as target locations for long-range moves in a MCMC sampler.
The MCMC sampler with point-to-point wormholes is simple but has a high rejection rate
when the modes are not aligned. Performing local gradient descent after a jump significantly increases the acceptance rate, but only leads to a modest improvement in efficiency
because of the extra computations required to maintain detailed balance. The MCMC sampler with region-to-region wormholes targets its moves to regions that are likely to have
high probability under the model and therefore has a much better acceptance rate, provided
the distribution can be modelled well by a mixture. None of the methods we have proposed will work well for high-dimensional, approximately factorial distributions that have
exponentially many modes formed by the cross-product of multiple lower-dimensional distributions.
Acknowledgements This research was funded by NSERC, CFI, OIT. We thank Radford Neal and
Yee-Whye Teh for helpful advice and Sam Roweis for providing software.
References
[1] S. Becker and Y. LeCun. Improving the convergence of back-propagation learning with second-order methods. In D. Touretzky, G. Hinton, and T. Sejnowski, editors, Proc. of the 1988 Connectionist Models Summer School, pages 29-37, San Mateo, 1989. Morgan Kaufmann.
[2] Y. Bengio, R. Ducharme, and P. Vincent. A neural probabilistic language model. In Advances in Neural Information Processing Systems, 2001.
[3] G.E. Hinton. Training products of experts by minimizing contrastive divergence. Neural Computation, 14:1771-1800, 2002.
[4] C. Jarzynski. Targeted free energy perturbation. Technical Report LAUR-01-2157, Los Alamos National Laboratory, 2001.
[5] R.M. Neal. Probabilistic inference using Markov chain Monte Carlo methods. Technical Report CRG-TR-93-1, University of Toronto, Computer Science, 1993.
[6] C. Sminchisescu, M. Welling, and G. Hinton. Generalized darting Monte Carlo. Technical Report CSRG-478, University of Toronto, 2003.
[7] H. Tjelmeland and B.K. Hegstad. Mode jumping proposals in MCMC. Technical report, Norwegian University of Science and Technology, Trondheim, Norway, 1999. Rep. No. Statistics 1/1999.
[8] A. Voter. A Monte Carlo method for determining free-energy differences and transition state theory rate constants. 82(4), 1985.
³ However, note that in cases where the modes are well separated, even Markov chains that run for
an extraordinarily long time will not mix properly between those modes, and the results of this paper
become relevant.
A probabilistic model of auditory space
representation in the barn owl
Brian J. Fischer
Dept. of Electrical and Systems Eng.
Washington University in St. Louis
St. Louis, MO 63110
[email protected]
Charles H. Anderson
Department of Anatomy and Neurobiology
Washington University in St. Louis
St. Louis, MO 63110
[email protected]
Abstract
The barn owl is a nocturnal hunter, capable of capturing prey using auditory information alone [1]. The neural basis for this localization behavior is the existence of auditory neurons with spatial receptive fields
[2]. We provide a mathematical description of the operations performed
on auditory input signals by the barn owl that facilitate the creation of a
representation of auditory space. To develop our model, we first formulate the sound localization problem solved by the barn owl as a statistical
estimation problem. The implementation of the solution is constrained
by the known neurobiology.
1 Introduction
The barn owl shows great accuracy in localizing sound sources using only auditory information [1]. The neural basis for this localization behavior is the existence of auditory
neurons with spatial receptive fields called space specific neurons [2]. Experimental evidence supports the hypothesis that spatial selectivity in auditory neurons arises from tuning
to a specific combination of the interaural time difference (ITD) and the interaural level
difference (ILD) [3]. Still lacking, however, is a complete account of how ITD and ILD
spectra are integrated across frequency to give rise to spatial selectivity. We describe a
computational model of the operations performed on the auditory input signals leading to
an initial representation of auditory space. We develop the model in the context of a statistical estimation formulation of the localization problem that the barn owl must solve. We
use principles of signal processing and estimation theory to guide the construction of the
model, but force the implementation to respect neurobiological constraints.
2 The environment
The environment consists of Ns point sources and a source of ambient noise. Each point
source is defined by a sound signal, $s_i(t)$, and a direction $(\theta_i, \phi_i)$, where $\theta_i$ is the azimuth
and $\phi_i$ is the elevation of the source relative to the owl's head. In general, source location
may change over time. For simplicity, however, we assume that source locations are
fixed. Source signals can be broadband or narrowband. Signals with onsets are modeled as
broadband noise signals modulated by a temporal envelope,
$s_i(t) = \left[\sum_{n=1}^{N_i} w_{in}(t)\right] n_i(t)$,
where $w_{in}(t) = A_{in}\, e^{-\frac{1}{2}(t - c_{in})^2/\sigma_{in}^2}$ and $n_i(t)$ is Gaussian white noise bandlimited to 12
kHz (see figure (4A)).
3 Virtual Auditory Space
The first step in the localization process is the location-dependent mapping of source signals
to the received pressure waveforms at the eardrums. For a given source location, the system
describing the transformation of a source signal to the waveform received at the eardrum is
well approximated by a linear system. This system is characterized by its transfer function
called the head related transfer function (HRTF) or, equivalently, by its impulse response,
the head related impulse response (HRIR). Additionally, when multiple sources are present
the composite waveform at each ear is the sum of the waveforms received due to each
source alone. Therefore, we model the received pressure waveforms at the ears as
$$r_L(t) = \sum_{i=1}^{N_s} h_{L(\theta_i,\phi_i)}(t) * s_i(t) + n_L(t) \quad\text{and}\quad r_R(t) = \sum_{i=1}^{N_s} h_{R(\theta_i,\phi_i)}(t) * s_i(t) + n_R(t) \qquad (1)$$

where $h_{L(\theta,\phi)}(t)$ and $h_{R(\theta,\phi)}(t)$ are the HRIRs for the left and right ears, respectively, when
the source location is $(\theta, \phi)$ [4], and $n_L(t)$, $n_R(t)$ are the ambient noises experienced by
the left and right ears, respectively. For our simulations, the ambient noise for each ear
is created using a sample of a natural sound recording of a stream, $s_b(t)$ [5]. The sample
is filtered by HRIRs for all locations in the frontal hemisphere, $\Theta$, then averaged, so that
$n_L(t) = \frac{1}{|\Theta|}\sum_{i\in\Theta} h_{L(\theta_i,\phi_i)}(t) * s_b(t)$ and
$n_R(t) = \frac{1}{|\Theta|}\sum_{i\in\Theta} h_{R(\theta_i,\phi_i)}(t) * s_b(t)$.
4 Cue Extraction
In our model, location information is not inferred directly from the received signals but is
obtained from stimulus-independent binaural location cues extracted from the input signals
[6],[7]. The operations used in our model to process the auditory input signals and extract
cues are motivated by the known processing in the barn owl's auditory system and by the
desire to extract stimulus-independent location cues from the auditory signals that can be
used to infer the locations of sound sources.
4.1 Cochlear processing
In the first stage of our model, input signals are filtered with a bank of linear band-pass
filters. Following linear filtering, input signals undergo half-wave rectification. So, the
input signals to the two ears $r_L(t)$ and $r_R(t)$ are decomposed into a set of scalar-valued
functions $u_L(t,\omega_k)$ and $u_R(t,\omega_k)$ defined by

$$u_L(t,\omega_k) = [f_{\omega_k} * r_L(t)]_+ \quad\text{and}\quad u_R(t,\omega_k) = [f_{\omega_k} * r_R(t)]_+ \qquad (2)$$

where $f_{\omega_k}(t)$ is the linear bandpass filter for the channel with center frequency $\omega_k$. Here
we use the standard gammatone filter $f_{\omega_k}(t) = t^{\gamma-1} e^{-t/\tau_k} \cos(\omega_k t)$ with $\gamma = 4$ [8].
Following rectification there is a gain control step that is a modified version of the divisive
normalization model of Schwartz and Simoncelli [9]. We introduce intermediate variables
$\nu_L(t,\omega_k)$ and $\nu_R(t,\omega_k)$ that dynamically compute the intensity of the signals within each
frequency channel as

$$\dot{\nu}_L(t,\omega_k) = -\frac{\nu_L(t,\omega_k)}{\tau} + \frac{u_L(t,\omega_k)}{\sum_n a_{kn}\,\rho(t,\omega_n) + \sigma} \qquad (3)$$

and

$$\dot{\nu}_R(t,\omega_k) = -\frac{\nu_R(t,\omega_k)}{\tau} + \frac{u_R(t,\omega_k)}{\sum_n a_{kn}\,\rho(t,\omega_n) + \sigma} \qquad (4)$$

where $\rho(t,\omega_n) = \nu_L(t,\omega_n) + \nu_R(t,\omega_n)$. We define the output of the cochlear filter in
frequency channel $k$ to be

$$v_L(t,\omega_k) = \frac{u_L(t,\omega_k)}{\sum_n a_{kn}\,\rho(t,\omega_n) + \sigma} \quad\text{and}\quad v_R(t,\omega_k) = \frac{u_R(t,\omega_k)}{\sum_n a_{kn}\,\rho(t,\omega_n) + \sigma} \qquad (5)$$
for the left and right, respectively. Note that the rectified outputs from the left and right ears,
uL (t, ?k ) and uR (t, ?k ), are normalized by the same term so that binaural disparities are
not introduced by the gain control operation. Initial cue extraction operations are performed
within distinct frequency channels established by this filtering process.
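As a concrete illustration of equations (2)-(5), the sketch below filters the two ear signals
with a gammatone bank, half-wave rectifies, and applies the divisive normalization by
forward-Euler integration of the $\nu$ dynamics. The sampling rate handling, filter support,
bandwidth rule, and the choice $a_{kn} = 1$ are illustrative assumptions.

```python
import numpy as np

def gammatone(t, omega, tau_k, gamma=4):
    # f(t) = t^(gamma-1) e^(-t/tau_k) cos(omega t), zero for t < 0.
    return np.where(t >= 0, t ** (gamma - 1) * np.exp(-t / tau_k) * np.cos(omega * t), 0.0)

def cochlear_front_end(r_L, r_R, fs, centers_hz, tau=2e-3, sigma=1e-3):
    """Rectified, divisively normalized channel outputs v_L, v_R (eqs. 2-5)."""
    t = np.arange(0, 0.01, 1.0 / fs)                # 10 ms filter support (assumed)
    u_L, u_R = [], []
    for fc in centers_hz:
        tau_k = 8.0 / (2 * np.pi * fc)              # bandwidth choice is illustrative
        h = gammatone(t, 2 * np.pi * fc, tau_k)
        u_L.append(np.maximum(np.convolve(r_L, h, mode="same"), 0.0))  # eq. (2)
        u_R.append(np.maximum(np.convolve(r_R, h, mode="same"), 0.0))
    u_L, u_R = np.array(u_L), np.array(u_R)
    K, T = u_L.shape
    nu_L, nu_R = np.zeros(K), np.zeros(K)           # intensity states of eqs. (3)-(4)
    v_L, v_R = np.zeros_like(u_L), np.zeros_like(u_R)
    dt = 1.0 / fs
    for n in range(T):
        denom = np.sum(nu_L + nu_R) + sigma         # sum_n a_kn * rho, with a_kn = 1
        v_L[:, n] = u_L[:, n] / denom               # eq. (5)
        v_R[:, n] = u_R[:, n] / denom
        nu_L += dt * (-nu_L / tau + u_L[:, n] / denom)   # forward Euler on eq. (3)
        nu_R += dt * (-nu_R / tau + u_R[:, n] / denom)   # forward Euler on eq. (4)
    return v_L, v_R
```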
4.2 Level difference cues
The level difference pathway has two stages. First, the outputs of the filter banks are
integrated over time to obtain windowed intensity measures for the components of the
left and right ear signals. Next, signals from the left and right ears are combined within
each frequency channel to measure the location dependent level difference. We compute
the intensity of the signal in each frequency channel over a small time window, w(t), as:
$$y_L(t,\omega_k) = \int_0^t v_L(\tau,\omega_k)\, w(t-\tau)\, d\tau \quad\text{and}\quad y_R(t,\omega_k) = \int_0^t v_R(\tau,\omega_k)\, w(t-\tau)\, d\tau. \qquad (6)$$

We use a simple exponential window $w(t) = e^{-t/\tau} H(t)$, where $H(t)$ is the unit step function.

The magnitudes of $y_L(t,\omega_k)$ and $y_R(t,\omega_k)$ vary with both the signal intensity and the gain
of the HRIR in the frequency band centered at $\omega_k$. To compute the level difference between
the input signals that is introduced by the HRIRs in a manner that is invariant to changes in
the intensity of the source signal, we compute

$$z(t,\omega_k) = \log\!\left(\frac{y_R(t,\omega_k)}{y_L(t,\omega_k)}\right). \qquad (7)$$
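A discrete-time version of equations (6)-(7) can exploit the one-pole recursion that the
exponential window admits; the window time constant and the small regularizer in the
sketch below are illustrative assumptions.

```python
import numpy as np

def level_difference_cues(v_L, v_R, fs, tau=5e-3, eps=1e-12):
    """Windowed intensities y_L, y_R (eq. 6) and log ratio z (eq. 7).
    v_L, v_R: channel outputs, shape (K, T)."""
    dt = 1.0 / fs
    a = np.exp(-dt / tau)                  # recursive form of w(t) = exp(-t/tau) H(t)
    y_L = np.zeros_like(v_L)
    y_R = np.zeros_like(v_R)
    for n in range(1, v_L.shape[1]):
        y_L[:, n] = a * y_L[:, n - 1] + dt * v_L[:, n]
        y_R[:, n] = a * y_R[:, n - 1] + dt * v_R[:, n]
    z = np.log((y_R + eps) / (y_L + eps))  # eps guards against log(0)
    return y_L, y_R, z
```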
4.3 Temporal difference cues
We use a modified version of the standard windowed cross correlation operation to measure
time differences. Our modifications incorporate three features that model processing in the
barn owl's auditory system. First, signals are passed through a saturating nonlinearity to
model the saturation of the nucleus magnocellularis (NM) inputs to the nucleus laminaris
(NL) [10]. We define $\phi_L(t,\omega_k) = F(v_L(t,\omega_k))$ and $\phi_R(t,\omega_k) = F(v_R(t,\omega_k))$, where
$F(\cdot)$ is a saturating nonlinearity. Let $x(t,\omega_k,m)$ denote the value of the cross correlation
in frequency channel $k$ at delay index $m \in \{0, \ldots, N\}$, defined by

$$\dot{x}(t,\omega_k,m) = -\frac{x(t,\omega_k,m)}{\tau(y(t,\omega_k))} + [\phi_L(t - \delta m,\omega_k) + \alpha]\,[\phi_R(t - \delta(N-m),\omega_k) + \beta]. \qquad (8)$$

Here, $\tau(y(t,\omega_k))$ is a time constant that varies with the intensity of the stimulus in the
frequency channel, where $y(t,\omega_k) = y_L(t,\omega_k) + y_R(t,\omega_k)$. The time constant decreases as
$y(t,\omega_k)$ increases, so that for more intense sounds information is integrated over a smaller
time window. This operation functions as a gain control and models the inhibition of NL
neurons by superior olive neurons [11]. The constants $\alpha, \beta > 0$ are included to reflect the
fact that NL neurons respond to monaural stimulation, [12], and are chosen so that at input
levels above threshold (0-5 dB SPL) the cross correlation term dominates. We choose
the delay increment $\delta$ to satisfy $\delta N = 200\,\mu s$ so that the full range of possible delays is
covered.
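The running cross correlation of equation (8) can likewise be integrated sample by sample.
In the sketch below the delay increment is one sample, and the saturating nonlinearity, the
constants $\alpha$ and $\beta$, and the intensity-to-time-constant mapping are illustrative assumptions.

```python
import numpy as np

def running_cross_correlation(v_L, v_R, y, fs, N, alpha=0.01, beta=0.01):
    """Forward-Euler integration of eq. (8) for one frequency channel.
    v_L, v_R: channel outputs; y: channel intensity y_L + y_R; N: max delay index."""
    phi_L = np.tanh(v_L)                      # saturating nonlinearity F (assumed tanh)
    phi_R = np.tanh(v_R)
    dt = 1.0 / fs
    x = np.zeros(N + 1)
    for n in range(N, len(phi_L)):
        tau = 5e-3 / (1.0 + 10.0 * y[n])      # tau shrinks as intensity grows (assumed)
        for m in range(N + 1):
            drive = (phi_L[n - m] + alpha) * (phi_R[n - (N - m)] + beta)
            x[m] += dt * (-x[m] / tau + drive)
    return x / (np.linalg.norm(x) + 1e-12)    # normalized vector used in Section 6
```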
5 Representing auditory space
The general localization problem that the barn owl must solve is that of localizing multiple
objects in its environment using both auditory and visual cues. An abstract discussion
of a possible solution to the localization problem will motivate our model of the owl's
initial representation of auditory space. Let $N_s(t)$ denote the number of sources at time
$t$. Assume that each source is characterized by the direction pair $(\theta_i, \phi_i)$ that obeys a
dynamical system $(\dot{\theta}_i, \dot{\phi}_i) = f(\theta_i, \phi_i, \eta_i)$, where $\eta_i$ is a noise term and $f : \mathbb{R}^3 \to \mathbb{R}^2$ is a
possibly nonlinear mapping. We assume that $(\theta_i(t), \phi_i(t))$ defines a stationary stochastic
process with known density $p(\theta_i, \phi_i)$ [6],[7]. At time $t$, let $\Theta^a_t$ denote a vector of cues
computed from auditory input and let $\Theta^v_t$ denote a vector of cues computed from visual
input. The problem is to estimate, at each time, the number and locations of sources in
the environment using past measurements of the auditory and visual cues at a finite set of
sample times. A simple Bayesian approach is to introduce a minimal state vector $\chi_t =
[\theta(t)\ \phi(t)]^T$, where $\dot{\chi}_t = f(\chi_t, \eta_t)$, and compute the posterior density of $\chi_t$ given the
cue measurements. Here the number and locations of sources can be inferred from the
existence and placement of multiple modes in the posterior. If we assume that the state
sequence $\{\chi_{t_n}\}$ is a Markov process and that the state is conditionally independent of past
cue measurements given the present cue measurement, then we can recursively compute
the posterior through a process of prediction and correction described by the equations

$$p(\chi_{t_n} \mid \Theta_{t_1:t_{n-1}}) = \int\!\!\int p(\chi_{t_n} \mid \chi_{t_{n-1}})\, p(\chi_{t_{n-1}} \mid \Theta_{t_1:t_{n-1}})\, d\chi_{t_{n-1}} \qquad (9)$$

$$p(\chi_{t_n} \mid \Theta_{t_1:t_n}) \propto p(\Theta_{t_n} \mid \chi_{t_n})\, p(\chi_{t_n} \mid \Theta_{t_1:t_{n-1}}) = p(\Theta^a_{t_n} \mid \chi_{t_n})\, p(\Theta^v_{t_n} \mid \chi_{t_n})\, p(\chi_{t_n} \mid \Theta_{t_1:t_{n-1}}) \qquad (10)$$

where $\Theta_t = [\Theta^a_t\ \Theta^v_t]^T$. This formulation suggests that at each time auditory space can be
represented in terms of the likelihood function $p(\Theta^a_t \mid \theta(t), \phi(t))$.
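On a discrete grid of candidate directions, the prediction and correction of equations
(9)-(10) reduce to a matrix-vector product followed by a pointwise multiplication. The grid
size and the sticky transition model in the sketch below are illustrative assumptions.

```python
import numpy as np

def bayes_filter_step(prior, transition, lik_a, lik_v=None):
    """One predict/correct step of eqs. (9)-(10) over G grid directions.
    prior: p(state at t_{n-1} | past cues), shape (G,);
    transition: column-stochastic matrix, transition[i, j] = p(state i | state j);
    lik_a, lik_v: auditory and visual likelihoods evaluated on the grid."""
    predicted = transition @ prior                                         # eq. (9)
    posterior = lik_a * (lik_v if lik_v is not None else 1.0) * predicted  # eq. (10)
    return posterior / posterior.sum()

G = 100
prior = np.full(G, 1.0 / G)
transition = 0.9 * np.eye(G) + 0.1 / G      # mostly-stationary dynamics (assumed)
posterior = bayes_filter_step(prior, transition, lik_a=np.random.rand(G))
```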
6 Combining temporal and intensity difference signals
To facilitate the calculation of the likelihood function over the locations, we introduce compact notation for the cues derived from the auditory signals. Let
$x(t,\omega_k) = [x(t,\omega_k,0), \ldots, x(t,\omega_k,N)] / \|[x(t,\omega_k,0), \ldots, x(t,\omega_k,N)]\|$ be the normalized vector of cross correlations computed within frequency channel $k$. Let $x(t) =
[x(t,\omega_1), \ldots, x(t,\omega_{N_F})]$ denote the spectrum of cross correlations and let $z(t) =
[z(t,\omega_1), \ldots, z(t,\omega_{N_F})]$ denote the spectrum of level differences, where $N_F$ is the number of frequency channels. Let $\Theta^a_t = [x(t)\ z(t)]^T$. We assume that $\Theta^a_t = [x(t)\ z(t)]^T =
[\bar{x}(\theta,\phi)\ \bar{z}(\theta,\phi)]^T + \eta(t)$, where $\bar{x}(\theta,\phi)$ and $\bar{z}(\theta,\phi)$ are the expected values of the cross
correlation and level difference spectra, respectively, for a single source located at $(\theta,\phi)$,
and $\eta(t)$ is Gaussian white noise [6],[7].
Experimental evidence about the nature of auditory space maps in the barn owl suggests
that spatial selectivity occurs after both the combination of temporal and level difference
cues and the combination of information across frequency [3],[13]. The computational
model specifies that the transformation from cues computed from the auditory input signals to a representation of space occurs by performing inference on the cues through the
likelihood function
$$p(\Theta^a_t \mid \theta,\phi) = p(x(t), z(t) \mid \theta,\phi) \propto \exp\!\left(-\tfrac{1}{2}\,\|(x(t), z(t)) - (\bar{x}(\theta,\phi), \bar{z}(\theta,\phi))\|^2_{\Sigma_n^{-1}}\right). \qquad (11)$$
The known physiology of the barn owl places constraints on how this likelihood function
can be computed. First, the spatial tuning of auditory neurons in the optic tectum is consistent with a model where spatial selectivity arises from tuning to combinations of time
difference and level difference cues within each frequency channel [14]. This suggests
Figure 1: Non-normalized likelihood functions at t = 26 ms with sources located at
(−25°, 0°) and (0°, 25°). Source signals are $s_1(t) = A\sum_i \cos(\omega_i^1 t)$ and $s_2(t) = A\sum_j \cos(\omega_j^2 t)$,
where $\omega_i^1 \neq \omega_j^2$ for any $i, j$. Left: Linear model of frequency combination. Right: Multiplicative model of frequency combination.
that time and intensity information is initially combined multiplicatively within frequency
channels.
Given this constraint we propose two models of the frequency combination step. In the first
model of frequency integration we assume that the likelihood is a product of kernels
$$p(x(t), z(t) \mid \theta,\phi) \propto \prod_k K(x(t,\omega_k), z(t,\omega_k); \theta,\phi). \qquad (12)$$

Each kernel is a product of a temporal difference function and a level difference function
to respect the first constraint,

$$K(x(t,\omega_k), z(t,\omega_k); \theta,\phi) = K_x(x(t,\omega_k); \theta,\phi)\, K_z(z(t,\omega_k); \theta,\phi). \qquad (13)$$

If we require that each kernel is normalized,
$\int\!\!\int K(x(t',\omega_k), z(t',\omega_k); \theta,\phi)\, dx(t',\omega_k)\, dz(t',\omega_k) = 1$ for each $t'$, then the multiplicative model is a factorization of the likelihood into a product of the conditional probabilities
$p(x(t',\omega_k), z(t',\omega_k) \mid \theta,\phi)$. The second model is a linear model of frequency integration,
where the likelihood is approximated by a kernel estimate of the form

$$p(x(t), z(t) \mid \theta,\phi) \approx \sum_k c_k(y(t,\omega_k))\, K(x(t,\omega_k), z(t,\omega_k); \theta,\phi) \qquad (14)$$
where each kernel is of the above product form. We again assume that the kernels are
normalized, but we weight each kernel by the intensity of the signal in that frequency
channel.
Experiments performed in multiple source environments by Takahashi et al. suggest that
information is not multiplied across frequency channels [15]. Takahashi et al. measured the
response of space specific neurons in the external nucleus of the inferior colliculus under
conditions of two sound sources located on the horizontal plane with each signal consisting
of a unique combination of sinusoids. Their results suggest that a bump of activity will
be present at each source location in the space map. Using identical stimuli (see Table 1
columns A and C in [15]) we compute the likelihood function using the linear model and
the multiplicative model. The results shown in figure (1) demonstrate that with a linear
model the likelihood function will display a peak corresponding to each source location,
but with the multiplicative model only a spurious location that is consistent among the
kernels remains and information about the two sources is lost. Therefore, we use a model
in which time difference and level difference information is first combined multiplicatively
within frequency channels and is then summed across frequency.
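The linear combination rule can be evaluated directly on a grid of candidate directions
once templates $\bar{x}$ and $\bar{z}$ have been stored. The Gaussian kernels below mirror the
parameter choices of Section 7.1; the array shapes and template values themselves are
placeholders.

```python
import numpy as np

def likelihood_map(x, z, x_bar, z_bar, c, sigma2=0.1):
    """Evaluate the linear model of eq. (14) over G candidate directions.
    x: measured cross-correlation spectrum, shape (K, N+1); z: level differences, (K,);
    x_bar, z_bar: stored templates per direction, shapes (G, K, N+1) and (G, K);
    c: intensity weights c_k(y), shape (K,)."""
    G = x_bar.shape[0]
    L = np.zeros(G)
    for g in range(G):
        Kx = np.exp(-0.5 * np.sum((x - x_bar[g]) ** 2, axis=1) / sigma2)
        Kz = np.exp(-0.5 * (z - z_bar[g]) ** 2 / sigma2)
        L[g] = np.sum(c * Kx * Kz)   # multiply within channels, sum across frequency
    return L / (L.sum() + 1e-12)
```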
7 Examples
7.1 Parameters
In each example stimuli are presented for 100 ms and HRIRs for owl 884 recorded by Keller
et al., [4], are used to generate the input signals. We use six gammatone filters for each ear
Figure 2: Non-normalized likelihood functions at t = 21.1 ms for a single source located
at (−25°, −15°). Left: Broadband source signal at 50 dB SPL. Right: Source signal is a 7
kHz tone at 50 dB SPL.
Figure 3: Non-normalized likelihood functions under conditions of summing localization.
In each case sources are located at (−20°, 0°) and (20°, 0°) and produce scaled versions
of the same waveform. Left: Left signal at 50 dB SPL, right signal at 40 dB SPL. Center:
Left signal at 50 dB SPL, right signal at 50 dB SPL. Right: Left signal at 40 dB SPL, right
signal at 50 dB SPL.
with center frequencies {4.22, 5.14, 6.16, 7.26, 8.47, 9.76} kHz, and Q10 values chosen to
match the auditory nerve fiber data of Köppl [16]. In each example we use a Gaussian form
for the temporal and level difference kernels,
$K_x(x(t,\omega_k); \theta,\phi) \propto \exp(-\frac{1}{2}\|x(t,\omega_k) - \bar{x}(\theta,\phi)\|^2/\sigma^2)$ and
$K_z(z(t,\omega_k); \theta,\phi) \propto \exp(-\frac{1}{2}\|z(t,\omega_k) - \bar{z}(\theta,\phi)\|^2/\sigma^2)$, where $\sigma^2 = 0.1$.
The terms $\bar{x}(\theta,\phi)$ and $\bar{z}(\theta,\phi)$ correspond to the time average of the cross correlation
and level difference cues for a broadband noise stimulus. Double polar coordinates are
used to describe source locations. Only locations in the frontal hemisphere are considered.
Ambient noise is present at 10 dB SPL.
7.2 Single source
In figure (2) we show the approximate likelihood function of equation (14) at a single
time during the presentation of a broadband noise stimulus and a 7 kHz tone from direction
(−25°, −15°). In response to the broadband signal there is a peak at the source location. In
response to the tone there is a peak at the true location and significant peaks near (60°, −5°)
and (20°, −25°).
7.3 Multiple sources
In figure (3) we show the response of our model under the condition of summing localization. The top signal shown in figure (4A) was presented from (−20°, 0°) and (20°, 0°)
with no delay between the two sources, but with varied intensities for each signal. In each
case there is a single phantom bump at an intermediate location that is biased toward the
more intense source.
In figure (4) we simulate an echoic environment where the signal at the top of 4A is presented from (−20°, 0°) and a copy delayed by 2 ms shown at the bottom of 4A is presented
from (20o , 0o ). We plot the likelihood function at the three times indicated by vertical dotted lines in 4A. At the first time the initial signal dominates and there is a peak at the
location of the leading source. At the second time when both the leading and lagging
sounds have similar envelope amplitudes there is a phantom bump at an intermediate, al-
Figure 4: Non-normalized likelihoods under simulated echoic conditions. The leading
signal is presented from (−20°, 0°) and the lagging source from (20°, 0°). Both signals
are presented at 50 dB SPL. A: The top signal is the leading signal and the bottom is the
lagging. Vertical lines show times at which the likelihood function is plotted in B, C, D. B:
Likelihood at t = 14.3 ms. C: Likelihood at t = 21.1 ms. D: Likelihood at t = 30.6 ms.
though elevated, location. At the third time where the lagging source dominates there are
peaks at both the leading and lagging locations.
8 Discussion
We used a Bayesian approach to the localization problem faced by the barn owl to guide
our modeling of the computational operations supporting sound localization in the barn
owl. In the context of our computational model, auditory space is initially represented in
terms of a likelihood function parameterized by time difference and level difference cues
computed from the auditory input signals.
In transforming auditory cues to spatial locations, the model relies on stimulus invariance
in the cue values achieved by normalizing the cross correlation vector and computing a
ratio of the left and right signal intensities within each frequency channel. It is not clear
from existing experimental data where or if this invariance occurs in the barn owl's auditory
system.
In constructing a model of the barn owl's solution to the estimation problem, the operations that we employ are constrained to be consistent with the known physiology. As stated
above, physiological data is consistent with the multiplication of temporal difference and
level difference cues in each frequency channel, but not with multiplication across frequency. This model does not explain, however, across frequency nonlinearities that occur
in the processing of temporal difference cues [17].
The likelihood function used in our model is a linear approximation to the likelihood specified in equation (11). The multiplicative model clearly does not explain the response of the
space map to multiple sound sources producing spectrally nonoverlapping signals [15]. The
linear approximation may reflect the requirement to function in a multiple source environment. We must more precisely define the multi-target tracking problem that the barn owl
solves and include all relevant implementation constraints before interpreting the nature of
the approximation.
The tuning of space specific neurons to combinations of ITD and ILD has been interpreted
as a multiplication of ITD and ILD related signals [3]. Our model suggests that, to be
consistent with known physiology, the multiplication of ITD and ILD signals occurs in the
medial portion of the lateral shell of the central nucleus of the inferior colliculus before
frequency convergence [13]. Further experiments must be done to determine if the multiplication is a network property of the first stage of lateral shell neurons or if multiplication
occurs at the level of single neurons in the lateral shell.
We simulated the model's responses under conditions of summing localization and simulated echoes. The model performs as expected for two simultaneous sources with a phantom bump occurring in the likelihood function at a location intermediate between the two
source locations. Under simulated echoic conditions the likelihood shows evidence for
both the leading and lagging source, but only the leading source location appears alone.
This suggests that with this instantaneous estimation procedure the lagging source would
be perceptible as a source location, however, possibly less so than the leading. It is likely
that a feedback mechanism, such as the Bayesian filtering described in equations (9) and
(10), will need to be included to explain the decreased perception of lagging sources.
Acknowledgments
We thank Kip Keller, Klaus Hartung, and Terry Takahashi for providing the head related
transfer functions. We thank Mike Lewicki for providing the natural sound recordings.
This work was supported by the Mathers Foundation.
References
[1] Payne, R.S., "Acoustic location of prey by barn owls (Tyto alba)", J. Exp. Biol., 54: 535-573,
1971.
[2] Knudsen, E.I., Konishi, M., "A neural map of auditory space in the owl", Science, 200: 795-797,
1978.
[3] Peña, J.L., Konishi, M., "Auditory receptive fields created by multiplication", Science, 292: 249-252, 2001.
[4] Keller, C.H., Hartung, K., Takahashi, T.T., "Head-related transfer functions of the barn owl:
measurement and neural responses", Hearing Research, 118: 13-34, 1998.
[5] Lewicki, M.S., "Efficient coding of natural sounds", Nature Neurosci., 5(4): 356-363, 2002.
[6] Martin, K.D., "A computational model of spatial hearing", Masters thesis, MIT, 1995.
[7] Duda, R.O., "Elevation dependence of the interaural transfer function", In Gilkey, R. and Anderson, T.R. (eds.), Binaural and Spatial Hearing, 49-75, 1994.
[8] Slaney, M., "Auditory Toolbox", Apple technical report 45, Apple Computer Inc., 1994.
[9] Schwartz, O., Simoncelli, E.P., "Natural signal statistics and sensory gain control", Nature Neurosci., 4(8): 819-825, 2001.
[10] Sullivan, W.E., Konishi, M., "Segregation of stimulus phase and intensity coding in the cochlear
nucleus of the barn owl", J. Neurosci., 4(7): 1787-1799, 1984.
[11] Yang, L., Monsivais, P., Rubel, E.W., "The superior olivary nucleus and its influence on nucleus
laminaris: A source of inhibitory feedback for coincidence detection in the avian auditory brainstem", J. Neurosci., 19(6): 2313-2325, 1999.
[12] Carr, C.E., Konishi, M., "A circuit for detection of interaural time differences in the brain stem
of the barn owl", J. Neurosci., 10(10): 3227-3246, 1990.
[13] Mazer, J.A., "Integration of parallel processing streams in the inferior colliculus of the barn
owl", Ph.D. thesis, Caltech, 1995.
[14] Brainard, M.S., Knudsen, E.I., Esterly, S.D., "Neural derivation of sound source location: Resolution of spatial ambiguities in binaural cues", J. Acoust. Soc. Am., 91(2): 1015-1026, 1992.
[15] Takahashi, T.T., Keller, C.H., "Representation of multiple sources in the owl's auditory space
map", J. Neurosci., 14(8): 4780-4793, 1994.
[16] Köppl, C., "Frequency tuning and spontaneous activity in the auditory nerve and cochlear nucleus magnocellularis of the barn owl Tyto alba", J. Neurophys., 77: 364-377, 1997.
[17] Takahashi, T.T., Konishi, M., "Selectivity for interaural time difference in the owl's midbrain",
J. Neurosci., 6(12): 3413-3422, 1986.
Towards social robots: Automatic evaluation of
human-robot interaction by face detection and
expression classification
M.S. Bartlett, G. Littlewort, I. Fasel, J. Chenu, T. Kanda,
H. Ishiguro, and J.R. Movellan
Institute for Neural Computation, University of California, San Diego
Intelligent Robotics and Communication Laboratory, ATR, Kyoto, Japan
Email: {gwen, marni, ian, joel, javier}@inc.ucsd.edu
Abstract
Computer animated agents and robots bring a social dimension to human computer interaction and force us to think in new ways about how
computers could be used in daily life. Face to face communication is
a real-time process operating at a time scale of less than a second. In
this paper we present progress on a perceptual primitive to automatically
detect frontal faces in the video stream and code them with respect to 7
dimensions in real time: neutral, anger, disgust, fear, joy, sadness, surprise. The face finder employs a cascade of feature detectors trained with
boosting techniques [13, 2]. The expression recognizer employs a novel
combination of Adaboost and SVM?s. The generalization performance
to new subjects for a 7-way forced choice was 93.3% and 97% correct
on two publicly available datasets. The outputs of the classifier change
smoothly as a function of time, providing a potentially valuable representation to code facial expression dynamics in a fully automatic and
unobtrusive manner. The system was deployed and evaluated for measuring spontaneous facial expressions in the field in an application for
automatic assessment of human-robot interaction.
1 Introduction
Computer animated agents and robots bring a social dimension to human computer interaction and force us to think in new ways about how computers could be used in daily life.
Face to face communication is a real-time process operating at a time scale of less than
a second. Thus fulfilling the idea of machines that interact face to face with us requires
development of robust real-time perceptive primitives. In this paper we present first steps
towards the development of one such primitive: a system that automatically finds faces in
the visual video stream and codes facial expression dynamics in real time. The system automatically detects frontal faces and codes them with respect to 7 dimensions: Joy, sadness,
surprise, anger, disgust, fear, and neutral. Speed and accuracy are enhanced by a novel technique that combines feature selection based on Adaboost with feature integration based on
support vector machines. We host an online demo of the system at http://mplab.ucsd.edu.
The system was trained and tested on two publicly available datasets of facial expressions
collected by experimental psychologists expert in facial behavior. In addition, we deployed
and evaluated the system in an application for recognizing spontaneous facial expressions
from continuous video in the field. We assess the system as a method for automatic measurement of human-robot interaction.
2 Face detection
We developed a real-time face-detection system based on [13] capable of detection and
false positive rates equivalent to the best published results [11, 12, 10, 13]. The system
consists of a cascade of classifiers trained by boosting techniques. Each classifier employs
integral image filters reminiscent of Haar basis functions, which can be computed very fast
at any location and scale in constant time (see Figure 1). Within a single detection window,
there are over 160,000 possible filters of this type. For each stage in the cascade, a subset of
features is chosen using a feature selection procedure based on Adaboost [3].
We enhance the approach in [13] in the following ways: (1) Once a feature is selected by
boosting, we refine the selection by finding the best performing single-feature classifier
from a new set of filters generated by shifting and scaling the chosen filter by two pixels
in each direction, as well as composite filters made by reflecting each shifted and scaled
feature horizontally about the center and superimposing it on the original. This can be
thought of as a single generation genetic algorithm, and is much faster than exhaustively
searching for the best classifier among all 160,000 possible filters and their reflection-based
cousins.
(2) While [13] use Adaboost in their feature selection algorithm, which requires binary
classifiers, we employed Gentleboost, described in [4], which uses real valued features.
Figure 2 shows the first two filters chosen by the system along with the real valued output
of the weak learners (or tuning curves) built on those filters. Note the bimodal distribution
of filter 2.
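For reference, one Gentleboost round can be sketched as fitting a weighted regression
stump to each candidate feature, keeping the best one, and reweighting the examples. The
stump form and threshold grid below are simplifying assumptions; the tuning curves of
Figure 2 correspond to richer real-valued weak learners.

```python
import numpy as np

def gentleboost_round(X, y, w, n_thresh=32):
    """One Gentleboost round: fit a weighted stump f(x) = a*[x > t] + b to labels
    y in {-1, +1} for every feature, pick the least weighted squared error, and
    update the example weights w <- w * exp(-y f). Simplified sketch."""
    best = None
    for j in range(X.shape[1]):
        xj = X[:, j]
        for t in np.quantile(xj, np.linspace(0.05, 0.95, n_thresh)):
            above = xj > t
            # Weighted least-squares fit of the step function's two levels.
            b = np.sum(w * y * ~above) / (np.sum(w * ~above) + 1e-12)
            a = np.sum(w * y * above) / (np.sum(w * above) + 1e-12) - b
            f = a * above + b
            err = np.sum(w * (y - f) ** 2)
            if best is None or err < best[0]:
                best = (err, j, t, a, b)
    _, j, t, a, b = best
    f = a * (X[:, j] > t) + b
    w = w * np.exp(-y * f)
    return (j, t, a, b), w / w.sum()
```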
(3) We have also developed a training procedure so that after each single feature, the system
can decide whether to test another feature or to make a decision. This system retains
information about the continuous outputs of each feature detector rather than converting
to binary decisions at each stage of the cascade. Preliminary results show potential for
dramatic improvements in speed with no loss of accuracy over the current system.
The face detector was trained on 5000 faces and millions of non-face patches from about
8000 images collected from the web by Compaq Research Laboratories. Accuracy on the
CMU-MIT dataset (a standard, public data set for benchmarking frontal face detection
systems) is comparable to [13]. Because the strong classifiers early in the sequence need
very few features to achieve good performance (the first stage can reject a large fraction
of the non-faces using only a few features, requiring about 20 simple operations, or about 60
microprocessor instructions), the average number of features that need to be evaluated for each window is
very small, making the overall system very fast. The source code for the face detector is
freely available at http://www.sourceforge.net/projects/kolmogorov.
3 Facial Expression Classification
3.1
Data set
The facial expression system was trained and tested on Cohn and Kanade's DFAT-504
dataset [6]. This dataset consists of 100 university students ranging in age from 18 to 30
years. 65% were female, 15% were African-American, and 3% were Asian or Latino.
Videos were recoded in analog S-video using a camera located directly in front of the subject. Subjects were instructed by an experimenter to perform a series of 23 facial expres-
Figure 1: Integral image filters (after Viola & Jones, 2001 [13]). a. The value of the
integral image at a pixel is the sum of all the pixels above and to the left of it. b. The sum
of the pixels within a rectangle can be computed from the integral image values at its four
corners. c. Each feature is computed by taking the difference of the sums of the pixels in
the white boxes and grey boxes. Features include those shown in (c), as in [13], plus (d)
the same features superimposed on their reflection about the Y axis.
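The figure's constructions can be written down directly. The sketch below computes an
integral image and evaluates one two-rectangle box-difference feature; the placement,
size, and polarity of the example feature are arbitrary choices, not the trained filters.

```python
import numpy as np

def integral_image(img):
    # ii(x, y) = sum of img over all pixels above and to the left (inclusive).
    return img.cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] from four integral-image references."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0: total -= ii[r0 - 1, c1 - 1]
    if c0 > 0: total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0: total += ii[r0 - 1, c0 - 1]
    return total

def two_rect_feature(ii, r, c, h, w):
    """Horizontal two-rectangle feature: top half minus bottom half of a box."""
    top = box_sum(ii, r, c, r + h // 2, c + w)
    bottom = box_sum(ii, r + h // 2, c, r + h, c + w)
    return top - bottom

img = np.random.rand(48, 48)
ii = integral_image(img)
print(two_rect_feature(ii, 10, 10, 8, 16))
```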
Figure 2: The first two features (a,c) and their respective tuning curves (b,d). Each feature
is shown over the average face. The first tuning curve shows that a dark horizontal region
over a bright horizontal region in the center of the window is evidence for a face, and for
non-face otherwise. The output of the second filter is bimodal. Both a strong positive and
a strong negative output is evidence for a face, while output closer to zero is evidence for
non-face.
sions. Subjects began and ended each display with a neutral face. Before performing each
display, an experimenter described and modeled the desired display. Image sequences from
neutral to target display were digitized into 640 by 480 pixel arrays with 8-bit precision for
grayscale values.
For our study, we selected 313 sequences from the dataset. The only selection criterion
was that a sequence be labeled as one of the 6 basic emotions. The sequences came from
90 subjects, with 1 to 6 emotions per subject. The first and last frames (neutral and peak)
were used as training images and for testing generalization to new subjects, for a total of
625 examples. The trained classifiers were later applied to the entire sequence.
All faces in this dataset were successfully detected. The automatically located faces were
rescaled to 48x48 pixels. The typical distance between the centers of the eyes was roughly
24 pixels. A comparison was also made at double resolution (96x96). No further registration was performed. Other approaches to automatic facial expression recognition include
explicit detection and alignment of internal facial features. The recognition system presented here performs well without that step, providing a considerable savings in processing
time. The images were converted into a Gabor magnitude representation, using a bank of
Gabor filters at 8 orientations and 5 spatial frequencies (4:16 pixels per cycle at 1/2 octave
steps) [7].
4 SVM's and Adaboost
SVM performance was compared to Adaboost for emotion classification. The system performed a 7-way forced choice between the following emotion categories: Happiness, sadness, surprise, disgust, fear, anger, neutral. The classification was performed in two stages.
First, seven binary classifiers were trained to discriminate each emotion from everything
else. The emotion category decision was then implemented by choosing the classifier with
the maximum output for the test example.
Support vector machines (SVM's) are well suited to this task because the high dimensionality of the Gabor representation does not affect training time for kernel classifiers.
Linear, polynomial, and RBF kernels with Laplacian, and Gaussian basis functions were
explored. Linear and RBF kernels employing a unit-width Gaussian performed best, and
are presented here. Generalization to novel subjects was tested using leave-one-subject-out
cross-validation. Results are presented in Table 1.
The features employed for the Adaboost emotion classifier were the individual Gabor filters. There were 48x48x40 = 92160 possible features. A subset of these filters was chosen
using Adaboost. On each training round, the threshold and scale parameter of each filter
was optimized and the feature that provided best performance on the boosted distribution
was chosen.
During Adaboost, training for each emotion classifier continued until the distributions for
the positive and negative samples were separated by a gap proportional to the widths of
the two distributions. The total number of filters selected using this procedure was 538.
Since Adaboost is significantly slower to train than SVM's, we did not do "leave one subject out" cross validation. Instead we separated the subjects randomly into ten groups of
roughly equal size and did "leave one group out" cross validation. SVM performance for
this training strategy is shown for comparison.
Results are shown in Table 1. The generalization performance, 85.0%, was comparable
to linear SVM performance on the leave-group-out testing paradigm, but Adaboost was
substantially faster, as shown in Table 2. Here, the system calculated the output of Gabor
filters less efficiently, as the convolutions were done in pixel space rather than Fourier
space, but the use of 200 times fewer Gabor filters nevertheless resulted in a substantial
speed benefit.
5 AdaSVM's
Adaboost provides an added value of choosing which features are most informative to test
at each step in the cascade. Figure 3a illustrates the first 5 Gabor features chosen for each
emotion. The chosen features show no preference for direction, but the highest frequencies
are chosen more often. Figure 3b shows the number of chosen features at each of the 5
wavelengths used.
A combination approach, in which the Gabor features chosen by Adaboost were used as a
reduced representation for training SVM's (AdaSVM's), outperformed Adaboost by 3.8 percentage points, a difference that was statistically significant (z=1.99, p=0.02). AdaSVM's
outperformed SVM's by an average of 2.7 percentage points, an improvement that was marginally
significant (z = 1.55, p = 0.06).
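A minimal reconstruction of the AdaSVM recipe, using scikit-learn as a stand-in toolkit
(an anachronism relative to the original experiments): boosted stumps over single Gabor
outputs supply the feature ranking, and an SVM is trained on the selected columns. All
API and parameter choices here are assumptions, not the authors' code.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import AdaBoostClassifier
from sklearn.tree import DecisionTreeClassifier

def train_adasvm(G, y, n_rounds=538):
    """G: (n_samples, n_gabor_outputs) Gabor magnitudes; y: binary labels.
    n_rounds mirrors the 538 filters selected in the text."""
    booster = AdaBoostClassifier(DecisionTreeClassifier(max_depth=1),
                                 n_estimators=n_rounds)
    booster.fit(G, y)
    # Each depth-1 tree splits on a single column; those are the chosen Gabors.
    chosen = np.unique([stump.tree_.feature[0] for stump in booster.estimators_])
    svm = SVC(kernel="rbf", gamma=0.5)  # unit-width Gaussian: gamma = 1/(2*sigma^2)
    svm.fit(G[:, chosen], y)
    return chosen, svm
```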
After examination of the frequency distribution of the Gabor filter selected by Adaboost, it
became apparent that higher spatial frequency Gabors and higher resolution images could
potentially improve performance. Indeed, by doubling the resolution to 96x96 and increasing the number of Gabor wavelengths from 5 to 9 so that they spanned 2:32 pixels in 1/2
octave steps improved performance of the nonlinear AdaSVM to 93.3% correct. As the
resolution goes up, the speed benefit of AdaSVM's becomes even more apparent. At the
[Figure 3: panel (a) shows faces labeled ANGER, DISGUST, FEAR, JOY, SADNESS, and
SURPRISE; panel (b) is a histogram titled "Wavelength distribution of Adaboost-chosen
features" with x-axis "wavelength in pixels".]
Figure 3: a. Gabors selected by Adaboost for each expression. White dots indicate locations of all selected Gabors. Below each expression is a linear combination of the real part
of the first 5 Adaboost features selected for that expression. Faces shown are a mean of 10
individuals. b. Wavelength distribution of features selected by Adaboost.
higher resolution, the full Gabor representation increased by a factor of 7, whereas the
number of Gabors selected by Adaboost only increased by a factor of 1.75.
Performance of the system was also evaluated on a second publicly available dataset, Pictures of Facial Affect [1]. We obtained 97% accuracy for generalization to novel subjects,
trained by leave-one-subject-out cross-validation. This is about 10 percentage points higher
than the best previously reported results on this dataset [9, 8].
An emergent property was that the outputs of the classifier change smoothly as a function
of time, providing a potentially valuable representation to code facial expression dynamics
in a fully automatic and unobtrusive manner. (See Figure 5.) In the next section, we apply
this system to assessing spontaneous facial expressions in the field.
              Leave-group-out            Leave-subject-out
              Adaboost    SVM            SVM        AdaSVM
  Linear      85.0        84.8           86.2       88.8
  RBF                     86.9           88.0       90.7

Table 1: Performance of Adaboost, SVM's and AdaSVM's (48x48 images).
              SVM                   Adaboost     AdaSVM
              Lin        RBF                     Lin         RBF
  Time t      t          90t        0.01t        0.01t       0.0125t
  Time t'     t          90t        0.16t        0.16t       0.2t
  Memory      m          90m        3m           3m          3.3m

Table 2: Processing time and memory considerations. Time t' includes the extra time to
calculate the outputs of the 538 Gabors in pixel space for Adaboost and AdaSVM, rather
than the full FFT employed by the SVM's.
6 Deployment and evaluation: Automatic Evaluation of
Human-Robot Interaction
We are currently evaluating the system as a tool for automatically measuring the quality
of human-robot social interaction. This test involves recognition of spontaneous facial
expressions in the continuous video stream during unconstrained interaction with RoboVie,
a social robot under development at ATR and the University of Osaka [5]. This study was
conducted at ATR in Kyoto, Japan. 14 participants, male and female, were instructed to
interact with RoboVie for 5 minutes. Their facial expressions were recorded via 4 video
cameras. The study was followed by a questionnaire in which the participants were asked
to evaluate different aspects of their interaction with RoboVie.
Figure 4: Human response during interaction with the RoboVie robot at ATR is measured
by automatic expression analysis.
Faces were automatically detected and facial expressions classified in the continuous video
streams of each of the four cameras. With the multi-camera paradigm, one or more cameras
often provides a better view than the others. When the face is rotated, partially occluded,
or misaligned, the expression classification is less reliable. A confidence measure from
the face detection step consisted of the final unthresholded output of the cascade passed
through a softmax transform over the four cameras. This measure indicated how much like
a frontal face the system determined the selected window from each camera to be.
We compared the system's expression labels with a form of ground truth from human judgment. Four naive human observers were presented with the videos of each subject at 1/3
speed. The observers indicated the amount of happiness shown by the subject in each video
by turning a dial.
The outputs of the four cameras were integrated by training a linear regression on 32 numbers, the continuous outputs of the seven emotion classifiers (the margin) plus the confidence measure from the face detector for each of the four cameras, to predict the human
facial expression judgments. Figure 5 compares the human judgments with the automated
system. Preliminary results are promising. The automated system predicted the human expression judgments with a correlation coefficient of 0.87, which was within the agreement
range of the four human observers.
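A sketch of this fusion stage: per-camera detector confidences pass through a softmax, and
the resulting 32-dimensional vector (seven emotion margins plus one confidence per camera)
is mapped to a joy score by the fitted linear regression. The parameter values below are
placeholders, not the fitted ones.

```python
import numpy as np

def fuse_cameras(margins, face_scores, weights, bias):
    """margins: list of 4 arrays of shape (7,), the per-camera classifier margins;
    face_scores: raw cascade outputs per camera; weights, bias: fitted regression."""
    conf = np.exp(face_scores - face_scores.max())
    conf = conf / conf.sum()                        # softmax over the 4 cameras
    features = np.concatenate([np.concatenate([m, [c]])
                               for m, c in zip(margins, conf)])  # 4 * (7 + 1) = 32
    return features @ weights + bias

rng = np.random.default_rng(1)
margins = [rng.standard_normal(7) for _ in range(4)]
face_scores = rng.standard_normal(4)
w, b = 0.1 * rng.standard_normal(32), 0.0           # stand-ins for fitted parameters
print(fuse_cameras(margins, face_scores, w, b))
```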
* These are results from one subject. Test results based on 14 subjects will be available in one
week. We are also comparing facial expression measurements by both human and computer to the
self-report questionnaires.
Figure 5: Human labels (blue/dark) compared to automated system labels (red/light) for
'joy' (one subject, one observer); the x-axis is the video frame.
7 Conclusions
Computer animated agents and robots bring a social dimension to human computer interaction and force us to think in new ways about how computers could be used in daily life.
Social robots and agents designed to recognize facial expression might provide a much
more interesting and engaging social interaction, which can benefit applications from automated tutors to entertainment robots. Face to face communication is a real-time process
operating at a time scale of less than a second. The level of uncertainty at this time scale
is considerable, making it necessary for humans and machines to rely on sensory rich perceptual primitives rather than slow symbolic inference processes. In this paper we present
progress on one such perceptual primitive: Real time recognition of facial expressions.
Our results suggest that user independent fully automatic real time coding of basic expressions is an achievable goal with present computer power, at least for applications in
which frontal views or multiple cameras can be assumed. Good performance results were
obtained for directly processing the output of an automatic face detector without the need
for explicit detection and registration of facial features. A novel classification technique
was presented that combines feature selection based on Adaboost with feature integration
based on support vector machines. The AdaSVM's outperformed Adaboost and SVM's
alone, and gave a considerable advantage in speed over SVM's. Strong performance results, 93% and 97% accuracy for generalization to novel subjects, were presented for two
publicly available datasets of facial expressions collected by experimental psychologists
expert in facial expressions.
We introduced a technique for automatically evaluating the quality of human-robot interaction based on the analysis of facial expressions. This test involved recognition of spontaneous facial expressions in the continuous video stream during unconstrained behavior.
The system predicted human judgements of joy with a correlation of 0.87.
Within the past decade, significant advances in machine learning and machine perception
open up the possibility of automatic analysis of facial expressions. Automated systems
will have a tremendous impact on basic research by making facial expression measurement
more accessible as a behavioral measure, and by providing data on the dynamics of facial
behavior at a resolution that was previously unavailable. Such systems will also lay the
foundations for computers that can understand this critical aspect of human communication. Computer systems with this capability have a wide range of applications in basic and
applied research areas, including man-machine communication, security, law enforcement,
psychiatry, education, and telecommunications.
Acknowledgments
Support for this project was provided by ONR N00014-02-1-0616, NSF-ITR IIS-0220141
and IIS-0086107, DCI contract No.2000-I-058500-000, and California Digital Media Innovation Program DiMI 01-10130, and the MIND Institute. This research was supported
in part by the Telecommunications Advancement Organization of Japan.
| 2402 |@word judgement:1 achievable:1 polynomial:1 open:1 instruction:1 grey:1 dramatic:1 series:1 genetic:1 animated:3 past:1 current:1 comparing:1 reminiscent:1 cottrell:1 additive:1 informative:1 designed:1 joy:5 alone:1 intelligence:1 selected:10 fewer:1 advancement:1 provides:2 boosting:5 location:2 preference:1 along:1 consists:2 combine:2 behavioral:1 manner:2 indeed:1 roughly:2 tomaso:1 behavior:3 multi:1 mplab:1 inspired:1 detects:1 automatically:7 lyon:1 gwen:1 window:4 increasing:1 becomes:1 project:2 provided:2 medium:1 substantially:1 developed:2 finding:1 ended:1 sung:1 interactive:1 classifier:15 scaled:1 unit:1 positive:3 fasel:2 before:1 local:1 akamatsu:1 might:1 plus:2 sadness:4 misaligned:1 deployment:1 range:2 statistically:1 tian:1 kah:1 acknowledgment:1 camera:10 testing:2 movellan:2 procedure:3 area:1 cascade:6 composite:1 gabor:17 significantly:1 thought:1 reject:1 confidence:2 suggest:1 symbolic:1 selection:6 marni:1 www:1 equivalent:1 center:3 primitive:5 go:1 resolution:6 continued:1 array:1 spanned:1 osaka:1 kay:1 searching:1 annals:1 diego:1 spontaneous:5 enhanced:1 target:1 user:1 padgett:1 us:1 agreement:1 engaging:1 recognition:10 located:2 lay:1 labeled:1 database:1 calculate:1 region:2 cycle:1 rescaled:1 highest:1 valuable:2 substantial:1 mozer:1 questionnaire:2 rowley:1 asked:1 occluded:1 dynamic:5 exhaustively:1 trained:8 learner:1 basis:2 emergent:1 kolmogorov:1 train:1 separated:2 forced:2 fast:2 detected:2 artificial:1 choosing:2 apparent:2 valued:2 distortion:1 otherwise:1 compaq:1 littlewort:1 statistic:1 think:3 transform:1 final:1 online:1 sequence:6 advantage:1 net:1 interaction:14 achieve:1 sourceforge:1 double:1 assessing:1 intl:1 leave:7 rotated:1 object:3 measured:1 progress:2 strong:4 implemented:1 predicted:2 involves:1 indicate:1 direction:2 correct:2 attribute:1 filter:19 human:23 public:1 everything:1 education:1 generalization:6 preliminary:2 konen:1 ground:1 predict:1 week:1 early:1 recognizer:1 proc:2 outperformed:3 label:3 currently:1 successfully:1 tool:1 mit:2 gaussian:2 aim:1 hil:1 rather:4 sion:1 boosted:1 improvement:2 superimposed:1 psychiatry:1 detect:1 inference:1 entire:1 integrated:1 unobtrusive:2 france:1 pixel:14 overall:1 classification:7 among:1 orientation:1 development:3 spatial:3 integration:2 softmax:1 field:3 once:1 emotion:11 saving:1 equal:1 jones:2 anger:4 others:1 report:3 intelligent:1 employ:3 few:1 grenoble:1 randomly:1 resulted:1 recognize:1 asian:1 individual:2 comprehensive:1 friedman:1 detection:12 organization:1 possibility:1 evaluation:3 joel:1 alignment:1 male:1 light:1 integral:2 capable:1 closer:1 daily:3 necessary:1 poggio:1 respective:1 facial:32 x48:2 desired:1 increased:2 modeling:1 measuring:2 retains:1 yoav:1 happiness:2 neutral:6 subset:2 recognizing:1 conducted:1 front:1 reported:1 peak:1 international:4 accessible:1 contract:1 probabilistic:1 enhance:1 michael:1 von:1 recorded:1 conf:1 expert:2 american:1 japan:3 potential:1 converted:1 student:1 coding:1 includes:1 coefficient:1 inc:1 stream:5 later:1 performed:4 view:4 observer:4 red:1 participant:2 capability:1 ass:1 bright:1 publicly:4 accuracy:5 became:1 kaufmann:1 efficiently:1 judgment:4 weak:1 marginally:1 dimi:1 published:1 expres:1 african:1 classified:1 detector:6 email:1 frequency:4 involved:1 plante:1 dataset:7 experimenter:2 dimensionality:1 javier:1 reflecting:1 higher:4 adaboost:26 response:1 improved:1 friesen:1 evaluated:4 box:2 done:1 stage:4 until:1 correlation:2 horizontal:2 web:1 x96:2 cohn:2 nonlinear:1 assessment:1 
logistic:1 quality:2 indicated:2 consisted:1 lades:1 laboratory:3 white:2 round:1 during:4 width:2 self:1 criterion:1 octave:2 performs:1 bring:3 reflection:2 percent:2 image:9 ranging:1 consideration:1 novel:6 began:1 volume:1 million:1 analog:1 measurement:3 significant:3 cambridge:2 automatic:13 tuning:3 unconstrained:2 dot:1 robot:15 operating:3 female:2 n00014:1 binary:3 came:1 onr:1 life:3 der:1 morgan:1 employed:3 converting:1 freely:1 paradigm:2 imai:1 ii:2 full:2 multiple:1 neurally:1 kyoto:2 technical:2 faster:2 gesture:2 cross:4 lin:2 host:1 finder:1 laplacian:1 impact:1 basic:4 regression:2 vision:1 cmu:1 kernel:3 bimodal:2 robotics:1 addition:1 whereas:1 else:1 source:1 extra:1 subject:20 jordan:1 fft:1 automated:5 affect:3 gave:1 hastie:1 architecture:1 lange:1 idea:1 itr:1 cousin:1 whether:1 expression:34 bartlett:1 passed:1 amount:1 dark:2 ten:1 category:2 reduced:1 http:2 schapire:1 percentage:1 nsf:1 shifted:1 per:2 tibshirani:1 blue:1 group:4 four:6 threshold:1 nevertheless:1 registration:2 rectangle:1 year:1 sum:3 schneiderman:1 uncertainty:1 telecommunication:2 fourth:1 disgust:4 decide:1 patch:1 decision:3 scaling:1 comparable:2 bit:1 followed:1 display:4 refine:1 fourier:1 speed:6 aspect:2 performing:2 combination:3 making:3 psychologist:2 invariant:1 fulfilling:1 previously:2 enforcement:1 mind:1 available:6 operation:1 uam:1 apply:1 petsche:1 slower:1 original:1 include:2 entertainment:1 dial:1 malsburg:1 tutor:1 added:1 strategy:1 distance:1 link:1 atr:4 seven:2 collected:3 discriminant:1 code:6 modeled:1 relationship:1 providing:4 innovation:1 robert:1 potentially:3 dci:1 negative:2 recoded:1 perform:1 convolution:1 datasets:3 ishiguro:2 viola:2 communication:6 digitized:1 frame:2 ucsd:2 introduced:1 optimized:1 security:1 california:2 tremendous:1 trans:1 below:1 perception:1 pattern:2 maeda:1 program:1 built:1 reliable:1 memory:2 video:11 including:1 shifting:1 power:1 critical:1 force:3 examination:1 haar:1 turning:1 rely:1 buhmann:1 representing:1 improve:1 eye:1 picture:2 axis:1 naive:1 law:1 freund:1 fully:3 loss:1 generation:1 interesting:1 proportional:1 age:1 validation:4 foundation:1 digital:1 humanoid:1 agent:4 editor:1 bank:1 classifying:1 supported:1 last:1 understand:1 institute:2 wide:1 face:41 taking:1 unthresholded:1 fg:1 benefit:3 curve:3 dimension:5 calculated:1 evaluating:2 rich:1 sensory:1 instructed:2 made:2 san:2 employing:1 social:8 transaction:1 assumed:1 francisco:1 demo:1 grayscale:1 continuous:6 decade:1 table:5 kanade:4 promising:1 robust:2 ca:1 unavailable:1 interact:2 microprocessor:1 did:2 icann:1 paul:1 gentleboost:1 benchmarking:1 deployed:2 slow:1 precision:1 explicit:2 perceptual:3 wavelet:1 ian:1 minute:1 ono:1 explored:1 svm:16 evidence:3 false:1 magnitude:1 illustrates:1 margin:1 gap:1 surprise:4 suited:1 smoothly:2 photograph:1 wavelength:5 appearance:1 visual:1 horizontally:1 partially:1 doubling:1 fear:4 truth:1 ma:1 goal:1 rbf:5 towards:2 man:1 considerable:3 change:2 ekman:1 crl:1 typical:1 determined:1 baluja:1 total:2 discriminate:1 experimental:2 superimposing:1 perceptive:1 support:4 internal:1 frontal:5 ucsf:1 evaluate:1 tested:3 |
1,544 | 2,403 | Invariant Pattern Recognition
by Semidefinite Programming Machines
Thore Graepel
Microsoft Research Ltd.
Cambridge, UK
[email protected]
Ralf Herbrich
Microsoft Research Ltd.
Cambridge, UK
[email protected]
Abstract
Knowledge about local invariances with respect to given pattern
transformations can greatly improve the accuracy of classification.
Previous approaches are either based on regularisation or on the generation of virtual (transformed) examples. We develop a new framework for learning linear classifiers under known transformations based
on semidefinite programming. We present a new learning algorithm, the Semidefinite Programming Machine (SDPM), which is able to
find a maximum margin hyperplane when the training examples are
polynomial trajectories instead of single points. The solution is found
to be sparse in dual variables and allows to identify those points on
the trajectory with minimal real-valued output as virtual support vectors. Extensions to segments of trajectories, to more than one transformation parameter, and to learning with kernels are discussed. In
experiments we use a Taylor expansion to locally approximate rotational invariance in pixel images from USPS and find improvements
over known methods.
1
Introduction
One of the central problems of pattern recognition is the exploitation of known invariances in the pattern domain. In images these invariances may include rotation,
translation, shearing, scaling, brightness, and lighting direction. In addition, specific
domains such as handwritten digit recognition may exhibit invariances such as line
thinning/thickening and other non-uniform deformations [8]. The challenge is to combine the training sample with the knowledge of invariances to obtain a good classifier.
Possibly the most straightforward way of incorporating invariances is by including virtual examples into the training sample which have been generated from actual examples by the application of the invariance $T : \mathbb{R} \times \mathbb{R}^n \to \mathbb{R}^n$ at some fixed $\theta \in \mathbb{R}$, e.g. the method of virtual support vectors [7]. Images $\mathbf{x}$ subjected to the transformation $T(\theta, \cdot)$ describe highly non-linear trajectories or manifolds in pixel space. The tangent distance [8] approximates the distance between the trajectories (manifolds) by the distance between their tangent vectors (planes) at a given value $\theta = \theta_0$ and can be used with any kind of distance-based classifier. Another approach, tangent prop [8], incorporates the invariance $T$ directly into the objective function for learning by penalising large values of the derivative of the classification function w.r.t. the given transformation parameter. A similar regulariser can be applied to support vector machines [1].
We take up the idea of considering the trajectory given by the combination of training
vector and transformation. While data in machine learning are commonly represented
as vectors $\mathbf{x} \in \mathbb{R}^n$, we instead consider more complex training examples, each of which is represented as a (usually infinite) set
$$\{T(\theta, \mathbf{x}_i) : \theta \in \mathbb{R}\} \subset \mathbb{R}^n\,, \qquad (1)$$
which constitutes a trajectory in $\mathbb{R}^n$. Our goal is to learn a linear classifier that separates well the training trajectories belonging to different classes. In practice, we may be given a 'standard' training example $\mathbf{x}$ together with a differentiable transformation $T$ representing an invariance of the learning problem. The problem can be solved if the transformation $T$ is approximated by a transformation $\tilde{T}$ polynomial in $\theta$, e.g., a Taylor expansion of the form
$$\tilde{T}(\theta, \mathbf{x}_i) = \sum_{j=0}^{r} \frac{\theta^j}{j!} \left.\frac{d^j T(\theta, \mathbf{x}_i)}{d\theta^j}\right|_{\theta=0} = \sum_{j=0}^{r} \theta^j\, (\mathbf{X}_i)_{j,\cdot}\,. \qquad (2)$$
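As a concrete illustration of (2), the sketch below builds the matrix $\mathbf{X}_i$ for a rotation invariance by numerically differentiating the transformation with central finite differences. The rotation routine, radian convention, and step size are our own assumptions, not part of the paper.

```python
import numpy as np
from scipy.ndimage import rotate  # our choice of rotation routine

def taylor_rows(x_img, r=2, h=1e-2):
    """Rows (X_i)_{j,.} of the degree-r expansion (2) for a rotation
    trajectory, with derivatives approximated by central differences."""
    def T(theta):  # theta in radians (assumption); rotate wants degrees
        return rotate(x_img, np.degrees(theta), reshape=False, order=1).ravel()
    rows = [T(0.0)]                                    # j = 0: the image itself
    if r >= 1:
        rows.append((T(h) - T(-h)) / (2 * h))          # j = 1: T'(0)
    if r >= 2:
        rows.append((T(h) - 2 * T(0.0) + T(-h)) / (2 * h * h))  # T''(0) / 2!
    return np.vstack(rows)                             # shape (r + 1, n)
```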
Our approach is based on a powerful theorem by Nesterov [5] which states that the set $\mathcal{P}^+_{2l}$ of polynomials of degree $2l$ non-negative on the entire real line is a convex set representable by positive semidefinite (psd) constraints. Hence, optimisation over $\mathcal{P}^+_{2l}$ can be formulated as a semidefinite program (SDP). Recall that an SDP [9] is given by a linear objective function minimised subject to a linear matrix inequality (LMI),
$$\min_{\mathbf{w} \in \mathbb{R}^n} \mathbf{c}^\top \mathbf{w} \quad \text{subject to} \quad \mathbf{A}(\mathbf{w}) := \sum_{j=1}^{n} w_j \mathbf{A}_j - \mathbf{B} \succeq 0\,, \qquad (3)$$
with $\mathbf{A}_j \in \mathbb{R}^{m \times m}$ for all $j \in \{0, \dots, n\}$. The LMI $\mathbf{A}(\mathbf{w}) \succeq 0$ means that $\mathbf{A}(\mathbf{w})$ is required to be positive semidefinite, i.e., that for all $\mathbf{v} \in \mathbb{R}^m$ we have
$$\mathbf{v}^\top \mathbf{A}(\mathbf{w})\, \mathbf{v} = \sum_{j=1}^{n} w_j\, \mathbf{v}^\top \mathbf{A}_j \mathbf{v} - \mathbf{v}^\top \mathbf{B} \mathbf{v} \geq 0\,,$$
which reveals that LMI constraints
correspond to infinitely many linear constraints. This expressive power can be used to enforce constraints for training examples as given by (1), i.e., constraints required to hold for all values $\theta \in \mathbb{R}$. Based on this representability theorem for non-negative polynomials we develop a learning algorithm, the Semidefinite Programming Machine (SDPM), that maximises the margin on polynomial training samples, much like the support vector machine [2] for ordinary single vector data.
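To fix ideas, here is a minimal sketch of the SDP template (3) in cvxpy; the random data, the norm bound that keeps the toy problem bounded, and the solver choice are all our own assumptions.

```python
import cvxpy as cp
import numpy as np

# Sketch of the SDP template (3) on made-up data:
# minimise c'w  subject to  sum_j w_j A_j - B  positive semidefinite.
rng = np.random.default_rng(0)
n, m = 3, 4
c = rng.normal(size=n)
A = [(M + M.T) / 2 for M in rng.normal(size=(n, m, m))]  # symmetric A_j
B = -np.eye(m)                                           # w = 0 is feasible

w = cp.Variable(n)
lmi = sum(w[j] * A[j] for j in range(n)) - B
prob = cp.Problem(cp.Minimize(c @ w), [lmi >> 0, cp.norm(w) <= 10])
prob.solve()  # any installed SDP-capable solver, e.g. SCS
print(prob.status, w.value)
```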
2
Semidefinite Programming Machines
Linear Classifiers and Polynomial Examples We consider binary classification problems and linear classifiers. Given a training sample $((\mathbf{x}_1, y_1), \dots, (\mathbf{x}_m, y_m)) \in (\mathbb{R}^n \times \{-1, +1\})^m$ we aim at learning a weight vector $\mathbf{w} \in \mathbb{R}^n$ to classify examples $\mathbf{x}$ by $y(\mathbf{x}) = \operatorname{sign}(\mathbf{w}^\top \mathbf{x})$.¹ Assuming linear separability of the training sample, the principle of empirical risk minimisation recommends finding a weight vector $\mathbf{w}$ such that for all $i \in \{1, \dots, m\}$ we have $y_i\, \mathbf{w}^\top \mathbf{x}_i \geq 0$. As such this constitutes a linear feasibility problem and is easily solved by the perceptron algorithm [6]. Additionally requiring the solution to maximise the margin leads to the well-known quadratic program of support vector learning [2].
In order to be able to cope with known invariances $T(\theta, \cdot)$ we would like to generalise the above setting to the following feasibility problem:
$$\text{find } \mathbf{w} \in \mathbb{R}^n \text{ such that } \forall i \in \{1, \dots, m\} : \forall \theta \in \mathbb{R} : \; y_i\, \mathbf{w}^\top \mathbf{x}_i(\theta) \geq 0\,, \qquad (4)$$
¹ We omit an explicit threshold to unclutter the presentation.
[Figure 1 here: left panel, feature plot with axes φ1(x) and φ2(x); right panel, SVM version space (top) and SDPM version space (bottom).]
Figure 1: (Left) Approximated trajectories for rotated USPS images (2) for r = 1
(dashed line) and r = 2 (dotted line). The features are the mean pixel intensities in
the top and bottom half of the image. (Right) Set of weight vectors w which are
consistent with the six images (top) and the six trajectories (bottom). The SDPM
version space is smaller and thus determines the weight vector more precisely. The
dot corresponds to the separating plane in the left plot.
that is, we would require the weight vector to classify correctly every transformed training example $\mathbf{x}_i(\theta) := T(\theta, \mathbf{x}_i)$ for every value of the transformation parameter $\theta$. The situation is illustrated in Figure 1. In general, such a set of constraints leads to a very complex and difficult-to-solve feasibility problem. As a consequence, we consider only transformations $\tilde{T}(\theta, \mathbf{x})$ of polynomial form, i.e., $\tilde{\mathbf{x}}_i(\theta) := \tilde{T}(\theta, \mathbf{x}_i) = \mathbf{X}_i^\top \boldsymbol{\lambda}$, each polynomial example $\tilde{\mathbf{x}}_i(\theta)$ being represented by a polynomial in the row vectors of $\mathbf{X}_i \in \mathbb{R}^{(r+1) \times n}$, with $\boldsymbol{\lambda} := (1, \theta, \dots, \theta^r)^\top$. Then the problem (4) can be written as
$$\text{find } \mathbf{w} \in \mathbb{R}^n \text{ such that } \forall i \in \{1, \dots, m\} : \forall \theta \in \mathbb{R} : \; y_i\, \mathbf{w}^\top \mathbf{X}_i^\top \boldsymbol{\lambda} \geq 0\,, \qquad (5)$$
which is equivalent to finding a weight vector $\mathbf{w}$ such that the polynomials $p_i(\theta) := y_i\, \mathbf{w}^\top \mathbf{X}_i^\top \boldsymbol{\lambda}$ are non-negative everywhere, i.e., $p_i \in \mathcal{P}_r^+$. The following proposition by Nesterov [5] paves the way for an SDP formulation of the above problem if $r = 2l$.
Proposition 1 (SD Representation of Non-Negative Polynomials [5]). The set $\mathcal{P}^+_{2l}$ of polynomials non-negative everywhere on the real line is SD-representable:
1. For every $\mathbf{P} \succeq 0$ the polynomial $p(\theta) = \boldsymbol{\lambda}^\top \mathbf{P} \boldsymbol{\lambda}$ is non-negative everywhere.
2. For every polynomial $p \in \mathcal{P}^+_{2l}$ there exists a $\mathbf{P} \succeq 0$ such that $p(\theta) = \boldsymbol{\lambda}^\top \mathbf{P} \boldsymbol{\lambda}$.
Proof. Any polynomial $p \in \mathcal{P}_{2l}$ can be written as $p(\theta) = \boldsymbol{\lambda}^\top \mathbf{P} \boldsymbol{\lambda}$, where $\mathbf{P} = \mathbf{P}^\top \in \mathbb{R}^{(l+1) \times (l+1)}$. Statement 1: $\mathbf{P} \succeq 0$ implies $\forall \theta \in \mathbb{R} : p(\theta) = \boldsymbol{\lambda}^\top \mathbf{P} \boldsymbol{\lambda} = \|\mathbf{P}^{1/2} \boldsymbol{\lambda}\|^2 \geq 0$, hence $p \in \mathcal{P}^+_{2l}$. Statement 2: Every non-negative polynomial $p \in \mathcal{P}^+_{2l}$ can be written as a sum of squared polynomials [4], hence $\exists q_i \in \mathcal{P}_l : p(\theta) = \sum_i q_i^2(\theta) = \boldsymbol{\lambda}^\top \left(\sum_i \mathbf{q}_i \mathbf{q}_i^\top\right) \boldsymbol{\lambda}$, where $\mathbf{P} := \sum_i \mathbf{q}_i \mathbf{q}_i^\top \succeq 0$ and $\mathbf{q}_i$ is the coefficient vector of polynomial $q_i$.
Maximising Margins on Polynomial Samples Here we develop an SDP formulation for learning a maximum margin classifier given the polynomial constraints
(5). It is well-known that SDPs include quadratic programs as a special case [9].
The squared objective $\|\mathbf{w}\|^2$ is minimised by replacing it with an auxiliary variable $t$ subject to a quadratic constraint $t \geq \|\mathbf{w}\|^2$ that is written as an LMI using Schur's complement lemma,
$$\min_{(\mathbf{w}, t)} \frac{1}{2}\, t \quad \text{subject to} \quad \mathbf{F}(\mathbf{w}, t) := \begin{pmatrix} \mathbf{I}_n & \mathbf{w} \\ \mathbf{w}^\top & t \end{pmatrix} \succeq 0\,,$$
$$\text{and} \quad \forall i : \; \mathbf{G}(\mathbf{w}, \mathbf{X}_i, y_i) := \mathbf{G}_0 + \sum_{j=1}^{n} w_j\, \mathbf{G}_j\!\left((\mathbf{X}_i)_{\cdot,j}, y_i\right) \succeq 0\,. \qquad (6)$$
This constitutes an SDP as in (3) by the fact that a block-diagonal matrix is psd if
and only if all its diagonal blocks are psd.
For the sake of illustration consider the case of l = 0 (the simplest non-trivial case).
The matrix $\mathbf{G}(\mathbf{w}, \mathbf{X}_i, y_i)$ reduces to a scalar $y_i\, \mathbf{w}^\top \mathbf{x}_i - 1$, which translates into the standard SVM constraint $y_i\, \mathbf{w}^\top \mathbf{x}_i \geq 1$, linear in $\mathbf{w}$.
For the case $l = 1$ we have $\mathbf{G}(\mathbf{w}, \mathbf{X}_i, y_i) \in \mathbb{R}^{2 \times 2}$ and
$$\mathbf{G}(\mathbf{w}, \mathbf{X}_i, y_i) = \begin{pmatrix} y_i \mathbf{w}^\top (\mathbf{X}_i)_{0,\cdot} - 1 & \tfrac{1}{2}\, y_i \mathbf{w}^\top (\mathbf{X}_i)_{1,\cdot} \\ \tfrac{1}{2}\, y_i \mathbf{w}^\top (\mathbf{X}_i)_{1,\cdot} & y_i \mathbf{w}^\top (\mathbf{X}_i)_{2,\cdot} \end{pmatrix}\,. \qquad (7)$$
Although we require $\mathbf{G}(\mathbf{w}, \mathbf{X}_i, y_i)$ to be psd, the resulting optimisation problem can be formulated in terms of a second-order cone program (SOCP) because the matrices involved are only $2 \times 2$.²
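For concreteness, a minimal cvxpy sketch of the $l = 1$ problem (6)-(7) follows, written in the SOCP form just mentioned; the function name and data layout are our own assumptions, not the authors' implementation.

```python
import cvxpy as cp

def sdpm_l1(Xs, ys):
    """Hedged sketch of the l = 1 primal (6)-(7). Each X in Xs is a
    (3, n) array whose rows are (X_i)_{0,.}, (X_i)_{1,.}, (X_i)_{2,.};
    the 2x2 psd constraint [[a, b], [b, c]] >= 0 is rewritten as the
    second-order cone constraint ||(2b, a - c)|| <= a + c."""
    n = Xs[0].shape[1]
    w, t = cp.Variable(n), cp.Variable()
    cons = [cp.sum_squares(w) <= t]        # Schur form of F(w, t) >= 0
    for X, y in zip(Xs, ys):
        g = y * (X @ w)                    # g[j] = y_i w'(X_i)_{j,.}
        a, b, c = g[0] - 1, g[1] / 2, g[2]
        cons.append(cp.SOC(a + c, cp.hstack([2 * b, a - c])))
    cp.Problem(cp.Minimize(t), cons).solve()
    return w.value
```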
For the case $l \geq 2$ the resulting program constitutes a genuine SDP. Again for the sake of illustration we consider the case $l = 2$ first. Since a polynomial $p$ of degree four is fully determined by its five coefficients $p_0, \dots, p_4$, but the symmetric matrix $\mathbf{P} \in \mathbb{R}^{3 \times 3}$ in $p(\theta) = \boldsymbol{\lambda}^\top \mathbf{P} \boldsymbol{\lambda}$ has six degrees of freedom, we require one auxiliary variable $u_i$ per training example,
$$\mathbf{G}(\mathbf{w}, u_i, \mathbf{X}_i, y_i) = \frac{1}{2}\begin{pmatrix} 2 y_i \mathbf{w}^\top (\mathbf{X}_i)_{0,\cdot} - 2 & y_i \mathbf{w}^\top (\mathbf{X}_i)_{1,\cdot} & y_i \mathbf{w}^\top (\mathbf{X}_i)_{2,\cdot} - u_i \\ y_i \mathbf{w}^\top (\mathbf{X}_i)_{1,\cdot} & 2 u_i & y_i \mathbf{w}^\top (\mathbf{X}_i)_{3,\cdot} \\ y_i \mathbf{w}^\top (\mathbf{X}_i)_{2,\cdot} - u_i & y_i \mathbf{w}^\top (\mathbf{X}_i)_{3,\cdot} & y_i \mathbf{w}^\top (\mathbf{X}_i)_{4,\cdot} \end{pmatrix}\,.$$
In general, since a polynomial of degree 2l has 2l + 1 coefficients and a symmetric
$(l + 1) \times (l + 1)$ matrix has $(l + 1)(l + 2)/2$ degrees of freedom, we require $(l - 1)l/2$
auxiliary variables.
Dual Program and Complementarity Let us consider the dual SDPs corresponding to the optimisation problems above. For the sake of clarity, we restrict the
presentation to the case l = 1. The dual of the general SDP (3) is given by
$$\max_{\boldsymbol{\Lambda} \in \mathbb{R}^{m \times m}} \operatorname{tr}(\mathbf{B} \boldsymbol{\Lambda}) \quad \text{subject to} \quad \forall j \in \{1, \dots, n\} : \operatorname{tr}(\mathbf{A}_j \boldsymbol{\Lambda}) = c_j\,; \quad \boldsymbol{\Lambda} \succeq 0\,,$$
where we introduced a matrix $\boldsymbol{\Lambda}$ of dual variables. The complementarity conditions for the optimal solution $(\mathbf{w}^*, t^*)$ read $\mathbf{A}((\mathbf{w}^*, t^*))\, \boldsymbol{\Lambda}^* = 0$. The dual formulation of (6) with matrix (7) combined with the $\mathbf{F}(\mathbf{w}, t)$ part of the complementarity conditions reads
$$\max_{(\boldsymbol{\alpha}, \boldsymbol{\beta}, \boldsymbol{\gamma}) \in \mathbb{R}^{3m}} \; -\frac{1}{2} \sum_{i=1}^{m} \sum_{j=1}^{m} y_i y_j \left[\tilde{\mathbf{x}}(\alpha_i, \beta_i, \gamma_i, \mathbf{X}_i)\right]^\top \left[\tilde{\mathbf{x}}(\alpha_j, \beta_j, \gamma_j, \mathbf{X}_j)\right] + \sum_{i=1}^{m} \alpha_i$$
$$\text{subject to} \quad \forall i \in \{1, \dots, m\} : \; \mathbf{M}_i := \begin{pmatrix} \alpha_i & \beta_i \\ \beta_i & \gamma_i \end{pmatrix} \succeq 0\,, \qquad (8)$$
² The characteristic polynomial of a $2 \times 2$ matrix is quadratic and has at most two solutions. The condition that the lower eigenvalue be non-negative can be expressed as a second-order cone constraint. The SOCP formulation, if applicable, can be solved more efficiently than the SDP formulation.
where we define extrapolated training examples $\tilde{\mathbf{x}}(\alpha_i, \beta_i, \gamma_i, \mathbf{X}_i) := \alpha_i (\mathbf{X}_i)_{0,\cdot} + \beta_i (\mathbf{X}_i)_{1,\cdot} + \gamma_i (\mathbf{X}_i)_{2,\cdot}$. As before, this program with quadratic objective and psd constraints can be formulated as a standard SDP in the form (3) and is easily solved by a standard SDP solver³. In addition, the complementarity conditions reveal that the optimal weight vector $\mathbf{w}^*$ can be expanded as
$$\mathbf{w}^* = \sum_{i=1}^{m} y_i\, \tilde{\mathbf{x}}(\alpha_i, \beta_i, \gamma_i, \mathbf{X}_i)\,, \qquad (9)$$
in analogy to the corresponding result for support vector machines [2].
It remains to analyse the complementarity conditions related to the example-related $\mathbf{G}(\mathbf{w}, \mathbf{X}_i, y_i)$ constraints in (6). Using (7) and assuming primal and dual feasibility, we obtain for all $i \in \{1, \dots, m\}$ at the solution $(\mathbf{w}^*, t^*, \mathbf{M}_i^*)$,
$$\mathbf{G}(\mathbf{w}^*, \mathbf{X}_i, y_i)\, \mathbf{M}_i^* = 0\,, \qquad (10)$$
the trace of which translates into
$$y_i\, \mathbf{w}^{*\top} \left[\alpha_i^* (\mathbf{X}_i)_{0,\cdot} + \beta_i^* (\mathbf{X}_i)_{1,\cdot} + \gamma_i^* (\mathbf{X}_i)_{2,\cdot}\right] = \alpha_i^*\,. \qquad (11)$$
These relations enable us to characterise the solution by the following propositions:
Proposition 2 (Sparse Expansion). The expansion (9) of $\mathbf{w}^*$ in terms of $\mathbf{X}_i$ is sparse: only those examples $\mathbf{X}_i$ ('support vectors') may have non-zero expansion coefficients $\alpha_i^*$ which lie on the margin, i.e., for which $\det(\mathbf{G}(\mathbf{w}^*, \mathbf{X}_i, y_i)) = 0$. Furthermore, in this case $\alpha_i^* = 0$ implies $\beta_i^* = \gamma_i^* = 0$.
Proof. We assume $\alpha_i^* \neq 0$ and derive a contradiction. From $\mathbf{G}(\mathbf{w}^*, \mathbf{X}_i, y_i) \succ 0$ we conclude, using Proposition 1, that for all $\theta \in \mathbb{R}$ we have $y_i \mathbf{w}^{*\top}\!\left((\mathbf{X}_i)_{0,\cdot} + \theta (\mathbf{X}_i)_{1,\cdot} + \theta^2 (\mathbf{X}_i)_{2,\cdot}\right) > 1$. Furthermore, we conclude from (10) that $\det(\mathbf{M}_i^*) = \alpha_i^* \gamma_i^* - \beta_i^{*2} = 0$, which together with the assumption $\alpha_i^* \neq 0$ implies that there exists $\hat{\theta} \in \mathbb{R}$ such that $\beta_i^* = \hat{\theta} \alpha_i^*$ and $\gamma_i^* = \beta_i^{*2}/\alpha_i^* = \hat{\theta}^2 \alpha_i^*$. Inserting this into (11) leads to a contradiction, hence $\alpha_i^* = 0$. Then $\det(\mathbf{M}_i^*) = 0$ implies $\beta_i^* = 0$, and the fact that $\mathbf{G}(\mathbf{w}^*, \mathbf{X}_i, y_i) \succ 0 \Rightarrow y_i \mathbf{w}^{*\top} (\mathbf{X}_i)_{2,\cdot} \neq 0$ ensures that $\gamma_i^* = 0$ holds as well.
Proposition 3 (Truly Virtual Support Vectors). For all examples $\mathbf{X}_i$ lying on the margin, i.e., satisfying $\det(\mathbf{G}(\mathbf{w}^*, \mathbf{X}_i, y_i)) = 0$ and $\det(\mathbf{M}_i^*) = 0$, there exist $\theta_i \in \mathbb{R} \cup \{\infty\}$ such that the optimal weight vector $\mathbf{w}^*$ can be written as
$$\mathbf{w}^* = \sum_{i=1}^{m} \alpha_i^* y_i\, \tilde{\mathbf{x}}_i(\theta_i) = \sum_{i=1}^{m} y_i \left(\alpha_i^* (\mathbf{X}_i)_{0,\cdot} + \alpha_i^* \theta_i (\mathbf{X}_i)_{1,\cdot} + \alpha_i^* \theta_i^2 (\mathbf{X}_i)_{2,\cdot}\right)\,.$$
Proof. (sketch) We have $\det(\mathbf{M}_i^*) = \alpha_i^* \gamma_i^* - \beta_i^{*2} = 0$. We only need to consider $\alpha_i^* \neq 0$, in which case there exists $\theta_i^*$ such that $\beta_i^* = \theta_i^* \alpha_i^*$ and $\gamma_i^* = \theta_i^{*2} \alpha_i^*$. The other cases are ruled out by the complementarity conditions (10).
Based on this proposition it is possible not only to identify which examples $\mathbf{X}_i$ are used in the expansion of the optimal weight vector $\mathbf{w}^*$, but also the corresponding values $\theta_i^*$ of the transformation parameter $\theta$. This extends the idea of virtual support vectors [7] in that Semidefinite Programming Machines are capable of finding truly virtual support vectors that were not explicitly provided in the training sample.
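In practice, Propositions 2 and 3 suggest a simple post-processing of a dual solution of (8): read off the support vectors from $\alpha^*$ and recover each virtual transformation parameter as $\theta_i = \beta_i^*/\alpha_i^*$. A hedged numpy sketch (variable names are ours):

```python
import numpy as np

def virtual_support_vectors(alpha, beta, Xs, ys, tol=1e-8):
    """Post-process a dual solution of (8): by Proposition 3,
    theta_i = beta_i / alpha_i whenever alpha_i != 0."""
    w, svs = 0.0, []
    for a, b, X, y in zip(alpha, beta, Xs, ys):
        if a > tol:                        # on-margin example (Proposition 2)
            theta = b / a
            x_virt = X[0] + theta * X[1] + theta**2 * X[2]  # x_i(theta_i)
            w = w + y * a * x_virt                          # term of (9)
            svs.append((theta, x_virt))
    return w, svs
```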
³ We used the SDP solver SeDuMi together with the LMI parser Yalmip under MATLAB (see also http://www-user.tu-chemnitz.de/~helmberg/semidef.html).
3
Extensions to SDPMs
Optimisation on a Segment In many applications it may not be desirable to enforce correct classification on the entire trajectory given by the polynomial example $\tilde{\mathbf{x}}(\theta)$. In particular, when the polynomial is used as a local approximation to a global invariance we would like to restrict the example to a segment of the trajectory. To this end consider the following corollary to Proposition 1.
Corollary 1 (SD-Representability on a segment [5]). For any $l \in \mathbb{N}$, the set $\mathcal{P}_l^+(-\tau, \tau)$ of polynomials non-negative on a segment $[-\tau, \tau]$ is SD-representable.
Proof. (sketch) Consider a polynomial $p \in \mathcal{P}_l^+(-\tau, \tau)$ where $p := x \mapsto \sum_{i=0}^{l} p_i x^i$ and
$$q := x \mapsto \left(1 + x^2\right)^l \cdot \left[p\!\left(\tau\left(2x^2 \left(1 + x^2\right)^{-1} - 1\right)\right)\right]\,.$$
If $q \in \mathcal{P}^+_{2l}$ is non-negative everywhere then $p$ is non-negative in $[-\tau, \tau]$.
The corollary shows how we can restrict the examples $\tilde{\mathbf{x}}(\theta)$ to a segment $\theta \in [-\tau, \tau]$ by effectively doubling the degree of the polynomial used. This is the SDPM version used in the experiments in Section 4. Note that the matrix $\mathbf{G}(\mathbf{w}, \mathbf{X}_i, y_i)$ is sparse because the resulting polynomial contains only even powers of its argument.
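A quick numerical sanity check of the substitution in Corollary 1 (entirely our own illustration): the map $x \mapsto \tau(2x^2/(1+x^2) - 1)$ sweeps $[-\tau, \tau)$, so non-negativity of $q$ on all of $\mathbb{R}$ corresponds to non-negativity of $p$ on the segment.

```python
import numpy as np

def q_from_p(p_coeffs, tau, x):
    """Evaluate q(x) = (1 + x^2)^l * p(tau * (2x^2/(1+x^2) - 1))
    for p given by its coefficients p_0, ..., p_l (Corollary 1)."""
    l = len(p_coeffs) - 1
    s = tau * (2 * x**2 / (1 + x**2) - 1)            # s sweeps [-tau, tau)
    p_val = sum(c * s**i for i, c in enumerate(p_coeffs))
    return (1 + x**2) ** l * p_val

p = [0.5, 0.0, 1.0]          # p(s) = 0.5 + s^2, positive everywhere
x = np.linspace(-50, 50, 10001)
assert np.all(q_from_p(p, tau=2.0, x=x) >= 0)        # q >= 0 on a grid of R
```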
Multiple Transformation Parameters In practice it would be desirable to treat
more than one transformation at once. For example, in handwritten digit recognition
transformations like rotation, scaling, translation, shearing, thinning/thickening etc.
may all be relevant [8]. Unfortunately, Proposition 1 only holds for polynomials in
one variable. However, its first statement may be generalised to polynomials of more
than one variable: for every psd matrix $\mathbf{P} \succeq 0$ the polynomial $p(\boldsymbol{\theta}) = \boldsymbol{\lambda}_{\boldsymbol{\theta}}^\top \mathbf{P} \boldsymbol{\lambda}_{\boldsymbol{\theta}}$ is non-negative everywhere, even if $\lambda_i$ is any monomial in $\theta_1, \dots, \theta_D$. This means that optimisation is only over a subset of these polynomials⁴. Considering polynomials of degree two and $\boldsymbol{\lambda}_{\boldsymbol{\theta}} := (1, \theta_1, \dots, \theta_D)^\top$ we have,
$$\tilde{\mathbf{x}}_i(\boldsymbol{\theta}) \approx \boldsymbol{\lambda}_{\boldsymbol{\theta}}^\top \begin{pmatrix} \mathbf{x}_i(\mathbf{0}) & \tfrac{1}{2} \nabla_{\boldsymbol{\theta}}^\top \mathbf{x}_i(\mathbf{0}) \\ \tfrac{1}{2} \nabla_{\boldsymbol{\theta}}\, \mathbf{x}_i(\mathbf{0}) & \tfrac{1}{2} \nabla_{\boldsymbol{\theta}} \nabla_{\boldsymbol{\theta}}^\top\, \mathbf{x}_i(\mathbf{0}) \end{pmatrix} \boldsymbol{\lambda}_{\boldsymbol{\theta}}\,,$$
where $\nabla_{\boldsymbol{\theta}}$ denotes the gradient and $\nabla_{\boldsymbol{\theta}} \nabla_{\boldsymbol{\theta}}^\top$ denotes the Hessian operator.
Note that the scaling behaviour with regard to the number D of parameters is more
benign than that of the naive method of adding virtual examples to the training
sample on a grid. Such a procedure would incur an exponential growth in the number
of examples, whereas the approximation above only exhibits a linear growth in the
size of the matrices involved.
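A sketch of assembling the per-coordinate quadratic-form matrices above from numerical gradients and Hessians follows; the callable interface and step size are our own assumptions.

```python
import numpy as np

def quad_blocks(T, D, h=1e-3):
    """Per output coordinate s, build the (1+D)x(1+D) matrix M_s with
    x_s(theta) ~ lam' M_s lam, lam = (1, theta_1, ..., theta_D).
    T: callable mapping a parameter vector theta (D,) to R^n (assumption)."""
    e = np.eye(D) * h
    f0 = T(np.zeros(D))
    n = f0.shape[0]
    grad = np.array([(T(e[d]) - T(-e[d])) / (2 * h) for d in range(D)])   # (D, n)
    hess = np.array([[(T(e[a] + e[b]) - T(e[a] - e[b])
                       - T(e[b] - e[a]) + T(-e[a] - e[b])) / (4 * h * h)
                      for b in range(D)] for a in range(D)])              # (D, D, n)
    M = np.zeros((n, D + 1, D + 1))
    M[:, 0, 0] = f0
    M[:, 0, 1:] = grad.T / 2
    M[:, 1:, 0] = grad.T / 2
    M[:, 1:, 1:] = np.moveaxis(hess, 2, 0) / 2
    return M   # lam @ M[s] @ lam reproduces the quadratic Taylor expansion
```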
Learning with Kernels Support vector machines derive much of their popularity
from the flexibility added by the use of kernels [2, 7]. Due to space restrictions we
cannot discuss kernels in detail. However, taking the dual SDPM (8) as a starting
point and assuming the Taylor expansion (2) the crucial point is that in order to
represent the polynomial trajectory in feature space we need to differentiate through
the kernel function.
Let us assume a feature map $\boldsymbol{\phi} : \mathbb{R}^n \to \mathcal{F} \subseteq \mathbb{R}^N$ and let $k : \mathcal{X} \times \mathcal{X} \to \mathbb{R}$ be the kernel function corresponding to $\boldsymbol{\phi}$ in the sense that $\forall \mathbf{x}, \tilde{\mathbf{x}} \in \mathcal{X} : [\boldsymbol{\phi}(\mathbf{x})]^\top [\boldsymbol{\phi}(\tilde{\mathbf{x}})] = k(\mathbf{x}, \tilde{\mathbf{x}})$.
⁴ There exist polynomials in more than one variable that are non-negative everywhere yet cannot be written as a sum of squares and are hence not SD-representable.
[Figure 2 here: panel (a), feature-space plot; panel (b), scatter of SVM error (horizontal) vs. SDPM error (vertical); panel (c), scatter of VSVM error vs. SDPM error.]
Figure 2: (a) A linear classifier learned with the SDPM on 10 2D-representations of the USPS digits '1' and '9' (see Figure 1 for details). Note that the 'support' vector
is truly virtual since it was never directly supplied to the algorithm (inset zoom-in).
(b) Mean test errors of classifiers learned with the SVM vs. SDPM (see text) and (c)
virtual SVM vs. SDPM algorithm on 50 independent training sets of size m = 20 for
all 45 digit classification tasks.
The Taylor expansion (2) is now carried out in F. Then an inner product expression
between data points xi and xj differentiated, respectively, u and v times reads
$$\left[\boldsymbol{\phi}^{(u)}(\mathbf{x}_i)\right]^\top \left[\boldsymbol{\phi}^{(v)}(\mathbf{x}_j)\right] = \sum_{s=1}^{N} \left(\left.\frac{d^u \phi_s(\mathbf{x}(\theta))}{d\theta^u}\right|_{\mathbf{x}=\mathbf{x}_i,\, \theta=0}\right) \left(\left.\frac{d^v \phi_s(\tilde{\mathbf{x}}(\tilde{\theta}))}{d\tilde{\theta}^v}\right|_{\tilde{\mathbf{x}}=\mathbf{x}_j,\, \tilde{\theta}=0}\right) =: k^{(u,v)}(\mathbf{x}_i, \mathbf{x}_j)\,.$$
The kernel trick may help avoid the sum over N feature space dimensions, however,
it does so at the cost of additional terms by the product rule of differentiation. It
turns out that for polynomials of degree $r = 2$ the exact calculation of elements of the kernel matrix is $\mathcal{O}(n^4)$ and needs to be approximated efficiently in practice.
4
Experimental Results
In order to test and illustrate the SDPM we used the well-known USPS data set of $16 \times 16$ pixel images in $[0, 1]$ of handwritten digits. We considered the transformation rotation by angle $\theta$ and calculated the first and second derivatives $\mathbf{x}_i'(\theta = 0)$ and $\mathbf{x}_i''(\theta = 0)$ based on an image representation smoothed by a Gaussian of variance 0.09. For the purpose of illustration we calculated two simple features, averaging the first and the second 128 pixel intensities, respectively. Figure 2(a) shows a plot of 10 training examples of digits '1' and '9' together with the quadratically approximated trajectories for $\theta \in [-20°, 20°]$. The examples are separated by the solution found with an SDPM restricted to the same segment of the trajectory. Following Propositions 2
an SDPM restricted to the same segment of the trajectory. Following Propositions 2
and 3 the weight vector found is expressed as a linear combination of truly virtual
support vectors that had not been supplied in the training sample directly (see inset).
In a second experiment, we probed the performance of the SDPM algorithm on the
full feature set of 256 pixel intensities using 50 training sets of size m = 20 for each
of the 45 one-versus-one classification tasks between all of the digits from '0' to '9' from the USPS data set. For each task, the digits in one class were rotated by $-10°$ and the digits of the other class by $+10°$. We compared the performance of the SDPM algorithm to the performance of the original support vector machine (SVM) [2] and the virtual support vector machine (VSVM) [7] measured on independent test sets of size 250. The VSVM takes the support vectors of the ordinary SVM run and is trained on a sample that contains these support vectors together with transformed versions rotated by $-10°$ and $+10°$ in the quadratic approximation. The results are
shown in the form of scatter plots of the errors for the 45 tasks in Figure 2 (b) and (c).
Clearly, taking into account the invariance is useful and leads to SDPM performance
superior to the ordinary SVM. The SDPM also performs slightly better than the
VSVM, however, this could be attributed to the pre-selection of support vectors to
which the transformation is applied. It is expected that for increasing number D of
transformations the performance improvement becomes more pronounced because in
high dimensions most volume is concentrated on the boundary of the convex hull of
the polynomial manifold.
5
Conclusion
We introduced Semidefinite Programming Machines as a means of learning on infinite
families of examples given in terms of polynomial trajectories or, more generally,
manifolds in data space. The crucial insight lies in the SD-representability of nonnegative polynomials which allows us to replace the simple non-negativity constraint
in algorithms such as support vector machines by positive semidefinite constraints.
While we have demonstrated the performance of the SDPM only on very small data
sets it is expected that modern interior-point methods make it possible to scale SDPMs
to problems of $m \approx 10^5$–$10^6$ data points, in particular in primal space where the number of variables is given by the number of features. This expectation is further supported by the following: (i) The resulting SDP is well structured in the sense that $\mathbf{A}(\mathbf{w}, t)$ is block-diagonal with many small blocks. (ii) It may often be sufficient to satisfy the constraints (e.g., by a version of the perceptron algorithm for semidefinite feasibility problems [3]) without necessarily maximising the margin.
Open questions remain about training SDPMs with multiple parameters and about
the efficient application of SDPMs with kernels. Finally, it would be interesting to
obtain learning theoretical results regarding the fact that SDPMs effectively make use
of an infinite number of (non IID) training examples.
References
[1] O. Chapelle and B. Schölkopf. Incorporating invariances in non-linear support vector machines. In T. G. Dietterich, S. Becker, and Z. Ghahramani, editors, Advances in Neural Information Processing Systems 14, pages 609–616, Cambridge, MA, 2002. MIT Press.
[2] C. Cortes and V. Vapnik. Support vector networks. Machine Learning, 20:273–297, 1995.
[3] T. Graepel, R. Herbrich, A. Kharechko, and J. Shawe-Taylor. Semidefinite programming by perceptron learning. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances in Neural Information Processing Systems 16. MIT Press, 2004.
[4] A. Nemirovski. Five lectures on modern convex optimization, 2002. Lecture notes of the C.O.R.E. Summer School on Modern Convex Optimization.
[5] Y. Nesterov. Squared functional systems and optimization problems. In H. Frenk, K. Roos, T. Terlaky, and S. Zhang, editors, High Performance Optimization, pages 405–440. Kluwer Academic Press, 2000.
[6] F. Rosenblatt. The perceptron: A probabilistic model for information storage and organization in the brain. Psychological Review, 65(6):386–408, 1958.
[7] B. Schölkopf. Support Vector Learning. R. Oldenbourg Verlag, München, 1997. Doktorarbeit, TU Berlin. Download: http://www.kernel-machines.org.
[8] P. Simard, Y. LeCun, J. Denker, and B. Victorri. Transformation invariance in pattern recognition, tangent distance and tangent propagation. In G. Orr and K.-R. Müller, editors, Neural Networks: Tricks of the Trade. Springer, 1998.
[9] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review, 38(1):49–95, 1996.
| 2403 |@word exploitation:1 version:6 polynomial:39 open:1 p0:1 brightness:1 thoreg:1 tr:2 contains:2 com:2 yet:1 scatter:1 written:6 oldenbourg:1 benign:1 plot:3 v:2 half:1 plane:2 herbrich:2 org:1 zhang:1 five:2 combine:1 expected:2 shearing:2 sdp:12 brain:1 actual:1 considering:2 solver:1 increasing:1 provided:1 xx:1 becomes:1 kind:1 finding:3 transformation:19 differentiation:1 every:6 growth:2 classifier:9 rm:1 uk:2 k2:1 omit:1 generalised:1 positive:3 maximise:3 local:2 before:1 sd:6 treat:1 consequence:1 nemirovski:1 lecun:1 practice:3 block:4 digit:9 procedure:1 empirical:1 boyd:1 pre:1 cannot:2 interior:1 selection:1 operator:1 storage:1 risk:1 www:2 equivalent:1 restriction:1 map:1 demonstrated:1 straightforward:1 starting:1 convex:4 contradiction:2 rule:1 insight:1 vandenberghe:1 ralf:1 parser:1 user:1 exact:1 programming:9 complementarity:6 trick:2 element:1 recognition:5 approximated:4 satisfying:1 bottom:2 solved:4 wj:2 ensures:1 trade:1 ui:5 nesterov:3 trained:1 segment:7 incur:1 usps:5 yalmip:1 easily:2 represented:3 separated:1 describe:1 kp:1 valued:1 solve:1 gi:1 analyse:1 differentiate:1 differentiable:1 eigenvalue:1 product:2 p4:1 inserting:1 tu:2 relevant:1 flexibility:1 pronounced:1 olkopf:3 rotated:3 help:1 derive:2 develop:3 illustrate:1 measured:1 x0i:1 school:1 auxiliary:3 implies:4 direction:1 correct:1 hull:1 enable:1 virtual:12 require:4 behaviour:1 proposition:11 extension:2 pl:4 hold:3 lying:1 considered:1 purpose:1 applicable:1 x00i:1 mit:2 clearly:1 gaussian:1 aim:1 pn:1 avoid:1 minimisation:1 corollary:2 improvement:2 greatly:1 sense:2 entire:2 relation:1 transformed:3 pixel:6 classification:6 dual:8 html:1 special:1 genuine:1 once:1 never:1 constitutes:4 modern:3 zoom:1 microsoft:4 psd:6 freedom:2 organization:1 highly:1 truly:4 semidefinite:14 primal:2 capable:1 sedumi:1 taylor:5 ruled:1 deformation:1 theoretical:1 minimal:1 psychological:1 classify:2 ordinary:3 cost:1 subset:1 uniform:1 terlaky:1 combined:1 siam:1 probabilistic:1 minimised:2 together:5 ym:1 squared:3 central:1 again:1 possibly:1 lmi:5 r3m:1 derivative:2 simard:1 account:1 socp:2 de:1 orr:1 coefficient:4 satisfy:1 explicitly:1 kwk:2 square:1 accuracy:1 variance:1 characteristic:1 efficiently:2 correspond:1 identify:2 handwritten:3 sdps:2 helmberg:1 iid:1 trajectory:16 lighting:1 involved:2 proof:4 mi:1 attributed:1 recall:1 knowledge:2 penalising:1 graepel:2 cj:1 thinning:2 unchen:1 formulation:5 furthermore:2 sketch:2 expressive:1 replacing:1 propagation:1 aj:3 reveal:1 thore:1 dietterich:1 requiring:1 hence:5 read:3 symmetric:2 illustrated:1 performs:1 image:8 superior:1 rotation:3 functional:1 volume:1 discussed:1 approximates:1 kluwer:1 cambridge:3 grid:1 shawe:1 dj:1 dot:1 had:1 chapelle:1 gj:1 etc:1 verlag:1 inequality:1 binary:1 yi:38 additional:1 dashed:1 ii:1 multiple:2 desirable:2 full:1 reduces:1 academic:1 calculation:1 feasibility:5 qi:3 optimisation:5 expectation:1 kernel:10 represent:1 addition:2 whereas:1 victorri:1 crucial:2 sch:3 subject:6 incorporates:1 schur:1 recommends:1 rherb:1 xj:4 restrict:3 inner:1 idea:2 regarding:1 translates:2 det:6 minimise:2 six:3 expression:1 ltd:2 becker:1 hessian:1 matlab:1 useful:1 generally:1 characterise:1 locally:1 concentrated:1 simplest:1 http:2 qi2:1 supplied:2 exist:2 dotted:1 sign:1 correctly:1 per:1 popularity:1 rosenblatt:1 probed:1 four:1 threshold:1 clarity:1 sum:3 cone:2 run:1 angle:1 everywhere:6 powerful:1 extends:1 family:1 scaling:3 summer:1 quadratic:6 nonnegative:1 bv:1 constraint:15 precisely:1 x2:3 sake:3 
expanded:1 structured:1 combination:2 representable:4 belonging:1 smaller:1 slightly:1 remain:1 separability:1 n4:1 invariant:1 pr:1 restricted:1 remains:1 discus:1 r3:1 turn:1 subjected:1 end:1 denker:1 enforce:2 differentiated:1 original:1 top:2 denotes:2 include:2 ghahramani:1 objective:4 g0:1 added:1 already:1 question:1 pave:1 diagonal:3 exhibit:2 gradient:1 distance:5 separate:1 separating:1 thrun:1 berlin:1 manifold:4 trivial:1 assuming:3 maximising:2 illustration:3 rotational:1 representability:3 difficult:1 unfortunately:1 statement:3 trace:1 negative:13 regulariser:1 maximises:1 situation:1 y1:1 rn:10 smoothed:1 download:1 intensity:3 introduced:2 complement:1 required:2 learned:2 quadratically:1 able:2 usually:1 pattern:5 xm:1 challenge:1 program:7 including:1 power:2 representing:1 improve:1 carried:1 negativity:1 naive:1 text:1 review:2 tangent:5 regularisation:1 fully:1 lecture:2 generation:1 interesting:1 thickening:2 analogy:1 versus:1 degree:8 sufficient:1 consistent:1 vsvm:4 principle:1 editor:4 pi:3 translation:2 row:1 extrapolated:1 supported:1 monomial:1 perceptron:4 generalise:1 saul:1 taking:2 sparse:4 regard:1 boundary:1 dimension:2 calculated:2 p2l:8 commonly:1 cope:1 approximate:1 global:1 roos:1 reveals:1 conclude:2 xi:66 additionally:1 learn:1 expansion:8 complex:2 necessarily:1 domain:2 x1:1 chemnitz:1 explicit:1 exponential:1 lie:2 theorem:2 specific:1 inset:2 r2:1 svm:8 cortes:1 incorporating:2 exists:3 vapnik:1 adding:1 effectively:2 margin:8 infinitely:1 expressed:2 scalar:1 doubling:1 springer:1 corresponds:1 determines:1 ma:1 prop:1 goal:1 formulated:3 presentation:2 replace:1 infinite:3 determined:1 hyperplane:1 averaging:1 lemma:1 invariance:15 experimental:1 support:22 |
1,545 | 2,404 | Approximate Expectation Maximization
Tom Heskes, Onno Zoeter, and Wim Wiegerinck
SNN, University of Nijmegen
Geert Grooteplein 21, 6525 EZ, Nijmegen, The Netherlands
Abstract
We discuss the integration of the expectation-maximization (EM) algorithm
for maximum likelihood learning of Bayesian networks with belief propagation
algorithms for approximate inference. Specifically we propose to combine the
outer-loop step of convergent belief propagation algorithms with the M-step
of the EM algorithm. This then yields an approximate EM algorithm that is
essentially still double loop, with the important advantage of an inner loop
that is guaranteed to converge. Simulations illustrate the merits of such an
approach.
1
Introduction
The EM (expectation-maximization) algorithm [1, 2] is a popular method for maximum likelihood learning in probabilistic models with hidden variables. The E-step
boils down to computing probabilities of the hidden variables given the observed
variables (evidence) and current set of parameters. The M-step then, given these
probabilities, yields a new set of parameters guaranteed to increase the likelihood.
In Bayesian networks, that will be the focus of this article, the M-step is usually
relatively straightforward. A complication may arise in the E-step, when computing
the probability of the hidden variables given the evidence becomes intractable.
An often used approach is to replace the exact yet intractable inference in the E-step with approximate inference, either through sampling or using a deterministic
variational method. The use of a "mean-field" variational method in this context
leads to an algorithm known as variational EM and can be given the interpretation of
minimizing a free energy with respect to both a tractable approximate distribution
(approximate E-step) and the parameters (M-step) [2].
Loopy belief propagation [3] and variants thereof, such as generalized belief propagation [4] and expectation propagation [5], have become popular alternatives to
the "mean-field" variational approaches, often yielding somewhat better approximations. And indeed, they can and have been applied for approximate inference
in the E-step of the EM algorithm (see e.g. [6, 7]). A possible worry, however, is
that standard application of these belief propagation algorithms does not always
lead to convergence. So-called double-loop algorithms with convergence guarantees
have been derived, such as CCCP [8] and UPS [9], but they tend to be an order of
magnitude slower than standard belief propagation.
The goal of this article is to integrate expectation-maximization with belief propagation. As for variational EM, this integration relies on the free-energy interpretation
of EM that is reviewed in Section 2. In Section 3 we describe how the exact free
energy can be approximated with a Kikuchi free energy and how this leads to an
approximate EM algorithm. Section 4 contains our main result: integrating the
outer-loop of a convergent double-loop algorithm with the M-step, we are left with
an overall double-loop algorithm, where the inner loop is now a convex constrained
optimization problem with a unique solution. The methods are illustrated in Section 5; implications and extensions are discussed in Section 6.
2
The free energy interpretation of EM
We consider probabilistic models $P(\mathbf{x}; \theta)$, with $\theta$ the model parameters to be learned and $\mathbf{x}$ the variables in the model. We subdivide the variables into hidden variables $\mathbf{h}$ and observed, evidenced variables $\mathbf{e}$. For ease of notation, we consider just a single set of observed variables $\mathbf{e}$ (in fact, if we have $N$ sets of observed variables, we can simply copy our probability model $N$ times and view this as our single probability model with "shared" parameters $\theta$). In maximum likelihood learning, the goal is to find the parameters $\theta$ that maximize the likelihood $P(\mathbf{e}; \theta)$ or, equivalently, that minimize minus the loglikelihood
$$L(\theta) = -\log P(\mathbf{e}; \theta) = -\log\left[\sum_{\mathbf{h}} P(\mathbf{e}, \mathbf{h}; \theta)\right]\,.$$
The EM algorithm can be understood from the observation, made in [2], that
$$L(\theta) = \min_{Q \in \mathcal{P}} F(Q, \theta)\,,$$
with $\mathcal{P}$ the set of all probability distributions defined on $\mathbf{h}$ and $F(Q, \theta)$ the so-called free energy
$$F(Q, \theta) = L(\theta) + \sum_{\mathbf{h}} Q(\mathbf{h}) \log\left[\frac{Q(\mathbf{h})}{P(\mathbf{h}|\mathbf{e}; \theta)}\right] = E(Q, \theta) - S(Q)\,, \qquad (1)$$
with the "energy"
$$E(Q, \theta) = -\sum_{\mathbf{h}} Q(\mathbf{h}) \log P(\mathbf{e}, \mathbf{h}; \theta)\,,$$
and the "entropy"
$$S(Q) = -\sum_{\mathbf{h}} Q(\mathbf{h}) \log Q(\mathbf{h})\,.$$
The EM algorithm now boils down to alternate minimization with respect to $Q$ and $\theta$:
E-step: fix $\theta$ and solve $Q = \operatorname{argmin}_{Q' \in \mathcal{P}}\, F(Q', \theta)$;
M-step: fix $Q$ and solve $\theta = \operatorname{argmin}_{\theta'}\, F(Q, \theta') = \operatorname{argmin}_{\theta'}\, E(Q, \theta')\,. \qquad (2)$
The advantage of the M-step over direct minimization of $-\log P(\mathbf{e}; \theta)$ is that the summation over $\mathbf{h}$ is now outside the logarithm, which in many cases implies that the minimum with respect to $\theta$ can be computed explicitly. The main inference problem is then in the E-step. Its solution follows directly from (1):
$$Q(\mathbf{h}) = P(\mathbf{h}|\mathbf{e}; \theta) = \frac{P(\mathbf{h}, \mathbf{e}; \theta)}{\sum_{\mathbf{h}'} P(\mathbf{h}', \mathbf{e}; \theta)}\,, \qquad (3)$$
with $\theta$ the current setting of the parameters. However, in complex probability models $P(\mathbf{h}|\mathbf{e}; \theta)$ can be difficult and even intractable to compute, mainly because of the normalization in the denominator. For later purposes we note that the EM algorithm can be interpreted as a general "bound optimization algorithm" [10]. In this interpretation the free energy $F(Q, \theta)$ is an upper bound on the function $L(\theta)$ that we try to minimize; the E-step corresponds to a reset of the bound and the M-step to the minimization of the upper bound.
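To make the alternation (2)-(3) concrete, here is a minimal numpy EM for a two-component mixture of Bernoullis (entirely our own toy example): the E-step computes $Q(\mathbf{h}) = P(\mathbf{h}|\mathbf{e}; \theta)$ exactly, and the M-step minimizes the energy in closed form.

```python
import numpy as np

def em_bernoulli_mixture(E, K=2, iters=50, seed=0):
    """EM for a K-component Bernoulli mixture. E: (N, d) binary data;
    the hidden variable h is the component label of each data point."""
    N, d = E.shape
    rng = np.random.default_rng(seed)
    pi = np.full(K, 1.0 / K)                       # mixing weights
    mu = rng.uniform(0.25, 0.75, size=(K, d))      # Bernoulli means
    for _ in range(iters):
        # E-step: Q(h) = P(h | e; theta), eq. (3), computed in log space
        log_p = np.log(pi) + E @ np.log(mu).T + (1 - E) @ np.log(1 - mu).T
        Q = np.exp(log_p - log_p.max(axis=1, keepdims=True))
        Q /= Q.sum(axis=1, keepdims=True)          # responsibilities, (N, K)
        # M-step: minimize the energy E(Q, theta); closed form here
        Nk = Q.sum(axis=0)
        pi = Nk / N
        mu = np.clip((Q.T @ E) / Nk[:, None], 1e-6, 1 - 1e-6)
    return pi, mu
```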
In variational EM [2] one restricts the probability distribution $Q$ to a specific set $\mathcal{P}'$, such that the E-step becomes tractable. Note that this restriction affects both the energy term and the entropy term. By construction the approximate $\min_{Q \in \mathcal{P}'} F(Q, \theta)$ is an upper bound on $L(\theta)$.
3
Approximate free energies
In several studies, propagation algorithms like loopy belief propagation [6] and expectation propagation [7] have been applied to find approximate solutions for the
E-step. As we will see, the corresponding approximate EM-algorithm can be interpreted as alternate minimization of a Bethe or Kikuchi free energy. For the moment,
we will consider the case of loopy and generalized belief propagation applied to
probability models with just discrete variables. The generalization to expectation
propagation is discussed in Section 6.
The joint probability implied by a Bayesian network can be written in the form
$$P(\mathbf{x}; \theta) = \prod_{\alpha} \Psi_\alpha(\mathbf{x}_\alpha; \theta_\alpha)\,,$$
where $\alpha$ denotes a subset of variables and $\Psi_\alpha$ is a potential function. The parameters $\theta_\alpha$ may be shared, i.e., we may have $\theta_\alpha = \theta_{\alpha'}$ for some $\alpha \neq \alpha'$. For a Bayesian network, the energy term simplifies into a sum over local terms:
$$E(Q, \theta) = -\sum_{\alpha} \sum_{\mathbf{h}_\alpha} Q(\mathbf{h}_\alpha) \log \Psi_\alpha(\mathbf{h}_\alpha, \mathbf{e}_\alpha; \theta_\alpha)\,.$$
However, the entropy term is as intractable as the normalization in (3) that we try
to prevent. In the Bethe or more generally Kikuchi approximation, this entropy
term is approximated through [4]
$$S(Q) = -\sum_{\mathbf{h}} Q(\mathbf{h}) \log Q(\mathbf{h}) \approx \sum_{\alpha} S_\alpha(Q) + \sum_{\beta} c_\beta S_\beta(Q) =: \tilde{S}(Q)\,,$$
with
$$S_\alpha(Q) = -\sum_{\mathbf{h}_\alpha} Q(\mathbf{h}_\alpha) \log Q(\mathbf{h}_\alpha)\,,$$
and similarly for $S_\beta(Q)$. The subsets indexed by $\beta$ correspond to intersections
between the subsets indexed by $\alpha$, intersections of intersections, and so on. The parameters $c_\beta$ are called Moebius or overcounting numbers. In the above description, the $\alpha$-clusters correspond to the potential subsets, i.e., the clusters in the moralized graph. However, we can also choose them to be larger, e.g., combining several potentials into a single cluster. The Kikuchi/Bethe approximation is exact if the $\alpha$-clusters form a singly-connected structure. That is, exact inference is obtained when the $\alpha$-clusters correspond to cliques in a junction tree. The $\beta$ subsets then play the role of the separators and have overcounting numbers $1 - n_\beta$, with $n_\beta$ the number of neighboring cliques. The larger the clusters, the higher the computational complexity.
There are different kinds of approximations (Bethe, CVM, junction graphs), each
corresponding to a somewhat different choice of $\alpha$-clusters, $\beta$-subsets and overcounting numbers (see [4] for an overview). In the following we will refer to all of them
as Kikuchi approximations. The important point is that the approximate entropy
is, like the energy, a sum of local terms. Furthermore, the Kikuchi free energy as a function of the probability distribution $Q$ only depends on the marginals $Q(\mathbf{x}_\alpha)$ and $Q(\mathbf{x}_\beta)$. The minimization of the exact free energy with respect to a probability distribution $Q$ has been turned into the minimization of the Kikuchi free energy $\tilde{F}(Q, \theta) = E(Q, \theta) - \tilde{S}(Q)$ with respect to a set of pseudo-marginals $Q = \{Q_\alpha, Q_\beta\}$. For the approximation to make any sense, these pseudo-marginals have to be properly normalized as well as consistent, which boils down to a set of linear constraints of the form
$$\sum_{\mathbf{h}_{\alpha \setminus \beta}} Q(\mathbf{h}_\alpha) = Q(\mathbf{h}_\beta)\,. \qquad (4)$$
The approximate EM algorithm based on the Kikuchi free energy now reads
approximate E-step: fix $\theta$ and solve $Q = \operatorname{argmin}_{Q' \in \tilde{\mathcal{P}}}\, \tilde{F}(Q', \theta)$;
M-step: fix $Q$ and solve $\theta = \operatorname{argmin}_{\theta'}\, \tilde{F}(Q, \theta') = \operatorname{argmin}_{\theta'}\, E(Q, \theta')\,, \qquad (5)$
where $\tilde{\mathcal{P}}$ refers to all sets of consistent and properly normalized pseudo-marginals $\{Q_\alpha, Q_\beta\}$. Because the entropy does not depend on the parameters $\theta$, the M-step of the approximate EM algorithm is completely equivalent to the M-step of the exact EM algorithm. The only difference is that the statistics required for this M-step are computed approximately rather than exactly. In other words, the seemingly naive procedure of using generalized or loopy belief propagation to compute the statistics in the E-step and using them in the M-step can be interpreted as alternate minimization of the Kikuchi approximation of the exact free energy. That is, algorithm (5) can be interpreted as a bound optimization algorithm for minimizing
$$\tilde{L}(\theta) = \min_{Q \in \tilde{\mathcal{P}}} \tilde{F}(Q, \theta)\,,$$
which we hope to be a good approximation (not necessarily a bound) of the original $L(\theta)$.
4
Constrained optimization
There are two kinds of approaches for finding the minimum of the Kikuchi free
energy. The first one is to run loopy or generalized belief propagation, e.g., using
Algorithm 1 in the hope that it converges to such a minimum. However, convergence
guarantees can only be given in special cases and in practice one does observe
convergence problems. In the following we will refer to the use of standard belief
propagation in the E-step as the "naive algorithm".
Recently, there have been derived double-loop algorithms that explicitly minimize
the Kikuchi free energy [8, 9, 11]. Technically, finding the minimum of the Kikuchi
free energy with respect to consistent marginals corresponds to a non-convex constrained optimization problem. The consistency and normalization constraints on
the marginals are linear in $Q$ and so is the energy term $E(Q, \theta)$. The non-convexity
stems from the entropy terms and specifically those with negative overcounting
numbers. Most currently described techniques, such as CCCP [8], UPS [9] and
variants thereof, can be understood as general bound optimization algorithms. In
CCCP concave terms are bounded with a linear term, yielding a convex bound and
thus, in combination with the linear constraints, a convex optimization problem to
be solved in the inner loop. In particular we can write
$$\tilde{F}(Q, \theta) = \min_{R \in \tilde{\mathcal{P}}} G(Q, R, \theta) \quad \text{with} \quad G(Q, R, \theta) := \tilde{F}(Q, \theta) + K(Q, R)\,, \qquad (6)$$
Algorithm 1 Generalized belief propagation.
1: while ¬converged do
2:   for all β do
3:     for all α ⊃ β do
4:       Q_α(x_β) = Σ_{x_{α\β}} Q_α(x_α)
5:     end for
6:     Q_β(x_β) ∝ Π_{α⊃β} μ_{α→β}(x_β)^{1/(n_β+c_β)}, with μ_{α→β}(x_β) = Q_α(x_β)/μ_{β→α}(x_β)
7:     for all α ⊃ β do
8:       μ_{β→α}(x_β) = Q_β(x_β)/μ_{α→β}(x_β)
9:       Q_α(x_α) ∝ Ψ_α(x_α) Π_{β'⊂α} μ_{β'→α}(x_{β'})
10:    end for
11:  end for
12: end while
where
$$K(Q, R) = \sum_{\beta : c_\beta < 0} |c_\beta| \sum_{\mathbf{h}_\beta} Q(\mathbf{h}_\beta) \log\left[\frac{Q(\mathbf{h}_\beta)}{R(\mathbf{h}_\beta)}\right]$$
is a weighted sum of local Kullback-Leibler divergences. By construction $G(Q, R, \theta)$ is convex in $Q$ (the concave $Q_\beta \log Q_\beta$ terms in $\tilde{F}(Q, \theta)$ cancel with those in $K(Q, R)$) as well as an upper bound on $\tilde{F}(Q, \theta)$ since $K(Q, R) \geq 0$. The now convex optimization problem in the inner loop can be solved with a message passing algorithm very similar to standard loopy or generalized belief propagation. In fact, we can use Algorithm 1, with $c_\beta = 0$ and after a slight redefinition of the potentials $\Psi_\alpha$ such that they incorporate the linear bound of the concave entropy terms (see [11] for details). The messages in this algorithm are in one-to-one correspondence with the Lagrange multipliers of the concave dual. Most importantly, with the particular scheduling in Algorithm 1, each update is guaranteed to increase the dual and therefore the inner-loop algorithm must converge to its unique solution. The outer loop simply sets $R = Q$ and corresponds to a reset of the bound.
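A hedged Python rendering of Algorithm 1's message flow for the simplest case (pairwise clusters on a chain, Bethe overcounting $c_\beta = 1 - n_\beta$, so the exponent in line 6 is 1). This is our own illustration, not the authors' code.

```python
import numpy as np

def gbp_chain(psi, iters=50):
    """Algorithm 1 on a chain: clusters alpha_t = {t, t+1} with potential
    psi[t] (2x2), separators beta = {t+1} between consecutive clusters."""
    A = len(psi)
    msg_l = [np.ones(2) for _ in range(A)]   # message into left variable of alpha_t
    msg_r = [np.ones(2) for _ in range(A)]   # message into right variable of alpha_t
    def belief(t):                           # Q_alpha, line 9
        q = psi[t] * msg_l[t][:, None] * msg_r[t][None, :]
        return q / q.sum()
    for _ in range(iters):
        for t in range(A - 1):               # separator shared by alpha_t, alpha_{t+1}
            out_r = belief(t).sum(axis=0) / msg_r[t]          # mu_{alpha_t -> beta}
            out_l = belief(t + 1).sum(axis=1) / msg_l[t + 1]  # mu_{alpha_{t+1} -> beta}
            Qb = out_r * out_l                                # line 6, exponent 1
            msg_r[t] = Qb / out_r                             # line 8
            msg_l[t + 1] = Qb / out_l
    return [belief(t) for t in range(A)]
```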
Incorporating this double-loop algorithm into our approximate EM algorithm (5), we obtain
inner-loop E-step: fix $\{\theta, R\}$ and solve $Q = \operatorname{argmin}_{Q' \in \tilde{\mathcal{P}}}\, G(Q', R, \theta)$;
outer-loop E-step: fix $\{Q, \theta\}$ and solve $R = \operatorname{argmin}_{R' \in \tilde{\mathcal{P}}}\, G(Q, R', \theta) = \operatorname{argmin}_{R'}\, K(Q, R')$;
M-step: fix $\{Q, R\}$ and solve $\theta = \operatorname{argmin}_{\theta'}\, G(Q, R, \theta') = \operatorname{argmin}_{\theta'}\, E(Q, \theta')\,. \qquad (7)$
To distinguish it from the naive algorithm, we will refer to (7) as the "convergent
algorithm". The crucial observation is that we can combine the outer-loop E-step
with the usual M-step: there is no need to run the double-loop algorithm in the
E-step until convergence. This gives us then an overall double-loop rather than
triple-loop algorithm. In principle (see however the next section) the algorithmic
complexity of the convergent algorithm is the same as that of the naive algorithm.
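The overall control flow of (7) is then just the following schematic (our own sketch; inner_step and m_step are caller-supplied placeholders for the actual solvers):

```python
def approximate_em(theta, Q, inner_step, m_step, n_em=100, n_inner=10):
    """Double-loop skeleton of (7): the outer-loop E-step is just the
    bound reset R = Q, so it merges with the M-step and the overall
    algorithm stays double loop rather than triple loop."""
    R = Q
    for _ in range(n_em):
        for _ in range(n_inner):   # guaranteed-convergent convex inner loop
            Q = inner_step(Q, R, theta)
        R = Q                      # outer-loop E-step: reset the bound
        theta = m_step(Q)          # M-step on the approximate statistics
    return theta, Q
```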
[Figure 1 here: (a) coupled hidden Markov model architecture; (b) simulation results, horizontal axis 'outer loops' from 0 to 100.]
Figure 1: Learning a coupled hidden Markov model. (a) Architecture for 3 time slices and 4 hidden nodes per time slice. (b) Minus the loglikelihood in the Kikuchi/Bethe approximation as a function of the number of M-steps. Naive algorithm (solid line), convergent algorithm (dashed), convergent algorithm with tighter bound and overrelaxation (dash-dotted), same for a Kikuchi approximation (dotted). See text for details.
5
Simulations
For illustration, we compare the naive and convergent approximate EM algorithms for learning in a coupled hidden Markov model. The architecture of coupled hidden Markov models is sketched in Figure 1(a) for $T = 3$ time slices and $M = 4$ hidden-variable nodes per time slice. In our simulations we used $M = 5$ and $T = 20$; all nodes are binary. The parameters to be learned are the observation matrix $p(e_{m,t} = i \mid h_{m,t} = j)$ and two transition matrices: $p(h_{1,t+1} = i \mid h_{1,t} = j, h_{2,t} = k) = p(h_{M,t+1} = i \mid h_{M,t} = j, h_{M-1,t} = k)$ for the outer nodes and $p(h_{m,t+1} = i \mid h_{m-1,t} = j, h_{m,t} = k, h_{m+1,t} = l)$ for the middle nodes. The prior for the first time slice is fixed and uniform. We randomly generated properly normalized transition and observation matrices and evidence given those matrices. Initial parameters were set to another randomly generated instance. In the inner loop of both the naive and the convergent algorithm, Algorithm 1 was run for 10 iterations.
Loopy belief propagation, which for dynamic Bayesian networks can be interpreted as an iterative version of the Boyen-Koller algorithm [12], converged just fine for the many instances that we have seen. The naive algorithm nicely minimizes the Bethe approximation of minus the loglikelihood $L(\theta)$, as can be seen from the solid line in Figure 1(b). The Bethe approximation is fairly accurate in this model and plots of the exact loglikelihood, both those learned with exact and with approximate EM, are very similar (not shown). The convergent algorithm also works fine, but takes more time to converge (dashed line). This is to be expected: the additional bound implied by the outer-loop E-step makes $G(Q, R, \theta)$ a looser bound of $\tilde{L}(\theta)$ than $\tilde{F}(Q, \theta)$, and the tighter the bound in a bound optimization algorithm, the faster the convergence. Therefore, it makes sense to use tighter convex bounds on $\tilde{F}(Q, \theta)$, for example those derived in [11]. On top of that, we can use overrelaxation, i.e., set $\log Q = \eta \log R + (1 - \eta) \log Q^{\text{old}}$ (up to normalization), with $Q^{\text{old}}$ the previous set of pseudo-marginals. See e.g. [10] for the general idea; here we took $\eta = 1.4$ fixed. Application of these two "tricks" yields the dash-dotted line. It gives an indication of how close one can bring the convergent to the naive algorithm (overrelaxation applied to the M-step affects both algorithms in the same way and is therefore not considered here). Another option is to repeat the inner and outer E-steps $N$ times before updating the parameters in the M-step. Plots for $N \geq 3$ are indistinguishable from the solid line for the naive algorithm.
The above shows that the price to be paid for an algorithm that is guaranteed to
converge is relatively low. Obviously, the true value of the convergent algorithm
becomes clear when the naive algorithm fails. Many instances of non-convergence of
loopy and especially generalized belief propagation have been reported (see e.g. [3,
11] and [12] specifically on coupled hidden Markov models). Some but not all of
these problems disappear when the updates are damped, which further has the
drawback of slowing down convergence as well as requiring additional tuning. In
the context of the coupled hidden Markov models we observed serious problems with
generalized belief propagation. For example, with a-clusters of size 12, consisting of
3 neighboring hidden and evidence nodes in two subsequent time slices, we did not
manage to get the naive algorithm to converge properly. The convergent algorithm
alvlays converged vlithout any problem, yielding the dotted line in Figure l(b) for
the particular problem instance considered for the Bethe approximation as welL
Note that, where the inner loops for the Bethe approximations take about the same
amount of time (which makes the number of outer loops roughly proportional to
cpu time), an inner loop for the Kikuchi approximation is in this case about two
times slower.
6 Discussion
The main idea of this article, that there is no need to run a converging double-loop
algorithm in an approximate E-step until convergence, only applies to directed
probabilistic graphical models like Bayesian networks. In undirected graphical models
like Boltzmann machines there is a global normalization constant that typically
depends on all parameters θ and is intractable to compute analytically. For this
so-called partition function, the bound used in converging double-loop algorithms
works in the opposite direction as the bound implicit in the EM algorithm. The
convex bound of [13] does work in the right direction, but cannot (yet) handle missing
values. In [14] standard loopy belief propagation is used in the inner loop of
iterative proportional fitting (IPF). Also here it is not yet clear how to integrate IPF
with convergent belief propagation without ending up with a triple-loop algorithm.
Following the same line of reasoning, expectation maximization can be combined
with expectation propagation (EP) [5]. EP can be understood as a generalization
of loopy belief propagation. Besides neglecting possible loops in the graphical
structure, expectation propagation can also handle projections onto an exponential
family of distributions. The approximate free energy for EP is the same Bethe
free energy, only the constraints are different. That is, the "strong" marginalization
constraints (4) are replaced by the "weak" marginalization constraints that all
subsets' marginals agree upon their moments. These constraints are still linear
in Qα and Qβ and we can make the same decomposition (6) of the Bethe free energy
into a convex and a concave term to derive a double-loop algorithm with a
convex optimization problem in the inner loop. However, EP can have reasons for
non-convergence that are not necessarily resolved with a double-loop version. For
example, it can happen that while projecting onto Gaussians negative covariance
matrices appear. This problem has, to the best of our knowledge, not yet been
solved and is subject to ongoing research.
It has been emphasized before [13] that it makes no sense to learn with approximate
inference and then apply exact inference given the learned parameters. The
intuition is that we tune the parameters to the evidence, incorporating the errors
that are made while doing approximate inference. In that context it is important
that the results of approximate inference are reproducible, and the use of convergent
algorithms is a relevant step in that direction.
References
[1] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete
data via the EM algorithm. Journal of the Royal Statistical Society B, 39:1-38,
1977.
[2] R. Neal and G. Hinton. A view of the EM algorithm that justifies incremental,
sparse, and other variants. In M. Jordan, editor, Learning in Graphical Models,
pages 355-368. Kluwer Academic Publishers, Dordrecht, 1998.
[3] K. Murphy, Y. Weiss, and M. Jordan. Loopy belief propagation for approximate
inference: An empirical study. In Proceedings of the Fifteenth Conference on
Uncertainty in Artificial Intelligence, pages 467-475, San Francisco, CA, 1999.
Morgan Kaufmann.
[4] J. Yedidia, W. Freeman, and Y. Weiss. Constructing free energy approximations and generalized belief propagation algorithms. Technical report, Mitsubishi Electric Research Laboratories, 2002.
[5] T. Minka. Expectation propagation for approximate Bayesian inference. In
Uncertainty in Artificial Intelligence: Proceedings of the Seventeenth Conference (UAI-2001), pages 362-369, San Francisco, CA, 2001. Morgan Kaufmann
Publishers.
[6] B. Frey and A. Kannan. Accumulator networks: Suitors of local probability
propagation. In T. Leen, T. Dietterich, and V. Tresp, editors, Advances in
Neural Information Processing Systems 13, pages 486-492. MIT Press, 2001.
[7] T. Minka and J. Lafferty. Expectation propagation for the generative aspect
model. In Proceedings of UAI-2002, pages 352-359, 2002.
[8] A. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies:
Convergent alternatives to belief propagation. Neural Computation, 14:1691-1722, 2002.
[9] Y. Teh and M. Welling. The unified propagation and scaling algorithm. In
NIPS 14, 2002.
[10] R. Salakhutdinov and S. Roweis. Adaptive overrelaxed bound optimization
methods. In ICML-2003, 2003.
[11] T. Heskes, K. Albers, and B. Kappen. Approximate inference and constrained
optimization. In UAI-2003, 2003.
[12] K. Murphy and Y. Weiss. The factored frontier algorithm for approximate
inference in DBNs. In UAI-2001, pages 378-385, 2001.
[13] M. Wainwright, T. Jaakkola, and A. Willsky. Tree-reweighted belief propagation algorithms and approximate ML estimation via pseudo-moment matching.
In AISTATS-2003, 2003.
[14] Y. Teh and M. Welling. On improving the efficiency of the iterative proportional fitting procedure. In AISTATS-2003, 2003.
Classification with Hybrid
Generative/Discriminative Models
Rajat Raina, Yirong Shen, Andrew Y. Ng
Computer Science Department
Stanford University
Stanford, CA 94305
Andrew McCallum
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
Abstract
Although discriminatively trained classifiers are usually more accurate
when labeled training data is abundant, previous work has shown that
when training data is limited, generative classifiers can out-perform
them. This paper describes a hybrid model in which a high-dimensional
subset of the parameters are trained to maximize generative likelihood,
and another, small, subset of parameters are discriminatively trained to
maximize conditional likelihood. We give a sample complexity bound
showing that in order to fit the discriminative parameters well, the number of training examples required depends only on the logarithm of the
number of feature occurrences and feature set size. Experimental results
show that hybrid models can provide lower test error and can produce
better accuracy/coverage curves than either their purely generative or
purely discriminative counterparts. We also discuss several advantages
of hybrid models, and advocate further work in this area.
1 Introduction
Generative classifiers learn a model of the joint probability, p(x, y), of the inputs x and
the label y, and make their predictions by using Bayes rule to calculate p(y|x), and then
picking the most likely label y. In contrast, discriminative classifiers model the posterior
p(y|x) directly. It has often been argued that for many application domains, discriminative
classifiers often achieve higher test set accuracy than generative classifiers (e.g., [6, 4, 14]).
Nonetheless, generative classifiers also have several advantages, among them straightforward EM methods for handling missing data, and often better performance when training
set sizes are small. Specifically, it has been shown that a simple generative classifier (naive
Bayes) outperforms its conditionally-trained, discriminative counterpart (logistic regression) when the amount of available labeled training data is small [11].
In an effort to obtain the best of both worlds, this paper explores a class of hybrid models
for supervised learning that are partly generative and partly discriminative. In these models,
a large subset of the parameters are trained to maximize the generative, joint probability
of the inputs and outputs of the supervised learning task; another, much smaller, subset of
the parameters are discriminatively trained to maximize the conditional probability of the
outputs given the inputs.
Motivated by an application in text classification as well as a desire to begin by exploring a
simple, pure form of hybrid classification, we describe and give results with a "generative-discriminative" pair [11] formed by naive Bayes and logistic regression, and a hybrid
algorithm based on both. We also give two natural by-products of the hybrid model: First, a
scheme for allowing different partitions of the variables to contribute more or less strongly
to the classification decision?for an email classification example, modeling the text in
the subject line and message body separately, with learned weights for the relative contributions. Second, a method for improving accuracy/coverage curves of models that make
incorrect independence assumptions, such as naive Bayes.
We also prove a sample complexity result showing that the number of training examples
needed to fit the discriminative parameters depends only on the logarithm of the vocabulary
size and document length. In experimental results, we show that the hybrid model achieves
significantly more accurate classification than either its purely generative or purely discriminative counterparts. We also demonstrate that the hybrid model produces class posterior
probabilities that better reflect empirical error rates, and as a result produces improved
accuracy/coverage curves.
2 The Model
We begin by briefly reviewing the multinomial naive Bayes classifier applied to text categorization [10], and then describe our hybrid model and its relation to logistic regression.
Let Y = {0, 1} be the set of possible labels for a document classification task, and let
W = {w1 , w2 , . . . , w|W| } be a dictionary of words. A document of N words is represented
by a vector X = (X_1, X_2, . . . , X_N) of length N. The ith word in the document is X_i ∈ W.
Note that N can vary for different documents. The multinomial naive Bayes model assumes
that the label Y is chosen from some prior distribution P(Y = ·), the length N is drawn
from some distribution P(N = ·) independently of the label, and each word X_i is drawn
independently from some distribution P(W = ·|Y) over the dictionary. Thus, we have:1

P(X = x, Y = y) = P(Y = y) P(N = n) ∏_{i=1}^n P(W = x_i | Y = y).    (1)
Since the length n of the document does not depend on the label and therefore does not
play a significant role, we leave it out of our subsequent derivations.
The parameters in the naive Bayes model are P̂(Y) and P̂(W|Y) (our estimates of P(Y)
and P(W|Y)). They are set to maximize the joint (penalized) log-likelihood of the x and
y pairs in a labeled training set, M = {(x^(i), y^(i))}_{i=1}^m. Let n^(i) be the length of document
x^(i). Specifically, for any k ∈ {0, 1}, we have:

P̂(Y = k) = (1/m) ∑_{i=1}^m 1{y^(i) = k}    (2)

P̂(W = w_l | Y = k) = ( ∑_{i=1}^m ∑_{j=1}^{n^(i)} 1{x_j^(i) = w_l, y^(i) = k} + 1 ) / ( ∑_{i=1}^m n^(i) 1{y^(i) = k} + |W| ) ,    (3)
where 1{·} is the indicator function (1{True} = 1, 1{False} = 0), and we have applied
Laplace (add-one) smoothing in obtaining the estimates of the word probabilities. Using
Bayes rule, we obtain the estimated class posterior probabilities for a new document x as:
P̂(Y = 1 | X = x) = P̂(X = x | Y = 1) P̂(Y = 1) / ∑_{y∈Y} P̂(X = x | Y = y) P̂(Y = y) ,
where
P̂(X = x | Y = y) = ∏_{i=1}^n P̂(W = x_i | Y = y).    (4)
The predicted class for the new document is then simply arg max_{y∈Y} P̂(Y = y | X = x).
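As a concrete illustration, here is a minimal sketch (ours, not the authors' code) of the estimates in Eqs. (2)-(3) and the posterior of Eq. (4); documents are assumed to be lists of word indices into a vocabulary of size V, with 0/1 labels:

```python
# Sketch of multinomial naive Bayes with Laplace (add-one) smoothing.
import numpy as np

def train_nb(docs, labels, V):
    labels = np.asarray(labels)
    prior = np.array([(labels == k).mean() for k in (0, 1)])        # Eq. (2)
    word = np.ones((2, V))                                          # +1 smoothing
    denom = np.full(2, float(V))                                    # +|W| term
    for x, y in zip(docs, labels):
        np.add.at(word[y], x, 1.0)
        denom[y] += len(x)
    word /= denom[:, None]                                          # Eq. (3)
    return prior, word

def posterior(x, prior, word):
    """Class posterior P(Y|x) for a document x (list of word indices)."""
    logp = np.log(prior) + np.log(word[:, x]).sum(axis=1)
    logp -= logp.max()                                              # stability
    p = np.exp(logp)
    return p / p.sum()                                              # Eq. (4)
```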
In many text classification applications, the documents involved consist of several disjoint
regions that may have different dependencies with the document label. For example, a
USENET news posting includes both a subject region and a message body region.2 Because
of the strong assumptions used by naive Bayes, it treats the words in the different regions
of a document in exactly the same way, ignoring the fact that perhaps words in a particular
region (such as words in the subject) might be more "important." Further, it also tends to
allow the words in the longer region to dominate. (Explained below.)

1 We adopt the notational convention that upper-case is used to denote random variables, and
lower-case is used to denote particular values taken by the random variables.
2 Other possible text classification examples include: Emails consisting of subject and body; technical papers consisting of title, abstract, and body; web pages consisting of title, headings, and body.
In the sequel, we assume that every input document X can be naturally divided into R
regions X^1, X^2, . . . , X^R. Note that R can be one. The regions are of variable lengths
N_1, N_2, . . . , N_R. For the sake of conciseness and clarity, in the following discussion we
will focus on the case of R = 2 regions, the generalization offering no difficulties. Thus,
the document probability in Equation (4) is now replaced with:
P̂(X = x | Y = y) = P̂(X^1 = x^1 | Y = y) P̂(X^2 = x^2 | Y = y)    (5)

                 = [ ∏_{i=1}^{n_1} P̂(W = x_i^1 | Y = y) ] [ ∏_{i=1}^{n_2} P̂(W = x_i^2 | Y = y) ]    (6)
Here, x_i^j denotes the ith word in the jth region. Naive Bayes will predict y = 1 if:

∑_{i=1}^{n_1} log P̂(W = x_i^1 | Y = 1) + ∑_{i=1}^{n_2} log P̂(W = x_i^2 | Y = 1) + log P̂(Y = 1) ≥
∑_{i=1}^{n_1} log P̂(W = x_i^1 | Y = 0) + ∑_{i=1}^{n_2} log P̂(W = x_i^2 | Y = 0) + log P̂(Y = 0)
and predict y = 0 otherwise. In an email or USENET news classification problem, if
the first region is the subject, and the second region is the message body, then n_2 ≫ n_1,
since message bodies are usually much longer than subjects. Thus, in the equation above,
the message body contributes to many more terms in both the left and right sides of the
summation, and the result of the "≥" test will be largely determined by the message body
(with the message subject essentially ignored or otherwise having very little effect).
Given the importance and informativeness of message subjects, this suggests that we might
obtain better performance than the basic naive Bayes classifier by considering a modified
algorithm that assigns different "weights" to different regions, and normalizes for region
lengths. Specifically, consider making a prediction using the modified inequality test:

(θ_1/n_1) ∑_{i=1}^{n_1} log P̂(W = x_i^1 | Y = 1) + (θ_2/n_2) ∑_{i=1}^{n_2} log P̂(W = x_i^2 | Y = 1) + log P̂(Y = 1) ≥
(θ_1/n_1) ∑_{i=1}^{n_1} log P̂(W = x_i^1 | Y = 0) + (θ_2/n_2) ∑_{i=1}^{n_2} log P̂(W = x_i^2 | Y = 0) + log P̂(Y = 0)
Here, the vector of parameters θ = (θ_1, θ_2) controls the relative "weighting" between the
message subjects and bodies, and will be fit discriminatively. Specifically, we will model
the class posteriors, which we denote by P̂_θ to make explicit the dependence on θ, as:3

P̂_θ(y|x) = P̂(y) P̂(x^1|y)^{θ_1/n_1} P̂(x^2|y)^{θ_2/n_2} /
  [ P̂(Y=0) P̂(x^1|Y=0)^{θ_1/n_1} P̂(x^2|Y=0)^{θ_2/n_2} + P̂(Y=1) P̂(x^1|Y=1)^{θ_1/n_1} P̂(x^2|Y=1)^{θ_2/n_2} ]    (7)
We had previously motivated our model as assigning different "weights" to different parts of
the document. A second reason for using this model is that the independence assumptions
of naive Bayes are too strong. Specifically, with a document of length n, the classifier
"assumes" that it has n completely independent pieces of evidence supporting its conclusion
about the document's label. Putting n_r in the denominator of the exponent as a
normalization factor can be viewed as a way of counteracting the overly strong independence
assumptions.4
After some simple manipulations, we obtain the following expression for P̂_θ(Y = 1|x):

P̂_θ(Y = 1|x) = 1 / (1 + exp(−a − θ_1 b_1 − ... − θ_R b_R))    (8)

where a = log(P̂(Y=1)/P̂(Y=0)) and b_r = (1/n_r) log(P̂(x^r|Y=1)/P̂(x^r|Y=0)). With this expression
for P̂_θ(y|x), we
see that it is very similar to the form of the class posteriors used by logistic regression, the
only difference being that in this case a is a constant calculated from the estimated class
priors. To make the parallel to logistic regression complete, we define b_0 = 1, redefine θ
as θ = (θ_0, θ_1, θ_2), and define a new class posterior

P̂_θ(Y = 1|x) = 1 / (1 + exp(−θ^T b))    (9)

3 When there is no risk of ambiguity, we will sometimes replace P(X = x|Y = y), P(Y = y|X = x),
P(W = x_i|Y = y), etc. with P(x|y), P(y|x), P(x_i|y).
4 θ_r can also be viewed as an "effective region length" parameter, where we assume that region r
of the document can be treated as only θ_r independent pieces of observation. For example, note that
if each region r of the document has θ_r words exactly, then this model reduces to naive Bayes.
Throughout the derivation, we had assumed that the parameters P̂(x|y) were fit generatively as in Equation (3) (and b is in turn derived from these parameters as described
above). It therefore remains only to specify how θ is chosen. One method would be to pick
θ by maximizing the conditional log-likelihood of the training set M = {(x^(i), y^(i))}_{i=1}^m:

θ = arg max_{θ'} ∑_{i=1}^m log P̂_{θ'}(y^(i)|x^(i))    (10)
However, the word generation probabilities that were used to calculate b were also trained
from the training set M. This procedure therefore fits the parameters θ to the training
data, using "features" b that were also fit to the data. This leads to a biased estimator.
Specifically, since what we care about is the generalization performance of the algorithm,
a better method is to pick θ to maximize the log-likelihood of data that wasn't used to
calculate the "features" b, because when we see a test example, we will not have had the
luxury of incorporating information from the test example into the b's (cf. [15, 12]). This
leads to the following "leave-one-out" strategy of picking θ:

θ = arg max_{θ'} ∑_{i=1}^m log P̂_{θ',−i}(y^(i)|x^(i)),    (11)
where P̂_{θ,−i}(y^(i)|x^(i)) is as given in Equation (9), except that each b_r is computed from
word generation probabilities that were estimated with the ith example of the training set
held out. We note that optimizing this objective to find θ is still the same optimization
problem as in logistic regression, and hence is convex and can be solved efficiently. Further, the word generation probabilities with the ith example left out can also be computed
efficiently.5
The predicted label for a new document under this method is arg max_{y∈Y} P̂_θ(y|x). We
call this method the normalized hybrid algorithm. For the sake of comparison, we will also
consider an algorithm in which the exponents in Equation (7) are not normalized by n_r.
In other words, we replace θ_r/n_r there by just θ_r. We refer to this latter method as the
unnormalized hybrid algorithm.
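A minimal sketch (ours; the per-region count tables, function names, and use of scipy are our own assumptions, and the fixed prior term a is folded into θ_0 b_0 as above) of the leave-one-out feature computation and the logistic fit of Eq. (11):

```python
# Sketch of the normalized hybrid training: leave-one-out naive Bayes
# log-odds per region as features b_r, then a convex logistic fit for theta.
import numpy as np
from scipy.optimize import minimize

def loo_features(doc_regions, y, counts, denoms):
    """b for one document. counts[r]: smoothed (2, V) count table of region r
    over the full training set; denoms[r]: its (2,) denominators."""
    b = [1.0]                                          # b_0 = 1 (bias feature)
    for r, x in enumerate(doc_regions):                # x: word indices
        num = counts[r].copy(); den = denoms[r].copy()
        np.add.at(num[y], x, -1.0); den[y] -= len(x)   # hold this document out
        pw = num / den[:, None]
        b.append((np.log(pw[1, x]) - np.log(pw[0, x])).sum() / max(len(x), 1))
    return np.array(b)

def fit_theta(B, ys):
    """theta = argmax of the leave-one-out conditional log-likelihood."""
    def nll(theta):
        z = B @ theta
        return np.logaddexp(0.0, -np.where(ys == 1, z, -z)).sum()
    return minimize(nll, np.zeros(B.shape[1])).x
```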
3 Experimental Results
We now describe the results of experiments testing the effectiveness of our methods. All
experiments were run using pairs of newsgroups from the 20newsgroups dataset [8] of
USENET news postings. When parsing this data, we skipped everything in the USENET
headers except the subject line; numbers and email addresses were replaced by special
tokens NUMBER and EMAILADDR; and tokens were formed after stemming.
In each experiment, we compare the performance of the basic naive Bayes algorithm with
that of the normalized hybrid algorithm and logistic regression with Gaussian priors on the
parameters. We used logistic regression with word-counts in the feature vectors (as in [6]),
which forms a discriminative-generative pair with multinomial naive Bayes. All results
reported in this section are averages over 10 random train-test splits.
Figure 1 plots learning curves for the algorithms, when used to classify between various
pairs of newsgroups. We find that in every experiment, for the training set sizes considered,
the normalized hybrid algorithm with R = 2 has test error that is either the lowest or very
near the lowest among all the algorithms. In particular, it almost always outperforms the
5
Specifically, by precomputing the numerator and denominator of Equation (3), we can later
remove any example by subtracting out the terms in the numerator and denominator corresponding
to that example.
[Figure 1 plots: test error against training set size for six newsgroup pairs: atheism vs religion.misc, pc.hardware vs mac.hardware, graphics vs mideast, atheism vs sci.med, autos vs motorcycles, and hockey vs christian.]
Figure 1: Plots of test error vs training size for several different newsgroup pairs. Red
dashed line is logistic regression; blue dotted line is standard naive Bayes; black solid line
is the hybrid algorithm. (Colors where available.) (If more training data were available,
logistic regression would presumably out-perform naive Bayes; cf. [6, 11].)
basic naive Bayes algorithm. The difference in performance is especially dramatic for small
training sets.
Although these results are not shown here, the hybrid algorithm with R = 2 (breaking the
document into two regions) outperforms R = 1. Further, the normalized version of the
hybrid algorithm generally outperforms the unnormalized version.
4 Theoretical Results
In this section, we give a distribution free uniform convergence bound for our algorithm.
Classical learning and VC theory indicates that, given a discriminative model with a small
number of parameters, typically only a small amount of training data should be required
to fit the parameters "well" [14]. In our model, a large number of parameters P̂ are fit
generatively, but only a small number (the θ's) are fit discriminatively. We would like
to show that only a small training set is required to fit the discriminative parameters θ.6
However, standard uniform convergence results do not apply to our problem, because the
"features" b_i given to the discriminative logistic regression component also depend on the
training set. Further, the θ_i's are fit using the leave-one-out training procedure, so that every
pair of training examples is actually dependent.
For our analysis, we assume the training set of size m is drawn i.i.d. from some distribution
D over X × Y. Although not necessary, for simplicity we assume that each document
has the same total number of words n = ∑_{i=1}^R n_i, though the lengths of the individual
regions may vary. (It also suffices to have an upper- and a lower-bound on document
length.) Finally, we also assume that each word occurs at most C_max times in a single
document, and that the distribution D from which training examples are drawn satisfies
ρ_min ≤ P(Y = 1) ≤ 1 − ρ_min, for some fixed ρ_min > 0.

6 For a result showing that naive Bayes' generatively fit parameters (albeit one using a different
event model) converge to their population (asymptotic) values after a number of training examples
that depends logarithmically on the size of the number of features, also see [11].
Note that we do not assume that the "naive Bayes assumption" (that words are conditionally
independent given the class label) holds. Specifically, even when the naive Bayes assumption does not hold, the naive Bayes algorithm (as well as our hybrid algorithm) can still be
applied, and our results apply to this setting.
Given a set M of m training examples, for a particular setting of the parameter θ, the
expected log likelihood of a randomly drawn test example is:

ℓ_M(θ) = E_{(x,y)∼D} log P̂_θ(y|x)    (12)

where P̂_θ is the probability model trained on M as described in the previous section, using
parameters P̂ fit to the entire training set. Our algorithm uses a leave-one-out estimate of
the true log likelihood; we call this the leave-one-out log likelihood:

ℓ̂_{M,−1}(θ) = (1/m) ∑_{i=1}^m log P̂_{θ,−i}(y^(i)|x^(i))    (13)

where P̂_{θ,−i} represents the probability model trained with the ith example left out.
We would like to choose θ to maximize ℓ_M, but we do not know ℓ_M. Now, it is well-known
that if we have some estimate ℓ̂ of a generalization measure ℓ, and if |ℓ̂(θ) − ℓ(θ)| ≤ ε
for all θ, then optimizing ℓ̂ will result in a value for θ that comes within 2ε of the best
possible value for ℓ [14]. Thus, in order to show that optimizing ℓ̂_{M,−1} is a good "proxy" for
optimizing ℓ_M, we only need to show that ℓ̂_{M,−1}(θ) is uniformly close to ℓ_M(θ). We have:
Theorem 1 Under the previous set of assumptions, in order to ensure that with probability
at least 1 − δ, we have |ℓ_M(θ) − ℓ̂_{M,−1}(θ)| < ε for all parameters θ such that ‖θ‖_∞ ≤ Θ, it
suffices that m = O(poly(1/δ, 1/ε, log n, log |W|, R, Θ)^R).
The full proof of this result is fairly lengthy, and is deferred to the full version of this
paper [13]. From the theorem, the number of training examples m required to fit the θ
parameters (under the fairly standard regularity condition that θ be bounded) depends only
on the logarithms of the document length n and the vocabulary size |W|. In our bound,
there is an exponential dependence on R; however, from our experience, R does not need
to be too large for significantly improved performance. In fact, our experimental results
demonstrate good performance for R = 2.
5 Calibration Curves
We now consider a second application of these ideas, to a text classification setting where
the data is not naturally split into different regions (equivalently, where R = 1). In this
setting we cannot use the "reweighting" power of the hybrid algorithm to reduce classification error. But, we will see that, by giving better class posteriors, our method still gives
improved performance as measured on accuracy/coverage curves.
An accuracy/coverage curve shows the accuracy (fraction correct) of a classifier if it is
asked only to provide x% coverage; that is, if it is asked only to label the x% of the test
data on which it is most confident. Accuracy/coverage curves towards the upper-right of the
graph mean high accuracy even when the coverage is high, and therefore good performance.
Accuracy value at coverage 100% is just the normal classification error. In settings where
both human and computer label documents, accuracy/coverage curves play a central role
in determining how much data has to be labeled by humans. They are also indicative of
the quality of a classifier's class posteriors, because a classifier with better class posteriors
would be able to better judge which x% of the test data it should be most confident on, and
achieve higher accuracy when it chooses to label that x% of the data.
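For reference, a minimal sketch (ours, not the authors' code) of how such a curve can be computed from a model's class posteriors by ranking test documents by confidence:

```python
# Sketch: accuracy/coverage curve from class posteriors P(Y=1|x).
import numpy as np

def accuracy_coverage(post1, y_true):
    """Returns (coverage, accuracy) arrays over all coverage levels."""
    post1, y_true = np.asarray(post1), np.asarray(y_true)
    conf = np.maximum(post1, 1.0 - post1)      # confidence in the argmax label
    pred = (post1 >= 0.5).astype(int)
    order = np.argsort(-conf)                  # most confident examples first
    correct = (pred[order] == y_true[order])
    n = np.arange(1, len(correct) + 1)
    return n / len(correct), np.cumsum(correct) / n
```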
Figure 2 shows accuracy/coverage curves for classifying several pairs of newsgroups from
the 20newsgroups dataset. Each plot is obtained by averaging the results of ten 50%/50%
random train/test splits. The normalized hybrid algorithm (R = 1) does significantly better
than naive Bayes, and has accuracy/coverage curves that are higher almost everywhere.
[Figure 2 plots: accuracy against coverage for six newsgroup pairs: atheism vs religion.misc, pc.hardware vs mac.hardware, graphics vs mideast, atheism vs sci.med, autos vs motorcycles, and hockey vs christian.]
Figure 2: Accuracy/Coverage curves for different newsgroups pairs. Black solid line is
our normalized hybrid algorithm with R = 1; magenta dash-dot line is naive Bayes; blue
dotted line is unnormalized hybrid, and red dashed line is logistic regression. (Colors where
available.)
For example, in Figure 2a, the normalized hybrid algorithm with R = 1 has a coverage
of over 40% at 95% accuracy, while naive Bayes' coverage is 0 for the same accuracy.
Also, the unnormalized algorithm has performance about the same as naive Bayes. Even in
examples where the various algorithms have comparable overall test error, the normalized
hybrid algorithm has significantly better accuracy/coverage.
6 Discussion and Related Work
This paper has described a hybrid generative/discriminative model, and presented experimental results showing that a simple hybrid model can perform better than either its purely
generative or discriminative counterpart. Furthermore, we showed that in order to fit the
parameters θ of the model, only a small number of training examples is required.
There have been a number of previous efforts to modify naive Bayes to obtain more empirically accurate posterior probabilities. Lewis and Gale [9] use logistic regression to recalibrate naive Bayes posteriors in an active learning task. Their approach is similar to the
lower-performing unnormalized version of our algorithm, with only one region. Bennett [1]
studies the problem of using asymmetric parametric models to obtain high quality probability estimates from the scores outputted by text classifiers such as naive Bayes. Zadrozny
and Elkan [16] describe a simple non-parametric method for calibrating naive Bayes probability estimates. While these methods can obtain good class posteriors, we note that in
order to obtain better accuracy/coverage, it is not sufficient to take naive Bayes' output
p(y|x) and find a monotone mapping from that to a set of hopefully better class posteriors
(e.g., [16]). Specifically, in order to obtain better accuracy/coverage, it is also important to
rearrange the confidence orderings that naive Bayes gives to documents (which our method
does because of the normalization).
Jaakkola and Haussler [3] describe a scheme in which the kernel for a discriminative classifier is extracted from a generative model. Perhaps the closest to our work, however, is
the commonly-used, simple "reweighting" of the language model and acoustic model in
speech recognition systems (e.g., [5]). Each of the two models is trained generatively; then
a single weight parameter is set using hold-out cross-validation.
In related work, there are also a number of theoretical results on the quality of leave-one-out estimates of generalization error. Some examples include [7, 2]. (See [7] for a brief
survey.) Those results tend to be for specialized models or have strong assumptions on the
model, and to our knowledge do not apply to our setting, in which we are also trying to fit
the parameters θ.
In closing, we have presented one hybrid generative/discriminative algorithm that appears
to do well on a number of problems. We suggest that future research in this area is poised
to bear much fruit. Some possible future work includes: automatically determining which
parameters to train generatively and which discriminatively; training methods for more
complex models with latent variables, that require EM to estimate both sets of parameters;
methods for taking advantage of the hybrid nature of these models to better incorporate
domain knowledge; handling missing data; and support for semi-supervised learning.
Acknowledgments. We thank Dan Klein, David Mulford and Ben Taskar for helpful conversations. Y. Shen is supported by an NSF graduate fellowship. This work was also supported by the Department of the Interior/DARPA under contract number NBCHD030010,
and NSF grant #IIS-0326249.
References
[1] Paul N. Bennett. Using asymmetric distributions to improve text classifier probability estimates.
In Proceedings of SIGIR-03, 26th ACM International Conference on Research and Development
in Information Retrieval, 2003.
[2] Luc P. Devroye and T. J. Wagner. Distribution-free performance bounds for potential function
rules. IEEE Transactions on Information Theory, 5, September 1979.
[3] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In
Advances in Neural Information Processing Systems 11, 1998.
[4] T. Jebara and A. Pentland. Maximum conditional likelihood via bound maximization and the
CEM algorithm. In Advances in Neural Information Processing Systems 11, 1998.
[5] D. Jurafsky and J. Martin. Speech and language processing. Prentice Hall, 2000.
[6] Kamal Nigam, John Lafferty, and Andrew McCallum. Using maximum entropy for text classification. In IJCAI-99 Workshop on Machine Learning for Information Filtering, 1999.
[7] Michael Kearns and Dana Ron. Algorithmic stability and sanity-check bounds for leave-one-out
cross-validation. Computational Learning Theory, 1997.
[8] Ken Lang. Newsweeder: learning to filter netnews. In Proceedings of the Ninth European
Conference on Machine Learning, 1997.
[9] David D. Lewis and William A. Gale. A sequential algorithm for training text classifiers. In
Proceedings of SIGIR-94, 17th ACM International Conference on Research and Development
in Information Retrieval, 1994.
[10] Andrew McCallum and Kamal Nigam. A comparison of event models for naive bayes text
classification. In AAAI-98 Workshop on Learning for Text Categorization, 1998.
[11] Andrew Y. Ng and Michael I. Jordan. On discriminative vs. generative classifiers: a comparison
of logistic regression and naive bayes. In NIPS 14, 2001.
[12] John C. Platt. Probabilistic outputs for support vector machines and comparisons to regularized likelihood methods. In A. Smola, P. Bartlett, B. Scholkopf, and D. Schuurmans, editors,
Advances in Large Margin Classifiers. MIT Press, 1999.
[13] R. Raina, Y. Shen, A. Y. Ng, and A. McCallum. Classification with hybrid generative/discriminative models. http://www.cs.stanford.edu/~rajatr/nips03.ps, 2003.
[14] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, 1998.
[15] David H. Wolpert. Stacked generalization. Neural Networks, 5(2):241-260, 1992.
[16] Bianca Zadrozny and Charles Elkan. Obtaining calibrated probability estimates from decision
trees and naive bayesian classifiers. In ICML '01, 2001.
Identifying Structure across Prepartitioned Data
Ido Dagan
Department of CS
Bar-Ilan University
Ramat-Gan, Israel, 52900

Zvika Marx
Neural Computation Center
The Hebrew University
Jerusalem, Israel, 91904

Eli Shamir
School for CS
The Hebrew University
Jerusalem, Israel, 91904
Abstract
We propose an information-theoretic clustering approach that
incorporates a pre-known partition of the data, aiming to identify
common clusters that cut across the given partition. In the standard
clustering setting the formation of clusters is guided by a single
source of feature information. The newly utilized pre-partition
factor introduces an additional bias that counterbalances the impact
of the features whenever they become correlated with this known
partition.
The resulting algorithmic framework was applied
successfully to synthetic data, as well as to identifying text-based
cross-religion correspondences.
1 Introduction
The standard task of feature-based data clustering deals with a single set of elements
that are characterized by a unified set of features. The goal of the clustering task is
to identify implicit constructs, or themes, within the clustered set, grouping together
elements that are characterized similarly by the features. In recent years there has
been growing interest in more complex clustering settings, in which additional
information is incorporated [1], [2]. Several such extensions ([3]-[5]) are based on
the information bottleneck (IB) framework [6], which facilitates coherent
information-theoretic representation of different information types.
In a recent line of research we have investigated the cross-dataset clustering task
[7], [8]. In this setting, some inherent a-priori partition of the clustered data to
distinct subsets is given. The clustering goal is to identify corresponding
(analogous) structures that cut across the different subsets, while ignoring internal
structures that characterize individual subsets. To accomplish this task, those
features that commonly characterize elements across the different subsets guide the
clustering process, while within-subset regularities are neutralized.
In [7], we presented a distance-based hard clustering algorithm for the coupled-clustering problem, in which the clustered data is pre-partitioned to two subsets. In
[8], our setting, generalized to pre-partitions of any number of subsets, was
addressed by a heuristic extension of the probabilistic IB algorithm, yielding
improved empirical results. Specifically, the algorithm in [8] was based on a
modification of the IB stable-point equation, which amplified the impact of features
characterizing a formed cluster across all, or most, subsets.
This paper describes an information-theoretic framework that motivates and extends
the algorithm proposed in [8]. The given pre-partitioning is represented via a
probability distribution variable, which may represent "soft" pre-partitioning of the
data, versus the strictly disjoint subsets assumed in the earlier cross-dataset
framework. Further, we present a new functional that captures the cross-partition
motivation. From the new functional, we derive a stable-point equation underlying
our algorithmic framework in conjunction with the corresponding IB equation.
Our algorithm was tested empirically on synthetic data and on a real-world text-based task that aimed to identify corresponding themes across distinct religions. We
have cross-clustered five sets of keywords that were extracted from topical corpora
of texts about Buddhism, Christianity, Hinduism, Islam and Judaism. In distinction
from standard clustering results, our algorithm reveals themes that are common to
all religions, such as sacred writings, festivals, narratives and myths and theological
principles, and avoids topical clusters that correspond to individual religions (for
example, "Christmas" and "Easter" are clustered together with "Ramadan" rather than
with "Church").
Finally, we have paid specific attention to the framework of clustering with side
information [4]. While this approach was presented for a somewhat different
mindset, it might be used directly to address clustering across pre-partitioned data.
We compare the technical details of the two approaches and demonstrate
empirically that clustering with side information does not seem appropriate for the
kind of cross-partition tasks that we explored.
2 The Information Bottleneck Method
Probabilistic ("soft") data clustering outputs, for each element x of the set being
clustered and each cluster c, an assignment probability p(c|x). The IB method [6]
interprets probabilistic clustering as lossy data compression. The given data is
represented by a random variable X ranging over the clustered elements. X is
compressed through another random variable C, ranging over the clusters. Every
element x is characterized by conditional probability distribution p(Y|x), where Y is
a third random variable taking the members y of a given set of features as values.
The IB method formalizes the clustering task as minimizing the IB functional:
L(IB) = I(C; X) − β I(C; Y) .    (1)
As known from information theory (Ch. 13 of [9]), minimizing the mutual
information I(C; X) optimizes distorted compression rate. A complementary bias to
maximize I(C; Y) is interpreted in [6] as articulating the level of relevance of Y to
the obtained clustering, inferred from the level by which C can predict Y. β is a free
parameter counterbalancing the two biases. It is shown in [6] that p(c|x) values that
minimize L(IB) satisfy the following equation:
p(c|x) = (1/z(β, x)) p(c) e^{−β D_KL[p(Y|x) ‖ p(Y|c)]} ,    (2)
where DKL stands for the Kullback-Leibler (KL) divergence, or relative entropy,
between two distributions and z(β, x) is a normalization function over C. Eq. (2)
implies that, optimally, x is assigned to c in proportion to their KL distance in a
feature distribution space, where the distribution p(Y|c) takes the role of a
representative, or centroid, of c. The feature variable Y is hence utilized as the
(exclusive) means to guide clustering, beyond the random nature of compression.

Start at time t = 0 and iterate the following update-steps, till convergence:
IB1:  p_t(c|x): initialize randomly or arbitrarily                      (t = 0)
      p_t(c|x) ∝ p_{t−1}(c) e^{−β D_KL[p(Y|x) ‖ p_{t−1}(Y|c)]}          (t > 0)
IB2:  p_t(c) = ∑_x p_t(c|x) p(x)
IB3:  p_t(y|c) = (1/p_t(c)) ∑_x p_t(c|x) p(y|x) p(x)
Figure 1: The Information Bottleneck iterative algorithm (with fixed β and |C|).
Figure 1 presents the IB iterative algorithm for a fixed value of β. The IB1 update
step follows Eq. (2). The other two steps, which are derived from the IB functional
as well, estimate the p(c) and p(y|c) values required for the next iteration. The
algorithm converges to a local minimum of the IB functional. The IB setting,
particularly the derivation of steps IB1 and IB3 of the algorithm, assumes that Y and
C are independent given X, that is: I(C; Y|X) = ∑_x p(x) I(C|x; Y|x) = 0.
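For concreteness, a minimal numpy sketch (ours, not the authors' code) of one IB run at fixed β, following steps IB1-IB3 over a given joint p(x, y):

```python
import numpy as np

def ib_run(pxy, K, beta, iters=200, init=None, seed=0):
    """One IB run at fixed beta; pxy is p(x, y) as an |X| x |Y| array."""
    rng = np.random.default_rng(seed)
    px = pxy.sum(axis=1)                              # p(x)
    py_x = pxy / px[:, None]                          # p(y|x)
    pc_x = rng.random((len(px), K)) if init is None else init.copy()
    pc_x /= pc_x.sum(axis=1, keepdims=True)           # IB1, t = 0
    for _ in range(iters):
        pc = px @ pc_x                                # IB2: p(c)
        py_c = (pc_x * px[:, None]).T @ py_x / pc[:, None]   # IB3: p(y|c)
        # IB1, t > 0: KL(p(y|x) || p(y|c)) for every (x, c) pair
        kl = (py_x[:, None, :] * (np.log(py_x[:, None, :] + 1e-12)
              - np.log(py_c[None, :, :] + 1e-12))).sum(axis=-1)
        logp = np.log(pc + 1e-12)[None, :] - beta * kl
        logp -= logp.max(axis=1, keepdims=True)
        pc_x = np.exp(logp)
        pc_x /= pc_x.sum(axis=1, keepdims=True)
    return pc_x, py_c
```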
The balancing parameter β affects the number of distinct clusters being formed in a
manner that resembles (inverse) temperature in physical systems. The higher β is
(i.e., the stronger the bias to construct C that predicts Y well), more distinct clusters
are required for encoding the data. For each |C| = 2, 3, ..., there is a minimal β
value, enabling the formation of |C| distinct clusters. Setting β to be smaller than
this critical value corresponding to the current |C| would result in two or more
clusters that are identical to one another. Based on this, the iterative algorithm is
applied repeatedly within a gradual cooling-like (deterministic annealing) scheme:
starting with random initialization of the p_0(c|x)'s, generate two clusters with the
critical β value, found empirically, for |C| = 2. Then, use a perturbation on the
obtained two-cluster configuration to initialize the p_0(c|x)'s for a larger set of
clusters and execute additional runs of the algorithm to identify the critical β value
for the larger |C|. And so on: each output configuration is used as a basis for a more
granular one. The final outcome is a "soft hierarchy" of probabilistic clusters.
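A sketch (ours; it assumes the ib_run helper above and that the critical β values have already been found empirically) of this annealing driver:

```python
def anneal(pxy, betas, seed=0):
    """betas: increasing critical beta values for |C| = 2, 3, ..."""
    rng = np.random.default_rng(seed)
    pc_x = None
    for K, beta in enumerate(betas, start=2):
        if pc_x is not None:          # grow and perturb the previous solution
            pc_x = np.hstack([pc_x, pc_x[:, -1:]])    # duplicate one cluster
            pc_x += 1e-2 * rng.random(pc_x.shape)
            pc_x /= pc_x.sum(axis=1, keepdims=True)
        pc_x, _ = ib_run(pxy, K, beta, init=pc_x)
    return pc_x
```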
3 Cross-partition Clustering
Cross-partition (CP) clustering introduces a factor additional to those considered in a
standard clustering setting: a pre-given partition of the clustered data. For
representing this factor we introduce the pre-partitioning variable W, ranging over
all parts w of the pre-given partition. Every data element x is associated with W
through a given probability distribution p(W|x). Our goal is to cluster the data, so
that the clusters C would not be correlated with W. We notice that Y, which is
intended to direct the formation of clusters, might be a-priori correlated with W, so
the formed clusters might end up being correlated with W as well. Our method aims
at eliminating this aspect of Y.
3.1 Information Defocusing
As noted, some of the information conveyed by Y characterizes structures correlated
with W, while the other part of the information characterizes the target cross-W
structures. We are interested in detecting the latter while filtering out the former.
However, there is no direct a-priori separation between the two parts of the Y-mediated information. Our strategy in tackling this difficulty is: we follow in
general Y's directions, as the IB method does, while avoiding Y's impact whenever it
entails undesired inter-dependencies of C and W.
Our strategy implies conflicting biases with regard to the mutual information I(C,Y):
it should be maximized in order to form meaningful clusters, but be minimized as
well in the specific context where Y entails C-W dependencies. Accordingly, we
propose a computational procedure directed by two distinct cost-terms in tandem.
The first one is the IB functional (Eq. 1), introducing the bias to maximize I(C,Y).
With this bias alone, Y might dictate (or "explain", in retrospect) substantial C-W
dependencies, implying a low I(C;W|Y) value.1 Hence, the guideline of preventing Y
from accounting for C-W dependencies is realized through an opposing bias of
maximizing I(C;W|Y) = ∑_y p(y) I(C|y; W|y). The second cost term, the Information
Defocusing (ID) functional, consequently counterbalances minimization of I(C,Y)
against the new bias:

L(ID) = I(C; Y) − γ I(C;W|Y) ,    (3)

where γ is a free parameter articulating the tradeoff between the biases. The ID
functional captures our goal of reducing the impact of Y selectively: "defocusing" a
specific aspect of the information Y conveys: the information correlated with W.
In a like manner to the stable-point equation of the IB functional (Eq. 2), we derive
the following stable-point equation for the ID functional:
p(c|y) = (1/z(γ, y)) p(c) ∏_w p(y|c,w)^{γ p(w)/(γ+1)} ,    (4)
where z(?,y) is a normalization function over C. The derivation relies on an
additional assumption, I(C;W) = 0, imposing the intended independence between C
and W (the detailed derivation will be described elsewhere).
The intuitive interpretation of Eq. (4) is as follows: a feature y is to be associated
with a cluster c in proportion to a weighted, though flattened, geometric mean of the
"W-projected centroids" p(y|c,w), priored by p(c).2 This scheme overweighs y's that
contribute to c evenly across W. Thus, clusters satisfying Eq. (4) are situated
around centroids biased towards evenly contributing features. The higher γ is, the
heavier the emphasis put on suppressing disagreements between the w's. For γ → ∞ a
plain weighted geometric-mean scheme is obtained. The inclusion of a step derived
from Eq. (4) in our algorithm (see below) facilitates convergence on a configuration
with centroids dominated by features that are evenly distributed across W.
3.2
The Cross-partition Clustering Algorithm
Our proposed cross-partition (CP) clustering algorithm (Fig. 2) seeks a clustering
configuration that optimizes simultaneously both the IB and ID functionals,
1 Notice that "Z explaining well the dependencies between A and B" is equivalent to "A
and B sharing little information in common given Z", i.e., a low I(A;B|Z). Complete
conditional independence is exemplified in the IB framework, assuming I(C;Y|X) = 0.
2 Eq. (4) resembles our suggestion in [8] to compute a geometric average over the
subsets; in the current paper this scheme is analytically derived from the ID functional.
Start at time t = 0 and iterate the following update-steps, till convergence:
CP1: p_t(c|x): initialize randomly or arbitrarily (t = 0);
     p_t(c|x) ∝ p_{t−1}(c) · e^(−β·DKL[p(Y|x) || p_{t−1}(Y|c)]) (t > 0)
CP2: p_t(c) = Σ_x p_t(c|x) p(x)
CP3: p*_t(y|c,w) = (1 / (p_t(c) p(w))) Σ_x p_t(c|x) p(y|x) p(w|x) p(x)
CP4: p*_t(c): initialize randomly or arbitrarily (t = 0);
     p*_t(c) = Σ_y p*_{t−1}(c|y) p(y) (t > 0)
CP5: p*_t(c|y) ∝ p*_t(c) · ∏_w p*_t(y|c,w)^(γ·p(w)/(γ+1))
CP6: p_t(y|c) = p*_t(c|y) p(y) / p*_t(c)
Figure 2: The cross-partition clustering iterative algorithm (with fixed β, γ, and |C|).
thus obtaining clusters that cut across the pre-given partition W. To this end, the
algorithm interleaves an iterative computation of the stable-point equations, and the
additional estimated parameters, for both functionals. Steps CP1, CP2 and CP6
correspond to the computations related to the IB functional, while steps CP3, CP4
and CP5, which compute a separate set of parameters (denoted by an asterisk),
correspond to the ID functional. Figure 3 summarizes the roles of the two
functionals in the dynamics of the CP algorithm. The two components of the
iterative cycle are tied together in steps CP3 and CP6, in which parameters from one
set are used as input to compute a parameter of the other set. The derivation of step
CP3 relies on an additional assumption, namely that C, Y and W are jointly
independent given X. This assumption, which extends to W the underlying
assumption of the IB setting that C and Y are independent given X, still entails the
IB stable point equation. At convergence, the stable point equations for both the IB
and ID functionals are satisfied, each by its own set of parameters (in steps CP1 and
CP5).
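For concreteness, one full sweep of the algorithm in Figure 2 can be sketched as follows, using dense numpy tables. The shapes, the epsilon smoothing and the normalization details are our assumptions; only the step structure follows the figure:

    import numpy as np

    def cp_sweep(p_c_given_x, p_star_c_given_y_prev, p_x,
                 p_y_given_x, p_w_given_x, beta, gamma, eps=1e-12):
        # Shapes: p_c_given_x (X,C); p_star_c_given_y_prev (C,Y);
        # p_x (X,); p_y_given_x (X,Y); p_w_given_x (X,W).
        p_y = p_x @ p_y_given_x
        p_w = p_x @ p_w_given_x
        # CP2: p_t(c) = sum_x p_t(c|x) p(x)
        p_c = p_x @ p_c_given_x
        # CP3: p*_t(y|c,w) = sum_x p_t(c|x) p(y|x) p(w|x) p(x) / (p_t(c) p(w))
        joint = np.einsum('xc,xy,xw,x->cyw',
                          p_c_given_x, p_y_given_x, p_w_given_x, p_x)
        p_star_y_cw = joint / (p_c[:, None, None] * p_w[None, None, :] + eps)
        # CP4: p*_t(c) = sum_y p*_{t-1}(c|y) p(y)
        p_star_c = p_star_c_given_y_prev @ p_y
        # CP5: p*_t(c|y) ~ p*_t(c) prod_w p*_t(y|c,w)^(gamma p(w)/(gamma+1))
        expo = gamma * p_w / (gamma + 1.0)
        log_geo = np.log(p_star_y_cw + eps) @ expo          # (C,Y)
        p_star_c_given_y = p_star_c[:, None] * np.exp(log_geo)
        p_star_c_given_y /= p_star_c_given_y.sum(0, keepdims=True) + eps
        # CP6: p_t(y|c) = p*_t(c|y) p(y) / p*_t(c)
        p_y_given_c = p_star_c_given_y * p_y[None, :] / (p_star_c[:, None] + eps)
        # CP1 for t+1: p(c|x) ~ p_t(c) exp(-beta DKL[p(Y|x) || p_t(Y|c)])
        log_ratio = (np.log(p_y_given_x[:, None, :] + eps)
                     - np.log(p_y_given_c[None, :, :] + eps))
        dkl = (p_y_given_x[:, None, :] * log_ratio).sum(axis=2)   # (X,C)
        p_next = p_c[None, :] * np.exp(-beta * dkl)
        p_next /= p_next.sum(1, keepdims=True)
        return p_next, p_star_c_given_y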
The deterministic annealing scheme, which gradually increases β over repeated runs
(see Sec. 2), is applied for the CP algorithm as well, with γ held fixed. For a given
target number of clusters |C|, the algorithm empirically converges with a wide range
of γ values.3
[Figure 3 diagram: the IB functional trades off minimizing I(C;X) against maximizing
I(C;Y); the ID functional trades off minimizing I(C;Y) against maximizing I(C;W|Y);
underlying assumptions: I(C;Y;W|X) = 0 and I(C;W) = 0.]
Figure 3: The interplay of the IB and the ID functionals in the CP algorithm.
3 High γ values tend to dictate centroids with features that are unevenly distributed
across W, resulting in shrinkage of some of the clusters. Further analysis will be
provided in future work.
4
Experimental Results
Our synthetic setting consisted of 75 virtual elements, evenly pre-partitioned into
three 25-element parts denoted X1, X2 and X3 (in our formalism, for each clustered
element x, p(w|x) = 1 holds for either w = 1, 2, or 3). On top of this pre-partition,
we partitioned the data twice, getting two (exhaustive) clustering configurations:
1. Target cross-W clustering: five clusters, each with representatives from all Xw's;
2. Masking within-w clustering: six clusters, each consisting of roughly half the
elements of either X1, X2 or X3, with no representatives from the other Xw's.
Each cluster, of both configurations, was characterized by a designated subset of
features. Masking clusters were designed to be more salient than target clusters:
they had more designated features (60 vs. 48 per cluster, i.e., 360 vs. 240 in total)
and their elements shared higher feature-element (virtual) co-occurrence counts with
those designated features (900 vs. 450 per element-feature pair). Noise (random
positive integer < 200) was added to all counts associating elements with their
designated features (for both within-w and cross-W clusters), as well as to roughly
a quarter of the zero counts associating elements with the rest of the features.
The plain IB method consistently produced configurations strongly correlated with
the masking clustering, while the CP algorithm revealed the target configuration.
We got (see Table 1A) almost perfect results in configurations of nearly equal-sized
cross-W clusters, and somewhat less perfect reconstruction in configurations of
diverging sizes (6, 9, 15, 21 and 24). Performance level was measured relative to
the optimal target-output cluster match by the proportion of elements correctly
assigned, where the assignment of an element x follows its highest p(c|x). The results
indicated were averaged over 200 runs. They were obtained for the optimal γ,
which was found to be higher in the diverging-sizes task.
In the text-based task, the clustered elements (keywords) were automatically
extracted from five distinct corpora addressing five religions: introductory web
pages, online magazines, encyclopedic entries etc., all downloaded from the
Internet. The clustered keyword set X was consequently pre-partitioned to disjoint
subsets {Xw}w∈W, one for each religion4 (|Xw| ≈ 200 for each w). We conducted
experiments simultaneously involving religion pairs as well as all five religions.
We took the features Y to be a set of words that commonly occur within all five
corpora (|Y| ≈ 7000). x-y co-occurrences were recorded within a ±5-word sliding
window truncated by sentence boundaries. β was fixed to a value (1.0) enabling the
formation of 20 clusters in all settings. The obtained clusters revealed interesting
cross-religion themes (see Sec. 1). For instance, the cluster (one of nine) capturing
the theme of sacred festivals: the three highest p(c|x) members within each religion
were Full-moon, Ceremony, Celebration (Buddhism); Easter, Sunday, Christmas
Table 1: Average correct assignment proportion scores for the synthetic task (A) and
Jaccard-coefficient scores for the religion keyword classification task (B).
A. Synthetic Data            IB     CP
equal-size clusters         .305   .985
non-equal clusters          .292   .827
B. Religion Data            religion pairs   all five (one case)
IB                          .200±.100        .104
Coupled Clustering [7]      .220±.138        n/a
CP                          .407±.144        .167
(cross-expert agreement on religion pairs: .462±.232)
4 A keyword x that appeared in the corpora of different religions was considered as a
distinct element for each religion, so the Xw were kept disjoint.
(Christianity); Puja, Ceremony, Festival (Hinduism); Id-al-Fitr, Friday, Ramadan,
(Islam); and Sukkoth, Shavuot, Rosh-Hodesh (Judaism). The closest cluster
produced by the plain IB method was poorer by far, including the Islamic Ramadan and
Id, and the Jewish Passover, Rosh-Hashanah and Sabbath (which our method also ranked
high), but no single related term from the other religions.
Our external evaluation standards were cross-religion keyword classes constructed
manually by experts of comparative religion studies. One such expert classification
involved all five religions, and eight classifications addressed religions in pairs.
Each of the eight religion-pair classifications was contributed by two independent
experts using the same keywords, so we could also assess the agreement between
experts. As an overlap measure we employed the Jaccard coefficient: the number of
element pairs co-assigned together by both one of the evaluated clusters and one of
the expert classes, divided by the number of pairs co-assigned by either our clusters
or the expert (or both). We did not assume the number of expert classes is known in
advance (as done in the synthetic experiments), so the results were averaged over all
configurations of 2-16 clusters in the hierarchy, for each experiment. The results shown in
Table 1B, a clear improvement relative to plain IB and the distance-based coupled
clustering [7], are, however, persistent when the number of clusters is taken to be
equal to the number of classes, or if only the best score in the hierarchy is considered.
The level of cross-expert agreement indicates that our results are reasonably close to
the scores expected in such a subjective task.
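The overlap measure used above can be sketched as follows; hard assignments are assumed for the produced clusters (each element taking its highest p(c|x)), and the dictionary interface is our own convention:

    from itertools import combinations

    def jaccard(clusters, classes):
        # Jaccard coefficient over co-assigned element pairs.
        # clusters, classes: dicts mapping element -> label.
        elements = sorted(set(clusters) & set(classes))
        both = ours = theirs = 0
        for a, b in combinations(elements, 2):
            in_ours = clusters[a] == clusters[b]
            in_theirs = classes[a] == classes[b]
            both += in_ours and in_theirs
            ours += in_ours
            theirs += in_theirs
        union = ours + theirs - both
        return both / union if union else 0.0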
5
Comparison to Related Work
The information bottleneck framework served as the basis for several approaches
that represent additional information in their clustering setting. The multivariate
information bottleneck (MIB) adapts the IB framework for networks of multiple
variables [3]. However, all variables in such networks are either compressed (like
X), or predicted (like Y). The incorporation of an empirical variable to be masked
or defocused in the sense of our W is not possible. Including such variables in the
MIB framework might be explored in future work.
Particularly relevant to our work is the IB-based method for extracting relevant
constructs with side information [4]. This approach addresses settings in which two
different types of features are distinguished explicitly: relevant versus irrelevant
ones, denoted by Y+ and Y−. Both types of features are incorporated within a single
functional to be minimized: L(IB-side-info) = I(C;X) − β ( I(C;Y+) − γ I(C;Y−) ), which
directly drives clustering to de-correlate C and Y−.
Formally, our setting can be mapped to the side information setting by regarding the
pre-partition W simply as the additional set of irrelevant features, giving symmetric
(and opposite) roles to W and Y. However, it seems that this view does not address
properly the desired cross-partition setting. In our setting, it is assumed that
clustering should be guided in general by Y, while W should only neutralize
particular information within Y that would otherwise yield the undesired correlation
between C and W (as described in Section 3.1). For that reason, the defocusing
functional ties the three variables together by conditioning the de-correlation of C
and W on Y, while its underlying assumption ensures the global de-correlation.
Indeed, our method was found empirically superior on the cross-dataset task. The
side-information IB method (the iterative algorithm with the best-scoring γ) achieves
a correct assignment proportion of 0.52 in both synthetic tasks, where our method
scored 0.99 and 0.83 (see Table 1A), and, in the religion-pair keyword classification
task, its Jaccard coefficient improved by 20% relative to plain IB (compared to our
100% improvement, see Table 1B).
6
Conclusions
This paper addressed the problem of clustering a pre-partitioned dataset, aiming to
detect new internal structures that are not correlated with the pre-given partition but
rather cut across its components. The proposed framework extends the cross-dataset
clustering algorithm [8], providing better formal grounding and representing any
pre-given (soft) partition of the dataset. Supported by empirical evidence, we
suggest that our framework is better suited for the cross-partition task than applying
the side-information framework [4], which was originally developed to address a
somewhat different setting. We also demonstrate substantial empirical advantage
over the distance-based coupled-clustering algorithm [7].
As an applied real-world goal, the algorithm successfully detects cross-religion
commonalities. This goal exemplifies the more general notion of detecting
analogies across different systems, which is a somewhat vague and non-consensual
task and therefore especially challenging for a computational framework. Our
approach can be viewed as an initial step towards principled identification of
"hidden" commonalities between substantially different real-world systems, while
suppressing the vast majority of attributes that are irrelevant for the analogy.
Further research may study the role of defocusing in supervised learning, where
some pre-given partitions might mask the role of underlying discriminative features.
Additionally, it would be interesting to explore relationships to other disciplines,
e.g., network information theory ([9], Ch. 14) which provided motivation for the
side-information approach. Finally, both frameworks (ours and side-information)
suggest the importance of dealing wisely with information that should not dictate
the clustering output directly.
Acknowledgments
We thank Yuval Krymolowski for helpful discussions and Tiina Mahlamäki, Eitan
Reich and William Shepard, for contributing the religion keyword classifications.
References
[1] Hofmann, T. (2001) Unsupervised learning by probabilistic latent semantic analysis.
Machine Learning, 42(1):177-196.
[2] Wagstaff K., Cardie C., Rogers S. and Schroedl S., 2001. Constrained K-Means
clustering with background knowledge. The 18th International Conference on Machine
Learning (ICML-2001), pp 577-584.
[3] Friedman N., Mosenzon O., Slonim N. & Tishby N. (2002) Multivariate information
bottleneck. The 17th conference on Uncertainty in Artificial Intelligence (UAI-17), pp. 152-161.
[4] Chechik G. & Tishby N. (2002) Extracting relevant structures with side information.
Advances in Neural Processing Information Systems 15 (NIPS'02).
[5] Globerson, A., Chechik G. & Tishby N. (2003) Sufficient dimensionality reduction.
Journal of Machine Learning Research, 3:1307-1331.
[6] Tishby, N., Pereira, F. C. & Bialek, W. (1999) The information bottleneck method. The
37th Annual Allerton Conference on Communication, Control, and Computing, pp. 368-379.
[7] Marx, Z., Dagan, I., Buhmann, J. M. & Shamir E. (2002) Coupled clustering: A method
for detecting structural correspondence. Journal of Machine Learning Research, 3:747-780.
[8] Dagan, I., Marx, Z. & Shamir E (2002) Cross-dataset clustering: Revealing corresponding
themes across multiple corpora. Proceedings of the 6th Conference on Natural Language
Learning (CoNLL-2002), pp. 15-21.
[9] Cover T. M. & Thomas J. A. (1991) Elements of Information Theory. John Wiley &
Sons, Inc., New York, New York.
| 2406 |@word eliminating:1 compression:3 seems:1 proportion:5 stronger:1 gradual:1 seek:1 accounting:1 p0:2 paid:1 reduction:1 initial:1 configuration:11 cp2:2 score:4 ours:1 suppressing:2 subjective:1 current:2 od:1 si:1 tackling:1 john:1 partition:21 hofmann:1 designed:1 update:3 v:3 alone:1 implying:1 half:1 cp3:4 intelligence:1 accordingly:1 detecting:3 contribute:1 allerton:1 five:8 constructed:1 direct:2 become:1 persistent:1 introductory:1 introduce:1 manner:2 inter:1 mask:1 indeed:1 expected:1 roughly:2 growing:1 detects:1 eck:1 automatically:1 little:1 window:1 tandem:1 provided:2 underlying:4 israel:3 what:1 kind:1 interpreted:1 substantially:1 developed:1 unified:1 formalizes:1 every:2 tie:1 ro:1 partitioning:3 control:1 positive:1 local:1 slonim:1 aiming:2 encoding:1 id:12 might:6 emphasis:1 initialization:1 resembles:2 twice:1 challenging:1 co:4 ramat:1 range:1 averaged:2 directed:1 globerson:1 x3:2 procedure:1 empirical:4 got:1 dictate:3 revealing:1 chechik:2 pre:19 word:2 suggest:2 close:1 put:1 context:1 applying:1 writing:1 equivalent:1 deterministic:2 center:1 maximizing:1 jewish:1 jerusalem:2 attention:1 starting:1 christianity:1 identifying:2 notion:1 analogous:1 shamir:3 pt:11 hierarchy:3 target:6 magazine:1 agreement:3 element:22 satisfying:1 particularly:2 utilized:2 cut:4 predicts:1 cooling:1 role:5 capture:2 ensures:1 cycle:1 keyword:6 cro:1 highest:2 substantial:2 principled:1 dynamic:1 basis:2 vague:1 represented:2 derivation:4 distinct:8 artificial:1 formation:4 outcome:1 ceremony:2 exhaustive:1 heuristic:1 larger:2 s:1 otherwise:1 compressed:2 jointly:1 final:1 online:1 interplay:1 advantage:1 judaism:2 took:1 propose:2 reconstruction:1 relevant:4 till:2 amplified:1 adapts:1 intuitive:1 getting:1 convergence:4 cluster:43 regularity:1 comparative:1 perfect:2 converges:2 derive:2 measured:1 school:1 keywords:3 eq:8 c:2 predicted:1 implies:2 direction:1 guided:2 correct:2 attribute:1 virtual:2 rogers:1 clustered:11 extension:2 strictly:1 hold:1 around:1 considered:3 exp:1 algorithmic:2 predict:1 clu:1 achieves:1 commonality:2 narrative:1 festival:3 neutralize:1 successfully:2 weighted:2 minimization:1 sunday:1 aim:1 rather:2 cr:1 shrinkage:1 conjunction:1 derived:3 exemplifies:1 improvement:2 consistently:1 properly:1 indicates:1 centroid:5 sense:1 detect:1 helpful:1 textbased:1 hidden:1 cp6:3 interested:1 classification:6 denoted:3 priori:3 constrained:1 initialize:4 ramadan:3 mutual:2 equal:4 construct:3 manually:1 identical:1 unsupervised:1 nearly:1 icml:1 future:2 minimized:2 inherent:1 randomly:3 simultaneously:2 divergence:1 individual:2 intended:2 consisting:1 opposing:1 william:1 friedman:1 interest:1 evaluation:1 introduces:2 yielding:1 held:1 poorer:1 mib:2 desired:1 minimal:1 instance:1 formalism:1 soft:4 earlier:1 cover:1 assignment:4 cost:2 introducing:1 addressing:1 subset:12 entry:1 masked:1 conducted:1 too:1 tishby:4 optimally:1 characterize:2 dependency:5 ido:1 accomplish:1 synthetic:7 disjointed:1 st:1 international:1 probabilistic:5 discipline:1 together:5 satisfied:1 recorded:1 external:1 expert:9 friday:1 ilan:1 de:3 sec:2 coefficient:3 inc:1 satisfy:1 mp:1 explicitly:1 view:1 characterizes:2 start:2 len:1 masking:3 minimize:1 formed:3 ass:1 moon:1 maximized:1 correspond:3 identify:5 yield:1 identification:1 produced:2 lu:1 cardie:1 served:1 drive:1 explain:1 fo:1 whenever:2 sharing:1 ed:1 against:1 celebration:1 involved:1 pp:4 conveys:1 associated:2 newly:1 dataset:7 knowledge:1 dimensionality:1 higher:4 originally:1 supervised:1 follow:1 
improved:2 execute:1 though:1 strongly:1 evaluated:1 done:1 implicit:1 myth:1 correlation:3 retrospect:1 web:1 o:1 indicated:1 lossy:1 grounding:1 consisted:1 counterbalancing:1 former:1 hence:2 assigned:4 analytically:1 symmetric:1 leibler:1 buddhism:2 semantic:1 deal:1 undesired:2 noted:1 oc:1 generalized:1 theoretic:3 demonstrate:2 complete:1 cp:8 temperature:1 ranging:3 cp4:2 common:3 superior:1 functional:15 quarter:1 empirically:5 physical:1 conditioning:1 shepard:1 interpretation:1 imposing:1 similarly:1 inclusion:1 language:1 had:1 stable:7 neutralized:1 entail:3 interleaf:1 reich:1 etc:1 closest:1 own:1 recent:2 multivariate:2 optimizes:2 irrelevant:3 arbitrarily:3 scoring:1 minimum:1 additional:9 somewhat:4 employed:1 maximize:2 sliding:1 full:1 multiple:2 technical:1 match:1 characterized:4 cross:23 divided:1 dkl:4 impact:4 involving:1 iteration:1 represent:2 normalization:2 background:1 addressed:3 annealing:2 unevenly:1 source:1 ot:1 biased:1 rest:1 tend:1 facilitates:2 member:2 incorporates:1 seem:1 integer:1 extracting:2 structural:1 revealed:2 iterate:2 affect:1 independence:2 associating:2 opposite:1 interprets:1 regarding:1 tradeoff:1 bottleneck:6 six:1 heavier:1 articulating:2 york:2 nine:1 cp5:3 repeatedly:1 detailed:1 aimed:1 clear:1 situated:1 generate:1 cp1:3 wisely:1 notice:2 estimated:1 disjoint:2 per:2 correctly:1 salient:1 kept:1 vast:1 year:1 run:3 eli:1 inverse:1 uncertainty:1 distorted:1 extends:3 almost:1 eitan:1 separation:1 summarizes:1 jaccard:3 conll:1 capturing:1 ki:1 internet:1 correspondence:2 krymolowski:1 annual:1 occur:1 incorporation:1 ri:2 x2:1 dominated:1 aspect:2 relatively:3 department:1 designated:4 across:14 describes:1 smaller:1 son:1 partitioned:6 modification:1 gradually:1 defocusing:4 wagstaff:1 taken:1 equation:9 ib1:3 count:3 end:2 eight:2 appropriate:1 disagreement:1 occurrence:2 distinguished:1 thomas:1 assumes:1 clustering:39 top:1 eri:1 gan:1 xw:1 giving:1 especially:1 added:1 realized:1 schroedl:1 strategy:2 exclusive:1 rt:1 bialek:1 ow:1 distance:4 separate:1 mapped:1 thank:1 majority:1 evenly:4 me:1 reason:1 assuming:1 relationship:1 providing:1 minimizing:2 hebrew:2 info:1 guideline:1 motivates:1 contributed:1 enabling:2 truncated:1 incorporated:2 communication:1 topical:2 perturbation:1 inferred:1 namely:1 required:2 kl:2 pair:9 sentence:1 coherent:1 distinction:1 conflicting:1 nip:1 address:4 beyond:1 bar:1 below:1 exemplified:1 appeared:1 marx:3 including:2 critical:3 overlap:1 difficulty:1 ranked:1 natural:1 islam:2 buhmann:1 counterbalance:2 scheme:5 representing:2 church:1 coupled:4 text:3 geometric:3 contributing:2 relative:1 easter:2 suggestion:1 men:1 filtering:1 resu:1 interesting:2 versus:2 granular:1 analogy:2 mindset:1 asterisk:1 downloaded:1 conveyed:1 sufficient:1 principle:1 balancing:1 encyclopedic:1 elsewhere:1 supported:1 free:2 bias:10 guide:2 side:10 formal:1 explaining:1 dagan:3 characterizing:1 taking:1 wide:1 distributed:2 regard:1 boundary:1 plain:5 world:3 avoids:1 stand:1 preventing:1 commonly:2 projected:1 far:1 ib2:1 correlate:1 functionals:5 kullback:1 dealing:1 christmas:2 global:1 reveals:1 uai:1 corpus:5 assumed:2 discriminative:1 iterative:7 latent:1 table:5 additionally:1 ib3:2 nature:1 reasonably:1 ignoring:1 obtaining:1 investigated:1 complex:1 did:1 motivation:2 noise:1 scored:1 repeated:1 complementary:1 fig:1 representative:3 wiley:1 theme:6 pereira:1 tied:1 ib:34 third:1 specific:3 explored:2 evidence:1 grouping:1 flattened:1 importance:1 mosenzon:1 suited:1 entropy:1 lt:1 simply:1 
explore:1 religion:24 ch:2 relies:2 extracted:2 ma:1 conditional:2 goal:6 sized:1 viewed:1 consequently:2 towards:2 shared:1 hard:1 specifically:1 reducing:1 yuval:1 total:1 diverging:2 la:1 meaningful:1 selectively:1 formally:1 internal:2 latter:1 relevance:1 tested:1 avoiding:1 correlated:8 |
1,548 | 2,407 | Tree-structured approximations by expectation
propagation
Thomas Minka
Department of Statistics
Carnegie Mellon University
Pittsburgh, PA 15213 USA
[email protected]
Yuan Qi
Media Laboratory
Massachusetts Institute of Technology
Cambridge, MA 02139 USA
[email protected]
Abstract
Approximation structure plays an important role in inference on loopy
graphs. As a tractable structure, tree approximations have been utilized
in the variational method of Ghahramani & Jordan (1997) and the sequential projection method of Frey et al. (2000). However, belief propagation represents each factor of the graph with a product of single-node
messages. In this paper, belief propagation is extended to represent factors with tree approximations, by way of the expectation propagation
framework. That is, each factor sends a ?message? to all pairs of nodes
in a tree structure. The result is more accurate inferences and more frequent convergence than ordinary belief propagation, at a lower cost than
variational trees or double-loop algorithms.
1
Introduction
An important problem in approximate inference is improving the performance of belief
propagation on loopy graphs. Empirical studies have shown that belief propagation (BP)
tends not to converge on graphs with strong positive and negative correlations (Welling
& Teh, 2001). One approach is to force the convergence of BP, by appealing to a freeenergy interpretation (Welling & Teh, 2001; Teh & Welling, 2001; Yuille, In press 2002).
Unfortunately, this doesn?t really solve the problem because it dramatically increases the
computational cost and doesn?t necessarily lead to good results on these graphs (Welling &
Teh, 2001).
The expectation propagation (EP) framework (Minka, 2001a) gives another interpretation
of BP, as an algorithm which approximates multi-variable factors by single-variable factors
(f (x1 , x2 ) ? f?1 (x1 )f?2 (x2 )). This explanation suggests that it is BP?s target approximation which is to blame, not the particular iterative scheme it uses. Factors which encode
strong correlations should not be well approximated in this way. The connection between
failure to converge and poor approximation holds true for EP algorithms in general, as
shown by Minka (2001a) and Heskes & Zoeter (2002).
Yedidia et al. (2000) describe an extension of BP involving the Kikuchi free-energy. The
resulting algorithm resembles BP on a graph of node clusters, where again multi-variable
factors are decomposed into independent parts (f (x1 , x2 , x3 ) ? f?1 (x1 )f?23 (x2 , x3 )).
In this paper, the target approximation of BP is enriched by exploiting the connection to
expectation propagation. Instead of approximating each factor by disconnected nodes or
clusters, it is approximated by a tree distribution. The algorithm is a strict generalization of
belief propagation, because if the tree has no edges, then the results are identical to (loopy)
belief propagation.
This approach is inspired by previous work employing trees. For example, Ghahramani
& Jordan (1997) showed that tree structured approximations could improve the accuracy
of variational bounds. Such bounds are tuned to minimize the ?exclusive? KL-divergence
KL(q||p), where q is the approximation. Frey et al. (2000) criticized this error measure
and described an alternative method for minimizing the ?inclusive? divergence KL(p||q).
Their method, which sequentially projects graph potentials onto a tree structure, is closely
related to expectation propagation and the method in this paper. However, their method is
not iterative and therefore sensitive to the order in which the potentials are sequenced.
There are also two tangentially related papers by Wainwright et al. (2001); Wainwright
et al. (2002). In the first paper, a ?message-free? version of BP was derived, which used
multiple tree structures to propagate evidence. The results it gives are nevertheless the
same as BP. In the second paper, tree structures were used to obtain an upper bound on
the normalizing constant of a Markov network. The trees produced by that method do not
necessarily approximate the original distribution well.
The following section describes the EP algorithm for updating the potentials of a tree approximation with known structure. Section 3 then describes the method we use to choose
the tree structure. Section 4 gives numerical results on various graphs, comparing the new
algorithm to BP, Kikuchi, and variational methods.
2
Updating the tree potentials
This section describes an expectation-propagation algorithm to approximate a given distribution (of arbitrary structure) by a tree with known structure. It elaborates section 4.2.2
of Minka (2001b), with special attention to efficiency. Denote the original distribution by
p(x), written as a product of factors:
p(x) = ∏_i fi(x)    (1)
For example, if p(x) is a Bayesian network or Markov network, the factors are conditional
probability distributions or potentials which each depend on a small subset of the variables
in x. In this paper, the variables are assumed to be discrete, so that the factors fi (x) are
simply multidimensional tables.
2.1
Junction tree representation
The target approximation q(x) will have pairwise factors along a tree T :
q(x) = ∏_{(j,k)∈T} q(xj, xk) / ∏_{s∈S} q(xs)    (2)
In this notation, q(xs ) is the marginal distribution for variable xs and q(xj , xk ) is the
marginal distribution for the two variables xj and xk . These are going to be stored as multidimensional tables. The division is necessary to cancel over-counting in the numerator. A
useful way to organize these divisions is to construct a junction tree connecting the cliques
(j, k) ∈ T (Jensen et al., 1990). This tree has a different structure than T: the nodes in the
junction tree represent cliques in T , and the edges in the junction tree represent variables
which are shared between cliques. These separator variables S in the junction tree are
[Figure 1 panels: the graph of p; the tree q; the junction tree of q.]
Figure 1: Approximating a complete graph p by a tree q. The junction tree of q is used to
organize computations.
exactly the variables that go in the denominator of (2). Note that the same variable could
be a separator more than once, so technically S is a multiset.
Figure 1 shows an example of how this all works. We want to approximate the distribution
p(x), which has a complete graph, by q(x), whose graph is a spanning tree. The marginal
representation of q can be directly read off of the junction tree:
q(x) = q(x1, x4) q(x2, x4) q(x3, x4) / (q(x4) q(x4))    (3)
2.2
EP updates
The algorithm iteratively tunes q(x) so that it matches p(x) as closely as possible, in the
sense of "inclusive" KL-divergence. Specifically, q tries to preserve the marginals and
pairwise marginals of p:
q(xj) ≈ p(xj)    (4)
q(xj, xk) ≈ p(xj, xk),  (j, k) ∈ T    (5)
Expectation propagation is a general framework for approximating distributions of the form
(1) by approximating the factors one by one. The final approximation q is then the product
of the approximate factors. The functional form of the approximate factors is determined
by considering the ratio of two different q's. In our case, this leads to approximations of
the form
fi(x) ≈ f̃i(x) = ∏_{(j,k)∈T} f̃i(xj, xk) / ∏_{s∈S} f̃i(xs)    (6)
A product of such factors gives a distribution of the desired form (2). Note that f̃i(xj, xk)
is not a proper marginal distribution, but just a non-negative function of two variables.
The algorithm starts by initializing the clique and separator potentials on the junction tree
to 1. If a factor in p only depends on one variable, or variables which are adjacent in T ,
then its approximation is trivial. It can be multiplied into the corresponding clique potential
right away and removed from further consideration. The remaining factors in p, the off-tree
factors, have their approximations f̃i initialized to 1.
To illustrate, consider the graph of figure 1. Suppose all the potentials in p are pairwise, one
for each edge. The edges {(1, 4), (2, 4), (3, 4)} are absorbed directly into q. The off-tree
edges are {(1, 2), (1, 3), (2, 3)}.
The algorithm then iteratively passes through the off-tree factors in p, performing the following three steps until all f̃i converge:
(a) Deletion. Remove f̃i from q to get an "old" approximation q\i:
q\i(xj, xk) = q(xj, xk) / f̃i(xj, xk),  (j, k) ∈ T    (7)
q\i(xs) = q(xs) / f̃i(xs),  s ∈ S    (8)
(b) Incorporate evidence. Form the product fi(x) q\i(x), by considering fi(x) as "evidence"
for the junction tree. Propagate the evidence to obtain new clique marginals q(xj, xk) and
separators q(xs) (details below).
(c) Update. Re-estimate f̃i by division:
f̃i(xj, xk) = q(xj, xk) / q\i(xj, xk),  (j, k) ∈ T    (9)
f̃i(xs) = q(xs) / q\i(xs),  s ∈ S    (10)
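As a schematic sketch of this three-step loop over dense numpy tables (the propagate helper stands in for step (b), and the table bookkeeping is our assumption, not the paper's code):

    import numpy as np

    def ep_tree(off_tree_factors, q_clique, q_sep, propagate, n_sweeps=10):
        # q_clique[(j,k)] and q_sep[s] are numpy tables on the junction tree.
        # Each approximate factor f~_i keeps one table per clique/separator,
        # initialized to 1 so that deletion initially leaves q unchanged.
        keys = list(q_clique) + list(q_sep)
        tables = {**q_clique, **q_sep}
        f = {i: {k: np.ones_like(tables[k]) for k in keys}
             for i in off_tree_factors}
        for _ in range(n_sweeps):
            for i, factor_i in off_tree_factors.items():
                # (a) deletion: q\i = q / f~_i, table by table
                q_del = {k: tables[k] / f[i][k] for k in keys}
                # (b) incorporate evidence: treat factor_i as evidence and
                # propagate on the junction tree (assumed helper)
                tables = propagate(q_del, factor_i)
                # (c) update: f~_i = q / q\i (guarding against zeros)
                for k in keys:
                    f[i][k] = tables[k] / np.maximum(q_del[k], 1e-300)
        return tables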
2.3
Incorporating evidence by cutset conditioning
The purpose of the "incorporate evidence" step is to find a distribution q minimizing
KL(fi(x) q\i || q). This is equivalent to matching the marginal distributions corresponding
to each clique in q. By definition, fi depends on a set of variables which are not adjacent
in T, so the graph structure corresponding to fi(x) q\i(x) is not a tree, but has one or more
loops. One approach is to apply a generic exact inference algorithm to fi(x) q\i(x) to obtain the desired marginals, e.g. construct a new junction tree in which fi(x) is a clique and
propagate evidence in this tree. But this does not exploit the fact that we already have a
junction tree for q\i on which we can perform efficient inference.
Instead we use a more efficient approach, Pearl's cutset conditioning algorithm, to incorporate the evidence. Suppose fi(x) depends on a set of variables V. The domain of
fi(x) is the set of all possible assignments to V. Find the clique (j, k) ∈ T which has the
largest overlap with this domain; call this the root clique. Then enumerate the rest of the
domain V\(xj, xk). For each possible assignment to these variables, enter it as evidence
in q's junction tree and propagate to get marginals and an overall scale factor (which is the
probability of that assignment). When the variables V\(xj, xk) are fixed, entering evidence
simply reduces to zeroing out conflicting entries in the junction tree, and multiplying the
root clique (j, k) by fi(x). After propagating evidence multiple times, average the results
together according to their scale factors, to get the final marginals and separators of q.
Continuing the example of figure 1, suppose we want to process edge (1, 2), whose factor
is f1(x1, x2). When added to q, this creates a loop. We cut the loop by conditioning on the
variable with smallest arity. Suppose x1 is binary, so we condition on it. The other clique,
(2, 4), becomes the root. In one case, the evidence is (x1 = 0, f1(0, x2)) and in the other it
is (x1 = 1, f1(1, x2)). Propagating evidence for both cases and averaging the results gives
the new junction tree potentials.
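A sketch of this conditioning loop in Python; the junction-tree object and its clamp/multiply_clique/propagate/marginals helpers are hypothetical interfaces of our own, and only the enumerate-and-average logic follows the text:

    import itertools
    import numpy as np

    def incorporate_by_cutset(jt, f_i, V, root, card):
        # jt: junction tree with assumed helpers; card[v] is the arity of v
        cutset = [v for v in V if v not in root]
        results, scales = [], []
        for values in itertools.product(*(range(card[v]) for v in cutset)):
            assign = dict(zip(cutset, values))
            qc = jt.clamp(assign)                 # zero out conflicting entries
            qc.multiply_clique(root, f_i.restrict(assign))
            scales.append(qc.propagate(root))     # returns P(assignment)
            results.append(qc.marginals())
        w = np.array(scales, dtype=float)
        w /= w.sum()
        # weighted average of the per-assignment marginal tables
        return {k: sum(wi * m[k] for wi, m in zip(w, results))
                for k in results[0]}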
Because it is an expectation-propagation algorithm, we know that a fixed point always
exists, but we may not always find one. In these cases, the algorithm could be stabilized by
a stepsize or double-loop iteration. But overall the method is very stable, and in this paper
no convergence control is used.
2.4
Within-loop propagation
A further optimization is also used, by noting that evidence does not need to be propagated
to the whole junction tree. In particular, it only needs to be propagated within the subtree
that connects the nodes in V. Evidence propagated to the rest of the tree will be exactly
canceled by the separators, so even though the potentials may change, the ratios in (2) will
not. For example, when we process edge (1, 2) in figure 1, there is no need to propagate
evidence to clique (3, 4), because when q(x3, x4) is divided by the separator q(x4), we
have q(x3|x4), which is the same before and after the evidence.
Thus evidence is propagated as follows: first collect evidence from V to the root, then distribute evidence from the root back to V, bypassing the rest of the tree (these operations are
defined formally by Jensen et al. (1990)). In the example, this means we collect evidence
from clique (1, 4) to the root (2, 4), then distribute back from (2, 4) to (1, 4), ignoring
(3, 4). This simplification also means that we don't need to store f̃i for the cliques that are
never updated by factor i. When moving to the next factor, once we've designated the root
for that factor, we collect evidence from the previous root. In this way, the results are the
same as if we always propagated evidence to the whole junction tree.
3
Choosing the tree structure
This section describes a simple method to choose the tree structure. It leaves open the
problem of finding the "optimal" approximation structure; instead, it presents a simple rule
which works reasonably well in practice.
Intuitively, we want edges between the variables which are the most correlated. The approach is based on Chow & Liu (1968): estimate the mutual information between adjacent
nodes in p's graph, call this the "weight" of the edge between them, and then find the spanning tree with maximal total weight. The mutual information between nodes requires an
estimate of their joint distribution. In our implementation, this is obtained from the product
of factors involving only these two nodes, i.e. the single-node potentials times the edge between them. While crude, it does capture the amount of correlation provided by the edge,
and thus whether we should have it in the approximation.
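A sketch of this structure-selection rule, using Kruskal's algorithm for the maximum-weight spanning tree; the pairwise joints are assumed to be given as normalized 2-D tables, estimated as described from the single-node potentials and the connecting edge:

    import numpy as np

    def mutual_information(p_jk):
        pj = p_jk.sum(1, keepdims=True)
        pk = p_jk.sum(0, keepdims=True)
        nz = p_jk > 0
        return float((p_jk[nz] * np.log(p_jk[nz] / (pj @ pk)[nz])).sum())

    def max_spanning_tree(n_nodes, edge_joints):
        # edge_joints: dict {(j,k): table p(x_j, x_k)}; Kruskal on MI weights
        parent = list(range(n_nodes))
        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]
                a = parent[a]
            return a
        edges = sorted(edge_joints,
                       key=lambda e: -mutual_information(edge_joints[e]))
        tree = []
        for j, k in edges:
            rj, rk = find(j), find(k)
            if rj != rk:
                parent[rj] = rk
                tree.append((j, k))
        return tree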
4
Numerical results
4.1
The four-node network
This section illustrates the algorithm on a concrete problem, comparing it to other methods
for approximate inference. The network and approximation will be the ones pictured in
figure 1, with all nodes binary. The potentials were chosen randomly and can be obtained
from the authors' website.
Five approximate inference methods were compared. The proposed method (TreeEP) used
the tree structure specified in figure 1. Mean-field (MF) fit a variational bound with independent variables, and TreeVB fit a tree-structured variational bound, with the same
structure as TreeEP. TreeVB was implemented using the general method described by
Wiegerinck (2000), with the same junction tree optimizations as in TreeEP.
Generalized belief propagation (GBP) was implemented using the parent-child algorithm
of Yedidia et al. (2002) (with special attention to the damping described in section 8).
We also used GBP to perform ordinary loopy belief propagation (BP). Our implementation
tries to be efficient in terms of FLOPS, but we do not know if it is the fastest possible. GBP
and BP were first run using stepsize 0.5, and if they didn't converge, the stepsize was halved and the run started over.
The time for these "trial runs" was not counted.
Method    FLOPS   E[x1]   E[x2]   E[x3]   E[x4]   Error
Exact       200   0.474   0.468   0.482   0.536   0
TreeEP      800   0.467   0.459   0.477   0.535   0.008
GBP        2200   0.467   0.459   0.477   0.535   0.008
TreeVB    11700   0.460   0.460   0.476   0.540   0.014
BP          500   0.499   0.499   0.5     0.501   0.035
MF        11500   0.000   0.000   0.094   0.946   0.474
Table 1: Node means estimated by various methods (TreeEP = the proposed method, BP =
loopy belief propagation, GBP = generalized belief propagation on triangles, MF = mean-field, TreeVB = variational tree). FLOPS are rounded to the nearest hundred.
The algorithms were all implemented in Matlab using Kevin Murphy's BNT toolbox (Murphy, 2001). Computational cost was measured by the number of floating-point operations
(FLOPS). Because the algorithms are iterative and can be stopped at any time to get a result, we used a "5% rule" to determine FLOPS. The algorithm was run for a large number
of iterations, and the error at each iteration was computed. At each iteration, we then get
an error bound, which is the maximum error from that iteration onwards. The first iteration
whose error bound is within 5% of the final error is chosen for the official FLOP count.
(The official error is still the final error.)
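This stopping rule can be written down directly; the small sketch below assumes the per-iteration errors and cumulative FLOP counts were recorded, and that the final error is positive:

    def five_percent_flops(errors, flops):
        # errors[i], flops[i]: error and cumulative FLOPs after iteration i
        final = errors[-1]
        # error bound at iteration i = max error from i onwards
        bound = list(errors)
        for i in range(len(errors) - 2, -1, -1):
            bound[i] = max(bound[i], bound[i + 1])
        for i, b in enumerate(bound):
            if b <= 1.05 * final:   # first bound within 5% of final error
                return flops[i], final
        return flops[-1], final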
The results are shown in table 1. TreeEP is more accurate than BP, with less cost than
TreeVB and GBP. GBP was run with clusters {(1, 2, 4), (1, 3, 4), (2, 3, 4)}. This gives the
same result as TreeEP, because these clusters are exactly the off-tree loops.
4.2
Complete graphs
The next experiment tests the algorithms on complete graphs of varying size. The graphs
have random single-node
and pairwise potentials,
of the form fi (xj ) = [exp(?j ) exp(??j )]
"
#
exp(wjk ) exp(?wjk )
and fi (xj , xk ) =
. The ?external fields? ?j were drawn inexp(?wjk ) exp(wjk )
dependently from a Gaussian with mean 0 and standard deviation 1. The ?couplings?
? w jk
were drawn independently from a Gaussian with mean 0 and standard deviation 3/ n ? 1,
where n is the number of nodes. Each node has n ? 1 neighbors, so this tries to keep the
overall coupling level constant.
Figure 2(a) shows the approximation error as n increases. For each n, 10 different potentials were drawn, giving 110 networks in all. For each one, the maximum absolute
difference between the estimated means and exact means was computed. These errors are
averaged over potentials and shown separately for each graph size. TreeEP and TreeVB
always used the same structure, picked according to section 3. TreeEP outperforms BP
consistently, but TreeVB does not.
For this type of graph, we found that GBP works well with clusters in a "star" pattern, i.e.
the clusters are {(1, 2, 3), (1, 3, 4), (1, 4, 5), ..., (1, n, 2)}. Node "1" is the center of the star,
and was chosen to be the node with highest average coupling to its neighbors. As shown
in figure 2(a), this works much better than using all triples of nodes, as done by Kappen &
Wiegerinck (2001). Note that if TreeEP is given a similar "star" structure, the results are the
same as GBP. This is because the GBP clusters coincide with the off-tree loops. In general,
if the off-tree loops are triangles, then GBP on those triangles will give identical results.
Figure 2(b) shows the cost as n increases. TreeEP and TreeVB scale the best, with TreeEP
being the fastest method on large graphs.
[Figure 2 plots (log-scale axes): (a) error vs. # of nodes for BP, GBP-star, GBP-triples,
TreeVB, TreeEP and Exact; (b) FLOPS vs. # of nodes for the same methods, with the
GBP-star and TreeEP-star curves coinciding.]
Figure 2: (a) Error in the estimated means for complete graphs with randomly chosen
potentials. Each point is an average over 10 potentials. (b) Average FLOPS for the results
in (a).
[Figure 3 plots (log-scale axes): (a) error vs. # of nodes for TreeVB, BP, GBP-squares,
TreeEP and Exact; (b) FLOPS vs. # of nodes for the same methods.]
Figure 3: (a) Error in the estimated means for grid graphs with randomly chosen potentials.
Each point is an average over 10 potentials. (b) Average FLOPS for the results in (a).
4.3
Grids
The next experiment tests the algorithms on square grids of varying size. The external
fields θj were drawn as before, and the couplings wjk had standard deviation 1. The GBP
clusters were overlapping squares, as in Yedidia et al. (2000).
Figure 3(a) shows the approximation error as n increases, with results averaged over 10
trials as in the previous section. TreeVB performs consistently worse than BP, even though
it is using the same tree structures as TreeEP. The plot also shows that these structures,
being automatically chosen, are not as good as the hand-crafted clusters used by GBP. We
have hand-crafted tree structures that perform just as well on grids, but for simplicity we
do not include these results.
Figure 3(b) shows that TreeEP is the fastest on large grids, even faster than BP, because BP
must use increasingly smaller stepsizes. GBP is more than a factor of ten slower.
5
Conclusions
Tree approximation allows a smooth tradeoff between cost and accuracy in approximate
inference. It improves on BP for a modest increase in cost. In particular, when ordinary BP
doesn't converge, TreeEP is an attractive alternative to damping or double-loop iteration.
TreeEP performs better than the corresponding variational bounds, because it minimizes
the inclusive KL-divergence. We found that TreeEP was equivalent to GBP in some cases,
which deserves further study.
We hope that these results encourage more investigation into approximation structure for
inference algorithms, such as finding the "optimal" structure for a given problem. There are
many other opportunities for special approximation structure to be exploited, especially
in hybrid networks, where not only do the independence assumptions matter but also the
distributional forms.
Acknowledgments
We thank an anonymous reviewer for advice on comparisons to GBP.
References
Chow, C. K., & Liu, C. N. (1968). Approximating discrete probability distributions with dependence
trees. IEEE Transactions on Information Theory, 14, 462-467.
Frey, B. J., Patrascu, R., Jaakkola, T., & Moran, J. (2000). Sequentially fitting inclusive trees for
inference in noisy-OR networks. NIPS 13.
Ghahramani, Z., & Jordan, M. I. (1997). Factorial hidden Markov models. Machine Learning, 29,
245-273.
Heskes, T., & Zoeter, O. (2002). Expectation propagation for approximate inference in dynamic
Bayesian networks. Proc UAI.
Jensen, F. V., Lauritzen, S. L., & Olesen, K. G. (1990). Bayesian updating in causal probabilistic
networks by local computations. Computational Statistics Quarterly, 5, 269-282.
Kappen, H. J., & Wiegerinck, W. (2001). Novel iteration schemes for the cluster variation method.
NIPS 14.
Minka, T. P. (2001a). Expectation propagation for approximate Bayesian inference. UAI (pp. 362-369).
Minka, T. P. (2001b). A family of algorithms for approximate Bayesian inference. Doctoral dissertation, Massachusetts Institute of Technology.
Murphy, K. (2001). The Bayes Net Toolbox for Matlab. Computing Science and Statistics, 33.
Teh, Y. W., & Welling, M. (2001). The unified propagation and scaling algorithm. NIPS 14.
Wainwright, M. J., Jaakkola, T., & Willsky, A. S. (2001). Tree-based reparameterization for approximate estimation on loopy graphs. NIPS 14.
Wainwright, M. J., Jaakkola, T. S., & Willsky, A. S. (2002). A new class of upper bounds on the log
partition function. Proc UAI.
Welling, M., & Teh, Y. W. (2001). Belief optimization for binary networks: A stable alternative to
loopy belief propagation. UAI.
Wiegerinck, W. (2000). Variational approximations between mean field theory and the junction tree
algorithm. Proc UAI.
Yedidia, J. S., Freeman, W. T., & Weiss, Y. (2000). Generalized belief propagation. NIPS 13.
Yedidia, J. S., Freeman, W. T., & Weiss, Y. (2002). Constructing free energy approximations and
generalized belief propagation algorithms (Technical Report). MERL Research Lab.
Yuille, A. (In press, 2002). A double-loop algorithm to minimize the Bethe and Kikuchi free energies.
Neural Computation.
| 2407 |@word trial:2 version:1 open:1 propagate:5 kappen:2 liu:2 tuned:1 outperforms:1 comparing:2 written:1 must:1 numerical:2 partition:1 remove:1 plot:1 update:2 leaf:1 website:1 xk:17 dissertation:1 multiset:1 node:23 five:1 along:1 yuan:1 fitting:1 pairwise:4 multi:2 inspired:1 freeman:2 decomposed:1 automatically:1 considering:2 becomes:1 project:1 provided:1 notation:1 medium:2 didn:1 minimizes:1 unified:1 finding:2 multidimensional:2 exactly:3 cutset:2 control:1 organize:2 positive:1 before:2 frey:3 local:1 tends:1 doctoral:1 resembles:1 suggests:1 collect:3 fastest:3 averaged:2 acknowledgment:1 practice:1 x3:6 empirical:1 projection:1 matching:1 get:5 onto:1 equivalent:2 reviewer:1 center:1 go:1 attention:2 independently:1 simplicity:1 rule:2 reparameterization:1 variation:1 updated:1 target:3 play:1 suppose:4 exact:5 us:1 pa:1 approximated:2 jk:1 utilized:1 updating:3 cut:1 distributional:1 ep:4 role:1 initializing:1 capture:1 removed:1 highest:1 dynamic:1 depend:1 technically:1 yuille:2 division:3 efficiency:1 creates:1 triangle:3 joint:1 various:2 describe:1 kevin:1 choosing:1 whose:3 solve:1 elaborates:1 statistic:3 noisy:1 final:4 net:1 product:6 maximal:1 frequent:1 loop:11 wjk:5 exploiting:1 convergence:3 double:4 cluster:10 parent:1 kikuchi:3 illustrate:1 coupling:4 propagating:2 stat:1 measured:1 nearest:1 lauritzen:1 strong:2 bnt:1 implemented:3 closely:2 f1:3 generalization:1 really:1 investigation:1 anonymous:1 extension:1 bypassing:1 hold:1 exp:5 smallest:1 purpose:1 estimation:1 proc:3 sensitive:1 largest:1 hope:1 mit:1 always:4 gaussian:2 varying:2 stepsizes:1 jaakkola:3 encode:1 derived:1 consistently:2 sense:1 inference:13 chow:2 hidden:1 going:1 overall:3 canceled:1 special:3 mutual:2 marginal:5 field:4 construct:2 once:2 never:1 identical:2 represents:1 x4:9 cancel:1 report:1 randomly:3 preserve:1 divergence:4 ve:1 murphy:3 floating:1 connects:1 onwards:1 message:3 accurate:2 edge:11 encourage:1 necessary:1 modest:1 damping:2 tree:62 old:1 continuing:1 initialized:1 desired:2 re:1 causal:1 stopped:1 criticized:1 merl:1 assignment:3 loopy:7 ordinary:3 cost:7 deviation:3 subset:1 entry:1 deserves:1 hundred:1 stored:1 probabilistic:1 off:7 rounded:1 connecting:1 together:1 concrete:1 again:1 choose:2 worse:1 external:2 potential:19 distribute:2 star:6 matter:1 depends:3 try:3 root:8 picked:1 lab:1 zoeter:2 start:1 bayes:1 minimize:2 square:4 accuracy:2 tangentially:1 bayesian:5 produced:1 multiplying:1 definition:1 failure:1 energy:3 pp:1 minka:7 propagated:5 massachusetts:2 improves:1 back:2 wei:2 done:1 though:2 just:2 correlation:3 until:1 hand:2 overlapping:1 propagation:26 freeenergy:1 usa:2 true:1 read:1 entering:1 laboratory:1 iteratively:2 dependently:1 attractive:1 adjacent:3 numerator:1 generalized:4 complete:5 performs:2 variational:9 consideration:1 novel:1 fi:15 functional:1 conditioning:3 interpretation:2 approximates:1 marginals:6 mellon:1 cambridge:1 enter:1 grid:5 heskes:2 zeroing:1 blame:1 had:1 moving:1 stable:2 halved:1 showed:1 store:1 binary:3 exploited:1 converge:5 determine:1 multiple:2 reduces:1 smooth:1 technical:1 match:1 faster:1 divided:1 qi:1 involving:2 denominator:1 expectation:10 cmu:1 iteration:8 represent:3 sequenced:1 want:3 separately:1 sends:1 rest:3 strict:1 pass:1 jordan:3 call:2 counting:1 noting:1 xj:19 fit:2 independence:1 tradeoff:1 whether:1 matlab:2 enumerate:1 dramatically:1 useful:1 tune:1 factorial:1 amount:1 ten:1 stabilized:1 estimated:4 carnegie:1 discrete:2 four:1 nevertheless:1 drawn:4 graph:23 run:4 family:1 
scaling:1 bound:9 simplification:1 bp:25 x2:9 inclusive:4 performing:1 structured:3 department:1 according:2 designated:1 poor:1 disconnected:1 describes:4 smaller:1 increasingly:1 appealing:1 intuitively:1 count:1 know:2 tractable:1 junction:19 operation:2 yedidia:5 multiplied:1 apply:1 quarterly:1 away:1 generic:1 stepsize:2 alternative:3 slower:1 thomas:1 original:2 remaining:1 include:1 opportunity:1 exploit:1 giving:1 ghahramani:3 especially:1 approximating:5 already:1 added:1 exclusive:1 dependence:1 thank:1 trivial:1 spanning:2 willsky:2 ratio:2 minimizing:2 unfortunately:1 negative:2 implementation:2 proper:1 perform:3 teh:6 upper:2 markov:3 flop:10 extended:1 arbitrary:1 pair:1 kl:6 specified:1 connection:2 toolbox:2 gbp:21 deletion:1 conflicting:1 pearl:1 nip:5 below:1 pattern:1 explanation:1 belief:15 wainwright:4 overlap:1 meanfield:1 force:1 hybrid:1 pictured:1 scheme:2 improve:1 technology:2 started:1 triple:2 free:4 institute:2 neighbor:2 absolute:1 doesn:3 author:1 coincide:1 counted:1 employing:1 welling:6 transaction:1 approximate:13 keep:1 clique:15 sequentially:2 uai:5 pittsburgh:1 assumed:1 don:1 iterative:3 table:4 bethe:1 reasonably:1 ignoring:1 improving:1 necessarily:2 separator:7 constructing:1 domain:3 official:2 whole:2 child:1 x1:10 enriched:1 crafted:2 advice:1 crude:1 arity:1 jensen:3 moran:1 x:11 evidence:23 normalizing:1 incorporating:1 exists:1 sequential:1 subtree:1 illustrates:1 mf:3 simply:2 absorbed:1 patrascu:1 ma:1 conditional:1 shared:1 change:1 specifically:1 determined:1 averaging:1 wiegerinck:4 total:1 formally:1 olesen:1 incorporate:3 correlated:1 |
1,549 | 2,408 | Analytical solution of spike-timing dependent
plasticity based on synaptic biophysics
Bernd Porr, Ausra Saudargiene and Florentin Wörgötter
Computational Neuroscience
Psychology
University of Stirling
FK9 4LR Stirling, UK
{Bernd.Porr,ausra,worgott}@cn.stir.ac.uk
Abstract
Spike timing plasticity (STDP) is a special form of synaptic plasticity
where the relative timing of post- and presynaptic activity determines the
change of the synaptic weight. On the postsynaptic side, active backpropagating spikes in dendrites seem to play a crucial role in the induction of spike timing dependent plasticity. We argue that postsynaptically
the temporal change of the membrane potential determines the weight
change. Coming from the presynaptic side induction of STDP is closely
related to the activation of NMDA channels. Therefore, we will calculate
analytically the change of the synaptic weight by correlating the derivative of the membrane potential with the activity of the NMDA channel.
Thus, for this calculation we utilise biophysical variables of the physiological cell. The final result shows a weight change curve which conforms with measurements from biology. The positive part of the weight
change curve is determined by the NMDA activation. The negative part
of the weight change curve is determined by the membrane potential
change. Therefore, the weight change curve should change its shape depending on the distance from the soma of the postsynaptic cell. We find
temporally asymmetric weight change close to the soma and temporally
symmetric weight change in the distal dendrite.
1
Introduction
Donald Hebb [1] postulated half a century ago that the change of synaptic strength depends
on the correlation of pre- and postsynaptic activity: cells which fire together wire together.
Here we want to concentrate on a special form of correlation based learning, namely, spike
timing dependent plasticity (STDP, [2, 3]). STDP is asymmetrical in time: Weights grow
if the pre-synaptic event precedes the postsynaptic event. This phenomenon is called long-term potentiation (LTP). Weights shrink when the temporal order is reversed. This is called
long-term depression (LTD).
Correlations between pre- and postsynaptic activity can take place at different locations
of the cell. Here we will focus on the dendrite of the cell (see Fig. 1). The dendrite has
attracted interest recently because of its ability to propagate spikes back from the soma
of the cell into its distal regions. Such spikes are called backpropagating spikes. The
transmission is active which guarantees that the spikes can reach even the distal regions of
the dendrite [4]. Backpropagating spikes have been suggested to be the driving force for
STDP in the dendrite [5]. On the presynaptic side the main contribution to STDP comes
from Ca2+ flow through the NMDA channels [6].
The goal of this study is to derive an analytical solution for STDP on the basis of the
biophysical properties of the NMDA channel and the cell membrane. We will show that
mainly the timing of the backpropagating spike determines the shape of the learning curve.
With fast decaying backpropagating spikes we obtain STDP while with slow decaying
backpropagating spikes we approximate temporally symmetric Hebbian learning.
[Figure 1 schematic: a presynaptic event at the plastic NMDA synapse and a postsynaptic
event (the BP-spike, current I_BP) are separated by the interval T; the plastic synapse
contributes an NMDA conductance g with weight ρ, and the membrane obeys
C dV/dt = Σ_i I_i. The inset plots the NMDA conductance g over 0-100 ms.]
Figure 1: Schematic diagram of the model setup. The inset shows the time course of an
NMDA response as modelled by Eq. 2.
2
The Model
The goal is to define a weight change rule which correlates the dynamics of an NMDA
channel with a variable which is linked to the dynamics of a backpropagating spike. The
precise biophysical mechanisms of STDP are still to a large degree unresolved. It is, however, known that high levels of Ca2+ concentration resulting from Ca2+ influx mainly
through NMDA-channels will lead to LTP, while lower levels will lead to LTD. Several
biophysically more realistic models for STDP were recently designed which rely on these
mechanisms [7, 8, 9]. Recent physiological results (reviewed in detail in [10]), however,
suggest that not only the Ca2+ concentration but maybe more importantly the change of
the Ca2+ concentration determines if LTP or LTD is observed. This clearly suggests that
a differential term should be included in the learning rule, when trying to model STDP.
On theoretical grounds such a suggestion has also been made by several authors [11] who
discussed that the abstract STDP models [12] are related to the much older model class of
differential Hebbian learning rules [13]. In our model we assume that the Ca2+ concentration and the membrane potential are highly correlated. Consequently, our learning rule
utilises the derivative of the membrane potential for the postsynaptic activity.
After having identified the postsynaptic part of the weight change rule we have to define
the presynaptic part. This shall be the conductance function of the NMDA channel [6].
The conventional membrane equation reads:
C dv(t)/dt = ρ g(t)[E − v(t)] + i_BP(t) + (Vrest − v(t))/R ,    (1)
where v is the membrane potential, ρ the synaptic weight of the NMDA-channel, and g, E
are its conductance and equilibrium potential, respectively. The current which a BP-spike
elicits is given by i_BP, and the last term represents the passive repolarisation property
of the membrane towards its resting potential Vrest = −70 mV. We set the membrane
capacitance to C = 50 pF and the membrane resistance to R = 100 MΩ. E is set to zero.
The NMDA channel has the following equation:
g(t) = ḡ · (e^(−b1·t) − e^(−a1·t)) / ( [a1 − b1] · [1 + κ·e^(−γ·V(t))] )    (2)
For simpler notation, in general we use inverse time-constants a1 = τa^(−1), b1 = τb^(−1), etc. In
addition, the term a1 − b1 in the denominator is required for later easier integration in the
Laplace domain. Thus, we adjust for this by defining ḡ = 12 mS/ms, which represents the
peak conductance (4 nS) multiplied by b1 − a1. The other parameters were: a1 = 3.0/ms,
b1 = 0.025/ms, γ = 0.06/mV. Since we will not vary the Mg2+ concentration we have
already abbreviated: κ = η[Mg2+], η = 0.33/mM, [Mg2+] = 1 mM [14].
The synaptic weight of the NMDA channel is changed by correlating the conductance of
this NMDA channel with the change (derivative) of the membrane potential:
dρ(t)/dt = g(t) v′(t)    (3)
To describe the weight change, we wish to solve:
Δρ(T) = ∫₀^∞ g(T + τ) v′(τ) dτ ,    (4)
where T is the temporal shift between the presynaptic activity and the postsynaptic activity. The shift T > 0 means that the backpropagating spike follows after the trigger of
the NMDA channel. The shift T < 0 means that the temporal sequence of the pre- and
postsynaptic events is reversed.
To solve Eq. 4 we have to simplify it, however, without losing biophysical realism. In
this paper we are interested in different shapes of backpropagating spikes. The underlying mechanisms which establish backpropagating spikes will not be addressed here. The
backpropagating spike shall be simply modelled as a potential change in the dendrite and
its shape is determined by its amplitude, its rise time and its decay time.
First we observe that the influence of a single (or even a few) NMDA-channels on the
membrane potential can be neglected in comparison to a BP-spike¹, which, due to active
processes, leads to a depolarisation of often more than 50 mV even at distal dendrites
because of active processes [15]. Thus, we can assume that the dynamics of the membrane
potential is established by the backpropagating spike and the resting potential Vrest :
C dv(t)/dt = i_BP(t) + (V_rest − v(t))/R        (5)
This equation can be further simplified. Next we assume that the second passive repolarisation term can also be absorbed into i_BP, thus resulting in i_total(t) = i_BP(t) + (V_rest − v(t))/R.
To this end we model itotal as a derivative of a band-pass filter function:
i_total(t) = ī_total (a2 e^(−a2 t) − b2 e^(−b2 t)) / (a2 − b2)        (6)
¹Note that in spines, however, synaptic input can lead to large changes in the postsynaptic potential. In such cases g(t) contributes substantially to v(t).
where ī_total is the current amplitude. This filter function causes first an influx of charges
into the dendrite and then again an outflux of charges. The time constants a2 and b2 determine the timing of the current flow and therefore the rise and decay time. The total charge
flux is zero so that the resting potential is reestablished after a backpropagating spike.
In this way the active de- and repolarising properties of a BP-spike can be combined with
the passive properties of the membrane, in practise by a curve fitting procedure which yields
a2 , b2 . As a result we find that the membrane equation in our case reduces to:
C dv(t)/dt = i_total(t)        (7)
We receive the resulting membrane potential simply by integrating Eq. 6:
v(t) = (ī_total / C) (e^(−b2 t) − e^(−a2 t)) / (a2 − b2)        (8)
Note the sign inversion between v (Eq. 8) and i (Eq. 6), the one being the derivative of the other.
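Before turning to the Taylor expansion below, Eq. 4 can be sanity-checked numerically by inserting v(t) of Eq. 8 directly into the full conductance of Eq. 2 (no truncation). The following is a minimal sketch of ours, reusing the parameters above and the BP-spike constants of Fig. 2(B) (τa = 0.05 ms, τb = 0.1 ms); the current amplitude, the quadrature step, and the causal clipping are illustrative choices, and the absolute scale of the curve is arbitrary, as in the paper's figures.

```python
import numpy as np

a1, b1, gbar, gamma, kappa = 3.0, 0.025, 12.0, 0.06, 0.33
C = 50.0                          # pF
a2, b2 = 1 / 0.05, 1 / 0.1        # BP-spike constants of Fig. 2(B), in 1/ms
i_amp = 2500.0                    # BP-spike amplitude; absolute scale arbitrary
dt = 0.01
tau = np.arange(0.0, 300.0, dt)   # integration grid (ms)

def v(t):
    """BP-spike potential, Eq. 8 (relative to rest); zero before the spike."""
    tt = np.maximum(t, 0.0)
    out = (i_amp / C) * (np.exp(-b2 * tt) - np.exp(-a2 * tt)) / (a2 - b2)
    return np.where(t >= 0.0, out, 0.0)

def g(t):
    """Full NMDA conductance (Eq. 2) with v(t) inserted; causal in t."""
    tt = np.maximum(t, 0.0)
    out = (gbar * (np.exp(-b1 * tt) - np.exp(-a1 * tt))
           / ((a1 - b1) * (1.0 + kappa * np.exp(-gamma * v(tt)))))
    return np.where(t >= 0.0, out, 0.0)

vprime = np.gradient(v(tau), dt)  # v'(tau)

def dw(T):
    """Delta-omega(T) of Eq. 4 by simple quadrature."""
    return float(np.sum(g(T + tau) * vprime) * dt)

stdp_curve = [dw(T) for T in np.arange(-60.0, 60.0, 1.0)]  # bimodal curve
```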
The NMDA conductance g is more complex, because the membrane potential enters the
denominator in Eq. 2. To simplify we perform a Taylor expansion around v = 0 mV .
We expand around 0 mV and not around the resting potential. There are two reasons.
First, we are interested in the open NMDA channel. This is the case for voltages towards
0 mV . Second, the NMDA channel has a strong non-linearity around the resting potential.
Towards 0 mV , however, the NMDA channel has a linear voltage/current curve. Therefore
it makes sense to expand around 0 mV .
The NMDA conductance can now be written as:

g(t) = ḡ (e^(−b1 t) − e^(−a1 t))/(a1 − b1) · ( 1/(κ+1) + κγ v(t)/(κ+1)² + ... )        (9)

and finally the potential v(t) (Eq. 8) can be inserted:

g(t) = ḡ (e^(−b1 t) − e^(−a1 t))/(a1 − b1)
       · ( 1/(κ+1) + κγ ī_total e^(−b2 t)/(C (κ+1)² (a2 − b2)) − κγ ī_total e^(−a2 t)/(C (κ+1)² (a2 − b2)) + ... )        (10, 11)

Terminating the Taylor series after the second term, this leads to three contributions to the conductance:

g^(0)(t)  =  [ḡ/(κ+1)] · (e^(−b1 t) − e^(−a1 t))/(a1 − b1)                                        (12)
g^(1a)(t) = −[ḡ ī_total κγ/((κ+1)² C)] · (e^(−(b1+a2)t) − e^(−(a1+a2)t))/((a1 − b1)(a2 − b2))     (13)
g^(1b)(t) = +[ḡ ī_total κγ/((κ+1)² C)] · (e^(−(b1+b2)t) − e^(−(a1+b2)t))/((a1 − b1)(a2 − b2))     (14)

with g(t) = g^(0)(t) + g^(1a)(t) + g^(1b)(t).
To perform the correlation in Eq. 4 we transform the required terms into the Laplace domain, getting:

g^(0,1a,1b)(t) = k (e^(−αt) − e^(−βt))/(β − α)   ⟶   G^(0,1a,1b)(s) = k / ((s + α)(s + β))        (15)

i_total(t) = ī_total (a2 e^(−a2 t) − b2 e^(−b2 t))/(a2 − b2)   ⟶   I_total(s) = ī_total s / ((s + a2)(s + b2))        (16)
where α and β take the coefficient values from the exponential terms in g^(0), g^(1a), g^(1b), respectively, and k are the corresponding multiplicative factors².
A correlation in the Laplace domain is expressed by Plancherel's theorem [16]:

Δω = (1/2π) ∫_{−∞}^{+∞} G^(0)(−iω) e^(−iωT) I_t(iω) dω        (17)
   − (1/2π) ∫_{−∞}^{+∞} G^(1a)(−iω) e^(−iωT) I_t(iω) dω        (18)
   + (1/2π) ∫_{−∞}^{+∞} G^(1b)(−iω) e^(−iωT) I_t(iω) dω        (19)
The solution is calculated with the method of residues, which leads to a split of the result into T ≥ 0 and T < 0, and we get:
For T ≥ 0:

Δω(T) = [ḡ ī_total / ((κ+1) C)] ·
    { b1 e^(−b1 T)/B+^(0) − a1 e^(−a1 T)/A+^(0)                                                          (20)
    − [κγ ī_total / ((κ+1)(a2−b2) C)] · [ (b1+a2) e^(−(b1+a2)T)/B+^(1) − (a1+a2) e^(−(a1+a2)T)/A+^(1) ]   (21)
    + [κγ ī_total / ((κ+1)(a2−b2) C)] · [ (b1+b2) e^(−(b1+b2)T)/B+^(1) − (a1+b2) e^(−(a1+b2)T)/A+^(1) ] }  (22)

with A+^(0) = (a1−b1)(a1+a2)(a1+b2), A+^(1) = (a1−b1)(a1+2a2)(a1+a2+b2), B+^(0) = (a1−b1)(b1+b2)(a2+b1), B+^(1) = (a1−b1)(2a2+b1)(a2+b1+b2).
For T < 0:

Δω(T) = [ḡ ī_total / ((κ+1) C)] ·
    { a2 e^(a2 T)/A−^(0) − b2 e^(b2 T)/B−^(0)                                                   (23)
    − [κγ ī_total / ((κ+1)(a2−b2) C)] · [ a2 e^(a2 T)/A−^(1a) − b2 e^(b2 T)/B−^(1a) ]           (24)
    + [κγ ī_total / ((κ+1)(a2−b2) C)] · [ a2 e^(a2 T)/A−^(1b) − b2 e^(b2 T)/B−^(1b) ] }         (25)

with A−^(0) = (a2−b2)(a1+a2)(a2+b1), A−^(1a) = (a2−b2)(a1+2a2)(2a2+b1), A−^(1b) = (a2−b2)(a1+b2+a2)(a2+b1+b2), B−^(0) = (a2−b2)(a1+b2)(b1+b2), B−^(1a) = (a2−b2)(a1+a2+b2)(b1+a2+b2), B−^(1b) = (a2−b2)(a1+2b2)(b1+2b2).
The resulting equations contain interesting symmetries which make the interpretation easy.
We observe that they split into three terms. For T > 0 the first term captures the NMDA
influence only, while for T < 0 it captures the influence of only the BP-spike (apart from
scaling factors). Mixed influences arise from the second and third terms which scale with
the peak current amplitude ī_total of the BP-spike.
3 Results
While the properties of mature NMDA channels are captured by the parameters given for
Eq. 2 and remain fairly constant, BP-spikes change their shapes along the dendrite. Thus,
²We use lower-case letters for functions in the time domain and upper-case letters for their equivalents in the Laplace domain.
Figure 2: (A-F) STDP curves obtained from Eqs. 22, 25 and corresponding normalised BP-spikes (G-I, ī_total = 1, left y-axis: current, right y-axis: integrated potential). Panels A-C were obtained with different peak currents ī_total = 0.5 nA, 0.1 nA and 25 pA. These currents cause peak voltages of 40 mV, 50 mV and 40 mV, respectively. Panels D-F were all simulated with a peak current of ī_total = 5.0 nA. This current is unrealistic; however, it is chosen for illustrative purposes to show the different contributions to the learning curve (dashed lines for G^(0), dotted lines for G^(1a,b), and solid lines for the sum of the two contributions). Time constants for the BP-spikes were: (A,D,G) a2⁻¹ = τa = 0.0095 ms, b2⁻¹ = τb = 0.01 ms; (B,E,H) τa = 0.05 ms, τb = 0.1 ms; (C,F,I) τa = 0.1 ms, τb = 1.0 ms.
we kept the NMDA properties unchanged and varied the time constants of the BP-spikes
as well as the current amplitude to simulate this effect. Fig. 2 shows STDP curves (solid
lines, A-F) and the corresponding BP-spikes (G-I). The contributions of the different terms
to the STDP curves are also shown (first term, dashed, as well as second and third term
scaled with their fore-factor, dotted). All curves have arbitrary units. As expected we find
that the first term dominates for small (realistic) currents (top panels), while the second and
third terms dominate for higher currents (middle panels). Furthermore, we find that long
BP-spikes will lead to plain Hebbian learning, where only LTP but no LTD is observed
(B,C,E,F).
4 Discussion
We believe that two of our findings could be of longer-lasting relevance for the understanding of synaptic learning, provided they withstand physiological scrutiny: 1) The shape of the weight change curves heavily relies on the shape of the backpropagating spike. 2) STDP can turn into plain Hebbian learning if the postsynaptic depolarisation (i.e., the BP-spike) has a shallow rise.
Physiological studies suggest that weight change curves can indeed have a widely varying
shape (reviewed in [17]). In this study we argue that in particular the shape of the back-
propagating spike influences the shape of the weight change curve. In fact the dendrites
can be seen as active filters which change the shape of backpropagating spikes during their
journey to the distal parts of the dendrite [18]. In particular, the decay time of the BP spike
is increased in the distal parts of the dendrite [15]. The different decay times determine if
we get pure symmetric Hebbian learning or STDP (see Fig. 2). Thus, the theoretical result
would suggest temporal symmetric Hebbian learning in the distal dendrites and STDP in
the proximal dendrites. From a computational perspective this would mean that the distal
dendrites perform principal component analysis [19] and the proximal dendrites temporal
sequence learning [20].
Now, our model has to be compared to other models of STDP. We count our model among the 'state variable models'. Such models can either adopt a rather descriptive approach [21], where appropriate functions are fitted to the measured weight change curves. Others are closer to the kinetic models in trying to fit phenomenological kinetic equations
[7, 22, 23, 9]. Those models establish a more realistic relation between calcium concentration and membrane potential. The calcium concentration seems to be a low-pass filtered
version of the membrane potential [24]. Such a low pass filter hlow could be added to the
learning rule Eq. 3, resulting in: dω/dt = g(t) (h_low ∗ v′)(t).
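A minimal sketch of such an extension follows, assuming (our choice, not the paper's) a causal exponential kernel for h_low; the paper does not specify the filter's form or time constant.

```python
import numpy as np

def dw_with_calcium_lowpass(g, vprime, dt, tau_ca=20.0):
    """Total weight change for dw/dt = g(t) * (h_low conv v')(t).

    g, vprime: sampled g(t) and v'(t) on a grid with spacing dt (ms).
    h_low is a causal exponential kernel with time constant tau_ca (ms),
    a stand-in for the calcium dynamics suggested in the text.
    """
    k = np.arange(0.0, 10 * tau_ca, dt)
    h = np.exp(-k / tau_ca) / tau_ca
    filtered = np.convolve(vprime, h)[:len(vprime)] * dt
    return float(np.sum(g * filtered) * dt)
```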
The approaches of [9] as well as of Karmarkar and co-workers [23] are closely related to
our model. Both models investigate the effects of different calcium concentration levels by
assuming certain (e.g. exponential) functional characteristics to govern its changes. This
allows them to address the question of how different calcium levels will lead to LTD or
LTP [25]. Both model-types [9, 23, 8] were designed to produce a zero-crossing (transition
between LTD and LTP) at T = 0. The differential Hebbian rule employed by us leads to the
observed results as the consequence of the fact that the derivative of any generic unimodal
signal will lead to a bimodal curve. We utilise the derivative of the unimodal membrane
potential to obtain a bimodal weight change curve. The derivative of the membrane potential is proportional to the charge transfer dq/dt = i_total across the (post-synaptic) membrane
(see Eq. 7). There is wide ranging support that synaptic plasticity is strongly dominated by
calcium transfer through NMDA channels [26, 27, 6]. Thus it seems reasonable to assume
that a part of dq/dt represents calcium flow through the NMDA channel.
References
[1] D. O. Hebb. The Organization of Behavior: A Neuropsychological Study. Wiley-Interscience, New York, 1949.
[2] H. Markram, J. Lübke, M. Frotscher, and B. Sakmann. Regulation of synaptic efficacy by coincidence of postsynaptic APs and EPSPs. Science, 275:213–215, 1997.
[3] J. C. Magee and D. Johnston. A synaptically controlled, associative signal for
Hebbian plasticity in hippocampal neurons. Science, 275:209?213, 1997.
[4] Daniel Johnston, Brian Christie, Andreas Frick, Richard Gray, Dax A. Hoffmann,
Lalania K. Schexnayder, Shigeo Watanabe, and Li-Lian Yuan. Active dendrites,
potassium channels and synaptic plasticity. Phil. Trans. R. Soc. Lond. B, 358:667?
674, 2003.
[5] D. J. Linden. The return of the spike: Postsynaptic action potentials and the induction
of LTP and LTD. Neuron, 22:661?666, 1999.
[6] R. C. Malenka and R. A. Nicoll. Long-term potentiation ? a decade of progress?
Science, 285:1870?1874, 1999.
[7] W. Senn, H. Markram, and M. Tsodyks. An algorithm for modifying neurotransmitter
release probability based on pre-and postsynaptic spike timing. Neural Comp., 13:35?
67, 2000.
[8] U. R. Karmarkar, M. T. Najarian, and D. V. Buonomano. Mechanisms and significance of spike-timing dependent plasticity. Biol. Cybern., 87:373?382, 2002.
[9] H. Z. Shouval, M. F. Bear, and L. N. Cooper. A unified model of NMDA
receptor-dependent bidirectional synaptic plasticity. Proc. Natl. Acad. Sci. (USA),
99(16):10831?10836, 2002.
[10] G. Q. Bi. Spatiotemporal specificity of synaptic plasticity: cellular rules and mechanisms. Biol. Cybern., 87:319?332, 2002.
[11] Patrick D. Roberts. Temporally asymmetric learning rules: I. Differential Hebbian
Learning. Journal of Computational Neuroscience, 7(3):235?246, 1999.
[12] Richard Kempter, Wulfram Gerstner, and J. Leo van Hemmen. Hebbian learning and
spiking neurons. Physical Review E, 59:4498?4514, 1999.
[13] R.S. Sutton and A.G. Barto. Towards a modern theory of adaptive networks: Expectation and prediction. Psychological Review, 88:135?170, 1981.
[14] C. Koch. Biophysics of Computation. Oxford University Press, 1999.
Greg Stuart, Nelson Spruston, Bert Sakmann, and Michael Häusser. Action potential initiation and backpropagation in neurons of the mammalian CNS. Trends Neurosci., 20(3):125–131, 1997.
20(3):125?131, 1997.
[16] John L. Stewart. Fundamentals of Signal Theory. McGraw-Hill, New York, 1960.
[17] P. D. Roberts and C. C. Bell. Spike timing dependent synaptic plasticity in biological
systems. Biol. Cybern., 87:392?403, 2002.
[18] Nace L. Golding, William L. Kath, and Nelson Spruston. Dichotomy of action potential backpropagation in ca1 pyramidal neuron dendrites. J Neurophysiol, 86:2998?
3009, 2001.
[19] E. Oja. A simplified neuron model as a principal component analyzer. J Math Biol,
15(3):267?273, 1982.
Bernd Porr and Florentin Wörgötter. Isotropic Sequence Order learning. Neural
Computation, 15:831?864, 2003.
[21] H. D. I. Abarbanel, R. Huerta, and M. I. Rabinovich. Dynamical model of long-term
synaptic plasticity. Proc. Natl. Acad. Sci. (USA), 99(15):10132?10137, 2002.
[22] G. C. Castellani, E. M. Quinlan, L. N. Cooper, and H. Z. Shouval. A biophysical
model of bidirectional synaptic plasticity: Dependence on AMPA and NMDA receptors. Proc. Natl. Acad. Sci. (USA), 98(22):12772?12777, 2001.
[23] U. R. Karmarkar and D. V. Buonomano. A model of spike-timing dependent plasticity: One or two coincidence detectors? J. Neurophysiol., 88:507?513, 2002.
[24] G. Stuart, J. Schiller, and B. Sakmann. Action potential initiation and propagation in
rat neocortical pyramidal neurons. J Physiol, 505:617?632, 1997.
[25] M. Nishiyama, K. Hong, K. Mikoshiba, M. Poo, and K. Kato. Calcium stores regulate
the polarity and input specificity of synaptic modification. Nature, 408:584?588,
2000.
[26] J. Schiller, Y. Schiller, and D. E. Clapham. Amplification of calcium influx into
dendritic spines during associative pre- and postsynaptic activation: The role of direct
calcium influx through the NMDA receptor. Nat. Neurosci., 1:114?118, 1998.
[27] R. Yuste, A. Majewska, S. S. Cash, and W. Denk. Mechanisms of calcium influx into
hippocampal spines: heterogeneity among spines, coincidence detection by NMDA
receptors, and optical quantal analysis. J. Neurosci., 19:1976?1987, 1999.
1,550 | 2,409 | A Mixed-Signal VLSI for Real-Time
Generation of Edge-Based Image Vectors
Masakazu Yagi, Hideo Yamasaki, and Tadashi Shibata*
Department of Electronic Engineering
*Department of Frontier Informatics
The University of Tokyo
7-3-1 Hongo, Bunkyo-ku, Tokyo, 113-8656, Japan
[email protected], [email protected], [email protected]
Abstract
A mixed-signal image filtering VLSI has been developed aiming at
real-time generation of edge-based image vectors for robust image
recognition. A four-stage asynchronous median detection architecture based on analog digital mixed-signal circuits has been introduced to determine the threshold value of edge detection, the key
processing parameter in vector generation. As a result, a fully
seamless pipeline processing from threshold detection to edge feature map generation has been established. A prototype chip was
designed in a 0.35-µm double-polysilicon three-metal-layer CMOS
technology and the concept was verified by the fabricated chip. The
chip generates a 64-dimension feature vector from a 64x64-pixel
gray-scale image every 80 µs. This is about 10⁴ times faster than the
software computation, making a real-time image recognition system
feasible.
1 Introduction
The development of human-like image recognition systems is a key issue in information technology. However, a number of algorithms developed for robust image
recognition so far [1]-[3] are mostly implemented as software systems running on
general-purpose computers. Since the algorithms are generally complex and include a
lot of floating point operations, they are computationally too expensive to build
real-time systems. Development of hardware-friendly algorithms and their direct
VLSI implementation would be a promising solution for real-time response systems.
Being inspired by the biological principle that edge information is firstly detected in
the visual cortex, we have developed an edge-based image representation algorithm
compatible to hardware processing. In this algorithm, multiple-direction edges extracted from an original gray scale image is utilized to form a feature vector. Since the
spatial distribution of principal edges is represented by a vector, it was named Projected Principal-Edge Distribution (PPED) [4],[5], or formerly called Principal Axis
Projection (PAP) [6],[7]. (The algorithm is explained later.) Since the PPED vectors
very well represent the human perception of similarity among images, robust image
recognition systems have been developed using PPED vectors in conjunction with the
analog soft pattern classifier [4],[8], the digital VQ (Vector Quantization) processor
[9], and support vector machines [10] .
The robust nature of PPED representation is demonstrated in Fig. 1, where the system
was applied to cephalometric landmark identification (identifying specific anatomical
landmarks on medical radiographs) as an example, one of the most important clinical
practices of expert dentists in orthodontics [6],[7]. Typical X-ray images to be experienced by apprentice doctors were converted to PPED vectors and utilized as
templates for vector matching. The system performance has been proven for 250 head
film samples regarding the fundamental 26 landmarks [11]. Important to note is the
successful detection of the landmark on the soft tissue boundary (the tip of the lower
lip) shown in Fig. 1(c). Landmarks on soft tissues are very difficult to detect as
compared to landmarks on hard tissues (solid bones) because only faint images are
captured on radiographs. The successful detection is due to the median algorithm that
determines the threshold value for edge detection.
[Figure panels (a,b): cephalometric landmarks Sella, Nasion, Orbitale, as located by our system and by expert dentists; panel (c): landmark on soft tissue.]
Fig. 1: Image recognition using PPED vectors: (a,b) cephalometric landmark identification; (c) successful landmark detection on soft tissue.
We have adopted the median value of spatial variance of luminance within the filtering kernel (5x5 pixels), which allows us to extract all essential features in a delicate
gray scale image. However, the problem is the high computational cost in determining
the median value. It takes about 0.6 sec to generate one PPED vector from a
64x64-pixel image (a standard image size for recognition in our system) on a SUN
workstation, making real time processing unrealistic. About 90% of the computation
time is for edge detection from an input image, in which most of the time is spent for
median detection.
The purpose of this work, then, is to develop a new median-filter VLSI subsystem architecture for real-time PPED-vector generation. Special attention has been paid to
realize a fully seamless pipeline processing from threshold detection to edge feature
map generation by employing the four-stage asynchronous median detection architecture.
2 Projected Principal Edge Distribution (PPED)
Projected Principal Edge Distribution (PPED) algorithm [5],[6] is briefly explained
using Fig. 2(a). A 5x5-pixel block taken from a 64x64-pixel target image is subjected
to edge detection filtering in four principal directions, i.e. horizontal, vertical, and
?45-degree directions. In the figure, horizontal edge filtering is shown as an example.
(The filtering kernels used for edge detection are given in Fig. 2(b).) In order to determine the threshold value for edge detection, all the absolute-value differences
between two neighboring pixels are calculated in both vertical and horizontal directions and the median value is taken as the threshold. By scanning the 5x5-pixel filtering kernels in the target image, four 64x64 edge-flag maps are generated, which are
called feature maps. In the horizontal feature map, for example, edge flags in every
four rows are accumulated and spatial distribution of edge flags are represented by a
histogram having 16 elements. Similar procedures are applied to other three directions
to form respective histograms each having 16 elements. Finally, a 64-dimension
vector is formed by series-connecting the four histograms in the order of horizontal,
+45-degree, vertical, and ?45-degree.
[Fig. 2(a) flow: a 5x5 block scans the 64x64 input image; absolute-value differences between neighboring pixels feed a median-based threshold detection; edge filters then produce a 64x64 feature map per direction; each feature map is scanned into a 16-element histogram section of the PPED vector.]

The filtering kernels of Fig. 2(b):

Horizontal:            +45-degree:
  0  0  0  0  0          0  0  0  1  0
  1  1  1  1  1          0  1  1  0 -1
  0  0  0  0  0          0  1  0 -1  0
 -1 -1 -1 -1 -1          1  0 -1 -1  0
  0  0  0  0  0          0 -1  0  0  0

Vertical:              -45-degree:
  0  1  0 -1  0          0 -1  0  0  0
  0  1  0 -1  0          1  0 -1 -1  0
  0  1  0 -1  0          0  1  0 -1  0
  0  1  0 -1  0          0  1  1  0 -1
  0  1  0 -1  0          0  0  0  1  0
Fig. 2: PPED algorithm (a) and filtering kernels for edge detection (b).
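A software rendering of the algorithm just described can make the data flow concrete. The sketch below follows the text (median of the 40 in-block neighbor differences as the threshold; the largest of the four direction gradients is flagged if it exceeds the threshold); the exact histogram binning for the two diagonal maps is not spelled out in the text, so the diagonal-band accumulation here is our assumption.

```python
import numpy as np

# The four 5x5 kernels of Fig. 2(b): horizontal, +45, vertical, -45.
H = np.array([[0,0,0,0,0],[1,1,1,1,1],[0,0,0,0,0],[-1,-1,-1,-1,-1],[0,0,0,0,0]])
P = np.array([[0,0,0,1,0],[0,1,1,0,-1],[0,1,0,-1,0],[1,0,-1,-1,0],[0,-1,0,0,0]])
V = np.array([[0,1,0,-1,0]] * 5)
M = np.array([[0,-1,0,0,0],[1,0,-1,-1,0],[0,1,0,-1,0],[0,1,1,0,-1],[0,0,0,1,0]])
KERNELS = (H, P, V, M)

def pped(img):
    """64-d PPED vector from a 64x64 gray-scale image (uint8)."""
    n = img.shape[0]
    maps = np.zeros((4, n, n), dtype=bool)
    for y in range(2, n - 2):
        for x in range(2, n - 2):
            blk = img[y-2:y+3, x-2:x+3].astype(int)
            # Threshold: median of the 40 absolute differences between
            # vertically and horizontally neighboring pixels in the block.
            diffs = np.concatenate([np.abs(np.diff(blk, axis=0)).ravel(),
                                    np.abs(np.diff(blk, axis=1)).ravel()])
            thr = np.median(diffs)
            grads = [abs(int((k * blk).sum())) for k in KERNELS]
            d = int(np.argmax(grads))
            if grads[d] > thr:          # flag only the strongest direction
                maps[d, y, x] = True
    vec = np.zeros(64, dtype=int)
    ys, xs = np.indices((n, n))
    # Projection axis per direction; the diagonal banding is our assumption.
    bins = (ys // 4,                               # horizontal: groups of 4 rows
            np.minimum((ys + xs) // 8, 15),        # +45: diagonal bands
            xs // 4,                               # vertical: groups of 4 columns
            np.minimum((ys - xs + n - 1) // 8, 15))  # -45: diagonal bands
    for d in range(4):
        for b in range(16):
            vec[16 * d + b] = maps[d][bins[d] == b].sum()
    return vec
```

The 64 components come out in the order stated in the text: horizontal, +45-degree, vertical, −45-degree.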
3 System Organization
The system organization of the feature map generation VLSI is illustrated in Fig. 3.
The system receives one column of data (8-b x 5 pixels) at each clock and stores the
data in the last column of the 5x6 image buffer. The image buffer shifts all the stored
data to the right at every clock. Before the edge filtering circuit (EFC) starts detecting
four direction edges with respect to the center pixel in the 5x5 block, the threshold
value calculated from all the pixel data in the 5x5 block must be ready in time for the
processing. In order to keep the coherence of the threshold detection and the edge
filtering processing, the two last-in data locating at column 5 and 6 are given to median filter circuit (MFC) in advance via absolute value circuit (AVC). AVC calculates
all luminance differences between two neighboring pixels in columns 5 and 6.
In this manner, a fully seamless pipeline processing from threshold detection to edge
feature map generation has been established. The key requirement here is that MFC
must determine the median value of the 40 luminance difference data from the
5x5-pixel block fast enough to carry out the seamless pipeline processing. For this
purpose, a four-stage asynchronous median detection architecture has been developed
which is explained in the following.
Fig. 3: System organization of feature map generation VLSI.
The well-known binary search algorithm was adopted for fast execution of median
detection. The median search processing for five 4-b data is illustrated in Fig. 4 for the
purpose of explanation. In the beginning, majority voting is carried out for the MSBs
of all data. Namely, the number of 1's is compared with the number of 0's and the
majority group wins. The majority group flag ('0' in this example) is stored as the
MSB of the median value. In addition, the loser group is withdrawn in the following
voting by changing all remaining bits to the loser MSB ('1' in this example). By
repeating the processing, the median value is finally stored in the median value register.
[Figure content: five 4-b data words are processed MSB-first through majority voting circuits MVC0-MVC3; after each round the winning bit is appended to the median register and the loser group's remaining bits are overwritten, so the register finally holds the median value.]
Fig. 4: Hardware algorithm for median detection by binary search.
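The same bit-serial majority search is easy to express in software. The sketch below is our own rendering of the algorithm of Fig. 4, not the chip's RTL; it mirrors the "withdraw the loser group" step by overwriting losers' remaining bits.

```python
def median_by_majority(values, nbits=8):
    """Median search by per-bit majority voting (the binary search of Fig. 4).

    Works on non-negative integers < 2**nbits. At each bit position the
    majority bit among the candidates is kept; losers have all remaining
    (lower) bits replaced by their bit at this position, so they keep voting
    consistently on the side of the median they fall on.
    """
    vals = list(values)
    median = 0
    for pos in reversed(range(nbits)):              # MSB first
        bits = [(v >> pos) & 1 for v in vals]
        maj = 1 if sum(bits) * 2 > len(bits) else 0  # tie -> 0, as in the chip
        median = (median << 1) | maj
        mask = (1 << pos) - 1                        # remaining bit positions
        for i, v in enumerate(vals):
            if (v >> pos) & 1 != maj:
                v &= ~mask                           # clear remaining bits...
                if (v >> pos) & 1:
                    v |= mask                        # ...or set them all to 1
                vals[i] = v
    return median
```

For example, `median_by_majority([6, 3, 3, 0, 13], nbits=4)` returns 3, the median of {0, 3, 3, 6, 13}. In the chip the per-bit majority is computed in parallel by the MVCs, with the constant '0' on terminal IN40 breaking ties toward 0.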
How the median value is detected from all the 40 8-b data (20 horizontal luminance
difference data and 20 vertical luminance difference data) is illustrated in Fig. 5. All
the data are stored in the array of median detection units (MDUs). At each clock, the
array receives four vertical luminance difference data and five horizontal luminance
difference data calculated from the data in column 5 and 6 in Fig. 3. The entire data are
shifted downward at each clock. The median search is carried out for the upper four
bits and the lower four bits separately in order to enhance the throughput by pipelining.
For this purpose, the chip is equipped with eight majority voting circuits (MVC 0~7).
The upper four bits from all the data are processed by MVC 4~7 in a single clock cycle
to yield the median value. In the next clock cycle, the loser information is transferred
to the lower four bits within each MDU and MVC0~3 carry out the median search for
the lower four bits from all the data in the array.
Fig. 5: Median detection architecture for all 40 luminance difference data.
The majority voting circuit (MVC) is shown in Fig. 6. Output-connected CMOS inverters are employed as preamplifiers for majority detection, an approach first proposed
in Ref. [12]. In the present implementation, however, two preamps receiving input
data and inverted input data are connected to a 2-stage differential amplifier. Although this doubles the area penalty, the instability in the threshold for majority
detection due to process and temperature variations has been remarkably improved as
compared to the single inverter thresholding in Ref. [12]. The MVC in Fig. 6 has 41
input terminals although 40 bits of data are inputted to the circuit at one time. Bit '0'
is always given to the terminal IN40 to yield '0' as the majority when there is a tie
the majority voting.
Fig. 6: Majority voting circuit (MVC).
The edge filtering circuit (EFC) in Fig. 3 is composed as a four-stage pipeline of
regular CMOS digital logic. In the first two stages, four-direction edge gradients are
computed, and in the succeeding two stages, the detection of the largest gradient and
the thresholding is carried out to generate four edge flags.
4 Experimental Results
The feature map generation VLSI was fabricated in a 0.35-µm double-poly
three-metal-layer CMOS technology. A photomicrograph of the proof-of-concept
chip is shown in Fig. 7. The measured waveforms of the MVC at operating frequencies of 10MHz and 90MHz are demonstrated in Fig. 8. The input condition is in the
worst case. Namely, 21 '1' bits and 20 '0' bits were fed to the inputs. The observed
computation time is about 12 nsec which is larger than the simulation result of 2.5
nsec. This was caused by the capacitance loading due to the probing of the test circuit.
In the real circuit without external probing, we confirmed the average computation
time of 4~5 nsec.
[Chip floorplan labels: edge-detection filtering circuit, median filter control unit, majority voting circuits (x8), vector generator.]
Specifications: processing technology, 0.35-µm CMOS 2-poly 3-metal; chip size, 4.5 mm x 4.5 mm; supply voltage, 3.3 V; operation frequency, 50 MHz.
Fig. 7: Photomicrograph and specification of the fabricated proof-of-concept chip.
Fig. 8: Measured waveforms of majority voting circuit (MVC) at operation frequencies of 10MHz (a) and 90 MHz (b) for the worst-case input data.
The feature maps generated by the chip at the operation frequency of 25 MHz are
demonstrated in Fig. 9. The power dissipation was 224 mW. The difference between
the flag bits detected by the chip and those obtained by computer simulation is also
shown in the figure. The number of error flags was from 80 to 120 out of 16,384 flags,
only about 0.6% of the total. The occurrence of such error bits is anticipated since we
employed analog circuits for median detection. However, such errors do not cause
any serious problems in the PPED algorithm, as demonstrated in Figs. 10 and 11.
The template matching results with the top five PPED vector candidates in Sella
identification are demonstrated in Fig. 11, where Manhattan distance was adopted as
the dissimilarity measure. The error in the feature map generation processing yields a
constant bias to the dissimilarity and does not affect the result of the maximum likelihood search.
Fig. 9: Feature maps for Sella pattern generated by the chip.
Fig. 10: PPED vector for Sella pattern generated by the chip. The difference in the
vector components between the PPED vector generated by the chip and that obtained
by computer simulation is also shown.
[Fig. 11 plots dissimilarity (Manhattan distance, 0-1200) for the top five candidates in Sella recognition (1st (correct) through 5th), comparing measured chip data with computer simulation.]
Fig. 11: Comparison of template matching results.
5 Conclusion
A mixed-signal median filter VLSI circuit for PPED vector generation is presented. A
four-stage asynchronous median detection architecture based on analog digital
mixed-signal circuits has been introduced. As a result, a fully seamless pipeline
processing from threshold detection to edge feature map generation has been established. A prototype chip was designed in a 0.35-µm CMOS technology, and the fabricated chip generates an edge-based image vector every 80 µs, which is about 10⁴ times faster than the software computation.
Acknowledgments
The VLSI chip in this study was fabricated in the chip fabrication program of VLSI
Design and Education Center (VDEC), the University of Tokyo with the collaboration
by Rohm Corporation and Toppan Printing Corporation. The work is partially supported by the Ministry of Education, Science, Sports, and Culture under Grant-in-Aid
for Scientific Research (No. 14205043) and by JST in the program of CREST.
References
[1] C. Liu and Harry Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Transactions on Image Processing, Vol. 11, No. 4, Apr. 2002.
[2] C. Yen-ting, C. Kuo-sheng, and L. Ja-kuang, "Improving cephalogram analysis through feature subimage extraction," IEEE Engineering in Medicine and Biology Magazine, Vol. 18, No. 1, 1999, pp. 25-31.
[3] H. Potlapalli and R. C. Luo, "Fractal-based classification of natural textures," IEEE Transactions on Industrial Electronics, Vol. 45, No. 1, Feb. 1998.
[4] T. Yamasaki and T. Shibata, "Analog Soft-Pattern-Matching Classifier Using Floating-Gate MOS Technology," Advances in Neural Information Processing Systems 14, Vol. II, pp. 1131-1138.
[5] Masakazu Yagi and Tadashi Shibata, "An Image Representation Algorithm Compatible to Neural-Associative-Processor-Based Hardware Recognition Systems," IEEE Trans. Neural Networks, Vol. 14, No. 5, pp. 1144-1161, September 2003.
[6] M. Yagi, M. Adachi, and T. Shibata, "A hardware-friendly soft-computing algorithm for image recognition," in Proc. EUSIPCO 2000, Sept. 2000, pp. 729-732.
[7] M. Yagi, T. Shibata, and K. Takada, "Human-perception-like image recognition system based on the associative processor architecture," in Proc. EUSIPCO 2002, Vol. I, pp. 103-106, Sept. 2002.
[8] M. Yagi and T. Shibata, "An associative-processor-based mixed signal system for robust image recognition," in Proc. ISCAS 2002, May 2002, pp. V-137-V-140.
[9] M. Ogawa, K. Ito, and T. Shibata, "A general-purpose vector-quantization processor employing two-dimensional bit-propagating winner-take-all," in Symp. on VLSI Circuits Dig. Tech. Papers, Jun. 2002, pp. 244-247.
[10] S. Chakrabartty, M. Yagi, T. Shibata, and G. Cauwenberghs, "Robust Cephalometric Landmark Identification Using Support Vector Machines," ICASSP 2003, Hong Kong, April 6-10, 2003, pp. II-825-II-828.
[11] Masakazu Yagi, Tadashi Shibata, Chihiro Tanikawa, and Kenji Takada, "A Robust Medical Image Recognition System Employing Edge-Based Feature Vector Representation," in Proceedings of the 13th Scandinavian Conference on Image Analysis (SCIA 2003), pp. 534-540, Goteborg, Sweden, Jun. 29-Jul. 2, 2003.
[12] C. L. Lee and C.-W. Jen, "Bit-sliced median filter design based on majority gate," in IEE Proceedings-G, Vol. 139, No. 1, Feb. 1992, pp. 63-71.
1,551 | 241 |
TRAFFIC: Recognizing Objects Using
Hierarchical Reference Frame Transformations
Richard S. Zemel
Computer Science Dept.
University of Toronto
Toronto, ONT M5S 1A4
Michael C. Mozer
Computer Science Dept.
University of Colorado
Boulder, CO 80309-0430
Geoffrey E. Hinton
Computer Science Dept.
University of Toronto
Toronto, ONT M5S 1A4
ABSTRACT
We describe a model that can recognize two-dimensional shapes in
an unsegmented image, independent of their orientation, position,
and scale. The model, called TRAFFIC, efficiently represents the
structural relation between an object and each of its component
features by encoding the fixed viewpoint-invariant transformation
from the feature's reference frame to the object's in the weights of a
connectionist network. Using a hierarchy of such transformations,
with increasing complexity of features at each successive layer, the
network can recognize multiple objects in parallel. An implementation of TRAFFIC is described, along with experimental results
demonstrating the network's ability to recognize constellations of
stars in a viewpoint-invariant manner.
1 INTRODUCTION
A key goal of machine vision is to recognize familiar objects in an unsegmented
image, independent of their orientation, position, and scale. Massively parallel
models have long been used for lower-level vision tasks, such as primitive feature
extraction and stereo depth. Models addressing "higher-level" vision have generally
been restricted to pattern matching types of problems, in which much of the inherent
complexity of the domain has been eliminated or ignored.
The complexity of object recognition stems primarily from the difficult search required to find the correspondence between features of candidate objects and image
features. Images contain spurious features, which do not correspond to any object
features; objects in an image may have missing or occluded features; and noisy
measurements make it impossible to align object features to image features exactly. These problems are compounded in realistic domains, where images are not
segmented and normalized and the number of candidate objects is large.
In this paper, we present a structured, general model of object recognition - called
TRAFFIC (a loose acronym for "transforming feature instances") - that addresses
these difficult problems through a combination of strategies. First, we directly build
constraints on the spatial relationships between features of an object directly into
the architecture of a connectionist network. We thereby limit the space of possible
matches by constructing only plausible assignments of image features to objects.
Second, we embed this construction into a hierarchical architecture, which allows
the network to handle unsegmented, non-normalized images, and also allows for a
wide range of candidate objects. Third, we allow TRAFFIC to discover the critical
spatial relationships among features through training on examples of the target
objects in various poses.
2 MODEL HIGHLIGHTS
The following sections outline the three fundamental aspects of TRAFFIC. For a
more complete discussion of the details of TRAFFIC, see (Zemel, 1989).
2.1 ENCODING STRUCTURAL RELATIONS
The first key aspect of TRAFFIC concerns its encoding and use of the fixed spatial
relations between a rigid object and each of its component features. If we assume
that each feature has an intrinsic reference frame, then for a rigid object and a
particular feature of that object, there is a fixed viewpoint-independent transformation from the feature's reference frame to the object's. This transformation can
be used to predict the object's reference frame from the feature's. To recognize
objects, TRAFFIC takes advantage of the fact that all features of the same object
will predict the identical reference frame for that object (the "viewpoint consistency
constraint" (Lowe, 1987)).
Each reference frame transformation can be expressed as a matrix multiplication
that is efficiently implemented in a connectionist network. Consider a two-layer
network, with one layer containing units representing particular features, the other
containing units representing objects. For two-dimensional shapes, each feature is
described by a set of four instantiation units. These real-valued units represent
the parameter values associated with the feature: (x,y)-position, orientation, and
scale. The objects have a set of instantiation units as well. The units representing
particular features are connected to the units representing each object containing
that feature, thereby assigning each feature-object pair its own set of weighted
connections. The fixed matrix that describes the transformation from the feature's
intrinsic reference frame to the object's can be directly implemented in the set of
weights connecting the instantiation units of the feature and the object.
We can describe any instantiation, or any transformation between instantiations, as
a vector of four parameters. Let P_if = (x_if, y_if, c_if, s_if) specify the reference frame of the feature with respect to the image, where x_if and y_if represent the coordinates of the feature origin relative to the image frame, and c_if and s_if represent the scale and angle of the feature frame w.r.t. the image frame. Rather than encoding these values directly, c_if represents the product of the scale and the cosine of the angle, while s_if represents the product of the scale and the sine of the angle.¹ Let P_io = (x_io, y_io, c_io, s_io) specify the reference frame of the object with respect to the image. Finally, let P_fo = (x_fo, y_fo, c_fo, s_fo) specify the transformation from the reference frame of the object to that of the feature.
Each of these sets of parameters can be placed into a transformation matrix which converts points in one reference frame to points in another. We can express P_if as the matrix T_if, a transformation from the feature frame to the image frame:

T_if = [  c_if   s_if   x_if
         −s_if   c_if   y_if
           0      0      1  ]
Likewise, we can express P_fo as the matrix T_fo, a transformation from the object to feature frame, and P_io as T_io, a transformation from the object to image frame. Because T_fo is fixed for a given feature-object pair and T_if is derived from the image, T_io can easily be computed by composing these two transforms: T_io = T_if T_fo.
The four parameters underlying T_io can then be extracted, which results in the following four equations for P_io:

x_io = c_if x_fo + s_if y_fo + x_if
y_io = −s_if x_fo + c_if y_fo + y_if
c_io = c_if c_fo − s_if s_fo
s_io = c_if s_fo + s_if c_fo
This transformation is easily implemented in a network by connecting the units
representing P_if to the units representing P_io with the appropriate weights (Figure
1). In this manner, TRAFFIC directly encodes the reference frame transformation
from a feature to an object in the connections from the set of units representing
the feature's reference frame to units representing the object's frame. The specification of an object's reference frame can therefore be derived directly from each
of its component features on the basis of the structural relationship between the
feature and the object. Because each feature of an object should predict the same
reference frame parameters for the object, we can determine whether the object is
really present in the image by checking to see if the various features make identical
¹We represent angles by their sines and cosines to avoid the discontinuities involved in representing orientation by a single number and to eliminate the non-linear step of computing sin θ_if from θ_if. Note that we represent the four degrees of freedom in the instantiation parameters using
four units; a neurally plausible extension to this scheme which does not require single units with
arbitrary precision could allocate a pool of units to each of these parameters.
Figure 1: The matrix T_fo is a fixed coordinate transformation from the reference
frame of feature f to the reference frame of object o. This figure shows how T_fo
can be built into the weights connecting the object-instantiation units and the
feature-instantiation units.
predictions. In Section 2.3 we discuss how the object instantiation is formed in
cases where the object parameters predicted by the features do not agree perfectly.
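The composition T_io = T_if T_fo and the four extraction equations above are easy to verify in a few lines of code. The following sketch is our own illustration of the arithmetic, not code from the paper; in a network these products become fixed connection weights, since P_fo is constant for a given feature-object pair.

```python
import numpy as np

def make_T(p):
    """3x3 transform for an instantiation vector p = (x, y, c, s)."""
    x, y, c, s = p
    return np.array([[c,  s, x],
                     [-s, c, y],
                     [0., 0., 1.]])

def extract(T):
    """Recover (x, y, c, s) from a transform of the same form."""
    return np.array([T[0, 2], T[1, 2], T[0, 0], T[0, 1]])

def predict_object(p_if, p_fo):
    """P_io from a feature instantiation and the fixed feature-to-object
    transformation: exactly the four linear equations above."""
    return extract(make_T(p_if) @ make_T(p_fo))

# A feature seen at (10, 5), rotated 30 degrees, at scale 2:
theta, scale = np.deg2rad(30.0), 2.0
p_if = np.array([10.0, 5.0, scale * np.cos(theta), scale * np.sin(theta)])
p_fo = np.array([1.0, -2.0, 0.5, 0.0])   # hypothetical fixed transformation
print(predict_object(p_if, p_fo))        # the predicted object frame P_io
```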
2.2 FEATURE ABSTRACTION HIERARCHY
TRAFFIC recursively extends the notion of reference frame transformations between features and objects in a hierarchical architecture. It is impractical to hope
that any network will be able to directly map low-level input features to complex
objects. The input features must be simple enough to be easily extracted from
images without relying on sophisticated segmentation and interpretation. If they
are simple, however, they will be unable to uniquely predict the object's reference
frame, since a complex object may contain many copies of a single simple feature.
To address this problem, we adopt a hierarchical approach, introducing several
layers of intermediate features between the input and output layers. In each layer,
several features are grouped together to form an 'object' in the layer above; this
'object' then serves as a feature for 'objects' in the next layer. The lowest layer
contains simple features, such as edges and various corner types. The objects to be
recognized appear at the top of the hierarchy - the output layer of the network.
This composition hierarchy builds up a description of objects by selectively grouping
sets of features, forming an increasingly abstract set of features. The power of this
representation comes in the sharing of a set of features in one layer by objects in
the layer above.
To represent multiple features of the same type simultaneously, we carve up the
image into spatially-contiguous regions, each allowing the representation of one
instance of each feature. The network can thus represent several instances of a
feature type simultaneously, provided they lie in different regions.
We tailor the regions to the abstraction hierarchy as follows. In the lowest layers,
the features are simple and numerous, so we need many regions, but with only a
few feature types per region. In upper layers of the hierarchy, the features become
increasingly complex and span a larger area of the image; the number of feature
types increases and the regions become larger, while the instantiation units retain
accurate viewpoint information. In the highest layer, there is a single region, and it
spans the entire original image. At this level, the network can recognize and specify
parameters for a single instance of each object it has been trained on.
2.3 FORMING OBJECT HYPOTHESES
The third key aspect of TRAFFIC is its method of combining information from
features to determine both an object's reference frame and an overall estimate of
the likelihood that the object is actually present in the image. This likelihood,
called the object's confidence, is represented by an additional unit associated with
each object.
Each feature individually predicts the object's reference frame, and TRAFFIC forms
a single vector of object instantiation-parameters by averaging the predicted instantiations, weighted by the confidence of their corresponding features.² Every set of
units representing an object is sensitive to feature instances appearing in a fixed
area of the image - the receptive field of the object. The confidence of the object
is then a function of the confidence of the features lying in its receptive field, as
well as the variance of their predictions, because low variance indicates a highly
self-consistent object instantiation.
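A minimal sketch of this pooling step follows. The confidence-weighted average is as described in the text; the exponential fall-off with variance and the var_scale constant are our own assumptions, since the paper only states that confidence is a function of the feature confidences and the variance of their predictions.

```python
import numpy as np

def pool_object(preds, confs, var_scale=1.0):
    """Combine per-feature predictions of an object's instantiation.

    preds: (n, 4) array of predicted (x, y, c, s) vectors, one per feature.
    confs: (n,) feature confidences.
    Returns the confidence-weighted mean instantiation and an object
    confidence that falls off with the weighted prediction variance.
    """
    w = confs / confs.sum()
    mean = w @ preds                          # weighted average instantiation
    var = (w @ ((preds - mean) ** 2)).sum()   # total weighted variance
    conf = confs.mean() * np.exp(-var_scale * var)   # assumed fall-off form
    return mean, conf
```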
Once the network has been defined - the regions, receptive fields, and feature
types specified at each level, and the reference frame transformations encoded in
the weights - recognition occurs in a single bottom-up pass through the network.
TRAFFIC accepts as input a set of simple features and a description of their pose
in the image. At each layer in turn, the network forms many candidate object
instantiations from the set of feature instantiations in the layer below, and then
suppresses the object instantiations that are not consistently predicted by several
of their component features. At the output level of the network, the confidence
unit of each object describes the likelihood that that object is in the image, and its
instantiation units specify its pose.
3 IMPLEMENTING TRAFFIC
The domain we selected for study involves the recognition of constellations of stars.
This problem has several interesting properties: the image is by nature unsegmented; there are many false partial matches; no bottom-up cues suggest a natural frame of reference; and it requires the ability to perform 2-D transformation-invariant recognition.
²This averaging technique contains an implicit assumption that the maximum expected deviation of a prediction from the actual value is a function of the number of features, and that there will always be enough good values to smooth out any large deviations. We are currently exploring improved methods of forming object hypotheses.
Each image contains the set of visible stars in a region of the sky. The input
to TRAFFIC is a set of features that represent triples of stars in particular configurations. This input is computed by first dividing the image into regions and
extracting every combination of three stars within each region. The star triplets
(more precisely, the inner angles of the triangles formed by the triplets) are fed
into an unsupervised competitive-learning network whose task is to categorize the
configuration as one of a small number of types - the primitive feature types for
the input layer of TRAFFIC.
The architecture we implemented had an input layer, two intermediate layers, and
an output layer.³ Eight constellations were to be recognized, each represented by a
single unit in the output layer. We used a simple unsupervised learning scheme to
determine the feature types in the intermediate layers of the hierarchy, working up
sequentially from the input layer. During an initial phase of training, the system
samples many regions of the sky at random, creating features at one layer corresponding to the frequently occurring combinations of features in the layer below.
This scheme forms flexible intermediate representations tailored to the domain, but
not hand-coded for the particular object set.
This sampling method determined the connection weights through the intermediate
layers of the network. Back propagation was then used to set the weights between
the penultimate layer and the output layer.⁴ The entire network could have been
trained using back propagation, but the combined unsupervised-supervised learning
method we used is much simpler and quicker, and worked well for this problem.
4 EXPERIMENTAL RESULTS
We have run several experiments to test the main properties of the network, detailed
further in (Zemel, 1989). Each image used in training and testing contained one of
the eight target constellations, along with other nearby stars.
The first experiment tested the basic recognition capability of the system, as well as
its ability to learn useful connections between objects and features. The training set
consisted of a single view of each constellation. The second experiment examined
the network's ability to recognize a constellation independent of its position and
orientation in the image. We expanded the set of training images to include four
different views of each of the eight constellations, in various positions and orientations. The test set contained two novel views of the eight constellations. In both
experiments, the network quickly (≈ 150 epochs) learned to identify the target object. Learning was slower in the second experiment, but the network performance was identical for the training and testing images.

³The details of the network, such as the number of regions and feature types per layer, the number of connections, etc., are discussed in (Zemel, 1989).
⁴In this implementation, we used a less efficient method of encoding the transformations than the method discussed in Section 2.1, but both versions perform the same transformations.
The third experiment tested the network's ability not only to recognize an instance
of a constellation, but to correctly specify its reference frame. In most simulations,
the network produced a correct description of the target object instantiation across
the training and testing images.
A final experiment confirmed that the network did not recognize an instance of an
object when the features of the object were present in the input but were not in the
correct relation to one another. The confidence level of the target object decreased
proportionately as random noise was added to the instantiation parameters of input
features. This shows that the upper layers of the network perform the important
function of detecting the spatial relations of features from non-local areas of the
image.
5 RELATED WORK
TRAFFIC resembles systems based on the Hough transform (Ballard, 1981; Hinton, 1981) in that evidence from various feature instances is combined using the
viewpoint consistency constraint. However, while these Hough transform models
need a unit for every possible viewpoint of an object, TRAFFIC reduces hardware
requirements by using real-valued units to represent viewpoints.⁵ TRAFFIC also
resembles the approach of (Mjolsness, Gindi and Anandan, 1989), which relies on a
large optimization search to simultaneously find the best set of object instantiations
and viewpoint parameters to fit the image data. The TRAFFIC network carries
out a similar type of search, but the limited connectivity and hierarchical architecture of the network constrains the search. The feature abstraction hierarchy used
in TRAFFIC is common to many recognition systems. The pattern recognition
technique known as hierarchical synthesis (Barrow, Ambler and Burstall, 1972),
employs a similar architecture, as do several connectionist models (Denker et al.,
1989; Fukushima, 1980; Mozer, 1988). Each of these systems achieves position- and rotation-invariance by removing position information in the upper layers of the
hierarchy. The TRAFFIC hierarchy, on the other hand, maintains and manipulates accurate viewpoint information throughout, allowing it to consider relations
between features in non-local areas of the image.
6 CONCLUSIONS AND FUTURE WORK
The experiments demonstrate that TRAFFIC is capable of recognizing a limited
set of two-dimensional objects in a viewpoint-independent manner based on the
structural relations among components of the objects. We are currently testing
the network's ability to perform multiple-object recognition and its robustness with
respect to noise and occlusion. We are also currently developing a probabilistic
framework for combining the various predictions to form the most likely object
instantiation hypothesis. This probabilistic framework may increase the robustness of the model and allow it to handle deviations from object rigidity.

⁵Many other recognition systems, such as Lowe's SCERPO system (1985), represent object reference frame information as sets of explicit parameters.
Another extension to TRAFFIC we are currently exploring concerns the creation of
a pre-processing network to specify reference frame information for input features
directly from a raw image. We train this network using an unsupervised learning method based on the mutual information between neighboring image patches
(Becker and Hinton, 1989). Our aim is to apply this method to learn the mappings
from features to objects throughout the network hierarchy.
Acknowledgements
This research was supported by grants from the Ontario Information Technology Research Center,
grant 87-2-36 from the Alfred P. Sloan foundation, and a grant from the James S. McDonnell
Foundation to Michael Mozer.
References
Ballard, D. H. (1981). Generalizing the Hough transform to detect arbitrary shapes. Pattern
Recognition, 13(2):111-122.
Barrow, H. G., Ambler, A. P., and Burstall, R. M. (1972). Some techniques for recognising
structures in pictures. In Frontiers of Pattern Recognition. Academic Press, New York, NY.
Becker, S. and Hinton, G. E. (1989). Spatial coherence as an internal teacher for a neural network.
Technical Report CRG-TR-89-7, University of Toronto.
Bolles, R. C. and Cain, R. A. (1982). Recognizing and locating partially visible objects: The
local-feature-focus method. International Journal of Robotics Research, 1(3):57-82.
Denker, J. S., Gardner, W. L., Graf, H. P., Henderson, D., Howard, R. E., Hubbard, W., Jackel, L. D., Baird, H. S., and Guyon, I. (1989). Neural network recognizer for hand-written zip
code digits. In Touretzky, D. S., editor, Advances in neural information processing systems
I, pages 323-331, San Mateo, CA. Morgan Kaufmann Publishers, Inc.
Fukushima, K. (1980). Neocognitron: A self-organizing neural network model for a mechanism of
pattern recognition unaffected by shift in position. Biological Cybernetics, 36:193-202.
Hinton, G. E. (1981). A parallel computation that assigns canonical object-based frames of reference. In Proceedings of the 7th International Joint Conference on Artificial Intelligence,
pages 683-685, Vancouver, BC, Canada.
Huttenlocher, D. P. and Ullman, S. (1987). Object recognition using alignment. In First International Conference on Computer Vision, pages 102-111, London, England.
Lowe, D. G. (1985). Perceptual Organization and Visual Recognition. Kluwer Academic Publishers, Boston.
Lowe, D. G. (1987). The viewpoint consistency constraint. International Journal of Computer
Vision, 1:57-72.
Mjolsness, E., Gindi, G., and Anandan, P. (1989). Optimization in model matching and perceptual
organization. Neural Computation, 1:218-299.
Mozer, M. C. (1988). The perception of multiple objects: A parallel, distributed processing
approach. Technical Report 8803, University of California, San Diego, Institute for Cognitive
Science.
Zemel, R. S. (1989). TRAFFIC: A connectionist model of object recognition. Technical Report CRG-TR-89-2, University of Toronto.
The IM Algorithm: A variational
approach to Information Maximization
David Barber
Felix Agakov
Institute for Adaptive and Neural Computation : www.anc.ed.ac.uk
Edinburgh University, EH1 2QL, U.K.
Abstract
The maximisation of information transmission over noisy channels
is a common, albeit generally computationally difficult problem.
We approach the difficulty of computing the mutual information
for noisy channels by using a variational approximation. The resulting IM algorithm is analogous to the EM algorithm, yet maximises mutual information, as opposed to likelihood. We apply the
method to several practical examples, including linear compression,
population encoding and CDMA.
1 Introduction
The reliable communication of information over noisy channels is a widespread issue,
ranging from the construction of good error-correcting codes to feature extraction[3,
12]. In a neural context, maximal information transmission has been extensively
studied and proposed as a principal goal of sensory processing[2, 5, 7]. The central
quantity in this context is the Mutual Information (MI) which, for source variables
(inputs) x and response variables (outputs) y, is
$$I(x,y) \equiv H(y) - H(y|x), \qquad (1)$$
where $H(y) \equiv -\langle\log p(y)\rangle_{p(y)}$ and $H(y|x) \equiv -\langle\log p(y|x)\rangle_{p(x,y)}$ are marginal and
conditional entropies respectively, and angled brackets represent averages. The
goal is to adjust parameters of the mapping p(y|x) to maximise I(x, y). Despite
the simplicity of the statement, the MI is generally intractable for all but special
cases. The key difficulty lies in the computation of the entropy of p(y) (a mixture).
One such tractable special case is if the mapping $y = g(x;\theta)$ is deterministic and invertible, for which the difficult entropy term trivially becomes
$$H(y) = \langle \log|J| \rangle_{p(y)} + \text{const.} \qquad (2)$$
Here $J = \{\partial y_i/\partial x_j\}$ is the Jacobian of the mapping. For non-Gaussian sources
p(x), and special choices of $g(x;\theta)$, the maximisation of (1) with respect to the parameters $\theta$ leads to the infomax formulation of ICA[4].
Another tractable special case is if the source distribution p(x) is Gaussian and the
mapping p(y|x) is Gaussian.
[Figure 1 diagram: source x ~ p(x) → encoder p(y|x) → code y; decoder q(x|y, z), with auxiliary z ~ p(z), recovers x.]
Figure 1: An illustration of the form of a more general mixture decoder. x represents the sources or inputs, which are (stochastically) encoded as y. A receiver decodes y (possibly with the aid of auxiliary variables z).
However, in general, approximations of the MI need to be considered. A variety of
methods have been proposed. In neural coding, a popular alternative is to maximise
the Fisher 'Information'[5]. Other approaches use different objective criteria, such
as average reconstruction error.
2 Variational Lower Bound on Mutual Information
Since the MI is a measure of information transmission, our central aim is to maximise
a lower bound on the MI. Using the symmetric property of the MI, an equivalent
formulation of the MI is $I(x,y) = H(x) - H(x|y)$. Since we shall generally be interested in optimising MI with respect to the parameters of p(y|x), and p(x) is simply the data distribution, we need to bound H(x|y) suitably. The Kullback-Leibler bound $\sum_x p(x|y)\log p(x|y) - p(x|y)\log q(x|y) \ge 0$ gives
$$I(x,y) \ge \underbrace{H(x)}_{\text{``entropy''}} + \underbrace{\langle\log q(x|y)\rangle_{p(x,y)}}_{\text{``energy''}} \stackrel{\text{def}}{=} \tilde{I}(x,y), \qquad (3)$$
where q(x|y) is an arbitrary variational distribution. The bound is exact if $q(x|y) \equiv$
p(x|y). The form of this bound is convenient since it explicitly includes both the
encoder p(y|x) and decoder q(x|y), see fig(1).
Certainly other well known lower bounds on the MI may be considered [6] and
a future comparison of these different approaches would be interesting. However,
our current experience suggests that the bound considered above is particularly
computationally convenient. Since the bound is based on the KL divergence, it is
equivalent to a moment matching approximation of p(x|y) by q(x|y). This fact
is highly beneficial in terms of decoding, since mode matching approaches, such
as mean-field theory, typically get trapped in one of many sub-optimal local
minima. More successful decoding algorithms approximate the posterior mean[10].
The IM algorithm
To maximise the MI with respect to any parameters $\theta$ of $p(y|x,\theta)$, we aim to push up the lower bound (3). First one needs to choose a class of variational distributions $q(x|y) \in Q$ for which the energy term is tractable. Then a natural recursive procedure for maximising $\tilde{I}(X,Y)$ for given p(x) is:

1. For fixed q(x|y), find $\theta^{\text{new}} = \arg\max_\theta \tilde{I}(X,Y)$.
2. For fixed $\theta$, find $q^{\text{new}}(x|y) = \arg\max_{q(x|y)\in Q} \tilde{I}(X,Y)$, where Q is a chosen class of distributions.
These steps are iterated until convergence. This procedure is analogous to the
(G)EM algorithm which maximises a lower bound on the likelihood[9]. The difference is simply in the form of the 'energy' term.
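As a concrete illustration, here is a minimal sketch of the IM iteration (ours, not the authors' code) for a toy channel with binary x and y. When Q is unrestricted, the q-step has the closed form q(x|y) = p(x|y); the θ-step is done here by a crude grid search.

```python
import numpy as np

px = np.array([0.5, 0.5])                      # source distribution p(x)

def p_y_given_x(theta):
    """Encoder: p(y=1|x) = sigmoid(theta[x]), one parameter per input state."""
    p1 = 1.0 / (1.0 + np.exp(-theta))          # shape (2,)
    return np.stack([1.0 - p1, p1], axis=1)    # table indexed [x, y]

def bound(theta, q):
    """I_tilde = H(x) + <log q(x|y)>_{p(x,y)}  (natural logs)."""
    pxy = px[:, None] * p_y_given_x(theta)     # joint p(x, y)
    Hx = -np.sum(px * np.log(px))
    return Hx + np.sum(pxy * np.log(q.T + 1e-12))   # q indexed [y, x]

theta = np.array([0.1, -0.1])
for _ in range(50):
    # q-step: with Q unrestricted, the optimum is the exact posterior p(x|y).
    pxy = px[:, None] * p_y_given_x(theta)
    q = (pxy / pxy.sum(axis=0, keepdims=True)).T    # q[y, x] = p(x|y)
    # theta-step: crude grid search over the encoder parameters.
    grid = np.linspace(-5, 5, 41)
    cand = [(bound(np.array([a, b]), q), (a, b)) for a in grid for b in grid]
    theta = np.array(max(cand)[1])
print("theta:", theta, "bound (nats):", bound(theta, q))
```

For this noiseless-capacity toy problem the iteration drives the channel toward a deterministic mapping, and the bound approaches ln 2, the exact MI.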
Note that if |y| is large, the posterior p(x|y) will typically be sharply peaked around
its mode. This would motivate a simple approximation q(x|y) to the posterior,
Figure 2: The MI optimal linear projection of data x (dots) is not always
given by PCA. PCA projects data onto the vertical line, for which the entropy
conditional on the projection H(x|y) is large. Optimally, we should project
onto the horizontal line, for which the conditional entropy is zero.
significantly reducing the computational complexity of optimization. In the case
of real-valued x, a natural choice in the large |y| limit is to use a Gaussian. A
simple approximation would then be to use a Laplace approximation to p(x|y) with
2
log p(x|y)
. Inserted in the bound, this then gives a
covariance elements [??1 ]ij = ? ?x
i ?xj
form reminiscent of the Fisher Information[5]. The bound presented here is arguably
more general and appropriate than presented in [5] since, whilst it also tends to the
exact value of the MI in the limit of a large number of responses, it is a principled
bound for any response dimension.
Relation to Conditional Likelihood
Consider an autoencoder $x \to y \to \tilde{x}$ and imagine that we wish to maximise the probability that the reconstruction $\tilde{x}$ is in the same state s as x:
$$\log p(\tilde{x}=s|x=s) = \log \int_y p(\tilde{x}=s|y)\,p(y|x=s) \;\overset{\text{Jensen}}{\ge}\; \langle\log p(\tilde{x}=s|y)\rangle_{p(y|x=s)}$$
Averaging this over all the states of x:
$$\sum_s p(x=s)\log p(\tilde{x}=s|x=s) \ge \sum_s \langle\log p(\tilde{x}=s|y)\rangle_{p(x=s,y)} \equiv \langle\log q(x|y)\rangle_{p(x,y)}$$
Hence, maximising $\tilde{I}(X,Y)$ (for fixed p(x)) is the same as maximising the lower bound on the probability of a correct reconstruction. This is a reassuring property of the lower bound. Even though we do not directly maximise the MI, we also indirectly maximise the probability of a correct reconstruction, a form of autoencoder.
Generalisation to Mixture Decoders
A straightforward application of Jensen's inequality leads to the more general result:
$$I(X,Y) \ge H(X) + \langle\log q(x|y,z)\rangle_{p(y|x)p(x)q(z)} \equiv \tilde{I}(X,Y)$$
where q(x|y, z) and q(z) are variational distributions. The aim is to choose q(x|y, z)
such that the bound is tractably computable. The structure is illustrated in fig(1).
3 Linear Gaussian Channel: Improving on PCA
A common theme in linear compression and feature extraction is to map a (high
dimensional) vector x to a (lower dimensional) vector y = W x such that the information in the vector x is maximally preserved in y. The classical solution to this
problem (and minimizes the linear reconstruction error) is given by PCA. However,
as demonstrated in fig(2), the optimal setting for W is, in general, not given by the
widely used PCA.
To see how we might improve on the PCA approach, we consider optimising our
bound with respect to linear mappings. We take as our projection (encoder) model,
$p(y|x) \sim N(Wx, s^2 I)$, with isotropic Gaussian noise. The empirical distribution is simply $p(x) \sim \sum_{\mu=1}^P \delta(x - x^\mu)$, where P is the number of datapoints.
Without loss of generality, we assume the data is zero mean. For a decoder
$q(x|y) = N(m(y), \Sigma(y))$, maximising the bound on MI is equivalent to minimising
$$\sum_{\mu=1}^P \left\langle (x^\mu - m(y))^T \Sigma^{-1}(y)(x^\mu - m(y)) + \log\det\Sigma(y) \right\rangle_{p(y|x^\mu)}$$
For constant diagonal matrices $\Sigma(y)$, this reduces to minimal mean square reconstruction error autoencoder training in the limit $s^2 \to 0$. This clarifies why autoencoders (and hence PCA) are a sub-optimal special case of MI maximisation.
Linear Gaussian Decoder
A simple decoder is given by $q(x|y) \sim N(Uy, \sigma^2 I)$, for which
$$\tilde{I}(x,y) \propto 2\,\mathrm{tr}(UWS) - \mathrm{tr}(UMU^T), \qquad (4)$$
where $S = \langle xx^T\rangle = \sum_\mu x^\mu (x^\mu)^T/P$ is the sample covariance of the data, and
$$M = Is^2 + WSW^T \qquad (5)$$
is the covariance of the mixture distribution p(y). Optimization of (4) for U leads to $SW^T = UM$. Eliminating U, this gives
$$\tilde{I}(x,y) \propto \mathrm{tr}\left(SW^T M^{-1} WS\right) \qquad (6)$$
In the zero noise limit, optimisation of (6) produces PCA. For noisy channels, unconstrained optimization of (6) leads to a divergence of the matrix norm $\|WW^T\| \to \infty$;
a norm-constrained optimisation in general produces a different result to PCA. The
simplicity of the linear decoder in this case severely limits any potential improvement
over PCA, and certainly would not resolve the issue in fig(2). For this, a non-linear
decoder q(x|y) is required, for which the integrals become more complex.
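The bound (6) is cheap to evaluate numerically, which allows direct comparison of candidate projections. A minimal sketch, with illustrative data and matched matrix norms (our own setup, not the paper's experiment):

```python
import numpy as np
rng = np.random.default_rng(0)

def mi_bound(W, S, s2):
    """tr( S W^T M^{-1} W S ) with M = s^2 I + W S W^T, cf. eq. (6)."""
    M = s2 * np.eye(W.shape[0]) + W @ S @ W.T
    return np.trace(S @ W.T @ np.linalg.solve(M, W @ S))

d, k, s2 = 5, 2, 0.5
X = rng.normal(size=(200, d)) * np.array([3.0, 2.0, 1.0, 0.5, 0.1])
S = X.T @ X / len(X)                          # sample covariance

_, _, Vt = np.linalg.svd(X, full_matrices=False)
W_pca = Vt[:k]                                # top-k principal directions
W_rnd = rng.normal(size=(k, d))
W_rnd *= np.linalg.norm(W_pca) / np.linalg.norm(W_rnd)   # match norms

print("PCA    bound:", mi_bound(W_pca, S, s2))
print("random bound:", mi_bound(W_rnd, S, s2))
```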
Non-linear Encoders and Kernel PCA
An alternative to using non-linear decoders to improve on PCA is to use a non-linear
encoder. A useful choice is
$$p(y|x) = N(W\Phi(x), \sigma^2 I)$$
where $\Phi(x)$ is in general a high dimensional, non-linear embedding function, for which W will be non-square. In the zero-noise limit the optimal solution for the encoder results in non-linear PCA on the covariance $\langle\Phi(x)\Phi(x)^T\rangle$ of the transformed data. By Mercer's theorem, the elements of the covariance matrix may be replaced
by a Kernel function of the user's choice[8]. An advantage of our framework is that
our bound enables the principled comparison of embedding functions/kernels.
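As one concrete instance, the covariance eigen-problem can be solved implicitly through the Gram matrix; the following is a minimal sketch of standard (centred) kernel PCA under an assumed RBF kernel, not code from the paper:

```python
import numpy as np
rng = np.random.default_rng(1)

X = rng.normal(size=(100, 3))
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-0.5 * sq)                      # RBF Gram matrix K_ij = k(x_i, x_j)
n = len(X)
H = np.eye(n) - np.ones((n, n)) / n
Kc = H @ K @ H                             # centre the implicit features
vals, vecs = np.linalg.eigh(Kc)
alphas = vecs[:, ::-1][:, :2] / np.sqrt(np.maximum(vals[::-1][:2], 1e-12))
Y = Kc @ alphas                            # 2-D non-linear projections
print(Y[:3])
```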
4 Binary Responses (Neural Coding)
In a neurobiological context, a popular issue is how to encode real-valued stimuli
in a population of spiking neurons. Here we look briefly at a simple case in which
each neuron fires ($y_i = 1$) with increasing probability the further the membrane potential $w_i^Tx$ is above threshold $-b_i$. Independent neural firing suggests:
$$p(y|x) = \prod_i p(y_i|x) \stackrel{\text{def}}{=} \prod_i \sigma\left(y_i(w_i^Tx + b_i)\right). \qquad (7)$$
Figure 3: Top row: a subset of the original real-valued source data. Middle row: after
training, 20 samples from each of the 7 output units, for each of the corresponding source
inputs. Bottom row: Reconstruction of the source data from 50 samples of the output
units. Note that while the 8th and the 10th patterns have closely matching stochastic
binary representations, they differ in the firing rates of unit 5. This results in a visibly
larger bottom loop of the 8th reconstructed pattern, which agrees with the original source
data. Also, the thick vertical 1 (pattern 3) differs from the thin vertical eight (pattern 6)
due to the differences in stochastic firings of the third and the seventh units.
Here the response variables $y \in \{-1,+1\}^{|y|}$, and $\sigma(a) \stackrel{\text{def}}{=} 1/(1+e^{-a})$. For the decoder, we chose a simple linear Gaussian $q(x|y) \sim N(Uy, \Sigma)$. In this case, exact
evaluation of the bound (3) is straightforward, since it only involves computations
of the second-order moments of y over the factorized distribution.
A reasonable reconstruction of the source $x^\mu$ from its representation y will be given by the mean $\tilde{x} = \langle x\rangle_{q(x|y)}$ of the learned approximate posterior. In noisy channels we need to average over multiple possible representations, i.e. $\tilde{x} = \left\langle \langle x\rangle_{q(x|y)} \right\rangle_{p(y|x^\mu)}$.
We performed reconstruction of continuous source data from stochastic binary responses for |x| = 196 input and |y| = 7 output units. The bound was optimized
with respect to the parameters of p(y|x) and q(x|y) with isotropic norm constraints
on W and b for 30 instances of digits 1 and 8 (15 of each class). The source variables
were reconstructed from 50 samples of the corresponding binary representations at
the mean of the learned q(x|y), see fig(3).
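The sampling-based reconstruction is easy to sketch. Below, the layer sizes follow the text, but W, b and U are random placeholders standing in for trained parameters:

```python
import numpy as np
rng = np.random.default_rng(2)

nx, ny, n_samples = 196, 7, 50
W = rng.normal(scale=0.1, size=(ny, nx))   # encoder weights (untrained here)
b = np.zeros(ny)
U = rng.normal(scale=0.1, size=(nx, ny))   # linear Gaussian decoder mean U y

def sigmoid(a):
    return 1.0 / (1.0 + np.exp(-a))

x = rng.normal(size=nx)                    # a source pattern
p_fire = sigmoid(W @ x + b)                # p(y_i = +1 | x)
# Sample y in {-1,+1}^ny and average the decoder means <U y>:
Y = np.where(rng.random((n_samples, ny)) < p_fire, 1.0, -1.0)
x_rec = (Y @ U.T).mean(axis=0)             # mean over samples of U y
print(x_rec[:5])
```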
5 Code Division Multiple Access (CDMA)
In CDMA[11], a mobile phone user $j \in \{1,\ldots,M\}$ wishes to send a bit $s_j \in \{0,1\}$ of information to a base station. To send $s_j = 1$, she transmits an N dimensional real-valued vector $g^j$, which represents a time-discretised waveform ($s_j = 0$ corresponds to no transmission). The simultaneous transmissions from all users result in a received signal at the base station of
$$r_i = \sum_j g_i^j s_j + \eta_i, \quad i = 1,\ldots,N, \qquad \text{or} \qquad r = Gs + \eta$$
where $\eta_i$ is Gaussian noise. Probabilistically, we can write
$$p(r|s) \propto \exp\left\{-(r - Gs)^2/(2\sigma^2)\right\}.$$
The task for the base station (which knows G) is to decode the received vector r
so that s can be recovered reliably. For simplicity, we assume that N = M so that
the matrix G is square. Using Bayes' rule, $p(s|r) \propto p(r|s)p(s)$, and assuming a flat prior on s,
$$p(s|r) \propto \exp\left\{-\left(-2r^TGs + s^TG^TGs\right)/(2\sigma^2)\right\} \qquad (8)$$
Computing either the MAP solution $\arg\max_s p(s|r)$ or the MPM solution $\arg\max_{s_j} p(s_j|r)$, $j = 1,\ldots,M$, is, in general, NP-hard.
If $G^TG$ is diagonal, optimal decoding is easy, since the posterior factorises, with
$$p(s_j|r) \propto \exp\left\{\left(2(G^Tr)_j - D_{jj}\right)s_j/(2\sigma^2)\right\}$$
where the diagonal matrix $D = G^TG$ (and we used $s_i^2 \equiv s_i$ for $s_i \in \{0,1\}$). For
suitably randomly chosen matrices G, $G^TG$ will be approximately diagonal in the limit of large N. However, ideally, one would like to construct decoders that perform near-optimal decoding without recourse to the approximate diagonality of $G^TG$.
The MAP decoder solves the problem
$$\min_{s\in\{0,1\}^N} s^TG^TGs - 2s^TG^Tr \;\equiv\; \min_{s\in\{0,1\}^N} \left(s - G^{-1}r\right)^T G^TG \left(s - G^{-1}r\right)$$
and hence the MAP solution is that s which is closest to the vector $G^{-1}r$. The difficulty lies in the meaning of 'closest' since the space is non-isotropically warped by the matrix $G^TG$. A useful guess for the decoder is that it is the closest in the Euclidean sense to the vector $G^{-1}r$. This is the so-called decorrelation estimator.
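A short simulation (our own sketch, with placeholder sizes) makes the decorrelation estimator concrete: generate r = Gs + η and threshold the components of G⁻¹r:

```python
import numpy as np
rng = np.random.default_rng(3)

N = 10
sigma = 0.3
G = rng.normal(size=(N, N)) / np.sqrt(N)   # random square spreading matrix
s = rng.integers(0, 2, size=N).astype(float)
r = G @ s + sigma * rng.normal(size=N)     # received signal r = Gs + eta

# Decorrelation ("inverse") decoder: threshold G^{-1} r component-wise.
s_hat = (np.linalg.solve(G, r) > 0.5).astype(float)
print("bit errors:", int(np.sum(s_hat != s)))
```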
Computing the Mutual Information
Of prime interest in CDMA is the evaluation of decoders in the case of nonorthogonal matrices G[11]. In this respect, a principled comparison of decoders
can be obtained by evaluating the corresponding bound on the MI¹,
$$I(r,s) \equiv H(s) - H(s|r) \ge H(s) + \sum_r\sum_s p(s)p(r|s)\log q(s|r) \qquad (9)$$
where H(s) is trivially given by M (bits). The bound is exact if q(s|r) = p(s|r).
We make the specific assumption in the following that our decoding algorithm takes the factorised form $q(s|r) = \prod_i q(s_i|r)$ and, without loss of generality, we may write
$$q(s_i|r) = \sigma\left((2s_i - 1)f_i(r)\right) \qquad (10)$$
for some decoding function $f_i(r)$. We restrict interest here to the case of simple linear decoding functions
$$f_i(r) = a_i + \sum_j w_{ij} r_j.$$
Since p(r|s) is Gaussian, $(2s_i - 1)f_i(r) \equiv x_i$ is also Gaussian,
$$p(x_i|s) = N(\mu_i(s), \mathrm{var}_i), \qquad \mu_i(s) \equiv (2s_i - 1)(a_i + w_i^TGs), \qquad \mathrm{var}_i \equiv \sigma^2 w_i^Tw_i$$
where $w_i^T$ is the $i$th row of the matrix $[W]_{ij} \equiv w_{ij}$. Hence
$$-H(s|r) \ge \sum_i \left\langle \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi\sigma^2 w_i^Tw_i}}\left[\log\sigma(x)\right] e^{-[x-(2s_i-1)(a_i+w_i^TGs)]^2/(2\sigma^2 w_i^Tw_i)}\,dx \right\rangle_{p(s)} \qquad (11)$$
In general, the average over the factorised distribution p(s) can be evaluated by
using the Fourier Transform [1]. However, to retain clarity here, we constrain the
decoding matrix W so that $w_i^TGs = b_i s_i$, i.e. $WG = \mathrm{diag}(b)$, for a parameter
vector b. The average over p(s) then gives
$$-H(s|r) \ge \frac{1}{2}\sum_i \left\langle \log\sigma(x)\left(1 + e^{-[-2xb_i - 4xa_i + 2a_ib_i + b_i^2]/(2\sigma^2 w_i^Tw_i)}\right)\right\rangle_{N(-a_i,\,\mathrm{var}=\sigma^2 w_i^Tw_i)}, \qquad (12)$$

¹Other variational methods may be considered to approximate the normalisation constant of p(s|r)[13], and it would be interesting to look into the possibility of using them in a MI approximation, and also as approximate decoding algorithms.
[Figure 4: scatter plot; horizontal axis 'MI bound for Inverse Decoder' (0 to 1), vertical axis 'MI bound for Constrained Optimal Decoder' (0 to 1).]
Figure 4: The bound given by the decoder $W \propto G^{-1}$ plotted against the optimised bound (for the same G) found using 50 updates of conjugate gradients. This was repeated over several trials of randomly chosen matrices G, each of which are square of N = 10 dimensions. For clarity, a small number of poor results (in which the bound is negative) have been omitted. To generate G, form the matrix $A_{ij} \sim N(0,1)$, and $B = A + A^T$. From the eigen-decomposition of B, i.e. $BE = E\Lambda$, form $[G]_{ij} = [E\Lambda]_{ij} + 0.1\,N(0,1)$ (so that $G^TG$ has small off-diagonal elements).
a sum of one dimensional integrals, each of which can be evaluated numerically. In
the case of an orthogonal matrix $G^TG = D$ the decoding function is optimal and the MI bound is exact with the parameters in (12) set to
$$a_i = -[G^TG]_{ii}/(2\sigma^2), \qquad W = G^T/\sigma^2, \qquad b_i = [G^TG]_{ii}/\sigma^2.$$
Optimising the linear decoder
In the case that $G^TG$ is non-diagonal, what is the optimal linear decoder? A partial answer is given by numerically optimising the bound from (11). For the constrained case, $WG = \mathrm{diag}(b)$, (12) can be used to calculate the bound. Using $W = \mathrm{diag}(b)G^{-1}$,
$$\sigma^2 w_i^Tw_i = \sigma^2 b_i^2 \sum_j ([G^{-1}]_{ij})^2,$$
and the bound depends only on a and b. Under this constraint the bound can be numerically optimised as a function of a and b, given a fixed vector $\sum_j ([G^{-1}]_{ij})^2$.
As an alternative we can employ the decorrelation decoder, $W = G^{-1}/\sigma^2$, with $a_i = -1/(2\sigma^2)$. In fig(4) we see that, according to our bound, the decorrelation or ('inverse') decoder is suboptimal versus the linear decoder $f_i(r) = a_i + w_i^Tr$ with $W = \mathrm{diag}(b)G^{-1}$, optimised over a and b. These initial results are encouraging, and motivate further investigations, for example, using syndrome decoding for CDMA.
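Each term of (11) and (12) is a one-dimensional Gaussian average of a smooth integrand, so Gauss-Hermite quadrature applies directly. The sketch below is ours; the integrand follows our reconstruction of (12) above, and the parameter values are placeholders:

```python
import numpy as np

def gauss_expect(g, mean, var, n=60):
    """<g(x)> under N(mean, var) via Gauss-Hermite quadrature."""
    t, w = np.polynomial.hermite_e.hermegauss(n)   # nodes/weights for N(0,1)
    return (w / w.sum()) @ g(mean + np.sqrt(var) * t)

def log_sigmoid(x):
    return -np.logaddexp(0.0, -x)                  # log sigma(x), stable

def neg_cond_entropy_term(a, b, v):
    """One i-term of the -H(s|r) bound in (12), as reconstructed above;
    a, b are decoder parameters, v = sigma^2 w_i^T w_i."""
    g = lambda x: log_sigmoid(x) * (
        1.0 + np.exp(-(-2*x*b - 4*x*a + 2*a*b + b*b) / (2*v)))
    return 0.5 * gauss_expect(g, mean=-a, var=v)

print(neg_cond_entropy_term(a=-0.5, b=1.0, v=0.2))  # placeholder parameters
```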
6 Posterior Approximations
There is an interesting relationship between maximising the bound on the MI and
computing an optimal estimate q(s|r) of an intractable posterior p(s|r). The optimal bit error solution sets $q(s_i|r)$ to the mean of the exact posterior marginal $p(s_i|r)$. Mean Field Theory approximates the posterior marginal by minimising the KL divergence: $KL(q\|p) = \sum_s \left(q(s|r)\log q(s|r) - q(s|r)\log p(s|r)\right)$, where $q(s|r) = \prod_i q(s_i|r)$. In this case, the KL divergence is tractably computable (up to
a negligible prefactor). However, this form of the KL divergence chooses $q(s_i|r)$ to be any one of a very large number of local modes of the posterior distribution $p(s_i|r)$. Since the optimal choice is to choose the posterior marginal mean, this is why using Mean Field decoding is generally suboptimal. Alternatively, consider
$$KL(p\|q) = \sum_s \left(p(s|r)\log p(s|r) - p(s|r)\log q(s|r)\right) = -\sum_s p(s|r)\log q(s|r) + \text{const.}$$
This is the correct KL divergence in the sense that, optimally, $q(s_i|r) = p(s_i|r)$, that
is, the posterior marginal is correctly calculated. The difficulty lies in performing
averages with respect to p(s|r), which are generally intractable. Since we will have
a distribution p(r) it is reasonable to provide an averaged objective function,
$$\sum_r\sum_s p(r)p(s|r)\log q(s|r) = \sum_r\sum_s p(s)p(r|s)\log q(s|r). \qquad (13)$$
Whilst, for any given r, we cannot calculate the best posterior marginal estimate,
we may be able to calculate the best posterior marginal estimate on average. This is
precisely the case in, for example, CDMA since the average over p(r|s) is tractable,
and the resulting average over p(s) can be well approximated numerically. Wherever an average-case objective is desired, the methods suggested here are of interest.
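When sampling from p(s) and p(r|s) is easy, as in the CDMA model above, the averaged objective (13) can be estimated by Monte Carlo; a minimal sketch (ours), using the decorrelation decoder as q(s|r):

```python
import numpy as np
rng = np.random.default_rng(4)

N, sigma, n_mc = 8, 0.3, 2000
G = rng.normal(size=(N, N)) / np.sqrt(N)
W = np.linalg.inv(G) / sigma**2            # decorrelation decoder
a = -np.ones(N) / (2 * sigma**2)

def log_q(s, r):
    """log prod_i sigma((2 s_i - 1) f_i(r)) with f_i(r) = a_i + w_i^T r."""
    f = a + W @ r
    return -np.logaddexp(0.0, -(2*s - 1) * f).sum()

total = 0.0
for _ in range(n_mc):                      # s ~ p(s), r ~ p(r|s)
    s = rng.integers(0, 2, size=N).astype(float)
    r = G @ s + sigma * rng.normal(size=N)
    total += log_q(s, r)
print("Monte Carlo estimate of (13):", total / n_mc)
```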
7 Discussion
We have described a general theoretically justified approach to information maximization in noisy channels. Whilst the bound is straightforward, it appears to have
attracted little previous attention as a practical tool for MI optimisation. We have
shown how it naturally generalises linear compression and feature extraction. It is
a more direct approach to optimal coding than using the Fisher 'Information' in
neurobiological population encoding. Our bound enables a principled comparison of
different information maximisation algorithms, and may have applications in other
areas of machine learning and Information Theory, such as error-correction.
[1] D. Barber, Tractable Approximate Belief Propagation, Advanced Mean Field Methods: Theory and Practice (D. Saad and M. Opper, eds.), MIT Press, 2001.
[2] H. Barlow, Unsupervised Learning, Neural Computation 1 (1989), 295-311.
[3] S. Becker, An Information-theoretic unsupervised learning algorithm for neural networks, Ph.D. thesis, University of Toronto, 1992.
[4] A.J. Bell and T.J. Sejnowski, An information-maximisation approach to blind separation and blind deconvolution, Neural Computation 7 (1995), no. 6, 1004-1034.
[5] N. Brunel and J.-P. Nadal, Mutual Information, Fisher Information and Population Coding, Neural Computation 10 (1998), 1731-1757.
[6] T. Jaakkola and M. Jordan, Improving the mean field approximation via the use of mixture distributions, Proceedings of the NATO ASI on Learning in Graphical Models, Kluwer, 1997.
[7] R. Linsker, Deriving Receptive Fields Using an Optimal Encoding Criterion, Advances in Neural Information Processing Systems (S. Hanson, J. Cowan, and L. Giles, eds.), vol. 5, Morgan-Kaufmann, 1993.
[8] S. Mika, B. Schoelkopf, A.J. Smola, K.-R. Muller, M. Scholz, and G. Ratsch, Kernel PCA and De-Noising in Feature Spaces, Advances in Neural Information Processing Systems 11 (1999).
[9] R. M. Neal and G. E. Hinton, A View of the EM Algorithm That Justifies Incremental, Sparse, and Other Variants, Learning in Graphical Models (M.J. Jordan, ed.), MIT Press, 1998.
[10] D. Saad and M. Opper, Advanced Mean Field Methods: Theory and Practice, MIT Press, 2001.
[11] T. Tanaka, Analysis of Bit Error Probability of Direct-Sequence CDMA Multiuser Demodulators, Advances in Neural Information Processing Systems (T. K. Leen et al., eds.), vol. 13, MIT Press, 2001, pp. 315-321.
[12] K. Torkkola and W. M. Campbell, Mutual Information in Learning Feature Transformations, Proc. 17th International Conf. on Machine Learning (2000).
[13] M. Wainwright, T. Jaakkola, and A. Willsky, A new class of upper bounds on the log partition function, Uncertainty in Artificial Intelligence, 2002.
On the concentration of expectation and
approximate inference in layered networks
XuanLong Nguyen
University of California
Berkeley, CA 94720
[email protected]
Michael I. Jordan
University of California
Berkeley, CA 94720
[email protected]
Abstract
We present an analysis of concentration-of-expectation phenomena in
layered Bayesian networks that use generalized linear models as the local
conditional probabilities. This framework encompasses a wide variety of
probability distributions, including both discrete and continuous random
variables. We utilize ideas from large deviation analysis and the delta
method to devise and evaluate a class of approximate inference algorithms for layered Bayesian networks that have superior asymptotic error
bounds and very fast computation time.
1 Introduction
The methodology of variational inference has developed rapidly in recent years, with increasingly rich classes of approximation being considered (see, e.g., Yedidia, et al., 2001,
Jordan et al., 1998). While such methods are intuitively reasonable and often perform well
in practice, it is unfortunately not possible, except in very special cases, to provide error
bounds for these inference algorithms. Thus the user has little a priori guidance in choosing
an inference algorithm, and little a posteriori reassurance that the approximate marginals
produced by an algorithm are good approximations. The situation is somewhat better for
sampling algorithms, but there the reassurance is only asymptotic.
A line of research initiated by Kearns and Saul (1998) aimed at providing such error bounds
for certain classes of directed graphs. Analyzing the setting of two-layer networks, binary
nodes with large fan-in, noisy-OR or logistic conditional probabilities, and parameters that
scale as O(1/N ), where N are the number of nodes in each layer, they used a simple
large deviation analysis to design an approximate inference algorithm that provided error
bounds. In later work they extended their algorithm to multi-layer networks (Kearns and Saul, 1999). The error bound provided by this approach was $O(\sqrt{\ln N/N})$. Ng and Jordan (2000) pursued this line of work, obtaining an improved error bound of $O(1/N^{(k+1)/2})$
where k is the order of a Taylor expansion employed by their technique. Their approach
was, however, restricted to two-layer graphs.
Layered graphs are problematic for many inference algorithms, including belief propagation and generalized belief propagation algorithms. These algorithms convert directed
graphs to undirected graphs by moralization, which creates infeasibly large cliques when
there are nodes with large fan-in. Thus the work initiated by Kearns and Saul is notable
not only for its ability to provide error bounds, but also because it provides one of the few
practical algorithms for general layered graphs. It is essential to develop algorithms that
scale in this setting?e.g., a recent application at Google studied layered graphs involving
more than a million nodes (Harik and Shazeer, personal communication).
In this paper, we design and analyze approximate inference algorithms for general multi-layered Bayesian networks with generalized linear models as the local conditional probability distributions. Generalized linear models include noisy-OR and logistic functions
in the binary case, but go significantly further, allowing random variables from any distribution in the exponential family. We show that in such layered graphical models, the
concentration of expectations of any fixed number of nodes propagates from one layer to
another according to a topological sort of the nodes. This concentration phenomenon can
be exploited to devise efficient approximate inference algorithms that provide error bounds.
Specifically, in a multi-layer network with N nodes in each layer and random variables in
some exponential family of distributions, our algorithm has an $O(((\ln N)^3/N)^{(k+1)/2})$ error bound and $O(N^k)$ time complexity. We perform a large number of simulations to confirm this error bound and compare with Kearns and Saul's algorithm, which has not been
empirically evaluated before.
The paper is organized as follows. In Section 2, we study the concentration of expectation
in generalized linear models. Section 3 introduces the use of delta method for approximating the expectations. Section 4 describes an approximate inference algorithm in a general
directed graphical model, which is evaluated empirically in Section 5. Finally, Section 6
concludes the paper.
2 Generalized linear models
Consider a generalized linear model (GLIM; see McCullagh and Nelder, 1983, for details) consisting of N covariates (inputs) X1 , . . . , XN and a response (output) variable Y .
A GLIM makes three assumptions regarding the form of the conditional probability distribution P (Y |X): (1) The inputs X1 , . . . , XN enter the model via a linear combination
$\xi = \sum_{i=1}^N \theta_i X_i$; (2) the conditional mean $\mu$ is represented as a function $f(\xi)$, known as the response function; and (3) the output Y is characterized by an exponential family distribution (cf. Brown, 1986) with natural parameter $\eta$ and conditional mean $\mu$. The conditional probability takes the following form:
$$P_{\eta,\phi}(Y|X) = h(y,\phi)\exp\left\{\frac{\eta y - A(\eta)}{\phi}\right\}, \qquad (1)$$
where $\phi$ is a scale parameter, h is a function reflecting the underlying measure, and $A(\eta)$ is the log partition function.
In this section, for ease of exposition, we shall assume that the response function f is a
canonical response function, which simply means that $\eta = \xi = \sum_{i=1}^N \theta_i X_i$. As will soon
be clear, however, our analysis is applicable to a general setting in which f is only required
to have bounded derivatives on compact sets.
It is a well-known property of exponential family distributions that
$$E(Y|X) = \mu = A'(\eta) = f(\xi) = f\left(\sum_{i=1}^N \theta_i X_i\right), \qquad \mathrm{Var}(Y|X) = \phi A''(\eta) = \phi f'(\xi).$$
The exponential family includes not only the Bernoulli, multinomial, and Gaussian distributions, but many other useful distributions as well, including the Poisson, gamma and Dirichlet.
We will be studying GLIMs defined on layered graphical models, and thus X1 , . . . , XN are
themselves taken to be random variables in the exponential family. We also make the key
assumption that all parameters obey the bound $|\theta_i| \le \tau/N$ for some constant $\tau$, although this assumption shall be relaxed later on.
Under these assumptions, we can show that the linear combination $\xi = \sum_{i=1}^N \theta_i X_i$ is
tightly concentrated around its mean with very high probability. Kearns and Saul (1998)
have proved this for binary random variables using large deviation analysis. This type
of analysis can be used to prove general results for (bounded and unbounded) random
variables in any standard exponential family.¹
Lemma 1 Assume that $X_1,\ldots,X_N$ are independent random variables in a standard exponential family distribution. Furthermore, $EX_i \in [p_i - \delta_i, p_i + \delta_i]$. Then there are absolute constants C and $\alpha$ such that, for any $\epsilon > \sum_{i=1}^N |\theta_i|\delta_i$:
$$P\left(\Big|\xi - \sum_{i=1}^N \theta_i p_i\Big| > \epsilon\right) \le C\exp\left\{-\frac{\alpha\left(\epsilon - \sum_{i=1}^N |\theta_i|\delta_i\right)^{2/3}}{\left(\sum_{i=1}^N \theta_i^2\right)^{1/3}}\right\} \le C\exp\left\{-\alpha N^{1/3}\tau^{-2/3}\left(\epsilon - \sum_{i=1}^N |\theta_i|\delta_i\right)^{2/3}\right\}$$
We will study architectures that are strictly layered; that is, we require that there are no
edges directly linking the parents of any node. In this setting the parents of each node are
conditionally independent given all ancestor nodes (in the previous layers) in the graph.
This will allow us to use Lemma 1 and iterated conditional expectation formulas to analyze concentration phenomena in these models. The next lemma shows that under certain
assumptions about the response function f , the tight concentration of ? also entails the
concentration of E(Y |X) and Var(Y|X).
Lemma 2 Assume that the means of $X_1,\ldots,X_N$ are bounded within some fixed interval $[p_{\min}, p_{\max}]$ and f has bounded derivatives on compact sets. If $\xi \in [\sum_{i=1}^N \theta_i p_i - \epsilon, \sum_{i=1}^N \theta_i p_i + \epsilon]$ with high probability, then: $E(Y|X) = f(\xi) \in [f(\sum_{i=1}^N \theta_i p_i) - O(\epsilon), f(\sum_{i=1}^N \theta_i p_i) + O(\epsilon)]$, and $\mathrm{Var}(Y|X) = \phi f'(\xi) \in [\phi f'(\sum_{i=1}^N \theta_i p_i) - O(\epsilon), \phi f'(\sum_{i=1}^N \theta_i p_i) + O(\epsilon)]$ with high probability.
Lemmas 1 and 2 provide a mean-field-like basis for propagating the concentration of expectations from the input layer X1 , . . . , XN to the output layer Y . Specifically, if E(Xi )
are approximated by $p_i$ ($i = 1,\ldots,N$), then E(Y) can be approximated by $f(\sum_{i=1}^N \theta_i p_i)$.
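A minimal sketch of this zeroth-order propagation step for a logistic response function (our illustration; theta and p are placeholder values satisfying the parameter bound):

```python
import numpy as np

def propagate_mean(theta, p, f=lambda t: 1.0 / (1.0 + np.exp(-t))):
    """Zeroth-order approximation E(Y) ~= f(sum_i theta_i p_i)  (Lemma 2)."""
    return f(np.dot(theta, p))

N = 100
theta = np.full(N, 2.0 / N)        # |theta_i| <= tau/N with tau = 2
p = np.full(N, 0.3)                # approximate parent means E(X_i)
print(propagate_mean(theta, p))    # ~ f(0.6)
```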
3 Higher order expansion (the delta method)
While Lemmas 1 and 2 already provide a procedure for approximating E(Y ), one can
use higher-order (Taylor) expansion to obtain a significantly more accurate approximation.
This approach, known in the statistics literature as the delta method, has been used in
slightly different contexts for inference problems in the work of Plefka (1982), Barber and
van der Laar (1999), and Ng and Jordan (2000). In our present setting, we will show that
estimates based on Taylor expansion up to order k can be obtained by propagating the
expectation of the product of up to k nodes from one layer to an offspring layer.
The delta method is based on the same assumptions as in Lemma 2; that is, the means
of X1 , . . . , XN are assumed to be bounded within some fixed interval [pmin , pmax ], and
the response function f has bounded derivatives on compact sets. We have $\sum_{i=1}^N \theta_i p_i$ bounded within a fixed interval $[\tau p_{\min}, \tau p_{\max}]$. By Lemma 1, with high probability $\xi = \sum_{i=1}^N \theta_i p_i + \epsilon$, for some small $\epsilon$.

¹The proofs of this and all other theorems can be found in a longer version of this paper, available at www.cs.berkeley.edu/~xuanlong.

Using Taylor's expansion up to second order, we have
that with high probability:
$$E(Y) = E_x E(Y|X) = E_x f(\xi) = \bar{f} + \left(\sum_{i=1}^N \theta_i EX_i - \sum_{i=1}^N \theta_i p_i\right)\bar{f}' + \frac{1}{2!}\sum_{i,j}\theta_i\theta_j E\left[(X_i - p_i)(X_j - p_j)\right]\bar{f}'' + O(\epsilon^3),$$
where $\bar{f}$ and its derivatives are evaluated at $\sum_{i=1}^N \theta_i p_i$. This gives us a method of approximating E(Y) by recursion: assuming that one can approximate all needed expectations of variables in the parent layer X with error $O(\epsilon^3)$, one can also obtain an approximation of E(Y) with error $O(\epsilon^3)$. Clearly, the error can be improved to $O(\epsilon^{k+1})$ by using Taylor expansion to some order k (provided that the response function $f(\eta) = A'(\eta)$ has bounded derivatives up to that order). In this case, the expectation of the product of up to k elements in the input layer, e.g., $E[(X_1 - p_1)\cdots(X_k - p_k)]$, needs to be computed.
The variance of Y (as well as other higher-order expectations) can also be approximated in
the same way:
$$\mathrm{Var}(Y) = E_x(\mathrm{Var}(Y|X)) + \mathrm{Var}_x(E(Y|X)) = \phi E_x f'(\xi) + E_x f(\xi)^2 - (E(Y))^2$$
where each component can be approximated using the delta method.
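The second-order (MF(2)) update for a single output node can be sketched as follows; the code is our illustration of the expansion above with a logistic response, and the parent moments supplied are placeholders:

```python
import numpy as np

def mf2_expectation(theta, p, mean_x, C):
    """Second-order delta-method approximation of E(Y) for a GLIM node:
    f(xi_bar) + (sum_i theta_i (E X_i - p_i)) f' + 0.5 (theta^T C theta) f'',
    with C_ij ~= E[(X_i - p_i)(X_j - p_j)]."""
    xi_bar = theta @ p
    f = 1.0 / (1.0 + np.exp(-xi_bar))      # logistic response
    f1 = f * (1.0 - f)                     # f'
    f2 = f1 * (1.0 - 2.0 * f)              # f''
    first = theta @ (mean_x - p) * f1
    second = 0.5 * (theta @ C @ theta) * f2
    return f + first + second

N = 50
theta = np.full(N, 1.0 / N)
p = np.full(N, 0.4)                        # rough stage-1 means
mean_x = p                                 # refined means (equal here)
C = np.diag(p * (1 - p))                   # e.g. independent Bernoulli parents
print(mf2_expectation(theta, p, mean_x, C))
```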
4 Approximate inference for layered Bayesian networks
In this section, we shall harness the concentration of expectation phenomenon to design
and analyze a family of approximate inference algorithms for multi-layer Bayesian networks that use GLIMs as local conditional probabilities. The recipe is clear by now. First,
organize the graph into layers that respect the topological ordering of the graph. The algorithm is comprised of two stages: (1) Propagate the concentrated conditional expectations
from ancestor layers to offspring layers. This results in a rough approximation of the expectation of individual nodes in the graph; (2) Apply the delta method to obtain a more refined marginal expectation of the needed statistics, also starting from ancestor layers to
offspring layers.
Consider a multi-layer network that has L layers, each of which has N random variables.
We refer to the $i$th variable in layer l by $X_i^l$, where $\{X_i^1\}_{i=1}^N$ is the input layer, and $\{X_i^L\}_{i=1}^N$ is the output layer. The expectations $E(X_i^1)$ of the first layer are given. For each $2 \le l \le L$, let $\theta_{ij}^{l-1}$ denote the parameter linking $X_i^l$ and its parent $X_j^{l-1}$. Define the weighted sum of contributions from parents to a node $X_i^l$: $\xi_i^l = \sum_{j=1}^N \theta_{ij}^{l-1} X_j^{l-1}$, where we assume that $|\theta_{ij}^l| \le \tau/N$ for some constant $\tau$.
We first consider the problem of estimating expectations of nodes in the output layer.
For binary networks, this amounts to estimating marginal probabilities, say, $P[X_1^L = x_1, \ldots, X_m^L = x_m]$, for given observed values $(x_1,\ldots,x_m)$, where $m < N$. We subsequently consider a more general inference problem involving marginal and conditional
probabilities of nodes residing in different layers in the graph.
4.1 Algorithm stage 1: Propagating the concentrated expectation of single nodes
We establish a rough approximation of the expectations of all single nodes of the graph,
starting from the input layer l = 1 to the output layer l = L in an inductive manner. For
$l = 1$, let $\epsilon_i^1 = \delta_i^1 = 0$ and $p_i^1 = EX_i^1$ for all $i = 1,\ldots,N$. For $l > 1$, let
$$\mu_i^l = \sum_{j=1}^N \theta_{ij}^{l-1} p_j^{l-1} \qquad (2)$$
$$\epsilon_i^l = \sum_{j=1}^N |\theta_{ij}^{l-1}|\,\delta_j^{l-1} + \tau\sqrt{(\beta\ln N)^3/N} \qquad (3)$$
$$\gamma_i^l = C\exp\left\{-\alpha N^{1/3}\tau^{-2/3}\left(\epsilon_i^l - \sum_{j=1}^N |\theta_{ij}^{l-1}|\,\delta_j^{l-1}\right)^{2/3}\right\} \qquad (4)$$
$$p_i^l = \frac{1}{2}\left(\sup_{x\in A_i^l} f(x) + \inf_{x\in A_i^l} f(x)\right) \qquad (5)$$
$$\delta_i^l = \frac{1}{2}\left(\sup_{x\in A_i^l} f(x) - \inf_{x\in A_i^l} f(x)\right) \qquad (6)$$
where $A_i^l = [\mu_i^l - \epsilon_i^l,\, \mu_i^l + \epsilon_i^l]$.
In the above updates, constants $\alpha$ and C arise from Lemma 1, and $\beta$ is an arbitrary constant that is greater than $1/\alpha$. The following proposition, whose proof makes use of Lemma 1
combined with union bounds, provides the error bounds for our algorithm.
Proposition 3 With probability at least $\prod_{l=1}^L \left(1 - \sum_{i=1}^N \gamma_i^l\right) = (1 - CN^{1-\beta\alpha})^{L-1}$, for any $1 \le i \le N$, $1 \le l \le L$ we have: $E[X_i^l \,|\, X_1^{l-1},\ldots,X_N^{l-1}] = f(\xi_i^l) \in [p_i^l - \delta_i^l,\, p_i^l + \delta_i^l]$ and $\xi_i^l \in [\mu_i^l - \epsilon_i^l,\, \mu_i^l + \epsilon_i^l]$. Furthermore, $\epsilon_i^l = O(\sqrt{(\ln N)^3/N})$ for all $i, l$.
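The stage-1 recursion (2)-(6) is direct to implement; for the monotone logistic response the sup and inf over $A_i^l$ are attained at the interval endpoints. The sketch below is our illustration (it omits the probability bookkeeping in (4)), with placeholder sizes:

```python
import numpy as np

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

def stage1(thetas, p1, tau, beta=2.0):
    """Propagate (mu, eps, p, delta) through the layers, per (2)-(6).
    thetas[l] has shape (N, N): thetas[l][i, j] links node i in layer l+1
    to parent j in layer l.  f = sigmoid is monotone, so sup/inf over
    A_i^l = [mu - eps, mu + eps] are the endpoint values."""
    N = len(p1)
    conc = tau * np.sqrt((beta * np.log(N)) ** 3 / N)   # concentration width
    p, delta = np.asarray(p1, float), np.zeros(N)
    for th in thetas:
        mu = th @ p                                      # (2)
        eps = np.abs(th) @ delta + conc                  # (3)
        lo, hi = sigmoid(mu - eps), sigmoid(mu + eps)
        p, delta = (hi + lo) / 2, (hi - lo) / 2          # (5), (6)
    return p, delta

rng = np.random.default_rng(5)
N, L, tau = 200, 3, 1.0
thetas = [rng.uniform(-tau / N, tau / N, size=(N, N)) for _ in range(L - 1)]
p, delta = stage1(thetas, p1=np.full(N, 0.5), tau=tau)
print("mean interval half-width at output layer:", delta.mean())
```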
For layered networks with only bounded and Gaussian variables, Lemma 1 can be tightened, and this results in an error bound of $O(\sqrt{(\ln N)^2/N})$. For layered networks with only bounded variables, the error bound can be tightened to $O(\sqrt{\ln N/N})$. In addition, if we drop the condition that all parameters $\theta_{ij}^l$ are bounded by $\tau/N$, Proposition 3 still goes through by replacing $\tau$ by $N\sqrt{\sum_{j=1}^N (\theta_{ij}^{l-1})^2}$ in the updating equations for $\epsilon_i^l$ and $\gamma_i^l$ for all i and l. The asymptotic error bound $O(\sqrt{(\ln N)^3/N})$ no longer holds, but it can be shown that there are absolute constants $c_1$ and $c_2$ such that for all $i, l$:
$$\epsilon_i^l \le \left(c_1\|\epsilon^{l-1}\| + c_2\sqrt{(\ln N)^3}\right)\|\theta_i^{l-1}\|$$
qP
qP
N
N
l?1 2
l
l 2
where ||?il?1 || ?
(?
)
and
||
||
?
j=1 ij
i=1 (i ) .
4.2 Algorithm stage 2: Approximating expectations by recursive delta method
The next step is to apply the delta method presented in Section 3 in a recursive manner. Write:

E[X_1^L · · · X_m^L] = E_{X^{L−1}} E[X_1^L · · · X_m^L | X^{L−1}] = E_{X^{L−1}} ∏_{i=1}^m f(θ_i^L) = E_{X^{L−1}} F(θ_1^L, . . . , θ_m^L)

where F(θ_1^L, . . . , θ_m^L) := ∏_{i=1}^m f(θ_i^L).
Let Δθ_i^l = θ_i^l − θ̂_i^l. So, with probability (1 − C N^{1−αβ})^{L−1} we have |Δθ_i^l| ≤ ε_i^l = O(√((ln N)³/N)) for all l = 1, . . . , L and i = 1, . . . , N. Applying the delta method by expanding F around the vector θ̂ = (θ̂_1^L, . . . , θ̂_m^L) up to order k gives an approximation, which is denoted by MF(k), that depends on expectations of nodes in the previous layer. Continuing this approximation recursively on the previous layers, we obtain an approximate algorithm that has an error bound O(((ln N)³/N)^{(k+1)/2}) (see the derivation in Section 3) with probability at least (1 − C N^{1−αβ})^{L−1}, and an error bound O(1) with the remaining probability. We conclude that,
Theorem 4 The absolute error of the MF(k) approximation is O(((ln N)³/N)^{(k+1)/2}). For networks with bounded variables, the error bound can be tightened to O((ln N/N)^{(k+1)/2}).
It is straightforward to check that MF(k) takes O(N^{max{k,2}}) computational time. The asymptotic error bound O(((ln N)³/N)^{(k+1)/2}) is guaranteed for the approximation of expectations of a fixed number m of nodes in the output layer. In principle, this implies that m has to be small compared to N for the approximation to be useful. For binary networks, for instance, the marginal probabilities of m nodes could be as small as O(1/2^m), so we need O(1/2^m) to be greater than O((ln N/N)^{(k+1)/2}). This implies that m < ln(1/c) + ((k+1)/2)(ln N − ln ln N) for some constant c. However, we shall see that our approximation is still useful for large m as long as the quantity it tries to approximate is not too small.
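As a rough numerical illustration of this constraint (our own arithmetic, taking c = 1 for simplicity):

    import math

    # Largest m for which a joint probability of order 1/2^m can still dominate
    # the MF(k) error bound, i.e. m < (k+1)/2 * (ln N - ln ln N) when c = 1.
    for N in (100, 1000):
        for k in (1, 2, 3):
            m_max = (k + 1) / 2 * (math.log(N) - math.log(math.log(N)))
            print(f"N = {N:4d}, k = {k}: m < {m_max:.1f}")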
For two-layer networks, an algorithm by Ng and Jordan (2000) yields a better error rate of
O(1/N^{(k+1)/2}) by exploiting the Central Limit Theorem. However, this result is restricted to networks with only 2 layers. Barber and Sollich (1999) were also motivated by the Central Limit Theorem's effect to approximate θ_i^l by a multivariate Gaussian distribution, resulting in a similar exploitation of correlation between pairs of nodes in the parent layer as in our MF(2) approximation. Also related to Barber and Sollich's algorithm of using an approximating family of distributions is the assumed-density filtering approach (e.g., Minka,
2001). These approaches, however, do not provide an error bound guarantee.
4.3 Computing conditional expectations of nodes in different layers
For simplicity, in this subsection we shall consider binary layered networks. First, we
are interested in the marginal probability of a fixed number of nodes in different layers.
This can be expressed in terms of a product of conditional probabilities of nodes in the
same layer given values of nodes in the previous layer. As shown in the previous subsection, each of these conditional probabilities can be approximated with an error bound
O((ln N/N )(k+1)/2 ) as N ? ?, and the product can also be approximated with the same
error bound.
Next, we consider approximating the probability of several nodes in the input layer conditioned on some nodes observed in the output layer L, i.e., P(X_1^1 = x_1^1, . . . , X_m^1 = x_m^1 | X_1^L = x_1^L, . . . , X_n^L = x_n^L) for some fixed numbers m and n that are small compared to N. In a multi-layer network, when even one node in the output layer is observed, all nodes in the graph become dependent. Furthermore, the conditional probabilities of all nodes in the graph are generally not concentrated. Nevertheless, we can still approximate the conditional probability by approximating the two marginal probabilities P(X_1^1 = x_1^1, . . . , X_m^1 = x_m^1, X_1^L = x_1^L, . . . , X_n^L = x_n^L) and P(X_1^L = x_1^L, . . . , X_n^L = x_n^L)
separately and taking the ratio. This boils down to the problem of computing the marginal probabilities of nodes residing in different layers of the graph. As discussed in the previous paragraph, since each marginal probability can be approximated with an asymptotic error bound O((ln N/N)^{(k+1)/2}) as N → ∞ (for binary networks), the same asymptotic error bound holds for the conditional probabilities of a fixed number of nodes. In the next section,
we shall present empirical results that show that this approximation is still quite good even
when a large number of nodes are conditioned on.
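Concretely, the ratio construction can be checked on a tiny two-layer noisy-OR network, with brute-force enumeration standing in for the MF(k) marginal approximations (all parameters below are arbitrary toy choices of ours):

    import itertools
    import numpy as np

    rng = np.random.default_rng(2)
    N = 4
    q = rng.uniform(0, 0.5, size=(N, N))         # noisy-OR parameters q[i, j]

    def joint_prob(x1, x2):
        """P(X^1 = x1, X^2 = x2): uniform binary inputs, noisy-OR outputs."""
        p = 0.5 ** N                              # uniform prior over the input layer
        for i in range(N):
            on = 1.0 - np.prod([1 - q[i, j] for j in range(N) if x1[j]])
            p *= on if x2[i] else 1.0 - on
        return p

    x2_obs = (1, 0, 1, 1)                         # observed output-layer values
    # conditional = joint marginal / evidence marginal, each a sum of joint terms
    num = sum(joint_prob((1, 1) + rest, x2_obs)
              for rest in itertools.product((0, 1), repeat=N - 2))
    den = sum(joint_prob(x1, x2_obs) for x1 in itertools.product((0, 1), repeat=N))
    print("P(X_1^1 = 1, X_2^1 = 1 | X^L observed) =", num / den)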
5 Simulation results
In our experiments, we consider a large number of randomly generated multi-layer
Bayesian networks with L = 3, L = 4 or L = 5 layers, and with the number of nodes
in each layer ranging from 10 to 100. The number of parents of each node is chosen
uniformly at random in [2, N ]. We use the noisy-OR function for the local conditional
probabilities; this choice has the advantage that we can obtain exact marginal probabilities for single nodes by exploiting the special structure of the noisy-OR function (Heckerman, 1989).
[Figure 1 here: two pairs of plots of absolute error against N, for τ = 2 (panel a) and τ = 4 (panel b); see the caption below.]
Figure 1: The figures show the average error in the marginal probabilities of nodes in the output layer. The x-axis is the number of nodes in each layer (N = 10, . . . , 100). The three curves (solid, dashed, dash-dot) correspond to the different numbers of layers L = 3, 4, 5, respectively. Plot (a) corresponds to the case τ = 2 and plot (b) corresponds to τ = 4. In each pair of plots, the leftmost plot shows MF(1) and Kearns and Saul's algorithm (K-S) (with the latter being distinguished by black arrows), and the rightmost plot is MF(2). Note that the scale on the y-axis for the rightmost plot is 10^{−3}.
k           1       2       3       4       5       6       7       8
Network 1   0.0001  0.0041  0.0052  0.0085  0.0162  0.0360  0.0738  0.1562
            0.0007  0.0609  0.0912  0.1925  0.1862  0.3885  0.6262  1.6478
Network 2   0.0003  0.0040  0.0148  0.0331  0.0981  0.1629  0.1408  0.1391
            0.0018  0.0508  0.1431  0.3518  0.7605  0.7790  0.7118  0.9435
Network 3   0.0002  0.0031  0.0082  0.0501  0.1095  0.0890  0.0957  0.1022
            0.0008  0.0406  0.1150  0.6858  1.2392  0.6115  0.5703  0.7840

Table 1: The experiments were performed on 24-node networks (3 layers with N = 8 nodes in each layer). For each network, the first line shows the absolute error of our approximation of conditional probabilities of nodes in the input layer given values of the first k nodes in the output layer; the second line shows the absolute error of the log likelihood of the k nodes. The numbers were obtained by averaging over k² random instances of the k nodes.
All parameters θ_ij are uniformly distributed in [0, τ/N], with τ = 2 and τ = 4.
Figure 1 shows the error rates for computing the expectation of a single node in the output
layer of the graph. The results for each N are obtained by averaging over many graphical models with the same value of N . Our approximate algorithm, which is denoted by
MF(2), runs fast: The running time for the largest network (with L = 5, N = 100) is
approximately one minute.
We compare our algorithm (with β fixed to be 2/α) with that of Kearns and Saul (K-S). The MF(1) estimates are slightly worse than those of the K-S algorithm, but they have the same error curve O((ln N/N)^{1/2}). The MF(2) estimates, whose error curves were proven to be O((ln N/N)^{3/2}), are better than both by orders of magnitude. The figure also shows that the error increases when we increase the size of the parameters (increase τ).
Next, we consider the inference problem of computing conditional probabilities of the input layer given that the first k nodes are observed in the output layer. We perform our experiments on several randomly generated three-layer networks with N = 8. This size allows us to compute the conditional probabilities exactly.² For each value of k, we generate k² samples of the observed nodes uniformly at random from the network and then compute the average of the errors of the conditional probability approximations.

² The amount of time spent on exact computation for each network is about 3 days, while our approximation routines take a few minutes.
We observe that while the errors of conditional probabilities are higher than those of marginal
probabilities (see Table 1 and Figure 1), the error remains small despite the relatively large
number of observed nodes k compared to N .
6 Conclusions
We have presented a detailed analysis of concentration-of-expectation phenomena in layered Bayesian networks which use generalized linear models as local conditional probabilities. Our analysis encompasses a wide variety of probability distributions, including both
discrete and continuous random variables. We also performed a large number of simulations in multi-layer network models, showing that our approach not only provides a useful
theoretical analysis of concentration phenomena, but it also provides a fast and accurate
inference algorithm for densely-connected multi-layer graphical models.
In the setting of Bayesian networks in which nodes have large in-degree, there are few viable options for probabilistic inference. Not only are junction tree algorithms infeasible,
but (loopy) belief propagation algorithms are infeasible as well, because of the need to
moralize. The mean-field algorithms that we have presented here are thus worthy of attention as one of the few viable methods for such graphs. As we have shown, the framework
allows us to systematically trade time for accuracy with such algorithms, by accounting for
interactions between neighboring nodes via the delta method.
Acknowledgement. We would like to thank Andrew Ng and Martin Wainwright for very
useful discussions and feedback regarding this work.
References
D. Barber and P. van de Laar, Variational cumulant expansions for intractable distributions. Journal
of Artificial Intelligence Research, 10, 435-455, 1999.
L. Brown, Fundamentals of Statistical Exponential Families with Applications in Statistical Decision
Theory, Institute of Mathematical Statistics, Hayward, CA, 1986.
P. McCullagh and J.A. Nelder, Generalized Linear Models, Chapman and Hall, London, 1983.
T. Minka, Expectation propagation for approximate Bayesian inference, In Proc. UAI, 2001.
D. Heckerman, A tractable inference algorithm for diagnosing multiple diseases, In Proc. UAI, 1989.
M.I. Jordan, Z. Ghahramani, T.S. Jaakkola and L.K. Saul, An introduction to variational methods for
graphical models, In Learning in Graphical Models, Cambridge, MIT Press, 1998.
M.J. Kearns and L.K. Saul, Large deviation methods for approximate probabilistic inference, with
rates of convergence, In Proc. UAI, 1998.
M.J. Kearns and L.K. Saul, Inference in multi-layer networks via large deviation bounds, NIPS 11,
1999.
A.Y. Ng and M.I. Jordan, Approximate inference algorithms for two-layer Baysian networks, NIPS
12, 2000.
D. Barber and P. Sollich, Gaussian fields for approximate inference in layered sigmoid belief networks, NIPS 11, 1999.
T. Plefka, Convergence condition of the TAP equation for the infinite-ranged Ising spin glass model,
J. Phys. A: Math. Gen., 15(6), 1982.
J.S. Yedidia, W.T. Freeman, and Y. Weiss. Generalized belief propagation. NIPS 13, 2001.
A Biologically Plausible Algorithm
for Reinforcement-shaped
Representational Learning
Maneesh Sahani
W.M. Keck Foundation Center for Integrative Neuroscience
University of California, San Francisco, CA 94143-0732
[email protected]
Abstract
Significant plasticity in sensory cortical representations can be driven in
mature animals either by behavioural tasks that pair sensory stimuli with
reinforcement, or by electrophysiological experiments that pair sensory
input with direct stimulation of neuromodulatory nuclei, but usually not
by sensory stimuli presented alone. Biologically motivated theories of
representational learning, however, have tended to focus on unsupervised
mechanisms, which may play a significant role on evolutionary or developmental timescales, but which neglect this essential role of reinforcement in adult plasticity. By contrast, theoretical reinforcement learning
has generally dealt with the acquisition of optimal policies for action in
an uncertain world, rather than with the concurrent shaping of sensory
representations. This paper develops a framework for representational
learning which builds on the relative success of unsupervised generative-modelling accounts of cortical encodings to incorporate the effects of
reinforcement in a biologically plausible way.
1 Introduction
A remarkable feature of the brain is its ability to adapt to, and learn from, experience.
This learning has measurable physiological correlates in terms of changes in the stimulus-response properties of individual neurons in the sensory systems of the brain (as well as in
many other areas). While passive exposure to sensory stimuli can have profound effects on
the developing sensory cortex, significant plasticity in mature animals tends to be observed
only in situations where sensory stimuli are associated with either behavioural or electrical
reinforcement. Considerable theoretical attention has been paid to unsupervised learning of
representations adapted to natural sensory statistics, and to the learning of optimal policies
of action for decision processes; however, relatively little work (particularly of a biological
bent) has sought to understand the impact of reinforcement tasks on representation.
To be complete, understanding of sensory plasticity must come at two different levels. At
a mechanistic level, it is important to understand how synapses are modified, and how
synaptic modifications can lead to observed changes in the response properties of cells.
Numerous experiments and models have addressed these questions of how sensory plasticity occurs. However, a mechanistic description alone neglects the information-processing
aspects of the brain?s function. Measured changes in sensory representation must underlie
an adaptive change in neural information processing. If we can understand the processing
goals of sensory systems, and therefore understand how changes in representation advance
these goals in the face of changing experience, we will have shed light on the question of
why sensory plasticity occurs. This is the goal of the current work.
To approach this goal, we first construct a representational model and associated objective function which together isolate the question of how the reinforcement-related value
of a stimulus is learned (the classic problem of reinforcement learning) from the question of how this value impacts the sensory representation. We show that the objective
function can be optimised by an expectation-maximisation learning procedure, but suggest
that direct optimisation is not biologically plausible, relying as it does on the availability
of an exact posterior distribution over the cortical representation given both stimulus and
reinforcement-value. We therefore develop and validate (through simulation) an alternative
optimisation approach based on the statistical technique of importance sampling.
2 Model
The standard algorithms of reinforcement learning (RL) deal with an agent that receives
rewards or penalties as it interacts with a world of known structure and (generally Markovian) dynamics [1]. The agent passes through a series of 'states', choosing in each one
an action which results (perhaps stochastically) in a payoff and in a transition to another
state. Associated with each state (or state-action pair) and a given policy of action is a
value, which represents the expected payoff that would be received if the policy were to
be followed starting from that initial state (and initial action). Much work in RL has focused on learning the value function. Often the state that the agent occupies at each point
in time is assumed to be directly observable. In other cases, the agent receives only partial
information about the state it occupies, although in almost all studies the basic structure
of the world is assumed to be known. In these partially observable models, then, the state
information (which might be thought of as a form of sensory input) is used to estimate
which one of a known group of states is currently occupied, and so a natural representation
emerges in terms of a belief-distribution over states.
In the general case, however, the state structure of the world, if indeed a division into discrete states makes sense at all, is unknown. Instead, the agent must simultaneously discover
a representation of the sensory inputs suitable for predicting the reinforcement value, and
learn the action-contingent value function itself. This general problem is quite difficult. In
probabilistic terms, solving it exactly would require coping with a complicated joint distribution over representational structures and value functions. However, using an analogy to
the variational inference methods of unsupervised learning [2], we might modularise our
approach by factoring this joint into independent distributions over the sensory representation on the one hand and the value function on the other. In this framework approximate
estimation might proceed iteratively, using the current value function to tune the sensory
representation, and then re-estimating the value function for the revised sensory encoding.
The present work, being concerned with the way in which reinforcement guides sensory
representational learning, focuses exclusively on the first of these two steps. Thus, we take
the value associated with the current sensory input to be given. This value might represent a current estimate generated in the course of the iterative procedure described above.
In many of the reinforcement schedules used in physiological experiments, however, the
value is easily determined. For example, in a classical conditioning paradigm the value is
independent of action, and is given by the sum of the current reinforcement and the discounted average reinforcement received. Our problem, then, is to develop a biologically
plausible algorithm which is able to find a representation of the sensory input which facilitates prediction of the value.
Although our eventual goal clearly fits well in the framework of RL, we find it useful to
start from a standard theoretical account of unsupervised representational learning. The
view we adopt fits well with a Helmholtzian account of perceptual processing, in which the
sensory cortex interprets the activities of receptors so as to infer the state of the external
world that must have given rise to the observed pattern of activation. Perception, by this
account, may be thought of as a form of probabilistic inference in a generative model. The
general structure of such a model involves a set of latent variables or causes whose values
directly reflect the underlying state of the world, along with a parameterisation of effects of
these causes on immediate sensory experience. A generative model of visual sensation, for
example, might contain a hierarchy of latent variables that, at the top, corresponded to the
identities and poses of visual objects or the colour and direction of the illuminating light,
and at lower levels, represented local consequences of these more basic causes, for example
the orientation and contrast of local edges. Taken together, these variables would provide a
causal account for observations that correspond to photoreceptor activation. To apply such
a framework as a model for cortical processing, then, we take the sensory cortical activity
to represent the inferred values of the latent variables.
Thus, perceptual inference in this framework involves estimating the values of the causal
variables that gave rise to the sensory input, while developmental (unsupervised) learning
involves discovering the correct causal structure from sensory experience. Such a treatment
has been used to account for the structure of simple-cell receptive fields in visual cortex
[3, 4], and has been extended to further visual cortical response properties in subsequent
studies. In the present work our goal is to consider how such a model might be affected by
reinforcement. Thus, in addition to the latent causes Li that generate a sensory event Si ,
we consider an associated (possibly action-contingent) value Vi . This value is presumably
more parsimoniously associated with the causes underlying the sensory experience, rather
than with the details of the receptor activation, and so we take the sensory input and the
corresponding value to be conditionally independent given the cortical representation:
Pθ(Si, Vi) = ∫ dLi Pθ(Si | Li) Pθ(Vi | Li) Pθ(Li),   (1)
where θ is a general vector of model parameters. Thus, the variables Si, Li and Vi form a Markov chain. In particular, this means that whatever information Si carries about Vi is expressed (if the model is well fit) in the cortical representation Li, making this structure appropriate for value prediction. The causal variables Li have taken on the rôle of the 'state' in standard RL.
3 Objective function
The natural objective in reinforcement learning is to maximise some form of accumulated
reward. However, the model of (1) is, by itself, descriptive rather than prescriptive. That
is, the parameters modelled (those determining the responses in the sensory cortex, rather
than in associative or motor areas) do not directly control actions or policies of action. Instead, these descriptive parameters only influence the animal?s accumulated reinforcement
through the accuracy of the description they generate. As a result, even though the ultimate objective may be to maximise total reward, we need to use objective functions that
are closer in spirit to the likelihoods common in probabilistic unsupervised learning.
In particular, we consider functions of the form

L(θ) = Σ_i [ φ(Vi) log Pθ(Si) + ψ(Vi) log Pθ(Vi | Si) ]   (2)
In this expression, the two log probabilities reflect the accuracy of stimulus representation,
and of value prediction, respectively. These two terms would appear alone in a straightforward representational model of the joint distribution over sensory stimuli and values. However, in considering a representational subsystem within a reinforcement learning agent,
where the overall goal is to maximise accumulated reward, it seems reasonable that the
demand for representative or predictive fidelity depend on the value associated with the
stimulus; this dependence is reflected here by a value-based weighting of the log probabilities, which we assume will weight the more valuable cases more heavily.
4
Learning
While the objective function (2) does not depend explicitly on the cortical representation variables,
through the marginal likelihoods
R it does depend on their distributions,
R
P? (Si ) = dLi P? (Si , Li ) and P? (Vi | Si ) = dLi P? (Vi , Li | Si ). For all but the
simplest probabilistic models, optimising these integral expressions directly is computationally prohibitive. However, a standard technique called the Expectation-Maximisation
(EM) algorithm can be extended in a straightforward way to facilitate optimisation of functions with the form we consider here.
We introduce 2N unknown probability distributions over the cortical representation,
Q? (Li ) and Q? (Li ). Then, using Jensen?s inequality for convex functions, we obtain a
lower bound on the objective function:
Z
Z
X
Q? (Li )
Q? (Li )
P? (Si , Li ) + ?(Vi ) log
P? (Li , Vi | Si )
L(?) =
?(Vi ) log
Q? (Li )
Q? (Li )
i
X
?
?(Vi ) hlog P? (Si , Li )iQ? (Li ) + H[Q? (Li )]
i
+ ?(Vi ) hlog P? (Li , Vi | Si )iQ? (Li ) + H[Q? (Li )]
= F(?, Q? (Li ), Q? (Li ))
It can be shown that, provided both functions are continuous and differentiable, local maxima of the 'free-energy' F with respect to all of its arguments correspond, in their optimal values of θ, to local maxima of L [5]. Thus, any hill-climbing technique applied to the free-energy functional can be used to find parameters that maximise the objective. In particular, the usual EM approach alternates maximisations (or just steps in the gradient direction) with respect to each of the arguments of F. In our case, this results in the following on-line learning updates made after observing the ith data point:

Q̄(Li) ← Pθ(Li | Si)   (3a)
Q̃(Li) ← Pθ(Li | Vi, Si)   (3b)
θ ← θ + ε ∇θ [ φ(Vi) ⟨log Pθ(Si, Li)⟩_{Q̄(Li)} + ψ(Vi) ⟨log Pθ(Li, Vi | Si)⟩_{Q̃(Li)} ]   (3c)

where the first two equations represent exact maximisations, while the third is a gradient step, with learning rate ε. It will be useful to rewrite (3c) as

θ ← θ + ε [ φ(Vi) ⟨∇θ log Pθ(Si, Li)⟩_{Q̄(Li)} + ψ(Vi) ⟨∇θ log Pθ(Li | Si)⟩_{Q̃(Li)} + ψ(Vi) ⟨∇θ log Pθ(Vi | Li)⟩_{Q̃(Li)} ]   (3c′)

where the conditioning on Si in the final term is not needed due to the Markovian structure of the model.
5 Biologically Plausible Learning
Could something like the updates of (3) underlie the task- or neuromodulator-driven
changes that are seen in sensory cortex? Two out of the three steps seem plausible. In (3a),
the distribution Pθ(Li | Si) represents the animal's beliefs about the latent causes that led to the current sensory experience, and as such is the usual product of perceptual inference. In (3c′), the various log probabilities involved are similarly natural products of perceptual or predictive computations. However, the calculation of the distribution Pθ(Li | Vi, Si) in (3b) is less easily reconciled with biological constraints.
There are two difficulties. First, the sensory input, Si , and the information needed to assess
its associated value, Vi , often arrive at quite different times. However, construction of the
posterior distribution in its full detail requires simultaneous knowledge of both Si and Vi ,
and would therefore only be possible if rich information about the sensory stimulus were to
be preserved until the associated value could be determined. The feasibility of such detailed
persistence of sensory information is unclear. The second difficulty is an architectural one.
The connections from receptor epithelium to sensory areas of cortex are extensive, easily
capable of conveying the information needed to estimate P(L | S). By contrast, the brain
structures that seem to be associated with the evaluation of reinforcement, such as the
ventral tegmental area or nucleus basalis, make only sparse projections to early sensory
cortex; and these projections are frequently modulatory in character, rather than synaptic.
Thus, exact computation of P(Li | Vi ) (a component of the full P(Li | Vi , Si )) seems
difficult to imagine.
It might seem at first that the former of these two problems would also apply to the weight
φ(Vi) (in the first term of (3c′)), in that execution of this portion of the update would also
need to be delayed until this value-dependent weight could be calculated. On closer examination, however, it becomes evident that this difficulty can be avoided. The trick is that in
learning, the weight can be applied to the gradient. Thus, it is sufficient only to remember
the gradient, or indeed the corresponding change in synaptic weights. One possible way to
do this is to actually carry out an update of the weights when just the sensory stimulus is
known, but then consolidate this learning (or not) as indicated by the value-related weight.
Such a consolidation signal might easily be carried by a neuromodulatory projection from
subcortical nuclei involved in the evaluation of reinforcement.
We propose to solve the problem posed by P(L | S, V ) in essentially the same way, that is
by using information about reinforcement-value to guide modulatory reweighting or consolidation of synaptic changes that are initially based on the sensory stimulus alone. Note
that the expectations over P(Li | Si, Vi) that appear in (3c′) could, in principle, be replaced by sums over samples drawn from the distribution. Since learning is gradual and on-line, such a stochastic gradient ascent algorithm would still converge (in probability) to the optimum. Of course, sampling from this distribution is no more compatible with the foregoing biological constraints than integrating over it. However, consider drawing samples L̃i from P(Li | Si), and then weighting the corresponding terms in the sum by w(L̃i) = P(Vi | L̃i)/P(Vi | Si). Then we have, taking the second term in (3c′) for example,
example,
D
?i)
P(Vi | L
? i | Si )
P(L
? i ?P(Li |Si )
P(Vi | Si )
L
Z
E
D
?
? i | Si ) P(Vi , Li | Si ) = ?? log P? (L
? i | Si )
? i ?? log P? (L
.
= dL
? i ?P(Li |Si ,Vi )
P(Vi | Si )
L
E
? i | Si )w(L
?i)
?? log P? (L
Z
=
? i ?? log P? (L
? i | Si )
dL
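This identity is easy to check numerically. The toy script below (our illustration, on a small discrete model with arbitrary probabilities) confirms that weighting samples drawn from P(L|S) by w = P(V|L)/P(V|S) reproduces expectations taken under P(L|S, V).

    import numpy as np

    rng = np.random.default_rng(3)
    nL = 5
    p_L_given_S = rng.dirichlet(np.ones(nL))          # P(L | S) for a fixed S
    p_V_given_L = rng.uniform(0.05, 0.95, size=nL)    # P(V = 1 | L)
    p_V_given_S = p_L_given_S @ p_V_given_L           # P(V = 1 | S)

    g = rng.normal(size=nL)                           # an arbitrary statistic of L

    # exact expectation of g under the posterior P(L | S, V = 1)
    posterior = p_L_given_S * p_V_given_L / p_V_given_S
    exact = posterior @ g

    # importance-weighted Monte Carlo using samples from P(L | S) only
    samples = rng.choice(nL, size=200_000, p=p_L_given_S)
    w = p_V_given_L[samples] / p_V_given_S            # w(L~) = P(V|L~)/P(V|S)
    mc = np.mean(w * g[samples])
    print(f"exact {exact:.4f}  vs  weighted Monte Carlo {mc:.4f}")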
This approach to learning, which exploits the standard statistical technique of importance sampling [6], resolves both of the difficulties discussed above. It implies that
reinforcement-related processing and learning in the sensory systems of the brain proceeds
in these stages:
1. The sensory input is processed to infer beliefs about the latent causes Pθ(Li | Si). One or more samples L̃i are drawn from this distribution.
2. Synaptic weights are updated to follow the gradients ⟨∇θ log Pθ(Si, Li)⟩_{Pθ(Li|Si)} and ∇θ log Pθ(L̃i | Si) (corresponding to the first two terms of (3c′)).
3. The associated value is predicted, both on the basis of the full posterior, giving Pθ(Vi | Si), and on the basis of the sample(s), giving Pθ(Vi | L̃i).
4. The actual value is observed or estimated, facilitating calculation of the weights φ(Vi), ψ(Vi), and w(L̃i).
5. These weights are conveyed to sensory cortex and used to consolidate (or not) the synaptic changes of step 2.
This description does not encompass the updates corresponding to the third term of (3c′).
Such updates could be undertaken once the associated value became apparent; however,
the parameters that represent the explicit dependence of value on the latent variables are
unlikely to lie in the sensory cortex itself (instead determining computations in subsequent
processing).
5.1 Distributional Sampling
A commonly encountered difficulty with importance sampling has to do with the distribution of importance weights wi . If the range of weights is too extensive, the optimisation
will be driven primarily by a few large weights, leading to slow and noisy learning. Fortunately, it is possible to formulate an alternative, in which distributions over the cortical
representational variables, rather than samples of the variables themselves, are randomly
generated and weighted appropriately.1
Let P̃i(L) be a distribution over the latent causes L, drawn randomly from a functional distribution P(P̃i | Si), such that ⟨P̃i(L)⟩_{P(P̃i|Si)} = P(Li | Si). Then, by analogy with the result above, it can be shown that given importance weights

w(P̃i) = [ ∫ dL P(Vi | L) P̃i(L) ] / P(Vi | Si),

we have

⟨ ⟨∇θ log Pθ(L̃i | Si)⟩_{P̃i(L)} w(P̃i) ⟩_{P̃i ∼ P(P̃i|Si)} = ⟨∇θ log Pθ(Li | Si)⟩_{L̃i ∼ P(Li|Si,Vi)}.   (4)
These distributional samples can thus be used in almost exactly the same manner as the
single-valued samples described above.
6 Simulation
A paradigmatic generative model structure is that underlying factor analysis (FA) [7], in
which both latent and observed variables are normally distributed:
Pθ(Li) = N(0, I) ;  Pθ(Si | Li) = N(Λ_S Li, Ψ_S) ;  Pθ(Vi | Li) = N(Λ_V Li, Ψ_V) .   (5)
¹ This sampling scheme can also be formalised as standard importance sampling carried out with a cortical representation re-expressed in terms of the parameters determining the distribution P̃i(L).
[Figure 1 here: three panels plotting relative amplitude against sensory input dimension: (a) generative weights, (b) unweighted learning, (c) weighted learning.]
Figure 1: Generative and learned sensory weights. See text for details.
The parameters of the FA model (grouped here in θ) comprise two linear weight matrices Λ_S and Λ_V and two diagonal noise covariance matrices Ψ_S and Ψ_V. This model is similar in its linear generative structure to the independent components analysis models that have previously been employed in accounts of unsupervised development of visual cortical properties [3, 4]; the only difference is in the assumed distribution of the latent variables. The unit normal assumption of FA introduces a rotational degeneracy in solutions. This can be resolved in general by constraining the weight matrix Λ = [Λ_S, Λ_V] to be orthogonal, giving a version of FA known as principal factor analysis (PFA).
We used a PFA-based simulation to verify that the distributional importance-weighted sampling procedure described here is indeed able to learn the correct model given sensory and
reinforcement-value data. Random vectors representing sensory inputs and associated values were generated according to (5); these were then used as inputs to a learning system.
The objective function optimised had both value-dependent weights φ(Vi) and ψ(Vi) set to
unity; thus the learning system simply attempted to model the joint distribution of sensory
and reinforcement data.
The generative model comprised 11 latent variables, 40 observed sensory variables (which
were arranged linearly so as to represent 40 discrete values along a single sensory axis),
and a single reinforcement variable. Ten of the latent variables only affected the sensory
observations. The weight vectors corresponding to each of these are shown by the solid
lines in figure 1a. These ?tuning curves? were designed to be orthogonal. The curves shown
in figure 1a have been rescaled to have equal maximal amplitude; in fact the amplitudes
were randomly varied so that they formed a unique orthogonal basis for the data. These
features of the generative weight matrix were essential for PFA to be able to recover the
generative model uniquely. The final latent variable affected both reinforcement value and
the sensory input at a single point (indicated by the dashed line in figure 1a). Since the
output noise matrix in PFA can associate arbitrary variance with each sensory variable, a
model fit to only the sensory data would treat the influence of this latent cause as noise.
Only when the joint distribution over both sensory input and reinforcement is modelled
will this aspect of the sensory data be captured in the model parameters.
Learning was carried out by processing data generated by the model described above one sample at a time. The posterior distribution Pθ(Li | Si) for the PFA model is Gaussian, with covariance Σ_L = (I + Λ_S^T Ψ_S^{−1} Λ_S)^{−1} and mean μ_L = Σ_L Λ_S^T Ψ_S^{−1} Si. The distributional samples P̃i were also taken to be Gaussian. Each had covariance 0.6 Σ_L and mean drawn randomly from N(μ_L, 0.4 Σ_L).
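For concreteness, a short sketch of these posterior computations follows (toy dimensions and randomly chosen loading and noise matrices are our stand-ins for the learned parameters):

    import numpy as np

    rng = np.random.default_rng(4)
    d_L, d_S = 11, 40
    Lam_S = rng.normal(size=(d_S, d_L))               # stand-in sensory weights
    Psi_S = np.diag(rng.uniform(0.5, 1.5, size=d_S))  # diagonal sensory noise

    S = rng.normal(size=d_S)                          # one sensory observation
    Psi_inv = np.linalg.inv(Psi_S)
    Sigma_L = np.linalg.inv(np.eye(d_L) + Lam_S.T @ Psi_inv @ Lam_S)
    mu_L = Sigma_L @ Lam_S.T @ Psi_inv @ S            # posterior P(L|S) = N(mu_L, Sigma_L)

    def distributional_sample():
        """One random distribution P~_i: covariance 0.6 Sigma_L, with a mean
        itself drawn from N(mu_L, 0.4 Sigma_L), so that on average the sampled
        distributions reproduce the true posterior N(mu_L, Sigma_L)."""
        m = rng.multivariate_normal(mu_L, 0.4 * Sigma_L)
        return m, 0.6 * Sigma_L

    m, C = distributional_sample()
    print("first entries of a sampled mean:", m[:3])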
Two simulations were performed. In one case learning proceeded according to the sampled distributions P̃i, with no importance weighting. In the other, learning was modulated by the importance weights given by (4). In all other regards the two simulations were identical. In particular, in both cases the reinforcement predictive weights Λ_V were estimated, and in both cases the orthogonality constraint of PFA was applied to the combined estimated weight matrix [Λ_S, Λ_V]. Figure 1b and c shows the sensory weights Λ_S learnt by each
of these procedures (again the curves have been rescaled to show relative weights). Both
algorithms recovered the basic tuning properties; however, only the importance sampling
algorithm was able to model the additional data feature that was linked to the prediction
of reinforcement value. The fact that in all other regards the two learning simulations
were identical demonstrates that the importance weighting procedure (rather than, say, the
orthogonality constraint) was responsible for this difference.
7 Summary
This paper has presented a framework within which the experimentally observed impact of
behavioural reinforcement on sensory plasticity might be understood. This framework rests
on a similar foundation to the recent work that has related unsupervised learning to sensory
response properties. It extends this foundation to consider prediction of the reinforcement
value associated with sensory stimuli. Direct learning by expectation-maximisation within
this framework poses difficulties regarding biological plausibility. However, these were
resolved by the introduction of an importance sampled approach, along with its extension
to distributional sampling. Information about reinforcement is thus carried by a weighting
signal that might be identified with the neuromodulatory signals in the brain.
References
[1] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge,
MA, 1998.
[2] M. I. Jordan, Z. Ghahramani, T. Jaakkola, and L. K. Saul. An introduction to variational methods
for graphical models. Mach. Learning, 37(2):183?233, 1999.
[3] B. A. Olshausen and D. J. Field. Emergence of simple-cell receptive field properties by learning
a sparse code for natural images. Nature, 381(6583):607?9, 1996.
[4] A. J. Bell and T. J. Sejnowski. The ?independent components? of natural scenes are edge filters.
Vision Res., 37(23):3327?3338, 1997.
[5] R. M. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse,
and other variants. In M. I. Jordan, ed., Learning in Graphical Models, pp. 355?370. Kluwer
Academic Press, 1998.
[6] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C: The
Art of Scientific Computing. CUP, Cambridge, 2nd edition, 1993.
[7] B. S. Everitt. An Introduction to Latent Variable Models. Chapman and Hall, London, 1984.
though:1 just:2 stage:1 until:2 hand:1 receives:2 ei:4 reweighting:1 freeenergy:1 perhaps:1 indicated:2 scientific:1 olshausen:1 facilitate:1 effect:3 contain:1 verify:1 former:1 iteratively:1 neal:1 deal:1 conditionally:1 uniquely:1 hill:1 evident:1 complete:1 passive:1 image:1 variational:2 common:1 stimulation:1 functional:2 rl:4 conditioning:2 discussed:1 kluwer:1 significant:3 cambridge:2 cup:1 everitt:1 neuromodulatory:3 tuning:2 similarly:1 had:2 cortex:9 something:1 posterior:4 recent:1 driven:3 inequality:1 success:1 seen:1 captured:1 contingent:2 fortunately:1 additional:1 employed:1 converge:1 paradigm:1 paradigmatic:1 signal:3 dashed:1 full:3 encompass:1 infer:2 adapt:1 calculation:2 plausibility:1 academic:1 bent:1 feasibility:1 impact:3 prediction:5 variant:1 basic:3 optimisation:4 expectation:4 essentially:1 vision:1 represent:5 cell:3 preserved:1 addition:1 addressed:1 appropriately:1 rest:1 pass:1 ascent:1 isolate:1 mature:2 spirit:1 seem:3 jordan:2 constraining:1 concerned:1 fit:4 gave:1 identified:1 interprets:1 regarding:1 motivated:1 expression:2 colour:1 ultimate:1 penalty:1 proceed:1 cause:9 action:11 generally:2 useful:2 detailed:1 modulatory:2 tune:1 ten:1 processed:1 simplest:1 generate:2 neuroscience:1 estimated:3 discrete:2 affected:3 group:1 drawn:4 changing:1 undertaken:1 sum:3 arrive:1 almost:2 reasonable:1 extends:1 architectural:1 decision:1 consolidate:2 bound:1 followed:1 encountered:1 activity:2 adapted:1 constraint:4 orthogonality:2 scene:1 aspect:2 argument:2 relatively:1 developing:1 according:2 alternate:1 em:3 character:1 unity:1 wi:1 parameterisation:1 biologically:6 modification:1 making:1 taken:3 behavioural:3 computationally:1 equation:1 previously:1 mechanism:1 needed:3 mechanistic:2 apply:2 appropriate:1 alternative:2 top:1 graphical:2 neglect:2 exploit:1 giving:3 ghahramani:1 build:1 classical:1 objective:10 question:4 occurs:2 receptive:2 fa:4 dependence:2 usual:2 interacts:1 diagonal:1 unclear:1 evolutionary:1 gradient:6 code:1 rotational:1 difficult:2 hlog:4 rise:2 policy:5 unknown:2 neuron:1 revised:1 observation:2 markov:1 immediate:1 situation:1 payoff:2 extended:2 hinton:1 varied:1 arbitrary:1 inferred:1 pair:3 extensive:2 dli:3 connection:1 california:1 learned:2 adult:1 able:4 proceeds:1 usually:1 pattern:1 perception:1 belief:3 suitable:1 event:1 natural:6 difficulty:6 examination:1 predicting:1 representing:1 scheme:1 numerous:1 axis:1 carried:4 sahani:1 text:1 understanding:1 determining:3 relative:5 subcortical:1 analogy:2 remarkable:1 foundation:3 nucleus:3 illuminating:1 agent:6 conveyed:1 sufficient:1 principle:1 pi:2 neuromodulator:1 course:2 compatible:1 consolidation:2 summary:1 free:1 guide:2 understand:4 saul:1 face:1 taking:1 sparse:3 distributed:1 regard:2 curve:3 calculated:1 cortical:13 world:6 transition:1 rich:1 unweighted:1 sensory:64 dimension:1 made:1 reinforcement:35 san:1 adaptive:1 avoided:1 commonly:1 correlate:1 approximate:1 observable:2 assumed:3 francisco:1 continuous:1 iterative:1 latent:15 why:1 learn:3 nature:1 ca:1 reconciled:1 timescales:1 linearly:1 noise:3 edition:1 facilitating:1 representative:1 slow:1 explicit:1 lie:1 perceptual:4 weighting:5 third:2 jensen:1 physiological:2 dl:3 essential:2 importance:12 execution:1 justifies:1 demand:1 flannery:1 led:1 simply:1 visual:5 expressed:2 partially:1 teukolsky:1 ma:1 goal:7 identity:1 eventual:1 considerable:1 change:9 experimentally:1 determined:2 principal:1 total:1 called:1 attempted:1 photoreceptor:1 modulated:1 ucsf:1 incorporate:1 |
1,555 | 2,413 | A Nonlinear Predictive State Representation
Matthew R. Rudary and Satinder Singh
Computer Science and Engineering
University of Michigan
Ann Arbor, MI 48109
{mrudary,baveja}@umich.edu
Abstract
Predictive state representations (PSRs) use predictions of a set of tests to
represent the state of controlled dynamical systems. One reason why this
representation is exciting as an alternative to partially observable Markov
decision processes (POMDPs) is that PSR models of dynamical systems
may be much more compact than POMDP models. Empirical work on
PSRs to date has focused on linear PSRs, which have not allowed for
compression relative to POMDPs. We introduce a new notion of tests
which allows us to define a new type of PSR that is nonlinear in general
and allows for exponential compression in some deterministic dynamical systems. These new tests, called e-tests, are related to the tests used
by Rivest and Schapire [1] in their work with the diversity representation,
but our PSR avoids some of the pitfalls of their representation, in particular its potential to be exponentially larger than the equivalent POMDP.
1 Introduction
A predictive state representation, or PSR, captures the state of a controlled dynamical system not as a memory of past observations (as do history-window approaches), nor as a distribution over hidden states (as do partially observable Markov decision process or POMDP
approaches), but as predictions for a set of tests that can be done on the system. A test is
a sequence of action-observation pairs and the prediction for a test is the probability of
the test-observations happening if the test-actions are executed. Littman et al. [2] showed
that PSRs are as flexible a representation as POMDPs and are a more powerful representation than fixed-length history-window approaches. PSRs are potentially significant for
two main reasons: 1) they are expressed entirely in terms of observable quantities and this
may allow the development of methods for learning PSR models from observation data
that behave and scale better than do existing methods for learning POMDP models from
observation data, and 2) they may be much more compact than POMDP representations. It
is the latter potential advantage that we focus on in this paper.
All PSRs studied to date have been linear, in the sense that the probability of any sequence
of k observations given a sequence of k actions can be expressed as a linear function of the
predictions of a core set of tests. We introduce a new type of test, the e-test, and present
the first nonlinear PSR that can be applied to a general class of dynamical systems. In
particular, in the first such result for PSRs we show that there exist controlled dynamical
systems whose PSR representation is exponentially smaller than its POMDP representation.
To arrive at this result, we briefly review PSRs, introduce e-tests and an algorithm to generate a core set of e-tests given a POMDP, show that a representation built using e-tests is
a PSR and that it can be exponentially smaller than the equivalent POMDP, and conclude
with example problems and a look at future work in this area.
2 Models of Dynamical Systems
A model of a controlled dynamical system defines a probability distribution over sequences
of observations one would get for any sequence of actions one could execute in the system.
Equivalently, given any history of interaction with the dynamical system so far, a model
defines the distribution over sequences of future observations for all sequences of future
actions. The state of such a model must be a sufficient statistic of the observed history; that
is, it must encode all the information conveyed by the history.
POMDPs [3, 4] and PSRs [2] both model controlled dynamical systems. In POMDPs, a
belief state is used to encode historical information; in PSRs, probabilities of particular
future outcomes are used. Here we describe both models and relate them to one another.
POMDPs A POMDP model is defined by a tuple ⟨S, A, O, T, O, b₀⟩, where S, A, and O are, respectively, sets of (unobservable) hypothetical underlying-system states, actions that can be taken, and observations that may be issued by the system. T is a set of matrices of dimension |S| × |S|, one for each a ∈ A, such that T^a_{ij} is the probability that the next state is j given that the current state is i and action a is taken. O is a set of |S| × |S| diagonal matrices, one for each action-observation pair, such that O^{a,o}_{ii} is the probability of observing o after arriving in state i by taking action a. Finally, b₀ is the initial belief state, a |S| × 1 vector whose ith element is the probability of the system starting in state i.
The belief state at history h is b(S|h) = [prob(1|h) prob(2|h) . . . prob(|S| |h)], where prob(i|h) is the probability of the unobserved state being i at history h. After taking an action a in history h and observing o, the belief state can be updated as follows:

b^T(S|hao) = b^T(S|h) T^a O^{a,o} / ( b^T(S|h) T^a O^{a,o} 1_{|S|} )

where 1_{|S|} is the |S| × 1 vector consisting of all 1's.
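In code, this update is a single vector-matrix product followed by normalisation; the sketch below uses made-up matrices purely for illustration.

    import numpy as np

    def belief_update(b, T_a, O_ao):
        """One POMDP belief update: b(S|hao) proportional to b(S|h) T^a O^{a,o};
        dividing by b T^a O^{a,o} 1 renormalises the belief vector."""
        b_new = b @ T_a @ O_ao
        return b_new / b_new.sum()

    rng = np.random.default_rng(5)
    nS = 4
    T_a = rng.dirichlet(np.ones(nS), size=nS)         # rows of T^a sum to one
    O_ao = np.diag(rng.uniform(0.1, 0.9, size=nS))    # P(o | arrive in state i, a)
    b = np.full(nS, 1.0 / nS)                         # uniform initial belief b_0
    print(belief_update(b, T_a, O_ao))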
PSRs Littman et al. [2] (inspired by the work of Rivest and Schapire [1] and Jaeger [5])
introduced PSRs to represent the state of a controlled dynamical system using predictions
of the outcomes of tests. They define a test t as a sequence of actions and observations
t = a^1 o^1 a^2 o^2 · · · a^k o^k; we shall call this type of test a sequence test, or s-test in short. An s-test succeeds iff, when the sequence of actions a^1 a^2 · · · a^k is executed, the system issues the observation sequence o^1 o^2 · · · o^k. The prediction p(t|h) is the probability that the s-test t succeeds from observed history h (of length n w.l.o.g.); that is

p(t|h) = prob(o_{n+1} = o^1, . . . , o_{n+k} = o^k | h, a_{n+1} = a^1, . . . , a_{n+k} = a^k)   (1)

where a_i and o_i denote the action taken and the observation, respectively, at time i. In the rest of this paper, we will abbreviate expressions like the right-hand side of Equation 1 by prob(o^1 o^2 · · · o^k | h a^1 a^2 · · · a^k).
A set of s-tests Q = {q1 q2 . . . q|Q| } is said to be a core set if it constitutes a PSR, i.e.,
if its vector of predictions, p(Q|h) = [p(q1 |h) p(q2 |h) . . . p(q|Q| |h)], is a sufficient
statistic for any history h. Equivalently, if Q is a core set, then for any s-test t, there exists
a function ft such that p(t|h) = ft (p(Q|h)) for all h. The prediction vector p(Q|h) in
PSR models corresponds to belief state b(S|h) in POMDP models. The PSRs discussed by
Littman et al. [2] are linear PSRs in the sense that for any s-test t, ft is a linear function of
the predictions of the core s-tests; equivalently, the following equation
∀ s-tests t, ∃ a weight vector w_t, s.t. p(t|h) = p^T(Q|h) w_t   (2)
defines what it means for Q to constitute a linear PSR. Upon taking action a in history h
and observing o, the prediction vector can be updated as follows:
    p(q_i|hao) = p(aoq_i|h) / p(ao|h)
               = f_{aoq_i}(p(Q|h)) / f_{ao}(p(Q|h))
               = p^T(Q|h) m_{aoq_i} / ( p^T(Q|h) m_{ao} )      (3)
where the final right-hand side is only valid for linear PSRs. Thus a linear PSR model is
specified by Q and by the weight vectors m_{aoq} in the above equation, for all a ∈ A, o ∈
O, q ∈ Q ∪ ∅ (where ∅ is the null sequence). It is pertinent to ask what sort of dynamical
systems can be modeled by a PSR and how many core tests are required to model a system.
In fact, Littman et al. [2] answered these questions with the following result:
Lemma 1 (Littman et al. [2]) For any dynamical system that can be represented by a finite
POMDP model, there exists a linear PSR model of size (|Q|) no more than the size (|S|) of
the POMDP model.
Littman et al. prove this result by providing an algorithm for constructing a linear PSR
model from a POMDP model. The algorithm they present depends on the insight that
s-tests are differentiated by their outcome vectors. An outcome vector u(t) for an s-test
t = a1 o1 a2 o2 . . . ak ok is a |S| × 1 vector; the ith component of the vector is the
probability of t succeeding given that the system is in the hidden state i, i.e.,
    u(t) = T^{a_1} O^{a_1,o_1} T^{a_2} O^{a_2,o_2} · · · T^{a_k} O^{a_k,o_k} 1_{|S|}.
Consider the matrix U whose rows correspond to
the states in S and whose columns are the outcome vectors for all possible s-tests. Let Q
denote the set of s-tests associated with the maximal set of linearly independent columns
of U; clearly |Q| ≤ |S|. Littman et al. showed that Q is a core set for a linear PSR model
by the following logic. Let U(Q) denote the submatrix consisting of the columns of U
corresponding to the s-tests in Q. Clearly, for any s-test t, u(t) = U(Q)w_t for some vector
of weights w_t. Therefore, p(t|h) = b^T(S|h)u(t) = b^T(S|h)U(Q)w_t = p(Q|h)w_t, which
is exactly the requirement for a linear PSR (cf. Equation 2).
We will reuse the concept of linear independence of outcome vectors with a new type of
test to derive a PSR that is nonlinear in general. This is the first nonlinear PSR that can be
used to represent a general class of problems. In addition, this type of PSR in some cases
requires a number of core tests that is exponentially smaller than the number of states in
the minimal POMDP or the number of core tests in the linear PSR.
3 A new notion of tests
In order to formulate a PSR that requires fewer core tests, we look to a new kind of test:
the end test, or e-test in short. An e-test is defined by a sequence of actions and a single
ending observation. An e-test e = a1 a2 · · · ak ok succeeds if, after the sequence of actions
a1 a2 · · · ak is executed, ok is observed. This type of test is inspired by Rivest and
Schapire's [1] notion of tests in their work on modeling deterministic dynamical systems.
3.1 PSRs with e-tests
Just as Littman et al. considered the problem of constructing s-test-based PSRs from
POMDP models, here we consider how to construct an e-test-based PSR, or EPSR, from
a POMDP model and will derive properties of EPSRs from the resulting construction.
The |S| × 1 outcome vector for an e-test e = a1 a2 . . . ak ok is
    v(e) = T^{a_1} T^{a_2} · · · T^{a_k} O^{a_k,o_k} 1_{|S|}.      (4)
Note that we are using v's to denote outcome vectors for e-tests and u's to denote outcome
vectors for s-tests. Consider the matrix V whose rows correspond to S and whose columns
are the outcome vectors for all possible e-tests. Let QV denote the set of e-tests associated
with a maximal set of linearly independent columns of matrix V; clearly |QV| ≤ |S|. Note
that QV is not uniquely defined; there are many such sets. The hope is that the set QV is
a core set for an EPSR model of the dynamical system represented by the POMDP model.
But before we consider this hope, let us consider how we would find QV given a POMDP
model.

    done ← false; i ← 0; L ← {}
    do until done
        done ← true
        N ← generate all one-action extensions of length-i tests in L
        for each t ∈ N
            if v(t) is linearly independent of V(L) then
                L ← L ∪ {t}; done ← false
        end for
        i ← i + 1
    end do
    QV ← L

Figure 1: Our search algorithm to find a set of core e-tests given the outcome vectors.
We can compute the outcome vector for any e-test from the POMDP parameters using
Equation 4. So we could compute the columns of V one by one and check to see how many
linearly independent columns we find. If we ever find |S| linearly independent columns,
we know we can stop, because we will not find any more. However, if |QV | < |S|, then
how would we know when to stop? In Figure 1, we present a search algorithm that finds
a set QV in polynomial time. Our algorithm is adapted from Littman et al.'s algorithm for
finding core s-tests. The algorithm starts with all e-tests of length one and maintains a set
L of currently known linearly independent e-tests. At each iteration it searches for new
linearly independent e-tests among all one-action extensions of the e-tests it added to L at
the last iteration and stops when an iteration does not add to the set L.
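A minimal NumPy rendering of the search in Figure 1 is sketched below. It assumes a
one-action extension of an e-test is formed by prepending an action, and all names are
illustrative rather than taken from the paper's code:

    import numpy as np

    def find_core_e_tests(T, O, actions, observations, tol=1e-8):
        """Sketch of the Figure 1 search. T[a] is the |S|x|S| transition matrix
        for action a; O[(a, o)] is the diagonal observation matrix for the pair
        (a, o). An e-test is stored as a tuple (a1, ..., ak, ok)."""
        n_states = T[actions[0]].shape[0]
        ones = np.ones(n_states)

        def outcome(e):
            # v(e) = T^{a1} ... T^{ak} O^{ak,ok} 1  (Equation 4)
            *acts, ok = e
            v = O[(acts[-1], ok)] @ ones
            for a in reversed(acts):
                v = T[a] @ v
            return v

        core, vecs = [], []
        frontier = [(a, o) for a in actions for o in observations]  # length-1 tests
        while frontier:
            added = []
            for e in frontier:
                v = outcome(e)
                stacked = np.column_stack(vecs + [v])
                if np.linalg.matrix_rank(stacked, tol=tol) == len(vecs) + 1:
                    core.append(e)
                    vecs.append(v)
                    added.append(e)
            # one-action extensions of the tests added this iteration
            frontier = [(a,) + e for e in added for a in actions]
        return core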
Lemma 2 The search algorithm of Figure 1 computes the set QV in time polynomial in
the size of the POMDP.
Proof Computing the outcome vector for an e-test using Equation 4 is polynomial in the
number of states and the length of the e-test. There cannot be more than |S| e-tests in the
set L maintained by the search algorithm, and only one-action extensions of the
e-tests in L are ever considered. Each extension length considered must add an e-test
else the algorithm stops, and so the maximal length of each e-test in QV is upper bounded
by the number of states. Therefore, our algorithm returns QV in polynomial time.
Note that this algorithm is only practical if the outcome vectors have been found; this only
makes sense if the POMDP model is already known, as outcome vectors map POMDP
states to outcomes. We will address learning these models from observations in future
work [6]. Next we show that the prediction of any e-test can be computed linearly from the
prediction vector for the e-tests in QV .
Lemma 3 For any history h and any e-test e, the prediction p(e|h) is some linear function
of the prediction vector p(QV|h), i.e., p(e|h) = p(QV|h)w_e for some weight vector w_e.
Proof Let V(QV) be the submatrix of V containing the columns corresponding to QV.
By the definition of QV, for any e-test e, v(e) = V(QV)w_e for some weight vector w_e.
Furthermore, for any history h, p(e|h) = b(S|h)v(e) = b(S|h)V(QV)w_e = p(QV|h)w_e.
Note that Lemma 3 does not imply that QV constitutes a PSR, let alone a linear PSR, for
the definition of a PSR requires that the prediction of all s-tests be computable from the
core test predictions. Next we turn to the crucial question: does QV constitute a PSR?
Theorem 1 If V (QV ), defined as above with respect to some POMDP model of a dynamical system, is a square matrix, i.e., the number of e-tests in QV is the number of states |S|
(in that POMDP model), then QV constitutes a linear EPSR for that dynamical system.
Proof For any history h, p^T(QV|h) = b^T(S|h)V(QV). If V(QV) is square then it is
invertible because by construction it has full rank, and hence for any history h, b^T(S|h) =
p^T(QV|h)V^{-1}(QV). For any s-test t = a1 o1 a2 o2 · · · ak ok,
    p(t|h) = b^T(S|h) T^{a_1} O^{a_1,o_1} T^{a_2} O^{a_2,o_2} · · · T^{a_k} O^{a_k,o_k} 1_{|S|}   (by first-principles definition)
           = p^T(QV|h) V^{-1}(QV) T^{a_1} O^{a_1,o_1} T^{a_2} O^{a_2,o_2} · · · T^{a_k} O^{a_k,o_k} 1_{|S|}
           = p^T(QV|h) w_t
for some weight vector w_t. Thus, QV constitutes a linear EPSR as per the definition in
Equation 2.
We note that the product T^{a_1} O^{a_1,o_1} · · · T^{a_k} O^{a_k,o_k} 1_{|S|} appears often in association with an
s-test t = a1 o1 · · · ak ok, and abbreviate the product z(t). We similarly define
z(e) = T^{a_1} T^{a_2} · · · T^{a_k} O^{a_k,o_k} 1_{|S|} for the e-test e = a1 a2 · · · ak ok.
Staying with the linear EPSR case for now, we can define an update function for p(QV|h)
as follows (remembering that V(QV) is invertible for this case):
    p(e_i|hao) = p(aoe_i|h) / p(ao|h)
               = b^T(S|h) T^a O^{a,o} z(e_i) / ( p^T(QV|h) m_{ao} )
               = p^T(QV|h) V^{-1}(QV) z(aoe_i) / ( p^T(QV|h) m_{ao} )
               = p^T(QV|h) m_{aoe_i} / ( p^T(QV|h) m_{ao} )      (5)
where we used the fact that the test ao in the denominator is an e-test. (The form of the
linear EPSR update equation is identical to the form of the update in linear PSRs with
s-tests given in Equation 3.) Thus, a linear EPSR model is defined by QV and the set of
weight vectors m_{aoe} for all a ∈ A, o ∈ O, e ∈ {QV ∪ ∅}, used in Equation 5.
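For concreteness, the linear update in Equations 3 and 5 amounts to a handful of dot
products per step. A sketch follows (the names are ours; stacking the per-test weight
vectors into a matrix is our own packaging, not notation from the paper):

    import numpy as np

    def update_prediction_vector(p, m_ao, M_ao):
        """One step of the linear (E)PSR update: p is p(Q|h), m_ao is the
        weight vector for the one-step test ao, and row i of M_ao is the
        weight vector m_{ao q_i} (or m_{ao e_i}) for the i-th core test."""
        denom = p @ m_ao                 # p^T(Q|h) m_ao
        return (M_ao @ p) / denom        # entry i: p^T(Q|h) m_{ao q_i} / denom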
Now, let us turn to the case when the number of e-tests in QV is less than |S|, i.e., when
V (QV ) is not a square matrix.
Lemma 4 In general, if the number of e-tests in QV is less than |S|, then QV is not
guaranteed to be a linear EPSR.
Proof (Sketch) To prove this lemma, we must only find an example of a dynamical
system that is an EPSR but not a linear EPSR. In Section 4.3 we present a k-bit register as
an example of such a problem. We show in that section that the state space size is 2^k and
the number of s-tests in the core set of a linear s-test-based PSR model is also 2^k, but the
number of e-tests in QV is only k + 1. Note that this means that the rank of the U matrix
for s-tests is 2^k while the rank of the V matrix for e-tests is k + 1. This must mean that QV
is not a linear EPSR. We skip these details for lack of space.
Lemma 4 leaves open the possibility that if |QV | < |S| then QV constitutes a nonlinear
EPSR. We were unable, thus far, to evaluate this possibility for general POMDPs but we
did obtain an interesting and positive answer, presented in the next section, for the class of
deterministic POMDPs.
4 A Nonlinear PSR for Deterministic Dynamical Systems
In deterministic dynamical systems, the predictions of both e-tests and s-tests are binary
and it is this basic fact that allows us to prove the following result.
Theorem 2 For deterministic dynamical systems the set of e-tests QV is always an EPSR
and in general it is a nonlinear EPSR.
Proof For an arbitrary s-test t = a1 o1 a2 o2 · · · ak ok, and some arbitrary history h that is
realizable (i.e., p(h) = 1), and for some vectors w_{a1 o1}, w_{a1 o1 a2 o2}, . . . , w_{a1 o1 a2 o2 ··· ak ok} of
length |QV|, we have
    prob(o1 o2 · · · ok | h a1 a2 · · · ak)
      = prob(o1|h a1) prob(o2|h a1 o1 a2) · · · prob(ok|h a1 o1 a2 o2 · · · a_{k-1} o_{k-1} ak)
      = prob(o1|h a1) prob(o2|h a1 a2) · · · prob(ok|h a1 a2 · · · ak)
      = (p^T(QV|h) w_{a1 o1})(p^T(QV|h) w_{a1 o1 a2 o2}) · · · (p^T(QV|h) w_{a1 o1 ··· ak ok})
      = f_t(p(QV|h))
In going from the second line to the third, we eliminate observations from the conditions by
noting that in a deterministic system, the observation emitted depends only on the sequence
of actions executed. In going from the third line to the fourth, we use the result of Lemma 3
that regardless of the size of QV , the predictions for all e-tests for any history h are linear
functions of p(QV |h). This shows that for deterministic dynamical systems, QV , regardless
of its size, constitutes an EPSR.
Update Function: Since predictions are binary in deterministic EPSRs, p(ao|h) must be 1
if o is observed after taking action a in history h:
    p(e_i|hao) = p(aoe_i|h)/p(ao|h) = p(ae_i|h) = p(QV|h) m_{ae_i}
where the second equality from the left comes about because p(ao|h) = 1 and, because
o must be observed when a is executed, p(aoe_i|h) = p(ae_i|h), and the last equality used
the fact that ae_i is just some other e-test and so from Lemma 3 must be a linear function
of p(QV |h). It is rather interesting that even though the EPSR formed through QV is
nonlinear (as seen in Theorem 2), the update function is in fact linear.
4.1 Diversity and e-tests
Rivest and Schapire's [1] diversity representation, the inspiration for e-tests, applies only
to deterministic systems and can be explained using the binary outcome matrix V defined
at the beginning of Section 3.1. Diversity also uses the predictions of a set of e-tests as its
representation of state; it uses as many e-tests as there are distinct columns in the matrix
V. Clearly, at most there can be 2^{|S|} distinct columns and they show that there have to
be at least log_2(|S|) distinct columns and that these bounds are tight. Thus the size of
the diversity representation can be exponentially smaller or exponentially bigger than the
size of a POMDP representation. While we use the same notion of tests as the diversity
representation, our use of linear independence of outcome vectors instead of equivalence
classes based on equality of outcome vectors allows us to use e-tests on stochastic systems.
Next we show through an example that EPSR models in deterministic dynamic systems
can lead to exponential compression over POMDP models in some cases while avoiding
the exponential blowup possible in Rivest and Schapire's [1] diversity representation.

4.2 EPSRs can be Exponentially Smaller than Diversity
This first example shows a case in which the size of the EPSR representation is
exponentially smaller than the size of the diversity representation. The hit register (see
Figure 2a) is a k-bit register (these are the value bits) with an additional special hit bit.
There are 2^k + 1 states in the POMDP describing this system: one state in which the hit
bit is 1 and 2^k states in which the hit bit is 0 and the value bits take on different
combinations of values.

Figure 2: The two example systems. a) The k-bit hit register. There are k value bits and the
special hit bit. The value of the hit bit determines the observation and k + 2 actions alter
the value of the bits; this is fully specified in Section 4.2. b) The k-bit rotate register. The
value of the leftmost bit is observed; this bit can be flipped, and the register can be rotated
either to the right or to the left. This is described in greater detail in Section 4.3.

There are k + 2 actions: a flip action F_i for each value bit i that inverts bit i if the
hit bit is not set, a set action Sh that sets the hit bit if all the value bits are 0, and a clear
action Ch that clears the hit bit. There are two observations: Oh if the hit bit is set and
Om otherwise. Rivest and Schapire [1] present a similar problem (their version has no Ch
action). The diversity representation requires O(2^{2^k}) equivalence classes and thus tests,
whereas an EPSR has only 2k + 1 core e-tests (see Table 1 for the core e-tests and update
function when k = 2).
Table 1: Core e-tests and update functions for the 2-bit hit register problem.

                                   update function for action
test     | F1           | F2           | Sh                              | Ch
---------+--------------+--------------+---------------------------------+----------------------
F1Oh     | p(F1Oh)      | p(F1Oh)      | p(ShOh)                         | 0
ShOh     | p(F1ShOh)    | p(F2ShOh)    | p(ShOh)                         | p(ShOh)
F1ShOh   | p(ShOh)      | p(F2F1ShOh)  | p(ShOh) - p(F1Oh) + p(F1ShOh)   | p(F1ShOh) - p(F1Oh)
F2ShOh   | p(F2F1ShOh)  | p(ShOh)      | p(ShOh) - p(F1Oh) + p(F2ShOh)   | p(F2ShOh) - p(F1Oh)
F2F1ShOh | p(F2ShOh)    | p(F1ShOh)    | p(ShOh) - p(F1Oh) + p(F2F1ShOh) | p(F2F1ShOh) - p(F1Oh)
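The entries of Table 1 can be checked directly from the register semantics. Below is a
small sketch (the helper name is ours) writing out the five core e-test predictions at a
2-bit hit register state; it relies on our own observation that, under the dynamics above,
the hit bit can only be set while both value bits are 0:

    def hit_register_predictions(hit, bits):
        """Predictions of the five core e-tests at a state of the 2-bit hit
        register; hit is the hit bit and bits = [b1, b2] are the value bits."""
        return {
            "F1Oh":     int(hit),                       # F1 never changes the hit bit
            "ShOh":     int(hit or bits == [0, 0]),     # Sh sets hit iff value bits are 00
            "F1ShOh":   int(hit or bits == [1, 0]),
            "F2ShOh":   int(hit or bits == [0, 1]),
            "F2F1ShOh": int(hit or bits == [1, 1]),
        }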
Lemma 5 For deterministic dynamical systems, the size of the EPSR representation is
always upper-bounded by the minimum of the size of the diversity representation and the
size of the POMDP representation.
Proof The size of the EPSR representation, |QV|, is upper-bounded by |S| by construction
of QV. The number of e-tests used by the diversity representation is the number of distinct
columns in the binary V matrix of Section 3.1, while the number of e-tests used by the
EPSR representation is the number of linearly independent columns in V. Clearly the latter
is upper-bounded by the former. As a quick example, consider the case of 2-bit binary
vectors: There are 4 distinct vectors but only 2 linearly independent ones.
4.3 EPSRs can be Exponentially Smaller than POMDPs and the Original PSRs
This second example shows a case in which the EPSR representation uses exponentially
fewer tests than the number of states in the POMDP representation as well as the original
linear PSR representation. The rotate register illustrated in Figure 2b is a k-bit shift-register.
Table 2: Core e-tests and update function for the 4-bit rotate register problem.

                           update function for action
test   | R                          | L                          | F
-------+----------------------------+----------------------------+---------
FO1    | p(FO1) + p(FFO1) - p(RO1)  | p(FO1) + p(FFO1) - p(LO1)  | p(FFO1)
RO1    | p(RRO1)                    | p(FFO1)                    | p(RO1)
LO1    | p(FFO1)                    | p(RRO1)                    | p(LO1)
FFO1   | p(RO1)                     | p(LO1)                     | p(FO1)
RRO1   | p(LO1)                     | p(RO1)                     | p(RRO1)
There are two observations: O1 is observed if the leftmost bit is 1 and O0 is observed when
the leftmost bit is 0. The three actions are R, which shifts the register one place to the
right with wraparound, L, which does the opposite, and F, which flips the leftmost bit.
This problem is also presented by Rivest and Schapire as an example of a system whose
diversity is exponentially smaller than the number of states in the minimal POMDP, which
is 2^k. This is also the number of core s-tests in the equivalent linear PSR (we computed
these 2^k s-tests but do not report them here). The diversity is 2k. However, the EPSR that
models this system has only k + 1 core e-tests. The tests and update function for the 4-bit
rotate register are shown in Table 2.
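The same kind of spot-check works for Table 2. The sketch below (helper names are
ours) encodes the 4-bit rotate register dynamics and verifies one table entry: after action
R, the new prediction of RO1 equals the old prediction of RRO1.

    def rotate_predictions(bits):
        """Core e-test predictions for the 4-bit rotate register;
        bits = [b1, b2, b3, b4], with b1 the observed leftmost bit."""
        b1, b2, b3, b4 = bits
        return {"FO1": 1 - b1, "RO1": b4, "LO1": b2, "FFO1": b1, "RRO1": b3}

    def apply_action(bits, a):
        b1, b2, b3, b4 = bits
        if a == "R":                       # rotate right with wraparound
            return [b4, b1, b2, b3]
        if a == "L":                       # rotate left
            return [b2, b3, b4, b1]
        return [1 - b1, b2, b3, b4]        # F flips the leftmost bit

    bits = [1, 0, 1, 0]
    assert rotate_predictions(apply_action(bits, "R"))["RO1"] == \
           rotate_predictions(bits)["RRO1"]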
5 Conclusions and Future Work
In this paper we have used a new type of test, the e-test, to specify a nonlinear PSR for
deterministic controlled dynamical systems. This is the first nonlinear PSR for any general
class of systems. We proved that in some deterministic systems our new PSR models are
exponentially smaller than both the original PSR models as well as POMDP models. Similarly, compared to the size of Rivest & Schapire's diversity representation (the inspiration
for the notion of e-tests) we proved that our PSR models are never bigger but sometimes
exponentially smaller. This work has primarily been an attempt to understand the representational implications of using e-tests; as future work, we will explore the computational
implications of switching to e-tests.
Acknowledgments
Matt Rudary and Satinder Singh were supported by a grant from the Intel Research Council.
References
[1] Ronald L. Rivest and Robert E. Schapire. Diversity-based inference of finite automata. Journal
of the ACM, 41(3):555-589, May 1994.
[2] Michael L. Littman, Richard S. Sutton, and Satinder Singh. Predictive representations of state.
In Advances In Neural Information Processing Systems 14, 2001.
[3] William S. Lovejoy. A survey of algorithmic methods for partially observed Markov decision
processes. Annals of Operations Research, 28(1):47-65, 1991.
[4] Michael L. Littman. Algorithms for Sequential Decision Making. PhD thesis, Brown University,
1996.
[5] Herbert Jaeger. Observable operator models for discrete stochastic time series. Neural Computation, 12(6):1371-1398, 2000.
[6] Satinder Singh, Michael L. Littman, Nicholas E. Jong, David Pardoe, and Peter Stone. Learning
predictive state representations. In The Twentieth International Conference on Machine Learning
(ICML-2003), 2003. To appear.
A classification-based cocktail-party processor
Nicoleta Roman, DeLiang Wang
Department of Computer and Information
Science and Center for Cognitive Science
The Ohio State University
Columbus, OH 43210, USA
{niki,dwang}@cis.ohio-state.edu
Guy J. Brown
Department of Computer Science
University of Sheffield
211 Portobello Street
Sheffield, S1 4DP, UK
[email protected]
Abstract
At a cocktail party, a listener can selectively attend to a single
voice and filter out other acoustical interferences. How to simulate
this perceptual ability remains a great challenge. This paper
describes a novel supervised learning approach to speech
segregation, in which a target speech signal is separated from
interfering sounds using spatial location cues: interaural time
differences (ITD) and interaural intensity differences (IID).
Motivated by the auditory masking effect, we employ the notion of
an ideal time-frequency binary mask, which selects the target if it
is stronger than the interference in a local time-frequency unit.
Within a narrow frequency band, modifications to the relative
strength of the target source with respect to the interference trigger
systematic changes for estimated ITD and IID. For a given spatial
configuration, this interaction produces characteristic clustering in
the binaural feature space. Consequently, we perform pattern
classification in order to estimate ideal binary masks. A systematic
evaluation in terms of signal-to-noise ratio as well as automatic
speech recognition performance shows that the resulting system
produces masks very close to ideal binary ones. A quantitative
comparison shows that our model yields significant improvement
in performance over an existing approach. Furthermore, under
certain conditions the model produces large speech intelligibility
improvements with normal listeners.
1 Introduction
The perceptual ability to detect, discriminate and recognize one utterance in a
background of acoustic interference has been studied extensively under both
monaural and binaural conditions [1, 2, 3]. The human auditory system is able to
segregate a speech signal from an acoustic mixture using various cues, including
fundamental frequency (F0), onset time and location, in a process that is known as
auditory scene analysis (ASA) [1]. F0 is widely used in computational ASA systems
that operate upon monaural input; however, systems that employ only this cue are
limited to voiced speech [4, 5, 6]. Increased speech intelligibility in binaural
listening compared to the monaural case has prompted research in designing
cocktail-party processors based on spatial cues [7, 8, 9]. Such a system can be
applied to, among other things, enhancing speech recognition in noisy environments
and improving binaural hearing aid design.
In this study, we propose a sound segregation model using binaural cues extracted
from the responses of a KEMAR dummy head that realistically simulates the
filtering process of the head, torso and external ear. A typical approach for signal
reconstruction uses a time-frequency (T-F) mask: T-F units are weighted selectively
in order to enhance the target signal. Here, we employ an ideal binary mask [6],
which selects the T-F units where the signal energy is greater than the noise energy.
The ideal mask notion is motivated by the human auditory masking phenomenon, in
which a stronger signal masks a weaker one in the same critical band. In addition,
from a theoretical ASA perspective, an ideal binary mask gives a performance
ceiling for all binary masks. Moreover, such masks have been recently shown to
provide a highly effective front-end for robust speech recognition [10]. We show for
mixtures of multiple sound sources that there exists a strong correlation between the
relative strength of target and interference and estimated ITD/IID, resulting in a
characteristic clustering across frequency bands. Consequently, we employ a
nonparametric classification method to determine decision regions in the joint ITD-IID feature space that correspond to an optimal estimate for an ideal mask.
Related models for estimating target masks through clustering have been proposed
previously [11, 12]. Notably, the experimental results by Jourjine et al. [12] suggest
that speech signals in a multiple-speaker condition obey to a large extent disjoint
orthogonality in time and frequency. That is, at most one source has a nonzero
energy at a specific time and frequency. Such models, however, assume input
directly from microphone recordings and head-related filtering is not considered.
Simulation of human binaural hearing introduces different constraints as well as
clues to the problem. First, both ITD and IID should be utilized since IID is more
reliable at higher frequencies than ITD. Second, frequency-dependent combinations
of ITD and IID arise naturally for a fixed spatial configuration. Consequently,
channel-dependent training should be performed for each frequency band.
The rest of the paper is organized as follows. The next section contains the
architecture of the model and describes our method for azimuth localization. Section
3 is devoted to ideal binary mask estimation, which constitutes the core of the
model. Section 4 presents the performance of the system and a quantitative
comparison with the Bodden [7] model. Section 5 concludes our paper.
2 Model architecture and azimuth localization
Our model consists of the following stages: 1) a model of the auditory periphery; 2)
frequency-dependent ITD/IID extraction and azimuth localization; 3) estimation of
an ideal binary mask.
The input to our model is a mixture of two or more signals presented at different,
but fixed, locations. Signals are sampled at 44.1 kHz. We follow a standard
procedure for simulating free-field acoustic signals from monaural signals (no
reverberations are modeled). Binaural signals are obtained by filtering the monaural
signals with measured head-related transfer functions (HRTF) from a KEMAR
dummy head [13]. HRTFs introduce a natural combination of ITD and IID into the
signals that is extracted in the subsequent stages of the model.
To simulate the auditory periphery we use a bank of 128 gammatone filters in the
range of 80 Hz to 5 kHz as described in [4]. In addition, the gains of the gammatone
filters are adjusted in order to simulate the middle ear transfer function. In the final
step of the peripheral model, the output of each gammatone filter is half-wave
rectified in order to simulate firing rates of the auditory nerve. Saturation effects are
modeled by taking the square root of the signal.
Current models of azimuth localization almost invariably start with Jeffress's cross-correlation mechanism. For all frequency channels, we use the normalized cross-correlation computed at lags equally distributed in the plausible range from -1 ms to
1 ms using an integration window of 20 ms. Frequency-dependent nonlinear
transformations are used to map the time-delay axis onto the azimuth axis resulting
in a cross-correlogram structure. In addition, a ?skeleton? cross-correlogram is
formed by replacing the peaks in the cross-correlogram with Gaussians of narrower
widths that are inversely proportional to the channel center frequency. This results
in a sharpening effect, similar in principle to lateral inhibition. Assuming fixed
sources, multiple locations are determined as peaks after summating the skeleton
cross-correlogram across frequency and time. The number of sources and their
locations computed here, as well as the target source location, feed to the next stage.
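As a sketch of the correlogram computation, the following function (our own, with a
simple per-window normalized cross-correlation standing in for the full running-window
version used in the model) evaluates integer-sample lags:

    import numpy as np

    def normalized_cross_correlation(left, right, max_lag):
        """Normalized cross-correlation of two channel outputs over one
        window, evaluated at lags from -max_lag to +max_lag samples."""
        n = len(left)
        lags = np.arange(-max_lag, max_lag + 1)
        out = np.empty(len(lags))
        for i, d in enumerate(lags):
            if d >= 0:
                a, b = left[d:], right[:n - d]
            else:
                a, b = left[:n + d], right[-d:]
            out[i] = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
        return lags, out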
3 Binary mask estimation
The objective of this stage of the model is to develop an efficient mechanism for
estimating an ideal binary mask based on observed patterns of extracted ITD and
IID features. Our theoretical analysis for two-source interactions in the case of pure
tones shows relatively smooth changes for ITD and IID with the relative strength R
between the two sources in narrow frequency bands [14]. More specifically, when
the frequencies vary uniformly in a narrow band the derived mean values of
ITD/IID estimates vary monotonically with respect to R.
To capture this relationship in the context of real signals, statistics are collected for
individual spatial configurations during training. We employ a training corpus
consisting of 10 speech utterances from the TIMIT database (see [14] for details). In
the two-source case, we divide the corpus in two equal sets: target and interference.
In the three-source case, we select 4 signals for the target set and 2 interfering sets
of 3 signals each.
For all frequency channels, local estimates of ITD, IID and R are based on 20-ms
time frames with 10 ms overlap between consecutive time frames. In order to
eliminate the multi-peak ambiguity in the cross-correlation function for mid- and
high-frequency channels, we use the following strategy. We compute ITD_i as the
peak location of the cross-correlation in the range 2π/ω_i centered at the target ITD,
where ω_i indicates the center frequency of the ith channel. On the other hand, IID
and R are computed as follows:
    IID_i = 20 log_{10} ( Σ_t r_i^2(t) / Σ_t l_i^2(t) ),
    R_i = Σ_t s_i^2(t) / ( Σ_t s_i^2(t) + Σ_t n_i^2(t) )
where l_i and r_i refer to the left and right peripheral output of the ith channel,
respectively, s_i refers to the output for the target signal, and n_i that for the acoustic
interference. In computing IID_i, we use 20 instead of 10 in order to compensate for
the square root operation in the peripheral model.
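In code, these two per-frame quantities reduce to energy ratios. A sketch follows (the
function name and arguments are ours), assuming the four arrays each hold one 20-ms
frame of rectified gammatone output for a single channel:

    import numpy as np

    def frame_iid_and_r(left, right, target, noise):
        """IID_i (in dB, right over left) and relative strength R_i for one
        time-frequency unit, following the two equations above."""
        iid = 20.0 * np.log10(np.sum(right ** 2) / np.sum(left ** 2))
        r = np.sum(target ** 2) / (np.sum(target ** 2) + np.sum(noise ** 2))
        return iid, r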
Fig. 1 shows empirical results obtained for a two-source configuration on the
training corpus. The data exhibits a systematic shift for both ITD and IID with
respect to the relative strength R. Moreover, the theoretical mean values obtained in
the case of pure tones [14] match the empirical ones very well. This observation
extends to multiple-source scenarios. As an example, Fig. 2 displays histograms that
show the relationship between R and both ITD (Fig. 2A) and IID (Fig. 2B) for a
three-source situation. Note that the interfering sources introduce systematic
deviations for the binaural cues. Consider a worst case: the target is silent and two
interferences have equal energy in a given T-F unit. This results in binaural cues
indicating an auditory event at half of the distance between the two interference
locations; for Fig. 2, it is 0°, the target location. However, the data in Fig. 2 has a
low probability for this case and shows instead a clustering phenomenon, suggesting
that in most cases only one source dominates a T-F unit.
Figure 1. Relationship between ITD/IID and relative strength R for a two-source
configuration: target in the median plane and interference on the right side at 30°. The
solid curve shows the theoretical mean and the dash curve shows the data mean. A: The
scatter plot of ITD and R estimates for a filter channel with center frequency 500 Hz. B:
Results for IID for a filter channel with center frequency 2.5 kHz.
Figure 2. Relationship between ITD/IID and relative strength R for a three-source
configuration: target in the median plane and interference at -30° and 30°. Statistics are
obtained for a channel with center frequency 1.5 kHz. A: Histogram of ITD and R
samples. B: Histogram of IID and R samples. C: Clustering in the ITD-IID space.
By displaying the information in the joint ITD-IID space (Fig. 2C), we observe
location-based clustering of the binaural cues, which is clearly marked by strong
peaks that correspond to distinct active sources. There exists a tradeoff between ITD
and IID across frequencies, where ITD is most salient at low frequencies and IID at
high frequencies [2]. But a fixed cutoff frequency that separates the effective use of
ITD and IID does not exist for different spatial configurations. This motivates our
choice of a joint ITD-IID feature space that optimizes the system performance
across different configurations. Differential training seems necessary for different
channels given that there exist variations of ITD and, especially, IID values for
different center frequencies.
Since the goal is to estimate an ideal binary mask, we focus on detecting decision
regions in the 2-dimensional ITD-IID space for individual frequency channels.
Consequently, supervised learning techniques can be applied. For the ith channel,
we test the following two hypotheses. The first one is H_1: target is dominant, or
R_i > 0.5, and the second one is H_2: interference is dominant, or R_i < 0.5. Based on
the estimates of the bivariate densities p(x|H_1) and p(x|H_2), the classification is
done by the maximum a posteriori decision rule: p(H_1)p(x|H_1) > p(H_2)p(x|H_2).
There exists a plethora of techniques for probability density estimation ranging from
parametric techniques (e.g. mixture of Gaussians) to nonparametric ones (e.g. kernel
density estimators). In order to completely characterize the distribution of the data
we use the kernel density estimation method independently for each frequency
channel. One approach for finding smoothing parameters is the least-squares cross-validation method, which is utilized in our estimation.
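A minimal sketch of this channel-wise MAP classifier using scikit-learn's kernel density
estimator is given below; the fixed bandwidth stands in for the least-squares
cross-validation step, and all names are illustrative:

    import numpy as np
    from sklearn.neighbors import KernelDensity

    def build_channel_classifier(x_target, x_interf, prior_target=0.5, bandwidth=0.1):
        # Fit one bivariate kernel density per hypothesis for this channel;
        # rows of x_target / x_interf are [ITD, IID] training samples.
        kde1 = KernelDensity(bandwidth=bandwidth).fit(x_target)   # p(x|H1)
        kde2 = KernelDensity(bandwidth=bandwidth).fit(x_interf)   # p(x|H2)

        def target_dominant(x):
            # MAP rule p(H1)p(x|H1) > p(H2)p(x|H2), compared in the log domain.
            x = np.atleast_2d(x)
            return (np.log(prior_target) + kde1.score_samples(x) >
                    np.log(1.0 - prior_target) + kde2.score_samples(x))

        return target_dominant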
One cue not employed in our model is the interaural time difference between signal
envelopes (IED). Auditory models generally employ IED in the high-frequency
range where the auditory system becomes gradually insensitive to ITD. We have
compared the performance of the three binaural cues: ITD, IID and IED and have
found no benefit for using IED in our system after incorporating ITD and IID [14].
4 Performance and comparison
The performance of a segregation system can be assessed in different ways,
depending on intended applications. To extensively evaluate our model, we use the
following three criteria: 1) a signal-to-noise (SNR) measure using the original target
as signal; 2) ASR rates using our model as a front-end; and 3) human speech
intelligibility tests.
To conduct the SNR evaluation a segregated signal is reconstructed from a binary
mask using a resynthesis method described in [5]. To quantitatively assess system
performance, we measure the SNR using the original target speech as signal:
    SNR = 10 log_{10} ( Σ_t s_o^2(t) / Σ_t (s_o(t) - s_e(t))^2 )
where s_o(t) represents the resynthesized original speech and s_e(t) the
reconstructed speech from an estimated mask. One can measure the initial SNR by
replacing the denominator with Σ_t s_N^2(t), where s_N(t) is the resynthesized original interference.
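The SNR measure is a one-line computation once the resynthesized signals are available
(a sketch; the array names are ours):

    import numpy as np

    def snr_db(s_o, s_e):
        """SNR of the reconstructed speech s_e against the resynthesized
        original target s_o, following the equation above."""
        return 10.0 * np.log10(np.sum(s_o ** 2) / np.sum((s_o - s_e) ** 2))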
Fig. 3 shows the systematic results for two-source scenarios using the Cooke corpus
[4], which is commonly used in sound separation studies. The corpus has 100
mixtures obtained from 10 speech utterances mixed with 10 types of intrusion. We
compare the SNR gain obtained by our model against that obtained using the ideal
binary mask across different noise types. Excellent results are obtained when the
target is close to the median plane for an azimuth separation as small as 5°.
Performance degrades when the target source is moved to the side of the head, from
an average gain of 13.7 dB for the target in the median plane (Fig. 3A) to 1.7 dB
when the target is at 80° (Fig. 3B). When spatial separation increases the performance
improves even for side targets, to an average gain of 14.5 dB in Fig. 3C. This
performance profile is in qualitative agreement with experimental data [2].
Fig. 4 illustrates the performance in a three-source scenario with target in the
median plane and two interfering sources at -30° and 30°. Here 5 speech signals
from the Cooke corpus form the target set and the other 5 form one interference set.
The second interference set contains the 10 intrusions. The performance degrades
compared to the two-source situation, from an average SNR of about 12 dB to 4.1
dB. However, the average SNR gain obtained is approximately 11.3 dB. This ability
of our model to segregate mixtures of more than two sources differs from blind
source separation with independent component analysis.
In order to draw a quantitative comparison, we have implemented Bodden's
cocktail-party processor using the same 128-channel gammatone filterbank [7]. The
localization stage of this model uses an extended cross-correlation mechanism based
on contralateral inhibition and it adapts to HRTFs. The separation stage of the
model is based on estimation of the weights for a Wiener filter as the ratio between
a desired excitation and an actual one. Although the Bodden model is more flexible
by incorporating aspects of the precedence effect into the localization stage, the
estimation of Wiener filter weights is less robust than our binary estimation of ideal
masks. Shown in Fig. 5, our model shows a considerable improvement over the
Bodden system, producing a 3.5 dB average improvement.
Figure 3. Systematic results for two-source configuration. Black bars correspond to the
SNR of the initial mixture, white bars indicate the SNR obtained using ideal binary
mask, and gray bars show the SNR from our model. Results are obtained for speech
mixed with ten intrusion types (N0: pure tone; N1: white noise; N2: noise burst; N3:
"cocktail party"; N4: rock music; N5: siren; N6: trill telephone; N7: female speech; N8:
male speech; N9: female speech). A: Target at 0°, interference at 5°. B: Target at 80°,
interference at 85°. C: Target at 60°, interference at 90°.
Figure 4. Evaluation for a three-source
configuration: target at 0° and two
interfering sources at -30° and 30°. Black
bars correspond to the SNR of the initial
mixture, white bars to the SNR obtained
using the ideal binary mask, and gray bars
to the SNR from our model.
Figure 5. SNR comparison between the Bodden model (white bars) and our model (gray
bars) for a two-source configuration: target at 0° and interference at 30°. Black bars
correspond to the SNR of the initial mixture.
For the ASR evaluation, we use the missing-data technique as described in [10]. In
this approach, a continuous density hidden Markov model recognizer is modified
such that only acoustic features indicated as reliable in a binary mask are used
during decoding. Hence, it works seamlessly with the output from our speech
segregation system. We have implemented the missing data algorithm with the same
128-channel gammatone filterbank. Feature vectors are obtained using the Hilbert
envelope at the output of the gammatone filter. More specifically, each feature
vector is extracted by smoothing the envelope using an 8-ms first-order filter,
sampling at a frame-rate of 10 ms and finally log-compressing. We use the bounded
marginalization method for classification [10]. The task domain is recognition of
connected digits, and both training and testing are performed on acoustic features
from the left ear signal using the male speaker dataset in the TIDigits database.
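A sketch of this per-channel feature extraction follows (parameter values mirror the text;
the function itself and its name are ours):

    import numpy as np
    from scipy.signal import lfilter

    def envelope_features(channel_output, fs=44100, frame_rate=100, tau=0.008):
        """Smooth a gammatone channel envelope with an 8-ms first-order
        filter, sample at a 10-ms frame rate, and log-compress."""
        a = np.exp(-1.0 / (tau * fs))      # first-order smoothing coefficient
        # y[n] = a*y[n-1] + (1-a)*x[n]; input is already half-wave rectified
        smoothed = lfilter([1.0 - a], [1.0, -a], channel_output)
        hop = fs // frame_rate
        frames = smoothed[::hop]
        return np.log(frames + 1e-12)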
Fig. 6A shows the correctness scores for a two-source condition, where the male
target speaker is located at 0° and the interference is another male speaker at 30°.
The performance of our model is systematically compared against the ideal masks
for four SNR levels: 5 dB, 0 dB, -5 dB and -10 dB. Similarly, Fig. 6B shows the
results for the three-source case with an added female speaker at -30°. The ideal
mask exhibits only slight and gradual degradation in recognition performance with
decreasing SNR and increasing number of sources. Observe that large improvements
over baseline performance are obtained across all conditions. This shows the strong
potential of applying our model to robust speech recognition.
Figure 6. Recognition performance at different SNR values for original mixture (dotted
line), ideal binary mask (dashed line) and estimated mask (solid line). A. Correctness
score for a two-source case. B. Correctness score for a three-source case.
Finally we evaluate our model on speech intelligibility with listeners with normal
hearing. We use the Bamford-Kowal-Bench sentence database that contains short
semantically predictable sentences [15]. The score is evaluated as the percentage of
keywords correctly identified, ignoring minor errors such as tense and plurality. To
eliminate potential location-based priming effects we randomly swap the locations
for target and interference for different trials. In the unprocessed condition, binaural
signals are produced by convolving original signals with the corresponding HRTFs
and the signals are presented to a listener dichotically. In the processed condition,
our algorithm is used to reconstruct the target signal at the better ear and results are
presented diotically.
Figure 7. Keyword intelligibility score for twelve native English speakers (median
values and interquartile ranges) before (white bars) and after processing (black bars). A.
Two-source condition (0° and 5°). B. Three-source condition (0°, 30° and -30°).
Fig. 7A gives the keyword intelligibility score for a two-source configuration. Three
SNR levels are tested: 0 dB, -5 dB and -10 dB, where the SNR is computed at the
better ear. Here the target is a male speaker and the interference is babble noise. Our
algorithm improves the intelligibility score for the tested conditions and the
improvement becomes larger as the SNR decreases (61% at -10 dB). Our informal
observations suggest, as expected, that the intelligibility score improves for
unprocessed mixtures when two sources are more widely separated than 5°. Fig. 7B
shows the results for a three-source configuration, where our model yields a 40%
improvement. Here the interfering sources are one female speaker and another male
speaker, resulting in an initial SNR of -10 dB at the better ear.
5 Conclusion
We have observed systematic deviations of the ITD and IID cues with respect to the
relative strength between target and acoustic interference, and configuration-specific
clustering in the joint ITD-IID feature space. Consequently, supervised learning of
binaural patterns is employed for individual frequency channels and different spatial
configurations to estimate an ideal binary mask that cancels acoustic energy in T-F
units where interference is stronger. Evaluation using both SNR and ASR measures
shows that the system estimates ideal binary masks very well. A comparison shows
a significant improvement in performance over the Bodden model. Moreover, our
model produces substantial speech intelligibility improvements for two and three
source conditions.
Acknowledgments
This research was supported in part by an NSF grant (IIS-0081058) and an AFOSR
grant (F49620-01-1-0027). A preliminary version of this work was presented in
2002 ICASSP.
References
[1] A. S. Bregman, Auditory Scene Analysis, Cambridge, MA: MIT press, 1990.
[2] J. Blauert, Spatial Hearing - The Psychophysics of Human Sound Localization,
Cambridge, MA: MIT press, 1997.
[3] A. Bronkhorst, "The cocktail party phenomenon: a review of research on speech
intelligibility in multiple-talker conditions," Acustica, vol. 86, pp. 117-128, 2000.
[4] M. P. Cooke, Modeling Auditory Processing and Organization, Cambridge, U.K.:
Cambridge University Press, 1993.
[5] G. J. Brown and M. P. Cooke, "Computational auditory scene analysis," Computer
Speech and Language, vol. 8, pp. 297-336, 1994.
[6] G. Hu and D. L. Wang, "Monaural speech separation," Proc. NIPS, 2002.
[7] M. Bodden, "Modeling human sound-source localization and the cocktail-party-effect,"
Acta Acustica, vol. 1, pp. 43-55, 1993.
[8] C. Liu et al., "A two-microphone dual delay-line approach for extraction of a speech
sound in the presence of multiple interferers," J. Acoust. Soc. Am., vol. 110, pp. 3218-3230, 2001.
[9] T. Whittkop and V. Hohmann, "Strategy-selective noise reduction for binaural digital
hearing aids," Speech Comm., vol. 39, pp. 111-138, 2003.
[10] M. P. Cooke, P. Green, L. Josifovski and A. Vizinho, "Robust automatic speech
recognition with missing and unreliable acoustic data," Speech Comm., vol. 34, pp. 267-285, 2001.
[11] H. Glotin, F. Berthommier and E. Tessier, "A CASA-labelling model using the
localisation cue for robust cocktail-party speech recognition," Proc. EUROSPEECH, pp.
2351-2354, 1999.
[12] A. Jourjine, S. Rickard and O. Yilmaz, "Blind separation of disjoint orthogonal signals:
demixing N sources from 2 mixtures," Proc. ICASSP, 2000.
[13] W. G. Gardner and K. D. Martin, "HRTF measurements of a KEMAR dummy-head
microphone," MIT Media Lab Technical Report #280, 1994.
[14] N. Roman, D. L. Wang and G. J. Brown, "Speech segregation based on sound
localization," J. Acoust. Soc. Am., vol. 114, pp. 2236-2252, 2003.
[15] J. Bench and J. Bamford, Speech Hearing Tests and the Spoken Language of Hearing-Impaired Children, London: Academic Press, 1979.
A Summating, Exponentially-Decaying CMOS Synapse for Spiking Neural Systems
Rock Z. Shi (1,2) and Timothy Horiuchi (1,2,3)
1 Electrical and Computer Engineering Department
2 Institute for Systems Research
3 Neuroscience and Cognitive Science Program
University of Maryland, College Park, MD 20742
[email protected], [email protected]
Abstract
Synapses are a critical element of biologically-realistic, spike-based neural computation, serving the role of communication, computation, and
modification. Many different circuit implementations of synapse function exist with different computational goals in mind. In this paper we
describe a new CMOS synapse design that separately controls quiescent
leak current, synaptic gain, and time-constant of decay. This circuit implements part of a commonly-used kinetic model of synaptic conductance. We show a theoretical analysis and experimental data for prototypes fabricated in a commercially-available 1.5 μm CMOS process.
1 Introduction
Synapses are a critical element in spike-based neural computation. There are perhaps as
many different synapse circuit designs in use as there are brain areas being modeled. This
diversity of circuits reflects the diversity of the synapse?s computational function. In many
computations, a narrow, square pulse of current is all that is necessary to model the synaptic
current. In other situations, a longer post-synaptic current profile is desirable to extend the
effects of extremely short spike durations (e.g., in address-event systems [1],[2], [3], [4]),
or to create a specific time window of interaction (e.g., for coincidence detection or for
creating delays [5]).
Temporal summation or more complex forms of inter-spike interaction are also important
areas of synaptic design that focus on the response to high-frequency stimulation. Recent
designs for fast-synaptic depression [6], [7], [8] and time-dependent plasticity [9], [10] are
good examples of this where some type of memory is used to create interaction between
incoming spikes. Even simple summation of input current can be very important in addressevent systems where a common strategy to reduce hardware is to have a single synapse
circuit mimic inputs from many different cells. A very popular design for this purpose
is the ?current-mirror synapse? [4] that is used extensively in its original form or in new
extended forms [6], [8] to expand the time course of current and to provide summation for
high-frequency spiking. This circuit is simple, compact, and stable, but couples the leak,
part of the synaptic gain, and the decay ?time-constant? in one control parameter. This is
restrictive and often more control is desirable. Alternatively, the same components can be
arranged to give the user manual-control of the decay to produce a true exponential decay
when operating in the subthreshold region (see Figure 7 (b) of [11]). This circuit, however,
does not provide good summation of multiple synaptic events.
In this paper we describe a new CMOS synapse circuit, that utilizes current-mode feedback to produce a first-order dynamical system. In the following sections, we describe the
kinetic model of synaptic conductance, describe the circuit implementation and function,
provide a theoretical analysis and finally compare our theory against testing results. We
also discuss the use of this circuit in various neuromorphic system contexts and conclude
with a discussion of the circuit synthesis approach.
2 Proposed synapse model
We consider a network of spiking neurons, each of which is modeled by the integrate-and-fire model or the slightly more general Spike Response Model (e.g. [12]). Synaptic function in such neural networks is often modeled as a time-varying current. The functional form of this current could be a δ function, or a limited jump at the time of the spike followed by an exponential decay. Perhaps the most widely used function in detailed computational models is the α-function, of the form $(t/\tau)e^{-t/\tau}$, introduced by [13].
A more general and practical framework is the neurotransmitter kinetics description proposed by Destexhe et al. [14]. This approach can synthesize a complete description of
synaptic transmission, as well as give an analytic expression for a post-synaptic current in
some simplified schemes. For a two-state ligand-gated channel model, the neurotransmitter
molecules, T, are taken to bind to post-synaptic receptors modeled by the first order kinetic
scheme [15]:
$$R + T \;\underset{\beta}{\overset{\alpha}{\rightleftharpoons}}\; TR^{*} \qquad (1)$$
where R and TR* are the unbound and bound forms of the post-synaptic receptor, respectively, and α and β are the forward and backward rate constants for transmitter binding. In
this model, the fraction of bound receptors, r, is described by the equation:
$$\frac{dr}{dt} = \alpha [T](1 - r) - \beta r \qquad (2)$$
If the transmitter concentration [T] can be modeled as a short pulse, then r(t) in (2) is a first
order linear differential equation.
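Since (2) is linear when [T] is a square pulse, its solution can be written down directly. The sketch below evaluates that closed form in Python; the rate constants, concentration, and pulse timing are illustrative values, not ones taken from the paper.

```python
import numpy as np

def bound_receptor_fraction(t, t_on, t_off, T_conc, alpha, beta, r0=0.0):
    """Closed-form solution of dr/dt = alpha*[T]*(1 - r) - beta*r, eq. (2),
    for a square transmitter pulse [T] = T_conc between t_on and t_off."""
    r = np.empty_like(t, dtype=float)
    tau_r = 1.0 / (alpha * T_conc + beta)              # time constant during the pulse
    r_inf = alpha * T_conc / (alpha * T_conc + beta)   # steady state during the pulse
    before = t < t_on
    during = (t >= t_on) & (t < t_off)
    after = t >= t_off
    r[before] = r0
    r[during] = r_inf + (r0 - r_inf) * np.exp(-(t[during] - t_on) / tau_r)
    r_end = r_inf + (r0 - r_inf) * np.exp(-(t_off - t_on) / tau_r)
    r[after] = r_end * np.exp(-beta * (t[after] - t_off))  # pure decay, time constant 1/beta
    return r

t = np.linspace(0.0, 0.02, 2000)  # seconds; all parameters below are assumptions
r = bound_receptor_fraction(t, t_on=0.001, t_off=0.002,
                            T_conc=1.0, alpha=2000.0, beta=200.0)
```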
We propose a synapse model that can be implemented by a CMOS circuit working in the
subthreshold region. Our model matches Destexhe et al.'s equations for the time-dependent
conductance, although we assume a fixed driving potential. In our synapse model, the
action potential is modeled as a narrow digital pulse. The pulse width is assumed to be a
fixed value $t_{pw}$; however, in practice $t_{pw}$ may vary slightly from pulse to pulse.
Figure 1 illustrates the synaptic current response to a single pulse in such a model:
1. A presynaptic spike occurs at $t_j$; during the pulse, the post-synaptic current is
modeled by:
$$i_{syn}(t) = i_{syn}(\infty) + \left(i_{syn}(t_j) - i_{syn}(\infty)\right) e^{-(t - t_j)/\tau_r} \qquad (3)$$
2. After the presynaptic pulse terminates at time $t_j + t_{pw}$, the post-synaptic current
is modeled by:
$$i_{syn}(t) = i_{syn}(t_j + t_{pw})\, e^{-(t - t_j - t_{pw})/\tau_d} \qquad (4)$$
Figure 1: Synapse model. The action potential (spike) is modeled as a pulse of width $t_{pw}$. The synapse is modeled as a first-order linear system with synaptic current response described by Equations (3) and (4).
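A direct transcription of this piecewise model is straightforward; the sketch below follows eqs. (3)-(4) for a single pulse, with $i_{syn}(\infty)$, $\tau_r$ and $\tau_d$ left as placeholders to be supplied.

```python
import numpy as np

def isyn_single_pulse(t, t_j, t_pw, i_inf, tau_r, tau_d, i0=0.0):
    """Synaptic current for one presynaptic pulse, per eqs. (3)-(4):
    exponential approach to i_inf with time constant tau_r while the
    pulse is on, exponential decay with tau_d afterwards."""
    i = np.full_like(t, i0, dtype=float)
    on = (t >= t_j) & (t < t_j + t_pw)
    off = t >= t_j + t_pw
    i[on] = i_inf + (i0 - i_inf) * np.exp(-(t[on] - t_j) / tau_r)   # eq. (3)
    i_end = i_inf + (i0 - i_inf) * np.exp(-t_pw / tau_r)
    i[off] = i_end * np.exp(-(t[off] - t_j - t_pw) / tau_d)         # eq. (4)
    return i
```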
3 CMOS circuit synthesis and analysis
3.1 The synthesis approach
Lazzaro [11] presents a very simple, compact synapse circuit that has an exponentially-decaying synaptic current after each spike event. The synaptic current always resets to the
maximum current value during the spike and is not suitable for the summation of rapid
bursts of spikes. Another simple and widely used synapse is the current-mirror synapse
that has its own set of practical problems related to the coupling of gain, time constant, and
offset parameters. Our circuit is synthesized from the clean exponential decay from Lazzaro?s synapse and concepts from log domain filtering [16], [17] to convert the nonlinear
characteristic of the current mirror synapse into an externally-linear, time-invariant system
[18].
[Figure 2 schematic: transistors M1-M8 and capacitor C, with supply Vdd, spike input spkIn, bias inputs Vτ and Vw, internal nodes v and vc, and output current isyn.]
Figure 2: The proposed synapse circuit. The pin "spkIn" receives the spike input with negative logic. The pin "isyn" is the synaptic current output. There are two control parameters: the input voltage Vw adjusts the weight of the synapse and the input voltage Vτ sets the time constant. The transistor sizes are: S1 = 2.4 µm/1.6 µm, S2 = 8 µm/4 µm, S3 = 10 µm/4 µm × 4, S4 = 4 µm/4 µm, S5 = 4 µm/4 µm, S6 = 4 µm/4 µm, S7 = 4 µm/4 µm, S8 = 10 µm/4 µm × 20. The bodies of NMOS transistors are connected to ground, and the bodies of PMOS transistors are connected to Vdd except for M3.
3.2 Basic circuit description
The synapse circuit consists of eight transistors and one capacitor as shown in Figure 2. All
transistors are operated in the subthreshold region. Input voltage spikes are applied through
an inverter (not shown), onto the gate of the PMOS M1. Vτ sets the current through M7
that determines the time constant of the output synaptic current as will be shown later. Vw
controls the magnitude of the synaptic current, so it determines the synaptic weight. The
voltage on the capacitor is converted to a current by transistor M6 , sent through the current
mirror M4-M5, and into the source follower M3-M4. The drain current of M8, a scaled copy of the current through M6, produces an inhibitory current. A simple PMOS transistor with
the same gate voltage as M5 can provide an excitatory synaptic current.
3.3 Circuit analysis
We perform an analysis of the circuit by studying its response to a single spike. Assuming a
long transistor so that the Early effect can be neglected, the behavior of a NMOS transistor
working in the subthreshold region can be described by [19], [20]
$$i_{ds} = S\, I_{0n}\, e^{\kappa_n v_{gs}/V_T}\, e^{(1-\kappa_n) v_{bs}/V_T} \left(1 - e^{-v_{ds}/V_T}\right) \qquad (5)$$
where $V_T = kT/q$ is the thermal voltage, $I_{0n}$ is a positive constant current when $V_{gs} = V_{bs} = 0$, and $S = W/L$ is the ratio of transistor width to length. $0 < \kappa_n < 1$ is a parameter specific to the technology, and we will assume it is constant in this analysis. We
assume that all transistors are operating in saturation (vds > 4VT ). We also neglect any
parasitic capacitances.
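Eq. (5) is simple to evaluate numerically. The sketch below plugs in the $\kappa_n$ and $I_{0n}$ values reported in Section 4.1; the thermal voltage is assumed to be the room-temperature value.

```python
import numpy as np

V_T = 0.0258        # thermal voltage kT/q at room temperature (assumed), volts
kappa_n = 0.67      # value measured by the authors for this process
I0n = 1.32e-14      # amperes, measured value

def ids_subthreshold(vgs, vbs, vds, S=1.0):
    """Subthreshold NMOS drain current, eq. (5)."""
    return (S * I0n
            * np.exp(kappa_n * vgs / V_T)
            * np.exp((1.0 - kappa_n) * vbs / V_T)
            * (1.0 - np.exp(-vds / V_T)))
```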
The PMOS source follower M3-M4 is used as a level shifter. A detailed discussion of source followers in the subthreshold region can be found in [21]. Combined with the current mirror M4-M5, this sub-circuit implements a logarithmic relationship between i and v (as labeled in Figure 2):
$$v = V_w + \frac{V_T}{\kappa_p} \ln\!\left(\frac{i\, S_4}{I_{0p}\, S_3 S_5}\right) \qquad (6)$$
Consistent with the translinear principle, this logarithmic relationship will make the current
through M2 proportional to $1/i$.
For simplicity, we assume a spike begins at time t=0, and the initial voltage on the capacitor
C is vc (0). The spike ends at time t = tpw . When the spike input is on (0 < t < tpw ), the
dynamics of the circuit for a step input are governed by
$$C \frac{dv_c(t)}{dt} = \frac{S_2 S_3 S_5\, I_{0p}^2}{S_4 S_6\, I_{0n}}\, e^{\kappa_p (V_{dd} - V_w)/V_T}\, e^{-\kappa_n v_c(t)/V_T} - I_\tau \qquad (7)$$

$$I_\tau = S_7 I_{0n}\, e^{\kappa_n V_\tau / V_T} \qquad (8)$$
With the aid of the transformation

$$i_{syn}(t) = S_8 I_{0n}\, e^{\kappa_n v_c(t)/V_T} \qquad (9)$$
Equation (7) can be changed into a linear ordinary differential equation for $i_{syn}(t)$:

$$\frac{d i_{syn}(t)}{dt} + \frac{\kappa_n I_\tau}{C V_T}\, i_{syn}(t) = \frac{S_2 S_3 S_5 S_8\, \kappa_n I_{0p}^2}{S_4 S_6\, C V_T}\, e^{\kappa_p (V_{dd} - V_w)/V_T} \qquad (10)$$

In terms of the general solution expressed in (3), we have

$$\tau = \frac{C V_T}{\kappa_n I_\tau} \qquad (11)$$
$$i_{syn}(0) = S_8 I_{0n}\, e^{\kappa_n v_c(0)/V_T} \qquad (12)$$

$$i_{syn}(\infty) = \frac{S_2 S_3 S_5 S_8\, I_{0p}^2}{S_4 S_6\, I_\tau}\, e^{\kappa_p (V_{dd} - V_w)/V_T} \qquad (13)$$
When the spike input is off ($t > t_{pw}$) and we neglect the leakage current from M2, $i_{syn}(t)$ decays exponentially with the same time constant defined by (11). That is,
$$i_{syn}(t) = i_{syn}(t_{pw})\, e^{-(t - t_{pw})/\tau} \qquad (14)$$

4 Results
4.1 Comparison of theory and measurement
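Putting eqs. (8), (11), (13), (3) and (14) together gives the predicted single-spike response. In the sketch below, the κ and I0 values are the measured ones quoted in this section and the S ratios follow the Figure 2 caption; the capacitor value C, the supply Vdd, and the pulse width are not stated in this excerpt and are assumptions.

```python
import numpy as np

V_T, kappa_n, kappa_p = 0.0258, 0.67, 0.77
I0n, I0p = 1.32e-14, 1.33e-19
S2, S3, S4, S5, S6, S7, S8 = 2.0, 10.0, 1.0, 1.0, 1.0, 1.0, 50.0  # Figure 2 caption
C, Vdd = 1e-12, 5.0                    # assumed: not given in this excerpt

def synapse_spike_response(t, t_pw, V_tau, V_w, isyn0=0.0):
    """Predicted isyn(t) for one spike starting at t = 0."""
    I_tau = S7 * I0n * np.exp(kappa_n * V_tau / V_T)              # eq. (8)
    tau = C * V_T / (kappa_n * I_tau)                             # eq. (11)
    isyn_inf = (S2 * S3 * S5 * S8 * I0p ** 2 / (S4 * S6 * I_tau)
                * np.exp(kappa_p * (Vdd - V_w) / V_T))            # eq. (13)
    i = np.empty_like(t, dtype=float)
    on = t < t_pw
    i[on] = isyn_inf + (isyn0 - isyn_inf) * np.exp(-t[on] / tau)  # eq. (3)
    i_end = isyn_inf + (isyn0 - isyn_inf) * np.exp(-t_pw / tau)
    i[~on] = i_end * np.exp(-(t[~on] - t_pw) / tau)               # eq. (14)
    return i

t = np.linspace(0.0, 4.0, 4000)        # seconds; wide pulse, as in Figure 3
i = synapse_spike_response(t, t_pw=1.5, V_tau=0.0, V_w=3.85)
```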
We have fabricated a chip containing the basic synapse circuit as shown in Figure 2 through MOSIS in a commercially-available 1.5 µm, double-poly fabrication process. In order to compare our theoretical prediction with chip measurement, we first estimate the two transistor parameters κ and I0 by measuring the drain currents from test transistors on the same chip. The current measurements were performed with a Keithley 6517A electrometer. κ and I0 are estimated by fitting Equation (5) (and the corresponding PMOS i-v equation) to multiple (vgs, ids) measurements by linear regression. The two parameters are found to be κn = 0.67, I0n = 1.32 × 10⁻¹⁴ A, κp = 0.77, I0p = 1.33 × 10⁻¹⁹ A. In estimating these two parameters, as well as in computing our model predictions, we estimate
the effective transistor width for the wide transistors (e.g. M8 with m=20).
[Figure 3 panels: vSpkIn(t) (V); vc(t) (V), measured vs. theory; isyn(t) (A), theory vs. measured; all versus time (sec).]
Figure 3: Comparison between model prediction and measurement. To illustrate the detailed time course, we used a large spike pulse width. We set Vτ = 0 and Vw = 3.85 V.
Figure 3 illustrates our test results compared against the model prediction. We used a very
wide pulse to exaggerate the details in the time response. Note that since the time constant is so large, isyn(t) rises almost linearly during the spike. In this case, Vw = 3.85 V.
4.2 Tuning of synaptic strength and time constant
The synaptic time constant is solely determined by the leak current through transistor M7. Control is achieved by tuning the pin Vτ. The synaptic strength is controlled by Vw (which is also coupled with Iτ), as can be seen from (13). In Figure 4, we present our test
results that illustrate how the various time constants and synaptic strengths can be achieved.
[Figure 4 panels (a) and (b): vSpkIn(t) (V), vc(t) (V), and iSyn(t) (A) versus time (msec), for Vτ = 0.150, 0.175, 0.200 V in (a) and Vw = 3.70, 3.75, 3.80 V in (b).]
Figure 4: Changing the time constant τ and the synaptic strength. (a) Keeping Vw = 3.7 V constant, but changing Vτ. (b) Keeping Vτ = 0.175 V, but changing Vw. In both (a) and (b), the spike pulse width is set to 1 msec.
4.3 Spike train response
The exponential rise of the synaptic current during a spike naturally provides the summation and saturation of incoming spikes. Figure 5 illustrates this behavior in response to an
input spike train of fixed duration.
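A minimal forward-Euler sketch of this summation and saturation behavior is given below: the first-order model relaxes toward $i_{syn}(\infty)$ during each pulse and decays to zero otherwise, both with the same τ. The stimulus mimics Figure 5 (1 msec pulses, 15 msec period); the values of τ and $i_{syn}(\infty)$ are assumptions.

```python
import numpy as np

def train_response(t, spike_times, t_pw, tau, isyn_inf):
    """Forward-Euler simulation of the first-order synapse driven by a
    pulse train (eqs. (3) and (14) applied pulse by pulse). Successive
    pulses start from the current value, producing summation/saturation."""
    dt = t[1] - t[0]
    i = np.zeros_like(t, dtype=float)
    for k in range(1, len(t)):
        on = any(ts <= t[k] < ts + t_pw for ts in spike_times)
        target = isyn_inf if on else 0.0
        i[k] = i[k - 1] + dt * (target - i[k - 1]) / tau
    return i

t = np.arange(0.0, 0.25, 1e-5)              # seconds
spikes = np.arange(0.01, 0.20, 0.015)       # 15 msec period, as in Figure 5
i = train_response(t, spikes, t_pw=1e-3, tau=20e-3, isyn_inf=5e-8)  # assumed
```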
5 Discussion
We have proposed a new synapse model and a specific CMOS implementation of the model. In our theoretical analysis, we have ignored all parasitic effects, which can play a significant role in the circuit behavior. For example, as the source follower M3-M4 provides the gate voltage of M2, switching through M1 will affect the circuit behavior due to parasitic capacitance. We emphasize that various circuit implementations can be designed; in particular, an implementation with lower glitch but faster speed would be preferred.
The synaptic model circuit we have described has a single time constant for both its rising
and decaying phases, whereas the time course of biological synapses shows a faster rising
phase, but a much slower decaying phase. The second time constant can, in principle, be
implemented in our circuit by adding a parallel branch to M7 with some switching circuitry.
Biological synapses have been best modeled and fitted by an exponentially-decaying time
course with different time constants for different types of synapse. Our synapse circuit
model captures this important characteristic of the biological synapse, providing an easily
controlled exponential decay and a natural summation and saturation of the synaptic current. By using a simple first order linear model, our synapse circuit model can give the
circuit designer an analytically tractable function for use in large, complex, spiking neural
network system design.

[Figure 5: Response to a spike train. The spike pulse width is set to 1 msec, with period 15 msec; Vw = 3.73 V, Vτ = 131 mV. Panels show vSpkIn(t) (V), vc(t) (V), and iSyn(t) (A) versus time (msec).]

The current-mirror synapse, in spite of its successful application, has been found to be an inconvenient computation unit due to its nonlinearity. Our linear
synapse is achieved, however, with the cost of silicon size. This is especially true when
utilized in an AER system, where the spike can be less than a microsecond. Because our
linearity is achieved by employing the CMOS subthreshold current characteristic, working
with very narrow pulses will mean the use of large transistor widths to get large charging
currents. We have identified a number of modifications that may allow the circuit to operate
at much higher current levels and thus higher speed.
6
Conclusion
We have identified a need for more independent control of the synaptic gain, time course, and leak parameters in CMOS synapses and have demonstrated a prototype circuit that utilizes current-mode feedback to exhibit the same first-order dynamics that are
utilized by Destexhe et al. [14], [15] to describe a kinetic model description of receptorneurotransmitter binding for a more efficient computational description of the synaptic conductance. The specific implementation relies on the subthreshold exponential characteristic
of the MOSFET and thus operates best at these current levels and slower speeds.
Acknowledgments
This work was supported by funding from DARPA (N0001400C0315) and the Air Force
Office of Strategic Research (AFOSR - F496200110415). We thank MOSIS for fabrication
services in support of our neuromorphic analog VLSI course and teaching laboratory.
References
[1] M. Mahowald, An Analog VLSI System for Stereoscopic Vision. Norwell, MA: Kluwer Academic, 1994.
[2] A. Mortara, ?A pulsed communication/computation framework for analog VLSI perceptive systems,? in Neuromorphic Systems Engineering, T. S. Land, Ed. Norwell, MA: Kluwer Academic
Publishers, 1998, pp. 217?228.
[3] S. Deiss, R. Douglas, and A. Whatley, ?A pulse-coded communications infrastructure for neuromorphic systems,? in Pulsed Neural Networks, W. Mass and C. Bishop, Eds. Cambridge,
MA: MIT Press, 1999, pp. 157?178.
[4] K. A. Boahen, ?The retinomorphic approach: adaptive pixel-parallel amplification, filtering,
and quantization,? Journal of Analog Integrated Circuits and Signal Processing, vol. 13, pp.
53?68, 1997.
[5] M. Cheely and T. Horiuchi, ?Analog VLSI models of range-tuned neurons in the bat echolocation system,? EURASIP Journal, Special Issue on Neuromorphic Signal Processing and Implementations (in press), 2003.
[6] C. Rasche and R. H. R. Hahnloser, ?Silicon synaptic depression,? Biol. Cybern., vol. 84, pp.
57?62, 2001.
[7] A. McEwan and A. van Schaik, ?A silicon representation of the Meddis inner hair cell model,?
in Proceedings of the ICSC Symposia on Intelligent Systems & Application (ISA?2000), 2000,
paper 1544-078.
[8] M. Boegerhausen, P. Suter, and S. Liu, ?Modeling short-term synaptic depression in silicon,?
Neural Computation, vol. 15, no. 2, pp. 331?348, Feb 2003.
[9] P. Hafliger, M. Mahowald, and L.Watts, ?A spike based learning neuron in analog VLSI,? in
Advances in Neural Information Processing Systems, M. C. Mozer, M. I. Jordan, and T. Petsche,
Eds. Cambridge, MA: MIT Press, 1997, vol. 9, pp. 692?698.
[10] G. Indiveri, ?Neuromorphic bistable VLSI synapses with spike-timing-dependent plasticity,? in
Advances in Neural Information Processing Systems, M. C. Mozer, M. I. Jordan, and T. Petsche,
Eds. Cambridge, MA: MIT Press, 2002, vol. 15.
[11] J. P. Lazzaro, "Low-power silicon axons, neurons, and synapses," in Silicon Implementations
of Pulse Coded Neural Networks, M. E. Zaghloul, J. L. Meador, and R. W. Newcomb, Eds.
Norwell, MA: Kluwer Academic Publishers, 1994, pp. 153?164.
[12] W. Gerstner, Spiking Neuron Models: Single Neurons, Populations, Plasticity. Cambridge, UK: Cambridge University Press, 2002.
[13] W. Rall, ?Distinguishing theoretical synaptic potentials computed for different soma-dendritic
distributions of synaptic inputs,? J. Neurophys., vol. 30, pp. 1138?1168, 1967.
[14] A. Destexhe, Z. F. Mainen, and T. J. Sejnowski, ?Synthesis of models for excitable membranes,
synaptic transmission and neuromodulation using a common kinetic formalism,? Journal of
Computational Neuroscience, vol. 1, pp. 195?230, 1994.
[15] ??, ?An efficient method for computing synaptic conductances based on a kinetic model of
receptor binding,? Neural Computation, vol. 6, pp. 14?18, 1994.
[16] E. Seevinck, ?Companding current-mode integrator: A new circuit principle for continuous
time monolithic filters,? Electron. Letts., vol. 26, pp. 2046?2047, Nov 1990.
[17] D. R. Frey, ?Exponential state space fitlers: A generic current mode design strategy,? IEEE
Trans. Circuits Syst. I, vol. 43, pp. 34?42, Jan 1996.
[18] Y. Tsividis, ?Externally linear, time-invariant systems and their application to companding signal processors,? IEEE Trans. Circuits Syst. II, vol. 44, pp. 65?85, Feb 1997.
[19] C. Mead, Analog VLSI and Neural Systems.
Reading, MA: Addison-Wesley, 1989.
[20] E. A. Vittoz and J. Fellrath, "CMOS analog integrated circuits based on weak inversion operation," IEEE J. Solid-State Circuits, vol. 12, pp. 224-231, Jun. 1977.
[21] S.-C. Liu, J. Kramer, G. Indiveri, T. Delbruck, and R. Douglas, Analog VLSI: Circuits and Principles. Cambridge, MA: The MIT Press, 2002.
1,558 | 2,416 | Large margin classifiers: convex loss, low noise,
and convergence rates
Peter L. Bartlett, Michael I. Jordan and Jon D. McAuliffe
Division of Computer Science and Department of Statistics
University of California, Berkeley
Berkeley, CA 94720
{bartlett,jordan,jon}@stat.berkeley.edu
Abstract
Many classification algorithms, including the support vector machine,
boosting and logistic regression, can be viewed as minimum contrast
methods that minimize a convex surrogate of the 0-1 loss function. We
characterize the statistical consequences of using such a surrogate by providing a general quantitative relationship between the risk as assessed using the 0-1 loss and the risk as assessed using any nonnegative surrogate
loss function. We show that this relationship gives nontrivial bounds under the weakest possible condition on the loss function?that it satisfy a
pointwise form of Fisher consistency for classification. The relationship
is based on a variational transformation of the loss function that is easy
to compute in many applications. We also present a refined version of
this result in the case of low noise. Finally, we present applications of
our results to the estimation of convergence rates in the general setting of
function classes that are scaled hulls of a finite-dimensional base class.
1 Introduction
Convexity has played an increasingly important role in machine learning in recent years,
echoing its growing prominence throughout applied mathematics (Boyd and Vandenberghe,
2003). In particular, a wide variety of two-class classification methods choose a real-valued
classifier f based on the minimization of a convex surrogate φ(yf(x)) in place of the intractable loss function 1(sign(f(x)) ≠ y). Examples of this tactic include the support
vector machine, AdaBoost, and logistic regression, which are based on the exponential
loss, the hinge loss and the logistic loss, respectively.
What are the statistical consequences of choosing models and estimation procedures so as
to exploit the computational advantages of convexity? In the setting of 0-1 loss, some basic
answers have begun to emerge. In particular, it is possible to demonstrate the Bayes-risk
consistency of methods based on minimizing convex surrogates for 0-1 loss, with appropriate regularization. Lugosi and Vayatis (2003) have provided such a result for any differentiable, monotone, strictly convex loss function φ that satisfies φ(0) = 1. This handles many common cases although it does not handle the SVM. Steinwart (2002) has demonstrated consistency for the SVM as well, where F is a reproducing kernel Hilbert space and φ is continuous. Other results on Bayes-risk consistency have been presented by Jiang (2003),
Zhang (2003), and Mannor et al. (2002).
To carry this agenda further, it is necessary to find general quantitative relationships between the approximation and estimation errors associated with ?, and those associated with
0-1 loss. This point has been emphasized by Zhang (2003), who has presented several examples of such relationships. We simplify and extend Zhang?s results, developing a general
methodology for finding quantitative relationships between the risk associated with ? and
the risk associated with 0-1 loss. In particular, let R(f ) denote the risk based on 0-1 loss and
let R? = inf f R(f ) denote the Bayes risk. Similarly, let us refer to R? (f ) = E?(Y f (X))
as the ??-risk,? and let R?? = inf f R? (f ) denote the ?optimal ?-risk.? We show that, for
all measurable f ,
?(R(f ) ? R? ) ? R? (f ) ? R?? ,
(1)
for a nondecreasing function ? : [0, 1] ? [0, ?), and that no better bound is possible.
Moreover, we present a general variational representation of ? in terms of ?, and show
how this representation allows us to infer various properties of ?.
This result suggests that if ? is well-behaved then minimization of R? (f ) may provide a
reasonable surrogate for minimization of R(f ). Moreover, the result provides a quantitative
way to transfer assessments of statistical error in terms of ?excess ?-risk? R ? (f ) ? R?? into
assessments of error in terms of ?excess risk? R(f ) ? R ? .
Although our principal goal is to understand the implications of convexity in classification, we do not impose a convexity assumption on φ at the outset. Indeed, while conditions such as convexity, continuity, and differentiability of φ are easy to verify and have natural relationships to optimization procedures, it is not immediately obvious how to relate such conditions to their statistical consequences. Thus, in Section 2 we consider the weakest possible condition on φ: that it is "classification-calibrated," which is essentially a pointwise form of Fisher consistency for classification. We show that minimizing φ-risk leads to minimal risk precisely when φ is classification-calibrated.
Building on (1), in Section 3 we study the low noise setting, in which the posterior probability η(X) is not too close to 1/2. We show that in this setting we are able to obtain an improvement in the relationship between excess φ-risk and excess risk.

Section 4 turns to the estimation of convergence rates for empirical φ-risk minimization in the low noise setting. We find that for convex φ satisfying a certain uniform convexity condition, empirical φ-risk minimization yields convergence of misclassification risk to that of the best-performing classifier in F, and the rate of convergence can be strictly faster than the classical parametric rate of n^{−1/2}.
2 Relating excess risk to excess φ-risk
There are three sources of error to be considered in a statistical analysis of classification
problems: the classical estimation error due to finite sample size, the classical approximation error due to the size of the function space F, and an additional source of approximation
error due to the use of a surrogate in place of the 0-1 loss function. It is this last source of
error that is our focus in this section. We give estimates for this error that are valid for any
measurable function. Since the error is defined in terms of the probability distribution, we
work with population expectations in this section.
Fix an input space X and let (X, Y), (X₁, Y₁), ..., (Xₙ, Yₙ) ∈ X × {±1} be i.i.d., with distribution P. Define η : X → [0, 1] as η(x) = P(Y = 1|X = x).

Define the {0, 1}-risk, or just risk, of f as R(f) = P(sign(f(X)) ≠ Y), where sign(α) = 1 for α > 0 and −1 otherwise. Based on the sample Dₙ = ((X₁, Y₁), ..., (Xₙ, Yₙ)), we want to choose a function fₙ with small risk. Define the Bayes risk R* = inf_f R(f), where the infimum is over all measurable f. Then any f satisfying sign(f(X)) = sign(η(X) − 1/2) a.s. on {η(X) ≠ 1/2} has R(f) = R*.
Fix a function φ : ℝ → [0, ∞). Define the φ-risk of f as R_φ(f) = Eφ(Y f(X)). We can view φ as specifying a contrast function that is minimized in determining a discriminant f. Define C_η(α) = ηφ(α) + (1 − η)φ(−α), so that the conditional φ-risk at x ∈ X is

$$E(\phi(Y f(X))\,|\,X = x) = C_{\eta(x)}(f(x)) = \eta(x)\phi(f(x)) + (1 - \eta(x))\phi(-f(x)).$$

As a useful illustration for the definitions that follow, consider a singleton domain X = {x₀}. Minimizing φ-risk corresponds to choosing f(x₀) to minimize C_{η(x₀)}(f(x₀)).
For η ∈ [0, 1], define the optimal conditional φ-risk

$$H(\eta) = \inf_{\alpha \in \mathbb{R}} C_\eta(\alpha) = \inf_{\alpha \in \mathbb{R}} \big( \eta \phi(\alpha) + (1 - \eta)\phi(-\alpha) \big).$$

Then the optimal φ-risk satisfies R_φ* := inf_f R_φ(f) = EH(η(X)), where the infimum is over measurable functions. For η ∈ [0, 1], define

$$H^-(\eta) = \inf_{\alpha:\, \alpha(2\eta - 1) \le 0} C_\eta(\alpha) = \inf_{\alpha:\, \alpha(2\eta - 1) \le 0} \big( \eta \phi(\alpha) + (1 - \eta)\phi(-\alpha) \big).$$

This is the optimal value of the conditional φ-risk, under the constraint that the sign of the argument α disagrees with that of 2η − 1.
We now turn to the basic condition we impose on φ. This condition generalizes the requirement that the minimizer of C_η(α) (if it exists) has the correct sign. This is a minimal condition that can be viewed as a form of Fisher consistency for classification (Lin, 2001).

Definition 1. We say that φ is classification-calibrated if, for any η ≠ 1/2,

$$H^-(\eta) > H(\eta).$$
The following functional transform of the loss function will be useful in our main result.

Definition 2. We define the ψ-transform of a loss function as follows. Given φ : ℝ → [0, ∞), define the function ψ : [0, 1] → [0, ∞) by ψ = ψ̃**, where

$$\tilde{\psi}(\theta) = H^-\!\left(\frac{1 + \theta}{2}\right) - H\!\left(\frac{1 + \theta}{2}\right),$$

and g** : [0, 1] → ℝ is the Fenchel-Legendre biconjugate of g : [0, 1] → ℝ. Equivalently, the epigraph of g** is the closure of the convex hull of the epigraph of g. (Recall that the epigraph of a function g is the set {(x, t) : x ∈ [0, 1], g(x) ≤ t}.)

It is immediate from the definitions that ψ̃ and ψ are nonnegative and that they are also continuous on [0, 1]. We calculate the ψ-transform for exponential loss, logistic loss, quadratic loss and truncated quadratic loss, tabulating the results in Table 1. All of these loss functions can be verified to be classification-calibrated. (The other parameters listed in the table will be referred to later.)
The importance of the ψ-transform is shown by the following theorem.
                       φ(α)                   ψ(θ)                                          L_B        δ(ε)
exponential            e^{−α}                 1 − √(1 − θ²)                                 e^B        e^{−B} ε²/8
logistic               ln(1 + e^{−2α})        ((1+θ)/2) ln(1+θ) + ((1−θ)/2) ln(1−θ)         2          e^{−2B} ε²/4
quadratic              (1 − α)²               θ²                                            2(B + 1)   ε²/4
truncated quadratic    (max{0, 1 − α})²       θ²                                            2(B + 1)   ε²/4

Table 1: Four convex loss functions and the corresponding ψ-transform. On the interval [−B, B], each loss function has the indicated Lipschitz constant L_B and modulus of convexity δ(ε) with respect to d_φ. All have a quadratic modulus of convexity.
Theorem 3.
1. For any nonnegative loss function φ, any measurable f : X → ℝ, and any probability distribution on X × {±1},

$$\psi\big(R(f) - R^*\big) \le R_\phi(f) - R_\phi^*.$$

2. Suppose |X| ≥ 2. For any nonnegative loss function φ, any ε > 0 and any θ ∈ [0, 1], there is a probability distribution on X × {±1} and a function f : X → ℝ such that R(f) − R* = θ and ψ(θ) ≤ R_φ(f) − R_φ* ≤ ψ(θ) + ε.
3. The following conditions are equivalent.
(a) φ is classification-calibrated.
(b) For any sequence (θᵢ) in [0, 1], ψ(θᵢ) → 0 if and only if θᵢ → 0.
(c) For every sequence of measurable functions fᵢ : X → ℝ and every probability distribution on X × {±1}, R_φ(fᵢ) → R_φ* implies R(fᵢ) → R*.
Remark: It can be shown that classification-calibration implies ψ is invertible on [0, 1], in which case it is meaningful to write the upper bound on excess risk as ψ⁻¹(R_φ(f) − R_φ*).

Remark: Zhang (2003) has given a comparison theorem like Part 1, for convex φ that satisfy certain conditions. Lugosi and Vayatis (2003) and Steinwart (2002) have shown limiting results like Part 3c under other conditions on φ. All of these conditions are stronger than the ones we assume here.
The following lemma summarizes various useful properties of H, H⁻ and ψ.

Lemma 4. The functions H, H⁻ and ψ have the following properties, for all η ∈ [0, 1]:
1. H and H⁻ are symmetric about 1/2: H(η) = H(1 − η), H⁻(η) = H⁻(1 − η).
2. H is concave and satisfies H(η) ≤ H(1/2) = H⁻(1/2).
3. If φ is classification-calibrated, then H(η) < H(1/2) for η ≠ 1/2.
4. H⁻ is concave on [0, 1/2] and [1/2, 1], and satisfies H⁻(η) ≥ H(η).
5. H, H⁻ and ψ̃ are continuous on [0, 1].
6. ψ is continuous on [0, 1], ψ is nonnegative and minimal at 0, and ψ(0) = 0.
7. φ is classification-calibrated iff ψ(θ) > 0 for all θ ∈ (0, 1].
Proof. (Of Theorem 3). For Part 1, it is straightforward to show that

$$R(f) - R^* = \mathbb{E}\big( \mathbf{1}[\operatorname{sign}(f(X)) \ne \operatorname{sign}(\eta(X) - 1/2)]\, |2\eta(X) - 1| \big),$$

where 1[Φ] is 1 if the predicate Φ is true and 0 otherwise. From the definition, ψ is convex, so we can apply Jensen's inequality, the fact that ψ(0) = 0 (Lemma 4, part 6), and the fact that ψ(θ) ≤ ψ̃(θ), to show that

ψ(R(f) − R*)
≤ Eψ(1[sign(f(X)) ≠ sign(η(X) − 1/2)] |2η(X) − 1|)
= E(1[sign(f(X)) ≠ sign(η(X) − 1/2)] ψ(|2η(X) − 1|))
≤ E(1[sign(f(X)) ≠ sign(η(X) − 1/2)] ψ̃(|2η(X) − 1|))
= E(1[sign(f(X)) ≠ sign(η(X) − 1/2)] (H⁻(η(X)) − H(η(X))))
= E(1[sign(f(X)) ≠ sign(η(X) − 1/2)] (inf_{α: α(2η(X)−1)≤0} C_{η(X)}(α) − H(η(X))))
≤ E(C_{η(X)}(f(X)) − H(η(X)))
= R_φ(f) − R_φ*,

where the last inequality used the fact that for any x, and in particular when sign(f(x)) = sign(η(x) − 1/2), we have C_{η(x)}(f(x)) ≥ H(η(x)).
For Part 2, the first inequality is from Part 1. For the second, fix ε > 0 and θ ∈ [0, 1]. From the definition of ψ, we can choose γ, θ₁, θ₂ ∈ [0, 1] for which θ = γθ₁ + (1 − γ)θ₂ and ψ(θ) ≥ γψ̃(θ₁) + (1 − γ)ψ̃(θ₂) − ε/2. Choose distinct x₁, x₂ ∈ X, and choose P_X such that P_X{x₁} = γ, P_X{x₂} = 1 − γ, η(x₁) = (1 + θ₁)/2, and η(x₂) = (1 + θ₂)/2. From the definition of H⁻, we can choose f : X → ℝ such that f(x₁) ≤ 0, f(x₂) ≤ 0, C_{η(x₁)}(f(x₁)) ≤ H⁻(η(x₁)) + ε/2 and C_{η(x₂)}(f(x₂)) ≤ H⁻(η(x₂)) + ε/2. Then it is easy to verify that R_φ(f) − R_φ* ≤ γψ̃(θ₁) + (1 − γ)ψ̃(θ₂) + ε/2 ≤ ψ(θ) + ε. Furthermore, since sign(f(xᵢ)) = −1 but η(xᵢ) ≥ 1/2, we have R(f) − R* = E|2η(X) − 1| = θ.
For Part 3, first note that, for any φ, ψ is continuous on [0, 1] and ψ(0) = 0 by Lemma 4, part 6, and hence θᵢ → 0 implies ψ(θᵢ) → 0. Thus, we can replace condition (3b) by

(3b′) For any sequence (θᵢ) in [0, 1], ψ(θᵢ) → 0 implies θᵢ → 0.

To see that (3a) implies (3b′), let φ be classification-calibrated, and let (θᵢ) be a sequence that does not converge to 0. Define c = lim sup θᵢ > 0, and pass to a subsequence with lim θᵢ = c. Then lim ψ(θᵢ) = ψ(c) by continuity, and ψ(c) > 0 by classification-calibration (Lemma 4, part 7). Thus, for the original sequence (θᵢ), we see lim sup ψ(θᵢ) > 0, so we cannot have ψ(θᵢ) → 0.

Part 1 implies that (3b′) implies (3c). The proof that (3c) implies (3a) is straightforward; see Bartlett et al. (2003).
The following observation is easy to verify. It shows that if φ is convex, the classification-calibration condition is easy to verify and the ψ-transform is a little easier to compute.

Lemma 5. Suppose φ is convex. Then we have
1. φ is classification-calibrated if and only if it is differentiable at 0 and φ′(0) < 0.
2. If φ is classification-calibrated, then ψ̃ is convex, hence ψ = ψ̃.

All of the classification procedures mentioned in earlier sections utilize surrogate loss functions which are either upper bounds on 0-1 loss or can be transformed into upper bounds via a positive scaling factor. It is easy to verify that this is necessary.

Lemma 6. If φ : ℝ → [0, ∞) is classification-calibrated, then there is a γ > 0 such that γφ(α) ≥ 1[α ≤ 0] for all α ∈ ℝ.
3 Tighter bounds under low noise conditions
In a study of the convergence rate of empirical risk minimization, Tsybakov (2001) provided a useful condition on the behavior of the posterior probability near the optimal decision boundary {x : η(x) = 1/2}. Tsybakov's condition is useful in our setting as well; as we show in this section, it allows us to obtain a refinement of Theorem 3.

Recall that

$$R(f) - R^* = \mathbb{E}\big( \mathbf{1}[\operatorname{sign}(f(X)) \ne \operatorname{sign}(\eta(X) - 1/2)]\, |2\eta(X) - 1| \big) \le P_X\big( \operatorname{sign}(f(X)) \ne \operatorname{sign}(\eta(X) - 1/2) \big), \qquad (2)$$

with equality provided that η(X) is almost surely either 1 or 0. We say that P has noise exponent α ≥ 0 if there is a c > 0 such that every measurable f : X → ℝ has

$$P_X\big( \operatorname{sign}(f(X)) \ne \operatorname{sign}(\eta(X) - 1/2) \big) \le c\, \big(R(f) - R^*\big)^{\alpha}. \qquad (3)$$

Notice that we must have α ≤ 1, in view of (2). If α = 0, this imposes no constraint on the noise: take c = 1 to see that every probability measure P satisfies (3). On the other hand, it is easy to verify that α = 1 if and only if |2η(X) − 1| ≥ 1/c a.s. [P_X].
Theorem 7. Suppose P has noise exponent 0 < α ≤ 1, and φ is classification-calibrated. Then there is a c > 0 such that for any f : X → ℝ,

$$c\,\big(R(f) - R^*\big)^{\alpha}\; \psi\!\left( \frac{\big(R(f) - R^*\big)^{1-\alpha}}{2c} \right) \le R_\phi(f) - R_\phi^*.$$

Furthermore, this never gives a worse rate than the result of Theorem 3, since

$$\big(R(f) - R^*\big)^{\alpha}\; \psi\!\left( \frac{\big(R(f) - R^*\big)^{1-\alpha}}{2c} \right) \ge \psi\!\left( \frac{R(f) - R^*}{2c} \right).$$
The proof follows closely that of Theorem 3(1), with the modification that we approximate
the error integral separately over subsets of the input space with low and high noise.
4 Estimation rates
Large margin algorithms choose f̂ from a class F to minimize the empirical φ-risk,

$$\hat{R}_\phi(f) = \hat{\mathbb{E}}\phi(Y f(X)) = \frac{1}{n} \sum_{i=1}^{n} \phi(Y_i f(X_i)).$$
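For concreteness, the sketch below minimizes this empirical φ-risk by plain gradient descent with the logistic loss over linear functions f(x) = ⟨w, x⟩. This is not the constrained class B absconv(G) analyzed below, and the step size and iteration count are arbitrary choices.

```python
import numpy as np

def erm_logistic(X, y, lr=0.1, steps=2000):
    """Gradient descent on (1/n) sum_i phi(y_i <w, x_i>) with the
    logistic loss phi(a) = log(1 + exp(-2a)); y entries are +/-1."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(steps):
        margins = y * (X @ w)
        # phi'(a) = -2 / (1 + exp(2a))
        g = -2.0 / (1.0 + np.exp(2.0 * margins))
        w -= lr * (X.T @ (g * y)) / n
    return w
```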
We have seen how the excess risk depends on the excess φ-risk. In this section, we examine the convergence of f̂'s excess φ-risk, R_φ(f̂) − R_φ*. We can split this excess risk into an estimation error term and an approximation error term:

$$R_\phi(\hat{f}) - R_\phi^* = \Big( R_\phi(\hat{f}) - \inf_{f \in F} R_\phi(f) \Big) + \Big( \inf_{f \in F} R_\phi(f) - R_\phi^* \Big).$$

We focus on the first term, the estimation error term. For simplicity, we assume throughout that some f* ∈ F achieves the infimum, R_φ(f*) = inf_{f∈F} R_φ(f).
The simplest way to bound R_φ(f̂) − R_φ(f*) is to show that R̂_φ(f) and R_φ(f) are close, uniformly over F. This approach can give the wrong rate. For example, for a nontrivial class F, the resulting estimation error bound can decrease no faster than 1/√n. However, if F is a small class (for instance, a VC-class) and R_φ(f*) = 0, then R_φ(f̂) should decrease as log n/n. Lee et al. (1996) showed that fast rates are also possible for the quadratic loss φ(α) = (1 − α)² if F is convex, even if R_φ(f*) > 0. In particular, because the quadratic loss function is strictly convex, it is possible to bound the variance of the excess loss (difference between the loss of a function f and that of the optimal f*) in terms of its expectation. Since the variance decreases as we approach the optimal f*, the risk of the empirical minimizer converges more quickly to the optimal risk than the simple uniform convergence results would suggest. Mendelson (2002) improved this result, and extended it from prediction in L₂(P_X) to prediction in L_p(P_X) for other values of p. The proof used the idea of the modulus of convexity of a norm. This idea can be used to give a simpler proof of a more general bound when the loss function satisfies a strict convexity condition, and we obtain risk bounds. The modulus of convexity of an arbitrary strictly convex function (rather than a norm) is a key notion in formulating our results.
Definition 8 (Modulus of convexity). Given a pseudometric d defined on a vector space S, and a convex function f : S → ℝ, the modulus of convexity of f with respect to d is the function δ : [0, ∞) → [0, ∞] satisfying

$$\delta(\epsilon) = \inf\left\{ \frac{f(x_1) + f(x_2)}{2} - f\!\left(\frac{x_1 + x_2}{2}\right) : x_1, x_2 \in S,\; d(x_1, x_2) \ge \epsilon \right\}.$$

If δ(ε) > 0 for all ε > 0, we say that f is strictly convex with respect to d.
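Definition 8 can likewise be checked by brute force on the real line with d(a, b) = |a − b|; the function below is a quadratic-cost sketch, not an efficient implementation, and the grid size is arbitrary.

```python
import numpy as np

def modulus_of_convexity(f, eps, B=1.0, grid=1001):
    """Numerical delta(eps): inf over x1, x2 in [-B, B] with |x1 - x2| >= eps
    of (f(x1) + f(x2))/2 - f((x1 + x2)/2), per Definition 8."""
    xs = np.linspace(-B, B, grid)
    x1, x2 = np.meshgrid(xs, xs)
    mask = np.abs(x1 - x2) >= eps
    gap = (f(x1) + f(x2)) / 2.0 - f((x1 + x2) / 2.0)
    return gap[mask].min()

# quadratic loss (1 - a)^2 has delta(eps) = eps^2 / 4 exactly (Table 1)
delta = modulus_of_convexity(lambda a: (1.0 - a) ** 2, eps=0.5)
```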
We consider loss functions φ that also satisfy a Lipschitz condition with respect to a pseudometric d on ℝ: we say that φ : ℝ → ℝ is Lipschitz with respect to d, with constant L, if for all a, b ∈ ℝ, |φ(a) − φ(b)| ≤ L · d(a, b). (Note that if d is a metric and φ is convex, then φ necessarily satisfies a Lipschitz condition on any compact subset of ℝ.)

We consider four loss functions that satisfy these conditions: the exponential loss function used in AdaBoost, the deviance function for logistic regression, the quadratic loss function, and the truncated quadratic loss function; see Table 1. We use the pseudometric

$$d_\phi(a, b) = \inf\big\{ |a - \alpha| + |\beta - b| : \phi \text{ constant on } (\min\{\alpha, \beta\}, \max\{\alpha, \beta\}) \big\}.$$

For all except the truncated quadratic loss function, this corresponds to the standard metric on ℝ, d_φ(a, b) = |a − b|. In all cases, d_φ(a, b) ≤ |a − b|, but for the truncated quadratic, d_φ ignores differences to the right of 1.
In the following result, we consider the function class used by algorithms such as AdaBoost: the class of linear combinations of classifiers from a fixed base class. We assume that this base class has finite Vapnik-Chervonenkis dimension, and we constrain the size of the class by restricting the ℓ₁ norm of the linear parameters. If G is the VC-class, we write F = B absconv(G), for some constant B, where

$$B \operatorname{absconv}(G) = \left\{ \sum_{i=1}^{m} \beta_i g_i : m \in \mathbb{N},\; \beta_i \in \mathbb{R},\; g_i \in G,\; \|\beta\|_1 = B \right\}.$$
Theorem 9. Let φ : ℝ → ℝ be a convex loss function. Suppose that, on the interval [−B, B], φ is Lipschitz with constant L_B and has modulus of convexity δ(ε) = a_B ε² (both with respect to the pseudometric d_φ).

For any probability distribution P on X × Y that has noise exponent α = 1, there is a constant c′ for which the following is true. For i.i.d. data (X₁, Y₁), ..., (Xₙ, Yₙ), let f̂ ∈ F be the minimizer of the empirical φ-risk, R̂_φ(f) = Êφ(Y f(X)). Suppose that F = B absconv(G), where G ⊆ {±1}^X has d_VC(G) = d, and

$$\epsilon \ge B L_B \max\left\{ \left( \frac{L_B}{a_B B} \right)^{1/(d+1)},\, 1 \right\} n^{-(d+2)/(2d+2)}.$$

Then with probability at least 1 − e^{−x},

$$R(\hat{f}) \le R^* + c'\left( \epsilon + \frac{L_B (L_B/a_B + B)\, x}{n} \right) + \inf_{f \in F} R_\phi(f) - R_\phi^*.$$
Notice that the rate obtained here is strictly faster than the classical n^{−1/2} parametric rate, even though the class is infinite dimensional and the optimal element of F can have risk larger than the Bayes risk. The key idea in the proof is similar to ideas from Lee et al. (1996), Mendelson (2002), but simpler. Let f* be the minimizer of φ-risk in a function class F. If the class F is convex and the loss function φ is strictly convex and Lipschitz, then the variance of the excess loss, g_f(x, y) = φ(yf(x)) − φ(yf*(x)), decreases with its expectation. Thus, as a function f ∈ F approaches the optimum, f*, the two losses φ(Y f(X)) and φ(Y f*(X)) become strongly correlated. This leads to the faster rates. More formally, suppose that φ is L-Lipschitz and has modulus of convexity δ(ε) ≥ cε^r with r ≤ 2. Then it is straightforward to show that E g_f² ≤ L² (E g_f /(2c))^{2/r}. For the details, see Bartlett et al. (2003).
5 Conclusions
We have studied the relationship between properties of a nonnegative margin-based loss function φ and the statistical performance of the classifier which, based on an i.i.d. training set, minimizes empirical φ-risk over a class of functions. We first derived a universal upper bound on the population misclassification risk of any thresholded measurable classifier in terms of its corresponding population φ-risk. The bound is governed by the ψ-transform, a convexified variational transform of φ. It is the tightest possible upper bound uniform over all probability distributions and measurable functions in this setting.

Using this upper bound, we characterized the class of loss functions which guarantee that every φ-risk consistent classifier sequence is also Bayes-risk consistent, under any population distribution. Here φ-risk consistency denotes sequential convergence of population φ-risks to the smallest possible φ-risk of any measurable classifier. The characteristic property of such a φ, which we term classification-calibration, is a kind of pointwise Fisher consistency for the conditional φ-risk at each x ∈ X. The necessity of classification-calibration is apparent; the sufficiency underscores its fundamental importance in elaborating the statistical behavior of large-margin classifiers.

Under the low noise assumption of Tsybakov (2001), we sharpened our original upper bound and studied the Bayes-risk consistency of f̂, the minimizer of empirical φ-risk over a convex, bounded class of functions F which is not too complex. We found that, for convex φ satisfying a certain uniform strict convexity condition, empirical φ-risk minimization yields convergence of misclassification risk to that of the best-performing classifier in F, as the sample size grows. Furthermore, the rate of convergence can be strictly faster than the classical n^{−1/2}, depending on the strictness of convexity of φ and the complexity of F.
Acknowledgments
We would like to thank Gilles Blanchard, Olivier Bousquet, Pascal Massart, Ron Meir,
Shahar Mendelson, Martin Wainwright and Bin Yu for helpful discussions.
References
Bartlett, P. L., Jordan, M. I., and McAuliffe, J. M. (2003). Convexity, classification and risk bounds.
Technical Report 638, Dept. of Statistics, UC Berkeley. [www.stat.berkeley.edu/tech-reports].
Boyd, S. and Vandenberghe, L. (2003). Convex Optimization. [www.stanford.edu/~boyd].
Jiang, W. (2003). Process consistency for Adaboost. Annals of Statistics, in press.
Lee, W. S., Bartlett, P. L., and Williamson, R. C. (1996). Efficient agnostic learning of neural networks with bounded fan-in. IEEE Transactions on Information Theory, 42(6):2118?2132.
Lin, Y. (2001). A note on margin-based loss functions in classification. Technical Report 1044r,
Department of Statistics, University of Wisconsin.
Lugosi, G. and Vayatis, N. (2003). On the Bayes risk consistency of regularized boosting methods.
Annals of Statistics, in press.
Mannor, S., Meir, R., and Zhang, T. (2002). The consistency of greedy algorithms for classification.
In Proceedings of the Annual Conference on Computational Learning Theory, pages 319?333.
Mendelson, S. (2002). Improving the sample complexity using global data. IEEE Transactions on
Information Theory, 48(7):1977?1991.
Steinwart, I. (2002). Consistency of support vector machines and other regularized classifiers. Technical Report 02-03, University of Jena, Department of Mathematics and Computer Science.
Tsybakov, A. (2001). Optimal aggregation of classifiers in statistical learning. Technical Report
PMA-682, Université Paris VI.
Zhang, T. (2003). Statistical behavior and consistency of classification methods based on convex risk
minimization. Annals of Statistics, in press.
hinge:1 exploit:1 k1:1 classical:5 parametric:2 surrogate:8 thank:1 discriminant:1 pointwise:3 relationship:9 illustration:1 providing:1 minimizing:3 equivalently:1 relate:1 agenda:1 gilles:1 upper:7 observation:1 finite:3 truncated:5 immediate:1 extended:1 y1:3 reproducing:1 lb:6 arbitrary:1 paris:1 california:1 able:1 including:1 max:3 wainwright:1 misclassification:3 natural:1 eh:1 regularized:2 gf:1 disagrees:1 l2:2 determining:1 wisconsin:1 loss:49 consistent:2 imposes:1 last:2 pma:1 understand:1 wide:1 emerge:1 boundary:1 dimension:1 xn:3 valid:1 ignores:1 refinement:1 transaction:2 excess:13 approximate:1 compact:1 global:1 xi:3 subsequence:1 continuous:5 table:5 transfer:1 ca:1 improving:1 williamson:1 necessarily:1 complex:1 domain:1 main:1 noise:11 x1:14 epigraph:3 referred:1 jena:1 exponential:4 governed:1 theorem:9 emphasized:1 jensen:1 svm:2 weakest:2 intractable:1 exists:1 mendelson:4 vapnik:1 restricting:1 sequential:1 importance:2 margin:5 easier:1 corresponds:2 minimizer:5 satisfies:7 conditional:4 viewed:2 goal:1 lipschitz:8 fisher:4 replace:1 infinite:1 except:1 uniformly:1 principal:1 lemma:7 pas:1 egf:1 meaningful:1 formally:1 support:3 assessed:2 dept:1 correlated:1 |
1,559 | 2,417 | Learning to Find Pre-Images
Gökhan H. Bakır, Jason Weston and Bernhard Schölkopf
Max Planck Institute for Biological Cybernetics
Spemannstraße 38, 72076 Tübingen, Germany
{gb,weston,bs}@tuebingen.mpg.de
Abstract
We consider the problem of reconstructing patterns from a feature map.
Learning algorithms using kernels to operate in a reproducing kernel
Hilbert space (RKHS) express their solutions in terms of input points
mapped into the RKHS. We introduce a technique based on kernel principal component analysis and regression to reconstruct corresponding patterns in the input space (aka pre-images) and review its performance in
several applications requiring the construction of pre-images. The introduced technique avoids difficult and/or unstable numerical optimization,
is easy to implement and, unlike previous methods, permits the computation of pre-images in discrete input spaces.
1 Introduction
We denote by H_k the RKHS associated with the kernel k(x, y) = Φ(x)⊤Φ(y), where Φ : X → H_k is a possibly nonlinear mapping from input space X (assumed to be a nonempty set) to the possibly infinite-dimensional space H_k. The pre-image problem is defined as follows: given a point Ψ in H_k, find a corresponding pattern x ∈ X such that Ψ = Φ(x). Since H_k is usually a far larger space than X, this is often not possible (see Fig. ??). In these cases, the (approximate) pre-image z is chosen such that the squared distance of Ψ and Φ(z) is minimized,

$$z = \arg\min_{z} \|\Psi - \Phi(z)\|^2. \qquad (1)$$
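For the Gaussian kernel k(x, y) = exp(−‖x − y‖²/(2σ²)), minimizing (1) leads to a fixed-point iteration of the kind used in [2]. A sketch follows, included only to make the contrast with the learning approach proposed here concrete; it inherits the local-minima and stability issues discussed below.

```python
import numpy as np

def preimage_fixed_point(gammas, Xtrain, sigma, z0, iters=100):
    """Approximate pre-image of Psi = sum_i gammas_i Phi(x_i) under a
    Gaussian kernel, via the classical fixed-point update
    z <- sum_i gammas_i k(x_i, z) x_i / sum_i gammas_i k(x_i, z)."""
    z = z0.copy()
    for _ in range(iters):
        w = gammas * np.exp(-np.sum((Xtrain - z) ** 2, axis=1)
                            / (2.0 * sigma ** 2))
        if abs(w.sum()) < 1e-12:    # numerical failure mode noted in the text
            break
        z = (w[:, None] * Xtrain).sum(axis=0) / w.sum()
    return z
```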
This has a significant range of applications in kernel methods: for reduced set methods [1], for denoising and compression using kernel principal components analysis (kPCA), and for kernel dependency estimation (KDE), where one finds a mapping between paired sets of objects. The techniques used so far to solve this nonlinear optimization problem often employ gradient descent [1] or nonlinear iteration methods [2]. Unfortunately, this suffers from (i) being a difficult nonlinear optimization problem with local minima requiring restarts and other numerical issues, (ii) being computationally inefficient, given that the problem is solved individually for each testing example, (iii) not being the optimal approach (e.g., we may be interested in minimizing a classification error rather than a distance in feature space); and (iv) not being applicable for pre-images which are objects with discrete variables.
In this paper we propose a method which can resolve all four difficulties: the simple idea is
to estimate the function (1) by learning the map ? ? z from examples (?(z), z). Depending on the learning technique used this can mean, after training, each use of the function
(each pre-image found) can be computed very efficiently, and there are no longer issues
with complex optimization code. Note that this problem is unusual in that it is possible to
produce an infinite amount of training data (and thus expect to get good performance) by
generating points in Hk and labeling them using (1). However, often we have knowledge
about the distribution over the pre-images, e.g., when denoising digits with kPCA, one expects as a pre-image something that looks like a digit, and an estimate of this distribution is
actually given by the original data. Taking this distribution into account, it is conceivable
that a learning method could outperform the naive method, that of equation (1), by producing pre-images that are subjectively preferable to the minimizers of (1). Finally, learning
to find pre-images can also be applied to objects with discrete variables, such as for string
outputs as in part-of-speech tagging or protein secondary structure prediction.
The remainder of the paper is organized as follows: in Section 2 we review kernel methods
requiring the use of pre-images: kPCA and KDE. Then, in Section 3 we describe our
approach for learning pre-images. In Section 4 we verify our method experimentally in the
above applications, and in Section 5 we conclude with a discussion.
2 Methods Requiring Pre-Images

2.1 Kernel PCA Denoising and Compression
Given data points {x_i}_{i=1}^m ⊂ X, kPCA constructs an orthogonal set of feature extractors in the RKHS. The constructed orthogonal system P = {v_1, . . . , v_r} lies in the span of the data points, i.e., P = ( Σ_{i=1}^m α_i^1 Φ(x_i), . . . , Σ_{i=1}^m α_i^r Φ(x_i) ). It is obtained by solving the eigenvalue problem mλα^i = Kα^i for 1 ≤ i ≤ r, where K_ij = k(x_i, x_j) is the kernel matrix and r ≤ m is the number of nonzero eigenvalues.¹ Once built, the orthogonal system P can be used for nonlinear feature extraction. Let x denote a test point; then the nonlinear principal components can be extracted via

P Φ(x) = ( Σ_{i=1}^m α_i^1 k(x_i, x), . . . , Σ_{i=1}^m α_i^r k(x_i, x) ),

where k(x_i, x) is substituted for Φ(x_i)ᵀΦ(x). See ([3], [4] chapter 14) for details.
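For concreteness, here is a minimal NumPy sketch of this construction; the Gaussian kernel, the tolerance for discarding eigenvalues, and all function names are our own illustrative choices, not taken from the paper.

import numpy as np

def gaussian_kernel(X, Y, sigma=1.0):
    # Pairwise k(x, y) = exp(-||x - y||^2 / (2 sigma^2)).
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def kpca_fit(X, sigma=1.0):
    m = X.shape[0]
    K = gaussian_kernel(X, X, sigma)
    H = np.eye(m) - np.ones((m, m)) / m      # centering matrix (footnote 1)
    Kc = H @ K @ H
    lam, alpha = np.linalg.eigh(Kc)
    lam, alpha = lam[::-1], alpha[:, ::-1]   # sort eigenvalues descending
    keep = lam > 1e-10                       # the r nonzero eigenvalues
    # Scale so each axis v_i = sum_j alpha[j, i] Phi~(x_j) has unit RKHS
    # norm: ||v_i||^2 = lam_i * ||alpha_i||^2.
    alpha = alpha[:, keep] / np.sqrt(lam[keep])
    return K, alpha

def kpca_project(Xtrain, K, alpha, Xtest, sigma=1.0):
    # Nonlinear principal components P Phi(x) of test points, centered
    # with the same center obtained from the training stage.
    Kt = gaussian_kernel(Xtest, Xtrain, sigma)
    Ktc = (Kt - Kt.mean(axis=1, keepdims=True)
           - K.mean(axis=0)[None, :] + K.mean())
    return Ktc @ alpha

Projecting the training set itself with kpca_project recovers the usual kPCA scores.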
Besides serving as a feature extractor, kPCA has been proposed as a denoising and compression procedure, both of which require the calculation of input patterns x from feature space points P Φ(x).

Denoising. Denoising is a technique used to reconstruct patterns corrupted by noise. Given data points {x_i}_{i=1}^m and the orthogonal system P = (v_1, . . . , v_a, . . . , v_r) obtained by kPCA, and assuming the orthogonal system is sorted by decreasing variance, we write Φ(x) = P Φ(x) = P_a Φ(x) + P_a^⊥ Φ(x), where P_a denotes the projection onto the span of (v_1, . . . , v_a). The hope is that P_a Φ(x) retains the main structure of x, while P_a^⊥ Φ(x) contains noise. If this is the case, then we should be able to construct a denoised input pattern as the pre-image of P_a Φ(x). This denoised pattern z can be obtained as the solution to the problem

z = arg min_z ‖P_a Φ(x) − Φ(z)‖².    (2)
For an application of kPCA denoising see [2].
Compression. Consider a sender-receiver scenario, where the sender S wants to transmit information to the receiver R. If S and R have the same projection matrix P serving as a vocabulary, then S could use P_a to encode x and send P_a Φ(x) ∈ R^a instead of x ∈ R^n. This corresponds to a lossy compression, and is useful if a ≪ n. R would obtain the corresponding pattern x by minimizing (2) again. Therefore kPCA would serve as encoder and the pre-image technique as decoder.

¹ We assume that the Φ(x_i) are centered in feature space. This can be achieved by centering the kernel matrix, K_c = (I − (1/m) 11ᵀ) K (I − (1/m) 11ᵀ), where 1 ∈ R^m is the vector with every entry equal to 1. Test patterns must be centered with the same center obtained from the training stage.
2.2 Kernel Dependency Estimation

Kernel Dependency Estimation (KDE) is a novel algorithm [5] which is able to learn general mappings between an input set X and output set Y, given definitions of kernels k and l (with feature maps Φ_k and Φ_l) which serve as similarity measures on X and Y, respectively. To learn the mapping from data {x_i, y_i}_{i=1}^m ⊂ X × Y, KDE performs two steps.

1) Decomposition of outputs. First a kPCA is performed in H_l associated with kernel l. This results in r principal axes v_1, . . . , v_r in H_l. Given the principal axes, one can obtain the principal components (Φ_l(y)ᵀv_1, . . . , Φ_l(y)ᵀv_r) of any object y.

2) Learning the map. Next, we learn the map from Φ_k(x) to (Φ_l(y)ᵀv_1, . . . , Φ_l(y)ᵀv_r). To this end, for each principal axis v_j we solve the problem

arg min_{β^j} Σ_{i=1}^m ( Φ_l(y_i)ᵀv_j − g(x_i, β^j) )² + λ‖β^j‖²,    (3)

where λ‖β^j‖² acts as a regularization term (with λ > 0), g(x_i, β^j) = Σ_{s=1}^m β_s^j k(x_s, x_i), and β ∈ R^{m×r}. Let P ∈ R^{m×r} with P_ij = Φ_l(y_i)ᵀv_j, j = 1 . . . r, and K ∈ R^{m×m} the kernel matrix with entries K_st = k(x_s, x_t), with s, t = 1 . . . m. Problem (3) can then be minimized, for example via kernel ridge regression, yielding

β = (KᵀK + λI)⁻¹ K P.    (4)

3) Testing Phase. Using the learned map from input patterns to principal components, predicting an output y′ for a new pattern x′ requires solving the pre-image problem

y′ = arg min_y ‖ (Φ_l(y)ᵀv_1, . . . , Φ_l(y)ᵀv_r) − (k(x_1, x′), . . . , k(x_m, x′)) β ‖².    (5)

Thus y′ is the approximate pre-image of the estimated point Φ_l(y′) in H_l.
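The two training steps and the testing phase can be sketched as follows. As a simplification we restrict the pre-image search in (5) to the training outputs (a nearest-candidate surrogate), so this illustrates the pipeline rather than the paper's exact procedure; all names are ours, and the kernel matrices are assumed pre-centered.

import numpy as np

def kde_fit(Kx, Ky, lam=0.1, r=10):
    # Kx: m x m input kernel matrix; Ky: m x m output kernel matrix.
    m = Kx.shape[0]
    # 1) kPCA on the outputs: axes v_j = sum_s V[s, j] Phi_l(y_s).
    ev, V = np.linalg.eigh(Ky)
    ev, V = ev[::-1][:r], V[:, ::-1][:, :r]
    V = V / np.sqrt(np.maximum(ev, 1e-12))   # unit RKHS norm per axis
    P = Ky @ V                               # P[i, j] = Phi_l(y_i)^T v_j
    # 2) Kernel ridge regression onto the output components, eq. (4).
    beta = np.linalg.solve(Kx.T @ Kx + lam * np.eye(m), Kx.T @ P)
    return beta, P

def kde_predict(kx_test, beta, P):
    # 3) Predict output components for a test input and return, as an
    # approximate pre-image, the index of the training output whose
    # components are closest (surrogate for problem (5)).
    pred = kx_test @ beta                    # kx_test[i] = k(x_i, x')
    d = ((P - pred) ** 2).sum(axis=1)
    return int(np.argmin(d))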
3 Learning Pre-Images
We shall now argue that by mainly being concerned with (1), the methods that have been
used for this task in the past disregard an important piece of information. Let us summarize
the state of the art (for details, see [4]).
Exact pre-images. One can show that if an exact pre-image exists, and if the kernel can be written as k(x, x′) = f_k(xᵀx′) with an invertible function f_k (e.g., k(x, x′) = (xᵀx′)^d with odd d), then one can compute the pre-image analytically as

z = Σ_{i=1}^N f_k⁻¹( Σ_{j=1}^m α_j k(x_j, e_i) ) e_i,

where {e_1, . . . , e_N} is any orthonormal basis of input space. However, if one tries to apply this method in practice, it usually works less well than the approximate pre-image methods described below. This is due to the fact that it usually is not the case that exact pre-images exist.
General approximation methods. These methods are based on the minimization of (1).
Whilst there are certain cases where the minimizer of (1) can be found by solving an eigenvalue problem (for k(x, x′) = (xᵀx′)²), people in general resort to methods of nonlinear
optimization. For instance, if the kernel is differentiable, one can multiply out (1) to express it in terms of the kernel, and then perform gradient descent [1]. The drawback of
these methods is that the optimization procedure is expensive and will in general only find
a local optimum. Alternatively one can select the k best input points from some training
set and use them in combination to minimize the distance (1), see [6] for details.
Iteration schemes for particular kernels. For particular types of kernels, such as radial
basis functions, one can devise fixed point iteration schemes which allow faster minimization of (1). Again, there is no guarantee that this leads to a global optimum.
One aspect shared by all these methods is that they do not explicitly make use of the fact that we have labeled examples of the unknown pre-image map: specifically, if we consider any point x ∈ X, we know that the pre-image of Φ(x) is simply x.² Below, we describe a method which makes heavy use of this information. Specifically, we use kernel regression to estimate the pre-image map from data. As a data set, we consider the training data {x_i}_{i=1}^m that we are given in our original learning problem (kPCA, KDE, etc.).
3.1 Estimation of the Pre-Image Map

We seek to estimate a function Γ : Hk → X with the property that, at least approximately, Γ(Φ(x_i)) = x_i for i = 1, . . . , m. If we were to use regression using the kernel k corresponding to Hk, then we would simply look for weight vectors w_j ∈ Hk, j = 1, . . . , dim X, such that Γ_j(Ψ) = w_jᵀΨ, and use the kernel trick to evaluate Γ. However, in general we may want to use a kernel κ which is different from k, and thus we cannot perform our computations implicitly by the use of a kernel. This looks like a problem, but there is a way to handle it. It is based on the well-known observation that although the data in Hk may live in an infinite-dimensional space, any finite data set spans a subspace of finite dimension. A convenient way of working in that subspace is to choose a basis and to work in coordinates, e.g., using a kPCA basis. Let P_n Ψ = Σ_{i=1}^n (Ψᵀv_i) v_i denote the projection that maps a point onto its coordinates in the PCA basis v_1, . . . , v_n, i.e., onto the subspace where the training set has nonzero variance. We then learn the pre-image map Γ_j : R^n → X by solving the learning problem

Γ_j = arg min_{Γ_j} Σ_{i=1}^m l( x_i, Γ(P_n Φ(x_i)) ) + λΩ(Γ).    (6)

Here, Ω is a regularizer, and λ ≥ 0. If X is the vector space R^N, we can consider the problem (6) as a standard regression problem for the m training points x_i and use kernel ridge regression with a kernel κ. This yields a pre-image mapping

Γ_j(P_n Φ(x)) = Σ_{r=1}^m β_r^j κ( P_n Φ(x), P_n Φ(x_r) ),    j = 1, . . . , N,

which can be solved like (3).
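A minimal sketch of this estimation step, assuming squared loss and kernel ridge regression with an RBF kernel κ on the projected coordinates; for brevity we use the equivalent dual ridge form β = (K + λI)⁻¹X rather than the normal-equation form of (3), and all names are illustrative.

import numpy as np

def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def fit_preimage_map(Z, X, lam=1.0, sigma=1.0):
    # Z: m x n projected coordinates P_n Phi(x_i) (e.g., kPCA scores),
    # X: m x N corresponding input patterns x_i.
    m = Z.shape[0]
    Kz = rbf(Z, Z, sigma)
    # One ridge regression per output dimension, solved jointly:
    # Gamma_j(z) = sum_r beta[r, j] kappa(z, z_r).
    beta = np.linalg.solve(Kz + lam * np.eye(m), X)
    return beta

def apply_preimage_map(Z_train, beta, Z_new, sigma=1.0):
    # Reconstruct input-space patterns for new projected points.
    return rbf(Z_new, Z_train, sigma) @ beta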
Note that the general learning setup of (6) allows the use of any suitable loss function, incorporating invariances and a priori knowledge. For example, if the pre-images are (natural)
images, a psychophysically motivated loss function could be used, which would allow the
algorithm to ignore differences that cannot be perceived.
3.2 Pre-Images for Complex Objects
In methods such as KDE one is interested in finding pre-images for general sets of objects,
e.g. one may wish to find a string which is the pre-image of a representation using a string
kernel [7, 8]. Using gradient descent techniques this is not possible as the objects have
discrete variables (elements of the string). However, using function estimation techniques,
as long as it is possible to learn to find pre-images even for such objects, the problem can
be approached by decomposition into several learning subtasks. This should be possible
whenever there is structure in the object one is trying to predict. In the case of strings one
can predict each character of the string independently given the estimate ?l (y0 ). This is
made particularly tractable in fixed-length string prediction problems such as for part-of-speech tagging or protein secondary structure prediction because the length is known (it is
the same length as the input). Otherwise the task is more difficult but still one could also
² It may not be the only pre-image, but this does not matter as long as it minimizes the value of (1).
predict the length of the output string before predicting each element of it. As an example,
we now describe in depth a method for finding pre-images for known-length strings.
The task is to predict a string y given a string x and a set of paired examples (x_i, y_i) ∈ ∪_{p=1}^∞ (Σ_x)^p × ∪_{p=1}^∞ (Σ_y)^p. Note that |x_i| = |y_i| for all i, i.e., the lengths of any paired input and output strings are the same. This is the setting of part-of-speech tagging, where Σ_x are words and Σ_y are parts of speech, and also of secondary structure prediction, where Σ_x are amino acids of a protein sequence and Σ_y are classes of structure that the sequence folds into, e.g. helix, sheet or coil.
It is possible to use KDE (Section 2.2) to solve this task directly. One has to define an appropriate similarity function for both sets of objects using a kernel function, giving two implicit maps Φ_k(x) and Φ_l(y) using string kernels. KDE then learns a map between the two feature spaces, and for a new test string x one must find the pre-image of the estimate Φ_l(y′) as in equation (5). One can find this pre-image by predicting each character of the string independently given the estimate Φ_l(y′), as it has known length given the input x. One can thus learn a function b_p = f(Φ_l(y′), σ_p), where b_p is the pth element of the output and σ_p = (a_{p−n/2} a_{p−n/2+1} . . . a_{p+n/2}) is a window of length n + 1 with center at position p in the input string. One computes the entire output string with ŷ = ( f(Φ_l(y′), σ_1), . . . , f(Φ_l(y′), σ_{|x|}) ); window elements outside of the string can be encoded with a special terminal character. The function f can be trained with any multi-class
classification algorithm to predict one of the elements of the alphabet, the approach can
thus be seen as a generalization of the traditional approach which is learning a function
f given only a window on the input (the second parameter). Our approach first estimates
the output using global information from the input and with respect to the loss function of
interest on the outputs; it only decodes this global prediction in the final step. Note that
problems such as secondary structure prediction often have loss functions dependent on the
complete outputs, not individual elements of the output string [9].
4 Experiments
In the following we demonstrate the pre-image learning technique on the applications we
have introduced.
Figure 1: Denoising USPS digits (panels: Gaussian noise, PCA, kPCA + gradient descent, kPCA + learned pre-images): linear PCA fails on this task; learning to find pre-images for kPCA performs at least as well as finding pre-images by gradient descent.
KPCA Denoising. We performed a similar experiment to the one in [2] for demonstration purposes: we denoised USPS digits using linear PCA and kPCA. We added Gaussian noise with variance 0.5 and selected 100 randomly chosen non-noisy digits for training and a further 100 noisy digits for testing, 10 from each class. As in [2] we chose a nonlinear map via a Gaussian kernel with σ = 8. We selected 80 principal components for kPCA. We
found pre-images using the Matlab function fminsearch, and compared this to our pre-
image-learning method (RBF kernel κ(x, x′) = exp(−‖x − x′‖²/(2σ²)) with σ = 1, and regularization parameter λ = 1). Figure 1 shows the results: our approach appears to
perform better than the gradient descent approach. As in [2], linear PCA visually fails for
this problem: we show its best results, using 32 components. Note that the mean squared error performance of the algorithms is not precisely in accordance with the loss of interest to the user. This can be seen from the MSE values: PCA achieves 13.8±0.4 versus gradient descent 31.6±1.7 and learnt pre-images 29.2±1.8. PCA has the lowest MSE, but as can be seen in Figure 1 it does not give satisfactory visual results in terms of denoising.
Note that some of the digits shown are actually denoised incorrectly as the wrong class.
This is of course possible as choosing the correct digit is a problem which is harder than
a standard digit classification problem because the images are noisy. Moreover, kPCA is
not a classifier per se and could not be expected to classify digits as well as Support Vector
Machines. In this experiment, we also took a rather small number of training examples,
because otherwise the fminsearch code for the gradient descent was very slow, and this
allowed us to compare our results more easily.
KPCA Compression. For the compression experiment we use a video sequence consisting of 1000 graylevel images, where every frame has a 100 × 100 pixel resolution. The
video sequence shows a famous science fiction figure turning his head 180 degrees. For
training we used every 20th frame resulting in a video sequence of 50 frames with 3.6 degree orientation difference per image. The motivation is to store only these 50 frames and
to reconstruct all frames in between.
We applied kPCA to all 50 frames with a Gaussian kernel with kernel parameter σ₁. The 50 feature vectors v_1, . . . , v_50 ∈ R^50 were then used to learn the interpolation between the timelines of the 50 principal components v_ij, where i is the time index and j the principal component number, 1 ≤ i, j ≤ 50. A kernel ridge regression with Gaussian kernel parameter σ₂ and ridge r₁ was used for this task. Finally the pre-image map Γ was learned from projections onto the v_i to frames using kernel ridge regression with kernel parameter σ₃ and ridge r₂. All parameters σ₁, σ₂, σ₃, r₁, r₂ were selected in a loop such that newly synthesized frames looked subjectively best. This led to the values σ₁ = 2.5, σ₂ = 1, σ₃ = 0.15, and for the ridge parameters r₁ = 10⁻¹³, r₂ = 10⁻⁷. Figure 2 shows the
Note that the pre-image mechanism could possibly be adapted to take into account invariances and a priori knowledge, like geometries of standard heads, to reduce blending effects,
making it more powerful than gradient descent or plain linear interpolation of frames. For
an application of classical pre-image methods to face modelling, see [10].
String Prediction with Kernel Dependency Estimation. In the following we expose a
simple string mapping problem to show the potential of the approach outlined in Section
3.2. We construct an artificial problem with |?x | = 3 and |?y | = 2. Output strings are
generated by the following algorithm: start in a random state (1 or 2) corresponding to one
of the output symbols. The next symbol is either the same or, with probability 51 , the state
switches (this tends to make neighboring characters the same symbol). The length of the
string is randomly chosen between 10 and 20 symbols. Each input string is generated with
equal probability from one of two models, starting randomly in state a, b or c and using the
following transition matrices, depending on the current output state:

Model 1, Output 1:         Model 1, Output 2:
      a    b    c                a    b    c
 a    0    0    1           a   1/2  1/2   0
 b    0    0    1           b   1/2  1/2   0
 c    1    0    0           c    0    1    0

Model 2, Output 1:         Model 2, Output 2:
      a    b    c                a    b    c
 a   1/2  1/2   0           a    1    0    0
 b   1/2  1/2   0           b    0    1    0
 c    0   1/2  1/2          c    0    1    0
Figure 2: Kernel PCA compression used to learn intermediate images (top: subsequence of the original video sequence, whose first and last frames are used in the training set; bottom: subsequence of the synthesized video sequence). The pre-images are in a 100 × 100 dimensional space, making gradient-descent-based methods impracticable.
As the model of the string can be better predicted from the complete string, a global method
could be better in principle than a window-based method. We use a string kernel called the
spectrum kernel [11] to define kernels for inputs. This method builds a representation which is a frequency count of all possible contiguous subsequences of length p. This produces a mapping with features

Φ_k(x) = ⟨ Σ_{i=1}^{|x|−p+1} [(x_i, . . . , x_{i+p−1}) = α] : α ∈ (Σ_x)^p ⟩,

where [x = y] is 1 if x = y, and 0 otherwise. To define a feature space for outputs we count the number of contiguous subsequences of length p on the input that, if starting in position q, have the same element of the alphabet at position q + (p − 1)/2 in the output, for odd values of p. That is,

Φ_l(x, y) = ⟨ Σ_{i=1}^{|x|−p+1} [(x_i, . . . , x_{i+p−1}) = α][y_{i+(p−1)/2} = b] : α ∈ (Σ_x)^p, b ∈ Σ_y ⟩.

We can then learn pre-images using a window also of size p as described in Section 3.2, e.g. using k-NN as the learner. Note that the output kernel is defined on both the inputs and outputs: such an approach is also used in [12] and called "joint kernels", and in their approach the calculation of pre-images is also required, so they only consider specific kernels for computational reasons. In fact, our approach could also be of benefit if used in their algorithm.
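A small sketch of the two feature maps just defined; the dictionaries simply enumerate all length-p substrings over the given alphabets (0-based indexing), and the function names are ours.

from itertools import product
from collections import Counter

def spectrum_features(x, alphabet, p):
    # Phi_k(x): count of each contiguous length-p substring of x.
    keys = [''.join(t) for t in product(alphabet, repeat=p)]
    counts = Counter(x[i:i + p] for i in range(len(x) - p + 1))
    return [counts[k] for k in keys]

def output_features(x, y, in_alphabet, out_alphabet, p):
    # Phi_l(x, y): for each (alpha, b), count windows of x equal to alpha
    # whose centre position carries output symbol b (p odd, |x| = |y|).
    assert p % 2 == 1 and len(x) == len(y)
    keys = [(''.join(t), b) for t in product(in_alphabet, repeat=p)
            for b in out_alphabet]
    counts = Counter((x[i:i + p], y[i + (p - 1) // 2])
                     for i in range(len(x) - p + 1))
    return [counts[k] for k in keys]

# Example: phi = spectrum_features("abcab", "abc", p=3)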
We normalized the input and output kernel matrices such that a matrix S is normalized with S ← D⁻¹SD⁻¹, where D is a diagonal matrix with D_ii = √(S_ii). We also used a nonlinear map for KDE, via an RBF kernel, i.e. K(x, x′) = exp(−d(x, x′)/(2σ²)), where the distance d is induced by the input string kernel defined above, and we set σ = 1.
We give the results on this toy problem using the classification error (fraction of symbols misclassified) in the table below, with 50 strings using 10-fold cross validation. We compare to k-nearest neighbors using a window size of 3; in our method we used p = 3 to generate string kernels, and k-NN to learn the pre-image. We therefore quote different k for both methods. Results for larger window sizes only made the results worse.
        1-NN         3-NN         5-NN         7-NN         9-NN
KDE     0.182±0.03   0.169±0.03   0.162±0.03   0.164±0.03   0.163±0.03
k-NN    0.251±0.03   0.243±0.03   0.249±0.03   0.250±0.03   0.248±0.03

5 Conclusion
We introduced a method to learn the pre-image of a vector in an RKHS. Compared to classical approaches, the new method has the advantage that it is not numerically unstable, it is
much faster to evaluate, and better suited for high-dimensional input spaces. It is demonstrated that it is applicable when the input space is discrete and gradients do not exist. However, as a learning approach, it requires that the patterns used during training reasonably
well represent the points for which we subsequently want to compute pre-images. Otherwise, it can fail, an example being a reduced set (see [1]) application, where one needs
pre-images of linear combinations of mapped points in H, which can be far away from
training points, making generalization of the estimated pre-image map impossible. Indeed,
preliminary experiments (not described in this paper) showed that whilst the method can
be used to compute reduced sets, it seems inferior to classical methods in that domain.
Finally, the learning of the pre-image can probably be augmented with mechanisms for
incorporating a priori knowledge to enhance the performance of pre-image learning, making it more flexible than just a pure optimization approach. Future research directions include the inference of pre-images in structures like graphs and incorporating a priori knowledge
in the pre-image learning stage.
Acknowledgement. The authors would like to thank Kwang In Kim for fruitful discussions, and the anonymous reviewers for their comments.
References
[1] C. J. C. Burges. Simplified support vector decision rules. In L. Saitta, editor, Proceedings of the 13th International Conference on Machine Learning, pages 71-77, San Mateo, CA, 1996. Morgan Kaufmann.
[2] S. Mika, B. Schölkopf, A. J. Smola, K.-R. Müller, M. Scholz, and G. Rätsch. Kernel PCA and de-noising in feature spaces. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Advances in Neural Information Processing Systems 11, pages 536-542, Cambridge, MA, 1999. MIT Press.
[3] B. Schölkopf, A. J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[4] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[5] J. Weston, O. Chapelle, A. Elisseeff, B. Schölkopf, and V. Vapnik. Kernel dependency estimation. In S. Becker, S. Thrun, and K. Obermayer, editors, Advances in Neural Information Processing Systems 15, Cambridge, MA, 2002. MIT Press.
[6] J. T. Kwok and I. W. Tsang. Finding the pre-images in kernel principal component analysis. In NIPS 2002 Workshop on Kernel Machines, 2002.
[7] D. Haussler. Convolution kernels on discrete structures. Technical Report UCSC-CRL-99-10, Computer Science Department, University of California at Santa Cruz, 1999.
[8] H. Lodhi, J. Shawe-Taylor, N. Cristianini, and C. Watkins. Text classification using string kernels. Technical Report 2000-79, NeuroCOLT, 2000. Published in: T. K. Leen, T. G. Dietterich and V. Tresp (eds.), Advances in Neural Information Processing Systems 13, MIT Press, 2001, as well as in JMLR 2:419-444, 2002.
[9] S. Hua and Z. Sun. A novel method of protein secondary structure prediction with high segment overlap measure: SVM approach. Journal of Molecular Biology, 308:397-407, 2001.
[10] S. Romdhani, S. Gong, and A. Psarrou. A multi-view nonlinear active shape model using kernel PCA. In Proceedings of BMVC, pages 483-492, Nottingham, UK, 1999.
[11] C. Leslie, E. Eskin, and W. S. Noble. The spectrum kernel: A string kernel for SVM protein classification. Proceedings of the Pacific Symposium on Biocomputing, 2002.
[12] Y. Altun, I. Tsochantaridis, and T. Hofmann. Hidden Markov support vector machines. In 20th International Conference on Machine Learning (ICML), 2003.
1,560 | 2,418 | Estimating Internal Variables and Parameters of
a Learning Agent by a Particle Filter
Kazuyuki Samejima
Kenji Doya
Department of Computational Neurobiology
ATR Computational Neuroscience laboratories;
?Creating the Brain?, CREST, JST.
?Keihan-na Science City?, Kyoto, 619-0288, Japan
{samejima, doya}@atr.jp
Yasumasa Ueda
Minoru Kimura
Department of Physiology, Kyoto Prefecture University of Medicine,
Kyoto, 602-8566, Japan
{yasu, mkimura}@basic.kpu-m.ac.jp
Abstract
When we model higher order functions, such as learning and memory,
we face a difficulty of comparing neural activities with hidden variables
that depend on the history of sensory and motor signals and the dynamics of the network. Here, we propose a novel method for estimating hidden variables of a learning agent, such as connection weights, from sequences of observable variables. Bayesian estimation is a method to estimate the posterior probability of hidden variables from an observable data sequence using a dynamic model of hidden and observable variables. In this paper, we apply a particle filter for estimating the internal parameters and meta-parameters of a reinforcement learning model. We verified the effectiveness of the method using both artificial data and real animal behavioral
data.
1 Introduction
In neurophysiology, the traditional approach to discover unknown information processing
mechanisms is to compare neuronal activities with external variables, such as sensory stimuli or motor output. Recent advances in computational neuroscience allow us to make predictions on neural mechanisms based on computational models. However, when we model
higher order functions, such as attention, memory and learning, the model must inevitably
include hidden variables which are difficult to infer directly from externally observable
variables.
Although the assessment of the plausibility of such models depends on the right estimate of
the hidden variables, tracking their values in an experimental setting is a difficult problem.
For example, in learning agents, hidden variables such as connection weights change in
time. In addition, the course of learning is modulated by hidden meta-parameters such as
the learning rate.
The goal of this study is two-fold: first, to establish a method to estimate hidden variables, including meta-parameters, from observable experimental data; second, to provide a
method for objectively selecting the most plausible computational model out of multiple
candidates. We introduce a numerical Bayesian estimation method, known as particle filtering, to estimate hidden variables. We validate this method with a reinforcement learning
task.
2 Reinforcement learning as a model of animal and human decision processes
Reinforcement learning can be a model of animal or human behaviors based on reward
delivery. Notably, the response of monkey midbrain dopamine neurons is successfully explained by the temporal difference (TD) error of reinforcement learning models [2]. The
goal of reinforcement learning is to improve the policy so that the agent maximizes rewards
in the long run. The basic strategy of reinforcement learning is to estimate cumulative
future reward under the current policy as the value function and then to improve the policy
based on the value function. A standard algorithm of reinforcement learning is to learn the
action-value function,

Q(s_t, a_t) = E[ Σ_{τ=t}^∞ γ^{τ−t} r_τ | s_t, a_t ],    (1)

which estimates the cumulative future reward when action a is taken at state s. The discount factor 0 < γ < 1 is a meta-parameter that controls the time scale of prediction. The
policy of the learner is then given by comparing action-values, e.g. according to the Boltzmann distribution

P(a | s_t) = exp(β Q(s_t, a)) / Σ_{a′∈A} exp(β Q(s_t, a′)),    (2)

where the inverse temperature β > 0 is another meta-parameter that controls the randomness of action selection. From an experience of state s_t, action a_t, reward r_t, and next state s_{t+1}, the action-value function is updated by the Q-learning algorithm [1] as

δ_TD(t) = r_t + γ max_{a∈A} Q(s_{t+1}, a) − Q(s_t, a_t)
Q(s_t, a_t) ← Q(s_t, a_t) + α δ_TD(t)    (3)

where α > 0 is the meta-parameter that controls the learning rate. Thus this simple reinforcement learning model has three meta-parameters: α, β and γ. Such a reinforcement learning model does not only predict a subject's actions, but also predicts internal processes of the brain, which may be recorded as neural firing or brain imaging data. However, a big problem is that the predictions depend on the setting of the meta-parameters, such as the learning rate α, action randomness β and discount factor γ.
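As an illustration, here is a compact sketch of the agent defined by (2)-(3), specialized to the single-state bandit task used in Section 4 (so γ = 0 and the TD target is just r_t). The reward magnitude r = 5 follows the experiments below; the action coding (0/1 instead of 1/2) and parameter defaults are our own choices.

import numpy as np

rng = np.random.default_rng(0)

def run_q_agent(p_reward, n_trials=100, alpha=0.05, beta=1.0, r_val=5.0):
    # Single-state two-armed bandit; p_reward[a] is the reward probability.
    Q = np.zeros(2)
    actions, rewards = [], []
    for _ in range(n_trials):
        # Boltzmann action selection, eq. (2).
        p = np.exp(beta * Q)
        p /= p.sum()
        a = rng.choice(2, p=p)
        r = r_val * (rng.random() < p_reward[a])
        # Q-learning update, eq. (3) with gamma = 0.
        Q[a] += alpha * (r - Q[a])
        actions.append(a)
        rewards.append(r)
    return np.array(actions), np.array(rewards)

# Example: a, r = run_q_agent(p_reward=[0.1, 0.9])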
3 Bayesian estimation of hidden variables of a reinforcement learning agent
Let us consider the problem of estimating the time course of the action-values {Q_t(s, a); s ∈ S, a ∈ A, 0 ≤ t ≤ T} and the meta-parameters α, β, and γ of a reinforcement learner by only observing the sequence of states s_t, actions a_t and rewards r_t. We use a Bayesian method of estimating a dynamic hidden variable {x_t; t ∈ ℕ} from a sequence of observable variables {y_t; t ∈ ℕ}. We assume that the hidden variable follows a Markov process
Figure 1: A Bayesian network representation of a Q-learning agent: the dynamics of the observable and unobservable variables depend on the decision, reward probability, state transition, and update rule for the value function. Circles: hidden variables. Double boxes: observable variables. Arrows: probabilistic dependency.
of initial distribution p(x_0) and the transition probability p(x_{t+1} | x_t). The observations {y_t; t ∈ ℕ} are assumed to be conditionally independent given the process {x_t; t ∈ ℕ} and to have the marginal distribution p(y_t | x_t). The problem is to estimate recursively in time the posterior distribution of the hidden variable p(x_{0:t} | y_{1:t}), where x_{0:t} = {x_0, . . . , x_t} and y_{1:t} = {y_1, . . . , y_t}. The marginal distribution is given by the recursive procedure of the following prediction and updating steps,

Prediction:  p(x_t | y_{1:t−1}) = ∫ p(x_t | x_{t−1}) p(x_{t−1} | y_{1:t−1}) dx_{t−1},

Updating:    p(x_t | y_{1:t}) = p(y_t | x_t) p(x_t | y_{1:t−1}) / ∫ p(y_t | x_t) p(x_t | y_{1:t−1}) dx_t.
We use a numerical method called the particle filter [3] to approximate this process. In the particle filter, the distribution of the sequence of hidden variables p(x_{0:t} | y_{1:t}) is represented by a set of random samples, called "particles". Figure 1 is the dynamical Bayesian network representation of a Q-learning agent. The hidden variable x_t consists of the action-values Q(s, a) for each state-action pair, the learning rate α, inverse temperature β, and discount factor γ. The observable variable y_t consists of the state s_t, action a_t, and reward r_t. The marginal distribution p(y_t | x_t) of the observation process is given by the softmax action selection probability (2) combined with the state transition rule and the reward condition p(r_{t+1} | s_t, a_t) given by the environment. The transition probability p(x_{t+1} | x_t) of the hidden variable is given by the Q-learning rule (3) and an assumption about the meta-
Figure 2: Simplified Bayesian network for the two-armed bandit problem.
parameter dynamics. Here we assume that the meta-parameters are constant with small drifts. Because α, β and γ should all be positive, we assume random-walk dynamics in logarithmic space,

log(x_{t+1}) = log(x_t) + ε_x,    ε_x ∼ N(0, σ_x),    (4)

where σ_x is a meta-meta-parameter that defines the variability of the meta-parameter x ∈ {α, β, γ}.
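Putting (2)-(4) together, the estimation procedure is a standard sequential importance resampling filter over x_t = (Q(a=1), Q(a=2), log α, log β). The sketch below uses the initial distribution and meta-meta-parameters quoted in Section 4.2; the multinomial resampling at every step and all names are our own choices, not a prescription from the paper.

import numpy as np

rng = np.random.default_rng(1)

def particle_filter(actions, rewards, n=1000, sig_a=0.05, sig_b=0.005):
    # Particle columns: Q(a=0), Q(a=1), log(alpha), log(beta).
    x = np.column_stack([
        rng.normal(0.0, 1.0, n),             # Q(a=0), variance 1
        rng.normal(0.0, 1.0, n),             # Q(a=1), variance 1
        rng.normal(0.0, np.sqrt(3.0), n),    # log alpha, variance 3
        rng.normal(0.0, 1.0, n),             # log beta, variance 1
    ])
    estimates = []
    for a, r in zip(actions, rewards):
        # Update step: weight by the softmax likelihood of a, eq. (2).
        beta = np.exp(x[:, 3])
        logits = beta[:, None] * x[:, :2]
        logits -= logits.max(axis=1, keepdims=True)
        p = np.exp(logits)
        p /= p.sum(axis=1, keepdims=True)
        w = p[:, a]
        w /= w.sum()
        estimates.append(w @ x)              # posterior mean of hidden state
        # Resample to avoid weight degeneracy.
        x = x[rng.choice(n, n, p=w)]
        # Prediction step: Q update, eq. (3) with gamma = 0, plus
        # random-walk drift of the meta-parameters, eq. (4).
        x[:, a] += np.exp(x[:, 2]) * (r - x[:, a])
        x[:, 2] += rng.normal(0.0, sig_a, n)
        x[:, 3] += rng.normal(0.0, sig_b, n)
    return np.array(estimates)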
4 Simulations

4.1 Two-armed bandit problem with block-wise reward change
In order to test the validity of the proposed method, we use a simple Q-learning agent that learns a two-armed bandit problem [1]. The task has only one state, two actions, and stochastic binary reward. The reward probability for each action is fixed in a block of 100 trials. The reward probabilities Pr1 for action a = 1 and Pr2 for action a = 2 are selected randomly from three settings, {Pr1, Pr2} = {0.1, 0.9}, {0.5, 0.5}, {0.9, 0.1}, at the beginning of each block.
The Q-learning agent tries to learn the reward expectation of each action and maximize the reward acquired in each block. Because the task has only one state, the agent does not need to take into account the next state's value, and thus we set the discount factor as γ = 0. The Bayesian network for this example is simplified as in Figure 2. Simulated actions are selected according to the Boltzmann distribution (2) using the action-values Q(a = 1) and Q(a = 2) and the inverse temperature β. The action values are updated by equation (3) with the action a_t, reward r_t, and learning rate α.
4.2 Result

We used 1000 particles for approximating the distribution of the hidden variable x = (Q(a = 1), Q(a = 2), log(α), log(β)). We set the initial distribution of the particles as a Gaussian distribution with mean {0, 0, 0, 0} and variance {1, 1, 3, 1} for {Q(a = 1), Q(a = 2), log(α), log(β)}, respectively. We set the meta-meta-parameter for the learning rate as σ_α = 0.05, and for the inverse temperature as σ_β = 0.005. The reward is r = 5 when delivered, and otherwise r = 0.
Figure 3(a) shows the simulated actions and rewards of 1000 trials by a Q-learning agent with α = 0.05 and β = 1. From this observable sequence of y_t = (s_t, a_t, r_t), the particle filter estimated the time course of the action-values Q_t(a = 1) and Q_t(a = 2), learning rate α_t and inverse temperature β_t. The expected values of the marginal distribution of these hidden variables (Figure 3(b)-(e), solid line) are in good agreement with the true values (Figure 3(b)-(e), dotted line) recorded in simulation. Although the initial estimates were inevitably inaccurate, the particle filter gave good estimates of each variable after about 200 observations.
To test the robustness of the particle filter approach, we generated behavioral sequences of Q-learners with different combinations of α = {0.01, 0.15, 0.1, 0.5} and β = {0.5, 1, 2, 4}, and estimated the meta-parameters α and β. Even if we set a broad initial distribution of α and β, the expectation values of the estimates are in good agreement with the true values. When the agent had the smallest learning rate α = 0.01, the particle filter tended to underestimate α and overestimate β.
5 Application to monkey behavioral data
We applied the particle filter approach to monkey behavioral data of the two-armed bandit
problem [4]. In this task, the monkey faces a lever that can be turned to either left or right.
After adjusting a lever at center position and holding it for one second, the monkey turned
the lever to left or right based on the reward probabilities assigned on each direction of
lever turn. Probabilities [PL, PR] of reward delivery on the left and right turns, respectively
were varied across three trial blocks as: [PL, PR]=[0.5, 0.5]; [0.1, 0.9]; [0.9, 0.1]. In each
block, the monkeys shifted selection to the direction with higher reward probability.
We used 1000 particles and a Gaussian initial distribution with mean (2, 2, 3, 0) and variance (2, 2, 1, 1) for x = (Q(R), Q(L), log(α), log(β)). We set the meta-meta-parameter for the learning rate as σ_α = 0.05, and for the inverse temperature as σ_β = 0.001. The reward was r = 5 when delivered, and otherwise r = 0.
Figure 5(a) shows the sequence of selected actions and rewards in a day. Figure 5(b) shows
the estimated action-values Q(a = L) and Q(a = R) for the left and right lever turns. The
estimated action value Q(L) for left action increased in the blocks of [PL, PR] = [0.9, 0.1],
decreased in the blocks of [0.1, 0.9], and fluctuated in the blocks of [0.5, 0.5].
We tested whether the estimated action-values and meta-parameters could reproduce the action sequences. We quantified the prediction performance on action sequences by the likelihood of the action data given the estimated model,

L_t = (1 / (N − T + 1)) Σ_{t=T}^N log p̂(a = a_t | {a_1, r_1, · · · , a_{t−1}, r_{t−1}}, M, θ_t),    (5)

where p̂(a) is the estimated probability of the action at time t by model M with estimated parameters θ_t from the sequence of past experience {a_1, r_1, · · · , a_{t−1}, r_{t−1}}.
Figure 6(b) shows the distribution of the likelihood computed for the action data of 74
sessions. We compared the predictability of the proposed method, Q-learning model with
Figure 3: Estimation of hidden variables from simulated actions and rewards of a Q-learning agent. (a) Sequence of simulated actions and rewards of the Q-learning agent: circles are rewarded trials; dots are non-rewarded trials. (b)-(e) Time course of the hidden variables of the model (dotted line) and of the expectation value (solid line) estimated by the particle filter: (b)(c) Q-values for each action, (d) learning rate α, and (e) action randomness β. Shaded areas indicate the blocks of [0.9, 0.1] or [0.1, 0.9]; white areas indicate [0.5, 0.5].
Figure 4: Expected values of the estimated meta-parameters from the 1000 trials generated with different settings. The side boxes show the initial distribution of the particles.
Figure 5: Expected values of estimated hidden variables from animal behavioral data: (a) action and reward sequences (circles are rewarded trials; dots indicate non-rewarded trials); (b)-(d) estimated values of (b) the action-value function, (c) learning rate, and (d) action randomness. Shaded areas indicate the blocks of [0.9, 0.1] or [0.1, 0.9]; white areas indicate [0.5, 0.5].
Figure 6: Comparing models. (a) An example contour plot of the log likelihood of action prediction for the fixed meta-parameter Q-learning model; the fixed meta-parameter method needs to find the optimal learning rate α and inverse temperature β (the maximum likelihood point). (b) Distributions of the log likelihood of action prediction by the proposed particle filter method and by the fixed meta-parameter Q-learning model with the optimal meta-parameters. The top and bottom limits of each box show the lower and upper quartiles, and the center of the notch is the median; crosses indicate outliers. Box-plot notches show the 95% confidence interval for the median. The median log likelihood of action prediction by the proposed method is significantly larger than that of the fixed meta-parameter method (Wilcoxon signed rank test; p < 0.0001).
estimating meta-parameters by particle filtering, to the fixed meta-parameter Q-learning model, which used the fixed optimal learning rate α and inverse temperature β in the sense of maximizing the likelihood of action prediction in a session (Figure 6(a)).

The particle filter could predict actions better than the fixed meta-parameter Q-learning model with the optimal meta-parameters (Wilcoxon signed rank test; p < 0.0001). This result indicates that the particle filtering method successfully tracks the changes of the meta-parameters, the learning rate α and the inverse temperature β, through the sessions.
6 Discussion
An advantage of the proposed particle filter method is that we do not have to hand-tune meta-parameters, such as the learning rate. Although we still have to set the meta-meta-parameters, which define the dynamics of the meta-parameters, the behavior of the estimates is less sensitive to their settings than to the settings of the meta-parameters themselves. Dependency on the initial distribution of the hidden variables decreases with an increasing amount of data.

An extension of this study would be to perform model selection objectively using a hierarchical Bayesian approach. For example, several possible reinforcement learning models, e.g. Q-learning, the Sarsa algorithm or a policy gradient algorithm, could be compared in terms of the posterior probability of the models.
Recently, computational models with heuristic meta-parameters have been successfully
used to generate regressors for neuroimaging data [5]. Bayesian method enables generating such regressors in a more objective, data-driven manner. We are going to apply the
current method for characterizing neural recording data from the monkey.
7 Conclusion
We proposed a particle filter method to estimate internal parameters and meta-parameters
of a reinforcement learning agent from observable variables. Our method is a powerful
tool for interpreting neurophysiological and neuroimaging data in light of computational
models, and to build better models in light of experimental data.
Acknowledgments
This research was conducted as part of "Research on Human Communication", with funding from the Telecommunications Advancement Organization of Japan.
References
[1] Sutton RS & Barto AG (1998) Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA.
[2] Schultz W, Dayan P & Montague PR (1997) A neural substrate of prediction and reward. Science 275(5306):1593-1599.
[3] Doucet A, de Freitas N & Gordon N (2001) An introduction to sequential Monte Carlo methods. In Sequential Monte Carlo Methods in Practice, Doucet A, de Freitas N & Gordon N, eds, Springer-Verlag, pp. 3-14.
[4] Ueda Y, Samejima K, Doya K & Kimura M (2002) Reward value dependent striate neuron activity of monkey performing trial-and-error behavioral decision task. Abst. of Soc. Neurosci., 765.13.
[5] O'Doherty J, Dayan P, Friston K, Critchley H & Dolan R (2003) Temporal difference models and reward-related learning in the human brain. Neuron 38:329-337.
1,561 | 2,419 | Linear Response for Approximate Inference
Max Welling
Department of Computer Science
University of Toronto
Toronto M5S 3G4 Canada
[email protected]
Yee Whye Teh
Computer Science Division
University of California at Berkeley
Berkeley CA94720 USA
[email protected]
Abstract
Belief propagation on cyclic graphs is an efficient algorithm for computing approximate marginal probability distributions over single nodes and
neighboring nodes in the graph. In this paper we propose two new algorithms for approximating joint probabilities of arbitrary pairs of nodes
and prove a number of desirable properties that these estimates fulfill.
The first algorithm is a propagation algorithm which is shown to converge if belief propagation converges to a stable fixed point. The second
algorithm is based on matrix inversion. Experiments compare a number
of competing methods.
1 Introduction
Belief propagation (BP) has become an important tool for approximate inference on graphs
with cycles. Especially in the field of ?error correction decoding?, it has brought performance very close to the Shannon limit. BP was studied in a number of papers which have
gradually increased our understanding of the convergence properties and accuracy of the
algorithm. In particular, recent developments show that the stable fixed points are local
minima of the Bethe free energy [10, 1], which paved the way for more accurate ?generalized belief propagation? algorithms and convergent alternatives to BP [11, 6].
Despite its success, BP does not provide a prescription to compute joint probabilities over
pairs of non-neighboring nodes in the graph. When the graph is a tree, there is a single chain
connecting any two nodes, and dynamic programming can be used to efficiently integrate
out the internal variables. However, when cycles exist, it is not clear what approximate
procedure is appropriate. It is precisely this problem that we will address in this paper.
We show that the required estimates can be obtained by computing the ?sensitivity? of the
node marginals to small changes in the node potentials. Based on this idea, we present two
algorithms to estimate the joint probabilities of arbitrary pairs of nodes.
These results are interesting in the inference domain but may also have future applications
to learning graphical models from data. For instance, information about dependencies between random variables is relevant for learning the structure of a graph and the parameters
encoding the interactions.
2 Belief Propagation on Factor Graphs
Let V index a collection of random variables {Xi }i?V and let xi denote values of Xi . For
a subset of nodes ? ? V let X? = {Xi }i?? be the variable associated with that subset, and
x? be values of X? . Let A be a family of such subsets of V . The probability distribution
.
over X = XV is assumed to have the following form,
Y
1 Y
?? (x? )
?i (xi )
(1)
PX (X = x) =
Z
??A
i?V
where Z is the normalization constant (the partition function) and ?? , ?i are positive potential functions defined on subsets and single nodes respectively. In the following we will
.
write P (x) = PX (X = x) for notational simplicity. The decomposition of (1) is consistent
with a factor graph with function nodes over X? and variables nodes Xi . For each i ? V
denote its neighbors by Ni = {? ? A : ? 3 i} and for each subset ? its neighbors are
simply N? = {i ? ?}.
Factor graphs are a convenient representation for structured probabilistic models and subsume undirected graphical models and acyclic directed graphical models [3]. Further, there
is a simple message passing algorithm for approximate inference that generalizes the belief
propagation algorithms on both undirected and acyclic directed graphical models,
Y
X
Y
m?i (xi )
m?i (xi ) ?
?? (x? )
nj? (xj ) (2)
ni? (xi ) ? ?i (xi )
x?\i
??Ni \?
j?N? \i
where ni? (xi ) represents a message from variable node i to factor node ? and vice versa
for message m?i (xi ). Marginal distributions over factor nodes and variable nodes are
expressed in terms of these messages as follows,
Y
Y
1
1
?? (x? )
ni? (xi )
bi (xi ) = ?i (xi )
m?i (xi )
(3)
b? (x? ) =
??
?i
i?N?
??Ni
where ?i , ?? are normalization constants. It was recently established in [10, 1] that stable
fixed points of these update equations correspond to local minima of the Bethe-Gibbs free
energy given by,
BP
GBP ({bBP
i , b? }) =
XX
?
x?
bBP
? (x? ) log
ci
XX
bBP
bBP
? (x? )
i (xi )
+
bBP
i (xi ) log
?? (x? )
?i (xi )
x
i
with ci = 1 ? |Ni | and the marginals are subject to the following local constraints:
X
X
BP
bBP
b? (x? ) = 1,
?? ? A, i ? ?
? (x? ) = bi (xi ),
x?\i
(4)
i
(5)
x?
Since only local constraints are enforced it is no longer guaranteed that the set of marginals
BP
{bBP
i , b? } are consistent with a single joint distribution B(x).
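To make the updates concrete, the following sketch (ours, not from the paper) implements the message equations (2) and the node beliefs of (3) for the special case of pairwise factors; all function and variable names are our own choices.

```python
import numpy as np

def belief_propagation(psi_i, psi_a, nbrs, n_iter=50):
    """Sketch of eqs. (2)-(3) for pairwise factors psi_a(x_i, x_j).

    psi_i: dict node -> 1-d array of node potentials psi_i(x_i)
    psi_a: dict factor -> 2-d array psi_a(x_i, x_j), rows indexed by the
           first neighbor in nbrs[factor], columns by the second
    nbrs:  dict factor -> (i, j), the two variable nodes it touches
    """
    # factor-to-node messages m[(a, i)], initialized uniformly
    m = {(a, i): np.ones_like(psi_i[i]) for a, ij in nbrs.items() for i in ij}
    for _ in range(n_iter):
        for a, (i, j) in nbrs.items():
            for s, t in ((i, j), (j, i)):
                # n_{s a}(x_s) ~ psi_s(x_s) * messages from the other factors, eq. (2)
                n = psi_i[s].copy()
                for b, kl in nbrs.items():
                    if b != a and s in kl:
                        n *= m[(b, s)]
                # m_{a t}(x_t) ~ sum_{x_s} psi_a(x_s, x_t) n_{s a}(x_s), eq. (2)
                tab = psi_a[a] if (s, t) == (i, j) else psi_a[a].T
                msg = tab.T @ n
                m[(a, t)] = msg / msg.sum()     # normalize for stability
    # node beliefs b_i(x_i), eq. (3)
    beliefs = {}
    for i in psi_i:
        b = psi_i[i] * np.prod([m[(a, i)] for a, ij in nbrs.items() if i in ij], axis=0)
        beliefs[i] = b / b.sum()
    return beliefs, m
```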
3 Linear Response
In the following we will be interested in computing estimates of joint probability distributions for arbitrary pairs of nodes. We propose a method based on the linear response
theorem. The idea is to study changes in the system when we perturb single node potentials,

$$\log \psi_i(x_i) = \log \psi_i^0(x_i) + \theta_i(x_i) \qquad (6)$$

The superscript 0 indicates unperturbed quantities in (6) and the following. Let $\theta = \{\theta_i\}$
and define the cumulant generating function of $P(X)$ (up to a constant) as,

$$F(\theta) = -\log \sum_x \prod_{\alpha \in A} \psi_\alpha(x_\alpha) \prod_{i \in V} \psi_i^0(x_i)\, e^{\theta_i(x_i)} \qquad (7)$$
Differentiating $F(\theta)$ with respect to $\theta$ gives the cumulants of $P(x)$,

$$-\frac{\partial F(\theta)}{\partial \theta_j(x_j)}\bigg|_{\theta=0} = p_j(x_j) \qquad (8)$$

$$-\frac{\partial^2 F(\theta)}{\partial \theta_i(x_i)\, \partial \theta_j(x_j)}\bigg|_{\theta=0} = \frac{\partial p_j(x_j)}{\partial \theta_i(x_i)}\bigg|_{\theta=0} =
\begin{cases} p_{ij}(x_i, x_j) - p_i(x_i)\, p_j(x_j) & \text{if } i \neq j \\ p_i(x_i)\,\delta_{x_i, x_j} - p_i(x_i)\, p_j(x_j) & \text{if } i = j \end{cases} \qquad (9)$$

where $p_i, p_{ij}$ are single and pairwise marginals of $P(x)$. Expressions for higher order
cumulants can be derived by taking further derivatives of $-F(\theta)$.
Notice from (9) that the covariance estimates are obtained by studying the perturbations in
$p_j(x_j)$ as we vary $\theta_i(x_i)$. This is not practical in general since calculating $p_j(x_j)$ itself is
intractable. Instead, we consider perturbations of approximate marginal distributions $\{b_j\}$.
In the following we will assume that $b_j(x_j; \theta)$ (with the dependence on $\theta$ made explicit)
are the beliefs at a local minimum of the BP-Gibbs free energy (subject to constraints).
In analogy to (9), let $C_{ij}(x_i, x_j) = \frac{\partial b_j(x_j; \theta)}{\partial \theta_i(x_i)}\big|_{\theta=0}$ be the linear response estimated covariance, and define the linear response estimated joint pairwise marginal as

$$b_{ij}^{LR}(x_i, x_j) = b_i^0(x_i)\, b_j^0(x_j) + C_{ij}(x_i, x_j) \qquad (10)$$

where $b_i^0(x_i) \doteq b_i(x_i; \theta = 0)$. We will show that $b_{ij}^{LR}$ and $C_{ij}$ satisfy a number of important
properties which make them suitable as approximations of joint marginals and covariances.
First we show that $C_{ij}(x_i, x_j)$ can be interpreted as the Hessian of a well-behaved convex
function. Let $C$ be the set of beliefs that satisfy the constraints (5). The approximate
marginals $\{b_i^0\}$ along with the joint marginals $\{b_\alpha^0\}$ form a local minimum of the Bethe-Gibbs free energy (subject to $b^0 = \{b_i^0, b_\alpha^0\} \in C$). Assume that $b^0$ is a strict local minimum
of $G^{BP}$ (the strict local minimality is in fact attained if we use loopy belief propagation [1]).
That is, there is an open domain $D$ containing $b^0$ such that $G^{BP}(b^0) < G^{BP}(b)$ for each
$b \in D \cap C \setminus b^0$. Now we can define

$$G^*(\theta) = \inf_{b \in D \cap C} \Big[ G^{BP}(b) - \sum_{i, x_i} b_i(x_i)\, \theta_i(x_i) \Big] \qquad (11)$$
$G^*(\theta)$ is a concave function since it is the infimum of a set of linear functions in $\theta$. Further
$G^*(0) = G^{BP}(b^0)$. Since $b^0$ is a strict local minimum when $\theta = 0$, small perturbations in $\theta$
will result in small perturbations in $b^0$, so that $G^*$ is well-behaved on an open neighborhood
around $\theta = 0$. Differentiating $G^*(\theta)$, we get $\frac{\partial G^*(\theta)}{\partial \theta_j(x_j)} = -b_j(x_j; \theta)$, so we now have

$$C_{ij}(x_i, x_j) = \frac{\partial b_j(x_j; \theta)}{\partial \theta_i(x_i)}\bigg|_{\theta=0} = -\frac{\partial^2 G^*(\theta)}{\partial \theta_i(x_i)\, \partial \theta_j(x_j)}\bigg|_{\theta=0} \qquad (12)$$
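The definition in (9) and (12) can be checked numerically. The sketch below (ours, not part of the paper) computes exact marginals of a tiny pairwise model by brute-force enumeration and finite-differences them with respect to $\theta_i$; it is only feasible for very small graphs and is intended as a sanity check rather than as the proposed algorithm.

```python
import numpy as np
from itertools import product

def exact_marginal(psi_i, psi_a, nbrs, j, theta=None):
    """Exact p_j(x_j) by enumeration; theta perturbs log psi_i as in eq. (6)."""
    nodes = sorted(psi_i)
    theta = theta or {}
    pj, Z = np.zeros_like(psi_i[j]), 0.0
    for xs in product(*[range(len(psi_i[u])) for u in nodes]):
        s = dict(zip(nodes, xs))
        w = 1.0
        for u in nodes:                       # node potentials, perturbed by theta
            w *= psi_i[u][s[u]] * np.exp(theta[u][s[u]] if u in theta else 0.0)
        for a, (u, v) in nbrs.items():        # pairwise factor potentials
            w *= psi_a[a][s[u], s[v]]
        Z += w
        pj[s[j]] += w
    return pj / Z

def lr_covariance_fd(psi_i, psi_a, nbrs, i, j, eps=1e-5):
    """C_ij(x_i, x_j) = d p_j(x_j) / d theta_i(x_i) at theta = 0, eq. (9),
    estimated by central finite differences."""
    C = np.zeros((len(psi_i[i]), len(psi_i[j])))
    for xi in range(len(psi_i[i])):
        for sign in (+1.0, -1.0):
            th = {i: np.zeros(len(psi_i[i]))}
            th[i][xi] = sign * eps
            C[xi] += sign * exact_marginal(psi_i, psi_a, nbrs, j, th) / (2 * eps)
    return C
```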
In essence, we can interpret $G^*(\theta)$ as a local convex dual of $G^{BP}(b)$ (by restricting attention
to $D$). Since $G^{BP}$ is an approximation to the exact Gibbs free energy [8], which is in turn
dual to $F(\theta)$ [4], $G^*(\theta)$ can be seen as an approximation to $F(\theta)$ for small values of $\theta$. For
that reason we can take its second derivatives $C_{ij}(x_i, x_j)$ as approximations to the exact
covariances (which are second derivatives of $-F(\theta)$).
Theorem 1 The approximate covariance satisfies the following symmetry:

$$C_{ij}(x_i, x_j) = C_{ji}(x_j, x_i) \qquad (13)$$

Proof: The covariances are second derivatives of $-G^*(\theta)$ at $\theta = 0$, so we can interchange
the order of the derivatives since $G^*(\theta)$ is well-behaved on a neighborhood around $\theta = 0$. □
Theorem 2 The approximate covariance satisfies the following "marginalization" conditions for each $x_i, x_j$:

$$\sum_{x_i'} C_{ij}(x_i', x_j) = \sum_{x_j'} C_{ij}(x_i, x_j') = 0 \qquad (14)$$

As a result the approximate joint marginals satisfy local marginalization constraints:

$$\sum_{x_j'} b_{ij}^{LR}(x_i, x_j') = b_i^0(x_i) \qquad \sum_{x_i'} b_{ij}^{LR}(x_i', x_j) = b_j^0(x_j) \qquad (15)$$
Proof: Using the definition of $C_{ij}(x_i, x_j)$ and the marginalization constraint for $b_j^0$,

$$\sum_{x_j'} C_{ij}(x_i, x_j') = \sum_{x_j'} \frac{\partial b_j(x_j'; \theta)}{\partial \theta_i(x_i)}\bigg|_{\theta=0} = \frac{\partial \sum_{x_j'} b_j(x_j'; \theta)}{\partial \theta_i(x_i)}\bigg|_{\theta=0} = \frac{\partial\, 1}{\partial \theta_i(x_i)}\bigg|_{\theta=0} = 0 \qquad (16)$$

The constraint $\sum_{x_i'} C_{ij}(x_i', x_j) = 0$ follows from the symmetry (13), while the corresponding marginalization (15) follows from (14) and the definition of $b_{ij}^{LR}$. □
Since $-F(\theta)$ is convex, its Hessian matrix with entries given in (9) is positive semi-definite.
Similarly, since the approximate covariances $C_{ij}(x_i, x_j)$ are second derivatives of a convex
function $-G^*(\theta)$, we have:

Theorem 3 The matrix formed from the approximate covariances $C_{ij}(x_i, x_j)$ by varying
$i$ and $x_i$ over the rows and varying $j, x_j$ over the columns is positive semi-definite.

Using the above results we can reinterpret the linear response correction as a "projection"
of the (only locally consistent) beliefs $\{b_i^0, b_\alpha^0\}$ onto a set of beliefs $\{b_i^0, b_{ij}^{LR}\}$ that is both
locally consistent (theorem 2) and satisfies the global constraint of being positive semidefinite (theorem 3)¹.
4 Propagating Perturbations for Linear Response
Recall from (10) that we need the first derivative of $b_i(x_i; \theta)$ with respect to $\theta_j(x_j)$ at $\theta = 0$.
This does not automatically imply that we need an analytic expression for $b_i(x_i; \theta)$ in terms
of $\theta$. In this section we show how we may compute these first derivatives by expanding all
quantities and equations up to first order in $\theta$ and keeping track of first order dependencies.
First we assume that belief propagation has converged to a stable fixed point. We expand
the beliefs and messages up to first order as²

$$b_i(x_i; \theta) = b_i^0(x_i) \Big( 1 + \sum_{j, y_j} R_{ij}(x_i, y_j)\, \theta_j(y_j) \Big) \qquad (17)$$

$$n_{i\alpha}(x_i) = n_{i\alpha}^0(x_i) \Big( 1 + \sum_{k, y_k} N_{i\alpha,k}(x_i, y_k)\, \theta_k(y_k) \Big) \qquad (18)$$

$$m_{\alpha i}(x_i) = m_{\alpha i}^0(x_i) \Big( 1 + \sum_{k, y_k} M_{\alpha i,k}(x_i, y_k)\, \theta_k(y_k) \Big) \qquad (19)$$

¹ In extreme cases it is however possible that some entries of $b_{ij}^{LR}$ become negative.
² The unconventional form of this expansion will make subsequent derivations more transparent.
The "response matrices" $R_{ij}$, $N_{i\alpha,j}$, $M_{\alpha i,j}$ measure the sensitivities of the corresponding
logarithms of beliefs and messages to changes in the log potentials $\log \psi_j(y_j)$ at node $j$.
Next, inserting the expansions (6,18,19) into the belief propagation equations (2) and
matching first order terms, we arrive at the following update equations for the "super-messages" $M_{\alpha i,k}(x_i, y_k)$ and $N_{i\alpha,k}(x_i, y_k)$,

$$N_{i\alpha,k}(x_i, y_k) \leftarrow \delta_{ik}\, \delta_{x_i y_k} + \sum_{\beta \in N_i \setminus \alpha} M_{\beta i,k}(x_i, y_k) \qquad (20)$$

$$M_{\alpha i,k}(x_i, y_k) \leftarrow \sum_{x_{\alpha \setminus i}} \frac{\psi_\alpha(x_\alpha)}{m_{\alpha i}^0(x_i)} \prod_{j \in N_\alpha \setminus i} n_{j\alpha}^0(x_j) \sum_{j \in N_\alpha \setminus i} N_{j\alpha,k}(x_j, y_k) \qquad (21)$$
The super-messages are initialized at $M_{\alpha i,k} = N_{i\alpha,k} = 0$ and updated using (20,21)
until convergence. Just as for belief propagation, where messages are normalized to avoid
numerical over or under flow, after each update the super-messages are "normalized" as
follows,

$$M_{\alpha i,k}(x_i, y_k) \leftarrow M_{\alpha i,k}(x_i, y_k) - \sum_{x_i} M_{\alpha i,k}(x_i, y_k) \qquad (22)$$
and similarly for $N_{i\alpha,k}$. After the above fixed point equations have converged, we compute
the response matrix $R_{ij}(x_i, x_j)$ by again inserting the expansions (6,17,19) into (3) and
matching first order terms,

$$R_{ij}(x_i, x_j) = \delta_{ij}\, \delta_{x_i x_j} + \sum_{\alpha \in N_i} M_{\alpha i,j}(x_i, x_j) \qquad (23)$$

The constraints (14) (which follow from the normalization of $b_i(x_i; \theta)$ and $b_i^0(x_i)$) translate
into $\sum_{x_i} b_i^0(x_i) R_{ij}(x_i, y_j) = 0$ and it is not hard to verify that the following shift can be
applied to accomplish this,

$$R_{ij}(x_i, y_j) \leftarrow R_{ij}(x_i, y_j) - \sum_{x_i} b_i^0(x_i)\, R_{ij}(x_i, y_j) \qquad (24)$$

Finally, combining (17) with (12), we get

$$C_{ij}(x_i, x_j) = b_i^0(x_i)\, R_{ij}(x_i, x_j) \qquad (25)$$
Theorem 4 If the factor graph has no loops then the linear response estimates defined in
(25) are exact. Moreover, there exists a scheduling of the super-messages such that the
algorithm converges after just one iteration (i.e. every message is updated just once).

Sketch of Proof: Both results follow from the fact that belief propagation on tree structured
factor graphs computes the exact single node marginals for arbitrary $\theta$. Since the super-messages are the first order terms of the BP updates with arbitrary $\theta$, we can invoke the
exact linear response theorem given by (8) and (9) to claim that the algorithm converges to
the exact joint pairwise marginal distributions. □
For graphs with cycles, BP is not guaranteed to converge. We can however still prove the
following strong result.

Theorem 5 If the messages $\{m_{\alpha i}^0(x_i), n_{i\alpha}^0(x_i)\}$ have converged to a stable fixed point,
then the update equations for the super-messages (20,21,22) will also converge to a unique
stable fixed point, using any scheduling of the super-messages.

Sketch of Proof³: We first note that the updates (20,21,22) form a linear system of equations which can only have one stable fixed point. The existence and stability of this fixed
point is proven by observing that the first order term is identical to the one obtained from
a linear expansion of the BP equations (2) around its stable fixed point. Finally, the Stein-Rosenberg theorem guarantees that any scheduling will converge to the same fixed point. □

³ For a more detailed proof of the above two theorems we refer to [9].
5 Inverting Matrices for Linear Response

In this section we describe an alternative method to compute $\frac{\partial b_i(x_i)}{\partial \theta_k(x_k)}$ by first computing
$\frac{\partial \theta_i(x_i)}{\partial b_k(x_k)}$ and then inverting the matrix formed by flattening $\{i, x_i\}$ into a row index and
$\{k, x_k\}$ into a column index. This method is a direct extension of [2]. The intuition is
that while perturbations in a single $\theta_i(x_i)$ affect the whole system, perturbations in a single
$b_i(x_i)$ (while keeping the others fixed) affect each subsystem $\alpha \in A$ independently (see
[8]). This makes it easier to compute $\frac{\partial \theta_i(x_i)}{\partial b_k(x_k)}$ and then to compute $\frac{\partial b_i(x_i)}{\partial \theta_k(x_k)}$.
First we propose minimal representations for $b_i$, $\theta_i$ and the messages. We assume that for
each node $i$ there is a distinguished value $x_i = 0$. Set $\theta_i(0) = 0$ while functionally defining
$b_i(0) = 1 - \sum_{x_i \neq 0} b_i(x_i)$. Now the matrix formed by $\frac{\partial \theta_i(x_i)}{\partial b_k(x_k)}$ for each $i, k$ and $x_i, x_k \neq 0$
is invertible and its inverse gives us the desired covariances for $x_i, x_k \neq 0$. Values for $x_i =
0$ or $x_k = 0$ can then be computed using (14). We will also need minimal representations
for the messages. This can be achieved by defining new quantities $\lambda_{i\alpha}(x_i) = \log \frac{n_{i\alpha}(x_i)}{n_{i\alpha}(0)}$
for all $i$ and $x_i \neq 0$. The $\lambda_{i\alpha}$'s can be interpreted as Lagrange multipliers to enforce the
consistency constraints (5) [10]. We will use these multipliers instead of the messages in
this section.
Re-expressing the fixed point equations (2,3) in terms of the $b_i$'s and $\lambda_{i\alpha}$'s only, and introducing the perturbations $\theta_i$, we get:

$$\left( \frac{b_i(x_i)}{b_i(0)} \right)^{c_i} = \frac{\psi_i(x_i)}{\psi_i(0)}\, e^{\theta_i(x_i)} \prod_{\alpha \in N_i} e^{-\lambda_{i\alpha}(x_i)} \qquad \text{for all } i,\ x_i \neq 0 \qquad (26)$$

$$b_i(x_i) = \frac{\sum_{x_{\alpha \setminus i}} \psi_\alpha(x_\alpha) \prod_{j \in N_\alpha} e^{\lambda_{j\alpha}(x_j)}}{\sum_{x_\alpha} \psi_\alpha(x_\alpha) \prod_{j \in N_\alpha} e^{\lambda_{j\alpha}(x_j)}} \qquad \text{for all } i,\ \alpha \in N_i,\ x_i \neq 0 \qquad (27)$$
Differentiating the logarithm of (26) with respect to $b_k(x_k)$, we get

$$\frac{\partial \theta_i(x_i)}{\partial b_k(x_k)} + \sum_{\alpha \in N_i} \frac{\partial \lambda_{i\alpha}(x_i)}{\partial b_k(x_k)} = c_i \left( \frac{\delta_{ik}\, \delta_{x_i x_k}}{b_i(x_i)} + \frac{1}{b_i(0)} \right) \qquad (28)$$

remembering that $b_i(0)$ is a function of $b_i(x_i)$, $x_i \neq 0$. Notice that we need values for
$\frac{\partial \lambda_{i\alpha}(x_i)}{\partial b_k(x_k)}$ in order to solve for $\frac{\partial \theta_i(x_i)}{\partial b_k(x_k)}$. Since perturbations in $b_k(x_k)$ (while keeping other
$b_j$'s fixed) do not affect nodes not directly connected to $k$, we have $\frac{\partial \lambda_{i\alpha}(x_i)}{\partial b_k(x_k)} = 0$ for
$k \notin \alpha$. When $k \in \alpha$, these can in turn be obtained by solving, for each $\alpha$, a matrix inverse.
Differentiating (27) by $b_k(x_k)$, we obtain

$$\delta_{ik}\, \delta_{x_i x_k} = \sum_{j \in \alpha} \sum_{x_j \neq 0} C_{ij}^\alpha(x_i, x_j)\, \frac{\partial \lambda_{j\alpha}(x_j)}{\partial b_k(x_k)} \qquad (29)$$

$$C_{ij}^\alpha(x_i, x_j) = \begin{cases} b_{ij}^\alpha(x_i, x_j) - b_i(x_i)\, b_j(x_j) & \text{if } i \neq j \\ b_i(x_i)\,\delta_{x_i x_j} - b_i(x_i)\, b_j(x_j) & \text{if } i = j \end{cases} \qquad (30)$$

for each $i, k \in N_\alpha$ and $x_i, x_k \neq 0$. Flattening the indices in (29) (varying $i, x_i$ over rows
and $k, x_k$ over columns), the LHS becomes the identity matrix, while the RHS is a product
[Figure 1 appears here: three log-scale plots of the L1-error in the covariances against the edge strength $\sigma_{edge}$, for (a) neighboring nodes, (b) next-to-nearest neighboring nodes and (c) distant nodes, comparing C = 0, MF+LR, BP, BP+LR and conditioning.]

Figure 1: L1-error in covariances for MF+LR, BP, BP+LR and "conditioning". Dashed line is
the baseline (C = 0). The results are separately plotted for neighboring nodes (a), next-to-nearest
neighboring nodes (b) and the remaining nodes (c).
of two matrices. The first is a covariance matrix $C_\alpha$ whose $ij$-th block is $C_{ij}^\alpha(x_i, x_j)$;
the second matrix consists of all the desired derivatives $\frac{\partial \lambda_{j\alpha}(x_j)}{\partial b_k(x_k)}$. Hence the derivatives are given as elements of the inverse covariance matrix $C_\alpha^{-1}$. Finally, plugging the
values of $\frac{\partial \lambda_{j\alpha}(x_j)}{\partial b_k(x_k)}$ into (28) now gives $\frac{\partial \theta_i(x_i)}{\partial b_k(x_k)}$, and inverting that matrix will now give
us the desired approximate covariances over the whole graph. Interestingly, the method
only requires access to the beliefs at the local minimum, not to the potentials or Lagrange
multipliers.
6 Experiment

The accuracy of the estimated covariances $C_{ij}(x_i, x_j)$ in the LR approximation was studied on a 6 × 6 square grid with only nearest neighbors connected and 3 states per node. The
solid curves in figure 1 represent the error in the estimates for: 1) mean field + LR approximation [2, 9], 2) BP estimates for neighboring nodes with $b_{EDGE} = b_\alpha$ in equation (3), 3)
BP+LR and 4) "conditioning", where $b_{ij}(x_i, x_j) = b_{i|j}(x_i | x_j)\, b_j^{BP}(x_j)$ and $b_{i|j}(x_i | x_j)$ is
computed by running BP N × D times with $x_j$ clamped at a specific state (this has the same
computational complexity as BP+LR). C was computed as $C_{ij} = b_{ij} - b_i b_j$, with $\{b_i, b_j\}$
the marginals of $b_{ij}$, and symmetrizing the result. The error was computed as the absolute
difference between the estimated and the true values, averaged over pairs of nodes and their
possible states, and averaged over 25 random draws of the network. An instantiation of a
network was generated by randomly drawing the logarithm of the edge potentials from a
zero mean Gaussian with a standard deviation ranging between [0, 2]. The node potentials
were set to 1.
From these experiments we conclude that "conditioning" and BP+LR have similar accuracy
and significantly outperform MF+LR and BP, while "conditioning" performs slightly better
than BP+LR. The latter does however satisfy some desirable properties which are violated
by conditioning (see section 7 for further discussion).
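For concreteness, the covariance post-processing used in the experiment can be written as the short sketch below (ours); it assumes the two directed pairwise estimates $b_{ij}$ and $b_{ji}$ are available as normalized tables, and the symmetrization averages one estimate with the transpose of the other.

```python
import numpy as np

def cov_from_joint(b_ij):
    """C = b_ij - b_i b_j for one pairwise belief table (rows x_i, cols x_j)."""
    b_ij = b_ij / b_ij.sum()                  # ensure normalization
    return b_ij - np.outer(b_ij.sum(axis=1), b_ij.sum(axis=0))

def symmetrized_cov(b_ij, b_ji):
    """Average the (i,j) estimate with the transposed (j,i) estimate,
    as described in the experiment."""
    return 0.5 * (cov_from_joint(b_ij) + cov_from_joint(b_ji).T)
```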
7 Discussion

In this paper we propose to estimate covariances as follows: first observe that the log
partition function is the cumulant generating function, next define its conjugate dual, the
Gibbs free energy, and approximate it, finally transform back to obtain a local convex
approximation to the log partition function, from which the covariances can be estimated.
The computational complexity of the iterative linear response algorithm scales as O(N ×
E × D³) per iteration (N = #nodes, E = #edges, D = #states per node). The non-iterative algorithm scales slightly worse, O(N³ × D³), but is based on a matrix inverse for
which very efficient implementations exist. A question that remains open is whether we
can improve the efficiency of the iterative algorithm when we are only interested in the
joint distributions of neighboring nodes.
There are still a number of generalizations worth mentioning. Firstly, the same ideas can
be applied to the MF approximation [9] and the Kikuchi approximation (see also [5]). Secondly, the presented method easily generalizes to the computation of higher order cumulants. Thirdly, when applying the same techniques to Gaussian random fields, a propagation
algorithm results that computes the inverse of the weight matrix exactly [9]. In the case of
more general continuous random field models we are investigating whether linear response
algorithms can be applied to the fixed points of expectation propagation.
The most important distinguishing feature between the proposed LR algorithm and the
conditioning procedure described in section 6 is the fact that the covariance estimate is
automatically positive semi-definite. Indeed the idea to include global constraints such as
positive semi-definiteness in approximate inference algorithms was proposed in [7]. Other
differences include automatic consistency between joint pairwise marginals from LR and
node marginals from BP (not true for conditioning) and a convergence proof for the LR
algorithm (absent for conditioning, but not observed to be a problem experimentally). Finally, the non-iterative algorithm is applicable to all local minima in the Bethe-Gibbs free
energy, even those that correspond to unstable fixed points of BP.
Acknowledgements
We would like to thank Martin Wainwright for discussion. MW would like to thank Geoffrey Hinton
for support. YWT would like to thank Mike Jordan for support.
References

[1] T. Heskes. Stable fixed points of loopy belief propagation are minima of the Bethe free energy.
In Advances in Neural Information Processing Systems, volume 15, Vancouver, CA, 2003.
[2] H.J. Kappen and F.B. Rodriguez. Efficient learning in Boltzmann machines using linear response theory. Neural Computation, 10:1137-1156, 1998.
[3] F.R. Kschischang, B. Frey, and H.A. Loeliger. Factor graphs and the sum-product algorithm.
IEEE Transactions on Information Theory, 47(2):498-519, 2001.
[4] M. Opper and O. Winther. From naive mean field theory to the TAP equations. In Advanced
Mean Field Methods - Theory and Practice. MIT Press, 2001.
[5] K. Tanaka. Probabilistic inference by means of cluster variation method and linear response
theory. IEICE Transactions in Information and Systems, E86-D(7):1228-1242, 2003.
[6] Y.W. Teh and M. Welling. The unified propagation and scaling algorithm. In Advances in
Neural Information Processing Systems, 2001.
[7] M.J. Wainwright and M.I. Jordan. Semidefinite relaxations for approximate inference on graphs
with cycles. Technical report, Computer Science Division, University of California Berkeley,
2003. Rep. No. UCB/CSD-3-1226.
[8] M. Welling and Y.W. Teh. Approximate inference in Boltzmann machines. Artificial Intelligence, 143:19-50, 2003.
[9] M. Welling and Y.W. Teh. Linear response algorithms for approximate inference in graphical
models. Neural Computation, 16:197-221, 2004.
[10] J.S. Yedidia, W. Freeman, and Y. Weiss. Generalized belief propagation. In Advances in Neural
Information Processing Systems, volume 13, 2000.
[11] A.L. Yuille. CCCP algorithms to minimize the Bethe and Kikuchi free energies: Convergent
alternatives to belief propagation. Neural Computation, 14(7):1691-1722, 2002.
Algorithms for Better Representation and Faster Learning in Radial Basis Function Networks
Avijit Saha¹
James D. Keeler
Microelectronics and Computer Technology Corporation
3500 West Balcones Center Drive
Austin, TX 78759
ABSTRACT
In this paper we present upper bounds for the learning rates for
hybrid models that employ a combination of both self-organized
and supervised learning, using radial basis functions to build
receptive field representations in the hidden units. The learning
performance in such networks with nearest neighbor heuristic can
be improved upon by multiplying the individual receptive field
widths by a suitable overlap factor. We present results indicat!ng
optimal values for such overlap factors. We also present a new
algorithm for determining receptive field centers. This method
negotiates more hidden units in the regions of the input space as a
function of the output and is conducive to better learning when the
number of patterns (hidden units) is small.
1 INTRODUCTION

Functional approximation of experimental data originating from a continuous
dynamical process is an important problem. Data is usually available in the form of
a set S consisting of {x, y} pairs, where x is an input vector and y is the corresponding
output vector. In particular, we consider networks with a single layer of hidden
units where the jth output unit computes $y_j = \sum_\alpha f_\alpha R_\alpha(x_j, x_\alpha, \sigma_\alpha)$, where $y_j$ is the

¹ University of Texas at Austin, Dept. of ECE, Austin TX 78712
network output due to input $x_j$, $f_\alpha$ is the synaptic weight associated with the $\alpha$-th
hidden neuron and the jth output unit, and $R_\alpha(x_j, x_\alpha, \sigma_\alpha)$ is the Radial Basis Function
(RBF) response of the $\alpha$-th hidden neuron. This technique of using a superposition of
RBFs for the purposes of approximation has been considered before by [Medgassy
'61] and more recently by [Moody '88], [Casdagli '89] and [Poggio '89]. RBF
networks are particularly attractive since such networks are potentially 1000 times
faster than the ubiquitous backpropagation network for comparable error rates
[Moody '88].
The essence of the network model we consider is described in [Moody '88]. A typical
network that implements a receptive field response consists of a layer of linear input units, a layer of linear output units and an intermediate (hidden) layer of nonlinear response units. Weights are associated with only the links connecting the
hidden layer to the output layer. For the single output case the real valued functional
mapping $f: R^D \rightarrow R$ is characterized by the following equations:

$$o(x_i) = \sum_\alpha f_\alpha R_\alpha(x_i) \qquad (1)$$

$$O(x_i) = \sum_\alpha f_\alpha R_\alpha(x_i) \Big/ \sum_\alpha R_\alpha(x_i) \qquad (2)$$

$$R_\alpha(x_i) = e^{-\left( \| x_\alpha - x_i \| / \sigma_\alpha \right)^2} \qquad (3)$$
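A minimal NumPy sketch of equations (1)-(3) (ours, not from the paper) might read:

```python
import numpy as np

def rbf_responses(x, centers, widths):
    """R_alpha(x) = exp(-(||x_alpha - x|| / sigma_alpha)^2), eq. (3).

    x: (D,) input; centers: (K, D) receptive field centers; widths: (K,)."""
    d = np.linalg.norm(centers - x, axis=1)
    return np.exp(-(d / widths) ** 2)

def rbf_output(x, centers, widths, f, normalized=True):
    """Network output: eq. (1) if normalized=False, eq. (2) otherwise."""
    R = rbf_responses(x, centers, widths)
    return f @ R / R.sum() if normalized else f @ R
```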
where $x_\alpha$ is a real valued vector associated with the $\alpha$-th receptive field (hidden)
unit and is of the same dimension as the input. The output can be normalized by the
sum of the responses of the hidden units due to any input, and the expression for the
output using the normalized response function is presented in Equation 2. The $x_\alpha$ values are the centers of the receptive field units and the $\sigma_\alpha$ are their widths. Training in
such networks can be performed in a two stage hybrid combination of independent
processes. In the first stage, a clustering of the input data is performed. The objective of this clustering algorithm is to establish appropriate $x_\alpha$ values for each of
the receptive field units such that the cluster points represent the input distribution
in the best possible manner. We use competitive learning with the nearest neighbor
heuristic as our clustering algorithm (Equation 5). The degree or quality of clustering achieved is quantified by the sum-square measure in Equation 4, which is the objective function we are trying to minimize in the clustering phase.
$$TSS_{KMEANS} = \sum_i \left( x_{\alpha\text{-closest}} - x_i \right)^2 \qquad (4)$$

$$x_{\alpha\text{-closest}} = x_{\alpha\text{-closest}} + \lambda \left( x_i - x_{\alpha\text{-closest}} \right) \qquad (5)$$
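The clustering stage could be sketched as follows (ours; the learning rate λ and the epoch count are illustrative assumptions, not values from the paper):

```python
import numpy as np

def competitive_clustering(X, K, lam=0.05, n_epochs=20, rng=None):
    """Nearest-neighbor competitive learning, eq. (5); returns centers and
    the sum-square clustering measure, eq. (4)."""
    rng = np.random.default_rng() if rng is None else rng
    centers = X[rng.choice(len(X), K, replace=False)].copy()
    for _ in range(n_epochs):
        for x in X[rng.permutation(len(X))]:
            a = np.argmin(np.linalg.norm(centers - x, axis=1))  # closest center
            centers[a] += lam * (x - centers[a])                # eq. (5)
    tss = sum(np.min(np.linalg.norm(centers - x, axis=1)) ** 2 for x in X)  # eq. (4)
    return centers, tss
```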
After suitable cluster points ($x_\alpha$ values) are determined, the next step is to determine
the $\sigma_\alpha$, or widths, for each of the receptive fields. Once again we use the nearest
neighbor heuristic, where $\sigma_\alpha$ (the width of the $\alpha$-th neuron) is set equal to the Euclidean distance between $x_\alpha$ and its nearest neighbor. Once the receptive field centers
$x_\alpha$ and the widths $\sigma_\alpha$ are found, the receptive field responses can be calculated
for any input using Equation 3. Finally, the $f_\alpha$ values or weights on links connecting the hidden layer units to the output are determined using the well-known gradient descent learning rule. Pseudo inverse methods are usually impractical in these
problems. The rules for the objective function and weight update are given by equations 6 and 7.
$$E = \sum_i \left( O(x_i) - t_i \right)^2 \qquad (6)$$

$$f_\alpha = f_\alpha + \eta \left( t_i - O(x_i) \right) R_\alpha(x_i) \qquad (7)$$

where $i$ indexes the input patterns, $x_i$ is the input vector and $t_i$ is the target
output for the $i$th pattern.
2 LEARNING RATES

In this section we present an adaptive formulation for the network learning rate $\eta$
(Equation 7). Learning rates ($\eta$) in such networks that use gradient descent are
usually chosen in an ad hoc fashion. A conservative value for $\eta$ is usually sufficient.
However, there are two problems with such an approach. If the learning rate is not
small enough the TSS (Total Sum of Squares) measure can diverge to high values
instead of decreasing. A very conservative estimate on the other hand will work
with almost all sets of data but will unnecessarily slow down the learning process.
The choice of learning rate is crucial, since for real-time or hardware
implementations of such systems there is very little scope for interactive
monitoring.
This problem is addressed by Theorem 1. We present the proof for this theorem
for the special case of a single output. In the gradient descent algorithm, weight
updates can be performed after each presentation of the entire set of patterns (per
epoch basis) or after each pattern (per pattern basis); both cases are considered.
Equation p.3 gives the upper bound for $\eta$ when updates are done on a per epoch
basis. Only positive values of $\eta$ should be considered. Equations p.4 and p.5 give
the bounds for $\eta$ when updates are done on a per pattern basis without and with
normalized response function respectively. We present some simulation results for
the logistic map ( $x(t+1) = r\, x(t)\, [1 - x(t)]$ ) data in Figure 1. The plots are
shown only for the normalized response case, and the learning rate was set to $\eta =
\mu \left( (\sum_\alpha R_\alpha)^2 / \sum_\alpha (R_\alpha)^2 \right)$. We used a fixed number of 20 hidden units, and $r$ was set
to 4.0. The network TSS did not diverge until $\mu$ was set arbitrarily close to 1.
This is shown in Figure 1, which indicates that, with the normalized response
function, if the sum of squares of the hidden unit responses is nearly equal to
the square of the sum of the responses, then a high effective learning rate ($\eta$) can
be used.

Theorem 1: The TSS measure of a network will be decreasing in time, provided the
learning rate $\eta$ does not exceed $\sum_i e_i \sum_\alpha E_\alpha R_{\alpha i} \big/ \sum_i (\sum_\alpha E_\alpha R_{\alpha i})^2$ if the
network is trained on a per epoch basis, and $1 \big/ \sum_\alpha (R_{\alpha i})^2$ when updates are done
on a per pattern basis. With normalized response function, the upper bound for the
learning rate is $(\sum_\alpha R_{\alpha i})^2 \big/ \sum_\alpha (R_{\alpha i})^2$. Note the similar result of [Widrow 1985].
Proof:

$$TSS(t) = \sum_i \Big( t_i - \sum_\alpha f_\alpha R_{\alpha i} \Big)^2 \qquad (p.1)$$

where $N$ is the number of exemplars, $K$ is the number of receptive fields, and $t_i$ is the $i$th target
output.

$$TSS(t+1) = \sum_i \Big( t_i - \sum_\alpha (f_\alpha + \Delta f_\alpha) R_{\alpha i} \Big)^2 \qquad (p.2)$$

For stability, we impose the condition $TSS(t) - TSS(t+1) \ge 0$. From
Eqns (p.1) and (p.2) above and substituting $2 \eta E_\alpha$ for $\Delta f_\alpha$, we have:

$$TSS(t) - TSS(t+1) = \sum_i e_i^2 - \sum_i \Big( t_i - \sum_\alpha f_\alpha R_{\alpha i} - \sum_\alpha 2 \eta E_\alpha R_{\alpha i} \Big)^2$$

Expanding the RHS of the above expression and substituting $e_i$ appropriately:

$$TSS(t) - TSS(t+1) = 4 \eta \sum_i e_i \sum_\alpha E_\alpha R_{\alpha i} - 4 \eta^2 \sum_i \Big( \sum_\alpha E_\alpha R_{\alpha i} \Big)^2$$

From the above inequality it follows that for stability in per epoch basis
training, the upper bound for the learning rate $\eta$ is given by:

$$\eta \le \sum_i e_i \sum_\alpha E_\alpha R_{\alpha i} \Big/ \sum_i \Big( \sum_\alpha E_\alpha R_{\alpha i} \Big)^2 \qquad (p.3)$$

If updates are done on a per pattern basis, then $N = 1$ and we drop the summation
over $N$ and the index $i$ and we obtain the following bound:

$$\eta \le 1 \Big/ \sum_\alpha (R_{\alpha i})^2 \qquad (p.4)$$

With normalized response function the upper bound for the learning rate is:

$$\eta \le \Big( \sum_\alpha R_{\alpha i} \Big)^2 \Big/ \sum_\alpha (R_{\alpha i})^2 \qquad (p.5)$$

Q.E.D.
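The per-pattern bounds (p.4) and (p.5) are directly computable from the hidden unit responses, as the following sketch (ours) shows:

```python
import numpy as np

def eta_bound(R, normalized=True):
    """Per-pattern upper bound on eta from Theorem 1.

    R: array of responses R_alpha(x_i) of all hidden units to the current pattern."""
    if normalized:
        return R.sum() ** 2 / np.sum(R ** 2)   # bound (p.5)
    return 1.0 / np.sum(R ** 2)                # bound (p.4)

# e.g. eta = 0.5 * eta_bound(R), analogous to the fraction mu in Figure 1
```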
[Figure 1 appears here: a plot of the normalized error against the fraction μ of the maximum allowable learning rate, with μ ranging from 0.1 to 2.0.]

Figure 1: Normalized error vs. fraction (μ) of maximum allowable learning rate
3 EFFECT OF WIDTH (σ) ON APPROXIMATION ERROR

In the nearest-neighbor heuristic the $\sigma_\alpha$ values of the hidden units are set equal to the
Euclidean distance between their centers and the centers of their nearest neighbors. This
method is preferred mainly because it is computationally inexpensive. However, the
performance can be improved by increasing the overlap between nearby hidden unit
responses. This is done by multiplying the widths obtained with the nearest
neighbor heuristic by an overlap factor m, as shown in Equation 3.1.
[Figure 2 appears here: normalized error against the overlap factor m for logistic map data (r = 4.0), with curves for 10 and 20 hidden units and m ranging up to 5.0.]

Figure 2: Normalized errors vs. overlap factor for the logistic map.
$$\sigma_\alpha = m \cdot \| x_\alpha - x_{\alpha\text{-nearest}} \| \qquad (3.1)$$

where $\| \cdot \|$ is the Euclidean distance norm.
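Equation 3.1 amounts to a pairwise-distance computation; a short sketch (ours) is:

```python
import numpy as np

def widths_from_overlap(centers, m=1.0):
    """sigma_alpha = m * ||x_alpha - x_{alpha-nearest}||, eq. (3.1)."""
    D = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # exclude the self-distance
    return m * D.min(axis=1)
```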
In Figures 2 and 3 we show the network performance (normalized error) as a
function of m. In the logistic map case a value of r = 4.0 was used, predicting 1
timestep into the future; the training set size was 10 times the number of hidden units
and the test set size was 70 patterns. The results for the Mackey-Glass data are with
parameter values a = 0.1, b = 0.2, A = 6, D = 4. The number of training patterns was
10 times the number of hidden units and the normalized error was evaluated based
on the presentation of 900 unseen patterns. For the Mackey-Glass data the optimal
values were rather well-defined; whereas for the logistic map case we found that the
optimal values were spread out over a range.
[Figure 3 appears here: normalized error against the overlap factor m for Mackey-Glass data, with curves for 50, 100, 250, 500 and 900 hidden units and m ranging from 0.0 to 2.5.]

Figure 3: Normalized errors vs. overlap factor for varying numbers of hidden units, Mackey-Glass data.
4 EXTENDED METRIC CLUSTERING

In this method clustering is done in higher dimensions. In our experiments we set
the initial K hidden unit center values based on the first K exemplars. The receptive
fields are assigned vector values of dimensions determined by the size of the
input and the output vectors. Each center value was set equal to the vector obtained
by concatenating the input and the corresponding output. During the clustering phase
the output $y_i$ is concatenated with the input $x_i$ and presented to the hidden layer.
This method finds cluster points in the (I+O)-dimensional space of the input-output
map as defined by Equations 4.1, 4.2 and 4.3.
$$X_\alpha = \langle x_\alpha, y_\alpha \rangle \qquad (4.1)$$

$$X_i = \langle x_i, y_i \rangle \qquad (4.2)$$

$$X_{\alpha\text{-new}} = X_{\alpha\text{-old}} + \lambda \left( X_i - X_\alpha \right) \qquad (4.3)$$
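A sketch of the joint-space clustering (ours; λ and the epoch count are illustrative assumptions) follows:

```python
import numpy as np

def extended_metric_centers(X, Y, K, lam=0.05, n_epochs=20, rng=None):
    """Cluster in the joint (input, output) space, eqs. (4.1)-(4.3).

    X: (N, Di) inputs, Y: (N, Do) outputs; returns input-space centers only,
    since the output field is discarded after clustering."""
    rng = np.random.default_rng() if rng is None else rng
    Z = np.hstack([X, Y])                         # X_i = <x_i, y_i>, eq. (4.2)
    centers = Z[rng.choice(len(Z), K, replace=False)].copy()
    for _ in range(n_epochs):
        for z in Z[rng.permutation(len(Z))]:
            a = np.argmin(np.linalg.norm(centers - z, axis=1))
            centers[a] += lam * (z - centers[a])  # eq. (4.3)
    return centers[:, :X.shape[1]]                # disable the output field
```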
Once the cluster points or the centers are determined we disable the output field,
and only the input field is used for computing the widths and receptive field
responses. In Figure 4 we present a comparison of the performances of such a
network with and without the enhanced metric clustering. Variable size networks of
only Gaussian RBF units were used. The plots presented are for the Mackey-Glass
data with the same parameter values used in [Farmer 88]. This method works
significantly better when the number of hidden units is low.
[Figure 4 appears here: normalized error against the number of hidden units (up to 800) for Mackey-Glass data (a = 0.2, b = 0.1, D = 4, A = 6), comparing the nearest neighbor heuristic with enhanced metric clustering.]

Figure 4: Performance of the enhanced metric clustering algorithm.
5 CONCLUSIONS

One of the emerging application areas for neural network models is real time signal
processing. For such applications and hardware implementations, adaptive methods
for determining network parameters are essential. Our derivations for learning rates
are important in such situations. We have presented results indicating that in RBF
networks, performance can be improved by tuning the receptive field widths by some
suitable overlap factor. We have presented an extended metric algorithm that
negotiates hidden units based on added output information. We have observed more
than 20% improvement in the normalized error measure when the number of training
patterns, and therefore the number of hidden units, used is reasonably small.
References

M. Casdagli. (1989) "Nonlinear Prediction of Chaotic Time Series". Physica 35D,
335-356.

J. D. Farmer and J. J. Sidorowich. (1988) "Exploiting Chaos to Predict the Future
and Reduce Noise". Tech. Report No. LA-UR-88-901, Los Alamos National Laboratory.

John Moody and Christian Darken. (1989) "Learning with Localised Receptive
Fields". In: Eds: D. Touretzky, Hinton and Sejnowski: Proceedings of the 1988 Connectionist Models Summer School. Morgan Kaufmann Publishing, San Mateo, CA.

P. Medgassy. (1961) Decomposition of Superposition of Distribution Functions.
Publishing house of the Hungarian Academy of Sciences, Budapest, 1961.

T. Poggio and F. Girosi. (1989) "A Theory of Networks for Approximation and
Learning". A.I. Memo No. 1140, Massachusetts Institute of Technology.

B. Widrow and S. Stearns (1985). Adaptive Signal Processing. Prentice-Hall Inc.,
Englewood Cliffs, NJ, pp 49, 102.
Gaussian Processes in Reinforcement Learning
Carl Edward Rasmussen and Malte Kuss
Max Planck Institute for Biological Cybernetics
Spemannstraße 38, 72076 Tübingen, Germany
{carl, malte.kuss}@tuebingen.mpg.de
Abstract
We exploit some useful properties of Gaussian process (GP) regression
models for reinforcement learning in continuous state spaces and discrete time. We demonstrate how the GP model allows evaluation of the
value function in closed form. The resulting policy iteration algorithm is
demonstrated on a simple problem with a two dimensional state space.
Further, we speculate that the intrinsic ability of GP models to characterise distributions of functions would allow the method to capture entire
distributions over future values instead of merely their expectation, which
has traditionally been the focus of much of reinforcement learning.
1 Introduction
Model-based control of discrete-time non-linear dynamical systems is typically exacerbated by the existence of multiple relevant time scales: a short time scale (the sampling
time) on which the controller makes decisions and where the dynamics are simple enough
to be conveniently captured by a model learning from observations, and a longer time scale
which captures the long-term consequences of control actions. For most non-trivial (nonminimum phase) control tasks a policy relying solely on short time rewards will fail.
In reinforcement learning this problem is explicitly recognized by the distinction between
short-term (reward) and long-term (value) desiderata. The consistency between short- and
long-term goals is expressed by the Bellman equation; for discrete states $s$ and actions $a$:

$$V^\pi(s) = \sum_a \pi(s, a) \sum_{s'} P_{ss'}^a \left[ R_{ss'}^a + \gamma V^\pi(s') \right] \qquad (1)$$

where $V^\pi(s)$ is the value (the expected long term reward) of state $s$ while following policy $\pi$, $\pi(s, a)$
is the probability of taking action $a$ in state $s$, $P_{ss'}^a$ is the transition
probability of going to state $s'$ when applying action $a$ given that we are in state $s$, $R_{ss'}^a$
denotes the immediate expected reward and $0 < \gamma < 1$ is the discount factor (see Sutton and
Barto (1998) for a thorough review). The Bellman equations are either solved iteratively by
policy evaluation, or alternatively solved directly (the equations are linear) and commonly
interleaved with policy improvement steps (policy iteration).
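For a discrete MDP this direct solution is a small linear-algebra exercise; the sketch below (ours, using standard Sutton-Barto array conventions, not code from the paper) solves the linear Bellman system exactly.

```python
import numpy as np

def policy_evaluation(P, R, pi, gamma):
    """Solve the Bellman equations (1) directly for a discrete MDP.

    P[a, s, t]: transition probabilities, R[a, s, t]: expected rewards,
    pi[s, a]: action probabilities under the policy; returns the value vector V
    satisfying V = r_pi + gamma * P_pi V."""
    nA, nS, _ = P.shape
    P_pi = np.einsum('sa,ast->st', pi, P)        # state transitions under pi
    r_pi = np.einsum('sa,ast,ast->s', pi, P, R)  # expected one-step reward
    return np.linalg.solve(np.eye(nS) - gamma * P_pi, r_pi)
```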
While the concept of a value function is ubiquitous in reinforcement learning, this is not
the case in the control community. Some non-linear model-based control is restricted to the
easier minimum-phase systems. Alternatively, longer-term predictions can be achieved by
concatenating short-term predictions, an approach which is made difficult by the fact that
uncertainty in predictions typically grows (precluding approaches based on the certainty
equivalence principle) as the time horizon lengthens. See Quiñonero-Candela et al. (2003)
for a full probabilistic approach based on Gaussian processes; however, implementing a
controller based on this approach requires numerically solving multivariate optimisation
problems for every control action. In contrast, having access to a value function makes
computation of control actions much easier.
Much previous work has involved the use of function approximation techniques to represent
the value function. In this paper, we exploit a number of useful properties of Gaussian
process models for this purpose. This approach can be naturally applied in discrete time,
continuous state space systems. This avoids the tedious discretisation of state spaces often
required by other methods, eg. Moore and Atkeson (1995). In Dietterich and Wang (2002)
kernel based methods (support vector regression) were also applied to learning of the value
function, but in discrete state spaces.
In the current paper we use Gaussian process (GP) models for two distinct purposes: first
to model the dynamics of the system (actually, we use one GP per dimension of the state
space) which we will refer to as the dynamics GP and secondly the value GP for representing the value function. When computing the values, we explicitly take the uncertainties
from the dynamics GP into account, and using the linearity of the GP, we are able to solve
directly for the value function, avoiding slow policy evaluation iterations.
Experiments on a simple problem illustrates the viability of the method. For these experiments we use a greedy policy wrt. the value function. However, since our representation
of the value function is stochastic, we could represent uncertainty about values enabling a
principled attack of the exploration vs. exploitation tradeoff, such as in Bayesian Q-learning
as proposed by Dearden et al. (1998). This potential is outlined in the discussion section.
2 Gaussian Processes and Value Functions
In a continuous state space we straight-forwardly generalize the Bellman equation (1) by
substituting sums with integrals; further, we assume for simplicity of exposition that the
policy is deterministic (see Section 4 for a further discussion):

$$V^\pi(s) = \int \left[ R(s') + \gamma V^\pi(s') \right] p\big(s' \,|\, s, \pi(s)\big)\, ds' \qquad (2)$$

$$\phantom{V^\pi(s)} = \int R(s')\, p\big(s' \,|\, s, \pi(s)\big)\, ds' + \gamma \int V^\pi(s')\, p\big(s' \,|\, s, \pi(s)\big)\, ds' \qquad (3)$$

This involves two integrals over the distribution of consecutive states $s'$ visited when following the policy $\pi$. The transition probabilities $p(s' | s, \pi(s))$ may include two sources of stochasticity: uncertainty in the model of the dynamics and stochasticity in the dynamics itself.
2.1 Gaussian Process Regression Models
In GP models we put a prior directly on functions and condition on observations to
make predictions (see Williams and Rasmussen (1996) for details). The noisy targets
$y$ are assumed jointly Gaussian with covariance function $k$:

$$y \sim \mathcal{N}(0, K), \qquad \text{where } K_{pq} = k(x_p, x_q) \qquad (4)$$

Throughout the remainder of this paper we use a Gaussian covariance function:

$$k(x_p, x_q) = v^2 \exp\!\Big( -\tfrac{1}{2} (x_p - x_q)^\top \Lambda^{-1} (x_p - x_q) \Big) + \delta_{pq}\, \sigma_n^2 \qquad (5)$$

where the positive elements of the diagonal matrix $\Lambda$, the signal variance $v^2$ and the noise variance $\sigma_n^2$ are hyperparameters collected in $\theta$. The hyperparameters are fit by maximising the marginal likelihood (see again
Williams and Rasmussen (1996)) using conjugate gradients.
2 !
The predictive distribution for a novel test input
)
6.8
A , '
is Gaussian:
1 !
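A compact sketch of prediction with the covariance function (5) (ours; here `Lambda` is passed as the vector of diagonal lengthscale entries, and the noise variance enters only the training covariance):

```python
import numpy as np

def gauss_kernel(A, B, Lambda, v2):
    """k(x_p, x_q) = v^2 exp(-0.5 (x_p - x_q)^T Lambda^{-1} (x_p - x_q)),
    the noise-free part of eq. (5); A: (m, D), B: (n, D)."""
    d = A[:, None, :] - B[None, :, :]
    return v2 * np.exp(-0.5 * np.einsum('pqd,d,pqd->pq', d, 1.0 / Lambda, d))

def gp_predict(Xtr, y, Xte, Lambda, v2, sn2):
    """Predictive mean and variance of eq. (6)."""
    K = gauss_kernel(Xtr, Xtr, Lambda, v2) + sn2 * np.eye(len(Xtr))
    ks = gauss_kernel(Xte, Xtr, Lambda, v2)
    mu = ks @ np.linalg.solve(K, y)
    var = v2 - np.einsum('pq,qp->p', ks, np.linalg.solve(K, ks.T))
    return mu, var
```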
2.2 Model Identification of System Dynamics
Given a set of $D$-dimensional observations of the form $(s, a, s')$, we use a separate Gaussian
process model for predicting each coordinate of the system dynamics. The inputs to each
model are the state and action pair $(s, a)$; the output is a (Gaussian) distribution
over the consecutive state variable, using eq. (6). Combining the
predictive models we obtain a multivariate Gaussian distribution over the consecutive state:
the transition probabilities $p(s' \,|\, s, a)$.
2.3 Policy Evaluation

We now turn towards the problem of evaluating $V^\pi(s)$ for a given policy $\pi$ over the continuous state space. In policy evaluation the Bellman equations are used as update rules. In
order to apply this approach in the continuous case, we have to solve the two integrals in
eq. (3).
For simple (eg. polynomial or Gaussian) reward functions $R(s')$ we can directly compute¹
the first Gaussian integral of eq. (3). Thus, the expected immediate reward, from state $s$,
following $\pi$ is:

$$\mathcal{R}^\pi(s) = \int R(s')\, \mathcal{N}\big( s';\, \mu_{s'}, \Sigma_{s'} \big)\, ds' \qquad (7)$$

in which the mean and covariance for the consecutive state are coordinate-wise given by
eq. (6) evaluated on the dynamics GP.
The second integral of eq. (3) involves an expectation over the value function, which is
modeled by the value GP as a function of the states. We need access to the value function
at every point in the continuous state space, but we only explicitly represent values at a
finite number of support points, $\{s_1, \ldots, s_m\}$, and let the GP generalise to the entire
space. Here we use the mean of the GP to represent the value² (see section 4 for an
elaboration). Thus, we need to average the values over the distribution predicted for $s'$. For
a Gaussian covariance function³ this can be done in closed form as shown by Girard et al.
(2002). In detail, the Bellman equation for the value at support point $s_i$ is:

$$V(s_i) = \mathcal{R}^\pi(s_i) + \gamma \int V(s')\, p(s')\, ds' = \mathcal{R}^\pi(s_i) + \gamma\, W_i K^{-1} \mathbf{v} \qquad (8)$$

where the elements of the row vector $W_i$ are the expectations of the covariance function under the predicted state distribution $p(s') = \mathcal{N}(\mu_i, \Sigma_i)$,

$$W_{ij} = v^2 \left| \Lambda^{-1} \Sigma_i + I \right|^{-1/2} \exp\!\Big( -\tfrac{1}{2} (\mu_i - s_j)^\top (\Lambda + \Sigma_i)^{-1} (\mu_i - s_j) \Big) \qquad (9)$$
where $K$ denotes the covariance matrix of the value GP, $W_i$ is the $i$-th row of the matrix $W$,
and boldface $\mathbf{v}$ is the vector of values at the support points: $\mathbf{v} = \big( V(s_1), \ldots, V(s_m) \big)^\top$. Note that this equation implies a consistency between the value
at the support points and the values at all other points. Equation (8) could be used for
iterative policy evaluation. Notice however, that eq. (8) is a set of $m$ linear simultaneous
equations in $\mathbf{v}$, which we can solve⁴ explicitly:
$$\mathbf{v} = \left( I - \gamma\, W K^{-1} \right)^{-1} \mathbf{R} \qquad (10)$$

where $\mathbf{R}$ is the vector of expected immediate rewards $\mathcal{R}^\pi(s_i)$ at the support points.

¹ For more complex reward functions we may approximate it using eg. a Taylor expansion.
² Thus, here we are using the GP for noise free interpolation of the value function, and consequently set its noise parameter to a small positive constant (to avoid numerical problems).
³ The covariance functions allowing analytical treatment in this way include Gaussian and polynomial, and mixtures of these.
⁴ We conjecture that the matrix $(I - \gamma W K^{-1})$ is non-singular under mild conditions, but have not yet devised a formal proof.

The computational cost of solving this system is $\mathcal{O}(m^3)$, which is no more expensive than
doing iterative policy evaluation, and equal to the cost of value GP prediction.
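Taking the reconstructions of eqs. (9) and (10) above at face value, the policy evaluation step might be sketched as follows (ours; `expected_kernel_row` implements the Gaussian-kernel expectation of Girard et al. (2002), with `Lambda` again a vector of diagonal entries):

```python
import numpy as np

def expected_kernel_row(mu, Sigma, S, Lambda, v2):
    """W_i with entries W_ij = E[k(s', s_j)] for s' ~ N(mu, Sigma), eq. (9).

    mu: (D,) predicted mean; Sigma: (D, D) predicted covariance;
    S: (m, D) support points."""
    L = np.diag(Lambda)
    c = v2 / np.sqrt(np.linalg.det(np.diag(1.0 / Lambda) @ Sigma + np.eye(len(mu))))
    d = S - mu                                 # (m, D) differences to support points
    A = np.linalg.inv(L + Sigma)
    return c * np.exp(-0.5 * np.einsum('jd,de,je->j', d, A, d))

def solve_values(W, K, R, gamma):
    """Closed-form policy evaluation, eq. (10): v = (I - gamma W K^{-1})^{-1} R."""
    m = len(R)
    return np.linalg.solve(np.eye(m) - gamma * W @ np.linalg.inv(K), R)
```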
2.4 Policy Improvement

Above we demonstrated how to compute the value function for a given policy $\pi$. Now
given a value function we can act greedily, thereby defining an implicit policy:

$$\pi(s) = \arg\max_a \int \left[ R(s') + \gamma V(s') \right] p(s' \,|\, s, a)\, ds' \qquad (11)$$

giving rise to $m$ one-dimensional optimisation problems (when the possible actions are
scalar). As above we can solve the relevant integrals and in addition compute derivatives
wrt. the action. Note also that application-specific constraints can often be reformulated as
2.5 The Policy Iteration Algorithm
We now combine policy evaluation and policy improvement into policy iteration in which
both steps alternate until a stable configuration is reached5. Thus given observations of
system dynamics and a reward function we can compute a continuous value function and
thereby an implicitly defined policy.
Algorithm 1 Policy iteration, batch version

1. Given: observations of system dynamics of the form $(s, a, s')$ for a fixed time
interval $\Delta t$, discount factor $\gamma$ and reward function $R$.

2. Model Identification: Model the system dynamics by Gaussian processes for each
state coordinate and combine them to obtain a model $p(s' \,|\, s, a)$.

3. Initialise Value Function: Choose a set $\{s_1, \ldots, s_m\}$ of $m$ support points and
initialize $\mathbf{v}$. Fit Gaussian process hyperparameters for representing $V$
using conjugate gradient optimisation of the marginal likelihood and set $\sigma_n^2$ to a small
positive constant.

4. Policy Iteration:
repeat
    for all $s_i$ do
        Find action $a_i$ by solving equation (11) subject to problem specific constraints.
        Compute $p(s_i' \,|\, s_i, a_i) = \mathcal{N}(\mu_i, \Sigma_i)$ using the dynamics Gaussian processes.
        Solve equation (7) in order to obtain $\mathcal{R}^\pi(s_i)$.
        Compute $W_i$, the $i$-th row of $W$, as in equation (9).
    end for
    Solve $\mathbf{v} = (I - \gamma W K^{-1})^{-1} \mathbf{R}$.
    Update Gaussian process hyperparameters for representing $V$ to fit the new $\mathbf{v}$.
until stabilisation of $\mathbf{v}$
The selection of the support points remains to be determined. When using the algorithm
in an online setting, support points could naturally be chosen as the states visited, possibly
selecting the ones which conveyed most new information about the system. In the experimental section, for simplicity of exposition we consider only the batch case, and use simply
a regular grid of support points.
We have assumed for simplicity that the reward function is deterministic and known, but it
would not be too difficult to also use a (GP) model for the rewards; any model that allows
⁵ Assuming convergence, which we have not proven.
[Figure 1 appears here: panel (a) shows the mountain car on the hilly landscape with the hatched target region; panel (b) shows the position of the car over time under the learned policy.]

Figure 1: Figure (a) illustrates the mountain car problem. The car is initially standing
motionless at $x_0 = -0.5$ and the goal is to bring it up and hold it in the region $0.5 \le x \le 0.7$
such that $|\dot{x}| \le 0.1$. The hatched area marks the target region and below the
approximation by a Gaussian is shown (both projected onto the $x$ axis). Figure (b) shows
the position $x$ of the car when controlled according to (11) using the approximated value
function after 6 policy improvements shown in Figure 3. The car reaches the target region
in about five time steps but does not end up exactly at $x = 0.6$ due to uncertainty in the
dynamics model. The circles mark the $\Delta t$ second time steps.
generalisation to stochastic policies would not be difficult.
3 Illustrative Example
For reasons of presentability of the value function, we below consider the well-known
mountain car problem ?park on the hill?, as described by Moore and Atkeson (1995) where
the state-space is only two-dimensional. The setting depicted in Figure 1(a) consists of a
frictionless, point-like, unit mass car on a hilly landscape described by
"
2
for + *
,
(12)
*
for
8
The state of the system is described by the position of the car and its speed
1
which are constrained to - and 1 ;
+; respectively. As action a horizontal
1
force in the range
can be applied in order to bring the car up into the
target
* and 1 * '- * '- .
region which is a rectangle in state space such that *
Note that the admissible range of forces is not sufficient to drive up the car greedily from
1 * * such that a strategy has to be found which utilises the
the initial state
!
landscape in order to accelerate up the slope, which gives the problem its non-minimum
phase character.
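To make the setting reproducible, a simulation step for the landscape (12) might look like the sketch below (ours). The paper only states that an ODE solver was used; the equation of motion here, $\ddot{x} = (F - g H'(x)) / (1 + H'(x)^2)$, is a common simplification for a point mass on a curve and is an assumption on our part, not taken from the paper.

```python
import numpy as np
from scipy.integrate import solve_ivp

def hill(x):
    """Landscape H(x) of eq. (12)."""
    return np.where(x < 0, x**2 + x, x / np.sqrt(1 + 5 * x**2))

def dhill(x):
    """Slope H'(x), needed for the equations of motion."""
    return np.where(x < 0, 2 * x + 1, (1 + 5 * x**2) ** -1.5)

def step(x, dx, F, dt=0.3, g=9.81):
    """Simulate one time step of length dt for a unit-mass car under force F.

    The dynamics below are an assumed simplification (curvature term dropped);
    the paper itself does not spell out the simulator equations."""
    def rhs(t, s):
        p, v = s
        hp = dhill(p)
        return [v, (F - g * hp) / (1 + hp**2)]
    sol = solve_ivp(rhs, (0.0, dt), [x, dx], rtol=1e-8)
    return sol.y[0, -1], sol.y[1, -1]
```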
For system identification we draw samples $(x, \dot{x}, F)$ uniformly from
their respective admissible regions and simulate time steps of $\Delta t$ seconds⁶ forward
in time using an ODE solver in order to get the consecutive states. We then use two
Gaussian processes to build a model to predict the system behavior from these examples
for the two state variables independently using covariance functions of type eq. (5).
⁶ Note that $\Delta t = 0.3$ seconds seems to be an order of magnitude slower than the time scale
usually considered in the literature. Our algorithm works equally well for shorter time steps ($\gamma$
should be increased); for even longer time steps, modeling of the dynamics gets more complicated,
and eventually for large enough $\Delta t$ control is no longer possible.
[Figure 2 appears here: three surface plots of the estimated value function V over the state space (x, dx).]

Figure 2: Figures (a-c) show the estimated value function for the mountain car example after initialisation (a), after the first iteration (b) and a nearly stabilised value function
after 3 iterations (c). See also Figure 3 for the final value function and the corresponding
state transition diagram.
Based on the random examples, the relations can already be approximated to within low root mean
squared errors (estimated on test samples and considering the mean of the predicted
distribution) for predicting $x$ and $\dot{x}$.
Having a model of the system dynamics, the other necessary element to provide to the
proposed algorithm is a reward function. In the formulation by Moore and Atkeson (1995)
the reward is equal to 1 if the car is in the target region and 0 elsewhere. For convenience we
approximate this cube by a Gaussian centered in the target region, with maximum
reward at $(x, \dot{x}) = (0.6, 0)$ as indicated in Figure 1(a). We now can solve the update equation (10) and also
evaluate its gradient with respect to the action. This enables us to efficiently solve the optimization
problem eq. (11) subject to the constraints on $x$, $\dot{x}$ and $F$ described above. States outside
the feasible region are assigned zero value and reward.
As support points for the value function we simply put a regular grid onto the
state-space and initialise the value function with the immediate rewards for these states,
Figure 2(a). The standard deviation of the noise of the value GP representing $V$ is
set to a small positive constant, and a fixed discount factor $\gamma$ is used. Following the policy iteration
algorithm we estimate the value of all support points following the implicit policy (11)
wrt. the initial value function, Figure 2(a). We then evaluate this policy and obtain an
updated value function shown in Figure 2(b) where all points which can expect to reach
the reward region in one time step gain value. If we iterate this procedure two times we
obtain a value function as shown in Figure 2(c) in which the state space is already well
organised. After five policy iterations the value function and therefore the implicit policy is
stable, Figure 3(a). In Figure 3(b) a dynamics GP based state-transition diagram is shown,
in which each support point is connected to its predicted (mean) consecutive state
when following the implicit policy. For some of the support points the model correctly
predicts that the car will leave the feasible region, no matter what action is applied, which
corresponds to areas with zero value in Figure 3(a).
#
* *-
*
&
* *
%
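The following caricature of the loop just described reuses the sketches above: a grid of support points, values initialised to the immediate rewards, and repeated backups through the learned dynamics. Nearest-neighbour lookup stands in for the value GP, and the grid size, discount factor and discretised action set are all assumptions (the paper solves the continuous problem of eqs. (10)-(11)).

    xs  = np.linspace(-1, 1, 25)                  # grid resolution is assumed
    dxs = np.linspace(-2, 2, 25)
    S = np.array([(x, dx) for x in xs for dx in dxs])
    V = np.array([reward(x, dx) for x, dx in S])  # initialise with immediate rewards
    gamma = 0.9                                   # discount factor: assumed
    actions = np.linspace(-4, 4, 9)               # discretised forces: assumed

    def value_at(s):
        if abs(s[0]) > 1.0 or abs(s[1]) > 2.0:    # outside the feasible region
            return 0.0
        return V[np.argmin(((S - s) ** 2).sum(axis=1))]   # stand-in for the value GP

    for _ in range(5):                            # a few sweeps, as in the text
        V = np.array([reward(x, dx) +
                      gamma * max(value_at(np.array(step(x, dx, a)))
                                  for a in actions)
                      for x, dx in S])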
If we control the car from the initial state according to the found policy, the car gathers momentum by first accelerating left before driving up into the target region, where it is balanced, as illustrated in Figure 1(b). This shows that the random examples of the system dynamics are sufficient for this task. The control policy found is probably very close to the optimally achievable one.
[Figure 3 graphics: (a) surface plot of the value function V over x ∈ [-1, 1] and dx ∈ [-2, 2]; (b) state transition diagram in the (x, dx) plane.]
Figure 3: Figure (a) shows the estimated value function after 6 policy improvements (subsequent to Figures 2(a-c)), where V has stabilised. Figure (b) is the corresponding state transition diagram illustrating the implicit policy on the support points. The black lines connect each support point and the respective consecutive state estimated by the dynamics GP when following the implicit greedy policy with respect to (a). The thick line marks the trajectory of the car for the movement described in Figure 1(b), based on the physics of the system. Note that a temporary violation of the speed constraint remains unnoticed when using discrete time intervals; to avoid this, the constraints could be enforced continuously in the training set.
4 Perspectives and Conclusion
Commonly the value function is defined to be the expected (discounted) future reward.
Conceptually however, there is more to values than their expectations. The distribution over
future reward could have small or large variance and identical means, two fairly different
situations, that are treated identically when only the value expectation is considered. It
is clear however, that a principled approach to the exploitation vs. exploration tradeoff
requires a more faithful representation of value, as was recently proposed in Bayesian Q-learning (Dearden et al. 1998); see also Attias (2003). For example, the large variance
case is more attractive for exploration than the small variance case.
The GP representation of value functions proposed here lends itself naturally to this more
elaborate concept of value. The GP model inherently maintains a full distribution over
values, although in the present paper we have only used its expectation. Implementation
of this would require a second set of Bellman-like equations for the second moment of
the values at the support points. These equations would simply express consistency of
uncertainty: the uncertainty of a value should be consistent with the uncertainty when
following the policy. The values at the support points would be (Gaussian) distributions
with individual variances, which is readily handled by using a full diagonal noise term in place of the scalar noise variance in eq. (5). The individual second moments can be computed in closed form (see derivations in Quiñonero-Candela et al. (2003)). However, iteration would be
necessary to solve the combined system, as there would be no linearity corresponding to
eq. (10) for the second moments. In the near future we will be exploring these possibilities.
Whereas only a batch version of the algorithm has been described here, it would obviously
be interesting to explore its capabilities in an online setting, starting from scratch. This
will require that we abandon the use of a greedy policy, to avoid the risk of getting stuck in a local minimum caused by an incomplete model of the dynamics. Instead, a stochastic policy
should be used, which should not cause further computational problems as long as it is
represented by a Gaussian (or perhaps more appropriately a mixture of Gaussians). A good
policy should actively explore regions where we may gain a lot of information, requiring
the notion of the value of information (Dearden et al. 1998). Since the information gain
would come from a better dynamics GP model, it may not be an easy task in practice to
optimise jointly information and value.
We have introduced Gaussian process models into continuous-state reinforcement learning
tasks, to model the state dynamics and the value function. We believe that the good generalisation properties, and the simplicity of manipulation of GP models make them ideal
candidates for these tasks. In a simple demonstration, our parameter-free algorithm converges rapidly to a good approximation of the value function.
Only the batch version of the algorithm was demonstrated. We believe that the full probabilistic nature of the transition model should facilitate the early stages of an on-line process.
Also, online addition of new observations in a GP model can be done very efficiently. Only
a simple problem was used, and it will be interesting to see how the algorithm performs
on more realistic tasks. Direct implementations of GP models are suitable for up to a few
thousand support points; in recent years a number of fast approximate GP algorithms have
been developed, which could be used in more complex settings.
We are convinced that recent developments in powerful kernel-based probabilistic models for supervised learning such as GPs, will integrate well into reinforcement learning
and control. Both the modeling and analytic properties make them excellent candidates
for reinforcement learning tasks. We speculate that their fully probabilistic nature offers
promising prospects for some fundamental problems of reinforcement learning.
Acknowledgements
Both authors were supported by the German Research Council (DFG).
References
Attias, H. (2003). Planning by probabilistic inference. In Proceedings of the Ninth International
Workshop on Artificial Intelligence and Statistics.
Dearden, R., N. Friedman, and S. J. Russell (1998). Bayesian Q-learning. In Fifteenth National
Conference on Artificial Intelligence (AAAI).
Dietterich, T. G. and X. Wang (2002). Batch value function approximation via support vectors.
In Advances in Neural Information Processing Systems 14, Cambridge, MA, pp. 1491?1498.
MIT Press.
Girard, A., C. E. Rasmussen, J. Quiñonero-Candela, and R. Murray-Smith (2002). Multiple-step
ahead prediction for non-linear dynamic systems - a Gaussian process treatment with propagation of the uncertainty. In Advances in Neural Information Processing Systems 15.
Moore, A. W. and C. G. Atkeson (1995). The parti-game algorithm for variable resolution reinforcement learning in multidimensional state-spaces. Machine Learning 21, 199?233.
Quiñonero-Candela, J., A. Girard, J. Larsen, and C. E. Rasmussen (2003). Propagation of uncertainty in Bayesian kernel models - application to multiple-step ahead forecasting. In Proceedings of the 2003 IEEE Conference on Acoustics, Speech, and Signal Processing.
Sutton, R. S. and A. G. Barto (1998). Reinforcement Learning. Cambridge, Massachusetts: MIT
Press.
Williams, C. K. I. and C. E. Rasmussen (1996). Gaussian processes for regression. In Advances in
Neural Information Processing Systems 8.
1,564 | 2,421 | Eigenvoice Speaker Adaptation via Composite
Kernel PCA
James T. Kwok, Brian Mak and Simon Ho
Department of Computer Science
Hong Kong University of Science and Technology
Clear Water Bay, Hong Kong
[jamesk,mak,csho]@cs.ust.hk
Abstract
Eigenvoice speaker adaptation has been shown to be effective when only
a small amount of adaptation data is available. At the heart of the method
is principal component analysis (PCA) employed to find the most important eigenvoices. In this paper, we postulate that nonlinear PCA, in
particular kernel PCA, may be even more effective. One major challenge
is to map the feature-space eigenvoices back to the observation space so
that the state observation likelihoods can be computed during the estimation of eigenvoice weights and subsequent decoding. Our solution is to
compute kernel PCA using composite kernels, and we will call our new
method kernel eigenvoice speaker adaptation. On the TIDIGITS corpus,
we found that compared with a speaker-independent model, our kernel
eigenvoice adaptation method can reduce the word error rate by 28-33%
while the standard eigenvoice approach can only match the performance
of the speaker-independent model.
1 Introduction
In recent years, there has been a lot of interest in the study of kernel methods [1]. The basic
idea is to map data in the input space X to a feature space via some nonlinear map ?, and
then apply a linear method there. It is now well known that the computational procedure depends only on the inner products¹ φ(xi)′φ(xj) in the feature space (where xi, xj ∈ X), which can be obtained efficiently from a suitable kernel function k(·, ·). Besides,
kernel methods have the important computational advantage that no nonlinear optimization
is involved. Thus, the use of kernels provides elegant nonlinear generalizations of many
existing linear algorithms. A well-known example in supervised learning is the support
vector machines (SVMs). In unsupervised learning, the kernel idea has also led to methods
such as kernel-based clustering algorithms and kernel principal component analysis [2].
In the field of automatic speech recognition, eigenvoice speaker adaptation [3] has drawn
some attention in recent years as it is found particularly useful when only a small amount
of adaptation speech is available; e.g. a few seconds. At the heart of the method is principal component analysis (PCA) employed to find the most important eigenvoices. Then
¹ In this paper, vector/matrix transpose is denoted by the superscript ′.
a new speaker is represented as a linear combination of a few (most important) eigenvoices and the eigenvoice weights are usually estimated by maximizing the likelihood of
the adaptation data. Conventionally, these eigenvoices are found by linear PCA. In this
paper, we investigate the use of nonlinear PCA to find the eigenvoices by kernel methods.
In effect, the nonlinear PCA problem is converted to a linear PCA problem in the highdimension feature space using the kernel trick. One of the major challenges is to map the
feature-space eigenvoices back to the observation space to compute the state observation
likelihood of adaptation data during the estimation of eigenvoice weights and likelihood of
test data during decoding. Our solution is to compute kernel PCA using composite kernels.
We will call our new method kernel eigenvoice speaker adaptation.
Kernel eigenvoice adaptation will have to deal with several parameter spaces. To avoid
confusion, we denote the several spaces as follows: the d1 -dimensional observation space
as O; the d2 -dimensional speaker (supervector) space as X ; and the d3 -dimensional speaker
feature space as F. Notice that d1 ≪ d2 ≪ d3 in general.
The rest of this paper is organized as follows. Brief overviews on eigenvoice speaker
adaptation and kernel PCA are given in Sections 2 and 3. Sections 4 and 5 then describe
our proposed kernel eigenvoice method and its robust extension. Experimental results are
presented in Section 6, and the last section gives some concluding remarks.
2 Eigenvoice
In the standard eigenvoice approach [3], speech training data are collected from many
speakers with diverse characteristics. A set of speaker-dependent (SD) acoustic hidden
Markov models (HMMs) are trained from each speaker where each HMM state is modeled
as a mixture of Gaussian distributions. A speaker's voice is then represented by a speaker
supervector that is composed by concatenating the mean vectors of all HMM Gaussian
distributions. For simplicity, we assume that each HMM state consists of one Gaussian
only. The extension to mixtures of Gaussians is straightforward. Thus, the ith speaker
supervector consists of R constituents, one from each Gaussian, and will be denoted by xi = [x′i1 . . . x′iR]′ ∈ R^d2. The similarity between any two speaker supervectors xi and xj is measured by their dot product

    x′i xj = Σ_{r=1}^R x′ir xjr .                                          (1)
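A small numerical check of eq. (1) (with the dimensions that appear later in Section 6.2; the random supervectors are placeholders): because a supervector is just the concatenation of its R constituents, the constituent-wise sum of dot products equals the ordinary dot product.

    import numpy as np

    R, d1 = 11 * 16, 13                  # constituents and per-state dimension
    xi = np.random.randn(R * d1)         # placeholder speaker supervectors
    xj = np.random.randn(R * d1)

    lhs = xi @ xj                        # x_i' x_j
    rhs = sum(xi[r*d1:(r+1)*d1] @ xj[r*d1:(r+1)*d1] for r in range(R))  # eq. (1)
    assert np.isclose(lhs, rhs)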
PCA is then performed on a set of training speaker supervectors and the resulting eigenvectors are called eigenvoices. To adapt to a new speaker, his/her supervector s is treated
as a linear combination of the first M eigenvoices {v1, . . . , vM}, i.e., s = s(ev) = Σ_{m=1}^M wm vm, where w = [w1, . . . , wM]′ is the eigenvoice weight vector. Usually, only a few eigenvoices (e.g., M < 10) are employed so that a little amount of adaptation speech (e.g., a few seconds) will be required. Given the adaptation data ot, t = 1, . . . , T, the eigenvoice weights are in turn estimated by maximizing the likelihood of the ot's. Mathematically, one finds w by maximizing the Q function Q(w) = Qπ + Qa + Qb(w), where
    Qπ = Σ_{r=1}^R γ1(r) log(πr) ,    Qa = Σ_{p,r=1}^R Σ_{t=1}^{T-1} ξt(p, r) log(apr) ,

    Qb(w) = Σ_{r=1}^R Σ_{t=1}^T γt(r) log(br(ot, w)) ,                     (2)

and πr is the initial probability of state r; γt(r) is the posterior probability of the observation sequence being at state r at time t; ξt(p, r) is the posterior probability of the observation sequence being at state p at time t and at state r at time t + 1; br is the Gaussian pdf of the rth
state after re-estimation. Furthermore, Qb is related to the new speaker supervector s by
    Qb(w) = -(1/2) Σ_{r=1}^R Σ_{t=1}^T γt(r) [ d1 log(2π) + log|Cr| + ||ot - sr(w)||²_Cr ] ,   (3)

where ||ot - sr(w)||²_Cr = (ot - sr(w))′ Cr⁻¹ (ot - sr(w)) and Cr is the covariance matrix
of the Gaussian at state r.
3 Kernel PCA
In this paper, the computation of eigenvoices is generalized by performing kernel PCA instead of linear PCA. In the following, let k(·, ·) be the kernel with associated mapping φ, which maps a pattern x in the speaker supervector space X to φ(x) in the speaker feature space F. Given a set of N patterns (speaker supervectors) {x1, . . . , xN}, denote the mean of the φ-mapped feature vectors by φ̄ = (1/N) Σ_{i=1}^N φ(xi), and the 'centered' map by φ̃ (with φ̃(x) = φ(x) - φ̄). Eigendecomposition is performed on K̃, the centered version of K = [k(xi, xj)]ij, as K̃ = UΛU′, where U = [α1, . . . , αN] with αi = [αi1, . . . , αiN]′, and Λ = diag(λ1, . . . , λN). Notice that K̃ is related to K by K̃ = HKH, where H = I - (1/N)11′ is the centering matrix, I is the N × N identity matrix, and 1 = [1, . . . , 1]′, an N-dimensional vector. The mth orthonormal eigenvector of the covariance matrix in the feature space is then given by [2] as

    vm = Σ_{i=1}^N (αmi / √λm) φ̃(xi) .
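In code, the centring and eigendecomposition above take only a few lines; this sketch returns the eigenvalues and coefficient vectors from a given Gram matrix K (continuing the numpy sketches above).

    def kernel_pca(K):
        # K[i, j] = k(x_i, x_j); returns eigenvalues lam and coefficient columns
        # alpha of K_tilde, largest eigenvalue first, so that the m-th eigenvector
        # is v_m = sum_i alpha[i, m] / sqrt(lam[m]) * phi_tilde(x_i)
        N = K.shape[0]
        H = np.eye(N) - np.ones((N, N)) / N      # H = I - (1/N) 1 1'
        lam, alpha = np.linalg.eigh(H @ K @ H)   # K_tilde = H K H
        order = np.argsort(lam)[::-1]
        return lam[order], alpha[:, order]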
4 Kernel Eigenvoice
As seen from Eqn (3), the estimation of eigenvoice weights requires the evaluation of the
distance between adaptation data ot and Gaussian means of the new speaker in the observation space O. In the standard eigenvoice method, this is done by first breaking down
the adapted speaker supervector s to its R constituent Gaussians s1 , . . . , sR . However, the
use of kernel PCA does not allow us to access each constituent Gaussian directly. To get
around the problem, we investigate the use of composite kernels.
4.1 Definition of the Composite Kernel
For the ith speaker supervector xi , we map each constituent xir separately via a kernel
kr(·, ·) to φr(xir), and then construct φ(xi) as φ(xi) = [φ1(xi1)′, . . . , φR(xiR)′]′. Analogous to Eqn (1), the similarity between two speaker supervectors xi and xj in the composite feature space is measured by
    k(xi, xj) = Σ_{r=1}^R kr(xir, xjr) .

Note that if the kr's are valid Mercer kernels, so is k [1].
Using this composite kernel, we can then proceed with the usual kernel PCA on the set of
N training speaker supervectors and obtain the λm's, αm's, and the orthonormal eigenvectors vm's (m = 1, . . . , M) of the covariance matrix in the feature space F.
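A direct transcription of the composite kernel, here with a Gaussian constituent kernel anticipating Section 4.3; the shared width beta and the implicit identity in place of Cr⁻¹ are simplifying assumptions.

    def composite_kernel(xi, xj, R, d1, beta=1.0):
        # k(x_i, x_j) = sum_r k_r(x_ir, x_jr), here with a Gaussian k_r
        total = 0.0
        for r in range(R):
            d = xi[r*d1:(r+1)*d1] - xj[r*d1:(r+1)*d1]
            total += np.exp(-beta * (d @ d))
        return total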
4.2 New Speaker in the Feature Space
In the following, we denote the supervector of a new speaker by s. Similar to the standard eigenvoice approach, its φ-mapped speaker feature vector² φ̃(kev)(s) is assumed to be a linear combination of the first M eigenvectors, i.e.,

    φ̃(kev)(s) = Σ_{m=1}^M wm vm = Σ_{m=1}^M Σ_{i=1}^N (wm αmi / √λm) φ̃(xi) .     (4)

Its rth constituent is then given by

    φ̃r(kev)(sr) = Σ_{m=1}^M Σ_{i=1}^N (wm αmi / √λm) φ̃r(xir) ,

where φ̃r(xir) is the rth part of φ̃(xi). Hence, the similarity between φr(kev)(sr) and φr(ot) is given by

    kr(kev)(sr, ot) = φr(kev)(sr)′ φr(ot)
      = [ Σ_{m=1}^M Σ_{i=1}^N (wm αmi / √λm) φ̃r(xir) + φ̄r ]′ φr(ot)
      = [ Σ_{m=1}^M Σ_{i=1}^N (wm αmi / √λm) (φr(xir) - φ̄r) + φ̄r ]′ φr(ot)
      = Σ_{m=1}^M Σ_{i=1}^N (wm αmi / √λm) (kr(xir, ot) - φ̄r′ φr(ot)) + φ̄r′ φr(ot)
      = A(r, t) + Σ_{m=1}^M (wm / √λm) B(m, r, t) ,                              (5)

where φ̄r = (1/N) Σ_{j=1}^N φr(xjr) is the rth part of φ̄,

    A(r, t) = φ̄r′ φr(ot) = (1/N) Σ_{j=1}^N kr(xjr, ot) ,   and

    B(m, r, t) = Σ_{i=1}^N αmi kr(xir, ot) - A(r, t) Σ_{i=1}^N αmi .

² The notation for a new speaker in the feature space requires some explanation. If s exists, then its centered image is φ̃(kev)(s). However, since the pre-image of a speaker in the feature space may not exist, its notation as φ̃(kev)(s) is not exactly correct. However, the notation is adopted for its intuitiveness and the readers are advised to infer the existence of s based on the context.
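Given precomputed constituent kernel values between the N training supervectors and the T adaptation frames, A(r, t) and B(m, r, t) reduce to simple array operations, as in this sketch (the array layout is an assumption):

    def precompute_A_B(kr_vals, alpha):
        # kr_vals: (N, R, T) array with kr_vals[i, r, t] = k_r(x_ir, o_t)
        # alpha:   (N, M) kernel-PCA coefficient columns
        A = kr_vals.mean(axis=0)                                   # A(r, t)
        B = (np.einsum('im,irt->mrt', alpha, kr_vals)
             - A[None, :, :] * alpha.sum(axis=0)[:, None, None])   # B(m, r, t)
        return A, B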
4.3 Maximum Likelihood Adaptation Using an Isotropic Kernel
On adaptation, we have to express ||ot - sr||²_Cr of Eqn (3) as a function of w. Consider using isotropic kernels for kr, so that kr(xir, xjr) = κ(||xir - xjr||²_Cr). Then kr(kev)(sr, ot) = κ(||ot - sr||²_Cr), and if κ is invertible, ||ot - sr||²_Cr will be a function of kr(kev)(sr, ot), which in turn is a function of w by Eqn (5). In the sequel, we will use the Gaussian kernel kr(xir, xjr) = exp(-βr ||xir - xjr||²_Cr), and hence

    ||ot - sr||²_Cr = -(1/βr) log kr(kev)(sr, ot)
                    = -(1/βr) log ( A(r, t) + Σ_{m=1}^M (wm / √λm) B(m, r, t) ) .   (6)
Substituting Eqn (6) for the Qb function in Eqn (3), and differentiating with respect to each eigenvoice weight wj, j = 1, . . . , M, we obtain

    ∂Qb/∂wj = (1 / (2√λj)) Σ_{r=1}^R Σ_{t=1}^T (γt(r) / βr) · B(j, r, t) / kr(kev)(sr, ot) .   (7)
Since Qπ and Qa do not depend on w, ∂Q/∂wj = ∂Qb/∂wj.
4.4 Generalized EM Algorithm
Because of the nonlinear nature of kernel PCA, Eqn (6) is nonlinear in w and there is no
closed form solution for the optimal w. In this paper, we instead apply the generalized
EM algorithm (GEM) [4] to find the optimal weights. GEM is similar to standard EM
except for the maximization step: EM looks for w that maximizes the expected likelihood
of the E-step but GEM only requires a w that improves the likelihood. Many numerical
methods may be used to update w based on the derivatives of Q. In this paper, gradient
ascent is used to get w(n) from w(n - 1) based only on the first-order derivative, as w(n) = w(n - 1) + η(n) Q′|_{w = w(n-1)}, where Q′ = ∂Qb/∂w and η(n) is the learning rate at the nth iteration. Methods such as Newton's method, which uses the second-order
derivatives may also be used for faster convergence, at the expense of computing the more
costly Hessian in each iteration.
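A hedged sketch of one such GEM ascent step, assembled from eqs. (5) and (7); gamma holds the state posteriors γt(r) from the E-step, beta the Gaussian-kernel widths βr, and the constant learning rate is an assumed stand-in for the unspecified η(n).

    def gem_step(w, A, B, lam, gamma, beta, eta=1e-3):
        # eq. (5): k_r^(kev)(s_r, o_t) for the current weights
        k = A + np.einsum('m,mrt->rt', w / np.sqrt(lam), B)
        k = np.maximum(k, 1e-12)              # keep the log's argument positive
        # eq. (7): dQb/dw_j = 1/(2 sqrt(lam_j)) sum_{r,t} gamma_t(r)/beta_r * B(j,r,t)/k
        grad = np.einsum('rt,mrt->m', gamma / (beta[:, None] * k), B) / (2.0 * np.sqrt(lam))
        return w + eta * grad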
The initial value of w(0) can be important for numerical methods like gradient ascent. One
reasonable approach is to start with the eigenvoice weights of the supervector composed
from the speaker-independent model x(si) . That is,
    wm = v′m φ̃(x(si)) = Σ_{i=1}^N (αmi / √λm) φ̃(xi)′ φ̃(x(si))
       = Σ_{i=1}^N (αmi / √λm) [φ(xi) - φ̄]′ [φ(x(si)) - φ̄]
       = Σ_{i=1}^N (αmi / √λm) [ k(xi, x(si)) + (1/N²) Σ_{p,q=1}^N k(xp, xq)
                                 - (1/N) Σ_{p=1}^N ( k(xi, xp) + k(x(si), xp) ) ] .   (8)
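Eq. (8) in code: the initialisation needs only the training Gram matrix K and the cross kernel values k_si[i] = k(x_i, x^(si)).

    def init_weights(K, k_si, alpha, lam, M):
        N = K.shape[0]
        # the bracketed term of eq. (8), one entry per training speaker i
        base = k_si + K.sum() / N**2 - (K.sum(axis=1) + k_si.sum()) / N
        return (alpha[:, :M] * base[:, None]).sum(axis=0) / np.sqrt(lam[:M])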
5 Robust Kernel Eigenvoice
The success of the eigenvoice approach for fast speaker adaptation is due to two factors: (1) a good collection of 'diverse' speakers so that the whole speaker space is captured by the eigenvoices; and (2) the number of adaptation parameters is reduced to a few eigenvoice weights. However, since the amount of adaptation data is so little, the adaptation performance may vary widely. To get a more robust performance, we propose to interpolate the kernel eigenvoice φ̃(kev)(s) obtained in Eqn (4) with the φ-mapped speaker-independent (SI) supervector φ̃(x(si)) to obtain the final speaker adapted model φ̃(rkev)(s) as follows:

    φ̃(rkev)(s) = w0 φ̃(x(si)) + (1 - w0) φ̃(kev)(s) ,   0.0 ≤ w0 ≤ 1.0 ,          (9)

where φ̃(kev)(s) is found by Eqn (4). By replacing φ̃(kev)(s) with φ̃(rkev)(s) in the computation of the kernel value of Eqn (5), and following the mathematical steps in Section 4, one may derive the required gradients for the joint maximum-likelihood estimation of w0 and the other eigenvoice weights in the GEM algorithm.

Notice that φ̃(rkev)(s) also contains components of φ̃(x(si)) from eigenvectors beyond the M selected kernel eigenvoices for adaptation. Thus, robust KEV adaptation may have the additional benefit of preserving the speaker-independent projections on the remaining less important but robust eigenvoices in the final speaker-adapted model.
6 Experimental Evaluation
The proposed kernel eigenvoice adaptation method was evaluated on the TIDIGITS speech
corpus [5]. Its performance was compared with that of the speaker-independent model
and the standard eigenvoice adaptation method using only 3s, 5.5s, and 13s of adaptation
speech. If we exclude the leading and ending silence, the average duration of adaptation
speech is 2.1s, 4.1s, and 9.6s respectively.
6.1 TIDIGITS Corpus
The TIDIGITS corpus contains clean connected-digit utterances sampled at 20 kHz. It is
divided into a standard training set and a test set. There are 163 speakers (of both genders)
in each set, each pronouncing 77 utterances of one to seven digits (out of the eleven digits:
'0', '1', . . ., '9', and 'oh'). The speaker characteristics are quite diverse, with speakers coming from 22 dialect regions of the USA and their ages ranging from 6 to 70 years old.
In all the following experiments, only the training set was used to train the speaker-independent (SI) HMMs and speaker-dependent (SD) HMMs from which the SI and SD
speaker supervectors were derived.
6.2 Acoustic Models
All training data were processed to extract 12 mel-frequency cepstral coefficients and the
normalized frame energy from each speech frame of 25 ms at every 10 ms. Each of the
eleven digit models was a strictly left-to-right HMM comprising 16 states and one Gaussian
with diagonal covariance per state. In addition, there were a 3-state 'sil' model to capture silence and a 1-state 'sp' model to capture short pauses between digits. All HMMs were trained by the EM algorithm. Thus, the dimension of the observation space d1 is 13, and that of the speaker supervector space d2 = 11 × 16 × 13 = 2288.
Firstly, the SI models were trained. Then an SD model was trained for each individual
speaker by borrowing the variances and transition matrices from the corresponding SI models, and only the Gaussian means were estimated. Furthermore, the sil and sp models were
simply copied to the SD model.
6.3 Experiments
The following five models/systems were compared:
SI: speaker-independent model
EV: speaker-adapted model found by the standard eigenvoice adaptation method.
Robust-EV: speaker-adapted models found by our robust version of EV, which is the interpolation between the SI supervector and the supervector found by EV. That is,
    s(rev) = w0 s(si) + (1 - w0) s(ev) ,   0.0 ≤ w0 ≤ 1.0 .
KEV: speaker-adapted model found by our new kernel eigenvoice adaptation method as
described in Section 4.
Robust-KEV: speaker-adapted model found by our robust KEV as described in Section 5.
All adaptation results are the averages of 5-fold cross-validation taken over all 163 test
speaker data. The detailed results using different numbers of eigenvoices are shown in
Figure 1, while the best result for each model is shown in Table 1.
Table 1: Word recognition accuracies of SI model and the best adapted models found by
EV, robust EV, KEV, and robust KEV using 2.1s, 4.1s, and 9.6s of adaptation speech.
SYSTEM         2.1s     4.1s     9.6s
SI             96.25    96.25    96.25
EV             95.61    95.65    95.67
robust EV      96.26    96.26    96.27
KEV            96.85    97.05    97.05
robust KEV     97.28    97.44    97.50
From Table 1, we observe that the standard eigenvoice approach cannot obtain better performance than the SI model3 . On the other hand, using our kernel eigenvoice (KEV) method,
we obtain a word error rate (WER) reduction of 16.0%, 21.3%, and 21.3% with 2.1s, 4.1s,
and 9.6s of adaptation speech over the SI model. When the SI model is interpolated with
the KEV model in our robust KEV method, the WER reduction further improves to 27.5%,
31.7%, and 33.3% respectively. These best results are obtained with 7 to 8 eigenvoices. The
results show that nonlinear PCA using composite kernels can be more effective in finding
the eigenvoices.
[Plot: word recognition accuracy (%), from 94 to 98, versus the number of kernel eigenvoices (0-10), with curves for the SI model, KEV (2.1s), KEV (9.6s), robust KEV (2.1s) and robust KEV (9.6s).]
Figure 1: Word recognition accuracies of adapted models found by KEV and robust KEV
using different numbers of eigenvoices.
From Figure 1, the KEV method can outperform the SI model even with only two eigenvoices using only 2.1s of speech. Its performance then improves slightly with more eigenvoices or more adaptation data. If we allow interpolation with the SI model as in robust
3
The word accuracy of our SI model is not as good as the best reported result on TIDIGITS which
is about 99.7%. The main reasons are that we used only 13-dimensional static cepstra and energy, and
each state was modelled by a single Gaussian with diagonal covariance. The use of this simple model
allowed us to run experiments with 5-fold cross-validation using very short adaptation speech. Right
now our approach requires computation of many kernel function values and is very computationally
expensive. As a first attempt on the approach, we feel that the use of this simple model is justified.
We are now working on its speed-up and its extension to HMM states of Gaussian mixtures.
KEV, the saturation effect is even more pronounced: even with one eigenvoice, the adaptation performance is already better than that of SI model, and then the performance does
not change much with more eigenvoices or adaptation data. The results seem to suggest
that the requirement that the adapted speaker supervector is a weighted sum of few eigenvoices is both the strength and weakness of the method: on the one hand, fast adaptation
becomes possible since the number of estimation parameters is small, but adaptation saturates quickly because the constraint is so restrictive that all mean vectors of different
acoustic models have to undergo the same linear combination of the eigenvoices.
7 Conclusions
In this paper, we improve the standard eigenvoice speaker adaptation method using kernel PCA with a composite kernel. In the TIDIGITS task, it is found that while the standard eigenvoice approach does not help, our kernel eigenvoice method may outperform the
speaker-independent model by about 28?33% (in terms of error rate improvement).
Right now the speed of recognition using the adapted model that resulted from our kernel
eigenvoice method is slower than that from the standard eigenvoice method because any
state observation likelihoods cannot be directly computed but through evaluating the kernel
values with all training speaker supervectors. One possible solution is to apply sparse
kernel PCA [6] so that computation of the first M principal components involves only M
(instead of N with M N ) kernel functions. Another direction is to use compactly
supported kernels [7], in which the value of ?(kxi ? xj k) vanishes when kxi ? xj k is
greater than a certain threshold. The kernel matrix then becomes sparse. Moreover, no
more computation is required when kxi ? xj k is large.
8 Acknowledgements
This research is partially supported by the Research Grants Council of the Hong Kong SAR
under the grant numbers HKUST2033/00E, HKUST6195/02E, and HKUST6201/02E.
References
[1] B. Schölkopf and A.J. Smola. Learning with Kernels. MIT, 2002.
[2] B. Schölkopf, A. Smola, and K.R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[3] R. Kuhn, J.-C. Junqua, P. Nguyen, and N. Niedzielski. Rapid Speaker Adaptation in Eigenvoice Space. IEEE Transactions on Speech and Audio Processing, 8(4):695-707, Nov 2000.
[4] A.P. Dempster, N.M. Laird, and D.B. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society: Series B, 39(1):1-38, 1977.
[5] R.G. Leonard. A Database for Speaker-Independent Digit Recognition. In Proceedings of the IEEE International Conference on Acoustics, Speech, and Signal Processing, volume 3, pages 4211-4214, 1984.
[6] A.J. Smola, O.L. Mangasarian, and B. Schölkopf. Sparse kernel feature analysis. Technical Report 99-03, Data Mining Institute, University of Wisconsin, Madison, 1999.
[7] M.G. Genton. Classes of kernels for machine learning: A statistics perspective. Journal of Machine Learning Research, 2:299-312, 2001.
1,565 | 2,422 | Impact of an Energy Normalization
Transform on the Performance of the
LF-ASD Brain Computer Interface
Zhou Yu¹, Steven G. Mason², Gary E. Birch¹,²
¹ Dept. of Electrical and Computer Engineering, University of British Columbia, 2356 Main Mall, Vancouver, B.C. Canada V6T 1Z4
² Neil Squire Foundation, 220-2250 Boundary Road, Burnaby, B.C. Canada V5M 3Z3
Abstract
This paper presents an energy normalization transform as a
method to reduce system errors in the LF-ASD brain-computer
interface. The energy normalization transform has two major
benefits to the system performance. First, it can increase class
separation between the active and idle EEG data. Second, it
can desensitize the system to the signal amplitude variability.
For four subjects in the study, the benefits resulted in the
performance improvement of the LF-ASD in the range from
7.7% to 18.9%, while for the fifth subject, who had the highest
non-normalized accuracy of 90.5%, the performance did not
change notably with normalization.
1 Introduction
In an effort to provide alternative communication channels for people who suffer
from severe loss of motor function, several researchers have worked over the past
two decades to develop a direct Brain-Computer Interface (BCI).
Since
electroencephalographic (EEG) signal has good time resolution and is non-invasive,
it is commonly used for data source of a BCI. A BCI system converts the input EEG
into control signals, which are then used to control devices like computers,
environmental control system and neuro-prostheses.
Mason and Birch [1] proposed the Low-Frequency Asynchronous Switch Design
(LF-ASD) as a BCI which detected imagined voluntary movement-related potentials
(IVMRPs) in spontaneous EEG. The principle signal processing components of the
LF-ASD are shown in Figure 1.
[Figure 1 block diagram: sIN → LPF → sLPF → Feature Extractor → sFE → Feature Classifier → sFC.]
Figure 1: The original LF-ASD design.
The input to the low-pass filter (LPF), denoted as SIN in Figure 1, are six bipolar
EEG signals recorded from F1-FC1, Fz-FCz, F2-FC2, FC1-C1, FCz-Cz and
FC2-C2 sampled at 128 Hz. The cutoff frequency of the LPF implemented by Mason
and Birch was 4 Hz. The Feature Extractor of the LF-ASD extracts custom features
related to IVMRPs. The Feature Classifier implements a one-nearest-neighbor (1NN) classifier, which determines if the input signals are related to a user state of
voluntary movement or passive (idle) observation. The LF-ASD was able to
achieve True Positive (TP) values in the range of 44%-81%, with the corresponding
False Positive (FP) values around 1% [1].
Although encouraging, the current error rates of the LF-ASD are insufficient for
real-world applications. This paper proposes a method to improve the system
performance.
2 Design and Rationale
The improved design of the LF-ASD with the Energy Normalization Transform
(ENT) is provided in Figure 2.
[Figure 2 block diagram: SIN → ENT → SN → LPF → SNLPF → Feature Extractor → SNFE → Feature Classifier → SNFC.]
Figure 2: The improved LF-ASD with the Energy Normalization Transform.
The design of the Feature Extractor and Feature Classifier were the same as shown
in Figure 1. The Energy Normalization Transform (ENT) is implemented as
    S_N(n) = S_IN(n) / sqrt( (1/W_N) Σ_{s = -(W_N - 1)/2}^{(W_N - 1)/2} S_IN(n - s)² )

where W_N (the normalization window size) is the only parameter in the equation. The
optimal parameter value was obtained by exhaustive search for the best class
separation between active and idle EEG data. The method of obtaining the active
and idle EEG data is provided in Section 3.1.
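A minimal sketch of the ENT, assuming W_N is odd so the window is centred on sample n; shrinking the window at the signal edges, and the small epsilon guarding silent segments, are implementation choices the paper does not specify.

    import numpy as np

    def ent(s_in, w_n):
        # divide each sample by the RMS energy of a centred window of w_n samples
        s_in = np.asarray(s_in, dtype=float)
        half = (w_n - 1) // 2
        out = np.empty_like(s_in)
        for n in range(len(s_in)):
            seg = s_in[max(0, n - half): n + half + 1]
            out[n] = s_in[n] / (np.sqrt(np.mean(seg ** 2)) + 1e-12)
        return out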
The idea to use energy normalization to improve the LF-ASD design was based
primarily on an observation that high frequency power decreases significantly
around movement. For example, Jasper and Penfield [3] and Pfurtscheller et al, [4]
reported EEG power decrease in the mu (8-12 Hz) and beta rhythm (18-26 Hz) when
people are involved in motor related activity. Also Mason [5] found that the power
in the frequency components greater than 4Hz decreased significantly during
movement-related potential periods, while power in the frequency components less
than 4Hz did not. Thus energy normalization, which would increase the low
frequency power level, would strengthen the 0-4 Hz features used in the LF-ASD
and hence reduce errors. In addition, as a side benefit, it can automatically adjust the
mean scale of the input signal and desensitize the system to change in EEG power,
which is known to vary over time [2]. Therefore, it was postulated that the addition
of ENT into the improved design would have two major benefits. First, it can
increase the EEG power around motor potentials, consequently increasing the class
separation and feature strength. Second, it can desensitize the system to amplitude
variance of the input signal.
In addition, since the system components of the modified LF-ASD after the ENT
were the same as in the original design, a major concern was whether or not the
ENT distorted the features used by the LF-ASD. Since the features used by the LFASD are generated from the 0-4 Hz band, if the ENT does not distort the phase and
magnitude spectrum in this specific band, it would not distort the features related to
movement potential detection in the application.
3 Evaluation

3.1 Test data
Two types of EEG data were pre-recorded from five able-bodied individuals as
shown in Figure 3: an Active Data Type and an Idle Data Type. Active Data was recorded
during repeated right index finger flexions alternating with periods of no motor
activity; Idle Data was recorded during extended periods of passive observation.
Figure 3: Data Definition of M1, M2, Idle1 and Idle2.
Observation windows centered at the time of the finger switch activations (as shown
in Figure 4) were imposed in the active data to separate data related to movements
from data during periods of idleness. For purpose of this study, data in the front part
of the observation window was defined as M1 and data in the rear part of the
window was defined as M2. Data falling out of the observation window was defined
as Idle2. All the data in the Idle Data Type was defined as Idle1 for comparison
with Idle2.
Figure 4: Ensemble Average of EEG centered on finger activations.
Figure 5: Density distribution of Idle1, Idle2, M1 and M2.
It was noted, in terms of the density distribution of active and idle data, the
separation between M2 and Idle2 was the largest and Idle1 and Idle2 were nearly
identical (see Figure 5). For the study, M2 and Idle2 were chosen to represent the
active and idle data classes and the separation between M2 and Idle2 data was
defined by the difference of means (DOM) scaled by the amplitude range of Idle2.
3.2 Optimal parameter determination

The optimal combination of normalization window size, WN, and observation window size, WO, was selected to be that which achieved the maximal DOM value. This was determined by exhaustive search, as discussed in Section 4.1.
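In code, the search is a loop over candidate window sizes scoring the M2-vs-Idle2 separation. dom() follows the scaled difference-of-means definition of Section 3.1, ent() is the sketch from Section 2, and both the candidate range (odd sizes between 40 and 200, cf. Section 4.1) and the features_for() hook are assumptions standing in for the full LF-ASD front end.

    def dom(active, idle):
        # difference of means scaled by the amplitude range of the idle data
        return abs(active.mean() - idle.mean()) / (idle.max() - idle.min())

    def best_window(features_for):
        """features_for(w_n) -> (m2_samples, idle2_samples): feature values after
        the front end with ENT window w_n. Returns the w_n maximising the DOM."""
        candidates = range(41, 201, 2)      # odd sizes keep the window centred
        return max(candidates, key=lambda w: dom(*features_for(w)))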
3.3 Effect of ENT on the Low Pass Filter output
As mentioned previously, it was postulated that the ENT had two major impacts:
increasing the class separation between active and idle EEG and desensitizing the
system to the signal amplitude variance. The hypothesis was evaluated by
comparing characteristics of SNLPF and SLPF in Figure 1 and Figure 2. DOM was
applied to measure the increased class separation. The signal with the larger DOM
meant larger class separation. In addition, the signal with smaller standard deviation
may result in a more stable feature set.
3.4 Effect of ENT on the LF-ASD output
The performances of the original and improved designs were evaluated by
comparing the signal characteristics of SNFC in Figure 2 to SFC in Figure 1. A
Receiver Operating Characteristic Curve (ROC Curve) [6] was generated for the
original and improved designs. The ROC Curve characterizes the system
performance over a range of TP vs. FP values. The larger area under ROC Curve
indicates better system performance. In real applications, a BCI with high-level FP
rates could cause frustration for subjects. Therefore, in this work only the LF-ASD
performance when the FP values are less than 1% were studied.
4 Results

4.1 Optimal normalization window size (WN)
The method to choose optimal WN was an exhaustive search for maximal DOM
between active and idle classes. This method was possibly dependent on the
observation window size (W O). However, as shown in Figure 6a, the optimal WN
was found to be independent of WO. Experimentally, the W O values were selected in
the range of 50-60 samples, which corresponded to the largest DOM between non-normalized active and idle data. The optimal WN was obtained by exhaustive search
for the largest DOM through normalized active and idle data. The DOM vs. WN
profile for Subject 1 is shown in Figure 6b.
Figure 6: Optimal parameter determination for Subject 1 in Channel 1: a) DOM vs. WO; b) DOM vs. WN.
When using ENT, a small W N value may cause distortion to the feature set used by
the LF-ASD. Thus, the optimal W N was not selected in this range (< 40 samples).
When W N is greater than 200, the ENT has lost its capability to increase class
separation and the DOM curve gradually goes towards the best separation without
normalization. Thus, the optimal W N should correspond to the maximal DOM value
when W N is in the range from 40 to 200. In Figure 6b, the optimal WN is around 51.
4.2 Effect of ENT on the Low Pass Filter output
With ENT, the standard deviation of the low frequency EEG signal decreased from
around 1.90 to 1.30 over the six channels and over the five subjects. This change
resulted in more stable feature sets. Thus, the ENT desensitizes the system to input
signal variance.
Figure 7: Density distribution of the active vs. idle class without (a) and with (b) ENT, for Subject 1 in Channel 1.
As shown in Figure 7, by increasing the EEG power around motor potentials, ENT
can increase class separations between active and idle EEG data. The class
separation in (frontal) Channels 1-3 across all subjects increased consistently with
the proposed ENT. The same was true for (midline) Channels 4-6, for all subjects
except Subject 5, whose DOM in channel 5-6 decreased by 2.3% and 3.4%
respectively with normalization. That is consistent with the fact that his EEG power
in Channels 4-6 does not decrease. On average, across all five subjects, DOM
increases with normalization to about 28.8%, 26.4%, 39.4%, 20.5%, 17.8% and
22.5% over six channels respectively.
In addition, the magnitude and phase spectrums of the EEG signal before and after
ENT is provided in Figure 8. The ENT has no visible distortion to the signal in the
low frequency band (0-4 Hz) used by the LF-ASD. Therefore, the ENT does not
distort the features used by the LF-ASD.
Figure 8: Magnitude and phase spectrum of the EEG signal before and after ENT.
4.3 Effect of ENT on the LF-ASD output
The two major benefits of the ENT to the low frequency EEG data result in the
performance improvement of the LF-ASD. Subject 1's ROC Curves with and without ENT are shown in Figure 9, where the ROC Curve with ENT of optimal
parameter value is above the ROC Curve without ENT. This indicates that the
improved LF-ASD performs better. Table I compares the system performance with
and without ENT in terms of TP with corresponding FP at 1% across all the 5
subjects.
Figure 9: The ROC Curves (in the section of interest) of Subject 1 with different
WN values and the corresponding ROC Curve without ENT.
Table I: Performance of the LF-ASD with and without ENT in terms of the True Positive rate with corresponding False Positive at 1%.

               TP without ENT   TP with ENT   Performance Improvement
Subject 1          66.1%           85.0%             18.9%
Subject 2          82.7%           90.4%              7.7%
Subject 3          79.7%           88.0%              8.3%
Subject 4          79.3%           87.8%              8.5%
Subject 5          90.5%           88.7%             -1.8%
For 4 out of 5 subjects, corresponding with the FP at 1%, the improved system with
ENT increased the TP value by 7.7%, 8.3%, 8.5% and 18.9% respectively. Thus, for
these subjects, the range of TP with FP at 1% was improved from 66.1%-82.7% to
85.0%-90.4% with ENT. For the fifth subject, who had the highest non-normalized
accuracy of 90.5%, the performance remained around 90% with ENT. In addition,
this evaluation is conservative. Since the codebook in the Feature Classifier and the
parameters in the Feature Extractor of the LF-ASD were derived from non-normalized EEG, they work in favor of the non-normalized EEG. Therefore, if the
parameters and the codebook of the modified LF-ASD are generated from the
normalized EEG in the future, the modified LF-ASD may show better performance
than this evaluation.
5 Conclusion
The evaluation with data from five able-bodied subjects indicates that the proposed
system with Energy Normalization Transform (ENT) has better performance than
the original. This study has verified the original hypotheses that the improved
design with ENT might have two major benefits: increased the class separation
between active and idle EEG and desensitized the system performance to input
amplitude variance. As a side benefit, the ENT can also make the design less
sensitive to the mean input scale.
In the broad band, the Energy Normalization Transform is a non-linear transform.
However, it has no visible distortion to the signal in the 0-4 Hz band. Therefore, it
does not distort the features used by the LF-ASD.
For 4 out of 5 subjects, with the corresponding False Positive rate at 1%, the
proposed transform increased the system performance by 7.7%, 8.3%, 8.5% and
18.9% respectively in terms of True Positive rate. Thus, the overall performance of
the LF-ASD for these subjects was improved from 66.1%-82.7% to 85.0%-90.4%.
For the fifth subject, who had the highest non-normalized accuracy of 90.5%, the
performance did not change notably with normalization. In the future with the
codebook derived from the normalized data, the performance could be further
improved.
References
[1] Mason, S. G. and Birch, G. E., (2000) A Brain-Controlled Switch for Asynchronous
Control Applications. IEEE Trans Biomed Eng, 47(10):1297-1307.
[2] Vaughan, T. M., Wolpaw, J. R., and Donchin, E. (1996) EEG-Based Communication:
Prospects and Problems. IEEE Trans Reh Eng, 4(4):425-430.
[3] Jasper, H. and Penfield, W. (1949) Electrocortiograms in man: Effect of voluntary
movement upon the electrical activity of the precentral gyrus. Arch.Psychiat.Nervenkr.,
183:163-174.
[4] Pfurtscheller, G., Neuper, C., and Flotzinger, D. (1997) EEG-based discrimination
between imagination of right and left hand movement. Electroencephalography and Clinical
Neurophysiology, 103:642-651.
[5] Mason, S. G. (1997) Detection of single trial index finger flexions from continuous,
spatiotemporal EEG. PhD Thesis, UBC, January.
[6] Green, D. M. and Swets, J. A. (1996) Signal Detection Theory and Psychophysics New York:
John Wiley and Sons, Inc.
1,566 | 2,423 | Probabilistic Inference in Human Sensorimotor
Processing
Konrad P. Körding∗
Institute of Neurology
UCL London
London WC1N 3BG,UK
[email protected]
Daniel M. Wolpert†
Institute of Neurology
UCL London
London WC1N 3BG,UK
[email protected]
Abstract
When we learn a new motor skill, we have to contend with both the variability inherent in our sensors and the task. The sensory uncertainty can
be reduced by using information about the distribution of previously experienced tasks. Here we impose a distribution on a novel sensorimotor
task and manipulate the variability of the sensory feedback. We show that
subjects internally represent both the distribution of the task as well as
their sensory uncertainty. Moreover, they combine these two sources of
information in a way that is qualitatively predicted by optimal Bayesian
processing. We further analyze if the subjects can represent multimodal
distributions such as mixtures of Gaussians. The results show that the
CNS employs probabilistic models during sensorimotor learning even
when the priors are multimodal.
1 Introduction
Real world motor tasks are inherently uncertain. For example, when we try to play an
approaching tennis ball, our vision of the ball does not provide perfect information about
its velocity. Due to this sensory uncertainty we can only generate an estimate of the ball's
velocity. This uncertainty can be reduced by taking into account information that is available on a longer time scale: not all velocities are a priori equally probable. For example,
very fast and very slow balls may be experienced less often than medium paced balls. Over
the course of a match there will be a probability distribution of velocities. Bayesian theory [1-2] tells us that to make an optimal estimate of the velocity of a given ball, this a
priori information about the distribution of velocities should be combined with the evidence provided by sensory feedback. This combination process requires prior knowledge of
how probable each possible velocity is, and knowledge of the uncertainty inherent in the
sensory estimate of velocity. As the degree of uncertainty in the feedback increases, for
example when playing in fog or at dusk, an optimal system should increasingly depend on
prior knowledge. Here we examine whether subjects represent the probability distribution
of a task and if this can be appropriately combined with an estimate of sensory uncertainty. Moreover, we examine whether subjects can represent priors that have multimodal distributions.
∗ www.koerding.com
† www.wolpertlab.com
2 Experiment 1: Gaussian Prior
To examine whether subjects can represent a prior distribution of a task and integrate it with
a measure of their sensory uncertainty we examined performance on a reaching task. The
perceived position of the hand is displaced relative to the real position of the hand. This
displacement or shift is drawn randomly from an underlying probability distribution and
subjects have to estimate this shift to perform well on the task. By examining where subjects reached while manipulating the reliability of their visual feedback we distinguished
between several models of sensorimotor learning.
2.1 Methods
Ten subjects made reaching movements on a table to a visual target with their right index
finger in a virtual reality setup (for details of the set-up see [6]). An Optotrak 3020 measured the position of their finger and a projection/mirror system prevented direct view of
their arm and allowed us to generate a cursor representing their finger position which was
displayed in the plane of the movement (Figure 1A). As the finger moved from the starting
circle, the cursor was extinguished and shifted laterally from the true finger location by an
amount which was drawn each trial from a Gaussian distribution:
p(x_{true}) = \frac{1}{\sqrt{2\pi}\,\sigma_{prior}} \exp\left(-\frac{(x_{true} - 1\ \mathrm{cm})^2}{2\sigma_{prior}^2}\right) \qquad (1)

where the mean is 1 cm and σ_prior = 0.5 cm (Figure 1B). Halfway to the target (10 cm), visual feedback was briefly provided for 100 ms either clearly (σ0) or with different degrees of blur (σM and σL), or withheld (σ∞). On each trial one of the 4 types of feedback (σ0, σM, σL, σ∞) was selected randomly, with the relative frequencies of (3, 1, 1, 1) respectively. The (σ0) feedback was a small white sphere. The (σM) feedback was 25 small translucent spheres, distributed as a 2 dimensional Gaussian with a standard deviation of 1 cm, giving a cloud type impression. The (σL) feedback was analogous but with a standard deviation of 2 cm. No feedback was provided in the (σ∞) case. After another 10 cm of movement the trial finished and feedback of the final cursor location was only provided in the (σ0) condition. The experiment consisted of 2000 trials for each subject. Subjects were
instructed to take into account what they see at the midpoint and get as close to the target
as possible and that the cursor is always there even if it is not displayed.
2.2 Results: Trajectories in the Presence of Uncertainty
Subjects were trained for 1000 trials on the task to ensure that they experienced many samples drawn from the underlying distribution p(x_true). After this period, when feedback was withheld (σ∞), subjects pointed 0.97 ± 0.06 cm (mean ± s.e. across subjects) to the left of the target, showing that they had learned the average shift of 1 cm experienced over the trials. Subsequently, we examined the relationship between visual feedback and the location subjects pointed to. On trials in which feedback was provided, there was compensation during the second half of the movement. Figure 1A shows typical finger and cursor paths for two trials, σ0 and σL, in which x_true = 2 cm. The visual feedback midway through the movement provides information about the lateral shift on the current trial and allows for a correction for the current lateral shift. However, the visual system is not perfect and we expect some uncertainty in the sensed lateral shift x_sensed. The distribution of sensed shifts over a large number of trials is expected to have a Gaussian
[Figure 1 appears here: panel A shows the target, the (invisible) finger path and the cursor path; B the prior N(1, σ_p = 0.5) cm over the lateral shift x_true [cm]; C the evidence p(x_sensed|x_true) for the feedback conditions σ0, σM, σL; D the posterior p(x_true|x_sensed) and the estimated lateral shift x_estimate; E the predicted lateral deviation x_true − x_estimate [cm] against the lateral shift x_true [cm] for the compensation, probabilistic, and mapping models.]
Figure 1: The experiment and models. A) Subjects are required to place the cursor on
the target, thereby compensating for the lateral displacement. The finger paths illustrate
typical trajectories at the end of the experiment when the lateral shift was 2 cm (the colors
correspond to two of the feedback conditions). B) The experimentally imposed prior distribution of lateral shifts is Gaussian with a mean of 1 cm. C) A schematic of the probability
distribution of visually sensed shifts under clear and the two blurred feedback conditions
(colors as in panel A) for a trial in which the true lateral shift is 2 cm. D) The estimate of
the lateral shift for an optimal observer that combines the prior with the evidence. E) The
average lateral deviation from the target as a function of the true lateral shift for the models.
Left: the full compensation model. Middle the Bayesian probabilistic model. Right: the
mapping model (see text for details).
distribution centered on x_true with a standard deviation σ_sensed that depends on the acuity of the system:

p(x_{sensed}|x_{true}) = \frac{1}{\sqrt{2\pi}\,\sigma_{sensed}} \exp\left(-\frac{(x_{sensed} - x_{true})^2}{2\sigma_{sensed}^2}\right) \qquad (2)

As the blur increases we expect σ_sensed to increase (Figure 1C).
2.3 Computational Models and Predictions
There are several computational models which subjects could use to determine the compensation needed to reach the target based on the sensed location of the finger midway through the movement. To analyze the subjects' performance we plot the average lateral deviation ⟨x_true − x_estimate⟩ in a set of bins of x_true as a function of the true shift x_true. Because feedback is not biased this term approximates the expected deviation for the given shift. Three competing computational models are able to predict such a graph.
1) Compensation model. Subjects could compensate for the sensed lateral shift and thus use x_estimate = x_sensed. The average lateral deviation should thus be zero for all x_true (Figure 1E, left panel). In this model, increasing the uncertainty of the feedback σ_sensed (by increasing the blur) affects the variability of the pointing but not the average location. Errors arise from variability in the visual feedback and the mean squared error (MSE) for this strategy (ignoring motor variability) is σ_sensed². Crucially this model does not require subjects to estimate their visual uncertainty nor the distribution of shifts.
2) Bayesian model. Subjects could optimally use prior information about the distribution and the uncertainty of the visual feedback to estimate the lateral shift. They have to estimate x_true given x_sensed. Using Bayes rule we can obtain the posterior distribution, that is the probability of a shift given the evidence x_sensed,

p(x_{true}|x_{sensed}) = \frac{p(x_{sensed}|x_{true})\, p(x_{true})}{p(x_{sensed})} \qquad (3)

If subjects choose the most likely shift they also minimize their mean squared error (MSE). We can determine this optimal estimate x_estimate by differentiating (3) after inserting (1) and (2). This optimal estimate is a weighted sum between the mean of the prior and the sensed feedback position:

x_{estimate} = \frac{\sigma_{sensed}^2}{\sigma_{sensed}^2 + \sigma_{prior}^2} \cdot 1\ \mathrm{cm} + \frac{\sigma_{prior}^2}{\sigma_{sensed}^2 + \sigma_{prior}^2} \cdot x_{sensed} \qquad (4)
The average lateral deviation ⟨x_true − x_estimate⟩ is thus linearly dependent on x_true, and the slope increases with increasing uncertainty σ_sensed (Figure 1E middle panel).
The MSE depends on two factors, the width of the prior σ_prior and the uncertainty in the visual feedback σ_sensed. Calculating the MSE for the above optimal choice we obtain:

\mathrm{MSE} = \frac{\sigma_{prior}^2\, \sigma_{sensed}^2}{\sigma_{prior}^2 + \sigma_{sensed}^2} \qquad (5)
which is always less than the MSE for model 1. As we increase the blur, and thus the degree
of uncertainty, the estimate of the shift moves away from the visually sensed displacement
towards the mean of the prior distribution (Figure 1D). Such a computational
strategy thus allows subjects to minimize the MSE at the target.
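To see the size of this advantage, the following short simulation (our own sketch, not taken from the paper) draws shifts from the prior of Eq. (1), adds sensory noise with an assumed σ_sensed = 0.8 cm, and compares the empirical MSE of models 1 and 2 against the predictions of Eqs. (4) and (5):

import numpy as np

# Minimal simulation of models 1 and 2 (our sketch; sigma_sensed is assumed).
rng = np.random.default_rng(0)
mu_prior, sigma_prior = 1.0, 0.5        # prior over lateral shifts, cm (Eq. 1)
sigma_sensed = 0.8                      # assumed visual uncertainty, cm

x_true = rng.normal(mu_prior, sigma_prior, size=100_000)   # imposed shifts
x_sensed = rng.normal(x_true, sigma_sensed)                # noisy midpoint feedback

# Model 1 (full compensation): estimate = sensed shift.
mse_compensation = np.mean((x_true - x_sensed) ** 2)       # approx sigma_sensed**2

# Model 2 (Bayesian): weighted sum of prior mean and evidence, Eq. (4).
w = sigma_prior**2 / (sigma_prior**2 + sigma_sensed**2)    # weight on the evidence
x_estimate = w * x_sensed + (1 - w) * mu_prior
mse_bayes = np.mean((x_true - x_estimate) ** 2)
# Eq. (5) predicts sigma_prior**2 * sigma_sensed**2 / (sigma_prior**2 + sigma_sensed**2)
print(mse_compensation, mse_bayes)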
3) Mapping model. A third computational strategy is to learn a mapping from the sensed shift x_sensed to the optimal lateral shift x_estimate. By minimizing the average error over many trials the subjects could achieve a combination similar to model 2 but without any representation of the prior distribution or the visual uncertainty. However, learning such a mapping requires visual feedback and knowledge of the error at the end of the movement. In our experiment we only revealed the shifted position of the finger at the end of the movement on the clear feedback trials (σ0). Therefore, if subjects learn a mapping, they can only do so for these trials and must apply the same mapping to the blurred conditions (σM, σL). Therefore, this model predicts that the average lateral deviation ⟨x_true − x_estimate⟩ should be independent of the degree of blur (Figure 1E right panel).
2.3.1 Results: Lateral Deviation
Graphs of ⟨x_true − x_estimate⟩ against x_true are shown for a representative subject in Figure 2A. The slope increases with increasing uncertainty and is, therefore, incompatible with models 1 and 3 but is predicted by model 2. Moreover, this transition from using feedback to using prior information occurs gradually with increasing uncertainty, as also predicted by this Bayesian model. These effects are consistent over all the subjects tested. The slope increases with increasing uncertainty in the visual feedback (Figure 2B). Depending on the uncertainty of the feedback, subjects thus combine prior knowledge of the distribution of shifts with new evidence to generate the optimal compensatory movement. Using Bayesian theory we can furthermore infer the degree of uncertainty from the errors the subjects made. Given the width of the prior σ_prior = 0.5 cm and the result in (4) we can
[Figure 2 appears here: panels A–C plotting the lateral deviation x_true − x_estimate [cm] against the lateral shift x_true [cm] for the conditions σ0, σM, σL, σ∞, the fitted slopes per condition, and the inferred priors.]
Figure 2: Results with color codes as in Figure 1. A) The average lateral deviation of the
cursor at the end of the trial as a function of the imposed lateral shift for a typical subject.
Errorbars denote the s.e.m. The horizontal dotted lines indicate the prediction from the full compensation model and the sloped line that from a model that ignores sensory feedback on the current trial and corrects only for the mean over all trials. B) The slopes of the optimal linear fits are shown for the full population of subjects. The stars indicate significance as assessed by a paired t-test. C) The inferred priors and the real prior (red) for each subject and condition.
estimate the uncertainty σ_sensed from Fig 2A. For the three levels of imposed uncertainty, σ0, σM, and σL, we find that the subjects' uncertainties σ_sensed are 0.36 ± 0.1, 0.67 ± 0.3, and 0.8 ± 0.2 cm (mean ± s.d. across subjects), respectively. Furthermore we have developed a novel technique to infer the priors used by the subjects. An obvious choice of x_estimate is the maximum of the posterior p(x_true|x_sensed). The derivative of this posterior with respect to x_true must vanish at the optimal x_estimate. This allows us to estimate the prior p(x_true) used by each subject. Taking derivatives of (3) after inserting (2) and setting to zero we get:

\frac{d \log p(x_{true})}{d x_{true}}\bigg|_{x_{true} = x_{estimate}} = \frac{x_{estimate} - x_{sensed}}{\sigma_{sensed}^2} \qquad (6)
We assume that p(x_sensed|x_true) has a narrow peak around x_true and thus approximate it by δ(x_sensed − x_true). We insert the x_estimate obtained in (4), affecting the scaling of the integral but not its form. The average of x_sensed across many trials is the imposed shift x_true. Therefore the right hand side is measured in the experiment and the left hand side approximates the derivative of log p(x_true). Since p(x_true) must approach zero for both very small and very large x_true, we subtract the mean of the right hand side before integrating numerically to obtain an estimate of the prior p(x_true). Figure 2C shows the priors inferred for each subject and condition. This shows that the real prior (red line) was reliably learned by each subject.
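A minimal sketch of how this inference can be carried out numerically (our own rendering; the binning and normalization details are assumptions, as the paper does not spell them out):

import numpy as np

# Sketch of the prior-inference technique built around Eq. (6); variable
# names and binning are ours. The slope (x_estimate - x_true) / sigma**2,
# averaged within bins of the imposed shift, approximates d log p / dx;
# integrating it (after subtracting its mean) recovers an estimate of p.
def infer_prior(x_true, x_estimate, sigma_sensed, n_bins=20):
    rhs = (x_estimate - x_true) / sigma_sensed**2   # x_true stands in for <x_sensed>
    edges = np.linspace(x_true.min(), x_true.max(), n_bins + 1)
    centers = 0.5 * (edges[:-1] + edges[1:])
    dx = edges[1] - edges[0]
    slope = np.array([rhs[(x_true >= lo) & (x_true < hi)].mean()  # assumes non-empty bins
                      for lo, hi in zip(edges[:-1], edges[1:])])
    slope -= slope.mean()                           # forces p to fall off at both ends
    log_p = np.cumsum(slope) * dx
    p = np.exp(log_p - log_p.max())
    return centers, p / (p.sum() * dx)              # normalized prior estimate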
3 Experiment 2: Mixture of Gaussians Priors
The second experiment was designed to examine whether subjects are able to represent
more complicated priors such as mixtures of Gaussians and if they can utilize such prior
knowledge.
3.1 Methods
12 additional subjects participated in an experiment similar to Experiment 1 with the following changes. The experiments lasted for twice as many trials run on two consecutive
days with 2000 trials performed on each day. Feedback midway through the movement
was always blurred (spheres distributed as a two dimensional Gaussian) and
feedback at the end of the movement was provided on every trial. The prior distribution
was a mixture of Gaussians (Figure 3A,D). One group of 6 subjects was exposed to:

p(x_{true}) = \frac{1}{2\sqrt{2\pi}\,\sigma_{prior}} \left[ \exp\left(-\frac{(x_{true}+\mu)^2}{2\sigma_{prior}^2}\right) + \exp\left(-\frac{(x_{true}-\mu)^2}{2\sigma_{prior}^2}\right) \right] \qquad (7)

where μ is half the distance between the two peaks of the Gaussians and σ_prior is the width of each Gaussian, which is set to 0.5 cm. Another group of 6 subjects experienced

p(x_{true}) = \frac{1}{3\sqrt{2\pi}\,\sigma_{prior}} \left[ \exp\left(-\frac{(x_{true}+\mu)^2}{2\sigma_{prior}^2}\right) + \exp\left(-\frac{x_{true}^2}{2\sigma_{prior}^2}\right) + \exp\left(-\frac{(x_{true}-\mu)^2}{2\sigma_{prior}^2}\right) \right] \qquad (8)

In this case we set μ so that the variance is identical to the two Gaussians case; σ_prior is still 0.5 cm.
To estimate the priors learned by the subjects we fitted and compared two models. The
first assumed that subjects learned a single Gaussian distribution and the second assumed
that subjects learned a mixture of Gaussians and we tuned the position of the Gaussians to
minimize the MSE between predicted and actual data.
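For intuition, the Bayesian estimate under a mixture-of-Gaussians prior can be written in closed form: the posterior is again a mixture whose component weights depend on the evidence, so the optimal estimate becomes a nonlinear function of the sensed shift. The sketch below is our own construction, not the authors' fitting code:

import numpy as np

# Posterior-mean estimate under an equal-weight mixture-of-Gaussians prior.
def mixture_estimate(x_sensed, means, sigma_prior, sigma_sensed):
    means = np.asarray(means, dtype=float)
    var = sigma_prior**2 + sigma_sensed**2
    # Responsibility of each prior component for the observed evidence.
    resp = np.exp(-(x_sensed - means) ** 2 / (2 * var))
    resp /= resp.sum()
    # Each component shrinks the evidence towards its own mean (cf. Eq. 4).
    w = sigma_prior**2 / (sigma_prior**2 + sigma_sensed**2)
    comp_means = w * x_sensed + (1 - w) * means
    return float(resp @ comp_means)

# e.g. the two-Gaussians prior of Eq. (7) with peaks at +/- mu:
# mixture_estimate(0.3, means=[-2.0, 2.0], sigma_prior=0.5, sigma_sensed=1.0)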
[Figure 3 appears here: panels A–C (two-Gaussians prior) and D–F (three-Gaussians prior), plotting the relative frequency p(x_true) and the lateral deviation x_true − x_estimate [cm] against the lateral shift x_true [cm] for a single subject and for all subjects.]
Figure 3: A) The imposed distribution of x_true for the two-Gaussians prior. B) The performance of an arbitrarily chosen subject is shown together with a fit from the ignore-prior model (dotted line), the single-Gaussian model (dashed line) and the Bayesian mixture-of-Gaussians model (solid line). C) The average response over all subjects. D)–F) The same as A)–C) for the three-Gaussians distribution.
3.2 Results: Two Gaussians Distribution
The resulting response graphs (Figure 3B,C) show clear nonlinear effects. Fitting the μ and σ_prior of a two component mixture of Gaussians model led to an average error over all 6 subjects of 0.14 ± 0.01 cm, compared to an average error of 0.19 ± 0.02 cm obtained for a single Gaussian. The difference is statistically significant. The mixture model of the prior is thus better able to explain the data than the model that assumes that people can just represent one Gaussian. One of the subjects compensated least for the feedback and his data was well fit by a single Gaussian. After removing this subject from the dataset we could fit the width of the distribution and obtained 2.4 ± 0.4 cm, close to the real value of the probability density function of 2 cm.
3.3 Results: Three Gaussians Distribution
The resulting response graphs (Figure 3E,F) again show clear nonlinear effects. Fitting the μ and σ_prior of the three Gaussians model (Figure 3E) led to an average error over all subjects of 0.21 ± 0.02 cm instead of an error of 0.25 ± 0.02 cm from a single Gaussian. The fitted distance μ, however, was 2.0 ± 0.4 cm, significantly smaller than the real distance.
This result shows that subjects cannot fully learn this more complicated distribution but rather just learn some of its properties. This could be due to several effects. First, large values of x_true are experienced only rarely. Second, it could be that subjects use a simpler model such as a generalized Gaussian (the family of distributions to which the Laplacian distribution also belongs) or that they use a mixture of only a few Gaussians. Third, subjects could have a prior over priors that makes a mixture of three Gaussians model very unlikely. Learning such a mixture would therefore be expected to take far longer.
3.4 Results: Evolution of the Subjects' Performance
[Figure 4 appears here: A) average error [cm] over trials 1–4000; B) lateral deviation x_true − x_estimate [cm] against lateral shift x_true [cm] for successive blocks of 500 trials; C) additional variance explained by the full model [%] for the eight blocks of 500 trials.]
Figure 4: A) The mean error over the 6 subjects is shown as a function of the trial number. B) The average lateral deviation as a function of the shift and the trial number. C) The additional variance explained by the full model is plotted as a function of the trial number.
As a next step we wanted to analyze how the behaviour of the subjects changes over the
course of training. During the process of training the average error over batches of 500
subsequent trials decreased from 1.97 cm to 0.84 cm (Figure 4A). What change leads to
this decrease?
To address this we plot the evolution of the lateral deviation graph, as a function of the
trial number (Figure 4B). Subjects initially exhibit a slope of about 1 and approximately
linear behaviour. This indicates that initially they are using a narrow Gaussian prior. In
other words they rely on the prior belief that their hand will not be displaced and ignore
the feedback. Only later during training do they show behaviour that is consistent with a
bimodal Gaussian distribution.
In Figure 4C we plot the percentage of additional variance explained by the full model when compared to the Gaussian model, averaged over the population. The explanatory power of the full model seems to improve in particular after trial 2000, the trial after which people enjoy a night's rest. It could be that subjects need a consolidation period to
adequately learn the distribution. Such improvements in learning contingent upon sleep
have also been observed in visual learning [7].
4 Conclusion
We have shown that a prior is used by humans to determine appropriate motor commands
and that it is combined with an estimate of sensory uncertainty. Such a Bayesian view of
sensorimotor learning is consistent with neurophysiological studies that show that the brain
represents the degree of uncertainty when estimating rewards [8-10] and with psychophysical studies addressing the timing of movements [11]. Not only do people represent the
uncertainty and combine this with prior information, they are also able to represent and utilize complicated non-Gaussian priors. Optimally using a priori knowledge might be key to
winning a tennis match. Tennis professionals spend a great deal of time studying their opponent before playing an important match - ensuring that they start the match with correct
a priori knowledge.
Acknowledgments
We would like to thank Zoubin Ghahramani for inspiring discussions and the Wellcome Trust for financial support. We would also like to thank James Ingram for technical support.
References
[1] Cox, R.T. (1946) American Journal of Physics 17, 1
[2] Bernardo, J.M. & Smith, A.F.M. (1994) Bayesian theory. John Wiley
[3] Berrou, C., Glavieux, A. & Thitimajshima, P. (1993) Proc. ICC'93 Geneva, Switzerland, 1064
[4] Simoncelli, E.P. & Adelson, E.H. (1996) Proc. 3rd International Conference on Image Processing
Lausanne, Switzerland
[5] Weiss, Y., Simoncelli, E.P. & Adelson, E.H. (2002) Nature Neuroscience 5, 598
[6] Goodbody, W. & Wolpert, D. (1998) Journal of Neurophysiology 79,1825
[7] Stickgold, R., James, L. & Hobson, J.A. (2000) Nature Neuroscience 3, 1237
[8] Fiorillo, C.D., Tobler, P.N. & Schultz, W. (2003) Science 299, 1898
[9] Basso, M.A. & Wurtz, R.H. (1998) Journal of Neuroscience 18, 7519
[10] Platt, M.L. & Glimcher, P.W. (1999) Nature 400, 233
[11] Carpenter, R.H. & Williams, M.L. (1995) Nature 377, 59
1,567 | 2,424 | Envelope-based Planning in Relational MDPs
Natalia H. Gardiol
MIT AI Lab
Cambridge, MA 02139
[email protected]
Leslie Pack Kaelbling
MIT AI Lab
Cambridge, MA 02139
[email protected]
Abstract
A mobile robot acting in the world is faced with a large amount of sensory data and uncertainty in its action outcomes. Indeed, almost all interesting sequential decision-making domains involve large state spaces
and large, stochastic action sets. We investigate a way to act intelligently as quickly as possible in domains where finding a complete policy
would take a hopelessly long time. This approach, Relational Envelope-based Planning (REBP), tackles large, noisy problems along two axes.
First, describing a domain as a relational MDP (instead of as an atomic
or propositionally-factored MDP) allows problem structure and dynamics to be captured compactly with a small set of probabilistic, relational
rules. Second, an envelope-based approach to planning lets an agent begin acting quickly within a restricted part of the full state space and to
judiciously expand its envelope as resources permit.
1 Introduction
Quickly generating usable plans when the world abounds with uncertainty is an
important and difficult enterprise. Consider the classic blocks world domain: the number
of ways to make a stack of a certain height grows exponentially with the number of blocks
on the table; and if the outcomes of actions are uncertain, the task becomes even more
daunting. We want planning techniques that can deal with large state spaces and large,
stochastic action sets since most compelling, realistic domains have these characteristics.
In this paper we propose a method for planning in very large domains by using expressive
rules to restrict attention to high-utility subsets of the state space.
Much of the work in traditional planning techniques centers on propositional, deterministic
domains. See Weld's survey [12] for an overview of the extensive work in this area. Efforts
to extend classical planning approaches into stochastic domains include mainly techniques
that work with fully-ground state spaces [13, 2]. Conversely, efforts to move beyond propositional STRIPS-based planning involve work in mainly deterministic domains [6, 10].
But the world is not deterministic: for an agent to act robustly, it must handle uncertain dynamics as well as large state and action spaces. Markov decision theory provides techniques
for dealing with uncertain outcomes in atomic-state contexts, and much work has been
done in leveraging structured representations to solve very large MDPs and some POMDPs
[9, 3, 7]. While these techniques have moved MDP techniques from atomic-state representations to factored ones, they still operate in fully-ground state spaces.
In order to describe large stochastic domains compactly, we need relational structures that
can represent uncertainty in the dynamics. Relational representations allow the structure
of the domain to be expressed in terms of object properties rather than object identities
and thus yield a much more compact representation of a domain than the equivalent propositional version can. Efficient solutions for probabilistic, first-order MDPs are difficult to
come by, however. Boutilier et al.[3] find policies for first-order MDPs by solving for the
value-function of a first-order domain: the approach manipulates logical expressions that
stand for sets of underlying states, but keeping the value-function representation manageable requires complex theorem-proving. Other approaches in relational MDPs represent the
value function as a decision-tree [5] or as a sum of local subfunctions [8]. Another recent
body of work avoids learning the value function and learns policies directly from example
policies [14]. These approaches all compute full policies over complete state and action
spaces, however, and so are of a different spirit than the work presented here.
The underlying message is nevertheless clear: the more an agent can compute logically and
the less it attends to particular domain objects, the more general its solutions will be. Since
fully-ground representations grow too big to be useful and purely logical representations
are as yet unwieldy, we propose a middle path: we agree to ground things out, but in a principled, restricted way. We represent world dynamics by a compact set of relational rules,
and we extend the envelope method of Dean et al.[4] to use these structured dynamics. We
quickly come up with an initial trajectory (an envelope of states) to the goal and then
refine the policy by gradually incorporating nearby states into the envelope. This approach
avoids the wild growth of purely propositional techniques by restricting attention to a useful subset of states. Our approach strikes a balance along two axes: between fully ground
and purely logical representations, and between straight-line plans and full MDP policies.
2 Planning with an Envelope in Relational Domains
The envelope method was initially designed for planning in atomic-state MDPs. Goals of
achievement are encoded as reward functions, and planning now becomes finding a policy
that maximizes a long-term measure of reward. Extending the approach to a relational
setting lets us cast the problem of planning in stochastic, relational domains in terms of
finding a policy for a restricted Markovian state space.
2.1 Encoding Markovian dynamics with rules
The first step to extending the envelope method to relational domains is to encode the
world dynamics relationally. We use a compact set of rules, as in Figure 1. Each rule, or
operator, is denoted by an action symbol and a parameterized argument list. Its behavior
is defined by a precondition and a set of outcomes, together called the rule schema. Each
precondition and outcome is a conjunction of domain predicates. A rule applies in a state
if its precondition can be matched against some subset of the state ground predicates. Each
outcome then describes the set of possible resulting ground states. Given this structured
representation of action dynamics, we define a relational MDP as a tuple ⟨P, Z, O, T, R⟩:
States: The set of states is defined by a finite set P of relational predicates, representing
the properties and relations that can hold among the finite set of domain objects, O. Each
RMDP state is a ground interpretation of the domain predicates over the domain objects.
Actions: The set of ground actions depends on the set of rules Z and the objects in the
world. For example, move(A, B) can be bound to the table arrangement in Figure 2(a) by
binding A to block 1 and B to block 4 to yield the ground action move(1, 4).
Transition Dynamics: For each action, the distribution over next states is given compactly by the distribution over outcomes encoded in the schema. For example, executing
move(A, B)
  pre: (clear(B, t), hold(nil), height(B, H), incr(H, H′), clear(A, t), on(A, C), broke(f))
  eff: [0.70] (on(A, B), height(A, H′), clear(A, t), clear(B, f), hold(nil), clear(C, t))
       [0.30] (on(A, table), clear(A, t), height(A, 0), hold(nil), clear(C, t), broke(t))
fix()
  pre: (broke(t))
  eff: [0.97] (broke(f))
       [0.03] (broke(t))
stackon(B)
  pre: (clear(B, t), hold(A), height(B, H), incr(H, H′), broke(f))
  eff: [0.97] (on(A, B), height(A, H′), clear(A, t), clear(B, f), hold(nil))
       [0.03] (on(A, table), clear(A, t), height(A, 0), hold(nil), broke(t))
stackon(table)
  pre: (clear(table, t), hold(A), broke(f))
  eff: [1.00] (on(A, table), height(A, 0), clear(A, t), hold(nil))
pickup(A)
  pre: (clear(A, t), hold(nil), on(A, B), broke(f))
  eff: [1.00] (hold(A), clear(A, f), on(A, nil), clear(B, t), height(A, −1))

Figure 1: The set of relational rules, Z, for blocks-world dynamics. Each rule schema contains the action name, precondition, and a set of effects.
move(1, 4) yields a 0.3 chance of landing in a state where block 1 falls on the table, and
a 0.7 chance of landing in a state where block 1 is correctly put on block 4. The rule
outcomes themselves usually only specify a subset of the domain predicates, effectively
describing a set of possible ground states. We assume a static frame: state predicates not
directly changed by the rule are assumed to remain the same.
Rewards: A state is deterministically mapped to a scalar reward according to function R(s).
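As an illustration of this definition, a rule schema such as move(A, B) from Figure 1 could be encoded as follows; this is a sketch in a data layout of our own choosing, since the paper does not prescribe an implementation:

from dataclasses import dataclass

# One possible encoding of a rule schema (our layout). Literals are tuples
# of a predicate name followed by its arguments; upper-case strings are
# schema variables, to be bound against the ground state.
@dataclass
class Rule:
    name: str
    params: tuple      # free arguments, e.g. ('A', 'B') for move
    precond: tuple     # conjunction of literals
    outcomes: tuple    # (probability, conjunction of effect literals) pairs

move = Rule(
    name='move',
    params=('A', 'B'),
    precond=(('clear', 'B', 't'), ('hold', 'nil'), ('height', 'B', 'H'),
             ('incr', 'H', "H'"), ('clear', 'A', 't'), ('on', 'A', 'C'),
             ('broke', 'f')),
    outcomes=(
        (0.70, (('on', 'A', 'B'), ('height', 'A', "H'"), ('clear', 'A', 't'),
                ('clear', 'B', 'f'), ('hold', 'nil'), ('clear', 'C', 't'))),
        (0.30, (('on', 'A', 'table'), ('clear', 'A', 't'), ('height', 'A', 0),
                ('hold', 'nil'), ('clear', 'C', 't'), ('broke', 't'))),
    ),
)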
2.2 Initial trajectory planning
The next step is finding an initial path. In a relational setting, when the underlying MDP
space implied by the full instantiation of the representation is potentially huge, a good
initial envelope is crucial. It determines the quality of the early envelope policies and sets
the stage for more elaborate policies later on.
For planning in traditional STRIPS domains, the Graphplan algorithm is known to be effective [1]. Graphplan finds the shortest straight-line plan by iteratively growing a forward-chaining structure called a plangraph and testing for the presence of goal conditions at each
step. Blum and Langford [2] describe a probabilistic extension called TGraphplan (TGP)
that works by returning a plan's probability of success rather than just a boolean flag.
TGP can find straight-line plans fairly quickly from start to goal that satisfy a minimum
probability. Given TGP?s success in probabilistic STRIPS domains, a straightforward idea
is to use the trajectory found by TGP to populate our initial envelope.
Nevertheless, this should give us pause: we have just said that our relational MDP describes
a large underlying MDP. TGP and other Graphplan descendants work by grounding out
the rules and chaining them forward to construct the plangraph. Large numbers of actions cause severe problems for Graphplan-based planners [11] since the branching factor
quickly chokes the forward-chaining plangraph construction. So how do we cope?
[Figure 2 appears here: (a) a table of five numbered blocks with ground predicates such as on(3,2), clear(3,t), height(3,1), color(3,blue), hold(nil), clear(table,t), broke(f); (b) the groundings of move(A, B) in this configuration; (c) a plangraph fragment with move(b1,b2) chained forward, splitting into a 0.7 outcome (on(b1,b2), height(b1,1), clear(b2,f), ...) and a 0.3 outcome (on(b1,table), broke(t), ...).]
Figure 2: (a) Given this world configuration, the move action produces three types of effects. (b)
12 different groundings for the argument variables, but not all produce different groundings for the
derived variables. (c) A plangraph fragment with a particular instance of move chained forward.
2.3 Equivalence-class sampling: reducing the planning action space
STRIPS rules require every variable in the rule schema to appear in the argument list, so move(A, B) becomes move(A, B, H, H′, C). The meaning of the operator shifts from "move A onto B" to "move A at height H′ onto B at height H from C". Not only is this awkward, but specifying all the variables in the argument list yields an exponential number of ground actions as the number of domain objects grows. In contrast, the operators we defined above have argument lists containing only those variables that are free parameters.
That is, when the operator move(A, B) takes two arguments, A and B , it means that the
other variables (such as C , the block under A) are derivable from the relations in the rule
schema. Guided by this observation, one can generalize among bindings that produce
equivalent effects on the derivable properties.
Consider executing the move(A, B) rule in the world configuration in Figure 2. This creates 12 fully-ground actions. However, examining the bindings reveals only three types of
action-effects. There is one group of actions that move a block from one block and onto
another; a group that moves a block from the table and onto a block of height zero; and
another group that moves a block off the table and onto a block of height one.
Except for the identities of the argument blocks A and B , the actions in each class produce
equivalent groundings for the properties of the related domain objects. Rather than using
all the actions, then, the plangraph can be constructed chaining forward only a sampled
action from each class. We call this equivalence-class sampling; the sampled action is
representative of the effects of any action from that class. Sampling reduces the branching
factor at each step in the plangraph, so significantly larger domains can be handled.
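A sketch of the sampling step as we read it is given below; ground_bindings is a hypothetical helper that enumerates the bindings of a rule's precondition against a state, and rule.params denotes the rule's free arguments, as in the encoding sketched earlier:

from collections import defaultdict
import random

# Equivalence-class sampling (sketch): group the bindings of a rule by the
# values of the derived (non-argument) variables and keep one sample each.
def sample_representatives(rule, state, rng=random):
    classes = defaultdict(list)
    for binding in ground_bindings(rule, state):       # hypothetical helper
        key = tuple(sorted((v, binding[v]) for v in binding
                           if v not in rule.params))   # derived variables only
        classes[key].append(binding)
    return [rng.choice(group) for group in classes.values()]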
3 From a Planning Problem to a Policy
Now we describe the approach in detail. We define a planning problem as containing:
Rules: These are the relational operators that describe the action effects. In our system,
they are designed by hand and the probabilities are specified by the programmer.
Initial World State: The set of ground predicates that describes the starting state. REBP
does not make the closed world assumption, so all predicates and objects required in the
planning task must appear in the initial state.
Goal Condition: A conjunction of relational predicates. The goal may contain variables ?
it does not need to be fully ground.
[Figure 3 appears here: a sequence of envelope MDPs over two-block states, with actions move(1,2), pickup and fix, transition probabilities 0.7/0.3 and 0.97/0.03, and an absorbing OUT state.]
Figure 3: An initial envelope corresponding to the plangraph segment of Figure 2(c), followed by fringe sampling and envelope expansion.
Rewards: A list of conjunctions mapping matching states to a scalar reward value. If a state
in the current MDP does not match a reward condition, the default value is 0. Additionally,
there must be a penalty associated with falling out of the envelope. This penalty is an
estimate of the cost of having to recover from falling out (such as having to replan back to
the envelope, for example).
Given a planning problem, there are now three main components to REBP: ?nding an initial
plan, converting the plan into an MDP, and envelope manipulation. A running example to
illustrate the approach will be the tiny task of making a two-block stack in a domain with
two blocks. Figure 3 illustrates output produced by a run of the algorithm.
3.1 Finding an initial plan
The process for making the initial trajectory essentially follows the TGP algorithm described by Blum and Langford [2]. The TGP algorithm starts with the initial world state as
the ?rst layer in the graph, a minimum probability cutoff for the plan, and a maximum plan
depth. We use the equivalence-class sampling technique discussed above to prune actions
from the plangraph. Figure 2(c) shows one step of a plangraph construction.
3.2 Turning the initial plan into an MDP
The TGP algorithm produces a sequence of actions. The next step is to turn the sequence of action-effects into a well-defined envelope MDP; that is, we must compute the set of states and the transitions. Usually, the sequence of action-effects alone leaves many state predicates unspecified. Currently, we assume a static frame, which implies that the value of a predicate remains the same unless it is known to have explicitly changed.
The set of RMDP states is computed iteratively: first, the envelope is initialized with the initial world state; then, the next state in the envelope is found by applying the plan action to the previous state and 'filling in' any missing predicates with their previous values; when
the state containing the goal condition is reached, the set of states is complete. To compute
the set of actions, REBP loops through the list of operators and accumulates all the ground
actions whose preconditions bind to any state in the envelope. Transitions that initiate in an
envelope state but do not land in an envelope state are redirected to OUT. The leftmost MDP
in Figure 3 shows the initial envelope corresponding to the one-step plan of Figure 2(c).
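In code, this conversion could look like the following sketch (our rendering; applicable and apply_outcome are assumed interfaces for precondition matching and the static-frame update):

OUT = 'OUT'   # absorbing state for transitions that leave the envelope

# Build the envelope MDP's transition function (sketch, assumed interfaces).
def envelope_transitions(envelope, ground_actions):
    T = {}   # (state, action) -> list of (probability, next_state)
    for s in envelope:
        for a in ground_actions:
            if not a.applicable(s):                   # precondition matches s?
                continue
            dist = []
            for prob, effect in a.outcomes:
                s2 = apply_outcome(s, effect)         # static frame fills in the rest
                dist.append((prob, s2 if s2 in envelope else OUT))
            T[(s, a)] = dist
    return T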
3.3 Envelope Expansion
Envelope expansion, or deliberation, involves adding to the subset of world states under
consideration. The decision of when and how long to deliberate must compare the expected utility of further thinking against the cost of doing so. Dean et al. [4] discuss this complex issue in depth. As a first step, we considered the simple precursor deliberation model, in which deliberation occurs for some number of rounds r and is completed before
execution takes place.
A round of deliberation involves sampling from the current policy to estimate which fringe states (states one step outside of the envelope) are likely. In each round, REBP draws d · M samples (drawing from an exploratory action with probability ε) and keeps counts of which fringe states are reached. The f · M most likely fringes are added to the envelope, where M is the number of states in the current envelope and d and f are scalars. After expansion, we recompute the set of actions and compute a new policy.
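One plausible rendering of a single deliberation round, with the policy and simulator interfaces left as assumptions:

from collections import Counter
import random

# One deliberation round (sketch): sample transitions under the current
# policy, count the fringe states reached, and add the most likely ones.
def expand_envelope(envelope, policy, simulate, d=10, f=0.3, eps=0.2):
    M = len(envelope)
    counts = Counter()
    states = list(envelope)
    for _ in range(int(d * M)):
        s = random.choice(states)
        a = policy.random_action(s) if random.random() < eps else policy[s]
        s2 = simulate(s, a)                    # draw one stochastic transition
        if s2 not in envelope:
            counts[s2] += 1                    # fringe: one step outside
    for s2, _ in counts.most_common(max(1, int(f * M))):
        envelope.add(s2)
    return envelope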
Figure 3 shows a sequence of fringe sampling and envelope expansion. We see the incorporation of the fringe state in which the hand breaks as a result of move. With the new
envelope, the policy is re-computed to include the fix action. This is a conditional plan that
a straight-line planner could not find.
4 Experimental Domain
To illustrate the behavior of REBP, we show preliminary results in a stochastic blocks world.
While simple, blocks world is a reasonably interesting first domain because, with enough
blocks, it exposes the weaknesses of purely propositional approaches. Its regular dynamics,
on the other hand, lend themselves to relational descriptions. This domain demonstrates the
type of scaling that can be achieved with the REBP approach.
The task at hand is to build a stack containing all the blocks on the table. In this domain,
blocks are stacked on one another, with the top block in a stack being clear. Each block
has a color and is at some height in the stack. There is a gripper that may or may not
be broken. The pickup(A) action is deterministic and puts a clear block into the empty
hand; a block in the hand is no longer clear, and its height and and on-ness are no longer
defined. The fix() action takes a broken hand and fixes it with some probability. The
stackon() action comes in two flavors: first, stackon(B), takes a block from the hand and
puts it on block B , which may be dropped onto the table with a small probability; second,
stackon(table), always puts the block from the hand onto the table. The move(A, B) and
stackon(B) actions also have some chance of breaking the hand. If the hand is broken, it
must be fixed before any further actions can apply. The domain is formalized as follows:3
P : on(Block, Block), clear(Block, TorF), color(Block, Color), height(Block, Num), hold(Block), clear(table, TorF), broke(TorF).
Z, T : The rules are shown in Figure 1.
O : A set of n differently colored (red, green, blue) blocks.
R(s) : If ∃A height(A, n − 1), then 1; if broke(t), then −2; if OUT, then −1.
5 Empirical Results
We compared the quality of the policies generated by the following algorithms: REBP;
envelope expansion starting from an empty initial plan (i.e., the initial envelope containing
only the initial world state); and policy iteration on the fully ground MDP.4
In all cases, the policy was computed by simple policy iteration with a discount of 0.9 and
a stopping threshold of 0.1. In the case of REBP, the number of deliberation rounds r was
10, d was 10, f was 0.3, and was 0.2. In the case of the deliberation-only envelope, the r
was increased to 35. The runs were averaged over at least 7 trials in each case.
We show numerical results for domains with 5 and 6 blocks. The size of the full MDP
in each case is, respectively, 768 and 5,228 states, with 351 and 733 ground actions. A
3 The predicates behave like functions in the sense that the nth argument represents the value of the relation for the first n − 1 arguments. Thus, we say clear(block5, f) instead of ¬clear(block5).
4 Starting with the initial state, the set of states is generated by exhaustively applying our operators until no more new states are found; this yields the true set of reachable states.
Figure 4: Results for the block-stacking tasks. The top plots show policy value against computation
time for REBP and the full MDP. The bottom plots show policy value against number of states for
REBP and deliberation only (empty initial plan).
domain of 7 blocks results in an MDP of over 37,000 states with 1,191 actions, a combined
state and action space that is too overwhelming for the full MDP solution. The REBP agent, on
the other hand, is able to find plans for making stacks in domains of more than 12 blocks,
which corresponds to an MDP of about 88,000 states and 3,000 ground actions.
The plots in Figure 4 show intuitive results. The top row shows the value of the policy
against execution time (as measured by a monitoring package) showing that the REBP algorithm produces good quality plans quickly. For REBP, we start measuring the value of
the policy at the point when initial trajectory finding ends and deliberation begins; for the
full MDP solution, we measure the value of the policy at the end of each round of policy
iteration. The full MDP takes a long time to find a policy, but eventually converges. Without
the equivalence-class sampling, plangraph construction takes on the order of a couple of
hours; with it, it takes a couple of minutes. The bottom row shows the value of the policy
against the number of states in the envelope so far and shows that a good initial envelope
is key for behaving well with fewer states.
6 Discussion and Conclusions
Using the relational envelope method, we can take real advantage of relational generalization to produce good initial plans efficiently, and use envelope-growing techniques to
improve the robustness of our plans incrementally as time permits. REBP is a planning system that tries to dynamically reformulate an apparently intractable problem into a small,
easily handled problem at run time.
However, there is plenty remaining to be done. The first thing needed is a more rigorous
analysis of the equivalence-class sampling. Currently, the action sampling is a purely local
decision made at each step of the plangraph. This works in the current setup because
object identities do not matter and properties not mentioned in the operator outcomes are
never part of the goal condition. If, on the other hand, the goal was to make a stack of
height n − 1 with a green block on top, it could be problematic to construct the plangraph
without considering block color in the sampled actions. We are currently investigating what
conditions are necessary for making general guarantees about the sampling approach.
Furthermore, the current envelope-extension method is relatively undirected; it might be
possible to diagnose more effectively which fringe states would be most profitable to add.
In addition, techniques such as those used by Dean et al. [4] could be employed to decide
when to stop envelope growth, and to manage the eventual interleaving of envelope-growth
and execution. Currently the states in the envelope are essentially atomic; it ought to be
possible to exploit the factored nature of relational representations to allow abstraction in
the MDP model, with aggregate "states" in the MDP actually representing sets of states in
the underlying world.
In summary, the REBP method provides a way to restrict attention to a small, useful subset
of a large MDP space. It produces an initial plan quickly by taking advantage of generalization among action effects, and as a result behaves smarter in a large space much sooner
than it could by waiting for a full solution.
Acknowledgements
This work was supported by an NSF Graduate Research Fellowship, by the Office of Naval
Research contract #N00014-00-1-0298, and by NASA award #NCC2-1237.
References
[1] Avrim L. Blum and Merrick L. Furst. Fast planning through planning graph analysis. Artificial Intelligence, 90:281–300, 1997.
[2] Avrim L. Blum and John C. Langford. Probabilistic planning in the graphplan framework. In
5th European Conference on Planning, 1999.
[3] Craig Boutilier, Raymond Reiter, and Bob Price. Symbolic dynamic programming for first-order MDPs. In IJCAI, 2001.
[4] Thomas Dean, Leslie Pack Kaelbling, Jak Kirman, and Ann Nicholson. Planning under time
constraints in stochastic domains. Artificial Intelligence, 76, 1995.
[5] Kurt Driessens, Jan Ramon, and Hendrik Blockeel. Speeding up relational reinforcement learning through the use of an incremental first order decision tree learner. In European Conference
on Machine Learning, 2001.
[6] B. Cenk Gazen and Craig A. Knoblock. Combining the expressivity of UCPOP with the efficiency of graphplan. In Proc. European Conference on Planning (ECP-97), 1997.
[7] H. Geffner and B. Bonet. High-level planning and control with incomplete information using
POMDPs. In Fall AAAI Symposium on Cognitive Robotics, 1998.
[8] C. Guestrin, D. Koller, C. Gearhart, and N. Kanodia. Generalizing plans to new environments
in relational MDPs. In International Joint Conference on Artificial Intelligence, 2003.
[9] Jesse Hoey, Robert St-Aubin, Alan Hu, and Craig Boutilier. SPUDD: Stochastic planning using decision diagrams. In Fifteenth Conference on Uncertainty in Artificial Intelligence, 1999.
[10] J. Koehler, B. Nebel, J. Hoffmann, and Y. Dimopoulos. Extending planning graphs to an ADL
subset. In Proc. European Conference on Planning (ECP-97), 1997.
[11] B. Nebel, J. Koehler, and Y. Dimopoulos. Ignoring irrelevant facts and operators in plan generation. In Proc. European Conference on Planning (ECP-97), 1997.
[12] Daniel S. Weld. Recent advances in AI planning. AI Magazine, 20(2):93?123, 1999.
[13] Daniel S. Weld, Corin R. Anderson, and David E. Smith. Extending graphplan to handle uncertainty and sensing actions. In Proceedings of AAAI ?98, 1998.
[14] SungWook Yoon, Alan Fern, and Robert Givan. Inductive policy selection for first-order MDPs.
In 18th International Conference on Uncertainty in Artificial Intelligence, 2002.
Bounded invariance and the formation of place fields
Reto Wyss and Paul F.M.J. Verschure
Institute of Neuroinformatics
University/ETH Zürich
Zürich, Switzerland
rwyss,[email protected]
Abstract
One current explanation of the view independent representation of
space by the place-cells of the hippocampus is that they arise out
of the summation of view dependent Gaussians. This proposal assumes that visual representations show bounded invariance. Here
we investigate whether a recently proposed visual encoding scheme
called the temporal population code can provide such representations. Our analysis is based on the behavior of a simulated robot
in a virtual environment containing specific visual cues. Our results show that the temporal population code provides a representational substrate that can naturally account for the formation of
place fields.
1 Introduction
Pyramidal cells in the CA3 and CA1 regions of the rat hippocampus have been shown to
be selectively active depending on the animal's position within an environment[1].
The ensemble of locations where such a cell fires, the place field, can be determined by a combination of different environmental and internal cues[2], where vision
has been shown to be of particular importance[3]. This raises the question, how
egocentric visual representations of visual cues can give rise to an allocentric representation of space. Recently it has been proposed that a place field is formed by
the summation of Gaussian tuning curves, each oriented perpendicular to a wall of
the environment and peaked at a fixed distance from it[4, 5, 6]. While this proposal
tries to explain the actual transformation from one coordinate system to another,
it does not account for the problem of how appropriate egocentric representations of
the environment are formed. Thus, it is unclear how the information about a rat's
distance to different walls becomes available, and in particular how this proposal
would generalize to other environments where more advanced visual skills, such as
cue identification, are required.
For an agent moving in an environment, visual percepts of objects/cues undergo a
combination of transformations comprising zooming and rotation in depth. Thus,
the question arises how to construct a visual detector which has a Gaussian-like
tuning with regard to the positions within the environment from which snapshots
Figure 1: Place cells from multiple snapshots. The robot is placed in a virtual
square environment with four patterns on the walls, i.e. a square, a triangle, a
Z and an X. The robot scans the environment for salient stimuli by rotating in
place. A saliency detector triggers the acquisition of visual snapshots which are
subsequently transformed into TPCs. A place cell is defined through its associated
TPC templates.
of a visual cue are taken. The internal representation of a stimulus, upon which
such a detector is based, should be tolerant to certain degrees of visual deformations
without losing specificity or, in other words, show a bounded invariance. In this
study we show that a recently proposed cortical model of visual pattern encoding,
the temporal population code (TPC), directly supports this notion of bounded
invariance[7]. The TPC is based on the notion that a cortical network can be seen
to transform a spatial pattern into a purely temporal code.
Here, we investigate to what extent the bounded invariance provided by the TPC
can be exploited for the formation of place fields. We address this question in the
context of a virtual robot behaving in an environment containing several visual
cues. Our results show that the combination of a simple saliency mechanism with
the TPC naturally gives rise to allocentric representations of space, similar to the
place fields observed in the hippocampus.
2 Methods
2.1 The experimental setup
Experiments are performed using a simulated version of the real-world robot Khepera (K-team, Lausanne, Switzerland) programmed in C++ using OpenGL. The
robot has a circular body with two wheels attached to its side each controlled by an
individual motor. The visual input is provided by a camera with a viewing angle
of 60° mounted on top of the robot. The neural networks are simulated on a Linux
computer using a neural network simulator programmed in C++.
The robot is placed in a square arena (fig. 1, left), and in the following, all lengths will
be given in units of the side lengths of the square environment.
2.2 The temporal population code
Visual information is transformed into a TPC by a network of laterally coupled
cortical columns, each selective to one of four orientations $\theta \in \{0^\circ, 45^\circ, 90^\circ, 135^\circ\}$
and one of three spatial frequencies $\sigma \in \{\text{high}, \text{medium}, \text{low}\}$ [7]. The outputs of
the network are twelve vectors $A_{\theta,\sigma}$, each reflecting the average population activity
recorded over 100 time-steps for each type of cortical column. These vectors are
reduced to three vectors $A_\sigma$ by concatenating the four orientations. This set of
vectors forms the TPC, which represents a single snapshot of a visual scene.
The similarity $S(s_1, s_2)$ between two snapshots $s_1$ and $s_2$ is defined as the average
correlation $\rho$ between the corresponding vectors, i.e.

$$S(s_1, s_2) = \left\langle Z\left(\rho(A_\sigma^{s_1}, A_\sigma^{s_2})\right) \right\rangle_\sigma \qquad (1)$$

where $Z$ is the Fisher Z-transform given by $Z(\rho) = \tfrac{1}{2}\ln((1+\rho)/(1-\rho))$, which
transforms a typically skewed distribution of correlation coefficients $\rho$ into an approximately normal distribution of coefficients. Thus, $Z(\rho)$ becomes a measure on
a proportional scale such that mean values are well defined.
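As a concrete illustration, a minimal Python sketch of this similarity measure; the list-of-arrays layout for the three vectors $A_\sigma$ is an assumption for illustration, not taken from the authors' C++ simulator:

    import numpy as np

    def fisher_z(rho):
        # Fisher Z-transform, Z(rho) = 1/2 ln((1+rho)/(1-rho))
        rho = np.clip(rho, -0.999999, 0.999999)  # guard against rho = +/-1
        return 0.5 * np.log((1.0 + rho) / (1.0 - rho))

    def snapshot_similarity(A1, A2):
        """Similarity S(s1, s2) of eq. 1: average Fisher-Z-transformed
        correlation over the three spatial-frequency vectors A_sigma.
        A1, A2: lists of three 1-D numpy arrays."""
        zs = [fisher_z(np.corrcoef(a, b)[0, 1]) for a, b in zip(A1, A2)]
        return float(np.mean(zs))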
2.3 Place cells from multiple snapshots
In this study, the response properties of a place cell are given by the similarity
between incoming snapshots of the environment and template snapshots associated
to the place cell when it was constructed. Thus, for both, the acquisition of place
cells as well as their exploitation, the system needs to be provided with snapshots
of its environment that contain visual features. For this purpose, the robot is
equipped with a simple visual saliency detector s(t) that selects scenes with high
central contrast:

$$s(t) = \frac{\sum_y e^{-y^2}\, c(y, t)^2}{\sum_y c(y, t)^2}$$

where $c(y, t)$ denotes the contrast at location $y \in [-1, +1]^2$ in the image at time
$t$. At each point in time where $s(t) > \theta_{\text{saliency}}$, a new snapshot is acquired with a
probability of 0.1. A place cell $k$ is defined by $n$ snapshots called templates $t_k^i$ with
$i = 1 \ldots n$.
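A sketch of this saliency measure applied to a contrast map; the normalization of image coordinates to $[-1,+1]^2$ follows the text, while the grid construction is an implementation assumption:

    import numpy as np

    def saliency(contrast):
        """s(t) for a 2-D array of local contrast values c(y, t)."""
        h, w = contrast.shape
        yy, xx = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w),
                             indexing='ij')
        weight = np.exp(-(yy**2 + xx**2))   # e^{-y^2} with y in [-1,+1]^2
        c2 = contrast**2
        return float((weight * c2).sum() / c2.sum())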
Whenever the robot tries to localize itself, it scans the environment by rotating
in place and taking snapshots of visually salient scenes (fig. 1). The similarity
$S$ between each incoming snapshot $s_j$ with $j = 1 \ldots m$ and every template $t_k^i$ is
determined using eq. 1. The activation $a_k$ of place cell $k$ for a series of $m$ snapshots
$s_j$ is then given by a sigmoidal function

$$a_k(i_k) = \left(1 + \exp(-\alpha (i_k - \theta))\right)^{-1} \quad \text{where} \quad i_k = \left\langle \max_i S(t_k^i, s_j) \right\rangle_j. \qquad (2)$$

$i_k$ represents the input to the place cell, which is computed by determining the
maximal similarity of each snapshot to any template of the place cell and subsequent
averaging, i.e. $\langle \cdot \rangle_j$ corresponds to the average over all snapshots $j$.
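Given the similarity function sketched above, the activation of eq. 2 can be written as follows; the sigmoid parameters alpha and theta are left free, since their values are not given here:

    import numpy as np

    def place_cell_activation(templates, snapshots, alpha, theta, similarity):
        """a_k of eq. 2. templates: TPCs t_k^i of one place cell;
        snapshots: incoming TPCs s_j; similarity: e.g. snapshot_similarity."""
        # Best-matching template per snapshot, then average over snapshots
        i_k = np.mean([max(similarity(t, s) for t in templates)
                       for s in snapshots])
        return 1.0 / (1.0 + np.exp(-alpha * (i_k - theta)))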
2.4 Position reconstruction
There are many different approaches to the problem of position reconstruction or
decoding from place cell activity[8]. A basis function method uses a linear combination of basis functions $\phi_k(\mathbf{x})$ with the coefficients proportional to the activity of
the place cells $a_k$. Here we use a direct basis approach, i.e. the basis function $\phi_k(\mathbf{x})$
directly corresponds to the average activation $a_k$ of place cell $k$ at position $\mathbf{x}$ within
the environment. The reconstructed position $\hat{\mathbf{x}}$ is then given by

$$\hat{\mathbf{x}} = \arg\max_{\mathbf{x}} \sum_k a_k\, \phi_k(\mathbf{x})$$
The reconstruction error is given by the distance between the reconstructed and
true position averaged over all positions within the environment.
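A sketch of this direct basis decoder; the array layout (response maps sampled on a grid of candidate positions) is an assumed discretization:

    import numpy as np

    def reconstruct_position(a, phi, positions):
        """a: (K,) place-cell activations for the current scan;
        phi: (K, P) average activation of cell k at grid position p;
        positions: (P, 2) grid coordinates.
        Returns the position maximizing sum_k a_k * phi_k(x)."""
        return positions[np.argmax(a @ phi)]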
Figure 2: Similarity surfaces for the four different cues. Similarity between a reference snapshot of the different cues taken at the position marked by the white cross
and all the other positions surrounding the reference location.
2.5 Place field shape and size
In order to investigate the shape of a place field $\phi(\mathbf{x})$, and in particular to determine
its degree of asymmetry and its size, we computed the two-dimensional normalized
inertial tensor $I$ given by

$$I_{ij} = \frac{\sum_{\mathbf{r}} \phi(\mathbf{r}) \left( \delta_{ij}\, \mathbf{r}^2 - r_i r_j \right)}{\sum_{\mathbf{r}} \phi(\mathbf{r})}$$

with $\mathbf{r} = \{r_1, r_2\} = \mathbf{x} - \bar{\mathbf{x}}$, where $\bar{\mathbf{x}} = \sum \mathbf{x}\,\phi(\mathbf{x}) / \sum \phi(\mathbf{x})$ corresponds to the "center
of gravity" and $\delta_{ij}$ is the Kronecker delta. $I$ is symmetric and can therefore be
diagonalized, i.e. $I = V^T D V$, such that $V$ is an orthonormal transformation matrix
and $D_{ii} > 0$ for $i = 1, 2$. A measure of the half-width of the place field along its two
principal axes is then $d_i = \sqrt{2 D_{ii}}$, such that a measure of asymmetry is given by

$$0 \le \frac{|d_1 - d_2|}{d_1 + d_2} \le 1$$

This measure becomes zero for symmetric place fields while approaching one for
asymmetric ones. In addition, we can estimate the size of the place field by approximating its shape by an ellipse, i.e. $\pi d_1 d_2$.
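A sketch of these shape measures for a place field sampled on a grid; the flattened-grid layout is an assumption:

    import numpy as np

    def field_shape(phi, positions):
        """phi: (P,) place-field values; positions: (P, 2) grid coordinates.
        Returns (asymmetry, size) from the normalized inertial tensor."""
        w = phi / phi.sum()
        r = positions - w @ positions               # r = x - x_bar
        r2 = (r**2).sum(axis=1)
        I = np.eye(2) * (w @ r2) - np.einsum('p,pi,pj->ij', w, r, r)
        d = np.sqrt(2.0 * np.linalg.eigvalsh(I))    # half-widths d_i
        asymmetry = abs(d[0] - d[1]) / (d[0] + d[1])
        size = np.pi * d[0] * d[1]                  # ellipse approximation
        return asymmetry, size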
3 Results
3.1 Bounded invariance
Initially, we investigate the topological properties of the temporal population coding
space. Depending on the position within an environment, visual stimuli undergo a
geometric transformation which is a combination of scaling and rotation in depth.
Fig. 2 shows the similarity to a reference snapshot taken at the location of the white
cross for the four different cues. Although the precise shape of the similarity surface
differs, the similarity decreases smoothly and monotonically for increasing distances
to the reference point for all stimuli.
The similarity surface for different locations of the reference point is shown in fig. 3
for the Z cue. Although the Z cue has no vertical mirror symmetry, the similarity
surfaces are nearly symmetric with respect to the vertical center line. Thus, using
a single cue, localization is only possible modulo a mirror along the vertical center.
The implications of this will be discussed later. Concerning different distances of
the reference point to the stimulus, fig. 3 (along the columns) shows that the specificity of the similarity measure is large for small distances while the tuning becomes
Figure 3: Similarity surface of Z cue for different reference points. The distance/angle of the reference point to the cue is kept constant along the rows/columns
respectively.
broader for large distances. This is a natural consequence of the perspective projection which implies that the changes in visual perception due to different viewing
positions are inversely proportional to the viewing distance.
3.2 Place cells from multiple snapshots
The response of a place cell is determined by eq. 2 based on four associated snapshots/templates taken at the same location within the environment. The templates
for each place cell are chosen by the saliency detector and therefore there is no
explicit control over the actual snapshots defining a place cell, i.e. some place cells
are defined based on two or more templates of the same cue. Furthermore, the
stochastic nature of the saliency detector does not allow for any control over the
precise position of the stimulus within the visual field. This is where the intrinsic translation invariance of the temporal population code plays an important role,
i.e. the precise position of the stimulus within the visual field at the time of the
snapshot has no effect on the resulting encoding as long as the whole stimulus is
visible.
Fig. 4 shows examples of the receptive fields (subsequently also called place fields)
of such place cells acquired at the nodes of a regular 5 × 5 lattice within the environment. Most of the place fields have a Gaussian-like tuning which is compatible
with single cell recordings from pyramidal cells in CA3 and CA1[2], i.e. the place
cells maximally respond close to their associated positions and degrade smoothly
and monotonically for increasing distances. Some place cells have multiple subfields in that they respond to different locations in the environment with a similar
amplitude.
3.3 Position reconstruction
Subsequently, we determine the accuracy up to which the robot can be localized
within the environment. Therefore we use the direct basis approach for position reconstruction as described in the Methods. As basis functions we take the normalized
response profiles of place cells constructed from four templates taken at the nodes
of a regular lattice covering the environment. Fig. 5a shows the reconstruction error
averaged over the environment as a function of the number of place cells as well as
the number of snapshots taken at each location. The reconstruction error decreases
monotonically both for an increasing number of place cells as well as an increasing
Figure 4: Place fields of 5 × 5 place cells. The small squares show the average
response of 5 × 5 different place cells for all the positions of the robot within
the environment. Darker regions correspond to stronger responses. The relative
location of each square within the figure corresponds to the associated location of
the place cell within the environment. All place fields are scaled to a common
maximal response.
number of snapshots. An asymptotic reconstruction error is approached very fast,
i.e. for more than 25 place cells and more than two snapshots per location. Thus,
for a behaving organism exploring an unknown environment, this implies that a
relatively sparse exploration strategy suffices to create a complete representation of
the new environment.
Above we have seen that localization with a single snapshot is only possible modulo
a mirror along the axis where the cue is located. The systematic reconstruction
error introduced by this shortcoming can be determined analytically and is ≈ 0.13
in units of the side-length of the square environment. For an increasing number
of snapshots, the probability that all snapshots are from the same pair of opposite
cues, decreases exponentially fast and we therefore also expect the systematic error
to vanish. Considering 100 place cells, the difference in reconstruction error between
1 and 10 snapshots amounts to 0.147 ± 0.008 (mean ± SD), which is close to the
predicted systematic error due to the effect discussed above. Thus, an increasing
number of snapshots primarily helps to resolve ambiguities due to the symmetry
properties of the TPC.
3.4 Place field shape
Fig. 5b-c shows scatter plots of both place field asymmetry and size versus the
distance of the place field's associated location from the center of the environment.
There is a tendency that off-center place cells have more asymmetric place fields
than cells closer to the center (r=0.32) which is in accordance with experimental
results[5]. Regarding place field size, there is no direct relation to the associated
position of place field (r=0.08) apart from the fact that the variance is maximal
for intermediate distances from the center. It must be noted, however, that the
size of the place field critically depends on the choice of the threshold $\theta$ in eq. 2.
Indeed different relations between place field size and location can be achieved by
assuming non homogeneous thresholds, which for example might be determined for
Figure 5: (a) Position reconstruction error. The average error in position reconstruction as a function of the number of snapshots and the number of place cells
considered. (b-c) Scatter plots of the place field asymmetry/size versus the distance of the place field's associated location to the center of the environment. The
correlation coefficients are r=0.32/0.08 respectively.
each place cell individually based on its range of inputs. The measure for place
field asymmetry, in contrast, has been shown to be more stable in this respect (data not
shown).
4 Discussion
We have shown that the bounded invariance properties of visual stimuli encoded
in a TPC are well suited for the formation of place fields. More specifically, the
topology preservation of similarity amongst different viewing angles and distances
allows a direct translation of the visual similarity between two views to their relative
location within an environment. Therefore, only a small number of place cells
are required for position reconstruction. Regarding the shape of the place fields,
only weak correlations between its asymmetry and its distance to the center of the
environment have been found.
As opposed to the present approach, experimental results suggest that place field
formation in the hippocampus relies on multiple sensory modalities and not only
vision. Although it was shown that vision may play an important role[3], proprioceptive stimuli, for example, can become important in situations where either visual
information is not available such as in the dark or in the presence of visual singularities, where two different locations elicit the same visual sensation[9]. A type
of information strongly related to proprioceptive stimuli, is the causal structure
of behavior which imposes continuous movement in both space and time, i.e. the
information about the last location can be of great importance for estimating the
current location[10]. Indeed, a recent study has shown that position reconstruction
error is greatly reduced if this additional constraint is taken into account[8]. In the
present approach we analyzed the properties of place cells in the absence of a behavioral paradigm. Thus, it is not meaningful to integrate information over different
locations. We expect, however, that for a continuously behaving robot this type of
information would be particularly useful to resolve the ambiguities introduced by
the mirror invariance in the case of a single visual snapshot.
As opposed to the large field of view of rats (≈ 320° [11]), the robot used in this
study has a very restricted field of view. This has direct implications on the robot's
behavior. The advantage of only considering a 60° field of view is, however, that
the amount of information contributed by single cues can be investigated. We
have shown that a single view allows for localization modulo a mirror along the
orientation of the corresponding stimulus. This ambiguity can be resolved taking
additional snapshots into account. In this context, maximal additional information
can be gained if a new snapshot is taken along a direction orthogonal to the first
snapshot which is also more efficient from a behavioral point of view than using
stimuli from opposite directions.
The acquisition of place cells was supervised, in that their associated locations are
assumed to correspond to the nodes of a regular lattice spanning the environment.
While this allows for a controlled statistical analysis of the place cell properties,
it is not very likely that an autonomously behaving agent can acquire place cells
in such a regular fashion. Rather, place cells have to be acquired incrementally
based on purely local information. Information about the number of place cells
responding or the maximal response of any place cell for a particular location is
locally available to the agent, and can therefore be used to selectively trigger the
acquisition of new place cells. In general, the representation will most likely also
reflect further behavioral requirements in that important locations where decisions
need to be taken, will be represented by a high density of place cells.
Acknowledgments
This work was supported by the European Community/Bundesamt für Bildung und
Wissenschaft Grant IST-2001-33066 (to P.V.). The authors thank Peter König for
valuable discussions and contributions to this study.
References
[1] J. O'Keefe and J. Dostrovsky. The hippocampus as a spatial map: preliminary evidence from unit activity in the freely moving rat. Brain Res, 34:171-5, 1971.
[2] J. O'Keefe and L. Nadel. The hippocampus as a cognitive map. Clarendon Press, Oxford, 1987.
[3] J. Knierim, H. Kudrimoti, and B. McNaughton. Place cells, head direction cells, and the learning of landmark stability. J. Neurosci., 15:1648-59, 1995.
[4] J. O'Keefe and N. Burgess. Geometric determinants of the place fields of hippocampal neurons. Nature, 381(6581):425-8, 1996.
[5] J. O'Keefe, N. Burgess, J.G. Donnett, K.J. Jeffrey, and E.A. Maguire. Place cells, navigational accuracy, and the human hippocampus. Philos Trans R Soc Lond B Biol Sci., 353(1373):1333-40, 1998.
[6] N. Burgess, J.G. Donnett, H.J. Jeffrey, and J. O'Keefe. Robotic and neuronal simulation of the hippocampus and rat navigation. Philos Trans R Soc Lond B Biol Sci., 352(1360):1535-43, 1997.
[7] R. Wyss, P. König, and P.F.M.J. Verschure. Invariant representations of visual patterns in a temporal population code. Proc. Natl. Acad. Sci. USA, 100(1):324-9, 2003.
[8] K. Zhang, I. Ginzburg, B.L. McNaughton, and T.J. Sejnowski. Interpreting neuronal population activity by reconstruction: unified framework with application in hippocampal place cells. J Neurophysiol., 79(2):1017-44, 1998.
[9] A. Arleo and W. Gerstner. Spatial cognition and neuro-mimetic navigation: a model of hippocampal place cell activity. Biol Cybern., 83(3):287-99, 2000.
[10] G. Quirk, R. Muller, and R. Kubie. The firing of hippocampal place cells in the dark depends on the rat's recent experience. J. Neurosci., 10:2008-17, 1995.
[11] A. Hughes. A schematic eye for the rat. Visual Res., 19:569-88, 1977.
Bayesian Color Constancy with Non-Gaussian Models
Charles Rosenberg
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]

Thomas Minka
Statistics Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]

Alok Ladsariya
Computer Science Department
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Abstract
We present a Bayesian approach to color constancy which utilizes a nonGaussian probabilistic model of the image formation process. The parameters of this model are estimated directly from an uncalibrated image
set and a small number of additional algorithmic parameters are chosen
using cross validation. The algorithm is empirically shown to exhibit
RMS error lower than other color constancy algorithms based on the
Lambertian surface reflectance model when estimating the illuminants
of a set of test images. This is demonstrated via a direct performance
comparison utilizing a publicly available set of real world test images
and code base.
1 Introduction
Color correction is an important preprocessing step for robust color-based computer vision
algorithms. Because the illuminants in the world have varying colors, the measured color
of an object will change under different light sources. We propose an algorithm for color
constancy which, given an image, will automatically estimate the color of the illuminant
(assumed constant over the image), allowing the image to be color corrected.
This color constancy problem is ill-posed, because object color and illuminant color are
not uniquely separable. Historically, algorithms for color constancy have fallen into two
groups. The first group imposes constraints on the scene and/or the illuminant, in order to
remove the ambiguities. The second group uses a statistical model to quantify the probability of each illuminant and then makes an estimate from these probabilities. The statistical
approach is attractive, since it is more general and more automatic?hard constraints are a
special case of statistical models, and they can be learned from data instead of being specified in advance. But as shown by [3, 1], currently the best performance on real images
is achieved by gamut mapping, a constraint-based algorithm. And, in the words of some
leading researchers, even gamut mapping is not "good enough" for object recognition [8].
In this paper, we show that it is possible to outperform gamut mapping with a statistical
approach, by using appropriate probability models with the appropriate statistical framework. We use the principled Bayesian color constancy framework of [4], but combine it
with rich, nonparametric image models, such as used by Color by Correlation [1]. The
result is a Bayesian algorithm that works well in practice and addresses many of the issues
with Color by Correlation, the leading statistical algorithm [1].
At the same time, we suggest that statistical methods still have much to learn from
constraint-based methods. Even though our algorithm outperforms gamut mapping on
average, there are cases in which gamut mapping provides better estimates, and, in fact,
the errors of the two methods are surprisingly uncorrelated. This is an interesting result,
because it suggests that gamut mapping exploits image properties which are different from
what is learned by our algorithm, and probably other statistical algorithms. If this is true,
and if our statistical model could be extended in a way that captures these additional properties, better algorithms should be possible in the future.
2 The imaging model
Our approach is to model the observed image pixels with a probabilistic generative model,
decomposing them as the product of unknown surface reflectances with an unknown illuminant. Using Bayes' rule, we obtain a posterior for the illuminant, and from this we
extract the estimate with minimum risk, e.g., the minimum expected chromaticity error.
Let $\mathbf{y}$ be an image pixel with three color channels: $(y_r, y_g, y_b)$. The pixel is assumed to be
the result of light reflecting off of a surface under the Lambertian reflectance model. Denote
the power of the light in each channel by $\ell = (\ell_r, \ell_g, \ell_b)$, with each channel ranging from
zero to infinity. For each channel, a surface can reflect none of the light, all of the light,
or somewhere in between. Denote this reflectance by $\mathbf{x} = (x_r, x_g, x_b)$, with each channel
ranging from zero to one. The model for the pixel is the well-known diagonal lighting
model:

$$y_r = \ell_r x_r \qquad y_g = \ell_g x_g \qquad y_b = \ell_b x_b \qquad (1)$$

To simplify the equations below, we write this in matrix form as

$$L = \mathrm{diag}(\ell) \qquad (2)$$
$$\mathbf{y} = L\mathbf{x} \qquad (3)$$
This specifies the conditional distribution $p(\mathbf{y}|\ell, \mathbf{x})$. In reality, there are sensor noise and
other factors which affect the observed color, but we will consider these to be negligible.
Next we make the common assumption that the light and the surface have been chosen
independently, so that $p(\ell, \mathbf{x}) = p(\ell)\,p(\mathbf{x})$. The prior distribution for the illuminant ($p(\ell)$)
will be uniform over a constraint set, described later in section 5.3.
The most difficult step is to construct a model for the surface reflectances in an image
containing many pixels:
$$\mathbf{Y} = (\mathbf{y}(1), \ldots, \mathbf{y}(n)) \qquad (4)$$
$$X = (\mathbf{x}(1), \ldots, \mathbf{x}(n)) \qquad (5)$$

We need a distribution $p(X)$ for all $n$ reflectances. One approach is to assume that the
reflectances are independent and Gaussian, as in [4], which gives reasonable results but can
be improved upon. Our approach is to quantize the reflectance vectors into $K$ bins, and
consider the reflectances to be exchangeable, a weaker assumption than independence.
Exchangeability implies that the probability only depends on the number of reflectances in
each bin. Thus if we denote the reflectance histogram by $(n_1, \ldots, n_K)$, where $\sum_k n_k = n$,
then

$$p(\mathbf{x}(1), \ldots, \mathbf{x}(n)) \propto f(n_1, \ldots, n_K) \qquad (6)$$
where f is a function to be specified. Independence is a special case of exchangeability.
If $m_k$ is the probability of a surface having a reflectance value in bin $k$, so that $\sum_k m_k = 1$,
then independence says

$$f(n_1, \ldots, n_K) = \prod_k m_k^{n_k} \qquad (7)$$
As an alternative to this, we have experimented with the Dirichlet-multinomial model,
which employs a parameter s > 0 to control the amount of correlation. Under this model,
$$f(n_1, \ldots, n_K) = \frac{\Gamma(s)}{\Gamma(n+s)} \prod_k \frac{\Gamma(n_k + s m_k)}{\Gamma(s m_k)} \qquad (8)$$
For large s, correlation is weak and the model reduces to (7). For small s, correlation is
strong and the model expects a few reflectances to be repeated many times, which is what
we see in real images. When s is very small, the expression (8) can be reduced to a simple
form:

$$f(n_1, \ldots, n_K) \approx \frac{1}{s\,\Gamma(n)} \prod_k \left( s\, m_k\, \Gamma(n_k) \right)^{\mathrm{clip}(n_k)} \qquad (9)$$

$$\mathrm{clip}(n_k) = \begin{cases} 0 & \text{if } n_k = 0 \\ 1 & \text{if } n_k > 0 \end{cases} \qquad (10)$$
This resembles a multinomial distribution on clipped counts. Unfortunately, this distribution strongly prefers that the image contains a small number of different reflectances,
which biases the light source estimate. Empirically we have achieved our best results using
a "normalized count" modification of the model which removes this bias:

$$f(n_1, \ldots, n_K) = \prod_k m_k^{\nu_k} \qquad (11)$$

$$\nu_k = n\, \frac{\mathrm{clip}(n_k)}{\sum_k \mathrm{clip}(n_k)} \qquad (12)$$

The modified counts $\nu_k$ sum to $n$ just like the original counts $n_k$, but are distributed equally
over all reflectances present in the image.
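A sketch of the normalized-count model (11)-(12) on a reflectance histogram; evaluating in the log domain is our implementation choice for numerical stability, not part of the paper:

    import numpy as np

    def log_f_normalized_counts(n, m):
        """n: (K,) histogram of quantized reflectances in an image;
        m: (K,) learned bin frequencies m_k. Returns log f(n_1, ..., n_K)."""
        clip = (n > 0).astype(float)            # clip(n_k), eq. 10
        nu = n.sum() * clip / clip.sum()        # modified counts, eq. 12
        return float(nu @ np.log(np.maximum(m, 1e-300)))  # eq. 11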
3 The color constancy algorithm
The algorithm for estimating the illuminant has two parts: (1) discretize the set of all
illuminants on a fine grid and compute their likelihood and (2) pick the illuminant which
minimizes the risk.
The likelihood of the observed image data $\mathbf{Y}$ for a given illuminant $\ell$ is

$$p(\mathbf{Y}|\ell) = \int_X \left( \prod_i p(\mathbf{y}(i)|\ell, \mathbf{x}(i)) \right) p(X)\, dX \qquad (13)$$
$$= |L^{-1}|^n\, p(X = L^{-1}\mathbf{Y}) \qquad (14)$$

The quantity $L^{-1}\mathbf{Y}$ can be understood as the color-corrected image. The determinant term,
$1/(\ell_r \ell_g \ell_b)^n$, makes this a valid distribution over $\mathbf{Y}$ and has the effect of introducing a
preference for dimmer illuminants independently of the prior on reflectances. Also implicit
in this likelihood are the bounds on $\mathbf{x}$, which require reflectances to be in the range of zero
and one, and thus we restrict our search to illuminants that satisfy:

$$\ell_r \ge \max_i y_r(i) \qquad \ell_g \ge \max_i y_g(i) \qquad \ell_b \ge \max_i y_b(i) \qquad (15)$$
The posterior probability for $\ell$ then follows:

$$p(\ell|\mathbf{Y}) \propto p(\mathbf{Y}|\ell)\, p(\ell) \qquad (16)$$
$$\propto |L^{-1}|^n\, p(X = L^{-1}\mathbf{Y})\, p(\ell) \qquad (17)$$
The next step is to find the estimate of $\ell$ with minimum risk. An answer that the illuminant
is $\ell^*$, when it is really $\ell$, incurs some cost, denoted $R(\ell^*|\ell)$. Let this function be quadratic
in some transformation $g$ of the illuminant vector $\ell$:

$$R(\ell^*|\ell) = \|g(\ell^*) - g(\ell)\|^2 \qquad (18)$$

This occurs, for example, when the cost function is squared error in chromaticity. Then the
minimum-risk estimate satisfies

$$g(\ell^*) = \int_\ell g(\ell)\, p(\ell|\mathbf{Y})\, d\ell \qquad (19)$$
The right-hand side, the posterior mean of g, and the normalizing constant of the posterior
can be computed in a single loop over the grid of illuminants.
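Putting the pieces together, a sketch of this single loop; the illuminant grid, the reflectance quantizer, and the reuse of log_f_normalized_counts from above are illustrative assumptions, and g maps an illuminant to its chromaticity as in Section 6:

    import numpy as np

    def estimate_illuminant(Y, grid, m, quantize):
        """Y: (n, 3) image pixels; grid: (G, 3) candidate illuminants;
        m: (K,) reflectance prior; quantize: maps (n, 3) reflectances
        to a (K,) histogram. Returns the posterior-mean chromaticity."""
        logp = np.full(len(grid), -np.inf)
        y_max = Y.max(axis=0)
        for j, ell in enumerate(grid):
            if np.any(ell < y_max):        # bound (15): ell_c >= max_i y_c(i)
                continue
            X = Y / ell                    # color-corrected image L^{-1} Y
            logp[j] = (-len(Y) * np.log(ell).sum()      # |L^{-1}|^n term
                       + log_f_normalized_counts(quantize(X), m))
        p = np.exp(logp - logp.max())
        p /= p.sum()
        chrom = grid[:, :2] / grid.sum(axis=1, keepdims=True)  # g(ell), eq. 24
        return p @ chrom                   # posterior mean, eq. 19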
4 Relation to other algorithms
In this section we describe related color constancy algorithms using the framework of the
imaging model introduced in section 2. This is helpful because it allows us to compare all
of these algorithms in a single framework and understand the assumptions made by each.
Independent, Gaussian reflectances The previous work most similar to our own is by
[10] and [4]; however, these methods are not tested on real images. They use a similar
imaging model and maximum-likelihood and minimum-risk estimation, respectively. The
difference is that they use a Gaussian prior for the reflectance vectors, and assume the
reflectances for different pixels are independent. The Gaussian assumption leads to a simple likelihood formula whose maximum can be found by gradient methods. However, as
mentioned by [4], this is a constraining assumption, and more appropriate priors would be
preferable.
Scale by max The scale by max algorithm (as tested e.g. in [3]) estimates the illuminant
by the simple formula
$$\ell_r = \max_i y_r(i) \qquad \ell_g = \max_i y_g(i) \qquad \ell_b = \max_i y_b(i) \qquad (20)$$
which is the dimmest illuminant in the valid set (15). In the Bayesian algorithm, this
solution can be achieved by letting the reflectances be independent and uniform over the
range 0 to 1. Then p(X) is constant and the maximum-likelihood illuminant is (20). This
connection was also noticed by [4].
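For reference, the estimate of eq. 20 is a one-liner:

    import numpy as np

    def scale_by_max(Y):
        """Y: (n, 3) pixels; the per-channel maximum is the illuminant (eq. 20)."""
        return Y.max(axis=0)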
Gray-world The gray-world algorithm [5] chooses the illuminant such that the average
value in each channel of the corrected image is a constant, e.g. 0.5. This is equivalent to the
Bayesian algorithm with a particular reflectance prior. Let the reflectances be independent
for each pixel and each channel, with distribution $p(x_c) \propto \exp(-2 x_c)$ in each channel $c$.
The log-likelihood for $\ell_c$ is then

$$\log p(\mathbf{Y}_c|\ell_c) = -n \log \ell_c - 2 \sum_i \frac{y_c(i)}{\ell_c} + \text{const.} \qquad (21)$$

whose maximum is (as desired)

$$\ell_c = \frac{2}{n} \sum_i y_c(i) \qquad (22)$$
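The corresponding gray-world estimate of eq. 22, as a sketch:

    import numpy as np

    def gray_world(Y):
        """Y: (n, 3) pixels; per-channel estimate ell_c = (2/n) sum_i y_c(i)."""
        return 2.0 * Y.mean(axis=0)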
Figure 1: Plots of slices of the three dimensional color surface reflectance distribution
along a single dimension. Row one plots green versus blue with 0,0 at the upper left of
each subplot and slices in red whose magnitude increases from left to right. Row two plots
red versus blue with slices in green. Row three plots red versus green with slices in blue.
Color by Correlation Color by Correlation [6, 1] also uses a likelihood approach, but
with a different imaging model that is not based on reflectance. Instead, observed pixels
are quantized into color bins, and the frequency of each bin is counted for each illuminant,
in a finite set of illuminants. (Note that this is different from quantizing reflectances, as
done in our approach.) Let $m_k(\ell)$ be the frequency of color bin $k$ for illuminant $\ell$, and let
$n_1 \cdots n_K$ be the color histogram of the image; then the likelihood of $\ell$ is computed as

$$p(\mathbf{Y}|\ell) = \prod_k m_k(\ell)^{\mathrm{clip}(n_k)} \qquad (23)$$
While theoretically this is very general, there are practical limitations. First there are training issues. One must learn the color frequencies for every possible illuminant. Since
collecting real-world data whose illuminant is known is difficult, $m_k(\ell)$ is typically trained
synthetically with random surfaces, which may not represent the statistics of natural scenes.
The second issue is that colors and illuminants live in an unbounded 3D space [1], unlike
reflectances which are bounded. In order to store a color distribution for each illuminant,
brightness variation needs to be artificially bounded. The third issue is storage. To reduce
the storage of the $m_k(\ell)$'s, Barnard et al [1] store the color distribution only for illuminants
of a fixed brightness. However, as they describe, this introduces a bias in the estimation
they refer to as the "discretization problem" and try to solve it by penalizing bright illuminants. The other part of the bias is due to using clipped counts in the likelihood. As
explained in section 2, a multinomial likelihood with clipped counts is a special case of the
Dirichlet-multinomial, and prefers images with a small number of different colors. This
bias can be removed using a different likelihood function, such as (11).
5 Parameter estimation
5.1 Reflectance Distribution
To implement the Bayesian algorithm, we need to learn the real-world frequencies $m_k$ of
quantized reflectance vectors. The direct approach to this would require a set of images
with ground truth information regarding the associated illumination parameters or, alternately, a set of images captured under a canonical illuminant and camera.
Unfortunately, it is quite difficult to collect a large number of images under controlled
conditions. To avoid this issue, we use bootstrapping, as described in [9], to approximate
the ground truth. The estimates from some "base" color constancy algorithm are used as
a proxy for the ground truth. This might seem to be problematic in that it would limit any
algorithm based on these estimates to perform only as well as the base algorithm. However,
this need not be the case if the errors made by the base algorithm are relatively unbiased.
We used approximately 2300 randomly selected JPEG images from news sites on the web
for bootstrapping, consisting mostly of outdoor scenes, indoor news conferences, and sporting event scenes. The scale by max algorithm was used as our "base" algorithm. Figure
1 is a plot of the probability distribution collected, where lighter regions represent higher
probability values. The distribution is highly structured and varies with the magnitude of
the channel response. This structure is important because it allows our algorithm to disambiguate between potential solutions to the ill-posed illumination estimation problem.
5.2 Pre-processing and quantization
To increase robustness, pre-processing is performed on the image, similar to that performed
in [3]. The first pre-processing step scales down the image to reduce noise and speed up
the algorithm. A new image is formed in which each pixel is the mean of an m by m
block of the original image. The second pre-processing step removes dark pixels from the
computation, which, because of noise and quantization effects, do not contain reliable color
information. Pixels whose $y_r + y_g + y_b$ channel sum is less than a given threshold are
excluded from the computation.
In addition to the reflectance prior, the parameters of our algorithm are: the number of
reflectance histogram bins, the scale down factor, and the dark pixel threshold value. To set
these parameters values, the algorithm was run over a large grid of parameter variations and
performance on the tuning set was computed. The tuning set was a subset of the "model"
data set described in [7] and disjoint from the test set. A total of 20 images were used, 10
objects imaged under 2 illuminants. (The "ball2" object was removed so that there was no
overlap between the tuning and test sets.) For the purpose of speed, only images captured
with the Philips Ultralume and the Macbeth Judge II fluorescent illuminants were included.
The best set of parameters was found to be: 32 × 32 × 32 reflectance bins, scale down by
m = 3, and omit pixels with a channel sum less than 8/(3 × 255).
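A sketch of the two pre-processing steps with these tuned values; the block-averaging implementation is an assumption consistent with the description:

    import numpy as np

    def preprocess(img, m=3, thresh=8.0 / (3 * 255)):
        """img: (H, W, 3) float image in [0, 1]. Average m x m blocks,
        then drop dark pixels whose channel sum falls below thresh.
        Returns an (n, 3) array of remaining pixels."""
        H, W, _ = img.shape
        H, W = H - H % m, W - W % m             # crop to a multiple of m
        blocks = img[:H, :W].reshape(H // m, m, W // m, m, 3).mean(axis=(1, 3))
        pixels = blocks.reshape(-1, 3)
        return pixels[pixels.sum(axis=1) >= thresh]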
5.3 Illuminant prior
To facilitate a direct comparison, we adopt the two illuminant priors from [3]. Each is
uniform over a subset of illuminants. The first prior, full set, discretizes the illuminants
uniformly in polar coordinates. The second prior, hull set, is a subset of full set restricted
to be within the convex hull of the test set illuminants and other real world illuminants.
Overall brightness, $\ell_r + \ell_g + \ell_b$, is discretized in the range of 0 to 6 in 0.01 steps.
6 Experiments
6.1 Evaluation Specifics
To test the algorithms we use the publicly available real world image data set [2] used
by Barnard, Martin, Coath and Funt in a comprehensive evaluation of color constancy
algorithms in [3]. The data set consists of images of 30 scenes captured under 11 light
sources, for a total of 321 images (after the authors removed images which had collection
problems) with ground truth illuminant information provided in the form of an RGB value.
As in the "rg error" measure of [3], illuminant error is measured in chromaticity space:

$$\ell_1 = \ell_r / (\ell_r + \ell_g + \ell_b) \qquad \ell_2 = \ell_g / (\ell_r + \ell_g + \ell_b) \qquad (24)$$

$$R(\ell^*|\ell) = (\ell_1^* - \ell_1)^2 + (\ell_2^* - \ell_2)^2 \qquad (25)$$
The Bayesian algorithm is adapted to minimize this risk by computing the posterior mean
in chromaticity space. The performance of an algorithm on the test set is reported as the
square root of the average $R(\ell^*|\ell)$ across all images, referred to as the RMS error.
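The evaluation metric, as a sketch over a whole test set:

    import numpy as np

    def rms_chromaticity_error(est, true):
        """est, true: (N, 3) estimated / ground-truth illuminants.
        Implements eqs. 24-25 and the RMS summary."""
        ce = est[:, :2] / est.sum(axis=1, keepdims=True)
        ct = true[:, :2] / true.sum(axis=1, keepdims=True)
        return float(np.sqrt(((ce - ct)**2).sum(axis=1).mean()))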
Table 1: The average error of several color constancy algorithms on the test set. The value
in parentheses is 1.64 times the standard error of the average, so that if two error intervals
do not overlap the difference is significant at the 95% level.
Algorithm                              RMS Error for Full Set   RMS Error for Hull Set
Scale by Max                           0.0584 (+/- 0.0034)      0.0584 (+/- 0.0034)
Gamut Mapping without Segmentation     0.0524 (+/- 0.0029)      0.0461 (+/- 0.0025)
Gamut Mapping with Segmentation        0.0426 (+/- 0.0023)      0.0393 (+/- 0.0021)
Bayes with Bootstrap Set Model         0.0442 (+/- 0.0025)      0.0351 (+/- 0.0020)
Bayes with Tuning Set Model            0.0344 (+/- 0.0017)      0.0317 (+/- 0.0017)
Figure 2: A graphical rendition of table 1. The standard errors are scaled by 1.64, so that if
two error bars do not overlap the difference is significant at the 95% level.
6.2 Results
The results¹ are summarized in Table 1 and Figure 2. We compare two versions of our
Bayesian method to the gamut mapping and scale by max algorithms. The appropriate
preprocessing for each algorithm was applied to the images to achieve the best possible
performance. (Note that we do not include results for color by correlation since the gamut
mapping results were found to be significantly better in [3].) In all configurations, our
algorithm exhibits the lowest RMS error except in a single case where it is not statistically different than that of gamut mapping. The differences for the hull set are especially
large. The hull set is clearly a useful constraint that improves the performance of all of the
algorithms evaluated.
The two versions of our Bayesian algorithm differ only in the data set used to build the
reflectance prior. The tuning set, while composed of separate images than the test set, is
very similar and has known illuminants, and, accordingly, gives the best results. Yet the
performance when trained on a very different set of images, the uncalibrated bootstrap set
of section 5.1, is not that different, particularly when the illuminant search is constrained.
The gamut mapping algorithm (called CRULE and ECRULE in [3]) is also presented in two
versions: with and without segmenting the images as a preprocessing step as described in
[3]. These results were computed using software provided by Barnard and used to generate
the results in [3]. In the evaluation of color constancy algorithms in [3] gamut mapping was
found on average to outperform all other algorithms when evaluated on real world images.
It is interesting to note that the gamut mapping algorithm is sensitive to segmentation. Since
fundamentally it should not be sensitive to the number of pixels of a particular color in the
image we must assume that this is because the segmentation is implementing some form of
noise filtering. The Bayesian algorithm currently does not use segmentation.
Scale by max is also included as a reference point and still performs quite well given its simplicity, often beating out much more complex constancy algorithms [8, 3]. Its performance
is the same for both illuminant sets since it does not involve a search over illuminants.
¹ Result images can be found at http://www.cs.cmu.edu/~chuck/nips-2003/
Surprisingly, when the error of the Bayesian method is compared with the gamut mapping
method on individual test images, the correlation coefficient is -0.04. Thus the images
which confuse the Bayesian method are quite different from the images which confuse
gamut mapping. This suggests that an algorithm which could jointly model the image
properties exploited by both algorithms might give dramatic improvements. As an example of the potential improvement, the RMS error of an ideal algorithm whose error is the
minimum of Bayes and gamut on each image in the test set is only 0.019.
7 Conclusions and Future Work
We have demonstrated empirically that Bayesian color constancy with the appropriate nonGaussian models can outperform gamut mapping on a standard test set. This is true regardless of whether a calibrated or uncalibrated training set is used, or whether the full set
or a restricted set of illuminants is searched. This should give new hope to the pursuit of
statistical methods as a unifying framework for color constancy.
The results also suggest ways to improve the Bayesian algorithm. The particular image
model we have used, the normalized count model, is only one of many that could be tried.
This is simply an image modeling problem which can be attacked using standard statistical
methods. A particularly promising direction is to pursue models which can enforce constraints like that in the gamut mapping algorithm, since the images where Bayes has the
largest errors appear to be relatively easy for gamut mapping.
Acknowledgments
We would like to thank Kobus Barnard for making his test images and code publicly available. We would also like to thank Martial Hebert for his valuable insight and advice and
Daniel Huber and Kevin Watkins for their help in revising this document. This work was
sponsored in part by a fellowship from the Eastman Kodak company.
References
[1] K. Barnard, L. Martin, and B. Funt, "Colour by correlation in a three dimensional colour space," Proceedings of the 6th European Conference on Computer Vision, pp. 275-289, 2000.
[2] K. Barnard, L. Martin, B. Funt, and A. Coath, "A data set for colour research," Color Research and Application, Volume 27, Number 3, pp. 147-151, 2002, http://www.cs.sfu.ca/~colour/data/colour_constancy_test_images/
[3] K. Barnard, L. Martin, A. Coath, and B. Funt, "A comparison of color constancy algorithms; Part Two. Experiments with Image Data," IEEE Transactions in Image Processing, vol. 11, no. 9, pp. 985-996, 2002.
[4] D. H. Brainard and W. T. Freeman, "Bayesian color constancy," Journal of the Optical Society of America A, vol. 14, no. 7, pp. 1393-1411, 1997.
[5] G. Buchsbaum, "A spatial processor model for object colour perception," Journal of the Franklin Institute, vol. 10, pp. 1-26, 1980.
[6] G. D. Finlayson, S. D. Hordley, and P. M. Hubel, "Colour by correlation: a simple, unifying approach to colour constancy," The Proceedings of the Seventh IEEE International Conference on Computer Vision, vol. 2, pp. 835-842, 1999.
[7] B. Funt, V. Cardei, and K. Barnard, "Learning color constancy," Proceedings of Imaging Science and Technology / Society for Information Display Fourth Color Imaging Conference, pp. 58-60, 1996.
[8] B. Funt, K. Barnard, and L. Martin, "Is colour constancy good enough?," Proceedings of the Fifth European Conference on Computer Vision, pp. 445-459, 1998.
[9] B. Funt and V. Cardei, "Bootstrapping color constancy," Proceedings of SPIE: Electronic Imaging IV, 3644, 1999.
[10] H. J. Trussell and M. J. Vrhel, "Estimation of illumination for color correction," Proc ICASSP, pp. 2513-2516, 1991.
1,570 | 2,427 | Bias-Corrected Bootstrap and Model
Uncertainty
Harald Steck∗
MIT CSAIL
200 Technology Square
Cambridge, MA 02139
[email protected]
Tommi S. Jaakkola
MIT CSAIL
200 Technology Square
Cambridge, MA 02139
[email protected]
Abstract
The bootstrap has become a popular method for exploring model (structure) uncertainty. Our experiments with artificial and real-world data demonstrate that the graphs learned from bootstrap samples can be severely biased towards too complex graphical models. Accounting for this bias is hence essential, e.g., when exploring model uncertainty. We find that this bias is intimately tied to (well-known) spurious dependences induced by the bootstrap. The leading-order bias-correction equals one half of Akaike's penalty for model complexity. We demonstrate the effect of this simple bias-correction in our experiments. We also relate this bias to the bias of the plug-in estimator for entropy, as well as to the difference between the expected test and training errors of a graphical model, which asymptotically equals Akaike's penalty (rather than one half).
1 Introduction
Efron's bootstrap is a powerful tool for estimating various properties of a given
statistic, most commonly its bias and variance (cf. [5]). It quickly gained popularity
also in the context of model selection. When learning the structure of graphical
models from small data sets, like gene-expression data, it has been applied to explore
model (structure) uncertainty [7, 6, 8, 12].
However, the bootstrap procedure also involves various problems (e.g., cf. [4] for an
overview). For instance, in the non-parametric bootstrap, where bootstrap samples
D(b) (b = 1, ..., B) are generated by drawing the data points from the given data D
with replacement, each bootstrap sample D(b) often contains multiple identical data
points, which is a typical property of discrete data. When the given data D is in fact
continuous (with a vanishing probability of two data points being identical), e.g., as
in gene-expression data, the bootstrap procedure introduces a spurious discreteness
in the samples D(b) . A statistic computed from these discrete bootstrap samples
may differ from the ones based on the continuous data D. As noted in [4], however,
the effects due to this induced spurious discreteness are typically negligible.
In this paper, we focus on the spurious dependences induced by the bootstrap procedure, even when given discrete data. We demonstrate that the consequences of those
∗Now at: ETH Zurich, Institute for Computational Science, 8092 Zurich, Switzerland.
spurious dependences cannot be neglected when exploring model (structure) uncertainty by means of bootstrap, whether parametric or non-parametric. Graphical
models learned from the bootstrap samples are biased towards too complex models
and this bias can be considerably larger than the variability of the graph structure,
especially in the interesting case of limited data. As a result, too many edges are
present in the learned model structures, and the confidence in the presence of edges
is overestimated. This suggests that a bias-corrected bootstrap procedure is essential for exploring model structure uncertainty. Similarly to the statistics literature,
we give a derivation for the bias-correction term to amend several popular scoring
functions when applied to bootstrap samples (cf. Section 3.2). This bias-correction
term asymptotically equals one half of the penalty term for model complexity in
the Akaike Information Criterion (AIC), cf. Section 3.2. The (huge) effects of this
bias and the proposed bias-correction are illustrated in our experiments in Section
5.
As the maximum likelihood score and the entropy are intimately tied to each other in
the exponential family of probability distributions, we also relate this bias towards
too complex models with the bias of the plug-in estimator for entropy (Section
3.1). Moreover, we show in Section 4, similarly to [13, 1], how the (bootstrap)
bias-correction can be used to obtain a scoring function whose penalty for model
complexity asymptotically equals Akaike's penalty (rather than one half of that).
2 Bootstrap Bias-Estimation and Bias-Correction
In this section, we introduce relevant notation and briefly review the bootstrap
bias estimation of an arbitrary statistic as well as the bootstrap bias-correction (cf.
also [5, 4]). The scoring-functions commonly used for graphical models such as the
Akaike Information Criterion (AIC), the Bayesian Information Criterion (BIC), the
Minimum Description Length (MDL), or the posterior probability, can be viewed
as special cases of a statistic.
In a domain of n discrete random variables, X = (X₁, ..., Xₙ), let p(X) denote the (unknown) true distribution from which the given data D has been sampled. The empirical distribution implied by D is then given by p̂(X), where p̂(x) = N(x)/N, where N(x) is the frequency of state X = x and N = Σₓ N(x) is the sample size of D. A statistic T is any number that can be computed from the given data D. Its bias is defined as Bias_T = ⟨T(D)⟩_{D∼p} − T(p), where ⟨T(D)⟩_{D∼p} denotes the expectation over the data sets D of size N sampled from the (unknown) true distribution p. While T(D) is an arbitrary statistic, T(p) is the associated, but possibly slightly different, statistic that can be computed from a (normalized) distribution. Since the true distribution p is typically unknown, Bias_T cannot be computed. However, it can be approximated by the bootstrap bias-estimate, where p is replaced by the empirical distribution p̂, and the average over the data sets D is replaced by the one over the bootstrap samples D^(b) generated from p̂, where b = 1, ..., B with sufficiently large B (e.g., cf. [5]):

$$\widehat{\mathrm{Bias}}_T = \langle T(D^{(b)}) \rangle_b - T(\hat p) \quad (1)$$
The estimator T(p̂) is a so-called plug-in statistic, as the empirical distribution is "plugged in" in place of the (unknown) true one. For example, T_{σ²}(p̂) = E(X²) − E(X)² is the familiar plug-in statistic for the variance, while T_{σ²}^{unbiased}(D) = N/(N − 1) T_{σ²}(p̂) is the unbiased estimator.

Obviously, a plug-in statistic yields an unbiased estimate concerning the distribution that is plugged in. Consequently, when the empirical distribution is plugged in, a plug-in statistic typically does not give an unbiased estimate concerning the (unknown) true distribution. Only plug-in statistics that are linear functions of p̂(x) are inherently unbiased (e.g., the arithmetic mean). However, most statistics, including the above scoring functions, are non-linear functions of p̂(x) (or equivalently of N(x)). In this case, the bias does not vanish in general. In the special case where a plug-in statistic is a convex (concave) function of p̂, it follows immediately from the Jensen inequality that its bias is positive (negative). For example, the statistic T_{σ²}(p̂) is a negative quadratic, and thus concave, function of p̂, and hence underestimates the variance of the (unknown) true distribution.
The general procedure of bias-correction can be used to reduce the bias of a biased statistic considerably. The bootstrap bias-corrected estimator T^BC is given by

$$T^{BC}(D) = T(D) - \widehat{\mathrm{Bias}}_T = 2\,T(D) - \langle T(D^{(b)})\rangle_b, \quad (2)$$

where $\widehat{\mathrm{Bias}}_T$ is the bootstrap bias estimate according to Eq. 1.¹ Typically, T^BC(D) agrees with the corresponding unbiased estimator in leading order in N (cf., e.g., [5]). Higher-order corrections can be achieved by "bootstrapping the bootstrap" [5].
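As a concrete illustration of Eqs. 1 and 2 (our own sketch, not code from the paper; the variance example mirrors the plug-in statistic discussed above):

```python
import numpy as np

def bootstrap_bias_corrected(data, statistic, B=200, rng=None):
    """Bias-correct a plug-in statistic via Eqs. 1 and 2.

    data      : 1-D array of N observations (the given data D)
    statistic : callable mapping a sample to a scalar, T(.)
    B         : number of bootstrap samples D^(b)
    Returns (T(D), estimated bias, T^BC(D) = 2 T(D) - <T(D^(b))>_b).
    """
    rng = np.random.default_rng(rng)
    N = len(data)
    t_data = statistic(data)
    # <T(D^(b))>_b : average over bootstrap samples drawn from p-hat
    t_boot = np.mean([statistic(rng.choice(data, size=N, replace=True))
                      for _ in range(B)])
    bias_hat = t_boot - t_data            # Eq. 1
    return t_data, bias_hat, 2 * t_data - t_boot  # Eq. 2

# Example: the plug-in variance underestimates the true variance; the
# bias-corrected value is typically close to the unbiased N/(N-1) estimator.
x = np.random.default_rng(0).normal(size=30)
t, b, t_bc = bootstrap_bias_corrected(x, lambda s: np.var(s), B=1000)
```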
Bias-correction can be dangerous in practice (cf. [5]): even though T^BC(D) is less biased than T(D), the bias-corrected estimator may have substantially larger variance. This is due to a possibly higher variability in the estimate of the bias, particularly when computed from small data sets. However, this is not an issue in this paper, since the "estimate" of the bias turns out to be independent of the empirical distribution (in leading order in N).
3 Bias-Corrected Scoring-Functions
In this section, we show that the above popular scoring-functions are (considerably)
biased towards too complex models when applied to bootstrap samples (in place of
the given data). These scoring functions can be amended by an additional penalty
term that accounts for this bias. Using the bootstrap bias-correction in a slightly
non-standard way, a simple expression for this penalty term follows easily (Section 3.2) from the well-known bias of the plug-in estimator of the entropy, which is
reviewed in Section 3.1 (cf. also, e.g., [11, 2, 16]).
3.1 Bias-Corrected Estimator for True Entropy

The entropy of the (true) distribution p(X) is defined by H(p(X)) = −Σₓ p(x) log p(x). Since this is a concave function of the p's, the plug-in estimator H(p̂(X)) tends to underestimate the true entropy H(p(X)) (cf. Section 2). The bootstrap bias estimate of H(p̂(X)) is $\widehat{\mathrm{Bias}}_H = \langle H(D^{(b)})\rangle_b - H(\hat p)$, where

$$\frac{1}{B}\sum_{b=1}^{B} H(D^{(b)}(X)) = -\sum_x \Big\langle \frac{\nu(x)}{N}\,\log\frac{\nu(x)}{N} \Big\rangle_{\nu(x)\sim\mathrm{Bin}(N,\hat p(x))}, \quad (3)$$
where Bin(N, p̂(x)) denotes the Binomial distribution that originates from the resampling procedure in the bootstrap; N is the sample size; p̂(x) is the probability of sampling a data point with X = x. An exact evaluation of Eq. 3 is computationally prohibitive in most cases. Monte Carlo methods, while yielding accurate results, are computationally costly. An analytical approximation of Eq. 3 follows immediately from the second-order Taylor expansion of L(q(x)) := q(x) log q(x) about p̂(x), where q(x) = ν(x)/N:²

$$\langle H(D^{(b)})\rangle_b = -\sum_x \Big\langle L\Big(\frac{\nu(x)}{N}\Big)\Big\rangle_{\nu(x)} = H(\hat p(x)) - \frac{1}{2}\sum_x L''(\hat p(x))\,\Big\langle\Big[\frac{\nu(x)}{N}-\hat p(x)\Big]^2\Big\rangle_{\nu(x)} + O\Big(\frac{1}{N^2}\Big)$$
$$= H(\hat p(x)) - \frac{1}{2N}\,(|X| - 1) + O\Big(\frac{1}{N^2}\Big), \quad (4)$$

where −L''(p̂(x)) = −1/p̂(x) is the observed Fisher information evaluated at the empirical value p̂(x), and ⟨[ν(x) − N p̂(x)]²⟩_{ν(x)} = N p̂(x)(1 − p̂(x)) is the well-known variance of the Binomial distribution induced by the bootstrap. In Eq. 4, |X| is the number of (joint) states of X. The bootstrap bias-corrected estimator for the entropy of the (unknown true) distribution is thus given by

$$H^{BC}(\hat p(X)) = H(\hat p(X)) + \frac{1}{2N}\,(|X| - 1) + O\Big(\frac{1}{N^2}\Big).$$

¹Note that ⟨T(D^(b))⟩_b is not the bias-corrected statistic.
²Note that this approximation can be applied analogously to Bias_H (instead of the bootstrap estimate $\widehat{\mathrm{Bias}}_H$), and the same leading-order term is obtained.
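In code, the corrected estimator amounts to a one-line adjustment of the plug-in entropy (a minimal sketch in our own notation, assuming counts over the joint states of X):

```python
import numpy as np

def entropy_plugin(counts):
    """Plug-in entropy H(p-hat) in nats from state counts N(x)."""
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def entropy_bias_corrected(counts):
    """H^BC(p-hat) = H(p-hat) + (|X| - 1) / (2N): the leading-order
    (Miller-type) correction derived above via the bootstrap bias."""
    N = counts.sum()
    num_states = len(counts)          # |X|, number of (joint) states
    return entropy_plugin(counts) + (num_states - 1) / (2 * N)
```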
3.2 Bias-Correction for Bootstrapped Scoring-Functions
This section is concerned with the bias of popular scoring functions that is induced by the bootstrap procedure. For the moment, let us focus on the BIC when learning a Bayesian network structure m,

$$T_{BIC}(D, m) = N \sum_{i=1}^{n} \sum_{x_i, \pi_i} \hat p(x_i, \pi_i)\,\log \frac{\hat p(x_i, \pi_i)}{\hat p(\pi_i)} \;-\; \frac{1}{2}\log N \cdot |\theta|. \quad (5)$$
The maximum likelihood involves a summation over all the variables (i = 1, ..., n) and all the joint states of each variable X_i and its parents π_i according to graph m. The number of independent parameters in the Bayesian network is given by

$$|\theta| = \sum_{i=1}^{n} (|X_i| - 1)\cdot|\pi_i| \quad (6)$$

where |X_i| denotes the number of states of variable X_i, and |π_i| the number of (joint) states of its parents π_i. Like other scoring-functions, the BIC is obviously
intended to be applied to the given data. If done so, optimizing the BIC yields
an "unbiased" estimate of the true network structure underlying the given data. However, when the BIC is applied to a bootstrap sample D^(b) (instead of the given data D), the BIC cannot be expected to yield an "unbiased" estimate of the true graph. This is because the maximum likelihood term in the BIC is biased when computed from the bootstrap sample D^(b) instead of the given data D. This bias reads $\widehat{\mathrm{Bias}}_{T_{BIC}} = \langle T_{BIC}(D^{(b)})\rangle_b - T_{BIC}(D)$. It differs conceptually from Eq. 1 in two ways. First, it is the (exact) bias induced by the bootstrap procedure, while Eq. 1 is a bootstrap approximation of the (unknown) true bias. Second, while Eq. 1 applies to a statistic in general, the last term in Eq. 1 necessarily has to be a plug-in statistic. In contrast, both terms involved in $\widehat{\mathrm{Bias}}_{T_{BIC}}$ comprise the same general statistic.
Since the maximum likelihood term is intimately tied to the entropy in the exponential family of probability distributions, the leading-order approximation of the bias of the entropy carries over (cf. Eq. 4):

$$\widehat{\mathrm{Bias}}_{T_{BIC}} = \frac{1}{2}\sum_{i=1}^{n}\Big[\,\{|X_i|\cdot|\pi_i| - 1\} - \{|\pi_i| - 1\}\,\Big] + O\Big(\frac{1}{N}\Big) = \frac{1}{2}\,|\theta| + O\Big(\frac{1}{N}\Big), \quad (7)$$

where |θ| is the number of independent parameters in the model, as given in Eq. 6
for Bayesian networks. Note that this bias is identical to one half of the penalty
for model complexity in the Akaike Information Criterion (AIC). Hence, this bias
due to the bootstrap cannot be neglected compared to the penalty terms inherent
in all popular scoring functions. Also our experiments in Section 5 confirm the
dominating effect of this bias when exploring model uncertainty.
This bias in the maximum likelihood gives rise to spurious dependences induced by
the bootstrap (a well-known property). In this paper, we are mainly interested in
structure learning of graphical models. In this context, the bootstrap procedure
obviously gives rise to a (considerable) bias towards too complex models. As a
consequence, too many edges are present in the learned graph structure, and the
confidence in the presence of edges is overestimated. Moreover, the (undesirable)
additional directed edges in Bayesian networks tend to point towards variables that
already have a large number of parents. This is because the bias is proportional to
the number of joint states of the parents of a variable (cf. Eqs. 7 and 6). Hence,
the amount of the induced bias generally varies among the different edges in the
graph.
Consequently, the BIC has to be amended when applied to a bootstrap sample D^(b) (instead of the given data D). The bias-corrected BIC reads $T^{BC}_{BIC}(D^{(b)}, m) = T_{BIC}(D^{(b)}, m) - \frac{1}{2}|\theta|$ (in leading order in N). Since the bias originates from the maximum likelihood term involved in the BIC, the same bias-correction applies to the AIC and MDL scores. Moreover, as the BIC approximates the (Bayesian) log marginal likelihood, log p(D|m), for large N, the leading-order bias-correction in Eq. 7 can also be expected to account for most of the bias of log p(D^(b)|m) when applied to bootstrap samples D^(b).
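A sketch of the amended score (our own illustration; `loglik` is assumed to be the maximized log-likelihood term of Eq. 5, computed elsewhere, and the network is represented by per-variable cardinalities and parent lists):

```python
import numpy as np

def num_params(card, parents):
    """|theta| of Eq. 6: card[i] is the number of states of X_i,
    parents[i] the list of parent indices of X_i."""
    total = 0
    for i, pa in enumerate(parents):
        pa_states = int(np.prod([card[j] for j in pa])) if pa else 1
        total += (card[i] - 1) * pa_states
    return total

def bic(loglik, N, n_params):
    """T_BIC of Eq. 5: maximized log-likelihood minus (log N / 2)|theta|."""
    return loglik - 0.5 * np.log(N) * n_params

def bic_bootstrap_corrected(loglik_boot, N, n_params):
    """Bias-corrected BIC for a bootstrap sample D^(b):
    T^BC_BIC = T_BIC(D^(b), m) - |theta| / 2 (Eq. 7, leading order)."""
    return bic(loglik_boot, N, n_params) - 0.5 * n_params
```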
4 Bias-Corrected Maximum-Likelihood
It may be surprising that the bias derived in Eq. 7 equals only one half of the
AIC penalty. In this section, we demonstrate that this is indeed consistent with the
AIC score. Using the standard bootstrap bias-correction procedure (cf. Section 2),
we obtain a scoring function that asymptotically equals the AIC. This approach is
similar to the ones in [1, 13].
Assume that we are given some data D sampled from the (unknown) true distribution p(X). The goal is to learn a Bayesian network model with p(X|θ̂, m), or p̂(X|m) in short, where m is the graph structure and θ̂ are the maximum likelihood parameter estimates, given data D. An information theoretic measure for the quality of graph m is the KL divergence between the (unknown) true distribution p(X) and the one described by the Bayesian network, p̂(X|m) (cf. the approach in [1]). Since the entropy of the true distribution p(X) is an irrelevant constant when comparing different graphs, minimizing the KL-divergence is equivalent to minimizing the statistic

$$T(p, \hat p, m) = -\sum_x p(x)\,\log \hat p(x|m), \quad (8)$$
which is the test error of the learned model when using the log loss. When p is unknown, one cannot evaluate T(p, p̂, m), but approximate it by the training error,

$$T(\hat p, m) = -\sum_x \hat p(x)\,\log \hat p(x|m) = -\sum_x \hat p(x|m)\,\log \hat p(x|m) \quad (9)$$

(assuming exponential family distributions). Note that T(p̂, m) is equal to the negative maximum log likelihood up to the irrelevant factor N. It is well-known that the training error underestimates the test error. However, the "bias-corrected training error",

$$T^{BC}(\hat p, m) = T(\hat p, m) - \mathrm{Bias}_{T(\hat p, m)}, \quad (10)$$
can serve as a surrogate, (nearly) unbiased estimator for the unknown test error, T(p, p̂, m), and hence as a scoring function for model selection. The bias is given by the difference between the expected training error and the expected test error,

$$\mathrm{Bias}_T = \underbrace{\sum_x p(x|m)\,\langle\log\hat p(x|m)\rangle_{D\sim p}}_{=-H(p(X|m))-\frac{1}{2N}|\theta|+O(1/N^2)} \;-\; \underbrace{\sum_x \langle\hat p(x|m)\,\log\hat p(x|m)\rangle_{D\sim p}}_{=-H(p(X|m))+\frac{1}{2N}|\theta|+O(1/N^2)} \;\approx\; -\frac{1}{N}\,|\theta|. \quad (11)$$
The expectation is taken over the various data sets D (of sample size N ) sampled
from the unknown true distribution p; H(p(X|m)) is the (unknown) conditional
entropy of the true distribution. In the leading-order approximation in N (cf. also
Section 3.1), the number of independent parameters of the model, |θ|, is given in Eq. 6 for Bayesian networks. Note that both the expected test error and the expected training error give rise to one half of the AIC penalty each. The overall bias amounts to |θ|/N, which exactly equals the AIC penalty for model complexity.
Note that, while the AIC asymptotically favors the same models as cross-validation
[15], it typically does not select the true model underlying the given data, but a
more complex model.
When the bootstrap estimate of the (exact) bias in Eq. 11 is inserted in the scoring
function in Eq. 10, the resulting score may be viewed as the frequentist version of
the (Bayesian) Deviance Information Criterion (DIC)[13] (up to a factor 2): while
averaging over the distribution of the model parameters is natural in the Bayesian
approach, this is mimicked by the bootstrap in the frequentist approach.
5 Experiments
In our experiments with artificial and real-world data, we demonstrate the crucial
effect of the bias induced by the bootstrap procedure, when exploring model uncertainty. We also show that the penalty term in Eq. 7 can compensate for most of
this (possibly large) bias in structure learning of Bayesian networks.
In the first experiment, we used data sampled from the alarm network (37 discrete variables, 46 edges). Comprising 300 and 1,000 data points, respectively, the
generated data sets can be expected to entail some model structure uncertainty.
We examined two different scoring functions, namely BIC and posterior probability
(uniform prior over network structures, equivalent sample size α = 1, cf. [10]).
We used the K2 search strategy [3] because of its computational efficiency and its
accuracy in structure learning, which is high compared to local search (even when
combined with simulated annealing) [10]. This accuracy is due to the additional
input required by the K2 algorithm, namely a correct topological ordering of the
variables according to the true network structure. Consequently, the reported variability in the learned network structures tends to be smaller than the uncertainty
determined by local search (without this additional information). However, we are
mainly interested in the bias induced by the bootstrap here, which can be expected
to be largely unaffected by the search strategy.
Although the true alarm network is known, we use the network structures learned
from the given data D as a reference in our experiments: as expected, the optimal
graphs learned from our small data sets tend to be sparser than the original graph
in order to avoid over-fitting (cf. Table 1).³
We generated 200 bootstrap samples from the given data D (as suggested in [5]),
and then learned the network structure from each. Table 1 shows that the bias
induced by the bootstrap procedure is considerable for both the BIC and the posterior probability: it cannot be neglected compared to the standard deviation of
the distribution over the number of edges. Also note that, despite the small data
sets, the bootstrap yields graphs that have even more edges than the true alarm
network. In contrast, Table 1 illustrates that this bias towards too complex models
can be reduced dramatically by the bias-correction outlined in Section 3.2. However
note that the bias-correction does not work perfectly as it is only the leading-order
correction in N (cf. Eq. 7).
The jackknife is an alternative resampling method, and can be viewed as an approximation to the bootstrap (e.g., cf. [5]). In the delete-d jackknife procedure,
subsamples are generated from the given data D by deleting d data points.⁴ The
choice d = 1 is most popular, but leads to inconsistencies for non-smooth statistics
(e.g., cf. [5]). These inconsistencies can be resolved by choosing a larger value for

³Note that the greedy K2 algorithm yields exactly one graph from each given data set.
⁴As a consequence, unlike bootstrap samples, jackknife samples do not contain multiple identical data points when generated from a given continuous data set (cf. Section 1).
             alarm network data                                 pheromone
             N = 300                    N = 1,000                N = 320
             BIC          posterior     BIC          posterior   posterior
data D       41           40            43           44          63.0 ± 1.5
boot BC      40.7 ± 4.9   40.5 ± 3.5    44.2 ± 2.6   44.1 ± 2.9  57.8 ± 3.5
boot         49.1 ± 11.5  47.8 ± 10.9   47.3 ± 4.6   47.9 ± 4.8  135.7 ± 51.1
jack 1       41.0 ± 0.0   40.0 ± 0.0    43.0 ± 0.0   44.0 ± 0.0  63.2 ± 1.5
jack d       41.1 ± 0.9   40.1 ± 0.3    43.1 ± 0.3   43.7 ± 0.4  63.1 ± 2.3

Table 1: Number of edges (mean ± standard deviation) in the network structures learned from the given data set D, and when using various resampling methods: bias-corrected bootstrap (boot BC), naive bootstrap (boot), delete-1 jackknife (jack 1), and delete-d jackknife (jack d; here d = N/10).
[Figure 1: three scatter plots of edge confidences — given data vs. bootstrap, given data vs. corrected bootstrap, and bootstrap vs. corrected bootstrap — with both axes ranging from 0 to 1.]
Figure 1: The axes of these scatter plots show the confidence in the presence of the edges in the graphs learned from the pheromone data. The vertical and horizontal lines indicate the threshold values according to the mean number of edges in the graphs determined by the three methods (cf. Table 1).
d, roughly speaking √N < d < N, cf. [5]. The underestimation of both the bias and the variance of a statistic is often considered a disadvantage of the jackknife procedure: the "raw" jackknife estimates of bias and variance typically have to be multiplied by a so-called "inflation factor", which is usually of the order of the sample size N. In the context of model selection, however, one may take advantage of the extremely small bias of the "raw" jackknife estimate when determining, e.g., the mean number of edges in the model. Table 1 shows that the "raw" jackknife is typically less biased than the bias-corrected bootstrap in our experiments. However, it is not clear in the context of model selection as to how meaningful the "raw" jackknife estimate of model variability is.
Our second experiment essentially confirms the above results. The yeast pheromone
response data contains 33 variables and 320 data points (measurements) [9]. We
discretized this gene-expression data using the average optimal number of discretization levels for each variable as determined in [14]. Unlike in [14], we simply discretized the data in a preprocessing step, and then conducted our experiments based
on this discretized data set.⁵ Since the correct network structure is unknown in this experiment, we used local search combined with simulated annealing in order to optimize the BIC score and the posterior probability (α = 25, cf. [14]). As a reference in this experiment, we used 320 network structures learned from the given (discretized) data D, each of which is the highest-scoring graph found in a run of local search combined with simulated annealing.⁶ Each resampling procedure is also based on 320 subsamples.
⁵Of course, the bias-correction according to Eq. 7 also applies to the joint optimization of the discretization and graph structure when given a bootstrap sample.
⁶Using the annealing parameters as suggested in [10], each run of simulated annealing resulted in a different network structure (local optimum) in practice.
While the pheromone data experiments in Table 1 qualitatively confirm the previous
results, the bias induced by the bootstrap is even larger here. We suspect that this
difference in the bias is caused by the rather extreme parameter values in the original
alarm network model, which leads to a relatively large signal-to-noise ratio even in
small data sets. In contrast, gene-expression data is known to be extremely noisy.
Another effect of the spurious dependences induced by the bootstrap procedure is
shown in Figure 1: the overestimation of the confidence in the presence of individual
edges in the network structures. The confidence in an individual edge can be estimated as the ratio between the number of learned graphs where that edge is present
and the overall number of learned graphs. Each mark in Figure 1 corresponds to an
edge, and its coordinates reflect the confidence estimated by the different methods.
Obviously, the naive application of the bootstrap leads to a considerable overestimation of the confidence in the presence of many edges in Figure 1, particularly of
those whose absence is favored by both our reference and the bias-corrected bootstrap. In contrast, the confidence estimated by the bias-corrected bootstrap aligns
quite well with the confidence determined by our reference in Figure 1, leading to
more trustworthy results in our experiments.
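A minimal sketch of this counting estimate (our own notation; each learned structure is assumed to be given as a set of directed edges):

```python
from collections import Counter

def edge_confidence(learned_graphs):
    """Fraction of learned graphs containing each edge.

    learned_graphs : list of sets of (parent, child) edges, one per
                     (bias-corrected) bootstrap sample.
    """
    counts = Counter(e for g in learned_graphs for e in g)
    B = len(learned_graphs)
    return {edge: c / B for edge, c in counts.items()}
```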
References
[1] H. Akaike. Information theory and an extension of the maximum likelihood principle. International Symposium on Information Theory, pp. 267–81, 1973.
[2] Carlton. On the bias of information estimates. Psych. Bulletin, 71:108–13, 1969.
[3] G. Cooper and E. Herskovits. A Bayesian method for constructing Bayesian belief networks from databases. UAI, pp. 86–94, 1991.
[4] A. C. Davison and D. V. Hinkley. Bootstrap Methods and their Application. 1997.
[5] B. Efron and R. J. Tibshirani. An Introduction to the Bootstrap. 1993.
[6] N. Friedman, M. Goldszmidt, and A. Wyner. Data analysis with Bayesian networks: A bootstrap approach. UAI, pp. 196–205, 1999.
[7] N. Friedman, M. Goldszmidt, and A. Wyner. On the application of the bootstrap for computing confidence measures on features of induced Bayesian networks. AI & Stat., pp. 197–202, 1999.
[8] N. Friedman, M. Linial, I. Nachman, and D. Pe'er. Using Bayesian networks to analyze expression data. Journal of Computational Biology, 7:601–20, 2000.
[9] A. J. Hartemink, D. K. Gifford, T. S. Jaakkola, and R. A. Young. Combining location and expression data for principled discovery of genetic regulatory networks. In Pacific Symposium on Biocomputing, 2002.
[10] D. Heckerman, D. Geiger, and D. M. Chickering. Learning Bayesian networks: The combination of knowledge and statistical data. Machine Learning, 20:197–243, 1995.
[11] G. A. Miller. Note on the bias of information estimates. Information Theory in Psychology: Problems and Methods, pages 95–100, 1955.
[12] D. Pe'er, A. Regev, G. Elidan, and N. Friedman. Inferring subnetworks from perturbed expression profiles. Bioinformatics, 1:1–9, 2001.
[13] D. J. Spiegelhalter, N. G. Best, B. P. Carlin, and A. van der Linde. Bayesian measures of model complexity and fit. J. R. Stat. Soc. B, 64:583–639, 2002.
[14] H. Steck and T. S. Jaakkola. (Semi-)predictive discretization during model selection. AI Memo 2003-002, MIT, 2003.
[15] M. Stone. An asymptotic equivalence of choice of model by cross-validation and Akaike's criterion. J. R. Stat. Soc. B, 36:44–7, 1977.
[16] J. D. Victor. Asymptotic bias in information estimates and the exponential (Bell) polynomials. Neural Computation, 12:2797–804, 2000.
| 2427 |@word version:1 briefly:1 polynomial:1 steck:2 confirms:1 accounting:1 carry:1 moment:1 contains:2 score:5 genetic:1 bc:9 bootstrapped:1 comparing:1 discretization:3 surprising:1 trustworthy:1 scatter:1 plot:1 resampling:4 half:7 prohibitive:1 greedy:1 vanishing:1 short:1 davison:1 location:1 become:1 symposium:2 fitting:1 introduce:1 indeed:1 expected:10 roughly:1 discretized:4 distri:1 estimating:1 moreover:3 notation:1 underlying:2 substantially:1 psych:1 bootstrapping:1 concave:3 exactly:2 k2:3 originates:2 positive:1 negligible:1 local:5 tends:2 severely:1 consequence:3 despite:1 id:4 examined:1 equivalence:1 suggests:1 limited:1 directed:1 practice:2 differs:1 bootstrap:69 procedure:16 empirical:6 eth:1 bell:1 confidence:10 deviance:1 cannot:6 undesirable:1 selection:5 context:4 optimize:1 equivalent:2 convex:1 immediately:2 estimator:12 n12:1 coordinate:1 exact:3 akaike:8 approximated:1 particularly:2 database:1 observed:1 inserted:1 gifford:1 ordering:1 highest:1 principled:1 complexity:6 overestimation:2 neglected:3 predictive:1 serve:1 linial:1 efficiency:1 easily:1 joint:5 resolved:1 various:4 derivation:1 amend:1 monte:1 artificial:2 choosing:1 whose:2 quite:1 larger:4 dominating:1 drawing:1 favor:1 statistic:24 pheromone:4 noisy:1 obviously:4 subsamples:2 advantage:1 analytical:1 relevant:1 combining:1 description:1 parent:4 optimum:1 stat:3 eq:19 soc:2 involves:2 indicate:1 tommi:2 differ:1 switzerland:1 correct:2 bin:2 summation:1 exploring:6 extension:1 correction:20 sufficiently:1 considered:1 inflation:1 estimation:2 nachman:1 agrees:1 tool:1 mit:5 rather:3 avoid:1 jaakkola:3 derived:1 focus:2 likelihood:11 mainly:2 contrast:4 typically:7 spurious:7 interested:2 comprising:1 issue:1 among:1 overall:2 favored:1 special:2 marginal:1 equal:8 comprise:1 sampling:1 identical:4 biology:1 nearly:1 inherent:1 divergence:2 resulted:1 individual:2 familiar:1 replaced:2 intended:1 replacement:1 friedman:4 huge:1 evaluation:1 mdl:2 introduces:1 extreme:1 yielding:1 accurate:1 edge:18 plugged:3 taylor:1 delete:3 instance:1 disadvantage:1 deviation:2 uniform:1 conducted:1 too:8 reported:1 varies:1 perturbed:1 considerably:3 combined:3 international:1 ie:2 csail:2 overestimated:2 analogously:1 quickly:1 reflect:1 possibly:3 leading:10 account:2 caused:1 analyze:1 bution:1 square:2 accuracy:2 variance:7 largely:1 miller:1 yield:5 conceptually:1 bayesian:19 raw:4 carlo:1 unaffected:1 aligns:1 underestimate:3 frequency:1 involved:2 pp:3 associated:1 sampled:5 popular:6 knowledge:1 efron:2 higher:2 dt:3 response:1 evaluated:1 though:1 done:1 horizontal:1 quality:1 yeast:1 effect:6 normalized:1 true:22 unbiased:9 contain:1 hence:5 read:2 illustrated:1 during:1 noted:1 criterion:6 stone:1 theoretic:1 demonstrate:5 jack:4 overview:1 approximates:1 measurement:1 cambridge:2 ai:4 outlined:1 similarly:2 entail:1 posterior:7 optimizing:1 irrelevant:2 wellknown:1 inequality:1 carlton:1 inconsistency:2 der:1 scoring:16 victor:1 minimum:1 additional:4 elidan:1 signal:1 arithmetic:1 semi:1 multiple:2 smooth:1 plug:11 cross:2 compensate:1 concerning:2 dic:1 essentially:1 expectation:2 harald:2 achieved:1 annealing:5 crucial:1 biased:7 unlike:2 induced:14 tend:2 suspect:1 presence:5 concerned:1 bic:19 psychology:1 carlin:1 fit:1 perfectly:1 reduce:1 whether:1 expression:8 linde:1 penalty:13 speaking:1 dramatically:1 generally:1 clear:1 amount:2 reduced:1 herskovits:1 estimated:3 popularity:1 tibshirani:1 discrete:5 threshold:1 discreteness:2 ht:5 graph:19 asymptotically:5 realworld:1 run:2 
uncertainty:10 powerful:1 place:2 family:3 geiger:1 aic:10 quadratic:1 topological:1 dangerous:1 extremely:2 relatively:1 jackknife:10 hinkley:1 pacific:1 according:5 combination:1 smaller:1 slightly:2 heckerman:1 intimately:3 hl:1 taken:1 computationally:2 zurich:2 turn:1 hh:2 know:1 subnetworks:1 multiplied:1 frequentist:2 mimicked:1 alternative:1 original:2 denotes:3 binomial:2 cf:24 graphical:6 especially:1 implied:1 already:1 parametric:3 costly:1 dependence:5 strategy:2 regev:1 surrogate:1 simulated:4 assuming:1 length:1 ratio:2 minimizing:2 equivalently:1 hlog:1 relate:2 negative:3 rise:3 memo:1 unknown:15 boot:4 vertical:1 variability:4 arbitrary:2 namely:2 required:1 kl:2 learned:14 suggested:2 usually:1 including:1 deleting:1 belief:1 natural:1 technology:2 wyner:2 spiegelhalter:1 axis:1 naive:2 review:1 literature:1 prior:1 discovery:1 determining:1 asymptotic:2 loss:1 interesting:1 proportional:1 validation:2 consistent:1 principle:1 course:1 last:1 bias:93 institute:1 bulletin:1 van:1 xn:1 world:1 amended:2 commonly:2 qualitatively:1 preprocessing:1 approximate:1 l00:1 gene:4 confirm:2 uai:2 xi:6 continuous:3 search:6 regulatory:1 reviewed:1 table:7 learn:1 inherently:1 expansion:1 complex:7 necessarily:1 constructing:1 domain:1 noise:1 alarm:5 profile:1 n2:2 x1:1 cooper:1 inferring:1 exponential:4 tied:3 vanish:1 ib:6 pe:2 chickering:1 young:1 jensen:1 er:2 essential:2 gained:1 illustrates:1 sparser:1 entropy:12 simply:1 explore:1 hartemink:1 applies:3 corresponds:1 ma:2 conditional:1 viewed:3 goal:1 consequently:3 towards:7 fisher:1 considerable:3 absence:1 typical:1 determined:4 corrected:17 averaging:1 called:2 underestimation:1 meaningful:1 select:1 mark:1 goldszmidt:2 biocomputing:1 bioinformatics:1 evaluate:1 |
1,571 | 2,428 | Clustering with the Connectivity Kernel
Bernd Fischer, Volker Roth and Joachim M. Buhmann
Institute of Computational Science
Swiss Federal Institute of Technology Zurich
CH-8092 Zurich, Switzerland
{bernd.fischer, volker.roth,jbuhmann}@inf.ethz.ch
Abstract
Clustering aims at extracting hidden structure in a dataset. While the problem of finding compact clusters has been widely studied in the literature, extracting arbitrarily formed elongated structures is considered a much harder problem. In this paper we present a novel clustering algorithm which tackles the problem by a two-step procedure: first the data
are transformed in such a way that elongated structures become compact
ones. In a second step, these new objects are clustered by optimizing a
compactness-based criterion. The advantages of the method over related
approaches are threefold: (i) robustness properties of compactness-based
criteria naturally transfer to the problem of extracting elongated structures, leading to a model which is highly robust against outlier objects;
(ii) the transformed distances induce a Mercer kernel which allows us
to formulate a polynomial approximation scheme to the generally N Phard clustering problem; (iii) the new method does not contain free kernel
parameters in contrast to methods like spectral clustering or mean-shift
clustering.
1 Introduction
Clustering or grouping data is an important topic in machine learning and pattern recognition research. Among various possible grouping principles, those methods which try to
find compact clusters have gained particular importance. Presumably the most prominent
method of this kind is the K-means clustering for vectorial data [6]. Despite the powerful
modeling capabilities of compactness-based clustering methods, they mostly fail in finding
elongated structures. The fast single linkage algorithm [9] is the most often used algorithm
to search for elongated structures, but it is known to be very sensitive to outliers in the
dataset. Mean shift clustering [3], another method of this class, is capable of extracting
elongated clusters only if all modes of the underlying probability distribution have one single maximum. Furthermore, a suitable kernel bandwidth parameter has to be preselected
[2]. Spectral clustering [10] shows good performance in many cases, but the algorithm is
only analyzed for special input instances while a complete analysis of the algorithm is still
missing. Concerning the preselection of a suitable kernel width, spectral clustering suffers
from similar problems as mean shift clustering.
In this paper we present an alternative method for clustering elongated structures. Apart
from the number of clusters, it is a completely parameter-free grouping principle. We build
up on the work on path-based clustering [7]. For a slight modification of the original problem we show that the defined path distance induces a kernel matrix fulfilling Mercer's condition. After the computation of the path-based distance, the compactness-based pairwise
clustering principle is used to partition the data. While for the general NP-hard pairwise clustering problem no approximation algorithms are known, we present a polynomial time approximation scheme (PTAS) for our special case with path-based distances. The Mercer property of these distances allows us to embed the data in an (n − 1)-dimensional vector space even for non-metric input graphs. In this vector space, pairwise clustering reduces to minimizing the K-means cost function in (n − 1) dimensions [13]. For the latter problem,
however, there exists a PTAS [11].
In addition to this theoretical result, we also present an efficient practical algorithm resorting to a 2-approximation algorithm which is based on kernel PCA. Our experiments suggest that kernel PCA effectively reduces the noise in the data while preserving the coarse
cluster structure. Our method is compared to spectral clustering and mean shift clustering
on selected artificial datasets. In addition, the performance is demonstrated on the USPS
handwritten digits dataset.
2 Clustering by Connectivity
The main idea of our clustering criterion is to transform elongated structures into compact
ones in a preprocessing step. Given the transformed data, we then infer a clustering solution
by optimizing a compactness based criterion. The advantage of circumventing the problem
of directly finding connected (elongated) regions in the data as e.g. in the spanning tree approach is the following: while spanning tree algorithms are extremely sensitive to outliers,
the two-step procedure may benefit from the statistical robustness of certain compactness
based methods. Concerning the general case of datasets which are not given in a vector
space, but only characterized by pairwise dissimilarities, the pairwise clustering model has
been shown to be robust against outliers in the dataset [12]. It may, thus, be a natural
choice to formulate the second step as searching for the partition vector c ∈ {1, ..., K}ⁿ that minimizes the pairwise clustering cost function

$$H^{PC}(c; D) = \sum_{\nu=1}^{K} \frac{1}{n_\nu} \sum_{i:\, c_i=\nu} \;\sum_{j:\, c_j=\nu} d_{ij}, \quad (1)$$

where K denotes the number of clusters, n_ν = |{i : c_i = ν}| denotes the number of objects in cluster ν, and d_ij is the pairwise "effective" dissimilarity between objects i and j as computed by a preprocessing step.
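For reference, Eq. 1 is straightforward to evaluate for a fixed assignment (a minimal sketch, assuming a dense dissimilarity matrix):

```python
import numpy as np

def pairwise_clustering_cost(D, labels):
    """H^PC(c; D) of Eq. 1 for a dissimilarity matrix D and labels c."""
    cost = 0.0
    for nu in np.unique(labels):
        idx = np.flatnonzero(labels == nu)
        cost += D[np.ix_(idx, idx)].sum() / len(idx)
    return cost
```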
The idea of this preprocessing step is to define distances between objects by considering
certain paths through the total object set. The natural formalization of such path problems
is to represent the objects as a graph: consider a connected graph G = (V, E, d⁰) with n vertices (the objects) and symmetric nonnegative edge weights d⁰_ij on the edge (i, j) (the original dissimilarities). Let us denote by P_ij all paths from vertex i to vertex j. In order to make those objects more similar which are connected by "bridges" of other objects, we define for each path p ∈ P_ij the effective dissimilarity d^p_ij between i and j connected by p as the maximum weight on this path, i.e. the "weakest link" on this path. The total dissimilarity between vertices i and j is then defined as the minimum of all path-specific effective dissimilarities d^p_ij:

$$d_{ij} := \min_{p \in P_{ij}} \Big\{ \max_{1 \le h \le |p|-1} d^0_{p[h]\,p[h+1]} \Big\}. \quad (2)$$
Figure 1 illustrates the definition of the effective dissimilarity. If the objects are in the same
cluster their pairwise effective dissimilarities will be small (fig. 1(a)). If the two objects
belong to two different clusters, however, all paths contain at least one large dissimilarity
and the resulting effective dissimilarity will be large (fig. 1(b)). Note that single outliers
as in fig. 1(a,b) do not affect the basic structure in the path-based distances. A problem can only occur if the point density along a "bridge" between the two clusters is as high as the density on the backbone of the clusters, see fig. 1(c). In such a case, however, the points belonging to the "bridge" can hardly be considered as "outliers". The reader should notice that the single linkage algorithm does not possess the robustness properties, since it will separate the three most distant outlier objects in example 1(a) from the remaining data, but it will not detect the dominant structure.

[Figure 1: three panels, each sketching two point sets and the effective dissimilarity d_ij between two marked objects.]
Figure 1: Effective dissimilarities. (a) If objects belong to the same high-density region, d_ij is small. (b) If they are in different regions, d_ij is larger. (c) Two regions connected by a "bridge".
Summarizing the above model, we formalize the path-based clustering problem as:
INPUT: A symmetric (n × n) matrix D⁰ = (d⁰_ij)_{1≤i,j≤n} of nonnegative pairwise dissimilarities between n objects, with zero diagonal elements.
QUESTION: Find clusters by minimizing H^PC(c; D), where the matrix D represents the effective dissimilarities derived from D⁰ by eq. (2).
3 The Connectivity Kernel
In this section we show that the effective dissimilarities induce a Mercer kernel on the
weighted graph G. The Mercer property will then allow us to derive several approximation
results for the NP-hard pairwise clustering problem in section 4.
Definition 1. A metric D is called ultra-metric if it satisfies the condition d_ij ≤ max(d_ik, d_kj) for all distinct i, j, k.
Theorem 1. The dissimilarities defined by (2) induce an ultra-metric on G.
Proof. We have to check the axioms of a metric distance measure plus the restricted triangle inequality d_ij ≤ max(d_ik, d_kj): (i) d_ij ≥ 0, since the weights are nonnegative; (ii) d_ij = d_ji, since we consider symmetric weights; (iii) d_ii = 0 follows immediately from definition (2); (iv) the proof of the restricted triangle inequality follows by contradiction: suppose there exists a triple i, j, k for which d_ij > max(d_ik, d_kj). This situation, however, contradicts the above definition (2) of d_ij: in this case there exists a path from i to j over k, the weakest link of which is shorter than d_ij. Equation (2) then implies that d_ij must be smaller than or equal to max(d_ik, d_kj).
Definition 2. A metric D is ℓ₂-embeddable, if there exists a set of vectors {x_i}_{i=1}^n, x_i ∈ R^p, p ≤ n − 1, such that for all pairs i, j: ‖x_i − x_j‖₂ = d_ij.
A proof for the following lemma has been given in [4]:
Lemma 1. For every ultra-metric D, √D is ℓ₂-embeddable.
Now we are concerned with a realization of such an embedding. We introduce the notion of a centralized matrix. Let P be an (n × n) matrix and let Q = I_n − (1/n) e_n e_nᵀ, where e_n = (1, 1, ..., 1)ᵀ is an n-vector of ones and I_n the n × n identity matrix. We define the centralized P as P^c = QPQ.
The following lemma (for a proof see e.g. [15]) characterizes ℓ₂ embeddings:
Lemma 2. Given a metric D, √D is ℓ₂-embeddable iff D^c = QDQ is negative (semi)definite.
The combination of both lemmata yields the following theorem.
Theorem 2. For the distance matrix D defined in the setting of theorem 1, the matrix S^c = −½D^c with D^c = QDQ is a Gram matrix or Mercer kernel. It contains dot products between a set of vectors {x_i}_{i=1}^n with squared Euclidean distances ‖x_i − x_j‖₂² = d_ij.
Proof. (i) Since D is ultra-metric, √D is ℓ₂-embeddable by lemma 1, and D^c is negative (semi)definite by lemma 2. Thus, S^c = −½D^c is positive (semi)definite. As any positive (semi)definite matrix, S^c defines a Gram matrix or Mercer kernel. (ii) Since s^c_ij is a dot product between two vectors x_i and x_j, the squared Euclidean distance between x_i and x_j is defined by ‖x_i − x_j‖₂² = s^c_ii + s^c_jj − 2s^c_ij = −½(d^c_ii + d^c_jj − 2d^c_ij). With the definition of the centralized distances, it can be seen easily that all but one term, namely the original distance, cancels out: −½(d^c_ii + d^c_jj − 2d^c_ij) = d_ij.
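In matrix form, the kernel of Theorem 2 is obtained by double centering; a minimal sketch, assuming `D` already holds the effective dissimilarities of Eq. 2:

```python
import numpy as np

def connectivity_kernel(D):
    """S^c = -1/2 * Q D Q with Q = I - (1/n) e e^T (Theorem 2)."""
    n = D.shape[0]
    Q = np.eye(n) - np.ones((n, n)) / n
    return -0.5 * Q @ D @ Q
```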
4 Approximation Results
Pairwise clustering is known to be NP-hard [1]. To our knowledge there is no polynomial time approximation algorithm known for the general case of pairwise clustering. For our special case in which the data are transformed into effective dissimilarities, however, we now present a polynomial time approximation scheme.
A Polynomial Time Approximation Scheme. Let us first consider the computation of the effective dissimilarities D. Despite the fact that the path-based distance is a minimum over all paths from i to j, the whole distance matrix can be computed in polynomial time.
Lemma 3. The path-based dissimilarity matrix D defined by equation 2 can be computed in running time O(n² log n).
Proof. The computation of the connectivity kernel matrix is an extension of Kruskal's minimum spanning tree algorithm. We start with n clusters, each containing one single object. In each iteration step the two clusters C_i and C_j are merged with minimal costs d_ij = min_{p∈C_i, q∈C_j} d⁰_pq, where d⁰_pq is the edge weight on the input graph. The link d_ij gives the effective dissimilarity of all objects in C_i to all objects in C_j. To prove this, one can consider the case where d_ij is not the effective dissimilarity between C_i and C_j. Then there exists a path over some other cluster C_k, where all objects on this path have a smaller weight, implying the existence of another pair of clusters with smaller merging costs. The running time is O(n² log n) for the spanning tree algorithm on the complete input graph and an additional O(n²) for filling all elements in the matrix D.
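A direct implementation of this proof follows the Kruskal merge sequence with a union-find structure (our own sketch; `W` is assumed to be the dense matrix of input weights d⁰):

```python
import numpy as np

def effective_dissimilarities(W):
    """Eq. 2 for all pairs in O(n^2 log n): process edges in increasing
    weight order; whenever two clusters merge, the current weight is the
    minimax path distance between all their members."""
    n = W.shape[0]
    D = np.zeros((n, n))
    parent = list(range(n))
    members = [[i] for i in range(n)]

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    edges = sorted((W[i, j], i, j) for i in range(n) for j in range(i + 1, n))
    for w, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:                       # merge two clusters at cost w
            for a in members[ri]:
                for b in members[rj]:
                    D[a, b] = D[b, a] = w
            parent[rj] = ri
            members[ri] += members[rj]
    return D
```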
Let us now discuss the clustering step. Recall first the problem of K-means clustering:
given n vectors X = {x₁, ..., xₙ ∈ R^p}, the task is to partition the vectors in such a way that the squared Euclidean distance to the cluster centroids is minimized. The objective function for K-means is given by

$$H^{KM}(c; X) = \sum_{\nu=1}^{K}\;\sum_{i:\, c_i=\nu}(x_i - y_\nu)^2 \quad\text{where}\quad y_\nu = \frac{1}{n_\nu}\sum_{j:\, c_j=\nu} x_j \quad (3)$$

Minimizing the K-means objective function for squared Euclidean distances is NP-hard if the dimension of the vectors is growing with n.
Lemma 4. There exists a polynomial time approximation scheme (PTAS) for H^KM in arbitrary dimensions and for fixed K.
Proof. In [11] Ostrovsky and Rabani presented a PTAS for K-means.
Using this approximation lemma we are able to prove the existence of a PTAS for pairwise data clustering using the distance defined by (2).
Theorem 3. For distances defined by (2), there exists a PTAS for H^PC.
Proof. By lemma 3 the dissimilarity matrix D can be computed in polynomial time. By theorem 2 we can find vectors x₁, ..., xₙ ∈ R^p (p ≤ n − 1) with d_ij = ‖x_i − x_j‖₂². For squared Euclidean distances, however, there is an algebraic identity between H^PC(c; D) and H^KM(c; X) [13]. By lemma 4 there exists a PTAS for H^KM and thus for H^PC.
A 2-approximation by Kernel PCA. While the existence of a PTAS is an interesting
theoretical approximation result, it does not automatically follow that a PTAS can be used
in a constructive way to derive practical algorithms. Taking such a practical viewpoint,
we now consider another (weaker) approximation result from which, however, an efficient
algorithm can be designed easily. From the fact that we can define a connectivity kernel
matrix we can use kernel PCA [14] to reduce the data dimension. The vectors are projected
on the first principal components. Diagonalization of the centered kernel matrix S^c leads to S^c = VᵀΛV, with an orthogonal matrix V = (v₁, ..., vₙ) containing the eigenvectors of S^c, and a diagonal matrix Λ = diag(λ₁, ..., λₙ) containing the corresponding eigenvalues on its diagonal. Assuming now that the eigenvalues are in descending order (λ₁ ≥ λ₂ ≥ ··· ≥ λₙ), the data are projected on the first p eigenvectors: x'_i = (√λ_j · v_ji)_{j=1,...,p}.
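A sketch of this embedding step (our code, assuming `Sc` is the centered connectivity kernel of Theorem 2):

```python
import numpy as np

def kernel_pca_embedding(Sc, p):
    """Project onto the first p eigenvectors: x'_ij = sqrt(lambda_j) v_ji."""
    eigval, eigvec = np.linalg.eigh(Sc)        # ascending order
    eigval, eigvec = eigval[::-1], eigvec[:, ::-1]
    eigval = np.clip(eigval[:p], 0.0, None)    # guard tiny negative values
    return eigvec[:, :p] * np.sqrt(eigval)     # rows are embedded objects
```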
Theorem 4. Embedding the path-based distances into R^K by kernel PCA and enumerating over all possible Voronoi partitions yields an O(n^{K²+1}) algorithm which approximates path-based clustering within a constant factor of 2.
Proof. The solution of the K-means cost function induces a Voronoi partition on the dataset. If the dimension p of the data is kept fixed, the number of different Voronoi partitions is at most O(n^{Kp}), and they can be enumerated in O(n^{Kp+1}) time [8]. Further, if the embedding dimension is chosen as p = K, K-means in R^K is a 2-approximation algorithm for K-means in R^{n−1} [5]. Combining both results, we arrive at a 2-approximation algorithm with running time O(n^{K²+1}).
Heuristics without approximation guarantees. The running time of the 2-approximation
algorithm may still be too large for many applications; therefore we will refer to two heuristic optimization methods without approximation guarantees. Instead of enumerating all possible Voronoi partitions, one can simply partition the data with the fast classical K-means algorithm. In one sweep it assigns each object to the nearest centroid, while keeping all other object assignments fixed. Then the centroids are relocated according to the new assignments. Since the running time grows linearly with the data dimension, it is useful to first embed the data in K dimensions, which leads us to a functional whose optimal solution is even in the worst case within a factor of two of the desired solution, as we know from the above approximation results. In this reduced space, the K-means heuristic is applied with the hope that there exist only a few local minima in the low-dimensional subspace.
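Putting the pieces together, the practical two-step procedure can be sketched as follows (using the helper functions sketched above; scikit-learn's KMeans is assumed available as one standard optimizer of Eq. 3):

```python
import numpy as np
from sklearn.cluster import KMeans

def connectivity_kernel_clustering(W, K, seed=0):
    """Two-step procedure: effective dissimilarities -> connectivity
    kernel -> K-dimensional kernel PCA embedding -> K-means (the
    heuristic counterpart of the 2-approximation of Theorem 4)."""
    D = effective_dissimilarities(W)       # Eq. 2 (sketch above)
    Sc = connectivity_kernel(D)            # Theorem 2 (sketch above)
    X = kernel_pca_embedding(Sc, p=K)      # embed in p = K dimensions
    return KMeans(n_clusters=K, n_init=10, random_state=seed).fit_predict(X)
```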
As a second heuristic one can apply Ward's method, which is an agglomerative optimization of the K-means objective function.¹ It starts with n clusters, each containing one object, and in each step the two clusters that minimize the K-means objective function are merged. Ward's method produces a cluster hierarchy. For applications of this method see figure 3.
5 Experiments
We first compare our method with the classical single linkage algorithm on artificial data
consisting of three noisy spirals, see figure 2. Our main concern in these experiments is
the robustness against noise in the data. Figure 3(a) shows the dendrogram produced by
single linkage. The leaves of the tree are the objects of figure 2. For better visualization
of the tree structure, the bar diagrams below the tree show the labels of the three cluster
¹It has been shown in [12] that Ward's method is an optimization heuristics for H^PC. Due to the equivalence of H^PC and H^KM in our special case, this property carries over to K-means.
[Figure 2: three panels, one per method.]
Figure 2: Comparison to other clustering methods. (a) Mean shift clustering, (b) spectral clustering, (c) connectivity kernel clustering. (Color images at http://www.inf.ethz.ch/~befische/nips03)
[Figure 3: three dendrogram panels.]
Figure 3: Hierarchical clustering solutions for example 2(c). (a) Single linkage, (b) Ward's method with connectivity kernel, applied to embedded objects in n − 1 dimensions. (c) Ward's method after kernel PCA embedding in 3 dimensions.
solution as drawn in fig. 2(c). The height of the inner nodes depicts the merging costs for
two subtrees. Each level of the hierarchy is one cluster solution. It is obvious that the main
parts of the spiral arms are found, but the objects drawn on the right side are separated
from the rest of the cluster. The respective objects are the outliers that are separated in the
highest hierarchical levels of the algorithm. We conclude that for small K, single linkage
has the tendency to separate single outlier objects from the data.
By way of the connectivity kernel we can transform the original dyadic data to (n − 1)-dimensional vectorial data. To show comparable results for the connectivity kernel, we apply Ward's method to the embedded vectors. Figure 3(b) shows the cluster hierarchy for Ward's method in the full space of n − 1 dimensions. Opposed to the single linkage results, the main structure of the spiral arms has been successfully found in the hierarchy corresponding to the three cluster solution. Below the three cluster level, the tree appears to be very noisy. It should also be noticed that the costs of the three cluster solution are not much larger than the costs of the four cluster solution, indicating that the three cluster solution does not form a distinctly separated hierarchical level.
Figure 3(c) demonstrates that more distinctly separated levels can be found after applying
kernel PCA and embedding the objects into a low-dimensional space (here 3 dimensions).
Ward's method is then applied to the embedded objects. One can see that the coarse structure of the tree has been preserved, while the costs of cluster solutions for K > 3 have been
shrunken towards zero. We conclude that PCA has the effect of de-noising the hierarchical
tree, leading to a more robust agglomerative algorithm.
Now we compare our results to other recently published clustering techniques, that have
been designed to extract elongated structures. Mean shift clustering [3] computes a trajectory of vectors towards the gradient of the underlying probability density. The probability
distribution is estimated with a density estimation kernel, e.g. a Gaussian kernel. The trajectories starting at each point in the feature space converge at the local maxima of the
probability distribution. Mean shift clustering is only applicable to finite dimensional vector spaces, because it implicitly involves density estimation. A potential shortcoming of
mean-shift clustering is the following: if the modes of the distribution have multiple local
maxima (as e.g. in the spiral arm example), there does not exist any kernel bandwidth to
successfully separate the data according to the underlying structure. In figure 2(a) the best
result for mean shift clustering is drawn. For smaller values of σ the spiral arms are further subdivided into additional clusters, and for larger bandwidth values, the result becomes
more and more similar to compactness-based criteria like K-means.
Spectral methods [10] have become quite popular in the last years. Usually the Laplacian
matrix based on a Gaussian kernel is computed. By way of PCA, the data are embedded
in a low dimensional space. The K-means algorithm on the embedded data then gives
the resulting partition. It has also been proposed to project the data on the unit sphere
before applying K-means. Spectral clustering with a Gaussian kernel is known to be able
to separate nested circles, but we observed that it has severe problems to extract the noisy
spiral arms, see fig. 2(b). In spectral clustering, the kernel width σ is a free parameter which has to be selected "correctly". If σ is too large, spectral clustering becomes similar to standard K-means and fails to extract elongated structures. If, on the other hand, σ is too small, the
algorithm becomes increasingly sensitive to outliers, in the sense that it has the tendency to
separate single outlier objects.
Our approach to clustering with the connectivity kernel, however, could successfully extract
the three spiral arms as can be seen in figure 2(c). The reader should notice that this method
does not require the user to preselect any kernel parameter.
[Figure 4: three scatter plots of the embedded digits.]
Figure 4: Example from the USPS dataset. Training examples of digits 2 and 9 embedded in two dimensions. (a) Ground truth labels, (b) K-means labels, and (c) clustering with connectivity kernel.
In a last experiment, we show the advantages of our method compared to a parameter-free
compactness criterion (K-means) on the problem of clustering digits "2" and "9" from the USPS digits dataset. Figure 4 shows the clustering result of our method using the connectivity kernel. The 16×16 digit gray-value images of the USPS dataset are interpreted as vectors and projected on the two leading principal components. In figure 4(a) the ground truth solution is drawn. Figure 4(b) shows the partition by directly applying K-means clustering, and figure 4(c) shows the result produced by our method. Compared to the ground truth solution, path-based clustering succeeded in extracting the elongated structures, resulting in a very small error of only 1.5% mislabeled digits. The compactness-based K-means method, on the other hand, produces clearly suboptimal clusters with an error rate
of 30.6%.
6 Conclusion
In this paper we presented a clustering approach that is based on path-based distances in the
input graph. In a first step, elongated structures are transformed into compact ones, which
in the second step are partitioned by the compactness-based pairwise clustering method.
We showed that the transformed distances induce a Mercer kernel, which in turn allowed
us to derive a polynomial time approximation scheme for the generally NP-hard pairwise clustering problem. Moreover, the Mercer property renders it possible to embed the data
into low-dimensional subspaces by Kernel PCA. These embeddings form the basis for an
efficient 2-approximation algorithm, and also for de-noising the data to "robustify" fast
agglomerative optimization heuristics. Compared to related methods like single linkage,
mean shift clustering and spectral clustering, our method has been shown to successfully
overcome the problem of sensitivity to outlier objects, while being capable of extracting
nested elongated structures. Our method does not involve any free kernel parameters, which
we consider to be a particular advantage over both mean shift and spectral clustering.
References
[1] P. Brucker. On the complexity of clustering problems. Optimization and Operations Research, pages 45–54, 1977.
[2] D. Comaniciu. An algorithm for data-driven bandwidth selection. IEEE T-PAMI, 25(2):281–288, 2003.
[3] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE T-PAMI, 24(5):603–619, 2002.
[4] M. Deza and M. Laurent. Applications of cut polyhedra. J. Comp. Appl. Math., 55:191–247, 1994.
[5] P. Drineas, A. Frieze, R. Kannan, S. Vempala, and V. Vinay. Clustering on large graphs and matrices. In Proc. of the ACM-SIAM Symp. on Discrete Algorithms, pages 291–299, 1999.
[6] R. Duda, P. Hart, and D. Stork. Pattern Classification. Wiley & Sons, 2001.
[7] B. Fischer and J.M. Buhmann. Path-based clustering for grouping of smooth curves and texture segmentation. IEEE T-PAMI, 25(4):513–518, 2003.
[8] M. Inaba, N. Katoh, and H. Imai. Applications of weighted Voronoi diagrams and randomization to variance-based k-clustering. In 10th ACM Sympos. Computat. Geom., pages 332–339, 1994.
[9] A. Jain and R. Dubes. Algorithms for Clustering Data. Prentice Hall, 1988.
[10] A.Y. Ng, M.I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, volume 14, pages 849–856, 2002.
[11] R. Ostrovsky and Y. Rabani. Polynomial time approximation schemes for geometric min-sum median clustering. Journal of the ACM, 49(2):139–156, 2002.
[12] J. Puzicha, T. Hofmann, and J.M. Buhmann. A theory of proximity based clustering: Structure detection by optimization. Pattern Recognition, 2000.
[13] V. Roth, J. Laub, J.M. Buhmann, and K.-R. Müller. Going metric: Denoising pairwise data. In NIPS, volume 15, 2003. To appear.
[14] B. Schölkopf, A. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[15] G. Young and A. S. Householder. Discussion of a set of points in terms of their mutual distances. Psychometrika, 3:19–22, 1938.
| 2428 |@word polynomial:10 duda:1 km:5 harder:1 carry:1 contains:1 katoh:1 must:1 distant:1 partition:10 hofmann:1 designed:2 implying:1 selected:2 leaf:1 coarse:2 math:1 node:1 height:1 along:1 become:2 laub:1 scij:2 symp:1 introduce:1 pairwise:16 brucker:1 growing:1 automatically:1 considering:1 becomes:3 project:1 underlying:3 moreover:1 backbone:1 kind:1 interpreted:1 minimizes:1 finding:3 guarantee:2 every:1 tackle:1 k2:1 ostrovsky:2 demonstrates:1 unit:1 appear:1 positive:2 before:1 local:3 despite:2 laurent:1 path:25 pami:3 plus:1 studied:1 equivalence:1 appl:1 practical:3 qdq:2 definite:4 swiss:1 digit:6 procedure:2 axiom:1 induce:4 suggest:1 selection:1 noising:2 prentice:1 applying:3 descending:1 www:1 elongated:14 demonstrated:1 roth:3 missing:1 starting:1 formulate:2 immediately:1 assigns:1 contradiction:1 embedding:5 searching:1 notion:1 meer:1 hierarchy:4 suppose:1 user:1 element:2 recognition:2 inaba:1 cut:1 observed:1 embeddable:4 worst:1 region:4 connected:5 highest:1 mu:1 complexity:1 completely:1 usps:4 triangle:2 mislabeled:1 easily:2 basis:1 drineas:1 various:1 separated:4 distinct:1 fast:3 effective:13 shortcoming:1 jain:1 artificial:2 sympos:1 quite:1 heuristic:6 widely:1 larger:3 fischer:3 ward:8 transform:2 noisy:3 advantage:4 eigenvalue:3 product:1 combining:1 realization:1 shrunken:1 iff:1 olkopf:1 cluster:32 produce:2 object:31 derive:3 dubes:1 x0i:1 nearest:1 ij:1 eq:1 involves:1 implies:1 switzerland:1 merged:2 centered:1 dii:1 require:1 subdivided:1 fix:1 clustered:1 randomization:1 ultra:4 enumerated:1 proximity:1 considered:3 ground:3 hall:1 kruskal:1 estimation:2 proc:1 applicable:1 label:3 sensitive:3 bridge:4 successfully:4 weighted:2 hope:1 federal:1 uller:1 clearly:1 gaussian:3 aim:1 ck:1 volker:2 derived:1 joachim:1 polyhedron:1 check:1 contrast:1 centroid:3 detect:1 summarizing:1 sense:1 preselect:1 voronoi:5 compactness:10 hidden:1 transformed:6 going:1 among:1 classification:1 special:4 mutual:1 equal:1 ng:1 extention:1 represents:1 cancel:1 filling:1 minimized:1 few:1 frieze:1 consisting:1 n1:3 detection:1 centralized:3 highly:1 severe:1 analyzed:1 pc:5 subtrees:1 edge:3 capable:2 succeeded:1 shorter:1 respective:1 orthogonal:1 tree:10 iv:1 euclidean:5 desired:1 circle:1 theoretical:2 minimal:1 instance:1 modeling:1 assignment:2 cost:9 vertex:4 dij:22 too:3 struc:1 kxi:3 density:6 sensitivity:1 siam:1 connectivity:12 squared:5 lever:1 containing:4 opposed:1 leading:3 nkp:2 potential:1 de:2 try:1 characterizes:1 start:2 capability:1 minimize:1 formed:1 ni:2 variance:1 yield:2 handwritten:1 produced:2 trajectory:2 comp:1 published:1 suffers:1 definition:6 against:3 pp:1 obvious:1 naturally:1 proof:11 dataset:8 popular:1 recall:1 knowledge:1 color:1 cj:6 formalize:1 segmentation:1 appears:1 follow:1 wei:1 furthermore:1 smola:1 dendrogram:1 robustify:1 hand:2 nonlinear:1 defines:1 mode:2 gray:1 grows:1 effect:1 k22:2 contain:2 dkj:4 symmetric:3 width:2 comaniciu:2 criterion:6 prominent:1 complete:2 image:2 novel:1 recently:1 functional:1 dji:1 qp:1 stork:1 volume:2 belong:2 slight:1 approximates:1 refer:1 relocated:1 resorting:1 dot:1 dominant:1 showed:1 optimizing:2 inf:2 apart:1 driven:1 certain:2 inequality:2 arbitrarily:1 preserving:1 minimum:4 ptas:9 seen:2 additional:2 converge:1 imai:1 ller:1 semi:4 ii:3 multiple:1 full:1 reduces:2 infer:1 d0:1 smooth:1 characterized:1 sphere:1 concerning:2 hart:1 laplacian:1 basic:1 metric:10 iteration:1 kernel:38 represent:1 preserved:1 addition:2 diagram:2 median:1 sch:1 rest:1 posse:1 jordan:1 extracting:6 
iii:2 embeddings:2 spiral:7 ture:1 affect:1 xj:7 bandwidth:4 suboptimal:1 reduce:1 idea:2 inner:1 enumerating:2 shift:12 pca:10 linkage:8 dik:4 render:1 algebraic:1 hardly:1 generally:2 useful:1 eigenvectors:2 involve:1 preselection:1 induces:2 reduced:1 http:1 exist:2 computat:1 notice:2 estimated:1 correctly:1 discrete:1 threefold:1 four:1 drawn:4 kept:1 v1:1 graph:8 circumventing:1 year:1 sum:1 prob:1 powerful:1 arrive:1 reader:2 vn:1 comparable:1 nonnegative:3 occur:1 vectorial:2 rabani:2 min:2 extremely:1 vempala:1 according:2 combination:1 belonging:1 smaller:4 increasingly:1 contradicts:1 son:1 partitioned:1 modification:1 lem:1 vji:1 outlier:12 restricted:2 fulfilling:1 equation:2 zurich:2 visualization:1 discus:1 turn:1 fail:1 know:1 operation:1 apply:2 hierarchical:4 spectral:12 alternative:1 robustness:4 rp:3 existence:3 original:4 denotes:2 clustering:66 remaining:1 running:5 build:1 classical:2 sweep:1 objective:4 noticed:1 question:1 diagonal:3 gradient:1 subspace:2 distance:24 link:3 separate:5 topic:1 agglomerative:3 spanning:4 toward:1 kannan:1 nips03:1 assuming:1 minimizing:3 mostly:1 negative:2 datasets:2 finite:1 situation:1 dc:2 rn:1 arbitrary:1 householder:1 bernd:2 pair:2 namely:1 nip:2 able:2 bar:1 below:2 pattern:3 usually:1 geom:1 preselected:1 max:5 suitable:2 natural:2 buhmann:4 arm:6 scheme:7 technology:1 dotproduct:1 extract:4 literature:1 geometric:1 embedded:6 interesting:1 triple:1 pij:3 mercer:9 jbuhmann:1 principle:5 minp:1 viewpoint:1 deza:1 last:2 free:5 keeping:1 side:1 allow:1 weaker:1 institute:2 taking:1 distinctly:2 benefit:1 overcome:1 dimension:13 xn:2 gram:2 curve:1 computes:1 preprocessing:3 projected:3 compact:5 implicitly:1 conclude:2 xi:7 search:1 transfer:1 robust:4 vinay:1 diag:1 pk:2 main:4 dcjj:2 whole:1 noise:2 n2:3 dyadic:1 allowed:1 x1:2 fig:4 en:2 depicts:1 x16:1 wiley:1 formalization:1 fails:1 young:1 theorem:7 rk:2 embed:3 specific:1 weakest:2 grouping:4 exists:8 concern:1 merging:2 effectively:1 gained:1 importance:1 ci:7 diagonalization:1 dissimilarity:21 texture:1 illustrates:1 nk:2 simply:1 ch:3 nested:2 truth:3 satisfies:1 acm:3 identity:2 kmeans:2 towards:2 hard:5 denoising:1 lemma:12 total:2 called:1 tendency:2 indicating:1 puzicha:1 latter:1 ethz:2 constructive:1 |
1,572 | 2,429 | Variational Linear Response
Manfred Opper(1)
Ole Winther(2)
(1) Neural Computing Research Group, School of Engineering and Applied Science,
Aston University, Birmingham B4 7ET, United Kingdom
(2) Informatics and Mathematical Modelling, Technical University of Denmark,
R. Petersens Plads, Building 321, DK-2800 Lyngby, Denmark
opperm@aston.ac.uk, owi@imm.dtu.dk
Abstract
A general linear response method for deriving improved estimates of correlations in the variational Bayes framework is presented. Three applications are given and it is discussed how to use linear response as a general
principle for improving mean field approximations.
1 Introduction
Variational and related mean field techniques have attracted much interest as methods for
performing approximate Bayesian inference, see e.g. [1]. The maturity of the field has
recently been underpinned by the appearance of the variational Bayes method [2, 3, 4] and
associated software making it possible with a window based interface to define and make
inference for a diverse range of graphical models [5, 6].
Variational mean field methods have shortcomings as thoroughly discussed by Mackay [7].
The most important is that it is based upon the variational assumption of independent variables. In many cases, where the effective coupling between the variables is weak, this
assumption works very well. However, if this is not the case, the variational method can
grossly underestimate the width of marginal distributions because variance contributions
induced by other variables are ignored as a consequence of the assumed independence.
Secondly, the variational approximation may be non-convex which is indicated by the occurrence of multiple solutions for the variational distribution. This is a consequence of
the fact that a possibly complicated multi-modal distribution is approximated by a simpler
uni-modal distribution.
Linear response (LR) is a perturbation technique that gives an improved estimate of the correlations between the stochastic variables by expanding around the solution of the variational
distribution [8]. This means that we can get non-trivial estimates of correlations from the
factorizing variational distribution. In many machine learning models, e.g. Boltzmann machine learning [9] or probabilistic Independent Component Analysis [3, 10], the M-step of
the EM algorithm depend upon the covariances of the variables and LR has been applied
with success in these cases [9, 10].
Variational calculus is in this paper used to derive a general linear response correction from
the variational distribution. It is demonstrated that the variational LR correction can be
calculated as systematically the variational distribution in the Variational Bayes framework
(albeit at a somewhat higher computational cost). Three applications are given: a model
with a quadratic interactions, a Bayesian model for estimation of mean and variance of a 1D
Gaussian and a Variational Bayes mixture of multinomials (i.e. for modeling of histogram
data). For the two analytically tractable models (the Gaussian and example two above),
it is shown that LR gives the correct analytical result where the variational method does
not. The need for structured approximations, see e.g. [5] and references therein, that is
performing exact inference for solvable subgraphs, might thus be eliminated by the use of
linear response.
We define a general probabilistic model M for data y and model parameters s: p(s, y) =
p(s, y|M). The objectives of a Bayesian analysis are typically the following: to derive
the marginal likelihood p(y|M) = \int ds\, p(s, y|M) and marginal distributions, e.g. the
one-variable p_i(s_i|y) = \frac{1}{p(y)} \int \prod_{k \neq i} ds_k\, p(s, y) and the two-variable
p_{ij}(s_i, s_j|y) = \frac{1}{p(y)} \int \prod_{k \neq i,j} ds_k\, p(s, y). In this paper, we will only discuss how to derive linear response
approximations to marginal distributions. Linear response corrected marginal likelihoods
can also be calculated, see Ref. [11].
The paper is organized as follows: in section 2 we discuss how to use the marginal likelihood as a generating function for deriving marginal distributions. In section 4 we use this
result to derive the linear response approximation to the two-variable marginals, and derive
an explicit solution of these equations in section 5. In section 6 we discuss why LR, in the
cases where the variational method gives a reasonable solution, will give an even better result. In section 7 we give the three applications, and in section 8 we conclude and discuss
how to combine the mean field approximation (variational, Bethe, . . . ) with linear response
to give more precise mean field approaches.
After finishing this paper we have become aware of the work of Welling and Teh [12, 13]
which also contains the result eq. (8) and furthermore extend linear response to the Bethe
approximation, give several general results for the properties of linear response estimates
and derive belief propagation algorithms for computing the linear response estimates. The
new contributions of this paper compared to Refs. [12, 13] are the explicit solution of
the linear response equations, the discussion of the expected increased quality of linear
response estimates, the applications of linear response to concrete examples especially in
relation to variational Bayes and the discussion of linear response and mean field methods
beyond variational.
2 Generating Marginal Distributions
In this section it is shown how exact marginal distributions can be obtained from functional derivatives of a generating function (the log partition function). In the derivation
of the variational linear response approximation to the two-variable marginal distribution
p_{ij}(s_i, s_j|y), we can use this result by replacing the exact marginal distribution with the variational approximation. To get marginal distributions we introduce a generating function

Z[a] = \int ds\, p(s, y)\, e^{\sum_i a_i(s_i)}   (1)

which is a functional of the arbitrary functions a_i(s_i), and a is shorthand for the vector of
functions a = (a_1(s_1), a_2(s_2), \ldots, a_N(s_N)). We can now obtain the marginal distribution
p(s_i|y, a) by taking the functional derivative [1] with respect to a_i(s_i):

\frac{\delta \ln Z[a]}{\delta a_i(s_i)} = \frac{e^{a_i(s_i)}}{Z[a]} \int \prod_{k \neq i} \left\{ d\bar{s}_k\, e^{a_k(\bar{s}_k)} \right\} p(\bar{s}, y) = p_i(s_i|y, a) .   (2)

[1] The functional derivative is defined by \delta a_j(s_j) / \delta a_i(s_i) = \delta_{ij}\, \delta(s_i - s_j), together with the chain rule.
Setting a = 0 above gives the promised result. The next step is to take the second derivative. This will give us a function that is closely related to the two-variable marginal
distribution. A careful derivation gives

B_{ij}(s_i, s'_j) \equiv \left. \frac{\delta^2 \ln Z[a]}{\delta a_j(s'_j)\, \delta a_i(s_i)} \right|_{a=0} = \left. \frac{\delta p_i(s_i|y, a)}{\delta a_j(s'_j)} \right|_{a=0}   (3)
= \delta_{ij}\, \delta(s_i - s'_j)\, p_i(s_i|y) + (1 - \delta_{ij})\, p_{ij}(s_i, s'_j|y) - p_i(s_i|y)\, p_j(s'_j|y) .

Performing an average of s_i^m (s'_j)^n over B_{ij}(s_i, s'_j), it is easy to see that B_{ij}(s_i, s'_j) gives
the "mean-subtracted" marginal distributions. In the next two sections, variational approximations to the single-variable and two-variable marginals are derived.
3 Variational Learning
In many models of interest, e.g. mixture models, exact inference scales exponentially with
the size of the system. It is therefore of interest to come up with polynomial approximations. A prominent method is the variational one, where a simpler factorized distribution
q(s) = \prod_i q_i(s_i) is used instead of the posterior distribution. Approximations to the
marginal distributions p_i(s_i|y) and p_{ij}(s_i, s_j|y) are then simply q_i(s_i) and q_i(s_i) q_j(s_j).
The purpose of this paper is to show that it is possible within the variational framework to
go beyond the factorized distribution for two-variable marginals. For this purpose we need
the distribution q(s) which minimizes the KL-divergence or "distance" between q(s) and p(s|y):

KL(q(s)||p(s|y)) = \int ds\, q(s) \ln \frac{q(s)}{p(s|y)} .   (4)
The variational approximation to the likelihood is obtained from

-\ln Z_v[a] = \int ds\, q(s) \ln \frac{q(s)}{p(s, y)\, e^{\sum_k a_k(s_k)}} = -\ln Z[a] + KL(q(s)||p(s|y, a)) ,

where a has been introduced in order to be able to use q_i(s_i|a) as a generating function. Introducing Lagrange multipliers \{\lambda_i\} to enforce normalization and minimizing
KL + \sum_i \lambda_i \left( \int ds_i\, q_i(s_i) - 1 \right) with respect to q_i(s_i) and \lambda_i, one finds

q_i(s_i|a) = \frac{e^{a_i(s_i) + \int \prod_{k \neq i} \{ ds_k\, q_k(s_k|a) \} \ln p(s, y)}}{\int d\bar{s}_i\, e^{a_i(\bar{s}_i) + \int \prod_{k \neq i} \{ d\bar{s}_k\, q_k(\bar{s}_k|a) \} \ln p(\bar{s}, y)}} .   (5)

Note that q_i(s_i|a) depends upon all a through the implicit dependence in the q_k's appearing
on the right hand side. Writing the posterior in terms of "interaction potentials", i.e. as a
factor graph

p(s, y) = \prod_i \psi_i(s_i) \prod_{i>j} \psi_{i,j}(s_i, s_j) \cdots ,   (6)

it is easy to see that potentials that do not depend upon s_i will drop out of the variational
distribution. A similar property will be used below to simplify the variational two-variable
marginals.
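As an illustration of the fixed point defined by eq. (5), consider the special case of a pairwise binary model \ln p(s) = \sum_i h_i s_i + \frac{1}{2} \sum_{i,j} J_{ij} s_i s_j with s_i in {-1, +1}; each factor q_i is then characterized by its mean m_i, and iterating eq. (5) reduces to the familiar mean field equations. The sketch below is our own and assumes a zero-diagonal, symmetric J.

import numpy as np

def mean_field_fixed_point(J, h, iters=200):
    # <ln p>_{q \ i} is linear in s_i with coefficient h_i + sum_j J_ij m_j,
    # so eq. (5) gives q_i(s_i) proportional to exp(s_i (h_i + (J m)_i)),
    # i.e. the fixed point m_i = tanh(h_i + (J m)_i)
    m = np.zeros(len(h))
    for _ in range(iters):
        m = np.tanh(h + J @ m)
    return m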
4 Variational Linear Response
Eq. (3) shows that we can obtain the two-variable marginal as the derivative of the marginal
distribution. To get the variational linear response approximation we exchange the exact
marginal with the variational approximation eq. (5) in eq. (3). In section 6 an argument is
given for why one can expect the variational approach to work in many cases and why the
linear response approximation gives improved estimates of correlations in these cases.
Defining the variational "mean subtracted" two-variable marginal as

C_{ij}(s_i, s'_j|a) \equiv \frac{\delta q_i(s_i|a)}{\delta a_j(s'_j)} ,   (7)

it is now possible to derive an expression corresponding to eq. (3). What makes the derivation a bit cumbersome is that it is necessary to take into account the implicit dependence of
a_j(s'_j) in q_k(s_k|a), and the result will consequently be expressed as a set of linear integral
equations in C_{ij}(s_i, s'_j|a). These equations can be solved explicitly, see section 5, or can, as
suggested by Welling and Teh [12, 13], be solved by belief propagation.

Taking into account both explicit and implicit a dependence, we get the variational linear
response theorem:

C_{ij}(s_i, s'_j|a) = \delta_{ij}\, \delta(s_i - s'_j)\, q_i(s_i|a) - q_i(s_i|a)\, q_j(s'_j|a)   (8)
+ q_i(s_i|a) \sum_{l \neq i} \int \prod_{k \neq i} ds_k \prod_{k \neq i,l} q_k(s_k|a)\, C_{lj}(s_l, s'_j|a) \left[ \ln p(s, y) - \int ds_i\, q_i(s_i|a) \ln p(s, y) \right] .

The first term represents the normal variational correlation estimate and the second term
is the linear response correction, which expresses the coupling between the two-variable
marginals.

Using the factorization of the posterior eq. (6), it is easily seen that potentials that do
not depend on both s_i and s_l will drop out of the last term. This property makes the
calculations for most variational Bayes models quite simple, since it means that one
only has to sum over variables that are directly connected in the graphical model.
5 Explicit Solution
The integral equation can be simplified by introducing the symmetric kernel

K_{ij}(s, s') = (1 - \delta_{ij}) \left[ \langle \ln p(s, y) \rangle_{\backslash (i,j)} - \langle \ln p(s, y) \rangle_{\backslash j} - \langle \ln p(s, y) \rangle_{\backslash i} + \langle \ln p(s, y) \rangle \right] ,

where the brackets \langle \ldots \rangle_{\backslash (i,j)} = \langle \ldots \rangle_{q \backslash (i,j)} denote expectations over q for all variables except s_i and s_j, and similarly for \langle \ldots \rangle_{\backslash i}. One can easily show that
\int ds\, q_i(s)\, K_{ij}(s, s') = 0. Writing C in the form

C_{ij}(s, s') = q_i(s)\, q_j(s') \left[ \delta_{ij} \frac{\delta(s - s')}{q_j(s')} - 1 + R_{ij}(s, s') \right] ,   (9)

we obtain an integral equation for the function R:

R_{ij}(s, s') = \sum_l \int d\bar{s}\, q_l(\bar{s})\, K_{il}(s, \bar{s})\, R_{lj}(\bar{s}, s') + K_{ij}(s, s') .   (10)

This result can most easily be obtained by plugging the definition eq. (9) into eq. (8) and
using that \int ds\, q_i(s)\, R_{ij}(s, s') = 0. For many applications, kernels can be written in the
form of sums of pairwise multiplicative "interactions", i.e.

K_{ij}(s, s') = \sum_{\alpha \alpha'} J_{ij}^{\alpha \alpha'} \phi_i^{\alpha}(s)\, \phi_j^{\alpha'}(s')   (11)

with \langle \phi_i^{\alpha} \rangle_q = 0 for all i and \alpha; the solution will then be of the form R_{ij}(s, s') =
\sum_{\alpha \alpha'} A_{ij}^{\alpha \alpha'} \phi_i^{\alpha}(s)\, \phi_j^{\alpha'}(s'). The integral equation reduces to a system of linear equations
for the coefficients A_{ij}^{\alpha \alpha'}.

We now discuss the simplest case, where K_{ij}(s, s') = J_{ij}\, \phi_i(s)\, \phi_j(s'). This is obtained if
the model has only pairwise interactions of the quadratic form \psi_{ij}(s, s') = e^{J_{ij} \lambda_i(s) \lambda_j(s')},
where \phi_i(s) = \lambda_i(s) - \langle \lambda_i \rangle_q. Using R_{ij}(s, s') = A_{ij}\, \phi_i(s)\, \phi_j(s') and augmenting the
matrix of J_{ij}'s with the diagonal elements J_{ii} \equiv -1/\langle \phi_i^2 \rangle_q yields the solution

A_{ij} = -J_{ii} J_{jj} \left[ \left( D(J_{ii}) - J \right)^{-1} \right]_{ij} ,   (12)

where D(J_{ii}) is a diagonal matrix with entries J_{ii}. Using (9), this result immediately leads
to the expression for the correlations

\langle \phi_i \phi_j \rangle = \langle \lambda_i \lambda_j \rangle - \langle \lambda_i \rangle \langle \lambda_j \rangle = -(J^{-1})_{ij} .   (13)
6 Why Linear Response Works
It may seem paradoxical that an approximation which is based on uncorrelated variables
allows us to obtain a nontrivial result for the neglected correlations. To shed more light on
this phenomenon, we would like to see how the true partition function, which serves as a
generating function for expectations, differs from the mean field one when the approximating mean field distribution q is close. We will introduce into the generating function eq.
(1) the parameter \epsilon:

Z_\epsilon[a] = \int ds\, q(s)\, e^{\epsilon \left( \sum_i a_i(s_i) + \ln p(s|y) - \ln q(s) \right)}   (14)

which serves as a bookkeeping device for collecting relevant terms, when \ln p(s|y) - \ln q(s)
is assumed to be small. At the end we will set \epsilon = 1, since Z[a] = Z_{\epsilon=1}[a]. Then expanding
the partition function to first order in \epsilon, we get

\ln Z_\epsilon[a] = \epsilon \left( \sum_i \langle a_i(s_i) \rangle_q + \langle \ln p(s|y) - \ln q(s) \rangle_q \right) + O(\epsilon^2)   (15)
= \epsilon \left( \sum_i \langle a_i(s_i) \rangle_q - KL(q||p) \right) + O(\epsilon^2) .

Keeping only the linear term, setting \epsilon = 1 and inserting the minimizing mean field distribution for q yields

p_i(s|y, a) = \frac{\delta \ln Z}{\delta a_i(s)} = q_i(s|a) + O(\epsilon^2) .   (16)

Hence the computation of the correlations via

B_{ij}(s, s') = \frac{\delta^2 \ln Z}{\delta a_i(s)\, \delta a_j(s')} = \frac{\delta p_i(s|a)}{\delta a_j(s')} = \frac{\delta q_i(s|a)}{\delta a_j(s')} + O(\epsilon^2) = C_{ij}(s, s') + O(\epsilon^2)   (17)

can be assumed to incorporate correctly effects of linear order in \ln p(s|a) - \ln q(s). On
the other hand, one should expect p(s_i, s_j|y) - q_i(s_i) q_j(s_j) to be of order \epsilon. Although the
above does not prove that diagonal correlations are estimated more precisely from C_{ii}(s, s')
than from q_i(s) (only that both are correct to linear order in \epsilon), one often observes this in
practice, see below.
7 Applications
7.1 Quadratic Interactions
The quadratic interaction model, with \ln \psi_{ij}(s_i, s_j) = s_i J_{ij} s_j and arbitrary \psi_i(s_i), i.e.
\ln p(s, y) = \sum_i \ln \psi_i(s_i) + \frac{1}{2} \sum_{i \neq j} s_i J_{ij} s_j + constant, is used in many contexts, e.g.
the Boltzmann machine, independent component analysis and the Gaussian process prior.
For this model we can immediately apply the result eq. (13) to get

\langle s_i s_j \rangle - \langle s_i \rangle \langle s_j \rangle = -(J^{-1})_{ij}   (18)

where we have set J_{ii} = -1/(\langle s_i^2 \rangle_q - \langle s_i \rangle_q^2).

We can apply this to the Gaussian model \ln \psi_i(s_i) = h_i s_i + A_i s_i^2 / 2. The variational
distribution is Gaussian with variance -1/A_i (and covariance zero). Hence, we can set
J_{ii} = A_i. The mean is -[J^{-1} h]_i. The exact marginals have mean -[J^{-1} h]_i and covariance -[J^{-1}]_{ij}. The difference can be quite dramatic, e.g. in two dimensions for

J = \begin{pmatrix} 1 & \epsilon \\ \epsilon & 1 \end{pmatrix} , we get J^{-1} = \frac{1}{1 - \epsilon^2} \begin{pmatrix} 1 & -\epsilon \\ -\epsilon & 1 \end{pmatrix} .

The variance estimates are 1/J_{ii} = 1 for the variational approximation and [J^{-1}]_{ii} = 1/(1 - \epsilon^2) for the exact case. The latter diverges for completely correlated variables, \epsilon \to 1, illustrating that the variational covariance estimate breaks
down when the interaction between the variables is strong.

A very important remark should be made at this point: although the covariance eq. (18)
comes out correctly, the LR method does not reproduce the exact two-variable marginals,
i.e. the distribution eq. (9) plus the sum of the products of the one-variable marginals is not
a Gaussian distribution.
7.2 Mean and Variance of 1D Gaussian
A one-dimensional Gaussian observation model p(y|\mu, \beta) = \sqrt{\beta/2\pi}\, \exp(-\beta (y - \mu)^2 / 2),
\beta = 1/\sigma^2, with a Gaussian prior over the mean and a Gamma prior over \beta [7] serves as another example where linear response, as opposed to the variational approximation, gives exact covariance
estimates. The N-example likelihood can be rewritten as

p(y|\mu, \beta) = \left( \frac{\beta}{2\pi} \right)^{N/2} \exp\left( -\frac{\beta}{2} N \hat{\sigma}^2 - \frac{\beta}{2} N (\mu - \bar{y})^2 \right) ,   (19)

where \bar{y} and \hat{\sigma}^2 = \sum_i (y_i - \bar{y})^2 / N are the empirical mean and variance. We immediately
recognize -\frac{\beta}{2} N (\mu - \bar{y})^2 as the interaction term. Choosing non-informative priors, p(\mu)
flat and p(\beta) \propto 1/\beta, the variational distribution q_\mu(\mu) becomes Gaussian with mean \bar{y}
and variance 1/(N \langle \beta \rangle_q), and q_\beta(\beta) becomes a Gamma distribution \Gamma(\beta|b, c) \propto \beta^{c-1} e^{-\beta/b},
with parameters c_q = N/2 and 1/b_q = \frac{N}{2} \left( \hat{\sigma}^2 + \langle (\mu - \bar{y})^2 \rangle_q \right). The mean and variance of
the Gamma distribution are given by bc and b^2 c. Solving with respect to \langle (\mu - \bar{y})^2 \rangle_q and \langle \beta \rangle_q
gives 1/b_q = \frac{N \hat{\sigma}^2}{2} \frac{N}{N-1}. Exact inference gives c_{exact} = (N - 1)/2 and 1/b_{exact} = \frac{N \hat{\sigma}^2}{2} [7].
A comparison shows that the mean bc is the same in both cases whereas the variational approximation underestimates the variance b^2 c. This is a quite generic property of the variational approach.

The LR correction to the covariance is easily derived from (13), setting J_{12} = -N/2,
\phi_1(\beta) = \beta - \langle \beta \rangle_q and \phi_2(\mu) = (\mu - \bar{y})^2 - \langle (\mu - \bar{y})^2 \rangle_q. This yields J_{11} = -1/\langle \phi_1^2(\beta) \rangle =
-1/(b_q \langle \beta \rangle_q). Using \langle (\mu - \bar{y})^2 \rangle_q = 1/(N \langle \beta \rangle_q) and \langle (\mu - \bar{y})^4 \rangle_q = 3 \langle (\mu - \bar{y})^2 \rangle_q^2, we have
J_{22} = -1/\langle \phi_2^2(\mu) \rangle = -N^2 \langle \beta \rangle_q^2 / 2. Inverting the 2 x 2 matrix J, we immediately get

\langle \phi_1^2 \rangle = Var(\beta) = -(J^{-1})_{11} = \frac{b_q \langle \beta \rangle_q}{1 - b_q / (2 \langle \beta \rangle_q)} .

Inserting the result for \langle \beta \rangle_q, we find that this is in fact the correct result.
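A small numerical check of this claim, assuming the formulas above (all variable names are ours):

import numpy as np

rng = np.random.default_rng(0)
y = rng.normal(size=20)
N = len(y)
s2 = ((y - y.mean()) ** 2).mean()            # empirical variance sigma_hat^2

c_q = N / 2.0                                 # variational Gamma parameters
b_q = 1.0 / ((N * s2 / 2.0) * (N / (N - 1.0)))
beta_mean = b_q * c_q
var_vb = b_q ** 2 * c_q                       # variational variance of beta (too small)
var_lr = b_q * beta_mean / (1.0 - b_q / (2.0 * beta_mean))  # linear response estimate

c_e, b_e = (N - 1) / 2.0, 2.0 / (N * s2)      # exact posterior parameters
var_exact = b_e ** 2 * c_e
print(var_vb, var_lr, var_exact)              # var_lr coincides with var_exact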
7.3 Variational Bayes Mixture of Multinomials
As a final example, we take a mixture model of practical interest and show that linear
response corrections can be calculated straightforwardly. Here we consider the problem of
modeling histogram data ynj consisting of N histograms each with D bins: n = 1, . . . , N
and j = 1, . . . , D. We can model this with a mixture of multinomials (Lars Kai Hansen
2003, in preparation):
p(y_n|\pi, \theta) = \sum_{k=1}^{K} \pi_k \prod_{j=1}^{D} \theta_{kj}^{y_{nj}} ,   (20)

where \pi_k is the probability of the kth mixture component and \theta_{kj} is the probability of observing the
jth histogram bin given that we are in the kth component, i.e. \sum_k \pi_k = 1 and \sum_j \theta_{kj} = 1.
Eventually in the variational Bayes treatment we will introduce Dirichlet priors for the
variables. But the general linear response expression is independent of this. To rewrite
the model such that it is suitable for a variational treatment, i.e. in a product form, we
introduce hidden (Potts) variables x_n = \{x_{nk}\}, x_{nk} \in \{0, 1\} with \sum_k x_{nk} = 1, and write
the joint probability of observed and hidden variables as:

p(y_n, x_n|\pi, \theta) = \prod_{k=1}^{K} \left[ \pi_k \prod_{j=1}^{D} \theta_{kj}^{y_{nj}} \right]^{x_{nk}} .   (21)
Summing over all possible xn vectors, we recover the original observation model.
We can now identify the interaction terms in \sum_n \ln p(y_n, x_n, \pi, \theta) as x_{nk} \ln \pi_k and
y_{nj} x_{nk} \ln \theta_{kj}. Generalizing eq. (8) to sets of variables, we can compute the following
correlations: C(\pi, \pi'), C(\pi, \theta') and C(\theta_k, \theta'_{k'}). To get the explicit solution we need to
write the coupling matrix for the problem, add diagonal terms and invert. Normally,
the complexity will be cubic in the number of parameters. However, it turns out
that the two-variable marginal distributions involving the hidden variables (the number of
which scales with the number of examples) can be eliminated analytically. The computation of correlations is thus only cubic in the number of parameters, K + K \cdot D, making
the computation of correlations attractive even for mixture models.
The symmetric coupling matrix for this problem can be written as

J = \begin{pmatrix} J_{xx} & J_{x\pi} & J_{x\theta} \\ J_{\pi x} & J_{\pi\pi} & J_{\pi\theta} \\ J_{\theta x} & J_{\theta\pi} & J_{\theta\theta} \end{pmatrix} with J_{\theta x} = \begin{pmatrix} J_{\theta_1 x_1} & \cdots & J_{\theta_1 x_N} \\ \vdots & & \vdots \\ J_{\theta_K x_1} & \cdots & J_{\theta_K x_N} \end{pmatrix} ,   (22)

where for simplicity the log on \pi and \theta is omitted and (J_{\theta_k x_n})_{jk} = y_{nj}. The other non-zero sub-matrix is J_{\pi x} = [J_{\pi x_1} \cdots J_{\pi x_N}] with (J_{\pi x_n})_{kk'} = \delta_{k,k'}. To get the covariance
V we introduce diagonal elements into J (which are all tractable in \langle \ldots \rangle = \langle \ldots \rangle_q):

-(J^{-1}_{x_n x_n})_{kk'} = \langle x_{nk} x_{nk'} \rangle - \langle x_{nk} \rangle \langle x_{nk'} \rangle = \delta_{kk'} \langle x_{nk} \rangle - \langle x_{nk} \rangle \langle x_{nk'} \rangle   (23)
-(J^{-1}_{\pi\pi})_{kk'} = \langle \ln \pi_k \ln \pi_{k'} \rangle - \langle \ln \pi_k \rangle \langle \ln \pi_{k'} \rangle   (24)
-(J^{-1}_{\theta_k \theta_k})_{jj'} = \langle \ln \theta_{kj} \ln \theta_{kj'} \rangle - \langle \ln \theta_{kj} \rangle \langle \ln \theta_{kj'} \rangle   (25)

and invert: V = -J^{-1}. Using inversion by partitioning and the Woodbury formula we find the following
simple formula

V_{\pi\pi} = -\left[ J_{\pi\pi} - \tilde{J}_{\pi\pi} - (J_{\pi\theta} - \tilde{J}_{\pi\theta})(J_{\theta\theta} - \tilde{J}_{\theta\theta})^{-1}(J_{\theta\pi} - \tilde{J}_{\theta\pi}) \right]^{-1} ,   (26)

where we have introduced the "indirect" couplings \tilde{J}_{\pi\pi} = J_{\pi x} J^{-1}_{xx} J_{x\pi} and \tilde{J}_{\pi\theta} =
J_{\pi x} J^{-1}_{xx} J_{x\theta}. Similar formulas can be obtained for V_{\pi\theta} and V_{\theta\theta}.
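A sketch of eq. (26) as a matrix computation, under our reading of the block structure; the arguments are hypothetical stand-ins for the (already diagonally augmented) sub-matrices of eq. (22), with p standing for ln pi and t for ln theta.

import numpy as np

def v_pipi(Jxx, Jxp, Jxt, Jpp, Jpt, Jtt):
    Jxx_inv = np.linalg.inv(Jxx)
    Jpp_eff = Jpp - Jxp.T @ Jxx_inv @ Jxp    # J_pipi minus the indirect coupling tilde-J_pipi
    Jpt_eff = Jpt - Jxp.T @ Jxx_inv @ Jxt    # J_pitheta minus tilde-J_pitheta
    Jtt_eff = Jtt - Jxt.T @ Jxx_inv @ Jxt    # J_thetatheta minus tilde-J_thetatheta
    S = Jpp_eff - Jpt_eff @ np.linalg.inv(Jtt_eff) @ Jpt_eff.T
    return -np.linalg.inv(S)                 # V_pipi of eq. (26)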
8 Conclusion and Outlook
In this paper we have shown that it is possible to extend linear response to completely general variational distributions and solve the linear response equations explicitly. We have
given three applications that show (1) that linear response provides approximations of increased quality for two-variable marginals and (2) that linear response is practical for variational
Bayes models. Together this suggests that building linear response into variational Bayes
software such as VIBES [5, 6] would be useful.
Welling and Teh [12, 13] have, as mentioned in the introduction, shown how to apply the
general linear response methods to the Bethe approximation. However, the usefulness of
linear response even goes beyond this: if we can come up with a better tractable approximation to the marginal distribution q(si ) with some free parameters, we can tune these
parameters by requiring consistency between q(si ) and the linear response estimate of the
diagonal of the two-variable marginals eq. (8):
C_{ii}(s_i, s'_i) = \delta(s_i - s'_i)\, q(s_i) - q(s_i)\, q(s'_i) .   (27)
This design principle can be generalized to models that give non-trivial estimates of two-variable
marginals, such as Bethe. It might not be possible to match the entire distribution
for a tractable choice of q(s_i). In that case it is possible to require consistency only for some
statistics. The adaptive TAP approach [11], so far only studied for quadratic interactions,
can be viewed in this way. Generalizing this idea to general potentials, general mean field
approximations, deriving the corresponding marginal likelihoods and deriving guaranteed
convergent algorithms for the approximations are under current investigation.
References
[1] M. Opper and D. Saad, Advanced Mean Field Methods: Theory and Practice, MIT Press, 2001.
[2] H. Attias, "A variational Bayesian framework for graphical models," in Advances in Neural
Information Processing Systems 12, T. Leen et al., Ed. 2000, MIT Press, Cambridge.
[3] J. W. Miskin and D. J. C. MacKay, "Ensemble learning for blind image separation and deconvolution," in Advances in Independent Component Analysis, M. Girolami, Ed. 2000, Springer-Verlag Scientific Publishers.
[4] Z. Ghahramani and M. J. Beal, "Propagation algorithms for variational Bayesian learning,"
in Advances in Neural Information Processing Systems 13. 2001, pp. 507–513, MIT Press,
Cambridge.
[5] C. M. Bishop, D. Spiegelhalter, and J. Winn, "VIBES: A variational inference engine for
Bayesian networks," in Advances in Neural Information Processing Systems 15, 2002.
[6] C. M. Bishop and J. Winn, "Structured variational distributions in VIBES," in Artificial Intelligence and Statistics, Key West, Florida, 2003.
[7] D. J. C. MacKay, Information Theory, Inference, and Learning Algorithms, Cambridge University Press, 2003.
[8] G. Parisi, Statistical Field Theory, Addison-Wesley, 1988.
[9] H.J. Kappen and F.B. Rodríguez, "Efficient learning in Boltzmann machines using linear
response theory," Neural Computation, vol. 10, pp. 1137–1156, 1998.
[10] P. A.d.F.R. Højen-Sørensen, O. Winther, and L. K. Hansen, "Mean field approaches to independent component analysis," Neural Computation, vol. 14, pp. 889–918, 2002.
[11] M. Opper and O. Winther, "Adaptive and self-averaging Thouless-Anderson-Palmer mean field
theory for probabilistic modeling," Physical Review E, vol. 64, pp. 056131, 2001.
[12] M. Welling and Y. W. Teh, "Linear response algorithms for approximate inference," Artificial
Intelligence Journal, 2003.
[13] M. Welling and Y. W. Teh, "Propagation rules for linear response estimates of joint pairwise
probabilities," preprint, 2003.
| 2429 |@word illustrating:1 inversion:1 polynomial:1 calculus:1 covariance:7 dramatic:1 outlook:1 kappen:1 contains:1 united:1 bc:2 current:1 si:66 guez:1 attracted:1 written:2 partition:3 informative:1 drop:2 intelligence:2 device:1 manfred:1 lr:7 provides:1 simpler:2 mathematical:1 become:1 hojen:1 maturity:1 shorthand:1 prove:1 combine:1 introduce:5 pairwise:3 expected:1 xz:2 multi:1 window:1 becomes:2 xx:1 factorized:2 what:1 kk0:4 minimizes:1 nj:1 collecting:1 shed:1 uk:1 partitioning:1 normally:1 underpinned:1 yn:4 engineering:1 consequence:2 ak:1 might:2 plus:1 therein:1 studied:1 suggests:1 co:1 factorization:1 palmer:1 range:1 practical:2 woodbury:1 practice:2 differs:1 aji:1 empirical:1 get:10 close:1 context:1 writing:2 demonstrated:1 go:2 rlj:1 convex:1 simplicity:1 immediately:4 subgraphs:1 rule:2 deriving:4 exact:10 pa:1 element:2 approximated:1 jk:1 observed:1 preprint:1 solved:2 rij:3 connected:1 observes:1 rq:2 mentioned:1 complexity:1 neglected:1 depend:3 solving:1 rewrite:1 upon:4 completely:2 easily:4 joint:2 indirect:1 k0:4 s2i:1 derivation:3 shortcoming:1 effective:1 ole:1 artificial:2 choosing:1 quite:3 kai:1 solve:1 statistic:2 final:1 beal:1 parisi:1 analytical:1 interaction:9 jij:5 product:2 inserting:2 relevant:1 opperm:1 diverges:1 generating:7 coupling:5 derive:7 ac:1 augmenting:1 iq:21 ij:15 school:1 eq:14 strong:1 come:3 girolami:1 closely:1 correct:3 stochastic:1 lars:1 bin:1 exchange:1 require:1 investigation:1 secondly:1 correction:5 around:1 normal:1 exp:2 jx:4 a2:1 omitted:1 purpose:2 estimation:1 birmingham:1 hansen:2 mit:3 gaussian:10 derived:2 finishing:1 potts:1 modelling:1 likelihood:6 inference:8 nn:1 typically:1 entire:1 xnk:6 hidden:3 relation:1 reproduce:1 rodr:1 k6:7 mackay:3 marginal:23 field:15 aware:1 eliminated:2 represents:1 simplify:1 gamma:3 divergence:1 recognize:1 thouless:1 consisting:1 interest:4 mixture:7 bracket:1 light:1 sorensen:1 chain:1 integral:4 necessary:1 bq:5 ynj:3 increased:2 kij:5 modeling:3 cost:1 introducing:2 entry:1 usefulness:1 straightforwardly:1 thoroughly:1 winther:3 probabilistic:3 informatics:1 together:1 concrete:1 opposed:1 possibly:2 derivative:4 cubed:1 account:2 potential:4 jii:8 b2:1 coefficient:1 explicitly:2 depends:1 blind:1 multiplicative:1 break:1 observing:1 bayes:10 recover:1 complicated:1 contribution:2 variance:10 qk:4 ensemble:1 yield:3 identify:1 weak:1 bayesian:6 cumbersome:1 ihsj:1 ed:2 definition:1 petersens:1 grossly:1 underestimate:2 pp:4 associated:1 treatment:2 organized:1 wesley:1 higher:1 response:38 improved:3 modal:2 leen:1 anderson:1 furthermore:1 implicit:3 correlation:12 d:8 hand:2 replacing:1 propagation:4 quality:2 aj:7 indicated:1 scientific:1 building:2 effect:1 requiring:1 multiplier:1 true:1 analytically:2 hence:2 symmetric:2 nonzero:1 attractive:1 width:1 self:1 generalized:1 prominent:1 interface:1 image:1 variational:56 recently:1 bookkeeping:1 multinomial:3 functional:4 kil:1 physical:1 b4:1 exponentially:1 discussed:2 extend:2 marginals:11 cambridge:3 ai:11 consistency:2 similarly:1 i6:1 add:1 posterior:3 sji:1 success:1 yi:1 seen:1 somewhat:1 cii:1 ii:2 hsi:3 multiple:1 reduces:1 technical:1 match:1 calculation:1 a1:1 plugging:1 qi:20 involving:1 expectation:2 histogram:4 normalization:1 kernel:2 invert:2 whereas:1 winn:2 publisher:1 saad:1 induced:1 j11:1 seem:1 easy:2 independence:1 idea:1 attias:1 qj:5 expression:3 j22:1 dsk:3 remark:1 jj:1 ignored:1 useful:1 eai:3 tune:1 simplest:1 sl:2 estimated:1 correctly:2 diverse:1 write:2 vol:3 express:1 group:1 zv:1 key:1 
promised:1 explict:1 pj:1 graph:1 sum:3 reasonable:1 separation:1 bit:1 hi:1 guaranteed:1 convergent:1 quadratic:4 nontrivial:1 precisely:1 software:2 flat:1 argument:1 jjj:1 performing:3 structured:2 em:1 making:2 s1:1 plads:1 lyngby:1 equation:9 ln:29 discus:5 eventually:1 turn:1 addison:1 tractable:4 serf:3 end:1 rewritten:1 apply:3 enforce:1 generic:1 occurrence:1 appearing:1 subtracted:2 florida:1 original:1 dirichlet:1 graphical:3 paradoxical:1 l6:1 ghahramani:1 especially:1 approximating:1 objective:1 dependence:3 diagonal:6 hai:2 kth:2 distance:1 trivial:2 denmark:2 cq:1 minimizing:2 kingdom:1 ql:1 cij:5 design:1 boltzmann:3 teh:5 observation:2 sm:1 defining:1 precise:1 perturbation:1 arbitrary:2 hln:9 introduced:2 inverting:1 kl:5 tap:1 engine:1 beyond:3 able:1 suggested:1 below:2 belief:2 suitable:1 solvable:1 advanced:1 aston:2 spiegelhalter:1 dtu:1 sn:1 kj:8 prior:5 review:1 dsi:2 expect:2 var:1 pij:4 s0:17 principle:2 systematically:1 clj:1 pi:8 uncorrelated:1 last:1 keeping:1 free:1 jth:1 aij:3 side:1 taking:2 opper:3 calculated:3 dimension:1 xn:11 made:1 adaptive:2 simplified:1 far:1 welling:5 eak:1 sj:17 approximate:2 uni:1 imm:1 twovariable:1 summing:1 assumed:3 conclude:1 factorizing:1 s0i:3 sk:8 why:4 bethe:4 expanding:2 improving:1 s0j:13 owi:1 s2:1 n2:4 ref:2 x1:3 west:1 cubic:1 sub:1 explicit:4 bij:4 theorem:1 down:1 formula:3 bishop:2 dk:2 deconvolution:1 ih:1 albeit:1 vibe:3 generalizing:2 simply:1 appearance:1 lagrange:1 expressed:1 viewed:1 consequently:1 careful:1 springerverlag:1 except:1 corrected:1 miskin:1 averaging:1 latter:1 preparation:1 incorporate:1 phenomenon:1 correlated:1 |
1,573 | 243 |
HIGHER ORDER RECURRENT NETWORKS
& GRAMMATICAL INFERENCE
C. L. Giles*, G. Z. Sun, H. H. Chen, Y. C. Lee, D. Chen
Department of Physics and Astronomy
and
Institute for Advanced Computer Studies
University of Maryland, College Park, MD 20742
* NEC Research Institute
4 Independence Way, Princeton, NJ 08540
ABSTRACT
A higher order single layer recursive network easily learns to
simulate a deterministic finite state machine and recognize regular
grammars. When an enhanced version of this neural net state machine
is connected through a common error term to an external analog stack
memory, the combination can be interpreted as a neural net pushdown
automaton. The neural net finite state machine is given the primitives
push and pop, and is able to read the top of the stack. Through a
gradient descent learning rule derived from the common error
function, the hybrid network learns to effectively use the stack
actions to manipulate the stack memory and to learn simple context-free grammars.
INTRODUCTION
Biological networks readily and easily process temporal information; artificial neural
networks should do the same. Recurrent neural network models permit the encoding
and learning of temporal sequences. There are many recurrent neural net models, for example see [Jordan 1986, Pineda 1987, Williams & Zipser 1988]. Nearly all encode the
current state representation of the models in the activity of the neuron and the next
state is determined by the current state and input. From an automata perspective, this
dynamical structure is a state machine. One formal model of sequences and machines
that generate and recognize them are formal grammars and their respective automata.
These models formalize some of the foundations of computer science. In the Chomsky
hierarchy of formal grammars [Hopcroft & Ullman 1979] the simplest level of complexity is defined by the finite state machine and its regular grammars. (All machines
and grammars described here are deterministic.) The next level of complexity is described by pushdown automata and their associated context-free grammars. The pushdown automaton is a finite state machine with the added power to use a stack memory.
Neural networks should be able to perform the same type of computation and thus
solve such learning problems as grammatical inference [Fu 1982].
Simple grammatical inference is defined as the problem of finding (learning) a grammar
from a finite set of strings, often called the teaching sample. Recall that a grammar
(phrase-structured) is defined as a 4-tuple (N, V, P, S) where N and V are nonterminal and terminal vocabularies, P is a finite set of production rules and S is the start symbol. Here grammatical inference is also defined as the learning of the machine that
recognizes the teaching and testing samples. Potential applications of grammatical inference include such various areas as pattern recognition, information retrieval, programming language design, translation and compiling, and graphics languages [Fu 1982].
There has been a great deal of interest in teaching neural nets to recognize grammars and
simulate automata [Allen 1989, Jordan 1986, Pollack 1989, Servan-Schreiber et al. 1989,
Williams & Zipser 1988]. Some important extensions of that work are discussed
here. In particular we construct recurrent higher order neural net state machines which
have no hidden layers and seem to be at least as powerful as any neural net multilayer
state machine discussed so far. For example, the learning time and training sample size
are significantly reduced. In addition, we integrate this neural net finite state machine
with an external stack memory and inform the network through a common objective
function that it has at its disposal the symbol at the top of the stack and the operation
primitives of push and pop. By devising a common error function which integrates the
stack and the neural net state machine, this hybrid structure learns to effectively use the
stack to recognize context-free grammars. In the interesting work of [Williams &
Zipser 1988] a recurrent net learns only the state machine part of a Turing Machine,
since the associated move, read, write operations for each input string are known and are
given as part of the training set. However, the model we present learns how to manipulate the push, pop, and read primitives of an external stack memory, plus learns the additional necessary state operations and structure.
HIGHER ORDER RECURRENT NETWORK
The recurrent neural network utilized can be considered as a higher order modification
of the network model developed by [Williams & Zipser 1988]. Recall that in a recurrent net the activation state S of the neurons at time (t+1) is defined as in a state machine automaton:

S(t+1) = F( S(t), I(t); W ) ,   (1)
where F maps the state S and the input I at time t to the next state. The weight matrix
W forms the mapping and is usually learned. We use a higher order form for this mapping:

S_i(t+1) = g\left( \sum_{j,k} W_{ijk}\, S_j(t)\, I_k(t) \right) ,   (2)
where the range of i, j is the number of state neurons and k the number of input neurons;
g is defined as g(x) = 1/(1 + exp(-x)). In order to use the net for grammatical inference, a
learning rule must be devised. To learn the mapping F and the weight matrix W, given
a sample set of P strings of the grammar, we construct the following error function E :
E = \sum_r E_r^2 = \sum_r \left( T_r - S_0(t_r) \right)^2 ,   (3)

where the sum is over the P samples. The error function is evaluated at the end of a presented sequence of length t_r, and S_0 is the activity of the output neuron. For a recurrent net,
the output neuron is a designated member of the state neurons. The target value of any
pattern is 1 for a legal string and 0 for an illegal one. Using a gradient descent procedure, we minimize the error function E for only the rth pattern. The weight update rule
becomes

\Delta W_{ijk} = \eta\, E_r\, \partial S_0(t_r) / \partial W_{ijk} ,   (4)

where \eta is the learning rate. Using eq. (2), \partial S_0(t_r) / \partial W_{ijk} is easily calculated using
the recursion relationship and the choice of an initial value for \partial S_i(t=0) / \partial W_{ijk}:
\partial S_l(t+1) / \partial W_{ijk} = h(S_l(t+1)) \left[ \delta_{li}\, S_j(t)\, I_k(t) + \sum_{m,n} W_{lmn}\, I_n(t)\, \partial S_m(t) / \partial W_{ijk} \right]   (5)

where h(x) = dg/dx. Note that this requires \partial S_i(t) / \partial W_{ijk} to be updated as each element
of each string is presented and to have a known initial value. Given an adequate network
topology, the above neural net state machine should be capable of learning any regular
grammar of arbitrary string length or a more complex grammar of finite length.
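For concreteness, a minimal sketch of eqs. (2) and (5) as code; this is our own illustration (the paper publishes no implementation), and the dense sensitivity array makes it practical only for the very small networks used here.

import numpy as np

def g(x):
    return 1.0 / (1.0 + np.exp(-x))            # g(x) = 1/(1 + exp(-x)), eq. (2)

def run_with_sensitivities(W, inputs, s0):
    # W[i, j, k] couples state S_j(t) with input I_k(t); eq. (5) is the
    # real-time recursion for the sensitivities dS_l(t)/dW_ijk.
    n, _, m = W.shape
    s = s0.copy()
    dS = np.zeros((n, n, n, m))                # dS[l, i, j, k] = dS_l / dW_ijk, zero at t = 0
    for I in inputs:                           # each I is a unary (one-hot) input vector
        net = np.einsum('ijk,j,k->i', W, s, I)
        s_new = g(net)
        hp = s_new * (1.0 - s_new)             # h = dg/dx expressed via the new activations
        delta = np.zeros((n, n, n, m))
        for l in range(n):
            delta[l, l] = np.outer(s, I)       # the delta_{li} S_j(t) I_k(t) term
        prop = np.einsum('lmn,n,mijk->lijk', W, I, dS)  # sum_{m,n} W_lmn I_n dS_m/dW
        dS = hp[:, None, None, None] * (delta + prop)
        s = s_new
    return s, dS                               # final state and dS_l(t_r)/dW_ijk

After a string is presented, the weight update of eq. (4) only needs the row of dS belonging to the output neuron.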
FINITE STATE MACHINE SIMULATION
In order to see how such a net performs, we trained the net on a regular grammar, the dual parity grammar. An arbitrary-length string of 0's and 1's has dual parity if the
string contains an even number of 0's and an even number of 1's. The network architecture was 3 input neurons and either 3, 4, or 5 state neurons with fully connected second
order interconnection weights. The string vocabulary 0, 1, e (end symbol) used a unary
representation. The initial training set consisted of 30 positive and negative strings of
increasing string length up to length 4. After including in the training all strings up to
length 10 which resulted in misclassification (about 30 strings), the neural net state machine recognized perfectly all strings up to length 20. Total training time was usually 500 epochs or less.
By looking closely at the dynamics of learning, it was discovered that for different inputs the states of the network tended to cluster around three values plus the initial
state. These four states can be considered as possible states of an actual fmite state machine and the movement between these states as a function of input can be interpreted as
the state transitions of a state machine. Constructing a state machine yields a perfect
four state machine which will recognize any dual parity grammar. Using minimization
procedures [Fu 1982], the extraneous state transitions can be reduced to the minimal 4-
state machine. The extracted state machine is shown in Fig. 1. However, for more complicated grammars and different initial conditions, it might be difficult to extract the
finite state machine. When different initial weights were chosen, different extraneous
transition diagrams with more states resulted. What is interesting is that the neural
net finite state machine learned this simple grammar perfectly. A first order net can also learn this problem; the higher order net learns it much faster. It is easy to prove that
there are finite state machines that cannot be represented by first order, single layer recurrent nets [Minsky 1967]. For further discussion of higher order state machines, see
[Liu et al. 1990].
FIGURE 1: A learned four state machine; state 1 is both the start
and the final state.
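The state-machine extraction described above can be sketched roughly as follows (a hypothetical reconstruction; the paper does not spell out the clustering step, and tol is our own parameter). Recorded activation vectors are quantized greedily into clusters, and cluster-to-cluster moves are tabulated per input symbol.

import numpy as np

def extract_fsm(states, inputs, tol=0.2):
    # states: list of activation vectors, one more than inputs (initial state included)
    centers = []
    def cluster(v):
        for c, ctr in enumerate(centers):
            if np.linalg.norm(v - ctr) < tol:
                return c
        centers.append(v.copy())
        return len(centers) - 1
    trans = {}
    for s_prev, sym, s_next in zip(states[:-1], inputs, states[1:]):
        trans[(cluster(s_prev), sym)] = cluster(s_next)
    return trans                               # (state, symbol) -> state table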
NEURAL NET PUSHDOWN AUTOMATA
In order to easily learn more complex deterministic grammars, the neural net must
somehow develop and/or learn to use some type of memory, the simplest being a stack
memory. Two approaches easily come to mind. Teach the additional weight structure in
a multilayer neural network to serve as memory [Pollack 1989] or teach the neural net
to use an external memory source. The latter is appealing because it is well known
from formal language theory that a finite stack machine requires significantly fewer resources than a finite state machine for bounded problems such as recognizing a finite
length context-free grammar. To teach a neural net to use a stack memory poses at least
three problems: 1) how to construct the stack memory, 2) how to couple the stack memory to the neural net state machine, and 3) how to formulate the objective function such
that its optimization will yield effective learning rules.
Most straightforward is formulating the objective function so that the stack is coupled to the neural net state machine. The most stringent condition for a pushdown automaton to accept a context-free grammar is that the pushdown automaton be in a final
state and the stack be empty. Thus, the error function of eq. (3) above is modified to include both final state and stack length terms:
E = \sum_r E_r^2 = \sum_r \left[ T_r - S_0(t_r) + L(t_r) \right]^2 ,   (6)

where L(t_r) is the final stack length at time t_r, i.e. the time at which the last symbol of
the string is presented. Therefore, for legal strings E = 0 if the pushdown automaton is
in a final state and the stack is empty.
Now consider how the stack can be connected to the neural net state machine. Recall
that for a pushdown automaton [Fu 1982], the state transition mapping of eq. (1) includes
an additional argument, the symbol R(t) read from the top of the stack and an additional
stack action mapping. An obvious approach to connecting the stack to the neural net is to
let the activity level of certain neurons represent the symbol at the top of the stack and
others represent the action on the stack. The pushdown automaton has an additional stack
action of reading or writing to the top of the stack based on the current state, input, and
top stack symbol. One interpretation of these mappings would be extensions of eq. (2):
Si(t+l) = g( 1: WSijk Slt) Vk(t)}
(7)
~(t+l) = f( 1: Waijk Slt) Vk(t)}
(8)
FIGURE 2: Single layer higher order recursive neural network that is connected
to a stack memory. A represents action neurons connected to the stack; R represents
memory buffer neurons which read the top of the stack. The activation proceeds upward from states, input, and stack top at time t to states and action at time t+1.
The recursion replaces the states in the bottom layer with the states in the top layer.
where Aj(t) are output neurons controlling the action of the stack; Vk(t) is either the
input neuron value Ik(t) or the connected stack memory neuron value Rk(t), dependent
on the index k; and f = 2g - 1. The current values S_j(t), I_k(t), and R_k(t) are all fully connected through 2nd order weights with no hidden neurons. The mappings of eqs. (7) and
(8) define the recursive network and can be implemented concurrently and in parallel.
Let A(t=0) and R(t=0) = 0. The neuron state values range continuously from 0 to 1 while
the neuron action values range from -1 to 1. The neural network part of the architecture
is depicted in Fig. 2. The number of read neurons is equal to the coding representation of
the stack. For most applications, one action neuron suffices.
In order to use the gradient descent learning rule described in eq. (4), the stack length
must have continuous values. (Other types of learning algorithms may not require a
continuous stack.) We now explain how a continuous stack is used and connected to the
action and read neurons. Interpret the stack actions as follows: push (A > 0), pop (A < 0),
no action (A = 0). For simplicity, only the current input symbol is pushed; then the
number of input and stack memory neurons are equal. (If the input symbol is a, then
a depth A of that value is pushed onto the stack.) The stack consists of a summation of analog symbols. By definition, all symbols within unit depth of the top are in the read neuron R at
time t. If A < 0 (pop), a depth |A| of symbols is removed from the top of the
stack. In the next time step what remains in R is a unit length from the current stack
top. An attempt to pop an empty stack occurs if not enough remains in the stack to pop
depth |A|. Further description of this operation with examples can be found in [Sun et al. 1990]. Since the action operation A removes from or adds to the stack, the stack length at
time t+1 is L(t+1) = L(t) + A(t), where L(t=0) = 0.
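A rough sketch of this continuous stack as we read the description (the class and its small tolerance are ours; the paper gives no code):

class ContinuousStack:
    def __init__(self):
        self.items = []                        # list of [symbol, depth] pairs, top at the end

    def act(self, symbol, A):
        if A > 0:                              # push: current symbol with depth A
            self.items.append([symbol, A])
        elif A < 0:                            # pop: remove a total depth |A| from the top
            need = -A
            while need > 1e-9 and self.items:
                if self.items[-1][1] > need:
                    self.items[-1][1] -= need
                    need = 0.0
                else:
                    need -= self.items.pop()[1]

    def read(self):
        # composition of the top unit depth: what the read neuron R(t) would see
        out, depth = 0.0, 0.0
        for sym, d in reversed(self.items):
            take = min(d, 1.0 - depth)
            out += sym * take
            depth += take
            if depth >= 1.0:
                break
        return out

    def length(self):
        return sum(d for _, d in self.items)   # L(t), used in the error function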
With the recursion relations, stack construction, and error function defined, the learning algorithm may be derived from eqs. (4) & (6):

\Delta W_{ijk} = \eta\, E_r \left( \partial S_0(t_r) / \partial W_{ijk} - \partial L(t_r) / \partial W_{ijk} \right) .   (9)

The derivative terms may be derived from the recurrent relations eqs. (7) & (8) and the
stack length equation. They are

\partial S_l(t+1) / \partial W_{ijk} = h(S_l(t+1)) \left[ \delta_{li}\, S_j(t)\, V_k(t) + \sum_{m,n} W_{lmn}\, V_n(t)\, \partial S_m(t) / \partial W_{ijk} + \sum_{m,n} W_{lmn}\, S_m(t)\, \partial R_n(t) / \partial W_{ijk} \right]   (10)

and

\partial L(t+1) / \partial W_{ijk} = \partial L(t) / \partial W_{ijk} + \partial A(t) / \partial W_{ijk} .   (11)

Since the change \partial R_k(t) / \partial W_{ijk} must contain information about past changes in the action
A, we have

\partial R_k(t) / \partial W_{ijk} = \partial R_k(t) / \partial A(t) \cdot \partial A(t) / \partial W_{ijk} \equiv \Delta_R\, \partial A(t) / \partial W_{ijk} ,   (12)

where \Delta_R = 0, 1, or -1 and depends on the top and bottom symbols read in R(t). This approximation assumes that the read changes are only affected by actions which occurred
in the recent past. The change in action with respect to the weights is defined by a recursion derived from eq. (8) and has the same form as eq. (10). For the case of popping an
empty stack, the weight change increases the stack length for a legal string; otherwise
nothing happens. It appears that all these derivatives are necessary to adequately integrate the neural net with the continuous stack memory.
PUSHDOWN AUTOMATA SIMULATIONS
To test this theoretical development, we trained the neural net pushdown automaton on
two context-free grammars, 1^n 0^n and the parenthesis grammar (balanced strings of
parentheses), For the parenthesis grammar, the net architecture consisted of a 2nd order
fully interconnected single layer net with 3 state neurons, 3 input neurons, and 2 action
neurons (one for push & one for pop). In 20 epochs with fifty positive and negative
training samples of increasing length up to length eight, the network learned how to be
a perfect pushdown automaton. We concluded this after testing on all strings up to
length 20 and through a similar analysis of emergent state-stack values. Using a
similar clustering analysis and heuristic reduction approach, the minimal pushdown
automaton emerges. It should be noted that for this pushdown automaton, the state
machine does very little and is easily learned. Fig. 3 shows the pushdown automaton
that emerged; the 3-tuple represents (input symbol, stack symbol, action of push or
pop). The 1^n 0^n grammar was also successfully trained with a small training set and a few
hundred epochs of learning. This should be compared to the more computationally
intense learning of layered networks [Allen 1989]. A minimal pushdown automaton
was also derived. For further details of the learning and emergent pushdown automata,
see [Sun et al. 1990].
[Figure 3: state transition diagram; edges are labeled with 3-tuples such as (0, φ, -1), (1, 1, 1), (0, 1, -1), (1, φ, 1), (e, 1, ·).]
FIGURE 3: Learned neural network pushdown automaton for parenthesis
balance checker where the numerical results for states (1), (2), (3), and (4)
are (1,0,0), (.9,.2,.2), (.89,.17,.48) and (.79,.25,.70). State (1) is the start
state. State (3) is a legal end state. Before feeding the end symbol, a legal
string must end at state (2) with empty stack.
CONCLUSIONS
This work presents a different approach to incorporating and using memory in a neural
network. A recurrent higher order net learned to effectively employ an external stack
Higher Order Recurrent Networks and Grammatical Inference
memory to learn simple context-free grammars. However, to do so required the creation of a continuous stack structure. Since it was possible to reduce the neural network
to the ideal pushdown automaton, the neural network can be said to have "perfectly"
learned these simple grammars. Though the simulations appear very promising, many
questions remain. Besides extending the simulations to more complex grammars, there
are questions of how well such architectures will scale for "real" problems. What became evident was the power of the higher order network, again demonstrating its speed
of learning and sparseness of training sets. Whether the same will be true for more complex
problems is a question for further work.
REFERENCES
R.A. Allen, Adaptive Training for Connectionist State Machines, ACM Computer
Conference, Louisville, p. 428, (1989).
D. Angluin & C.H. Smith, Inductive Inference: Theory and Methods, ACM Computing
Surveys, Vol. 15, No. 3, p. 237, (1983).
K.S. Fu, Syntactic Pattern Recognition and Applications, Prentice-Hall, Englewood
Cliffs, NJ, (1982).
J.E. Hopcroft & J.D. Ullman, Introduction to Automata Theory, Languages, and Computation, Addison Wesley, Reading, MA, (1979).
M.I. Jordan, Attractor Dynamics and Parallelism in a Connectionist Sequential Machine, Proceedings of the Eighth Conference of the Cognitive Science Society, Amherst,
MA, p. 531 (1986).
Y.D. Liu, G.Z. Sun, H.H. Chen, Y.C. Lee, C.L. Giles, Grammatical Inference and Neural Network State Machines, Proceedings of the International Joint Conference on Neural Networks, M. Caudill (ed), Lawrence Erlbaum, Hillsdale, NJ, vol. 1, p. 285
(1990).
M.L. Minsky, Computation: Finite and Infinite Machines, Prentice-Hall, Englewood,
NJ, p. 55 (1967).
F.J. Pineda, Generalization of Backpropagation to Recurrent Neural Networks, Phys.
Rev. Lett., vol. 18, p. 2229 (1987).
J.B. Pollack, Implications of Recursive Distributed Representations, Advances in Neural Information Systems 1, D.S. Touretzky (ed), Morgan Kaufmann, San Mateo, CA, p.
527 (1989).
D. Servan-Schreiber, A. Cleeremans & J.L. McClelland, Encoding Sequential Structure
in Simple Recurrent Networks, Advances in Neural Information Systems 1, D.S.
Touretzky (ed), Morgan Kaufmann, San Mateo, CA, p. 643 (1989).
G.Z. Sun, H.H. Chen, C.L. Giles, Y.C. Lee, D. Chen, Connectionist Pushdown Automata that Learn Context-free Grammars, Proceedings of the International Joint Conference on Neural Networks, M. Caudill (ed), Lawrence Erlbaum, Hillsdale, NJ, vol.
1, p. 577 (1990).
R.J. Williams & D. Zipser, A Learning Algorithm for Continually Running Fully Recurrent Neural Networks, Institute for Cognitive Science Report 8805, U. of CA, San
Diego, La Jolla, CA 92093, (1988).
| 243 |@word version:1 nd:2 awijk:10 simulation:4 fmite:7 tr:1 reduction:1 initial:6 liu:2 contains:1 past:2 current:6 activation:2 si:1 dx:1 must:5 readily:1 numerical:1 nemal:5 remove:1 update:1 fewer:1 devising:1 smith:1 ik:3 prove:1 consists:1 terminal:1 actual:1 little:1 increasing:2 becomes:1 sting:1 bounded:1 what:3 interpreted:2 string:19 developed:1 astronomy:1 finding:1 nj:4 temporal:2 unit:2 appear:1 continually:1 positive:2 before:1 encoding:2 cliff:1 might:1 plus:2 mateo:2 range:3 testing:2 recursive:4 backpropagation:1 procedure:2 area:1 asi:2 significantly:2 illegal:1 regular:4 chomsky:1 cannot:1 layered:1 prentice:2 context:7 writing:1 deterministic:3 map:1 primitive:3 williams:5 automaton:25 survey:1 formulate:1 simplicity:1 rule:6 updated:1 enhanced:1 hierarchy:1 target:1 controlling:1 construction:1 programming:1 diego:1 element:1 recognition:2 utilized:1 ark:2 bottom:2 cleeremans:1 connected:8 sun:9 movement:1 removed:1 balanced:1 complexity:2 dynamic:2 trained:3 creation:1 serve:1 easily:6 hopcroft:2 joint:2 emergent:2 various:1 represented:1 effective:1 artificial:1 heuristic:1 emerged:1 solve:1 interconnection:1 otherwise:1 grammar:29 asm:2 syntactic:1 final:5 pineda:2 sequence:3 net:34 interconnected:1 description:1 cluster:1 empty:5 extending:1 perfect:2 recurrent:19 develop:1 pose:1 eq:10 implemented:1 come:1 closely:1 stringent:1 hillsdale:2 require:1 feeding:1 suffices:1 generalization:1 biological:1 summation:1 extension:2 around:1 considered:2 hall:2 exp:1 great:1 mapping:7 integrates:1 schreiber:2 wl:1 successfully:1 minimization:1 concurrently:1 modified:1 encode:1 derived:5 vk:4 inference:12 dependent:1 unary:1 accept:1 fust:1 hidden:2 relation:2 upward:1 dual:3 extraneous:2 development:1 equal:2 construct:3 wlmn:2 represents:3 park:1 nearly:1 others:1 connectionist:3 report:1 few:1 employ:1 dg:1 recognize:5 resulted:2 minsky:2 attractor:1 attempt:1 interest:1 englewood:2 popping:1 implication:1 tuple:2 capable:1 fu:1 necessary:2 respective:1 intense:1 pollack:3 minimal:3 theoretical:1 giles:7 ar:2 tp:1 servan:1 phrase:1 hundred:1 recognizing:1 erlbaum:2 graphic:1 too:1 drk:1 international:2 amherst:1 lee:7 physic:1 connecting:1 continuously:1 dso:1 again:1 external:5 cognitive:2 derivative:2 ullman:2 li:1 potential:1 coding:1 includes:1 ad:1 depends:1 servant:1 start:3 effected:1 complicated:1 parallel:1 minimize:1 il:1 became:1 kaufmann:2 yield:2 explain:1 inform:1 tended:1 phys:1 touretzky:2 ed:4 definition:1 obvious:1 associated:2 couple:1 recall:3 emerges:1 formalize:1 appears:1 wesley:1 disposal:1 higher:15 iai:2 evaluated:1 though:1 somehow:1 aj:1 consisted:2 contain:1 true:1 adequately:1 inductive:1 read:10 deal:1 defmed:1 noted:1 evident:1 performs:1 allen:3 cp:3 fj:1 common:4 analog:2 discussed:2 interpretation:1 rth:1 interpret:1 he:1 occurred:1 teaching:3 etal:1 language:4 pu:4 add:1 recent:1 perspective:1 jolla:1 certain:1 buffer:1 tee:1 morgan:2 additional:5 arn:1 recognized:1 faster:1 retrieval:1 devised:1 manipulate:2 a1:1 parenthesis:4 multilayer:2 represent:2 addition:1 diagram:1 source:1 concluded:1 fifty:1 checker:1 member:1 seem:1 jordan:3 zipser:5 ideal:1 easy:1 enough:1 independence:1 architecture:4 perfectly:3 topology:1 reduce:1 action:18 adequate:1 mcclelland:1 simplest:2 reduced:2 generate:1 angluin:1 sl:2 write:1 vol:4 four:3 demonstrating:1 sum:1 turing:1 powerful:1 dst:1 vn:1 pushed:2 layer:7 hi:2 replaces:1 activity:3 simulate:2 argument:1 formulating:1 department:1 structured:1 designated:1 combination:1 remain:1 sate:1 
appealing:1 rev:1 modification:1 happens:1 legal:5 resource:1 equation:1 remains:2 computationally:1 mind:1 addison:1 end:5 operation:5 permit:1 eight:1 lawerence:2 compiling:1 s01:1 top:11 assumes:1 include:2 clustering:1 recognizes:1 running:1 society:1 objective:3 move:1 added:1 question:3 occurs:1 md:1 said:1 gradient:3 maryland:1 length:19 besides:1 index:1 relationship:1 balance:1 difficult:1 teach:3 negative:2 design:1 perform:1 neuron:26 sm:1 sing:1 finite:10 descent:3 looking:1 discovered:1 stack:58 arbitrary:2 required:1 learned:8 pop:9 able:2 proceeds:1 dynamical:1 pattern:4 usually:2 parallelism:1 reading:2 including:1 memory:19 power:2 misclassification:1 hybrid:2 recursion:4 advanced:1 mn:1 caudill:2 gz:1 extract:1 coupled:1 epoch:3 fully:4 dsi:1 interesting:2 foundation:1 integrate:2 production:1 translation:1 parity:3 free:7 last:1 formal:4 institute:3 slt:4 distributed:1 grammatical:11 calculated:1 vocabulary:2 transition:4 depth:4 lett:1 forward:1 adaptive:1 san:3 far:1 ml:1 proximation:1 continuous:5 promising:1 learn:7 contextfree:1 ca:4 complex:4 constructing:1 sp:1 nothing:1 fig:3 learns:7 rk:2 er:1 symbol:16 sit:1 dl:1 incorporating:1 sequential:2 effectively:3 nec:1 push:6 sparseness:1 chen:13 depicted:1 aa:3 extracted:1 dwij:1 acm:2 ma:2 leaming:2 change:5 pushdown:20 determined:1 infinite:1 called:1 total:1 la:1 college:1 latter:1 princeton:1 |
Linear Program Approximations for Factored
Continuous-State Markov Decision Processes
Milos Hauskrecht and Branislav Kveton
Department of Computer Science and Intelligent Systems Program
University of Pittsburgh
{milos,bkveton}@cs.pitt.edu
Abstract
Approximate linear programming (ALP) has emerged recently as one of
the most promising methods for solving complex factored MDPs with
finite state spaces. In this work we show that ALP solutions are not
limited only to MDPs with finite state spaces, but that they can also be
applied successfully to factored continuous-state MDPs (CMDPs). We
show how one can build an ALP-based approximation for such a model
and contrast it to existing solution methods. We argue that this approach
offers a robust alternative for solving high dimensional continuous-state
space problems. The point is supported by experiments on three CMDP
problems with 24-25 continuous state factors.
1
Introduction
Markov decision processes (MDPs) offer an elegant mathematical framework for representing and solving decision problems in the presence of uncertainty. While standard solution
techniques, such as value and policy iteration, scale-up well in terms of the number of
states, the state space of more realistic MDP problems is factorized and thus becomes exponential in the number of state components. Much of the recent work in the AI community
has focused on factored structured representations of finite-state MDPs and their efficient
solutions. Approximate linear programming (ALP) has emerged recently as one of the
most promising methods for solving complex factored MDPs with discrete state components. The approach uses a linear combination of local feature functions to model the value
function. The coefficients of the model are fit using linear program methods. A number of
refinements of the ALP approach have been developed over the past few years. These include
the work by Guestrin et al [8], de Farias and Van Roy [6, 5], Schuurmans and Patrascu
[15], and others [11]. In this work we show how the same set of linear programming (LP)
methods can be extended also to solutions of factored continuous-state MDPs. 1
The optimal solution of the continuous-state MDP (CMDP) may not (and typically does
not) have a finite support. To address this problem, CMDPs and their solutions are usually
approximated and solved either through state space discretization or by fitting a surrogate
and (often much simpler) parametric value function model. The two methods come with
different advantages and limitations.² The disadvantage of discretizations is their accu-

¹We assume that action spaces stay finite. Rust [14] calls such models discrete decision processes.
²The two methods are described in more depth in Section 3.
racy and the fact that higher accuracy solutions are paid for by the exponential increase in
the complexity of discretizations. On the other hand, parametric value-function approximations may become unstable when combined with the dynamic programming methods
and least squares error [1]. The ALP solution that is developed in this work eliminates the
disadvantages of discretization and function approximation approaches while preserving
their good properties. It extends the approach of Trick and Zin [17] to factored multidimensional continuous state spaces. Its main benefits are good running time performance,
stability of the solution, and good quality policies.
Factored models offer a more natural and compact way of parameterizing complex decision
processes. However, not all CMDP models and related factorizations are equally suitable
also for the purpose of optimization. In this work we study factored CMDPs with state
spaces restricted to $[0,1]^n$. We show that the solution for such a model can be approximated
by an ALP with infinite number of constraints that decompose locally. In addition, we
show that by choosing transition models based on beta densities (or their mixtures) and
basis functions defined by products of polynomials one obtains an ALP in which both the
objective function and constraints are in closed form. In order to alleviate the problem of
infinite number of constraints, we develop and study approximation based on constraint
sampling [5, 6]. We show that even under a relatively simple random constraint sampling
we are able to very quickly calculate solutions of a high quality that are comparable to other
existing CMDP solution methods.
The text of the paper is organized as follows. First we review finite-state MDPs and approximate linear programming (ALP) methods developed for their factored refinements. Next
we show how to extend the LP approximations to factored continuous-state MDPs and discuss assumptions underlying the model. Finally, we test the new method on a continuousstate version of the computer network problem [8, 15] and compare its performance to
alternative CMDP methods.
2
Finite-state MDPs
A finite state MDP defines a stochastic control process with components $(S, A, P, R)$, where $S$ is a finite set of states, $A$ is a finite set of actions, $P: S \times A \times S \rightarrow [0,1]$ defines a probabilistic transition model mapping a state to the next states given an action, and $R: S \times A \rightarrow \mathrm{IR}$ defines a reward model for choosing an action in a specific state.
Given an MDP our objective is to find the policy $\pi: S \rightarrow A$ maximizing the infinite-horizon discounted reward criterion $E[\sum_{t=0}^{\infty} \gamma^t r_t]$, where $0 \le \gamma < 1$ is a discount factor and $r_t$ is the reward obtained in step $t$. The value of the optimal policy satisfies the Bellman fixed point equation [12]:

$$V^*(s) = \max_{a \in A}\Big[ R(s,a) + \gamma \sum_{s' \in S} P(s'\,|\,s,a)\, V^*(s') \Big], \qquad (1)$$

where $V^*$ is the value of the optimal policy and $s'$ denotes the next state. For all states the equation can be written as $V^* = T V^*$, where $T$ is the Bellman operator. Given the value function $V^*$, the optimal policy $\pi^*(s)$ is defined by the action optimizing Eqn 1.
Methods for solving an MDP include value iteration, policy iteration, and linear programming [12, 2]. In the linear program (LP) formulation we solve the following problem:
$$\min_V \sum_{s} \alpha(s)\, V(s) \qquad (2)$$
subject to:
$$V(s) \ge R(s,a) + \gamma \sum_{s'} P(s'\,|\,s,a)\, V(s') \quad \forall (s,a),$$

where the values $V(s)$ for every state $s$ are treated as variables and $\alpha(s) > 0$ are fixed state-relevance weights.
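For concreteness, a minimal sketch (ours, not the authors' code) that solves the exact LP of Equation 2 for a small random finite MDP with scipy; the uniform state-relevance weights and problem sizes are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import linprog

nS, nA, gamma = 5, 2, 0.95
rng = np.random.default_rng(0)
P = rng.dirichlet(np.ones(nS), size=(nS, nA))   # P[s, a, s'] = P(s'|s, a)
R = rng.uniform(size=(nS, nA))                  # reward model
alpha = np.full(nS, 1.0 / nS)                   # illustrative state-relevance weights

# Equation 2 constraints: V(s) - gamma * sum_s' P(s'|s,a) V(s') >= R(s,a).
# linprog expects A_ub x <= b_ub, so negate each constraint row.
A_ub = np.zeros((nS * nA, nS))
b_ub = np.zeros(nS * nA)
for s in range(nS):
    for a in range(nA):
        A_ub[s * nA + a] = gamma * P[s, a] - np.eye(nS)[s]
        b_ub[s * nA + a] = -R[s, a]

res = linprog(c=alpha, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * nS)
V = res.x
policy = np.argmax(R + gamma * np.einsum('sat,t->sa', P, V), axis=1)
print("V* =", np.round(V, 3), "greedy policy =", policy)
```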
Factorizations and LP approximations
In factored MDPs, the state space $S$ is defined in terms of state variables $\mathbf{x} = (x_1, x_2, \ldots, x_n)$. As a result, the state space becomes exponential in the number of variables. Compact parameterizations of MDPs based on dynamic belief networks [7] and decomposable reward functions are routinely used to represent such MDPs more efficiently. However, the presence of a compact model does not imply the existence of efficient optimal solutions. To address this problem Koller and Parr [9] and Guestrin et al [8] propose to use a linear model [13]:

$$V(\mathbf{x}) = \sum_i w_i f_i(\mathbf{x}_i)$$

to approximate the value function $V^*(\mathbf{x})$. Here the $w_i$ are the linear coefficients to be found (fit) and the $f_i$'s denote feature functions defined over subsets $\mathbf{x}_i$ of state variables.
Given a factored binary-state MDP, the coefficients of the linear model can be found by solving the surrogate of the LP in Equation 2 [8]:

$$\min_w \sum_i w_i\, 2^{-|\mathbf{x}_i|} \sum_{\mathbf{x}_i} f_i(\mathbf{x}_i) \qquad (3)$$
subject to:
$$\sum_i w_i \Big[ f_i(\mathbf{x}_i) - \gamma \sum_{\mathbf{x}'_i} P(\mathbf{x}'_i \,|\, \mathbf{x}_{\Gamma(i,a)}, a)\, f_i(\mathbf{x}'_i) \Big] \ge \sum_j r_j(\mathbf{x}_j, a) \quad \forall \mathbf{x}, a,$$

where $\mathbf{x}_{\Gamma(i,a)}$ are the parents of the state variables in $\mathbf{x}'_i$ under action $a$, and $R(\mathbf{x},a)$ decomposes to $\sum_j r_j(\mathbf{x}_j, a)$, such that $r_j(\mathbf{x}_j, a)$ is a local reward function defined
over a subset of state variables. Note that while the objective function can be computed
efficiently, the number of constraints one has to satisfy remains exponential in the number
of random variables. However, only a subset of these constraints becomes active and affects
the solution. Guestrin et al [8] showed how to find active constraints by solving a cost
network problem. Unfortunately, the cost network formulation is NP-hard. An alternative
approach for finding active constraints was devised by Schuurmans and Patrascu [15]. The
approach implements a constraint generation method [17] and appears to give a very good
performance on average. The idea is to greedily search for maximally violated constraints
which can be done efficiently by solving a linear optimization problem. These constraints
are included in the linear program and the process is repeated until no violated constraints
are found. De Farias and Van Roy [5] analyzed a Monte Carlo approach with randomly
sampled constraints.
3
Factored continuous-state MDPs
Many stochastic controlled processes are more naturally defined using continuous state
variables. In this work we focus on continuous-state MDPs (CMDPs) where state spaces
are restricted to $[0,1]^n$.³ We assume factored representations where transition probabilities are defined in terms of densities over state-variable subspaces: $p(\mathbf{x}'\,|\,\mathbf{x},a) = \prod_{j=1}^{n} p(x'_j\,|\,\mathbf{x}_{pa(j)},a)$, where $\mathbf{x}'$ and $\mathbf{x}$ denote the current and previous states. Rewards are represented compactly over subsets of state variables, similarly to factored finite-state MDPs.
3.1 Solving continuous-state MDP
The optimal value function for a continuous-state MDP satisfies the Bellman fixed point
equation:
$$V^*(\mathbf{x}) = \max_{a}\Big[ R(\mathbf{x},a) + \gamma \int_{[0,1]^n} p(\mathbf{x}'\,|\,\mathbf{x},a)\, V^*(\mathbf{x}')\, d\mathbf{x}' \Big].$$

³We note that in general any bounded subspace of $\mathrm{IR}^n$ can be transformed to $[0,1]^n$.
The problem with CMDPs is that in most cases the optimal value function does not have a
finite support and cannot be computed. The solutions attempt to replace the value function
or the optimal policy with a finite approximation.
Grid-based MDP (GMDP) discretizations. A typical solution is to discretize the state
space to a set of grid points and approximate value functions over such points. Unfortunately, classic grid algorithms scale up exponentially with the number of state variables
[4]. Let $G = \{\mathbf{x}^1, \mathbf{x}^2, \ldots, \mathbf{x}^N\}$ be a set of grid points over the state space $[0,1]^n$. Then the Bellman operator $T$ can be approximated with an operator $T_G$ that is restricted to grid points $G$. One such operator has been studied by Rust [14] and is defined as:
$$(T_G V)(\mathbf{x}^i) = \max_{a}\Big[ R(\mathbf{x}^i,a) + \gamma \sum_{j=1}^{N} \tilde P(\mathbf{x}^j \,|\, \mathbf{x}^i, a)\, V(\mathbf{x}^j) \Big], \qquad (4)$$

where $\tilde P(\mathbf{x}^j \,|\, \mathbf{x}^i, a) = \psi_a(\mathbf{x}^i)\, p(\mathbf{x}^j \,|\, \mathbf{x}^i, a)$ defines a normalized transition probability such that $\psi_a(\mathbf{x}^i)$ is a normalizing constant. Equation 4 applied to grid points $G$ defines a finite-state MDP with $|G|$ states. The solution, $V_G = T_G V_G$, approximates the original
continuous-state MDP. Convergence properties of the approximation scheme in Equation 4
for random or pseudo-random samples were analyzed by Rust [14].
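A minimal sketch (our illustration, not the authors' implementation) of the random-grid operator of Equation 4: grid points are sampled uniformly from $[0,1]^n$, the transition density is evaluated pairwise and renormalized, and the resulting finite MDP is solved by value iteration. The factored beta transition density and the reward are hypothetical stand-ins.

```python
import numpy as np
from scipy.stats import beta

def trans_density(x_next, x, a):
    """Hypothetical factored beta transition: p(x'|x, a) = prod_j Beta(x'_j; alpha_j, beta_j)."""
    al = 1.0 + 2.0 * x + (a == np.arange(x.size))   # illustrative parameters
    be = 1.0 + 2.0 * (1.0 - x)
    return np.prod(beta.pdf(x_next, al, be))

def solve_gmdp(n_dims=3, n_actions=4, n_grid=200, gamma=0.95, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    G = rng.uniform(size=(n_grid, n_dims))           # random grid points
    R = G.sum(axis=1)                                # illustrative reward R(x) = sum_j x_j
    # P[a, i, j] ~ p(x^j | x^i, a), renormalized over the grid (Rust's operator).
    P = np.empty((n_actions, n_grid, n_grid))
    for a in range(n_actions):
        for i in range(n_grid):
            w = np.array([trans_density(G[j], G[i], a) for j in range(n_grid)])
            P[a, i] = w / w.sum()
    V = np.zeros(n_grid)
    for _ in range(iters):                           # value iteration on the grid MDP
        V = (R[None, :] + gamma * P @ V).max(axis=0)
    return G, V

G, V = solve_gmdp()
print("value at first grid point:", V[0])
```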
Parametric function approximations. An alternative way to solve a continuous-state MDP is to approximate the optimal value function $V^*(\mathbf{x})$ with an appropriate parametric
function model [3]. The parameters of the model are fitted iteratively by applying one
step Bellman backups to a finite set of state points arranged on a fixed grid or obtained
through Monte Carlo sampling. Least squares criterion is used to fit the parameters of the
model. In addition to parallel updates and optimizations, on-line update schemes based on
gradient descent [3, 16] are very popular and can be used to optimize the parameters. The
disadvantage of the methods is their instability and possible divergence [1].
3.2 LP approximations of CMDPs
Our objective is to develop an alternative to the above solutions that is based on ALP
techniques and that takes advantage of model factorizations. It is easy to see that for a
general continuous-state model the exact solution cannot be formulated as a linear program
as was done in Equation 2 since the number of states is infinite. However, using linear
representations of the value functions we need to optimize only over a finite number of
weights combining feature functions. So adopting the ALP approach from factored MDPs
(Section 2), the CMDP problem can be formulated as:
$$\min_w \sum_i w_i \int_{\mathbf{x}_i} f_i(\mathbf{x}_i)\, d\mathbf{x}_i$$
subject to:
$$\sum_i w_i \Big[ f_i(\mathbf{x}_i) - \gamma \int p(\mathbf{x}'_i \,|\, \mathbf{x}_{\Gamma(i,a)}, a)\, f_i(\mathbf{x}'_i)\, d\mathbf{x}'_i \Big] \ge \sum_j r_j(\mathbf{x}_j, a) \quad \forall \mathbf{x} \in [0,1]^n,\ a \in A.$$
The above formulation of the ALP builds upon our observation that linear models in combination with factored transitions are well-behaved when integrated over
the $[0,1]^n$ state space
(or any bounded space) and nicely decompose along state-variable subsets defining feature
functions similarly to Equation 3. This simplification is a consequence of the following
variable elimination transformation:
$$\int_{[0,1]^n} \sum_i w_i f_i(\mathbf{x}_i)\, d\mathbf{x} = \sum_i w_i \int_{[0,1]^{|\mathbf{x}_i|}} f_i(\mathbf{x}_i)\, d\mathbf{x}_i.$$
Despite the decomposition, the ALP formulation of the factored CMDP comes with two
concerns. First, the integrals may be improper and not computable. Second, we need to
satisfy an infinite number of constraints (for all values of $\mathbf{x}$ and $a$). In the following we give solutions to both issues.
Closed form solutions Integrals in the objective function and constraints depend on the
choice of transition models and basis functions. We want all these integrals to be proper
Riemannian integrals. We prefer integrals with closed-form expressions. To this point, we
have identified conjugate classes of transition models and basis functions leading to closed
form expressions.
Beta transitions. To parameterize the transition model over $[0,1]$ we propose to use beta densities or their mixtures. The beta transition is defined as:

$$p(x'_j \,|\, \mathbf{x}_{pa(j)}, a) = \mathrm{Beta}\big(x'_j;\ \alpha_j(a, \mathbf{x}_{pa(j)}),\ \beta_j(a, \mathbf{x}_{pa(j)})\big) = \frac{\Gamma(\alpha_j+\beta_j)}{\Gamma(\alpha_j)\Gamma(\beta_j)}\, (x'_j)^{\alpha_j - 1} (1 - x'_j)^{\beta_j - 1},$$

where $\mathbf{x}_{pa(j)}$ is the parent set of a variable $x'_j$ under action $a$, and $\alpha_j > 0$, $\beta_j > 0$ define the parameters of the beta model.
Feature functions. A feature function form that is particularly suitable for the ALP and
matches beta transitions is a product of power functions:
$$f_i(\mathbf{x}_i) = \prod_{j \in \mathbf{x}_i} x_j^{m_{ij}}.$$
It is easy to show that for such a case the integrals in the objective function simplify to:
$$\int f_i(\mathbf{x}_i)\, d\mathbf{x}_i = \prod_{j \in \mathbf{x}_i} \int_0^1 x_j^{m_{ij}}\, dx_j = \prod_{j \in \mathbf{x}_i} \frac{1}{m_{ij}+1}.$$
Similarly, using our conjugate transition and basis models the integrals in constraints simplify to:
$$\int_0^1 \mathrm{Beta}\big(x'_j;\ \alpha_j, \beta_j\big)\, (x'_j)^{m_{ij}}\, dx'_j = \frac{\Gamma(\alpha_j+\beta_j)\,\Gamma(\alpha_j+m_{ij})}{\Gamma(\alpha_j)\,\Gamma(\alpha_j+\beta_j+m_{ij})},$$
where $\Gamma(\cdot)$ is the gamma function. For example, assuming features with products of state variables, $f_i(\mathbf{x}_i) = \prod_{j \in \mathbf{x}_i} x_j^{m_{ij}}$, the ALP formulation becomes:

$$\min_w \sum_i w_i \prod_{j \in \mathbf{x}_i} \frac{1}{m_{ij}+1} \qquad (5)$$
subject to:
$$\sum_i w_i \Big[ \prod_{j \in \mathbf{x}_i} x_j^{m_{ij}} - \gamma \prod_{j \in \mathbf{x}_i} \frac{\Gamma(\alpha_j+\beta_j)\,\Gamma(\alpha_j+m_{ij})}{\Gamma(\alpha_j)\,\Gamma(\alpha_j+\beta_j+m_{ij})} \Big] \ge \sum_j r_j(\mathbf{x}_j, a) \quad \forall \mathbf{x} \in [0,1]^n,\ a \in A,$$

where the beta parameters $\alpha_j, \beta_j$ are functions of the action $a$ and the parent variables.
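A small sketch (ours, with made-up parameters) of the closed-form moment used in the constraints of Equation 5: $E[(x')^m]$ under a $\mathrm{Beta}(\alpha,\beta)$ density computed via log-gamma functions, checked against numerical integration.

```python
import numpy as np
from scipy.special import gammaln
from scipy.integrate import quad
from scipy.stats import beta as beta_dist

def beta_power_moment(alpha, beta, m):
    """E[(x')^m] for x' ~ Beta(alpha, beta):
    Gamma(a+b)Gamma(a+m) / (Gamma(a)Gamma(a+b+m)), computed in log space."""
    return np.exp(gammaln(alpha + beta) + gammaln(alpha + m)
                  - gammaln(alpha) - gammaln(alpha + beta + m))

a, b, m = 2.5, 4.0, 3           # illustrative parameters
closed = beta_power_moment(a, b, m)
numeric, _ = quad(lambda x: beta_dist.pdf(x, a, b) * x**m, 0.0, 1.0)
print(closed, numeric)           # the two values should agree
```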
ALP solution. Although the ALP uses infinitely many constraints, only a finite subset of
constraints, active constraints, is necessary to define the optimal solution. Existing ALP
methods for factored finite-state MDPs search for this subset more efficiently by taking
advantage of local constraint decompositions and various heuristics. However, in the end these methods always rely on the fact that the decompositions are defined on a finite state subspace. Unfortunately, constraints in our model decompose over smaller but still continuous subspaces, so the existing solutions for finite-state MDPs cannot be applied directly.
Sampling constraints. To avoid the problem of continuous state spaces we approximate
the ALP solution using a finite set of constraints defined by a finite set of state space points
and actions in $A$. These state space points can be defined by regular grids on state subspaces or via random sampling of states $\mathbf{x} \in [0,1]^n$. In this work we focus on and experiment
with the random sampling approach.

[Figure 1: (a) Topologies of the computer networks used in the experiments. (b) Transition densities $p(x'_j \,|\, \{x_j, x_{j-1}\}, a)$ for the $j$th computer under different previous-state/action combinations.]

For the finite state spaces such a technique has been
devised and analyzed by de Farias and Van Roy [5]. We note that the blind sampling approach can be improved via various heuristics.4 However, despite many possible heuristic
improvements, we believe that the crucial benefit comes from the ALP formulation that
?fits? the linear model and subsequent constraint and subspace decompositions.
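A minimal sketch (our illustration; the ring network, features, and beta parameters are invented) of the sampled-constraint ALP of Equation 5: sample state-action pairs, build one linear constraint per sample using the closed-form beta moments, and solve the resulting finite LP.

```python
import numpy as np
from scipy.special import gammaln
from scipy.optimize import linprog

n, gamma, n_samples = 8, 0.95, 500
rng = np.random.default_rng(1)

# Features: one constant, one linear term per node, one quadratic term per ring link.
# Each feature is a list of (variable, power) pairs; all choices here are illustrative.
features = [[]] + [[(j, 1)] for j in range(n)] + [[(j, 1), ((j + 1) % n, 1)] for j in range(n)]

def beta_params(x, a):
    """Hypothetical attend/no-attend beta transition parameters for each node."""
    pa = x[(np.arange(n) - 1) % n]                  # ring parent
    alpha = 1.0 + 2.0 * x + 2.0 * pa + 3.0 * (a == np.arange(n))
    beta = 2.0 - x
    return alpha, beta

def moment(alpha, beta, m):                          # E[(x')^m] under Beta(alpha, beta)
    return np.exp(gammaln(alpha + beta) + gammaln(alpha + m)
                  - gammaln(alpha) - gammaln(alpha + beta + m))

def reward(x):
    return 2.0 * x[0] + x[1:].sum()                  # node 0 plays the server

# Objective: integral of each feature over [0,1]^n is prod_j 1/(m_j + 1).
c = np.array([np.prod([1.0 / (m + 1) for _, m in f]) for f in features])

A_ub, b_ub = [], []
for _ in range(n_samples):
    x = rng.uniform(size=n)
    a = rng.integers(n + 1)                          # action index n = "do nothing"
    al, be = beta_params(x, a)
    row = np.empty(len(features))
    for i, f in enumerate(features):
        fx = np.prod([x[j] ** m for j, m in f])
        ef = np.prod([moment(al[j], be[j], m) for j, m in f])
        row[i] = fx - gamma * ef
    A_ub.append(-row)                                # row . w >= R  ->  -row . w <= -R
    b_ub.append(-reward(x))

# Box bounds keep the randomly sampled LP bounded.
res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(-1e3, 1e3)] * len(features))
print("fitted weights:", np.round(res.x, 2))
```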
4
Experiments
To test the ALP method we use a continuous-state modification of the computer network
example proposed by Guestrin et al [8]. Figure 1a illustrates three different network structures used in experiments. Nodes in graphs represent computers. The state of a machine
is represented by a number between 0 and 1 reflecting its processing capacity (the ability to process tasks). The network performance can be controlled through activities of a
human operator: the operator can attend a machine (one at time) or do nothing. Thus,
there is a total of $n+1$ actions where $n$ is the number of computers in the network. The
processing capacity of a machine fluctuates randomly and is determined by: (1) a random event (e.g., a software bug), (2) machines connected to it and (3) the presence of
the operator at the machine console. The transition model represents the dynamics of
the computer network. The model is factorized and defined in terms of beta densities:
$p(x'_j \,|\, \mathbf{x}_{pa(j)}, a) = \mathrm{Beta}\big(x'_j;\ \alpha_j(a, \mathbf{x}_{pa(j)}),\ \beta_j(a, \mathbf{x}_{pa(j)})\big)$, where $x'_j$ is the current state of the $j$th computer, and $\mathbf{x}_{pa(j)}$ describes the previous-step state of the computers affecting $j$. We use:
beta parameters $\alpha_j(a, \mathbf{x}_{pa(j)})$ and $\beta_j(a, \mathbf{x}_{pa(j)})$ that depend on the previous states of the parent computers for transitions when the human does not attend the computer, and a different (more favorable) setting of $\alpha_j$ and $\beta_j$
when the operator is present at the computer. Figure 1b illustrates transition densities for
the $j$th computer given different values of its parents $\{x_j, x_{j-1}\}$
and actions. The goal is to
maintain the processing ability of the network at the highest possible level over time. The
preferences are expressed in the reward function $R(\mathbf{x}, a) = 2x_1 + \sum_{j=2}^{n} x_j$, where $x_1$ is the server. The discount factor $\gamma$ is 0.95.
To define the ALP approximation, we used a linear combination of linear (for every node)
and quadratic (for every link) feature functions. To demonstrate the practical benefit of
the approach we have compared it to the grid-based approximation (Equation 4) and least-squares value iteration (with the same linear value function model as in the ALP).
The constraints in the ALP were sampled randomly. To make the comparison fair the same
sets of samples were shared by all three methods. The full comparison study was run on
⁴Various constraint sampling heuristics are analyzed and reported in a separate work [10].
[Figure 2: (a) Average values of control policies for the ALP, least-squares (LS), and grid (GMDP) approaches for different sample sizes on the 24-ring, 25-star, and 24-ring-of-rings networks; a random policy is used as a baseline. (b) Average running times in seconds versus the number of samples.]
problems with three network structures from Figure 1a, each with 24 or 25 nodes. Figure
2a illustrates the average quality (value) of a policy obtained by different approximation
methods while varying the number of samples. The average is computed over 30 solutions
obtained for 30 different sample sets and 100 different (random) start states. The simulation
trajectories of length 50 are used. Figure 2b illustrates the scale-up potential of the methods
in terms of running times. Results are averaged over 30 solutions.
Overall, the results of experiments clearly demonstrate the benefit of the ALP with ?local? feature functions. For the sample size range tested, our ALP method came close to
the least-squares (LS) approach in terms of the quality. Both used the same value function model and both managed to fit well the parameters, hence we got comparable quality
results. However, the ALP was much better in terms of running time. Oscillations and
poor convergence behavior of the iterative LS method is responsible for the difference. The
ALP outperformed the grid-based approach (GMDP) in both the policy quality and running
times. The gap in the policy quality was more pronounced for smaller sample sizes. This
can be explained by the ability of the model to ?cover? complete state space as opposed to
individual grid points. Better running times for the ALP can be explained by the fact that
the number of free variables to be optimized is fixed (they equal the weights $w$), while
in grid methods free variables correspond to grid samples and their number grows linearly.
5
Conclusions
We have extended the application of linear program approximation methods and their benefits to factored MDPs with continuous states. 5 We have proposed a factored transition
model based on beta densities and identified feature functions that match well such a model.
Our ALP solution offers numerous advantages over standard grid and function approximation approaches: (1) it takes advantage of the structure of the process; (2) it allows one to
define non-linear value function models and avoids the instabilities associated with leastsquared approximations; (3) it gives a more robust solution for small sample sizes when
5
We note that our CMDP solution paves the road to ALP solutions for factored hybrid state MDPs.
compared to grid methods and provides a better way of ?smoothing? value function to unseen examples; (4) its running time scales up better than grid methods. These has been
demonstrated experimentally on three large problems.
Many interesting issues related to the new method remain to be addressed. First, the random
sampling of constraints can be improved using various heuristics. We report results of
some heuristic solutions in a separate work [10]. Second, we did not give any complexity
bounds for the random constraint sampling approach. However, we expect that the proofs
by de Farias and Van Roy [5] can be adapted to cover the CMDP case. Finally, our ALP
method assumes a bounded subspace of $\mathrm{IR}^n$. The important open question is how to extend the ALP method to unbounded $\mathrm{IR}^n$ spaces.
References
[1] D.P. Bertsekas. A counter-example to temporal differences learning. Neural Computation, 7:270-279, 1994.
[2] D.P. Bertsekas. Dynamic Programming and Optimal Control. Athena Scientific, 1995.
[3] D.P. Bertsekas and J.N. Tsitsiklis. Neuro-dynamic Programming. Athena Scientific, 1996.
[4] C.S. Chow and J.N. Tsitsiklis. An optimal one-way multigrid algorithm for discrete-time stochastic control. IEEE Transactions on Automatic Control, 36:898-914, 1991.
[5] D.P. de Farias and B. Van Roy. On constraint sampling for the linear programming approach to approximate dynamic programming. Mathematics of Operations Research, submitted, 2001.
[6] D.P. de Farias and B. Van Roy. The linear programming approach to approximate dynamic programming. Operations Research, 51:6, 2003.
[7] T. Dean and K. Kanazawa. A model for reasoning about persistence and causation. Computational Intelligence, 5:142-150, 1989.
[8] C. Guestrin, D. Koller, and R. Parr. Max-norm projections for factored MDPs. In Proceedings of the Seventeenth International Joint Conference on Artificial Intelligence, pages 673-682, 2001.
[9] D. Koller and R. Parr. Computing factored value functions for policies in structured MDPs. In Proceedings of the 16th International Joint Conference on Artificial Intelligence, pages 1332-1339, 1999.
[10] B. Kveton and M. Hauskrecht. Heuristic refinements of approximate linear programming for factored continuous-state Markov decision processes. In 14th International Conference on Automated Planning and Scheduling, to appear, 2004.
[11] P. Poupart, C. Boutilier, R. Patrascu, and D. Schuurmans. Piecewise linear value function approximation for factored MDPs. In Proceedings of the Eighteenth National Conference on AI, pages 292-299, 2002.
[12] M.L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley, New York, 1994.
[13] B. Van Roy. Learning and value function approximation in complex decision problems. PhD thesis, Massachusetts Institute of Technology, 1998.
[14] J. Rust. Using randomization to break the curse of dimensionality. Econometrica, 65:487-516, 1997.
[15] D. Schuurmans and R. Patrascu. Direct value-approximation for factored MDPs. In Advances in Neural Information Processing Systems 14, MIT Press, 2002.
[16] R.S. Sutton and A.G. Barto. Reinforcement Learning: An Introduction. MIT Press, 1998.
[17] M. Trick and E.S. Zin. A linear programming approach to solving stochastic dynamic programs. Technical report, 1993.
Linear Dependent Dimensionality Reduction
Nathan Srebro
Tommi Jaakkola
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
Cambridge, MA 02139
[email protected],[email protected]
Abstract
We formulate linear dimensionality reduction as a semi-parametric estimation problem, enabling us to study its asymptotic behavior. We generalize the problem beyond additive Gaussian noise to (unknown) non-Gaussian additive noise, and to unbiased non-additive models.
1
Introduction
Factor models are often natural in the analysis of multi-dimensional data. The underlying premise of such models is that the important aspects of the data can be captured via a
low-dimensional representation (?factor space?). The low-dimensional representation may
be useful for lossy compression as in typical applications of PCA, for signal reconstruction as in factor analysis or non-negative matrix factorization [1], for understanding the
signal structure [2], or for prediction as in applying SVD for collaborative filtering [3]. In
many situations, including collaborative filtering and structure exploration, the ?important?
aspects of the data are the dependencies between different attributes. For example, in collaborative filtering we rely on a representation that summarizes the dependencies among
user preferences. More generally, we seek to identify a low-dimensional space that captures
the dependent aspects of the data, and separate them from independent variations. Our goal
is to relax restrictions on the form of each of these components, such as Gaussianity, additivity and linearity, while maintaining a principled rigorous framework that allows analysis
of the methods.
We begin by studying the probabilistic formulations of the problem, focusing on the assumptions that are made about the dependent, low-rank ?signal? and independent ?noise?
distributions. We consider a general semi-parametric formulation that emphasizes what is
being estimated and allows us to discuss asymptotic behavior (Section 2). We then study
the standard (PCA) approach, show that it is appropriate for additive i.i.d. noise (Section 3),
and present a generic estimator that is appropriate also for unbiased non-additive models
(Section 4). In Section 5 we confront the non-Gaussianity directly, develop maximumlikelihood estimators in the presence of Gaussian mixture additive noise, and show that the
consistency of such maximum-likelihood estimators should not be taken for granted.
2
Dependent Dimensionality Reduction
Our starting point is the problem of identifying linear dependencies in the presence of independent identically distributed Gaussian noise. In this formulation, we observe a data
matrix $Y \in \mathbb{R}^{n \times d}$ which we assume was generated as $Y = X + Z$, where the dependent, low-dimensional component $X \in \mathbb{R}^{n \times d}$ (the "signal") is a matrix of rank $k$ and the independent component $Z$ (the "noise") is i.i.d. zero-mean Gaussian with variance $\sigma^2$. We can write down the log-likelihood of $X$ as $-\frac{1}{2\sigma^2}|Y - X|^2_{\mathrm{Fro}} + \text{Const}$ (where $|\cdot|_{\mathrm{Fro}}$ is the Frobenius, or sum-squared, norm) and conclude that, regardless of the variance $\sigma^2$, the maximum-
Although the above formulation is perfectly valid, there is something displeasing about
it. We view the entire matrix X as parameters, and estimate them according to a single
observation Y . The number of parameters is linear in the data, and even with more data,
we cannot hope to estimate the parameters (entries in X ) beyond a fixed precision. What we
can estimate with more data rows is the rank-k row-space of X . Consider the factorization
$X = UV'$, where $V' \in \mathbb{R}^{k \times d}$ spans this "signal space". The dependencies of each row $y$ of $Y$ are captured by a row $u$ of $U$, which, through the parameters $V$ and $\sigma$, specifies how each entry $y_i$ is generated independently given $u$.²

[Figure: graphical model in which the latent row $u$ generates the entries $y_1, y_2, \ldots, y_d$ independently.]
A standard parametric analysis of the model would view u as a random vector (rather
than parameters) and impose some, possibly parametric, distribution over it (interestingly,
if u is Gaussian, the maximum-likelihood reconstruction is the same Frobenius low-rank
approximation [4]). However, in the analysis we started with, we did not make any assumptions about the distribution of u, beyond its dimensionality. The model class is then
non-parametric, yet we still desire, and are able, to estimate a parametric aspect of the
model: The estimator can be seen as a ML estimator for the signal subspace, where the
distribution over u is unconstrained nuisance.
Although we did not impose any form on the distribution u, we did impose a strict form
on the conditional distributions $y_i|u$: we required them to be Gaussian with fixed variance $\sigma^2$ and mean $uV_i'$. We would like to relax these requirements, and require only that $y|u$ be
a product distribution, i.e. that its coordinates yi |u be (conditionally) independent. Since
u is continuous, we cannot expect to forego all restrictions on yi |ui , but we can expect to
set up a semi-parametric problem in which y|u may lie in an infinite dimensional family
of distributions, and is not strictly parameterized.
Relaxing the Gaussianity leads to linear additive models $y = uV' + z$, with $z$ independent
of u, but not necessarily Gaussian. Further relaxing the additivity is appropriate, e.g., when
the noise has a multiplicative component, or when the features of y are not real numbers.
These types of models, with a known distribution yi |xi , have been suggested for classification using logistic loss [5], when yi |xi forms an exponential family [6], and in a more
abstract framework [7]. Relaxing the linearity assumption x = uV 0 is also appropriate in
many situations. Fitting a non-linear manifold by minimizing the sum-squared distance can
be seen as a ML estimator for $y|u = g(u) + z$, where $z$ is i.i.d. Gaussian and $g: \mathbb{R}^k \rightarrow \mathbb{R}^d$
specifies some smooth manifold. Combining these ideas leads us to discuss the conditional
distributions yi |gi (u), or yi |u directly.
In this paper we take our first steps is studying this problem, and relaxing restrictions on
1
A mean term is also usually allowed. Incorporating a non-zero mean is straight forward, and in
order to simplify derivations, we do not account for it in most of our presentation.
2
We use uppercase letters to denote matrices, and lowercase letters for vectors, and use bold type
to indicate random quantities.
$y|u$. We continue to assume a linear model $x = uV'$ and limit ourselves to additive noise
models and unbiased models in which E [y|x] = x. We study the estimation of the rank-k
signal space in which x resides, based on a sample of n independent observations of y
(forming the rows of Y), where the distribution on u is unconstrained nuisance.
In order to study estimators for a subspace, we must be able to compare two subspaces. A
natural way of doing so is through the canonical angles between them [8]. Define the angle between a vector $v_1$ and a subspace $\mathcal{V}_2$ to be the minimal angle between $v_1$ and any $v_2 \in \mathcal{V}_2$. The largest canonical angle between two subspaces is then the maximal angle between a vector $v_1 \in \mathcal{V}_1$ and the subspace $\mathcal{V}_2$. The second largest angle is the maximum over all vectors orthogonal to $v_1$, and so on. It is convenient to think of a subspace in terms of the matrix whose columns span it. Computationally, if the columns of $V_1$ and $V_2$ form orthonormal bases of $\mathcal{V}_1$ and $\mathcal{V}_2$, then the cosines of the canonical angles between $\mathcal{V}_1$ and $\mathcal{V}_2$ are given by the singular values of $V_1'V_2$. Throughout the presentation, we will slightly overload notation and use a matrix to denote also its column subspace. In particular, we will denote by $V_0$ the true signal subspace, i.e. such that $x = uV_0'$.
3
The L2 Estimator
We first consider the "standard" approach to low-rank approximation: minimizing the sum
squared error.3 This is the ML estimator when the noise is i.i.d. Gaussian. But the L2
estimator is appropriate also in a more general setting. We will show that the L2 estimator
is consistent for any i.i.d. additive noise with finite variance (as we will see later on, this is
more than can be said for some ML estimators).
The L2 estimator of the signal subspace is the subspace spanned by the leading eigenvectors of the empirical covariance matrix $\hat\Sigma_n$ of $y$, which is a consistent estimator of the true covariance matrix $\Sigma_Y$, which in turn is the sum of the covariance matrices of $x$ and $z$, where $\Sigma_X$ is of rank exactly⁴ $k$, and if $z$ is i.i.d., $\Sigma_Z = \sigma^2 I$.

Let $s_1 \ge s_2 \ge \cdots \ge s_k > 0$ be the non-zero eigenvalues of $\Sigma_X$. Since $z$ has variance exactly $\sigma^2$ in any direction, the principal directions of variation are not affected by it, and the eigenvalues of $\Sigma_Y$ are exactly $s_1 + \sigma^2, \ldots, s_k + \sigma^2, \sigma^2, \ldots, \sigma^2$, with the leading $k$ eigenvectors being the eigenvectors of $\Sigma_X$. This ensures an eigenvalue gap of $s_k > 0$ between the invariant subspace of $\Sigma_Y$ spanned by the eigenvectors of $\Sigma_X$ and its complement, and we can bound the norm of the canonical sines between $V_0$ and the leading $k$ eigenvectors of $\hat\Sigma_n$ by $|\hat\Sigma_n - \Sigma_Y|/s_k$ [8]. Since $|\hat\Sigma_n - \Sigma_Y| \rightarrow 0$ a.s., we conclude that the estimator is consistent.
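A minimal sketch (ours) of the L2 estimator and the canonical-angle error measure used throughout: estimate the signal subspace by the top-$k$ eigenvectors of the empirical covariance, and read the sines of the canonical angles to the true $V_0$ off the singular values of the orthonormalized bases' product. All sizes are illustrative.

```python
import numpy as np

def canonical_sines(V1, V2):
    """Sines of canonical angles between the column spaces of V1 and V2."""
    Q1, _ = np.linalg.qr(V1)
    Q2, _ = np.linalg.qr(V2)
    cosines = np.clip(np.linalg.svd(Q1.T @ Q2, compute_uv=False), 0.0, 1.0)
    return np.sqrt(1.0 - cosines**2)

rng = np.random.default_rng(0)
n, d, k, sigma = 1000, 10, 2, 0.5
V0 = np.linalg.qr(rng.standard_normal((d, k)))[0]      # true signal subspace
U = rng.standard_normal((n, k))
Y = U @ V0.T + sigma * rng.standard_normal((n, d))     # Y = X + Z

Sigma_hat = np.cov(Y, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(Sigma_hat)
V_hat = eigvecs[:, -k:]                                # top-k eigenvectors (L2 estimator)
print("sine norms:", canonical_sines(V0, V_hat))
```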
4
The Variance-Ignoring Estimator
We turn to additive noise with independent, but not identically distributed, coordinates. If
the noise variances are known, the ML estimator corresponds to minimizing the column-weighted (inversely proportional to the variances) Frobenius norm of $Y - X$, and can be
calculated from the leading eigenvectors of a scaled empirical covariance matrix [9]. If the
variances are not known, e.g. when the scale of different coordinates is not known, there is
no ML estimator: at least k coordinates of each y can always be exactly matched, and so
the likelihood is unbounded when up to k variances approach zero.
³We call this an L2 estimator not because it minimizes the matrix $L_2$-norm $|Y - X|_2$, which it does, but because it minimizes the vector $L_2$-norms $|y - x|_2^2$.
⁴We should also be careful about signals that occupy only a proper subspace of $V_0$, and be satisfied
with any rank-k subspace containing the support of x, but for simplicity of presentation we assume
this does not happen and x is of full rank k.
[Figure 1: Norm of sines of canonical angles to the correct subspace. (a) Random rank-2 subspaces in $\mathbb{R}^{10}$; Gaussian noise of different scales in different coordinates, between 0.17 and 1.7 of the signal strength. (b) Random rank-2 subspaces in $\mathbb{R}^{10}$, 500 sample rows, and Gaussian noise with varying distortion (mean over 200 simulations; bars are one standard deviation tall). (c) Observations are exponentially distributed with means in a rank-2 subspace, $(11\ 10\ 11\ 10\ 11\ 10\ 11\ 10\ 11\ 10)'$.]
The L2 estimator is not satisfactory in this scenario. The covariance matrix ?Z is still diagonal, but is no longer a scaled identity. The additional variance introduced by the noise is
different in different directions, and these differences may overwhelm the ?signal? variance
along V0 , biasing the leading eigenvectors of ?Y , and thus the limit of the L2 estimator,
toward axes with high ?noise? variance. The fact that this variability is independent of the
variability in other coordinates is ignored, and the L2 estimator is asymptotically biased.
Instead of recovering the directions of greatest variability, we recover the covariance structure directly. In the limit, $\hat\Sigma_n \rightarrow \Sigma_Y = \Sigma_X + \Sigma_Z$, a sum of a rank-$k$ matrix and a diagonal matrix. In particular, the non-diagonal entries of $\hat\Sigma_n$ approach those of $\Sigma_X$. We can thus seek a rank-$k$ matrix $\hat\Sigma_X$ approximating $\hat\Sigma_n$, e.g. in a sum-squared sense, except on the diagonal. This is a (zero-one) weighted low-rank approximation problem. We optimize $\hat\Sigma_X$ by iteratively seeking a rank-$k$ approximation of $\hat\Sigma_n$ with diagonal entries filled in from the last iterate of $\hat\Sigma_X$ (this can be viewed as an EM procedure [5]). The row-space of the resulting $\hat\Sigma_X$ is then an estimator for the signal subspace. Note that the L2 estimator is the row-space of the rank-$k$ matrix minimizing the unweighted sum-squared distance to $\hat\Sigma_n$.
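A minimal sketch (our rendering of the diagonal-filling iteration just described) of the variance-ignoring estimator: alternate between filling in the covariance diagonal from the previous low-rank iterate and taking a rank-$k$ eigendecomposition. Initialization and iteration counts are arbitrary choices.

```python
import numpy as np

def variance_ignoring_subspace(Y, k, iters=100):
    """Estimate the signal subspace by fitting the off-diagonal of the covariance
    with a rank-k matrix (zero-one weighted low-rank approximation via EM)."""
    S = np.cov(Y, rowvar=False)
    Sx = S.copy()                                # current rank-k surrogate
    for _ in range(iters):
        T = S.copy()
        np.fill_diagonal(T, np.diag(Sx))         # fill diagonal from last iterate
        vals, vecs = np.linalg.eigh(T)
        V = vecs[:, -k:]
        Sx = V @ np.diag(vals[-k:]) @ V.T        # best rank-k approximation of T
    return V

rng = np.random.default_rng(1)
n, d, k = 2000, 10, 2
V0 = np.linalg.qr(rng.standard_normal((d, k)))[0]
X = rng.standard_normal((n, k)) @ V0.T
Z = rng.standard_normal((n, d)) * rng.uniform(0.2, 2.0, size=d)  # non-identical noise
V_hat = variance_ignoring_subspace(X + Z, k)
print("estimated subspace shape:", V_hat.shape)
```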
Figures 1(a,b) demonstrate this variance-ignoring estimator on simulated data with nonidentical Gaussian noise. The estimator reconstructs the signal-space almost as well as the
ML estimator, even though it does not have access to the true noise variance.
Discussing consistency in the presence of non-identical noise with unknown variances is
problematic, since the signal subspace is not necessarily identifiable. For example, the
combined covariance
matrix ?Y = ( 21 12 ) can arise from a rank-one signal covariance
a 1
?X = 1 1/a for any 12 ? a ? 2, each corresponding to a different signal subspace.
Counting
the number of parameters and constraints suggests identifiability when k < d ?
?
8d+1?1
, but this is by no means a precise guarantee. Anderson and Rubin [10] present
2
several conditions on ?X which are sufficient for identifiability but require k < d2 , and
other weaker conditions which are necessary.
Non-Additive Noise The above estimation method is also useful in a less straightforward situation. Until now we have considered only additive noise, in which the distribution of $y_i - x_i$ was independent of $x_i$. We will now relax this restriction and allow more general conditional distributions $y_i|x_i$, requiring only that $E[y_i|x_i] = x_i$. With this requirement, together with the structural constraint ($y_i$ independent given $x$), for any $i \ne j$:

$$\mathrm{Cov}[y_i, y_j] = E[y_i y_j] - E[y_i]E[y_j] = E[E[y_i y_j|x]] - E[E[y_i|x]]\,E[E[y_j|x]]$$
$$= E[E[y_i|x]\,E[y_j|x]] - E[x_i]E[x_j] = E[x_i x_j] - E[x_i]E[x_j] = \mathrm{Cov}[x_i, x_j].$$
As in the non-identical additive noise case, $\Sigma_Y$ agrees with $\Sigma_X$ except on the diagonal. Even if $y_i|x_i$ is identically conditionally distributed for all $i$, the difference $\Sigma_Y - \Sigma_X$ is not in general a scaled identity:
$$\mathrm{Var}[y_i] = E\big[E[y_i^2|x_i] - E[y_i|x_i]^2\big] + E\big[E[y_i|x_i]^2\big] - E[y_i]^2 = E[\mathrm{Var}[y_i|x_i]] + \mathrm{Var}[x_i].$$
Unlike the additive noise case, the variance of $y_i|x_i$ depends on $x_i$, and so its expectation depends on the distribution of $x_i$.
These observations suggest using the variance-ignoring estimator. Figure 1(c) demonstrates
how such an estimator succeeds in reconstruction when yi |xi is exponentially distributed
with mean xi , even though the standard L2 estimator is not applicable. We cannot guarantee consistency because
the decomposition of the covariance matrix might not be unique,
but when $k < \frac{d}{2}$ this is not likely to happen. Note that if the conditional distribution
y|x is known, even if the decomposition is not unique, the correct signal covariance might
be identifiable based on the relationship between the signal marginals and the expected
conditional variance of $y|x$, but this is not captured by the variance-ignoring estimator.
5
Low Rank Approximation with a Gaussian Mixture Noise Model
We return to additive noise, but seeking better estimation with limited data, we confront
non-Gaussian noise distributions directly: we would like to find the maximum-likelihood
$X$ when $Y = X + Z$, and $Z_{ij}$ are distributed according to a Gaussian mixture: $p_Z(z_{ij}) = \sum_{c=1}^m p_c\, (2\pi\sigma_c^2)^{-1/2} \exp\!\big(-(z_{ij}-\mu_c)^2/(2\sigma_c^2)\big)$.
To do so, we introduce latent variables Cij specifying the mixture component of the noise
at Yij , and solve the problem using EM. In the Expectation step, we compute the posterior
probabilities Pr (Cij |Yij ; X ) based on the current low-rank parameter matrix X . In the
Maximization step we need to find the low-rank matrix X that maximizes the posterior
expected log-likelihood:
$$E_{C|Y}\big[\log \Pr(Y = X + Z \,|\, C; X)\big] = -\sum_{ij}\sum_c \frac{\Pr(C_{ij}=c\,|\,Y_{ij})}{2\sigma_c^2}\,\big(X_{ij} - (Y_{ij} - \mu_c)\big)^2 + \text{Const}$$
$$= -\tfrac{1}{2}\sum_{ij} W_{ij}\,(X_{ij} - A_{ij})^2 + \text{Const} \qquad (1)$$

where
$$W_{ij} = \sum_c \frac{\Pr(C_{ij}=c\,|\,Y_{ij})}{\sigma_c^2}, \qquad A_{ij} = Y_{ij} - \frac{1}{W_{ij}}\sum_c \frac{\Pr(C_{ij}=c\,|\,Y_{ij})\,\mu_c}{\sigma_c^2}.$$
This is a weighted Frobenius low-rank approximation (WLRA) problem. Equipped with a
WLRA optimization method [5], we can now perform EM iteration in order to find the matrix X maximizing the likelihood of the observed matrix Y . At each M step it is enough to
perform a single WLRA optimization iteration, which is guaranteed to improve the WLRA
objective, and so also the likelihood. The method can be augmented to handle an unknown
Gaussian mixture, by introducing an optimization of the mixture parameters at each M
iteration.
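A minimal sketch (our illustration, not the authors' implementation) of the E-step just described: compute the mixture-component posteriors for each residual entry and assemble the WLRA weights $W$ and targets $A$ of Equation 1; the M-step itself would call a weighted low-rank approximation routine, which is omitted here. The two-component noise model is a made-up example.

```python
import numpy as np
from scipy.stats import norm

def e_step(Y, X, p, mu, sigma):
    """Posteriors over noise components for Z = Y - X, and the WLRA weights/targets."""
    Z = Y - X
    # lik[c, i, j] = p_c * N(z_ij; mu_c, sigma_c^2)
    lik = p[:, None, None] * norm.pdf(Z[None, :, :], mu[:, None, None], sigma[:, None, None])
    post = lik / lik.sum(axis=0, keepdims=True)       # Pr(C_ij = c | Y_ij)
    W = (post / sigma[:, None, None] ** 2).sum(axis=0)                              # W_ij
    A = Y - (post * mu[:, None, None] / sigma[:, None, None] ** 2).sum(axis=0) / W  # A_ij
    return W, A

rng = np.random.default_rng(2)
Y = rng.standard_normal((6, 4))
X = np.zeros_like(Y)
p = np.array([0.9, 0.1])            # illustrative "Gaussian with outliers" noise model
mu = np.array([0.0, 0.0])
sigma = np.array([1.0, 5.0])
W, A = e_step(Y, X, p, mu, sigma)
print(W.shape, A.shape)
```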
Experiments with GSMs We report here initial experiments with ML estimation using
bounded Gaussian scale mixtures [11], i.e. a mixture of Gaussians with zero mean, and
variance bounded from below. Gaussian scale mixtures (GSMs) are a rich class of symmetric distributions, which include non-log-concave and heavy-tailed distributions. We
investigated two noise distributions: a ?Gaussian with outliers? distribution formed as a
mixture of two zero-mean Gaussians with widely varying variances; and a Laplace distribution $p(z) \propto e^{-|z|}$, which is an infinite scale mixture of Gaussians. Figures 2(a,b)
show the quality of reconstruction of the L2 estimator and the ML bounded GSM estimator, for these two noise distributions, for a fixed sample size of 300 rows, under varying
signal strengths. We allowed ten Gaussian components, and did not observe any significant change in the estimator when the number of components increases.

[Figure 2: Norm of sines of canonical angles to the correct subspace. (a) Random rank-3 subspace in $\mathbb{R}^{10}$ with Laplace noise; inset: sine norm of the ML estimate plotted against the sine norm of the L2 estimate. (b) Random rank-2 subspace in $\mathbb{R}^{10}$ with $0.99N(0,1)+0.01N(0,100)$ noise. (c) $\mathrm{span}(2,1,1)' \subset \mathbb{R}^3$ with $0.9N(0,1)+0.1N(0,25)$ noise; the ML estimator converges to $(2.34,1,1)$. Bars are one standard deviation tall.]
The ML estimator is overall more accurate than the L2 estimator: it succeeds in reliably
reconstructing the low-rank signal for signals which are approximately three times weaker
than those necessary for reliable reconstruction using the L2 estimator. The improvement
in performance is not as dramatic, but still noticeable, for Laplace noise.
Comparison with Newton?s Methods Confronted with a general additive noise distribution, the approach presented here would be to rewrite, or approximate, it as a Gaussian
mixture and use WLRA in order to learn X using EM. A different approach is to considering the second order Taylor expansions of the log-likelihood, with respect to the entries of
X , and iteratively maximize them using WLRA [5, 7]. Such an approach requires calculating the first and second derivatives of the density. If the density is not specified analytically,
or is unknown, these quantities need to be estimated. But beyond these issues, which can be
overcome, lies the major problem of Newton?s method: the noise density must be strictly
log-concave and differentiable. If the distribution is not log-concave, the quadratic expansion of the log-likelihood will be unbounded and will not admit an optimum. Attempting
to ignore this fact, and for example ?optimizing? U given V using the equations derived
for non-negative weights would actually drive us towards a saddle-point rather then a local
optimum. The non-concavity does not only mean that we are not guaranteed a global optimum (which we are not guaranteed in any case, due to the non-convexity of the low-rank
requirement)? it does not yield even local improvements. On the other hand, approximating the distribution as a Gaussians mixture and using the EM method, might still get stuck
in local minima, but is at least guaranteed local improvement.
Limiting ourselves to only log-concave distributions is a rather strong limitation, as it
precludes, for example, all heavy-tailed distributions. Consider even the ?balanced tail?
Laplace distribution $p(z) \propto e^{-|z|}$. Since the log-density is piecewise linear, a quadratic
approximation of it is a line, which of course does not attain a minimum value.
Consistency Despite the gains in reconstruction presented above, the ML estimator may
suffer from an asymptotic bias, making it inferior to the L2 estimator on large samples. We
study the asymptotic limit of the ML estimator, for a known product distribution p. We first
establish a necessary and sufficient condition for consistency of the estimator.
The ML estimator is the minimizer of the empirical mean of the random function $\Phi(V) = \min_u(-\log p(y - uV'))$. When the number of samples increases, the empirical means converge to the true means, and if $E[\Phi(V_1)] < E[\Phi(V_2)]$, then with probability approaching one $V_2$ will not minimize $\hat E[\Phi(V)]$. For the ML estimator to be consistent, $E[\Phi(V)]$ must be minimized by $V_0$, establishing a necessary condition for consistency.

The sufficiency of this condition rests on the uniform convergence of $\{\hat E[\Phi(V)]\}$, which does not generally exist, or at least on uniform divergence from $E[\Phi(V_0)]$. It should be
noted that the issue here is whether the ML estimator at all converges, since if it does converge, it must converge to the minimizer of $E[\Phi(V)]$. Such convergence can be demonstrated at least in the special case when the marginal noise density $p(z_i)$ is continuous,
strictly positive, and has finite variance and differential entropy. Under these conditions,
the ML estimator is consistent if and only if $V_0$ is the unique minimizer of $E[\Phi(V)]$.
When discussing $E[\Phi(V)]$, the expectation is with respect to the noise distribution and the signal distribution. This is not quite satisfactory, as we would like results which are independent of the signal distribution, beyond the rank of its support. To do so, we must ensure the expectation of $\Phi(V)$ is minimized on $V_0$ for all possible signals (and not only in expectation). Denote the objective $\Phi(y; V) = \min_u(-\log p(y - uV'))$. For any $x \in \mathbb{R}^d$, consider $\Phi(V; x) = E_z[\Phi(x + z; V)]$, where the expectation is only over the additive noise $z$. Under the previous conditions guaranteeing that the ML estimator converges, it is consistent for any signal distribution if and only if, for all $x \in \mathbb{R}^d$, $\Phi(V; x)$ is minimized with respect to $V$ exactly when $x \in \mathrm{span}\, V$.
It will be instructive to first revisit the ML estimator in the presence of i.i.d. Gaussian
noise, i.e. the L2 estimator which we already showed is consistent. We will consider the
decomposition $y = y_\parallel + y_\perp$ of vectors into their projection onto the subspace $V$, and the residual. Any rotation of $p$ is an isotropic Gaussian, and so $z_\perp$ and $z_\parallel$ are independent, and $p(y) = p_\parallel(y_\parallel)\,p_\perp(y_\perp)$. We can now analyze:

$$\Phi(V; y) = \min_u\big(-\log p_\parallel(y_\parallel + uV') - \log p_\perp(y_\perp)\big) = -\log p_\parallel(0) + \frac{1}{\sigma^2}|y_\perp|^2 + \text{Const},$$

yielding $\Phi(V; x) \propto E_{z_\perp}\big[|x_\perp + z_\perp|^2\big] + \text{Const}$, which is minimized when $x_\perp = 0$, i.e. $x$
is spanned by V . We thus re-derived the consistency of the L2 estimator directly, for the
special case in which the noise is indeed Gaussian.
This consistency proof employed a key property of the isotropic Gaussian: rotations of an
isotropic Gaussian random variable remain i.i.d. As this property is unique to Gaussian
random variables, other ML estimators might not be consistent. In fact, we will shortly see
that the ML estimator for a known Laplace noise model is not consistent. To do so, we will
note that a necessary condition for consistency, if the density function p is continuous, is
that $\Phi(V; 0) = E[\Phi(z; V)]$ is constant over all $V$. Otherwise we have $\Phi(V_1; 0) < \Phi(V_2; 0)$ for some $V_1, V_2$, and for small enough $x \in \mathcal{V}_2$, $\Phi(V_1; x) < \Phi(V_2; x)$. A non-constant $\Phi(V; 0)$ indicates an a-priori bias towards certain subspaces.
The negative log-likelihood of a Laplace distribution, $p(z_i) = \frac{1}{2}e^{-|z_i|}$, is essentially the $L_1$ norm. Consider a rank-one approximation in a two-dimensional space with Laplace noise. For any $V = (1, \eta)$, $0 \le \eta \le 1$, and $(z_1, z_2)$, the $L_1$ norm $|z + uV'|_1$ is minimized when $z_1 + u = 0$, yielding $\Phi(V; z) = |z_2 - \eta z_1|$, ignoring a constant term, and
$$\Phi(V; 0) = \iint \tfrac{1}{4} e^{-|z_1|-|z_2|}\, |z_2 - \eta z_1|\, dz_1\, dz_2 = \frac{\eta^2 + \eta + 1}{\eta + 1},$$
which is monotonic increasing in $\eta$ in the valid range $[0, 1]$. In particular, $1 = \Phi((1,0); 0) < \Phi((1,1); 0) = \frac{3}{2}$ and the estimator is
Figure 2(c) demonstrates such an asymptotic bias empirically. Two-component Gaussian
mixture noise was added to rank-one signal in <3 , and the signal subspace was estimated
using an ML estimator with known noise model, and an L2 estimator. For small data sets,
the ML estimator is more accurate, but as the number of samples increase, the error of the
L2 estimator vanishes, while the ML estimator converges to the wrong subspace.
6 Discussion
In many applications few assumptions beyond independence can be made. We formulate the problem of dimensionality reduction as semi-parametric estimation of the low-dimensional signal, or "factor", space, treating the signal distribution as unconstrained nuisance and the noise distribution as constrained nuisance. We present an estimator which is appropriate when the conditional means $E[y|u]$ lie in a low-dimensional linear space, and a maximum-likelihood estimator for additive Gaussian mixture noise.
The variance-ignoring estimator is also applicable when $y$ can be transformed such that $E[g(y)|u]$ lies in a low-rank linear space, e.g. in log-normal models. If the conditional distribution $y|x$ is known, this amounts to an unbiased estimator for $x_i$. When such a transformation is not known, we may wish to consider it as nuisance.
We draw attention to the fact that maximum-likelihood low-rank estimation cannot be taken for granted, and demonstrate that it might not be consistent even for known noise models. The approach employed here can also be used to investigate the consistency of ML estimators with non-additive noise models. Of particular interest are distributions $y_i|x_i$ that form exponential families where the $x_i$ are the natural parameters [6]. When the mean parameters form a low-rank linear subspace, the variance-ignoring estimator is applicable, but when the natural parameters form a linear subspace, the means are in general curved, and there is no unbiased estimator for the natural parameters. Initial investigation reveals that, for example, the ML estimator for a Bernoulli (logistic) conditional distribution is not consistent. The problem of finding a consistent estimator for the linear subspace of natural parameters when $y_i|x_i$ forms an exponential family remains open.
We also leave open the efficiency of the various estimators, as well as the problems of finding asymptotically efficient estimators, and of finding consistent estimators that exhibit the finite-sample gains of the ML estimator for additive Gaussian mixture noise.
References
[1] Daniel D. Lee and H. Sebastian Seung. Learning the parts of objects by non-negative matrix factorization. Nature, 401:788–791, 1999.
[2] Orly Alter, Patrick O. Brown, and David Botstein. Singular value decomposition for genome-wide expression data processing and modeling. PNAS, 97(18):10101–10106, 2000.
[3] Yossi Azar, Amos Fiat, Anna R. Karlin, Frank McSherry, and Jared Saia. Spectral analysis of
data. In 33rd ACM Symposium on Theory of Computing, 2001.
[4] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society, Series B, 21(3):611–622, 1999.
[5] Nathan Srebro and Tommi Jaakkola. Weighted low rank approximation. In 20th International
Conference on Machine Learning, 2003.
[6] M. Collins, S. Dasgupta, and R. E. Schapire. A generalization of principal components analysis
to the exponential family. In Advances in Neural Information Processing Systems 14, 2002.
[7] Geoffrey J. Gordon. Generalized² Linear² Models. In Advances in Neural Information Processing Systems 15, 2003.
[8] G. W. Stewart and Ji-guang Sun. Matrix Perturbation Theory. Academic Press, Inc, 1990.
[9] Michal Irani and P Anandan. Factorization with uncertainty. In 6th European Conference on
Computer Vision, 2000.
[10] T. W. Anderson and Herman Rubin. Statistical inference in factor analysis. In Third Berkeley Symposium on Mathematical Statistics and Probability, volume V, pages 111–150, 1956.
[11] M J Wainwright and E P Simoncelli. Scale mixtures of Gaussians and the statistics of natural
images. In Advances in Neural Information Processing Systems 12, 2000.
An MDP-Based Approach to Online Mechanism Design
David C. Parkes
Division of Engineering and Applied Sciences
Harvard University
[email protected]
Satinder Singh
Computer Science and Engineering
University of Michigan
[email protected]
Abstract
Online mechanism design (MD) considers the problem of providing incentives to implement desired system-wide outcomes in systems with self-interested agents that arrive and depart dynamically. Agents can choose to misrepresent their arrival and departure times, in addition to information about their value for different
outcomes. We consider the problem of maximizing the total longterm value of the system despite the self-interest of agents. The
online MD problem induces a Markov Decision Process (MDP),
which when solved can be used to implement optimal policies in a
truth-revealing Bayesian-Nash equilibrium.
1 Introduction
Mechanism design (MD) is a subfield of economics that seeks to implement particular outcomes in systems of rational agents [1]. Classically, MD considers static
worlds in which a one-time decision is made and all agents are assumed to be patient enough to wait for the decision. By contrast, we consider dynamic worlds in
which agents may arrive and depart over time and in which a sequence of decisions
must be made without the benefit of hindsight about the values of agents yet to
arrive. The MD problem for dynamic systems is termed online mechanism design
[2]. Online MD supposes the existence of a center, that can receive messages from
agents and enforce a particular outcome and collect payments.
Sequential decision tasks introduce new subtleties into the MD problem. First,
decisions now have expected value instead of certain value because of uncertainty
about the future. Second, new temporal strategies are available to an agent, such
as waiting to report its presence to try to improve its utility within the mechanism.
Online mechanisms must bring truthful and immediate revelation of an agent?s value
for sequences of decisions into equilibrium.
Without the problem of private information and incentives, the sequential decision
problem in online MD could be formulated and solved as a Markov Decision Process
(MDP). In fact, we show that an optimal policy and MDP-value function can be
used to define an online mechanism in which truthful and immediate revelation of
an agent?s valuation for different sequences of decisions is a Bayes-Nash equilibrium.
Our approach is very general, applying to any MDP in which the goal is to maximize
the total expected sequential value across all agents. To illustrate the flexibility of
this model, we can consider the following illustrative applications:
reusable goods. A renewable resource is available in each time period. Agents
arrive and submit a bid for a particular quantity of resource for each of a
contiguous sequence of periods, and before some deadline.
multi-unit auction. A finite number of identical goods are for sale. Agents submit
bids for a quantity of goods with a deadline, by which time a winnerdetermination decision must be made for that agent.
multiagent coordination. A central controller determines and enforces the actions that will be performed by a dynamically changing team of agents.
Agents are only able to perform actions while present in the system.
Our main contribution is to identify this connection between online MD and MDPs,
and to define a new family of dynamic mechanisms, that we term the online VCG
mechanism. We also clearly identify the role of the ability to stall a decision, as it
relates to the value of an agent, in providing for Bayes-Nash truthful mechanisms.
1.1 Related Work
The problem of online MD is due to Friedman and Parkes [2], who focused on
strategyproof online mechanisms in which immediate and truthful revelation of an agent's valuation function is a dominant strategy equilibrium. The authors define
the mechanism that we term the delayed VCG mechanism, identify problems for
which the mechanism is strategyproof, and provide the seeds of our work in BayesNash truthful mechanisms. Work on online auctions [3] is also related, in that
it considers a system with dynamic agent arrivals and departures. However, the
online auction work considers a much simpler setting (see also [4]), for instance the
allocation of a fixed number of identical goods, and places less emphasis on temporal
strategies or allocative efficiency. Awerbuch et al. [5], provide a general method to
construct online auctions from online optimization algorithms. In contrast to our
methods, their methods consider the special case of single-minded bidders with a
value vi for a particular set of resources ri , and are only temporally strategyproof
in the special-case of online algorithms with a non-decreasing acceptance threshold.
2 Preliminaries
In this section, we introduce a general discrete-time and finite-action formulation
for a multiagent sequential decision problem. Putting incentives to one side for
now, we also define and solve an MDP formalization of the problem. We consider
a finite-horizon problem (the model can be trivially extended to consider infinite horizons if all agents share the same discount factor, but more general settings will require some care) with a set $T$ of discrete time points and a sequence of decisions $k = \{k_1, \ldots, k_T\}$, where $k_t \in K_t$ and $K_t$ is the set of feasible decisions in period $t$. Agent $i \in \mathcal{I}$ arrives at time $a_i \in T$, departs at time $d_i \in T$, and has value $v_i(k) \ge 0$ for the sequence of decisions $k$. By assumption, an agent has no value for decisions outside of the interval $[a_i, d_i]$. Agents also face payments, which we
allow in general to be collected after an agent's departure. Collectively, the information $\theta_i = (a_i, d_i, v_i)$ defines the type of agent $i$, with $\theta_i \in \Theta$. Agent types are sampled i.i.d. from a probability distribution $f(\theta)$, assumed known to the agents and to the central mechanism. We allow multiple agents to arrive and depart at the same time. Agent $i$, with type $\theta_i$, receives utility $u_i(k, p; \theta_i) = v_i(k; \theta_i) - p$ for decisions $k$ and payment $p$. Agents are modeled as expected-utility maximizers. We adopt as
our goal that of maximizing the expected total sequential value across all agents.
If we were to simply ignore incentive issues, the expected-value maximizing decision
problem induces an MDP. The state² of the MDP at time $t$ is the history vector $h_t = (\theta^1, \ldots, \theta^t; k_1, \ldots, k_{t-1})$, which includes the reported types up to and including period $t$ and the decisions made up to and including period $t-1$. The set of all possible states at time $t$ is denoted $H_t$. The set of all possible states across all time is $H = \bigcup_{t=1}^{T+1} H_t$. The set of decisions available in state $h_t$ is $K_t(h_t)$. Given a decision $k_t \in K_t(h_t)$ in state $h_t$, there is some probability distribution $\mathrm{Prob}(h_{t+1}|h_t, k_t)$ over possible next states $h_{t+1}$, determined by the random new agent arrivals, agent departures, and the impact of decision $k_t$. This makes explicit the dynamics that were left implicit in the type distribution $\theta_i \sim f(\theta_i)$, and includes additional information
about the domain.
The objective is to make decisions to maximize the expected total value across all
agents. We define a payoff function for the induced MDP as follows: there is a
payoff $R^i(h_t, k_t) = v_i(k_{\le t}; \theta_i) - v_i(k_{\le t-1}; \theta_i)$ that becomes available from agent $i$ upon taking action $k_t$ in state $h_t$. With this, we have $\sum_{t=1}^{\tau} R^i(h_t, k_t) = v_i(k_{\le \tau}; \theta_i)$ for all periods $\tau$. The summed value, $\sum_i R^i(h_t, k_t)$, is the payoff obtained from all
agents at time t, and is denoted R(ht , kt ). By assumption, the reward to an agent
in this basic online MD problem depends only on decisions, and not on state. The
transition probabilities and the reward function defined above, together with the
feasible decision space, constitute the induced MDP Mf .
Given a policy $\pi = \{\pi_1, \pi_2, \ldots, \pi_T\}$, where $\pi_t: H_t \rightarrow K_t$, an MDP defines an MDP-value function $V^\pi$ as follows: $V^\pi(h_t)$ is the expected value of the summed payoff obtained from state $h_t$ onwards under policy $\pi$, i.e., $V^\pi(h_t) = E_\pi\{R(h_t, \pi(h_t)) + R(h_{t+1}, \pi(h_{t+1})) + \cdots + R(h_T, \pi(h_T))\}$. An optimal policy $\pi^*$ is one that maximizes the MDP-value of every state³ in $H$. The optimal MDP-value function $V^*$ can be computed via the following value iteration algorithm: for $t = T-1, T-2, \ldots, 1$,
$$\forall h \in H_t \quad V^*(h) = \max_{k \in K_t(h)} \Big[ R(h, k) + \sum_{h' \in H_{t+1}} \mathrm{Prob}(h'|h, k)\, V^*(h') \Big]$$
where $V^*(h \in H_T) = \max_{k \in K_T(h)} R(h, k)$. This algorithm works backwards in time
from the horizon and has time complexity polynomial in the size of the MDP and
the time horizon T .
Given the optimal MDP-value function, the optimal policy is derived as follows: for $t < T$,
$$\pi^*(h \in H_t) = \arg\max_{k \in K_t(h)} \Big[ R(h, k) + \sum_{h' \in H_{t+1}} \mathrm{Prob}(h'|h, k)\, V^*(h') \Big]$$
and $\pi^*(h \in H_T) = \arg\max_{k \in K_T(h)} R(h, k)$. Note that we have chosen not to
subscript the optimal policy and MDP-value by time because it is implicit in the
length of the state.
² Using histories as state in the induced MDP will make the state space very large. Often, there will be some function $g$ for which $g(h)$ is a sufficient statistic for all possible states $h$. We ignore this possibility here.
³ It is known that a deterministic optimal policy always exists in MDPs [6].
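As a concrete illustration of the backward induction above, here is a minimal finite-horizon value iteration, written for a generic MDP with an explicit state set per period rather than full histories (a sketch; the tabular interface and all names are ours, not the paper's):

```python
import numpy as np

def value_iteration(T, n_states, n_actions, R, P):
    """Backward induction for a finite-horizon MDP.

    R[t][s][k] : payoff for action k in state s at period t (t = 0..T-1)
    P[t][s][k] : length-n_states probability vector over next states
    Returns optimal values V[t][s] and an optimal policy pi[t][s].
    """
    V = np.zeros((T + 1, n_states))          # V[T] = 0 beyond the horizon
    pi = np.zeros((T, n_states), dtype=int)
    for t in range(T - 1, -1, -1):           # work backwards from the horizon
        for s in range(n_states):
            q = [R[t][s][k] + np.dot(P[t][s][k], V[t + 1])
                 for k in range(n_actions)]
            pi[t, s] = int(np.argmax(q))
            V[t, s] = q[pi[t, s]]
    return V, pi

# Toy usage: 2 states, 2 actions, 3 periods, deterministic self-transitions.
T, S, A = 3, 2, 2
R = np.arange(T * S * A, dtype=float).reshape(T, S, A)
P = np.tile(np.eye(S)[None, :, None, :], (T, 1, A, 1))
V, pi = value_iteration(T, S, A, R, P)
```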
Let $R_{<t'}(h_t)$ denote the total payoff obtained prior to time $t'$ for a state $h_t$ with $t \ge t'$. The following property of MDPs is useful.

Lemma 1 (MDP value-consistency) For any time $t < T$, and for any policy $\pi$, $E_{\{h_{t+1},\ldots,h_T|h_t,\pi\}}\{R_{<t'}(h_{t'}) + V^\pi(h_{t'})\} = R_{<t}(h_t) + V^\pi(h_t)$ for all $t' \ge t$, where the expectation is taken with respect to a (correct) MDP model, $M_f$, given information up to and including period $t$ and policy $\pi$.
We will need to allow for incorrect models, $M_f$, because agents may misreport their true types $\theta$ as untruthful types $\hat\theta$. Let $h_t(\hat\theta; \pi)$ denote the state at time $t$ produced by following policy $\pi$ on agents with reported types $\hat\theta$. Payoff, $R(h_t, k_t)$, will always denote the payoff with respect to the reported valuations of agents; in particular, $R_{<t'}(\hat\theta; \pi)$ denotes the total payoff prior to period $t'$ obtained by applying policy $\pi$ to reported types $\hat\theta$.
Example. (WiFi at Starbucks) [2] There is a finite set of WiFi (802.11b) channels to allocate to customers that arrive and leave a coffee house. A decision defines an allocation of a channel to a customer for some period of time. There is a known distribution on agent valuations and a known arrival and departure process. Each customer has her own value function, for example "I value any 10-minute connection in the next 30 minutes at $0.50." The decision space might include the ability to delay making a decision for a new customer, before finally making a definite allocation decision. At this point the MDP reward would be the total value to the agent for this allocation into the future.
The following domain properties are required to formally state the economic properties of our online VCG mechanism. First, we need value-monotonicity, which will
be sufficient to provide for voluntary participation in our mechanism. Let $\theta_i \in h_t$ denote that agent $i$ with type $\theta_i$ arrived in some period $t' \le t$ in history $h_t$.

Definition 1 (value-monotonicity) MDP $M_f$ satisfies value-monotonicity if for all states $h_t$, the optimal MDP-value function satisfies $V^*(h_t(\hat\theta \cup \theta_i; \pi^*)) - V^*(h_t(\hat\theta; \pi^*)) \ge 0$, for agent $i$ with type $\theta_i$ that arrives in period $t$.
Value-monotonicity requires that the arrival of each additional agent has a positive
effect on the expected total value from that state forward. In WiFi at Starbucks,
this is satisfied because an agent with a low value can simply be ignored by the
mechanism. It may fail in other problems, for instance in a physical domain with a
new robot that arrives and blocks the progress of other robots.
Second, we need no-positive-externalities, which will be sufficient for our mechanisms to run without payment deficits to the center.
Definition 2 (no-positive-externalities) MDP $M_f$ satisfies no-positive-externalities if for all states $h_t$, the optimal MDP-value function satisfies $V^*(h_t(\hat\theta \cup \theta_i; \pi^*)) - v_i(\pi^*(h_t(\hat\theta \cup \theta_i; \pi^*)); \theta_i) \le V^*(h_t(\hat\theta; \pi^*))$, for agent $i$ with type $\theta_i$ that arrives in period $t$.
No-positive-externalities requires that the arrival of each additional agent can only
make the other agents worse off in expectation. This holds in WiFi at Starbucks,
because a new agent can take resources from other agents, but not in general,
for instance when agents are both providers and consumers of resources or when
multiple agents are needed to make progress.
3 The Delayed VCG Mechanism
In this section, we define the delayed VCG mechanism, which was introduced in
Friedman and Parkes [2]. The mechanism implements a sequence of decisions based
on agent reports but delays final payments until the final period T . We prove that
the delayed VCG mechanism brings truth-revelation into a Bayes-Nash equilibrium
in combination with an optimal MDP policy.
The delayed VCG mechanism is a direct-revelation online mechanism (DRM). The
strategy space restricts an agent to making a single claim about its type. Formally,
an online direct-revelation mechanism, $M = (\Theta; \pi, p)$, defines a feasible type space $\Theta$, along with a decision policy $\pi = (\pi_1, \ldots, \pi_T)$, with $\pi_t: H_t \rightarrow K_t$, and a payment rule $p = (p_1, \ldots, p_T)$, with $p_t: H_t \rightarrow \mathbb{R}^N$, such that $p_{t,i}(h_t)$ denotes the payment to agent $i$ in period $t$ given state $h_t$.
Definition 3 (delayed VCG mechanism) Given history $h \in H$, mechanism $M_{Dvcg} = (\Theta; \pi, p^{Dvcg})$ implements decisions $k_t = \pi(h_t)$ and computes payment
$$p_i^{Dvcg}(\hat\theta; \pi) = R_{\le T}^i(\hat\theta; \pi) - \big[ R_{\le T}(\hat\theta; \pi) - R_{\le T}(\hat\theta_{-i}; \pi) \big] \qquad (1)$$
to agent $i$ at the end of the final period, where $R_{\le T}(\hat\theta_{-i}; \pi)$ denotes the total reported payoff for the optimal policy in the system without agent $i$.
An agent's payment is discounted from its reported value for the outcome by a term equal to the total (reported) marginal value generated by its presence. Consider agent $i$, with type $\theta_i$, and let $\theta_{<i}$ denote the types of agents that arrive before agent $i$, and let $\theta_{>i}$ denote a random variable (distributed according to $f(\theta)$) for the agents that arrive after agent $i$.
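The bookkeeping behind Equation 1 is simple; the following sketch makes it explicit, under an assumed simulator interface (none of these function names come from the paper):

```python
def delayed_vcg_payment(i, reports, policy, run):
    """Payment (1), charged at the horizon T.

    run(reports, policy) is an assumed simulator returning, for the realized
    history, the per-agent reported payoffs {j: R^j_{<=T}}.
    """
    payoffs = run(reports, policy)
    others = {j: t for j, t in reports.items() if j != i}
    marginal = sum(payoffs.values()) - sum(run(others, policy).values())
    # p_i = R^i_{<=T} - (R_{<=T} with agent i - R_{<=T} without agent i)
    return payoffs[i] - marginal
```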
Definition 4 (Bayesian-Nash Incentive-Compatible) Mechanism $M_{Dvcg}$ is Bayesian-Nash incentive-compatible if and only if the policy $\pi$ and payments satisfy
$$E_{\theta_{>i}}\{v_i(\pi(\theta_{<i}, \theta_i, \theta_{>i}); \theta_i) - p_i^{Dvcg}(\theta_{<i}, \theta_i, \theta_{>i}; \pi)\} \ge E_{\theta_{>i}}\{v_i(\pi(\theta_{<i}, \hat\theta_i, \theta_{>i}); \theta_i) - p_i^{Dvcg}(\theta_{<i}, \hat\theta_i, \theta_{>i}; \pi)\} \quad \text{(BNIC)}$$
for all types $\theta_{<i}$, all types $\theta_i$, and all $\hat\theta_i \ne \theta_i$.
Bayes-Nash IC states that truth-revelation is utility maximizing in expectation, given common knowledge about the distribution $f(\theta)$ on agent valuations and arrivals, and when other agents are truthful. Moreover, it implies immediate revelation, because the type includes information about an agent's arrival period.

Theorem 1 A delayed VCG mechanism, $(\Theta; \pi^*, p^{Dvcg})$, based on an optimal policy $\pi^*$ for a correct MDP model defined for a decision space that includes stalling is Bayes-Nash incentive compatible.
Proof. Assume without loss of generality that the other agents are reporting truthfully. Consider some agent $i$, with type $\theta_i$, and suppose agents $\theta_{<i}$ have already arrived. Now, the expected utility to agent $i$ when it reports type $\hat\theta_i$, substituting for the payment term $p_i^{Dvcg}$, is $E_{\theta_{>i}}\{v_i(\pi^*(\theta_{<i}, \hat\theta_i, \theta_{>i}); \theta_i) + \sum_{j \ne i} R_{\le T}^j(\theta_{<i}, \hat\theta_i, \theta_{>i}; \pi^*) - R_{\le T}(\theta_{<i}, \theta_{>i}; \pi^*)\}$. We can ignore the final term because it does not depend on the choice of $\hat\theta_i$ at all. Let $\tau$ denote the arrival period $a_i$ of agent $i$, with state $h_\tau$ including agent types $\theta_{<i}$, decisions up to and including period $\tau - 1$, and the reported type of agent $i$ if it makes a report in period $a_i$. Ignoring $R_{<\tau}(h_\tau)$, which is the total payoff already received by agents $j \ne i$ in periods up to and including $\tau - 1$, the remaining terms are equal to the expected value of the summed payoff obtained from state $h_\tau$ onwards under policy $\pi^*$,
$$E_{\pi^*}\Big\{ v_i(\pi^*(h_\tau); \theta_i) + \sum_{j \ne i} v_j(\pi^*(h_\tau); \hat\theta_j) + v_i(\pi^*(h_{\tau+1}); \theta_i) + \sum_{j \ne i} v_j(\pi^*(h_{\tau+1}); \hat\theta_j) + \cdots + v_i(\pi^*(h_T); \theta_i) + \sum_{j \ne i} v_j(\pi^*(h_T); \hat\theta_j) \Big\},$$
defined with respect to the true type of agent $i$ and the reported types of agents $j \ne i$. This is the MDP-value for policy $\pi^*$ in state $h_\tau$, $E_{\pi^*}\{R(h_\tau, \pi^*(h_\tau)) + R(h_{\tau+1}, \pi^*(h_{\tau+1})) + \cdots + R(h_T, \pi^*(h_T))\}$, because agents $j \ne i$ are assumed to report their true types in equilibrium. We have a contradiction with the optimality of policy $\pi^*$, because if there is some type $\hat\theta_i \ne \theta_i$ that agent $i$ can report to improve the MDP-value of policy $\pi^*$, given types $\theta_{<i}$, then we can construct a new policy $\pi'$ that is better than policy $\pi^*$: policy $\pi'$ is identical to $\pi^*$ in all states except $h_\tau$, where it implements the decision defined by $\pi^*$ in the state with type $\theta_i$ replaced by type $\hat\theta_i$. The new policy $\pi'$ lies in the space of feasible policies because the decision space includes stalling and can mimic the effect of any manipulation in which agent $i$ reports a later arrival time. ∎
The effect of the first term in the discount in Equation 1 is to align the agent's incentives with the system-wide objective of maximizing the total value across agents. We do not have a stronger equilibrium concept than Bayes-Nash because the mechanism's model will be incorrect if other agents are not truthful and its policy suboptimal. This leaves space for useful manipulation. The following corollary captures the requirement that the MDP's decision space must allow for stalling, i.e. it must include the option to delay making a decision that will determine the value of agent $i$ until some period after the agent's arrival. Say an agent has patience if $d_i > a_i$.

Corollary 2 A delayed VCG mechanism cannot be Bayes-Nash incentive-compatible if agents have any patience and the expected value of its policy can be improved by stalling a decision.

If the policy can be improved through stalling, then an agent can improve its expected utility by delaying its reported arrival to correct for this and make the policy stall. This delayed VCG mechanism is ex ante efficient, because it implements the policy that maximizes the expected total sequential value across all agents. Second, it is interim individual-rational as long as the MDP satisfies the value-monotonicity property: the expected utility to agent $i$ in equilibrium is $E_{\theta_{>i}}\{R_{\le T}(\theta_{<i}, \theta_i, \theta_{>i}; \pi^*) - R_{\le T}(\theta_{<i}, \theta_{>i}; \pi^*)\}$, which is non-negative exactly when value-monotonicity holds. Third, the mechanism is ex ante budget-balanced as long as the MDP satisfies the no-positive-externalities property: the expected payment by agent $i$, with type $\theta_i$, to the mechanism is $E_{\theta_{>i}}\{R_{\le T}(\theta_{<i}, \theta_{>i}; \pi^*) - (R_{\le T}(\theta_{<i}, \theta_i, \theta_{>i}; \pi^*) - R^i_{\le T}(\theta_{<i}, \theta_i, \theta_{>i}; \pi^*))\}$, which is non-negative exactly when the no-positive-externalities condition holds.
4 The Online VCG Mechanism
We now introduce the online VCG mechanism, in which payments are determined as
soon as all decisions are made that affect an agent's value. Not only is this a better
fit with the practical needs of online mechanisms, but the online VCG mechanism
also enables better computational properties than the delayed mechanism.
Let $V^\pi(h_t(\hat\theta_{-i}; \pi))$ denote the MDP-value of policy $\pi$ in the system without agent $i$, given reports $\hat\theta_{-i}$ from other agents, and evaluated in some period $t$.

Definition 5 (online VCG mechanism) Given history $h \in H$, mechanism $M_{vcg} = (\Theta; \pi, p^{vcg})$ implements decisions $k_t = \pi(h_t)$ and computes payment
$$p_i^{vcg}(\hat\theta; \pi) = R_{\le m_i}^i(\hat\theta; \pi) - \big[ V^\pi(h_{a_i}(\hat\theta; \pi)) - V^\pi(h_{a_i}(\hat\theta_{-i}; \pi)) \big] \qquad (2)$$
to agent $i$ in its commitment period $m_i$, with zero payments in all other periods.
Note the payment is computed in the commitment period for an agent, which is some period before the agent's departure at which its value is fully determined. In WiFi at Starbucks, this can be the period in which the mechanism commits to a particular allocation for an agent.

Agent $i$'s payment in the online VCG mechanism is equal to its reported value from the sequence of decisions made by the policy, discounted by the expected marginal value that agent $i$ will contribute to the system (as determined by the MDP-value function for the policy in its arrival period). The discount is defined as the expected forward-looking effect the agent will have on the value of the system. Establishing incentive-compatibility requires some care because the payment now depends on the stated arrival time of an agent. We must show that there is no systematic dependence that an agent can use to its advantage.
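In code, the payment rule (2) mirrors the delayed rule but swaps realized counterfactual payoffs for the MDP-value at arrival (a sketch with an assumed interface; none of these names come from the paper):

```python
def online_vcg_payment(i, reports, policy,
                       payoff_to_commitment, V_opt, arrival_state):
    """Payment (2), charged in agent i's commitment period m_i.

    payoff_to_commitment(i, reports, policy): agent i's reported R^i_{<=m_i}.
    V_opt(state): optimal MDP-value of a state.
    arrival_state(reports, policy): the state h_{a_i} in i's arrival period.
    """
    others = {j: t for j, t in reports.items() if j != i}
    discount = V_opt(arrival_state(reports, policy)) \
        - V_opt(arrival_state(others, policy))
    return payoff_to_commitment(i, reports, policy) - discount
```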
Theorem 3 An online VCG mechanism, $(\Theta; \pi^*, p^{vcg})$, based on an optimal policy $\pi^*$ for a correct MDP model defined for a decision space that includes stalling is Bayes-Nash incentive compatible.
Proof. We establish this result by demonstrating that the expected value of the payment by agent $i$ in the online VCG mechanism is the same as in the delayed VCG mechanism, when other agents report their true types and for any reported type of agent $i$. This proves incentive-compatibility, because the policy in this online VCG mechanism is exactly that in the delayed VCG mechanism (and so an agent's value from decisions is the same), and with identical expected payments the equilibrium follows from the truthful equilibrium of the delayed mechanism. The first term in the payment (see Equation 2) is $R^i_{\le m_i}(\hat\theta_{-i}, \hat\theta_i; \pi^*)$ and has the same value as the first term, $R^i_{\le T}(\hat\theta_{-i}, \hat\theta_i; \pi^*)$, in the payment in the delayed mechanism (see Equation 1). Now, consider the discount term in Equation 2, and rewrite this as:
$$V^{\pi^*}(h_{a_i}(\hat\theta_{-i}, \hat\theta_i; \pi^*)) + R_{<a_i}(\hat\theta_{-i}; \pi^*) - V^{\pi^*}(h_{a_i}(\hat\theta_{-i}; \pi^*)) - R_{<a_i}(\hat\theta_{-i}; \pi^*) \qquad (3)$$
The expected value of the left-hand pair of terms in Equation 3 is equal to $V^{\pi^*}(h_{a_i}(\hat\theta_{-i}, \hat\theta_i; \pi^*)) + R_{<a_i}(\hat\theta_{-i}, \hat\theta_i; \pi^*)$, because agent $i$'s announced type has no effect on the reward before its arrival. Applying Lemma 1, the expected value of these terms is constant and equal to the expected value of $V^{\pi^*}(h_{t'}(\hat\theta_{-i}, \hat\theta_i; \pi^*)) + R_{<t'}(\hat\theta_{-i}, \hat\theta_i; \pi^*)$ for all $t' \ge a_i$ (with the expectation taken with respect to the history $h_{a_i}$ available to agent $i$ in its true arrival period). Moreover, taking $t'$ to be the final period, $T$, this is also equal to the expected value of $R_{\le T}(\hat\theta_{-i}, \hat\theta_i; \pi^*)$, which is the expected value of the first term of the discount in the payment in the delayed VCG mechanism. Similarly, the (negated) expected value of the right-hand pair of terms in Equation 3 is constant, and equals the expected value of $V^{\pi^*}(h_{t'}(\hat\theta_{-i}; \pi^*)) + R_{<t'}(\hat\theta_{-i}; \pi^*)$ for all $t' \ge a_i$. Again, taking $t'$ to be the final period $T$, this is also equal to the expected value of $R_{\le T}(\hat\theta_{-i}; \pi^*)$, which is the expected value of the second term of the discount in the payment in the delayed VCG mechanism. ∎
We have demonstrated that although an agent can systematically reduce the expected value of each of the first and second terms in the discount in its payment
(Equation 2) by delaying its arrival, these effects exactly cancel each other out.
Note that it also remains important for incentive-compatibility of the online VCG mechanism that the policy allows stalling.

The online VCG mechanism shares the properties of allocative efficiency and budget-balance with the delayed VCG mechanism (under the same conditions). The online VCG mechanism is ex post individual-rational, so that an agent's expected utility is always non-negative, a slightly stronger condition than for the delayed VCG mechanism. The expected utility to agent $i$ is $V^*(h_{a_i}) - V^*(h_{a_i} \setminus i)$, which is non-negative because of the value-monotonicity property of MDPs.
The online VCG mechanism also suggests the possibility of new computational
speed-ups. The payment to an agent only requires computing the optimal-MDP
value without the agent in the state in which it arrives, while the delayed VCG
payment requires computing the sequence of decisions that the optimal policy would
have made in the counterfactual world without the presence of each agent.
5 Discussion
We described a direct-revelation mechanism for a general sequential decision making setting with uncertainty. In the Bayes-Nash equilibrium each agent truthfully
reveals its private type information, and immediately upon arrival. The mechanism induces an MDP, and implements the sequence of decisions that maximize
the expected total value across all agents. There are two important directions in
which to take this preliminary work. First, we must deal with the fact that for
most real applications the MDP that will need to be solved to compute the decision
and payment policies will be too big to be solved exactly. We will explore methods for solving large-scale MDPs approximately, and consider the consequences for
incentive-compatibility. Second, we must deal with the fact that the mechanism
will often have at best an incomplete and inaccurate knowledge of the distributions
on agent-types. We will explore the interaction between models of learning and
incentives, and consider the problem of adaptive online mechanisms.
Acknowledgments
This work is supported in part by NSF grant IIS-0238147.
References
[1] Matthew O. Jackson. Mechanism theory. In The Encyclopedia of Life Support
Systems. EOLSS Publishers, 2000.
[2] Eric Friedman and David C. Parkes. Pricing WiFi at Starbucks - Issues in online mechanism design. Short paper, in Fourth ACM Conf. on Electronic Commerce (EC'03), 240–241, 2003.
[3] Ron Lavi and Noam Nisan. Competitive analysis of incentive compatible on-line
auctions. In Proc. 2nd ACM Conf. on Electronic Commerce (EC-00), 2000.
[4] Avrim Blum, Vijay Kumar, Atri Rudra, and Felix Wu. Online learning in online auctions. In Proceedings of the 14th Annual ACM-SIAM Symposium on Discrete Algorithms, 2003.
[5] Baruch Awerbuch, Yossi Azar, and Adam Meyerson. Reducing truth-telling
online mechanisms to online optimization. In Proc. ACM Symposium on Theory
of Computing (STOC?03), 2003.
[6] M. L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, New York, 1994.
according:1 combination:1 across:7 slightly:1 son:1 rob:2 making:5 taken:2 resource:5 equation:7 payment:28 remains:1 mechanism:67 fail:1 needed:1 wrt:1 yossi:1 end:1 umich:1 available:5 enforce:1 existence:1 denotes:3 remaining:1 include:2 commits:1 k1:2 coffee:1 establish:1 prof:1 objective:2 already:2 quantity:2 depart:3 strategy:4 dependence:1 md:11 hai:3 deficit:1 valuation:5 considers:4 collected:1 consumer:1 length:1 modeled:1 providing:2 balance:1 stoc:1 noam:1 negative:3 stated:1 design:5 policy:39 perform:1 negated:1 markov:3 finite:4 immediate:4 payoff:11 extended:1 maxk:2 team:1 voluntary:1 delaying:2 rn:1 looking:1 david:2 introduced:1 pair:2 required:1 connection:2 able:1 departure:6 including:6 max:2 participation:1 improve:3 mdps:6 state3:1 temporally:1 prior:2 subfield:1 multiagent:2 loss:1 fully:1 allocation:5 agent:119 sufficient:3 systematically:1 share:2 compatible:5 supported:1 soon:1 side:1 allow:4 telling:1 wide:2 face:1 taking:3 benefit:1 distributed:1 world:3 transition:1 computes:2 author:1 made:7 forward:2 adaptive:1 meyerson:1 ec:2 ignore:3 starbucks:5 satinder:1 monotonicity:7 reveals:1 assumed:3 state2:1 truthfully:2 channel:2 ignoring:1 domain:3 submit:2 vj:3 main:1 big:1 azar:1 arrival:18 wiley:1 formalization:1 explicit:1 lie:1 house:1 third:1 minute:2 departs:1 theorem:2 maximizers:1 exists:1 avrim:1 sequential:7 budget:2 horizon:4 mf:5 michigan:1 simply:2 explore:2 atri:1 subtlety:1 collectively:1 truth:4 determines:1 satisfies:6 acm:4 goal:2 formulated:1 feasible:4 infinite:1 determined:4 except:1 reducing:1 lemma:2 total:14 pvcg:3 formally:2 support:1 ex:3 |
1,577 | 2,433 | Margin Maximizing Loss Functions
Saharon Rosset
Watson Research Center
IBM
Yorktown, NY, 10598
[email protected]
Ji Zhu
Department of Statistics
University of Michigan
Ann Arbor, MI, 48109
[email protected]
Trevor Hastie
Department of Statistics
Stanford University
Stanford, CA, 94305
[email protected]
Abstract
Margin maximizing properties play an important role in the analysis of classification models, such as boosting and support vector machines. Margin maximization is theoretically interesting because it facilitates generalization error analysis, and practically interesting because it presents a clear geometric interpretation of the models being built. We formulate and prove a sufficient condition for the solutions of regularized loss functions to converge to margin maximizing separators, as the regularization vanishes. This condition covers the hinge loss of SVM, the exponential loss of AdaBoost and logistic regression loss. We also generalize it to multi-class classification problems, and present margin maximizing multi-class versions of logistic regression and support vector machines.
1 Introduction
Assume we have a classification "learning" sample $\{x_i, y_i\}_{i=1}^n$ with $y_i \in \{-1, +1\}$. We wish to build a model $F(x)$ for this data by minimizing (exactly or approximately) a loss criterion $\sum_i C(y_i, F(x_i)) = \sum_i C(y_i F(x_i))$ which is a function of the margins $y_i F(x_i)$ of this model on this data. Most common classification modeling approaches can be cast in this framework: logistic regression, support vector machines, boosting and more. The model $F(x)$ which these methods actually build is a linear combination of dictionary functions coming from a dictionary $\mathcal{H}$ which can be large or even infinite:
$$F(x) = \sum_{h_j \in \mathcal{H}} \beta_j h_j(x)$$
and our prediction at point $x$ based on this model is $\operatorname{sgn} F(x)$.

When $|\mathcal{H}|$ is large, as is the case in most boosting or kernel SVM applications, some regularization is needed to control the "complexity" of the model $F(x)$ and the resulting overfitting. Thus, it is common that the quantity actually minimized on the data is a regularized version of the loss function:
$$\hat\beta(\lambda) = \arg\min_\beta \sum_i C(y_i \beta' h(x_i)) + \lambda \|\beta\|_p^p \qquad (1)$$
where the second term penalizes the $l_p$ norm of the coefficient vector $\beta$ ($p \ge 1$ for convexity, and in practice usually $p \in \{1, 2\}$), and $\lambda \ge 0$ is a tuning regularization parameter. The 1- and 2-norm support vector machine training problems with slack can be cast in this form ([6], chapter 12). In [8] we have shown that boosting approximately follows the "path" of regularized solutions traced by (1) as the regularization parameter $\lambda$ varies, with the appropriate loss and an $l_1$ penalty.
The main question that we answer in this paper is: for what loss functions does $\hat\beta(\lambda)$ converge to an "optimal" separator as $\lambda \rightarrow 0$? The definition of "optimal" which we will use depends on the $l_p$ norm used for regularization, and we will term it the "$l_p$-margin maximizing separating hyper-plane". More concisely, we will investigate for which loss functions and under which conditions we have:
$$\lim_{\lambda \rightarrow 0} \frac{\hat\beta(\lambda)}{\|\hat\beta(\lambda)\|_p} = \arg\max_{\|\beta\|_p = 1} \min_i y_i \beta' h(x_i) \qquad (2)$$
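This limit is easy to observe numerically. The sketch below (our illustration, not the paper's code; all settings are arbitrary) minimizes the exponential loss with an $l_2$ penalty on separable two-dimensional data by plain gradient descent, and prints the normalized solution and its minimal normalized margin as $\lambda$ shrinks; the direction stabilizes and the margin grows toward the max-$l_2$-margin value:

```python
import numpy as np

rng = np.random.default_rng(1)
X = np.vstack([rng.normal([2, 2], 0.3, size=(20, 2)),
               rng.normal([-2, -2], 0.3, size=(20, 2))])
y = np.r_[np.ones(20), -np.ones(20)]

def beta_hat(lam, steps=5000, lr=0.02):
    """Gradient descent on sum_i exp(-y_i x_i' b) + lam * ||b||_2^2."""
    b = np.zeros(2)
    for _ in range(steps):
        w = np.exp(-y * (X @ b))                 # per-example loss weights
        grad = -(X * (y * w)[:, None]).sum(0) + 2 * lam * b
        b -= lr * grad
    return b

for lam in (1.0, 1e-2, 1e-4, 1e-6):
    b = beta_hat(lam)
    direction = b / np.linalg.norm(b)
    print(lam, direction, (y * (X @ direction)).min())
```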
This margin maximizing property is interesting for three distinct reasons. First, it gives us a geometric interpretation of the "limiting" model as we relax the regularization. It tells us that this loss seeks to optimally separate the data by maximizing a distance between a separating hyper-plane and the "closest" points. A theorem by Mangasarian [7] allows us to interpret $l_p$ margin maximization as $l_q$ distance maximization, with $1/p + 1/q = 1$, and hence make a clear geometric interpretation. Second, from a learning theory perspective large margins are an important quantity: generalization error bounds that depend on the margins have been generated for support vector machines ([10], using $l_2$ margins) and boosting ([9], using $l_1$ margins). Thus, showing that a loss function is "margin maximizing" in this sense is useful and promising information regarding this loss function's potential for generating good prediction models. Third, practical experience shows that exact or approximate margin maximization (such as non-regularized kernel SVM solutions, or "infinite" boosting) may actually lead to good classification prediction models. This is certainly not always the case, and we return to this hotly debated issue in our discussion.
Our main result is a sufficient condition on the loss function which guarantees that (2) holds if the data is separable, i.e. if the maximum on the RHS of (2) is positive. This condition is presented and proven in section 2. It covers the hinge loss of support vector machines, the logistic log-likelihood loss of logistic regression, and the exponential loss, most notably used in boosting. We discuss these and other examples in section 3. Our result generalizes elegantly to multi-class models and loss functions. We present the resulting margin-maximizing versions of SVMs and logistic regression in section 4.
2 Sufficient condition for margin maximization
The following theorem shows that if the loss function vanishes "quickly" enough, then it will be margin-maximizing as the regularization vanishes. It provides us with a unified margin-maximization theory, covering SVMs, logistic regression and boosting.

Theorem 2.1 Assume the data $\{x_i, y_i\}_{i=1}^n$ is separable, i.e. $\exists\beta$ s.t. $\min_i y_i \beta' h(x_i) > 0$. Let $C(y, f) = C(yf)$ be a monotone non-increasing loss function depending on the margin only. If $\exists T > 0$ (possibly $T = \infty$) such that:
$$\lim_{t \rightarrow T} \frac{C(t \cdot [1 - \epsilon])}{C(t)} = \infty, \quad \forall \epsilon > 0 \qquad (3)$$
then $C$ is a margin maximizing loss function in the sense that any convergence point of the normalized solutions $\frac{\hat\beta(\lambda)}{\|\hat\beta(\lambda)\|_p}$ to the regularized problems (1) as $\lambda \rightarrow 0$ is an $l_p$-margin-maximizing separating hyper-plane. Consequently, if this margin-maximizing hyper-plane is unique, then the solutions converge to it:
$$\lim_{\lambda \rightarrow 0} \frac{\hat\beta(\lambda)}{\|\hat\beta(\lambda)\|_p} = \arg\max_{\|\beta\|_p = 1} \min_i y_i \beta' h(x_i) \qquad (4)$$
Proof We prove the result separately for $T = \infty$ and $T < \infty$.

a. $T = \infty$:

Lemma 2.2 $\|\hat\beta(\lambda)\|_p \stackrel{\lambda \rightarrow 0}{\longrightarrow} \infty$

Proof Since $T = \infty$, then $C(m) > 0\ \forall m > 0$, and $\lim_{m \rightarrow \infty} C(m) = 0$. Therefore, for loss+penalty to vanish as $\lambda \rightarrow 0$, $\|\hat\beta(\lambda)\|_p$ must diverge, to allow the margins to diverge.

Lemma 2.3 Assume $\beta_1, \beta_2$ are two separating models, with $\|\beta_1\|_p = \|\beta_2\|_p = 1$, and $\beta_1$ separates the data better, i.e.: $0 < m_2 = \min_i y_i h(x_i)'\beta_2 < m_1 = \min_i y_i h(x_i)'\beta_1$. Then $\exists U = U(m_1, m_2)$ such that
$$\forall t > U, \quad \sum_i C(y_i h(x_i)'(t\beta_1)) < \sum_i C(y_i h(x_i)'(t\beta_2))$$
In words, if $\beta_1$ separates better than $\beta_2$, then scaled-up versions of $\beta_1$ will incur smaller loss than scaled-up versions of $\beta_2$, if the scaling factor is large enough.

Proof Since condition (3) holds with $T = \infty$, there exists $U$ such that $\forall t > U,\ \frac{C(tm_2)}{C(tm_1)} > n$. Thus from $C$ being non-increasing we immediately get:
$$\forall t > U, \quad \sum_i C(y_i h(x_i)'(t\beta_1)) \le n \cdot C(tm_1) < C(tm_2) \le \sum_i C(y_i h(x_i)'(t\beta_2))$$
Proof of case a.: Assume $\beta^*$ is a convergence point of $\frac{\hat\beta(\lambda)}{\|\hat\beta(\lambda)\|_p}$ as $\lambda \rightarrow 0$, with $\|\beta^*\|_p = 1$. Now assume by contradiction that $\tilde\beta$ has $\|\tilde\beta\|_p = 1$ and bigger minimal $l_p$ margin. Denote the minimal margins for the two models by $m^*$ and $\tilde m$, respectively, with $m^* < \tilde m$. By continuity of the minimal margin in $\beta$, there exists some open neighborhood of $\beta^*$ on the $l_p$ sphere:
$$N_{\beta^*} = \{\beta : \|\beta\|_p = 1,\ \|\beta - \beta^*\|_2 < \delta\}$$
and an $\epsilon > 0$, such that:
$$\min_i y_i \beta' h(x_i) < \tilde m - \epsilon, \quad \forall \beta \in N_{\beta^*}$$
Now by Lemma 2.3 we get that there exists $U = U(\tilde m, \tilde m - \epsilon)$ such that $t\tilde\beta$ incurs smaller loss than $t\beta$ for any $t > U$, $\beta \in N_{\beta^*}$. Therefore $\beta^*$ cannot be a convergence point of $\frac{\hat\beta(\lambda)}{\|\hat\beta(\lambda)\|_p}$.
b. $T < \infty$

Lemma 2.4 $C(T) = 0$ and $C(T - \epsilon) > 0,\ \forall \epsilon > 0$.

Proof From condition (3), $\frac{C(T - \tilde\epsilon)}{C(T)} = \infty$ for any $0 < \tilde\epsilon < T$ (take $\tilde\epsilon = \epsilon T$). Both results follow immediately.
Lemma 2.5 $\lim_{\lambda \rightarrow 0} \min_i y_i \hat\beta(\lambda)' h(x_i) = T$

Proof Assume by contradiction that there is a sequence $\lambda_1, \lambda_2, \ldots \searrow 0$ and $\epsilon > 0$ s.t. $\forall j,\ \min_i y_i \hat\beta(\lambda_j)' h(x_i) \le T - \epsilon$. Pick any separating normalized model $\tilde\beta$, i.e. $\|\tilde\beta\|_p = 1$ and $\tilde m := \min_i y_i \tilde\beta' h(x_i) > 0$. Then for any $\lambda < \frac{\tilde m^p\, C(T - \epsilon)}{T^p}$ we get:
$$\sum_i C\Big(y_i \frac{T}{\tilde m} \tilde\beta' h(x_i)\Big) + \lambda \Big\|\frac{T}{\tilde m}\tilde\beta\Big\|_p^p < C(T - \epsilon)$$
since the first term (the loss) is 0 and the penalty is smaller than $C(T - \epsilon)$ by the condition on $\lambda$. But $\exists j_0$ s.t. $\lambda_{j_0} < \frac{\tilde m^p\, C(T - \epsilon)}{T^p}$, and so we get a contradiction to the optimality of $\hat\beta(\lambda_{j_0})$, since we assumed $\min_i y_i \hat\beta(\lambda_{j_0})' h(x_i) \le T - \epsilon$ and thus:
$$\sum_i C(y_i \hat\beta(\lambda_{j_0})' h(x_i)) \ge C(T - \epsilon)$$
We have thus proven that $\liminf_{\lambda \rightarrow 0} \min_i y_i \hat\beta(\lambda)' h(x_i) \ge T$. It remains to prove equality. Assume by contradiction that for some value of $\lambda$ we have $m := \min_i y_i \hat\beta(\lambda)' h(x_i) > T$. Then the re-scaled model $\frac{T}{m}\hat\beta(\lambda)$ has the same zero loss as $\hat\beta(\lambda)$, but a smaller penalty, since $\|\frac{T}{m}\hat\beta(\lambda)\|_p = \frac{T}{m}\|\hat\beta(\lambda)\|_p < \|\hat\beta(\lambda)\|_p$. So we get a contradiction to the optimality of $\hat\beta(\lambda)$.
Proof of case b.: Assume $\beta^*$ is a convergence point of $\frac{\hat\beta(\lambda)}{\|\hat\beta(\lambda)\|_p}$ as $\lambda \rightarrow 0$, with $\|\beta^*\|_p = 1$. Now assume by contradiction that $\tilde\beta$ has $\|\tilde\beta\|_p = 1$ and bigger minimal margin. Denote the minimal margins for the two models by $m^*$ and $\tilde m$, respectively, with $m^* < \tilde m$. Let $\lambda_1, \lambda_2, \ldots \searrow 0$ be a sequence along which $\frac{\hat\beta(\lambda_j)}{\|\hat\beta(\lambda_j)\|_p} \rightarrow \beta^*$. By Lemma 2.5 and our assumption, $\|\hat\beta(\lambda_j)\|_p \rightarrow \frac{T}{m^*} > \frac{T}{\tilde m}$. Thus, $\exists j_0$ such that $\forall j > j_0$, $\|\hat\beta(\lambda_j)\|_p > \frac{T}{\tilde m}$, and consequently:
$$\sum_i C(y_i \hat\beta(\lambda_j)' h(x_i)) + \lambda_j \|\hat\beta(\lambda_j)\|_p^p > \lambda_j \Big(\frac{T}{\tilde m}\Big)^p = \sum_i C\Big(y_i \frac{T}{\tilde m}\tilde\beta' h(x_i)\Big) + \lambda_j \Big\|\frac{T}{\tilde m}\tilde\beta\Big\|_p^p$$
since the loss term on the right vanishes (all its margins are at least $T$). So we get a contradiction to the optimality of $\hat\beta(\lambda_j)$.
Thus we conclude for both cases a. and b. that any convergence point of $\frac{\hat\beta(\lambda)}{\|\hat\beta(\lambda)\|_p}$ must maximize the $l_p$ margin. Since $\big\|\frac{\hat\beta(\lambda)}{\|\hat\beta(\lambda)\|_p}\big\|_p = 1$, such convergence points obviously exist. If the $l_p$-margin-maximizing separating hyper-plane is unique, then we can conclude:
$$\frac{\hat\beta(\lambda)}{\|\hat\beta(\lambda)\|_p} \rightarrow \beta^* := \arg\max_{\|\beta\|_p = 1} \min_i y_i \beta' h(x_i)$$
Necessity results

A necessity result for margin maximization on any separable data seems to require either additional assumptions on the loss or a relaxation of condition (3). We conjecture that if we also require that the loss is convex and vanishing (i.e. $\lim_{m \rightarrow \infty} C(m) = 0$), then condition (3) is sufficient and necessary. However, this is still a subject for future research.
3 Examples
Support vector machines
Support vector machines (linear or kernel) can be described as a regularized problem:
$$\min_\beta \sum_i [1 - y_i \beta' h(x_i)]_+ + \lambda \|\beta\|_p^p \qquad (5)$$
where $p = 2$ for the standard ("2-norm") SVM and $p = 1$ for the 1-norm SVM. This formulation is equivalent to the better known "norm minimization" SVM formulation in the sense that they have the same set of solutions as the regularization parameter $\lambda$ varies in (5) or the slack bound varies in the norm minimization formulation.

The loss in (5) is termed "hinge loss" since it is linear for margins less than 1, then fixed at 0 (see Figure 1). The theorem obviously holds for $T = 1$, and it verifies our knowledge that the non-regularized SVM solution, which is the limit of the regularized solutions, maximizes the appropriate margin (Euclidean for the standard SVM, $l_1$ for the 1-norm SVM). Note that our theorem indicates that the squared hinge loss (AKA truncated squared loss)
$$C(y_i, F(x_i)) = [1 - y_i F(x_i)]_+^2$$
is also a margin-maximizing loss.
Logistic regression and boosting
The two loss functions we consider in this context are:
$$\text{Exponential:} \quad C_e(m) = \exp(-m) \qquad (6)$$
$$\text{Log-likelihood:} \quad C_l(m) = \log(1 + \exp(-m)) \qquad (7)$$
These two loss functions are of great interest in the context of two-class classification: $C_l$ is used in logistic regression and more recently for boosting [4], while $C_e$ is the implicit loss function used by AdaBoost, the original and most famous boosting algorithm [3]. In [8] we showed that boosting approximately follows the regularized path of solutions $\hat\beta(\lambda)$ using these loss functions and $l_1$ regularization. We also proved that the two loss functions are very similar for positive margins, and that their regularized solutions converge to margin-maximizing separators. Theorem 2.1 provides a new proof of this result, since the theorem's condition holds with $T = \infty$ for both loss functions.
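A quick numeric sanity check of condition (3) for the three losses (our illustration): for the hinge loss the ratio blows up as $t \rightarrow T = 1$, and for the exponential and logistic losses as $t \rightarrow \infty$:

```python
import numpy as np

hinge = lambda m: np.maximum(1 - m, 0.0)
expo = lambda m: np.exp(-m)
logi = lambda m: np.log1p(np.exp(-m))

eps = 0.1
for t in (0.9, 0.99, 0.999):          # t -> T = 1 for the hinge loss
    print("hinge   ", t, hinge(t * (1 - eps)) / hinge(t))
for t in (10.0, 50.0, 100.0):         # t -> infinity for exp / logistic
    print("exp     ", t, expo(t * (1 - eps)) / expo(t))
    print("logistic", t, logi(t * (1 - eps)) / logi(t))
```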
Some interesting non-examples
Commonly used classification loss functions which are not margin-maximizing include the polynomial loss functions: $C(m) = \frac{1}{m}$, $C(m) = m^2$, etc., which do not guarantee convergence of the regularized solutions to margin-maximizing solutions.

Another interesting method in this context is linear discriminant analysis. Although it does not correspond to the loss+penalty formulation we have described, it does find a "decision hyper-plane" in the predictor space. For both polynomial loss functions and linear discriminant analysis it is easy to find examples which show that they are not necessarily margin maximizing on separable data.
4 A multi-class generalization

Our main result can be elegantly extended to versions of multi-class logistic regression and support vector machines, as follows. Assume the response is now multi-class, with $K \ge 2$ possible values, i.e. $y_i \in \{c_1, \ldots, c_K\}$. Our model consists of a "prediction" for each class:
$$F_k(x) = \sum_{h_j \in \mathcal{H}} \beta_j^{(k)} h_j(x)$$
with the obvious prediction rule at $x$ being $\arg\max_k F_k(x)$.

This gives rise to a $(K-1)$-dimensional "margin" for each observation. For $y = c_k$, define the margin vector as:
$$m(c_k, f_1, \ldots, f_K) = (f_k - f_1, \ldots, f_k - f_{k-1}, f_k - f_{k+1}, \ldots, f_k - f_K) \qquad (8)$$
And our loss is a function of this $(K-1)$-dimensional margin:
$$C(y, f_1, \ldots, f_K) = \sum_k I\{y = c_k\}\, C(m(c_k, f_1, \ldots, f_K))$$
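In code, the margin vector (8) and the induced loss look like this (an illustrative helper, not from the paper):

```python
import numpy as np

def margin_vector(k, f):
    """m(c_k, f_1, ..., f_K): f_k minus every other class score."""
    f = np.asarray(f, dtype=float)
    return f[k] - np.delete(f, k)

def multiclass_loss(y, f, C):
    """C is any margin loss on R^{K-1}, e.g. the SVM loss of Section 4.1."""
    return C(margin_vector(y, f))

f = [0.2, 1.5, -0.3]
print(margin_vector(1, f))   # [1.3, 1.8]: margins of class 2 over classes 1, 3
```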
[Figure 1: Margin maximizing loss functions for 2-class problems (left) and the SVM 3-class loss function of section 4.1 (right).]
The $l_p$-regularized problem is now:
$$\hat\beta(\lambda) = \arg\min_{\beta^{(1)}, \ldots, \beta^{(K)}} \sum_i C(y_i, h(x_i)'\beta^{(1)}, \ldots, h(x_i)'\beta^{(K)}) + \lambda \sum_k \|\beta^{(k)}\|_p^p \qquad (9)$$
where $\hat\beta(\lambda) = (\hat\beta^{(1)}(\lambda), \ldots, \hat\beta^{(K)}(\lambda)) \in \mathbb{R}^{K \cdot |\mathcal{H}|}$.

In this formulation, the concept of margin maximization corresponds to maximizing the minimum of all $n \cdot (K-1)$ normalized $l_p$-margins generated by the data:
$$\max_{\|\beta^{(1)}\|_p^p + \ldots + \|\beta^{(K)}\|_p^p = 1} \ \min_i \ \min_{c_k \ne y_i} h(x_i)'(\beta^{(y_i)} - \beta^{(k)}) \qquad (10)$$
Note that this margin maximization problem still has a natural geometric interpretation, as $h(x_i)'(\beta^{(y_i)} - \beta^{(k)}) > 0\ \forall i,\ c_k \ne y_i$ implies that the hyper-plane $h(x)'(\beta^{(j)} - \beta^{(k)}) = 0$ successfully separates classes $j$ and $k$ for any two classes.

Here is a generalization of the optimal separation theorem 2.1 to multi-class models:

Theorem 4.1 Assume $C(m)$ is commutative and decreasing in each coordinate. Then if $\exists T > 0$ (possibly $T = \infty$) such that:
$$\lim_{t \rightarrow T} \frac{C(t[1-\epsilon], tu_1, \ldots, tu_{K-2})}{C(t, tv_1, \ldots, tv_{K-2})} = \infty, \quad \forall \epsilon > 0,\ u_1 \ge 1, \ldots, u_{K-2} \ge 1,\ v_1 \ge 1, \ldots, v_{K-2} \ge 1 \qquad (11)$$
then $C$ is a margin-maximizing loss function for multi-class models, in the sense that any convergence point of the normalized solutions to (9), $\frac{\hat\beta(\lambda)}{\|\hat\beta(\lambda)\|_p}$, attains the optimal separation as defined in (10).

Idea of proof The proof is essentially identical to the two-class case, now considering the $n \cdot (K-1)$ margins on which the loss depends. The condition (11) implies that as the regularization vanishes the model is determined by the minimal margin, and so an optimal model puts the emphasis on maximizing that margin.
Corollary 4.2 In the 2-class case, theorem 4.1 reduces to theorem 2.1.
Proof The loss depends on $\beta^{(1)} - \beta^{(2)}$, the penalty on $\|\beta^{(1)}\|_p^p + \|\beta^{(2)}\|_p^p$. An optimal solution to the regularized problem must thus have $\beta^{(1)} + \beta^{(2)} = 0$, since by transforming:
$$\beta^{(1)} \leftarrow \beta^{(1)} - \frac{\beta^{(1)} + \beta^{(2)}}{2}, \quad \beta^{(2)} \leftarrow \beta^{(2)} - \frac{\beta^{(1)} + \beta^{(2)}}{2}$$
we are not changing the loss, but reducing the penalty, by Jensen's inequality:
$$\left\|\beta^{(1)} - \frac{\beta^{(1)} + \beta^{(2)}}{2}\right\|_p^p + \left\|\beta^{(2)} - \frac{\beta^{(1)} + \beta^{(2)}}{2}\right\|_p^p = 2\left\|\frac{\beta^{(1)} - \beta^{(2)}}{2}\right\|_p^p \leq \|\beta^{(1)}\|_p^p + \|\beta^{(2)}\|_p^p$$
So we can conclude that $\hat{\beta}^{(1)}(\lambda) = -\hat{\beta}^{(2)}(\lambda)$ and consequently that the two margin maximization tasks (2), (10) are equivalent.
4.1 Margin maximization in multi-class SVM and logistic regression
Here we apply theorem 4.1 to versions of multi-class logistic regression and SVM.
For logistic regression, we use a slightly different formulation than the "standard" logistic regression models, which uses class $K$ as a "reference" class, i.e. assumes that $\beta^{(K)} = 0$. This is required for non-regularized fitting, since without it the solution is not uniquely defined. However, using regularization as in (9) guarantees that the solution will be unique and consequently we can "symmetrize" the model, which allows us to apply theorem 4.1. So the loss function we use is (assume $y = c_k$ belongs to class $k$):
$$C(y, f_1, ..., f_K) = -\log\frac{e^{f_k}}{e^{f_1} + ... + e^{f_K}} = \log(e^{f_1 - f_k} + ... + e^{f_{k-1} - f_k} + 1 + e^{f_{k+1} - f_k} + ... + e^{f_K - f_k}) \quad (12)$$
with the linear model: $f_j(x_i) = h(x_i)'\beta^{(j)}$. It is not difficult to verify that condition (11) holds for this loss function with $T = \infty$, using the fact that $\log(1 + \epsilon) = \epsilon + O(\epsilon^2)$. The sum of exponentials which results from applying this first-order approximation satisfies (11), and as $\epsilon \to 0$, the second order term can be ignored.
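As a sanity check on the identity in (12), here is a minimal NumPy sketch (names illustrative) that computes the loss stably via the log-sum-exp trick and verifies its margin form:

```python
import numpy as np

def logistic_loss_multiclass(y, f):
    """C(y, f) = -log(exp(f_y) / sum_k exp(f_k)), equation (12)."""
    f = np.asarray(f, dtype=float)
    fmax = f.max()                      # shift to avoid overflow in exp
    return fmax + np.log(np.exp(f - fmax).sum()) - f[y]

# The margin form of (12) gives the same value:
f = np.array([0.4, 2.0, -1.1]); y = 1
m = f[y] - np.delete(f, y)              # margin vector m(c_y, f)
assert np.isclose(logistic_loss_multiclass(y, f), np.log(1 + np.exp(-m).sum()))
```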
For support vector machines, consider a multi-class loss which is a natural generalization of the two-class loss:
$$C(m) = \sum_{j=1}^{K-1} [1 - m_j]_+ \quad (13)$$
where $m_j$ is the $j$'th component of the multi-margin $m$ as in (8). Figure 1 shows this loss for $K = 3$ classes as a function of the two margins. The loss+penalty formulation using (13) is equivalent to a standard optimization formulation of multi-class SVM (e.g. [11]):
$$\max c \quad \text{s.t.} \quad h(x_i)'(\beta^{(y_i)} - \beta^{(k)}) \geq c(1 - \xi_{ik}), \; i \in \{1, ..., n\}, \; k \in \{1, ..., K\}, \; c_k \neq y_i$$
$$\xi_{ik} \geq 0, \quad \sum_{i,k} \xi_{ik} \leq B, \quad \sum_k \|\beta^{(k)}\|_p^p = 1$$
As both theorem 4.1 (using $T = 1$) and the optimization formulation indicate, the regularized solutions to this problem converge to the $l_p$ margin maximizing multi-class solution.
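For comparison with the logistic sketch above, a minimal NumPy sketch (names illustrative) of the multi-class hinge loss (13) on the same margin vector:

```python
import numpy as np

def hinge_loss_multiclass(y, f):
    """C(m) = sum_{j=1}^{K-1} [1 - m_j]_+ evaluated at m = m(c_y, f)."""
    f = np.asarray(f, dtype=float)
    m = f[y] - np.delete(f, y)          # the K-1 margins of equation (8)
    return np.maximum(0.0, 1.0 - m).sum()

# Zero loss once every margin exceeds 1, mirroring the two-class hinge:
assert hinge_loss_multiclass(0, [3.0, 1.5, 0.0]) == 0.0
```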
5 Discussion
What are the properties we would like to have in a classification loss function? Recently there has been a lot of interest in Bayes-consistency of loss functions and algorithms ([1] and references therein) as the data size increases. It turns out that practically all "reasonable" loss functions are consistent in that sense, although convergence rates and other measures of "degree of consistency" may vary.
Margin maximization, on the other hand, is a finite sample optimality property of loss functions, which is potentially of decreasing interest as sample size grows, since the training data-set is less likely to be separable. Note, however, that in very high dimensional predictor spaces, such as those typically used by boosting or kernel SVM, separability of any finite-size data-set is a mild assumption, which is violated only in pathological cases.
We have shown that the margin maximizing property is shared by some popular loss functions used in logistic regression, support vector machines and boosting. Knowing that these algorithms "converge", as regularization vanishes, to the same model (provided they use the same regularization) is an interesting insight. So, for example, we can conclude that 1-norm support vector machines, exponential boosting and $l_1$-regularized logistic regression all facilitate the same non-regularized solution, which is an $l_1$-margin maximizing separating hyper-plane. From Mangasarian's theorem [7] we know that this hyper-plane maximizes the $l_\infty$ distance from the closest points on either side.
The most interesting statistical question which arises is: are these "optimal" separating models really good for prediction, or should we expect regularized models to always do better in practice? Statistical intuition supports the latter, as do some margin-maximizing experiments by Breiman [2] and Grove and Schuurmans [5]. However it has also been observed that in many cases margin-maximization leads to reasonable prediction models, and does not necessarily result in over-fitting. We have had similar experience with boosting and kernel SVM. Settling this issue is an intriguing research topic, and one that is critical in determining the practical importance of our results, as well as that of margin-based generalization error bounds.
References
[1] Bartlett, P., Jordan, M. & McAuliffe, J. (2003). Convexity, Classification and Risk Bounds. Technical report, Dept. of Statistics, UC Berkeley.
[2] Breiman, L. (1999). Prediction games and arcing algorithms. Neural Computation 7:1493-1517.
[3] Freund, Y. & Schapire, R.E. (1995). A decision theoretic generalization of on-line learning and an application to boosting. Proc. of 2nd European Conf. on Computational Learning Theory.
[4] Friedman, J. H., Hastie, T. & Tibshirani, R. (2000). Additive logistic regression: a statistical
view of boosting. Annals of Statistics 28, pp. 337-407.
[5] Grove, A.J. & Schuurmans, D. (1998). Boosting in the limit: Maximizing the margin of learned
ensembles. Proc. of 15th National Conf. on AI.
[6] Hastie, T., Tibshirani, R. & Friedman, J. (2001). Elements of Stat. Learning. Springer-Verlag.
[7] Mangasarian, O.L. (1999). Arbitrary-norm separating plane. Operations Research Letters, Vol.
24 1-2:15-23
[8] Rosset, S., Zhu, J. & Hastie, T. (2003). Boosting as a regularized path to a maximum margin classifier. Technical report, Dept. of Statistics, Stanford Univ.
[9] Schapire, R.E., Freund, Y., Bartlett, P. & Lee, W.S. (1998). Boosting the margin: a new explanation for the effectiveness of voting methods. Annals of Statistics 26(5):1651-1686.
[10] Vapnik, V. (1995). The Nature of Statistical Learning Theory. Springer.
[11] Weston, J. & Watkins, C. (1998). Multi-class support vector machines. Technical report CSDTR-98-04, dept of CS, Royal Holloway, University of London.
1,578 | 2,434 | Semidefinite Programming
by Perceptron Learning
Ralf Herbrich
Thore Graepel
Microsoft Research Ltd., Cambridge, UK
{thoreg,rherb}@microsoft.com
Andriy Kharechko
John Shawe-Taylor
Royal Holloway, University of London, UK
{ak03r,jst}@ecs.soton.ac.uk
Abstract
We present a modified version of the perceptron learning algorithm
(PLA) which solves semidefinite programs (SDPs) in polynomial
time. The algorithm is based on the following three observations:
(i) Semidefinite programs are linear programs with infinitely many
(linear) constraints; (ii) every linear program can be solved by a
sequence of constraint satisfaction problems with linear constraints;
(iii) in general, the perceptron learning algorithm solves a constraint
satisfaction problem with linear constraints in finitely many updates.
Combining the PLA with a probabilistic rescaling algorithm (which,
on average, increases the size of the feasible region) results in a probabilistic algorithm for solving SDPs that runs in polynomial time.
We present preliminary results which demonstrate that the algorithm works, but is not competitive with state-of-the-art interior
point methods.
1 Introduction
Semidefinite programming (SDP) is one of the most active research areas in optimisation. Its appeal derives from important applications in combinatorial optimisation
and control theory, from the recent development of efficient algorithms for solving
SDP problems and the depth and elegance of the underlying optimisation theory [14],
which covers linear, quadratic, and second-order cone programming as special cases.
Recently, semidefinite programming has been discovered as a useful toolkit in machine
learning with applications ranging from pattern separation via ellipsoids [4] to kernel
matrix optimisation [5] and transformation invariant learning [6].
Methods for solving SDPs have mostly been developed in an analogy to linear programming. Generalised simplex-like algorithms were developed for SDPs [11], but to
the best of our knowledge are currently merely of theoretical interest. The ellipsoid
method works by searching for a feasible point via repeatedly "halving" an ellipsoid
that encloses the affine space of constraint matrices such that the centre of the ellipsoid is a feasible point [7]. However, this method shows poor performance in practice
as the running time usually attains its worst-case bound. A third set of methods
for solving SDPs are interior point methods [14]. These methods minimise a linear
function on convex sets provided the sets are endowed with self-concordant barrier
functions. Since such a barrier function is known for SDPs, interior point methods
are currently the most efficient method for solving SDPs in practice.
Considering the great generality of semidefinite programming and the complexity of
state-of-the-art solution methods it is quite surprising that the forty year old simple
perceptron learning algorithm [12] can be modified so as to solve SDPs. In this
paper we present a combination of the perceptron learning algorithm (PLA) with a
rescaling algorithm (originally developed for LPs [3]) that is able to solve semidefinite
programs in polynomial time. We start with a short introduction into semidefinite
programming and the perceptron learning algorithm in Section 2. In Section 3 we
present our main algorithm together with some performance guarantees, whose proofs
we only sketch due to space restrictions. While our numerical results presented in
Section 4 are very preliminary, they do give insights into the workings of the algorithm
and demonstrate that machine learning may have something to offer to the field of
convex optimisation.
For the rest of the paper we denote matrices and vectors by bold face upper and lower case letters, e.g., $A$ and $x$. We shall use $\bar{x} := x/\|x\|$ to denote the unit length vector in the direction of $x$. The notation $A \succeq 0$ is used to denote $x'Ax \geq 0$ for all $x$, that is, $A$ is positive semidefinite.
2 Learning and Convex Optimisation
2.1 Semidefinite Programming
In semidefinite programming a linear objective function is minimised over the image of an affine transformation of the cone of semidefinite matrices, expressed by linear matrix inequalities (LMI):
$$\text{minimise}_{x \in R^n} \; c'x \quad \text{subject to} \quad F(x) := F_0 + \sum_{i=1}^n x_i F_i \succeq 0, \quad (1)$$
where $c \in R^n$ and $F_i \in R^{m \times m}$ for all $i \in \{0, ..., n\}$. The following proposition shows that semidefinite programs are a direct generalisation of linear programs.
Proposition 1. Every semidefinite program is a linear program with infinitely many
linear constraints.
Proof. Obviously, the objective function in (1) is linear in $x$. For any $u \in R^m$, define the vector $a_u := (u'F_1u, ..., u'F_nu)$. Then, the constraints in (1) can be written as
$$\forall u \in R^m: \; u'F(x)u \geq 0 \quad \Leftrightarrow \quad \forall u \in R^m: \; x'a_u \geq -u'F_0u. \quad (2)$$
This is a linear constraint in $x$ for all $u \in R^m$ (of which there are infinitely many).
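The reduction in this proof is easy to verify numerically. A small sketch (assuming NumPy; the random test data is illustrative) checks that $u'F(x)u = x'a_u + u'F_0u$ for the vector $a_u = (u'F_1u, ..., u'F_nu)$:

```python
import numpy as np

def a_u(u, Fs):
    """Map a direction u to the linear-constraint vector a_u."""
    return np.array([u @ F @ u for F in Fs])

rng = np.random.default_rng(0)
n, m = 3, 4
sym = lambda A: A + A.T
F0 = sym(rng.standard_normal((m, m)))
Fs = [sym(rng.standard_normal((m, m))) for _ in range(n)]
x, u = rng.standard_normal(n), rng.standard_normal(m)

Fx = F0 + sum(xi * F for xi, F in zip(x, Fs))   # F(x)
assert np.isclose(u @ Fx @ u, x @ a_u(u, Fs) + u @ F0 @ u)
```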
Since the objective function is linear in $x$, we can solve an SDP by a sequence of semidefinite constraint satisfaction problems (CSPs) introducing the additional constraint $c'x \leq c_0$ and varying $c_0 \in R$. Moreover, we have the following proposition.
Proposition 2. Any SDP can be solved by a sequence of homogenised semidefinite CSPs of the following form:
$$\text{find} \; x \in R^{n+1} \quad \text{subject to} \quad G(x) := \sum_{i=0}^n x_i G_i \succeq 0.$$
Algorithm 1 Perceptron Learning Algorithm
Require: A (possibly) infinite set $A$ of vectors $a \in R^n$
  Set $t \leftarrow 0$ and $x_t = 0$
  while there exists $a \in A$ such that $x_t'a \leq 0$ do
    $x_{t+1} = x_t + \bar{a}$
    $t \leftarrow t + 1$
  end while
  return $x_t$
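A minimal sketch of Algorithm 1 (assuming NumPy; names illustrative). The constraint set is accessed through an oracle returning some violated $a$ with $x'a \leq 0$, or None when $x$ is feasible; for a finite set this oracle is a linear scan, while Section 3 replaces it with an eigenvector computation.

```python
import numpy as np

def perceptron(oracle, n, max_updates=100000):
    x = np.zeros(n)
    for _ in range(max_updates):
        a = oracle(x)
        if a is None:                      # no violated constraint left
            return x
        x = x + a / np.linalg.norm(a)      # update with the normalised vector
    return None                            # update budget exhausted

def finite_oracle(A):
    """Oracle for an explicit, finite constraint set A (rows are vectors a)."""
    return lambda x: next((a for a in A if x @ a <= 0), None)
```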
Proof. In order to make $F_0$ and $c_0$ dependent on the optimisation variables, we introduce an auxiliary variable $x_0 > 0$; the solution to the original problem is given by $x_0^{-1} \cdot x$. Moreover, we can repose the two linear constraints $c_0x_0 - c'x \geq 0$ and $x_0 > 0$ as an LMI using the fact that a block-diagonal matrix is positive (semi)definite if and only if every block is positive (semi)definite. Thus, the following matrices are sufficient:
$$G_0 = \begin{pmatrix} F_0 & 0 & 0 \\ 0' & c_0 & 0 \\ 0' & 0 & 1 \end{pmatrix}, \qquad G_i = \begin{pmatrix} F_i & 0 & 0 \\ 0' & -c_i & 0 \\ 0' & 0 & 0 \end{pmatrix}.$$
Given an upper and a lower bound on the objective function, repeated bisection can be used to determine the solution in $O(\log\frac{1}{\epsilon})$ steps to accuracy $\epsilon$.
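A sketch (assuming NumPy and SciPy; the block layout follows the reconstruction of the proof above and the function name is illustrative) of this homogenisation, assembling the block-diagonal matrices $G_0, ..., G_n$ from the problem data:

```python
import numpy as np
from scipy.linalg import block_diag

def homogenise(F, c, c0):
    """F = [F_0, ..., F_n]; returns [G_0, ..., G_n] as in the proof above."""
    G0 = block_diag(F[0], [[float(c0)]], [[1.0]])
    Gs = [block_diag(F[i], [[-float(c[i - 1])]], [[0.0]])
          for i in range(1, len(F))]
    return [G0] + Gs
```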
In order to simplify notation, we will assume that $n \to n+1$ and $m \to m+2$ whenever we speak about a semidefinite CSP for an SDP in $n$ variables with $F_i \in R^{m \times m}$.
2.2 Perceptron Learning Algorithm
The perceptron learning algorithm (PLA) [12] is an online procedure which finds a linear separation of a set of points from the origin (see Algorithm 1). In machine learning this algorithm is usually applied to two sets $A_{+1}$ and $A_{-1}$ of points labelled $+1$ and $-1$ by multiplying every data vector $a_i$ by its class label1; the resulting vector $x_t$ (often referred to as the weight vector in perceptron learning) is then read as the normal of a hyperplane which separates the sets $A_{+1}$ and $A_{-1}$.
A remarkable property of the perceptron learning algorithm is that the total number $t$ of updates is independent of the cardinality of $A$ but can be upper bounded simply in terms of the following quantity
$$\gamma(A) := \max_{x \in R^n} \gamma(A, x) := \max_{x \in R^n} \min_{a \in A} \bar{a}'\bar{x}.$$
This quantity is known as the (normalised) margin of $A$ in the machine learning community or as the radius of the feasible region in the optimisation community. It quantifies the radius of the largest ball that can be fitted in the convex region enclosed by all $a \in A$ (the so-called feasible set). Then, the perceptron convergence theorem [10] states that $t \leq \gamma^{-2}(A)$.
For the purpose of this paper we observe that Algorithm 1 solves a linear CSP where
the linear constraints are given by the vectors a ? A. Moreover, by the last argument
we have the following proposition.
Proposition 3. If the feasible set has a positive radius, then the perceptron learning
algorithm solves a linear CSP in finitely many steps.
It is worth mentioning that in the last few decades a series of modified PLAs have been developed (see [2] for a good overview) which mainly aim at guaranteeing
1
Note that sometimes the update equation is given using the unnormalised vector $a$.
Algorithm 2 Rescaling algorithm
Require: A maximal number $T \in N_+$ of steps and a parameter $\sigma \in R_+$
  Set $y$ uniformly at random in $\{z : \|z\| = 1\}$
  for $t = 0, \ldots, T$ do
    Find $a_u$ such that $\bar{y}'\bar{a}_u := u'G(\bar{y})u \,/\, \sqrt{\sum_{j=1}^n (u'G_ju)^2} \leq -\sigma$  ($u \leftarrow$ smallest EV of $G(\bar{y})$)
    if no such $u$ exists then
      Set $\forall i \in \{1, \ldots, n\}: G_i \leftarrow G_i + \bar{y}_i G(\bar{y})$; return $y$
    end if
    $y \leftarrow y - (y'\bar{a}_u)\bar{a}_u$; $t \leftarrow t + 1$
  end for
  return unsolved
not only feasibility of the solution $x_t$ but also a lower bound on $\gamma(A, x_t)$. These guarantees usually come at the price of a slightly larger mistake bound which we shall denote by $M(A, \gamma(A))$, that is, $t \leq M(A, \gamma(A))$.
3 Semidefinite Programming by Perceptron Learning
If we combine Propositions 1, 2 and 3 together with Equation (2) we obtain a perceptron algorithm that sequentially solves SDPs. However, there remain two problems:
1. How do we find a vector $a \in A$ such that $x'a \leq 0$?
2. How can we make the running time of this algorithm polynomial in the description length of the data?2
In order to address the first problem we notice that $A$ in Algorithm 1 is not explicitly given but is defined by virtue of
$$A(G_1, ..., G_n) := \{a_u := (u'G_1u, ..., u'G_nu) \;|\; u \in R^m\}.$$
Hence, finding a vector $a_u \in A$ such that $x'a_u \leq 0$ is equivalent to identifying a vector $u \in R^m$ such that
$$\sum_{i=1}^n x_i u'G_iu = u'G(x)u \leq 0.$$
One possible way of finding such a vector $u$ (and consequently $a_u$) for the current solution $x_t$ in Algorithm 1 is to calculate the eigenvector corresponding to the smallest eigenvalue of $G(x_t)$; if this eigenvalue is positive, the algorithm stops and outputs $x_t$. Note, however, that computationally easier procedures can be applied to find a suitable $u \in R^m$ (see also Section 4).
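A sketch (assuming NumPy; names illustrative) of this eigenvector-based constraint oracle: assemble $G(x)$, take the eigenvector of its smallest eigenvalue, and map it to the violated linear constraint $a_u$:

```python
import numpy as np

def sdp_oracle(Gs):
    def oracle(x):
        Gx = sum(xi * G for xi, G in zip(x, Gs))    # G(x) = sum_i x_i G_i
        vals, vecs = np.linalg.eigh(Gx)             # ascending eigenvalues
        if vals[0] > 0:
            return None                             # G(x) > 0: x is feasible
        u = vecs[:, 0]                              # eigenvector of smallest EV
        return np.array([u @ G @ u for G in Gs])    # a_u with x'a_u <= 0
    return oracle
```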
The second problem requires us to improve the dependency of the runtime from $O(\gamma^{-2})$ to $O(\log(1/\gamma))$. To this end we employ a probabilistic rescaling algorithm (see Algorithm 2) which was originally developed for LPs [3]. The purpose of this algorithm is to enlarge the feasible region (in terms of $\gamma(A(G_1, ..., G_n))$) by a constant factor, on average, which would imply a decrease in the number of updates of the perceptron algorithm exponential in the number of calls to this rescaling algorithm. This is achieved by running Algorithm 2. If the algorithm does not return unsolved, the rescaling procedure on the $G_i$ has the effect that $a_u$ changes into $a_u + (\bar{y}'a_u)\bar{y}$ for every $u \in R^m$. In order to be able to reconstruct the solution $x_t$ to the original problem, whenever we rescale the $G_i$ we need to remember the vector $y$ used for rescaling. In Figure 1 we have shown the effect of rescaling for three linear
2
Note that polynomial runtime is only guaranteed if $\gamma^{-2}(A(G_1, ..., G_n))$ is bounded by a polynomial function of the description length of the data.
Figure 1: Illustration of the rescaling procedure. Shown is the feasible region and one feasible point before (left) and after (right) rescaling with the feasible point.
constraints in $R^3$. The main idea of Algorithm 2 is to find a vector $y$ that is $\sigma$-close to the current feasible region and hence leads to an increase in its radius when used for rescaling. The following property holds for Algorithm 2.
Theorem 1. Assume Algorithm 2 did not return unsolved. Let $\sigma \leq \frac{1}{32n}$, let $\gamma$ be the radius of the feasible set before rescaling and $\gamma'$ be the radius of the feasible set after rescaling, and assume that $\gamma \leq \frac{1}{4n}$. Then
1. $\gamma' \leq \left(1 - \frac{1}{16n}\right)\gamma$ with probability at most $\frac{3}{4}$.
2. $\gamma' \geq \left(1 + \frac{1}{4n}\right)\gamma$ with probability at least $\frac{1}{4}$.
The probabilistic nature of the theorem stems from the fact that the rescaling can only be shown to increase the size of the feasible region if the (random) initial value $y$ already points sufficiently closely to the feasible region. A consequence of this theorem is that, on average, the radius increases by $\lambda = (1 + 1/64n) > 1$. Algorithm 3 combines rescaling and perceptron learning, which results in a probabilistic polynomial runtime algorithm3 which alternates between calls to Algorithm 1 and 2. This algorithm may return infeasible in two cases: either $T_i$ many calls to Algorithm 2 have returned unsolved or $L$ many calls of Algorithm 1 together with rescaling have not returned a solution. Each of these two conditions can either happen because of an "unlucky" draw of $y$ in Algorithm 2 or because $\gamma(A(G_1, ..., G_n))$ is too small. Following the argument in [3] one can show that for $L = \lceil -2048n \cdot \ln(\gamma_{\min}) \rceil$ the total probability of returning infeasible despite $\gamma(A(G_1, ..., G_n)) > \gamma_{\min}$ cannot exceed $\exp(-n)$.
4 Experimental Results
The experiments reported in this section fall into two parts. Our initial aim was
to demonstrate that the method works in practice and to assess its efficacy on a
3
Note that we assume that the optimisation problem in line 3 of Algorithm 2 can be
solved in polynomial time with algorithms such as Newton-Raphson.
Algorithm 3 Positive Definite Perceptron Algorithm
Require: $G_1, \ldots, G_n \in R^{m \times m}$ and maximal number of iterations $L \in N_+$
  Set $B = I_n$
  for $i = 1, \ldots, L$ do
    Call Algorithm 1 for at most $M(A, \frac{1}{4n})$ many updates
    if Algorithm 1 converged then return $Bx$
    Set $\delta_i = \frac{2}{3i^2}$ and $T_i = \lceil \ln(\delta_i)/\ln(\frac{3}{4}) \rceil$
    for $j = 1, \ldots, T_i$ do
      Call Algorithm 2 with $T = 1024n^2\ln(n)$ and $\sigma = \frac{1}{32n}$
      if Algorithm 2 returns $y$ then $B \leftarrow B(I_n + yy')$; goto the outer for-loop
    end for
    return infeasible
  end for
  return infeasible
benchmark example from graph bisection [1].
These experiments would also indicate how competitive the baseline method is when
compared to other solvers. The algorithm was implemented in MATLAB and all of
the experiments were run on 1.7GHz machines. The time taken can be compared
with a standard method SDPT3 [13] partially implemented in C but running under
MATLAB.
We considered benchmark problems arising from semidefinite relaxations to the MAXCUT problems of weighted graphs, which is posed as finding a maximum weight bisection of a graph. The benchmark MAXCUT problems have the following relaxed SDP form (see [8]):
$$\text{minimise}_{x \in R^n} \; 1'x \quad \text{subject to} \quad \underbrace{\tfrac{1}{4}(\text{diag}(C1) - C)}_{F_0} + \underbrace{\text{diag}(x)}_{\sum_i x_iF_i} \succeq 0, \quad (3)$$
where $C \in R^{n \times n}$ is the adjacency matrix of the graph with $n$ vertices.
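A small sketch (assuming NumPy; the function name is illustrative) of how the data of (3) can be assembled for a given adjacency matrix: the constant block $F_0 = \frac{1}{4}(\text{diag}(C1) - C)$, the diagonal indicator matrices $F_i$, and the all-ones objective vector.

```python
import numpy as np

def maxcut_sdp_data(C):
    """Return (c, F0, Fs) for the relaxed MAXCUT SDP of equation (3)."""
    n = C.shape[0]
    F0 = 0.25 * (np.diag(C @ np.ones(n)) - C)
    Fs = [np.diag(e) for e in np.eye(n)]     # F_i = e_i e_i'
    c = np.ones(n)                           # objective: minimise 1'x
    return c, F0, Fs
```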
The benchmark used was "mcp100" provided by SDPLIB 1.2 [1]. For this problem, $n = 100$ and it is known that the optimal value of the objective function equals 226.1574. The baseline method used the bisection approach to identify the critical value of the objective, referred to throughout this section as $c_0$.
Figure 2 (left) shows a plot of the time per iteration against the value of c0 for the
first four iterations of the bisection method. As can be seen from the plots the time
taken by the algorithm for each iteration is quite long, with the time of the fourth
iteration being around 19,000 seconds. The initial value of 999 for c0 was found
without an objective constraint and converged within 0.012 secs. The bisection then
started with the lower (infeasible) value of 0 and the upper value of 999. Iteration 1
was run with c0 = 499.5, but the feasible solution had an objective value of 492. This
was found in just 617 secs. The second iteration used a value of c0 = 246 slightly
above the optimum of 226. The third iteration was infeasible but since it was quite
far from the optimum, the algorithm was able to deduce this fact quite quickly. The
final iteration was also infeasible, but much closer to the optimal value. The running
time suffered correspondingly taking 5.36 hours. If we were to continue the next
iteration would also be infeasible but closer to the optimum and so would take even
longer.
The first experiment demonstrated several things. First, that the method does indeed work as predicted; secondly, that the running times are very far from being
Figure 2: (Left) Four iterations of the bisection method showing time taken per iteration (outer for-loop in Algorithm 3) against the value of the objective constraint. (Right) Decay of the attained objective function value while iterating through Algorithm 3 with a non-zero threshold of $\tau = 500$.
competitive (SDPT3 takes under 12 seconds to solve this problem) and thirdly that the running times increase as the value of $c_0$ approaches the optimum, with those iterations that must prove infeasibility being more costly than those that find a solution.
The final observation prompted our first adaptation of the base algorithm. Rather than perform the search using the bisection method we implemented a non-zero threshold on the objective constraint (see the while-statement in Algorithm 1). The value of this threshold is denoted $\tau$, following the notation introduced in [9].
Using a value of $\tau = 500$ ensured that when a feasible solution is found, its objective value is significantly below that of the objective constraint $c_0$. Figure 2 (right) shows the values of $c_0$ as a function of the outer for-loops (iterations); the algorithm eventually approached its estimate of the optimal value at 228.106. This is within 1% of the optimum, though of course iterations could have been continued. Despite the clear convergence, using this approach the running time to an accurate estimate of the solution is still prohibitive because overall the algorithm took approximately 60 hours of CPU time to find its solution.
A profile of the execution, however, revealed that up to 93% of the execution time is spent in the eigenvalue decomposition to identify $u$. Observe that we do not need a minimal eigenvector to perform an update, simply a vector $u$ satisfying
$$u'G(x)u < 0 \quad (4)$$
Cholesky decomposition will either return $u$ satisfying (4) or it will converge, indicating that $G(x)$ is psd and Algorithm 1 has converged.
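A sketch (assuming NumPy; names illustrative) of this cheaper test. A successful Cholesky factorisation certifies that $G(x)$ is positive definite and the perceptron has converged; on failure, this simple version falls back to an eigendecomposition to produce a witness $u$, whereas a production implementation would recover a direction of negative curvature from the partial factorisation itself and avoid the eigensolve entirely.

```python
import numpy as np

def psd_or_witness(Gx):
    """Return None if G(x) is positive definite, else u with u'G(x)u <= 0."""
    try:
        np.linalg.cholesky(Gx)
        return None                      # factorisation succeeded: converged
    except np.linalg.LinAlgError:
        vals, vecs = np.linalg.eigh(Gx)
        return vecs[:, 0]                # u'G(x)u = vals[0] <= 0
```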
5 Conclusions
Semidefinite programming has interesting applications in machine learning. In turn,
we have shown how a simple learning algorithm can be modified to solve higher
order convex optimisation problems such as semidefinite programs. Although the
experimental results given here suggest the approach is far from computationally
competitive, the insights gained may lead to effective algorithms in concrete applications in the same way that for example SMO is a competitive algorithm for solving
quadratic programming problems arising from support vector machines. While the
optimisation setting leads to the somewhat artificial and inefficient bisection method
the positive definite perceptron algorithm excels at solving positive definite CSPs
as found, e.g., in problems of transformation invariant pattern recognition as solved
by Semidefinite Programming Machines [6]. In future work it will be of interest to
consider the combined primal-dual problem at a predefined level of granularity so as to avoid the necessity of bisection search.
Acknowledgments We would like to thank J. Kandola, J. Dunagan, and A. Ambroladze for interesting discussions. This work was supported by EPSRC under grant
number GR/R55948 and by Microsoft Research Cambridge.
References
[1] B. Borchers. SDPLIB 1.2, A library of semidefinite programming test problems.
Optimization Methods and Software, 11(1):683-690, 1999.
[2] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification and Scene
Analysis. John Wiley and Sons, New York, 2001. Second edition.
[3] J. Dunagan and S. Vempala. A polynomial-time rescaling algorithm for solving
linear programs. Technical Report MSR-TR-02-92, Microsoft Research, 2002.
[4] F. Glineur. Pattern separation via ellipsoids and conic programming. Mémoire de D.E.A., Faculté Polytechnique de Mons, Mons, Belgium, Sept. 1998.
[5] T. Graepel. Kernel matrix completion by semidefinite programming. In
J. R. Dorronsoro, editor, Proceedings of the International Conference on Neural Networks, ICANN2002, Lecture Notes in Computer Science, pages 694-699.
Springer, 2002.
[6] T. Graepel and R. Herbrich. Invariant pattern recognition by Semidefinite Programming Machines. In S. Thrun, L. Saul, and B. Schölkopf, editors, Advances
in Neural Information Processing Systems 16. MIT Press, 2004.
[7] M. Grötschel, L. Lovász, and A. Schrijver. Geometric Algorithms and Combinatorial Optimization, volume 2 of Algorithms and Combinatorics. Springer-Verlag, 1988.
[8] C. Helmberg. Semidefinite programming for combinatorial optimization. Technical Report ZR-00-34, Konrad-Zuse-Zentrum für Informationstechnik Berlin, Oct. 2000.
[9] Y. Li, H. Zaragoza, R. Herbrich, J. Shawe-Taylor, and J. Kandola. The perceptron algorithm with uneven margins. In Proceedings of the International
Conference of Machine Learning (ICML'2002), pages 379-386, 2002.
[10] A. B. J. Novikoff. On convergence proofs on perceptrons. In Proceedings of the
Symposium on the Mathematical Theory of Automata, volume 12, pages 615-622.
Polytechnic Institute of Brooklyn, 1962.
[11] G. Pataki. Cone-LP?s and semi-definite programs: facial structure, basic solutions, and the symplex method. Technical Report GSIA, Carnegie Mellon
University, 1995.
[12] F. Rosenblatt. The perceptron: A probabilistic model for information storage
and organization in the brain. Psychological Review, 65(6):386-408, 1958.
[13] K. C. Toh, M. Todd, and R. Tütüncü. SDPT3 -- a MATLAB software package for semidefinite programming. Technical Report TR1177, Cornell University, 1996.
[14] L. Vandenberghe and S. Boyd. Semidefinite programming. SIAM Review,
38(1):49-95, 1996.
1,579 | 2,435 | Efficient Multiscale Sampling from
Products of Gaussian Mixtures
Alexander T. Ihler, Erik B. Sudderth, William T. Freeman, and Alan S. Willsky
Department of Electrical Engineering and Computer Science
Massachusetts Institute of Technology
[email protected], [email protected], [email protected], [email protected]
Abstract
The problem of approximating the product of several Gaussian mixture
distributions arises in a number of contexts, including the nonparametric
belief propagation (NBP) inference algorithm and the training of product of experts models. This paper develops two multiscale algorithms
for sampling from a product of Gaussian mixtures, and compares their
performance to existing methods. The first is a multiscale variant of previously proposed Monte Carlo techniques, with comparable theoretical
guarantees but improved empirical convergence rates. The second makes
use of approximate kernel density evaluation methods to construct a fast
approximate sampler, which is guaranteed to sample points to within a
tunable parameter of their true probability. We compare both multiscale samplers on a set of computational examples motivated by NBP,
demonstrating significant improvements over existing methods.
1 Introduction
Gaussian mixture densities are widely used to model complex, multimodal relationships.
Although they are most commonly associated with parameter estimation procedures like
the EM algorithm, kernel or Parzen window nonparametric density estimates [1] also take
this form for Gaussian kernel functions. Products of Gaussian mixtures naturally arise
whenever multiple sources of statistical information, each of which is individually modeled by a mixture density, are combined. For example, given two independent observations $y_1, y_2$ of an unknown variable $x$, the joint likelihood $p(y_1, y_2|x) = p(y_1|x)p(y_2|x)$ is
equal to the product of the marginal likelihoods. In a recently proposed nonparametric belief propagation (NBP) [2, 3] inference algorithm for graphical models, Gaussian mixture
products are the mechanism by which nodes fuse information from different parts of the
graph. Product densities also arise in the product of experts (PoE) [4] framework, in which
complex densities are modeled as the product of many ?local? constraint densities.
The primary difficulty associated with products of Gaussian mixtures is computational. The
product of d mixtures of N Gaussians is itself a Gaussian mixture with N d components.
In many practical applications, it is infeasible to explicitly construct these components,
and therefore intractable to build a smaller approximating mixture using the EM algorithm.
Mixture products are thus typically approximated by drawing samples from the product
density. These samples can be used to either form a Monte Carlo estimate of a desired
expectation [4], or construct a kernel density estimate approximating the true product [2].
Although exact sampling requires exponential cost, Gibbs sampling algorithms may often
be used to produce good approximate samples [2, 4].
When accurate approximations are required, existing methods for sampling from products
of Gaussian mixtures often require a large computational cost. In particular, sampling is
the primary computational burden for both NBP and PoE. This paper develops a pair of
new sampling algorithms which use multiscale, KD-Tree [5] representations to improve
accuracy and reduce computation. The first is a multiscale variant of existing Gibbs samplers [2, 4] with improved empirical convergence rate. The second makes use of approximate kernel density evaluation methods [6] to construct a fast $\epsilon$-exact sampler which, in
contrast with existing methods, is guaranteed to sample points to within a tunable parameter of their true probability. Following our presentation of the algorithms, we demonstrate
their performance on a set of computational examples motivated by NBP and PoE.
2 Products of Gaussian Mixtures
Let $\{p_1(x), ..., p_d(x)\}$ denote a set of $d$ mixtures of $N$ Gaussian densities, where
$$p_i(x) = \sum_{l_i} w_{l_i} N(x; \mu_{l_i}, \Lambda_i) \quad (1)$$
Here, $l_i$ are a set of labels for the $N$ mixture components in $p_i(x)$, $w_{l_i}$ are the normalized component weights, and $N(x; \mu_{l_i}, \Lambda_i)$ denotes a normalized Gaussian density with mean $\mu_{l_i}$ and diagonal covariance $\Lambda_i$. For simplicity, we assume that all mixtures are of equal size $N$, and that the variances $\Lambda_i$ are uniform within each mixture, although the algorithms which follow may be readily extended to problems where this is not the case. Our goal is to efficiently sample from the $N^d$ component mixture density $p(x) \propto \prod_{i=1}^d p_i(x)$.
2.1 Exact Sampling
Sampling from the product density can be decomposed into two steps: randomly select one of the product density's $N^d$ components, and then draw a sample from the corresponding Gaussian. Let each product density component be labeled as $L = [l_1, ..., l_d]$, where $l_i$ labels one of the $N$ components of $p_i(x)$.1 The relative weight of component $L$ is given by
$$w_L = \frac{\prod_{i=1}^d w_{l_i} N(x; \mu_{l_i}, \Lambda_i)}{N(x; \mu_L, \Lambda_L)} \qquad \Lambda_L^{-1} = \sum_{i=1}^d \Lambda_i^{-1} \qquad \Lambda_L^{-1}\mu_L = \sum_{i=1}^d \Lambda_i^{-1}\mu_{l_i} \quad (2)$$
where $\mu_L, \Lambda_L$ are the mean and variance of product component $L$, and this equation may be evaluated at any $x$ (the value $x = \mu_L$ may be numerically convenient). To form the product density, these weights are normalized by the weight partition function $Z := \sum_L w_L$.
Determining $Z$ exactly takes $O(N^d)$ time, and given this constant we can draw $N$ samples from the distribution in $O(N^d)$ time and $O(N)$ storage. This is done by drawing and sorting $N$ uniform random variables on the interval $[0, 1]$, and then computing the cumulative distribution of $p(L) = w_L/Z$ to determine which, if any, samples are drawn from each $L$.
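For small $d$ this procedure is easy to implement directly. A sketch (assuming NumPy and one-dimensional kernels; the helper names are illustrative) that enumerates all $N^d$ labels, computes the weights of equation (2) at $x = \mu_L$, and samples:

```python
import numpy as np
from itertools import product

def norm_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def exact_product_sample(rng, mixtures, n_samples):
    """mixtures: list of (weights, means, variance) triples, one per p_i."""
    var_L = 1.0 / sum(1.0 / v for _, _, v in mixtures)
    mus, wts = [], []
    for L in product(*[range(len(w)) for w, _, _ in mixtures]):
        mu_L = var_L * sum(m[l] / v for l, (_, m, v) in zip(L, mixtures))
        num = np.prod([w[l] * norm_pdf(mu_L, m[l], v)
                       for l, (w, m, v) in zip(L, mixtures)])
        wts.append(num / norm_pdf(mu_L, mu_L, var_L))   # w_L of equation (2)
        mus.append(mu_L)
    wts = np.asarray(wts) / np.sum(wts)                 # normalise by Z
    idx = rng.choice(len(wts), size=n_samples, p=wts)
    return rng.normal(np.asarray(mus)[idx], np.sqrt(var_L))
```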
2.2 Importance Sampling
Importance sampling is a Monte Carlo method for approximately sampling from (or computing expectations of) an intractable distribution $p(x)$, using a proposal distribution $q(x)$ for which sampling is feasible [7]. To draw $N$ samples from $p(x)$, an importance sampler draws $M \geq N$ samples $x_i \sim q(x)$, and assigns the $i$th sample weight $w_i \propto p(x_i)/q(x_i)$. The weights are then normalized by $Z = \sum_i w_i$, and $N$ samples are drawn (with replacement) from the discrete distribution $\hat{p}(x_i) = w_i/Z$.
1
Throughout this paper, we use lowercase letters ($l_i$) to label input density components, and capital letters ($L = [l_1, ..., l_d]$) to label the corresponding product density components.
Figure 1: Two possible Gibbs samplers for a product of 2 mixtures of 5 Gaussians. Arrows show the weights assigned to each label. Top left: At each iteration, one label is sampled conditioned on the other density's current label. Bottom left: Alternate between sampling a data point $X$ conditioned on the current labels, and resampling all labels in parallel. Right: After $\kappa$ iterations, both Gibbs samplers identify mixture labels corresponding to a single kernel (solid) in the product density (dashed).
For products of Gaussian mixtures, we consider two different proposal distributions. The first, which we refer to as mixture importance sampling, draws each sample by randomly selecting one of the $d$ input mixtures, and sampling from its $N$ components ($q(x) = p_i(x)$). The remaining $d-1$ mixtures then provide the importance weight ($w_i = \prod_{j \neq i} p_j(x_i)$). This is similar to the method used to combine density trees in [8]. Alternatively, we can approximate each input mixture $p_i(x)$ by a single Gaussian density $q_i(x)$, and choose $q(x) \propto \prod_i q_i(x)$. We call this procedure Gaussian importance sampling.
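A sketch (assuming NumPy, one-dimensional kernels; names illustrative) of the mixture importance sampler just described: propose from one randomly chosen input mixture and weight each draw by the remaining $d-1$ mixtures.

```python
import numpy as np

def norm_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def mixture_importance_sample(rng, mixtures, n_out, n_prop):
    xs, wts = np.empty(n_prop), np.ones(n_prop)
    for s in range(n_prop):
        i = rng.integers(len(mixtures))          # proposal mixture index
        w, mu, var = mixtures[i]
        l = rng.choice(len(w), p=w / np.sum(w))
        xs[s] = rng.normal(mu[l], np.sqrt(var))
        for j, (wj, muj, varj) in enumerate(mixtures):
            if j != i:                           # weight by the other mixtures
                wts[s] *= np.sum(wj * norm_pdf(xs[s], muj, varj))
    wts /= wts.sum()
    return xs[rng.choice(n_prop, size=n_out, p=wts)]
```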
2.3 Gibbs Sampling
Sampling from Gaussian mixture products is difficult because the joint distribution over product density labels, as defined by equation (2), is complicated. However, conditioned on the labels of all but one mixture, we can compute the conditional distribution over the remaining label in $O(N)$ operations, and easily sample from it. Thus, we may use a Gibbs sampler [9] to draw asymptotically unbiased samples, as illustrated in Figure 1. At each iteration, the labels $\{l_j\}_{j \neq i}$ for $d-1$ of the input mixtures are fixed, and the $i$th label is sampled from the corresponding conditional density. The newly chosen $l_i$ is then fixed, and another label is updated. After a fixed number of iterations $\kappa$, a single sample is drawn from the product mixture component identified by the final labels. To draw $N$ samples, the Gibbs sampler requires $O(d\kappa N^2)$ operations; see [2] for further details.
The previously described sequential Gibbs sampler defines an iteration over the labels of the input mixtures. Another possibility uses the fact that, given a data point $\bar{x}$ in the product density space, the $d$ input mixture labels are conditionally independent [4]. Thus, one can define a parallel Gibbs sampler which alternates between sampling a data point conditioned on the current input mixture labels, and parallel sampling of the mixture labels given the current data point (see Figure 1). The complexity of this sampler is also $O(d\kappa N^2)$.
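A sketch (assuming NumPy, one-dimensional kernels; names illustrative) of the sequential sampler: the product of the $d-1$ fixed kernels is a single Gaussian, so each conditional label distribution is available in closed form.

```python
import numpy as np

def norm_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def sequential_gibbs_sample(rng, mixtures, n_iters):
    d = len(mixtures)
    labels = [rng.integers(len(w)) for w, _, _ in mixtures]
    for _ in range(n_iters):
        for i, (wi, mui, vari) in enumerate(mixtures):
            # Gaussian formed by the d-1 currently fixed kernels
            prec = sum(1.0 / mixtures[j][2] for j in range(d) if j != i)
            mean = sum(mixtures[j][1][labels[j]] / mixtures[j][2]
                       for j in range(d) if j != i) / prec
            # conditional weights over the N components of mixture i
            cw = wi * norm_pdf(mui, mean, vari + 1.0 / prec)
            labels[i] = rng.choice(len(wi), p=cw / cw.sum())
    # finally, draw from the product kernel picked out by the labels
    var_L = 1.0 / sum(1.0 / v for _, _, v in mixtures)
    mu_L = var_L * sum(m[l] / v for l, (_, m, v) in zip(labels, mixtures))
    return rng.normal(mu_L, np.sqrt(var_L))
```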
3 KD-Trees
A KD-tree is a hierarchical representation of a point set which caches statistics of subsets of the data, thereby making later computations more efficient [5]. KD-trees are typically binary trees constructed by successively splitting the data along cardinal axes, grouping points by spatial location. We use the variable $l$ to denote the label of a leaf node (the index of a single point), and $\mathbf{l}$ to denote a set of leaf labels summarized at a node of the KD-tree.
Figure 2: Two KD-tree representations of the same one-dimensional point set. (a) Each node maintains a bounding box (label sets $\mathbf{l}$ are shown in braces). (b) Each node maintains mean and variance statistics.
Figure 2 illustrates one-dimensional KD-trees which cache different sets of statistics. The
first (Figure 2(a)) maintains bounding boxes around the data, allowing efficient computation of distances; similar trees are used in Section 4.2. Also shown in this figure are the
label sets l for each node. The second (Figure 2(b)) precomputes means and variances of
point clusters, providing a multi-scale Gaussian mixture representation used in Section 4.1.
3.1 Dual Tree Evaluation
Multiscale representations have been effectively applied to kernel density estimation problems. Given a mixture of $N$ Gaussians with means $\{\mu_i\}$, we would like to evaluate
$$p(x_j) = \sum_i w_i N(x_j; \mu_i, \Lambda) \quad (3)$$
at a given set of $M$ points $\{x_j\}$. By representing the means $\{\mu_i\}$ and evaluation points $\{x_j\}$ with two different KD-trees, it is possible to define a dual-tree recursion [6] which is much faster than direct evaluation of all $NM$ kernel-point pairs.
Figure 3: Two KD-tree representations may be combined to efficiently bound the maximum ($D_{max}$) and minimum ($D_{min}$) pairwise distances between subsets of the summarized points (bold).
The dual-tree algorithm uses bounding box statistics (as in Figure 2(a)) to approximately evaluate subsets of the data. For any set of labels in the density tree $\mathbf{l}_\mu$ and location tree $\mathbf{l}_x$, one may use pairwise distance bounds (see Figure 3) to find upper and lower bounds on
$$\sum_{i \in \mathbf{l}_\mu} w_i N(x_j; \mu_i, \Lambda) \quad \text{for any } j \in \mathbf{l}_x \quad (4)$$
When the distance bounds are sufficiently tight, the sum in equation (4) may be approximated by a constant, asymptotically allowing evaluation in $O(N)$ operations [6].
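The bounds themselves reduce to simple interval arithmetic. A one-dimensional sketch (assuming NumPy; names illustrative): given the bounding boxes of two label sets, bound the pairwise distances $D_{min}, D_{max}$ of Figure 3 and hence the Gaussian kernel values appearing in (4).

```python
import numpy as np

def kernel_value_bounds(box_a, box_b, var):
    """box = (lo, hi); returns (K_min, K_max) over all point pairs."""
    lo_a, hi_a = box_a
    lo_b, hi_b = box_b
    d_min = max(0.0, lo_a - hi_b, lo_b - hi_a)   # closest approach
    d_max = max(hi_b - lo_a, hi_a - lo_b)        # farthest pair
    gauss = lambda d: np.exp(-0.5 * d * d / var) / np.sqrt(2 * np.pi * var)
    return gauss(d_max), gauss(d_min)            # kernel decreases with d
```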
4 Sampling using Multiscale Representations
4.1 Gibbs Sampling on KD-Trees
Although the pair of Gibbs samplers discussed in Section 2.3 are often effective, they sometimes require a very large number of iterations to produce accurate samples. The most difficult densities are those for which there are multiple widely separated modes, each of which
is associated with disjoint subsets of the input mixture labels. In this case, conditioned
on a set of labels corresponding to one mode, it is very unlikely that a label or data point
corresponding to a different mode will be sampled, leading to slow convergence.
Similar problems have been observed with Gibbs samplers on Markov random fields [9].
In these cases, convergence can often be accelerated by constructing a series of ?coarser
scale? approximate models in which the Gibbs sampler can move between modes more easily [10]. The primary challenge in developing these algorithms is to determine procedures
for constructing accurate coarse scale approximations. For Gaussian mixture products,
KD-trees provide a simple, intuitive, and easily constructed set of coarser scale models.
As in Figure 2(b), each level of the KD-tree stores the mean and variance (biased by kernel size) of the summarized leaf nodes. We start at the same coarse scale for all input mixtures, and perform standard Gibbs sampling on that scale's summary Gaussians. After several iterations, we condition on a data sample (as in the parallel Gibbs sampler of Section 2.3) to infer labels at the next finer scale. Intuitively, by gradually moving from coarse to fine scales, multiscale sampling can better explore all of the product density's important modes.
As the number of sampling iterations approaches infinity, multiscale samplers have the
same asymptotic properties as standard Gibbs samplers. Unfortunately, there is no guarantee that multiscale sampling will improve performance. However, our simulation results
indicate that it is usually very effective (see Section 5).
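A coarse-to-fine sketch (assuming NumPy, one-dimensional kernels; the grouping scheme and helper names are illustrative simplifications of the KD-tree of Figure 2(b), not the paper's implementation). At scale $s$ each input mixture is summarised by $2^s$ groups with cached weights, means, and variances; a few parallel-Gibbs sweeps are run at each scale, conditioning on a data sample to move to the next.

```python
import numpy as np

def norm_pdf(x, mu, var):
    return np.exp(-0.5 * (x - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)

def summarize(w, mu, var, scale):
    """Group sorted kernels into 2^scale clusters; return cached statistics."""
    groups = np.array_split(np.argsort(mu), 2 ** scale)
    gw = np.array([w[g].sum() for g in groups])
    gm = np.array([np.average(mu[g], weights=w[g]) for g in groups])
    gv = np.array([np.average((mu[g] - m) ** 2, weights=w[g]) + var
                   for g, m in zip(groups, gm)])
    return gw, gm, gv

def multiscale_gibbs_sample(rng, mixtures, max_scale, sweeps=3):
    # assumes 2**max_scale <= N so that every group is non-empty
    x = 0.0
    for s in range(max_scale + 1):
        coarse = [summarize(w, mu, var, s) for w, mu, var in mixtures]
        for _ in range(sweeps):
            # labels are conditionally independent given the data point x
            chosen = []
            for gw, gm, gv in coarse:
                cw = gw * norm_pdf(x, gm, gv)
                chosen.append(rng.choice(len(gw), p=cw / cw.sum()))
            # resample x from the product of the chosen summary Gaussians
            prec = sum(1.0 / c[2][l] for c, l in zip(coarse, chosen))
            mean = sum(c[1][l] / c[2][l] for c, l in zip(coarse, chosen)) / prec
            x = rng.normal(mean, np.sqrt(1.0 / prec))
    return x
```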
4.2 Epsilon-Exact Sampling using KD-Trees
In this section, we use KD-trees to efficiently compute an approximation to the partition function $Z$, in a manner similar to the dual tree evaluation algorithm of [6] (see Section 3.1). This leads to an $\epsilon$-exact sampler for which a label $L = [l_1, ..., l_d]$, with true probability $p_L$, is guaranteed to be sampled with some probability $\hat{p}_L \in [p_L - \epsilon, p_L + \epsilon]$. We denote subsets of labels in the input densities with lowercase script ($\mathbf{l}_i$), and sets of labels in the product density by $\mathbf{L} = \mathbf{l}_1 \times \cdots \times \mathbf{l}_d$. The approximate sampling procedure is similar to the exact sampler of Section 2.1. We first construct KD-tree representations of each input density (as in Figure 2(a)), and use a multi-tree recursion to approximate the partition function $\hat{Z} = \sum \tilde{w}_L$ by summarizing sets of labels $\mathbf{L}$ where possible. Then, we compute the cumulative distribution of the sets of labels, giving each label set $\mathbf{L}$ probability $\tilde{w}_\mathbf{L}/\hat{Z}$.
4.2.1 Approximate Evaluation of the Weight Partition Function
We first note that the weight function (equation (2)) can be rewritten using terms which involve only pairwise distances (the quotient is computed elementwise):
$$w_L = \prod_{j=1}^d w_{l_j} \cdot \prod_{(l_i, l_{j>i})} N(\mu_{l_i}; \mu_{l_j}, \Lambda_{(i,j)}) \quad \text{where} \quad \Lambda_{(i,j)} = \frac{\Lambda_i\Lambda_j}{\Lambda_L} \quad (5)$$
This equation may be divided into two parts: a weight contribution $\prod_{i=1}^d w_{l_i}$, and a distance contribution (which we denote by $K_L$) expressed in terms of the pairwise distances between kernel centers. We use the KD-trees' distance bounds to compute bounds on each of these pairwise distance terms for a collection of labels $\mathbf{L} = \mathbf{l}_1 \times \cdots \times \mathbf{l}_d$. The product of the upper (lower) pairwise bounds is itself an upper (lower) bound on the total distance contribution for any label $L$ within the set; denote these bounds by $K_\mathbf{L}^+$ and $K_\mathbf{L}^-$, respectively.2
By using the mean $\bar{K}_\mathbf{L} = \frac{1}{2}(K_\mathbf{L}^+ + K_\mathbf{L}^-)$ to approximate $K_L$, we incur a maximum error $\frac{1}{2}(K_\mathbf{L}^+ - K_\mathbf{L}^-)$ for any label $L \in \mathbf{L}$. If this error is less than $Z\epsilon$ (which we ensure by comparing to a running lower bound $Z_{min}$ on $Z$), we treat it as constant over the set $\mathbf{L}$ and approximate the contribution to $Z$ by
$$\sum_{L \in \mathbf{L}} \tilde{w}_L = \bar{K}_\mathbf{L} \sum_{L \in \mathbf{L}} \prod_i w_{l_i} = \bar{K}_\mathbf{L} \prod_i \Big(\sum_{l_i \in \mathbf{l}_i} w_{l_i}\Big) \quad (6)$$
This is easily calculated using cached statistics of the weight contained in each set. If the error is larger than $Z\epsilon$, we need to refine at least one of the label sets; we use a heuristic to make this choice. This procedure is summarized in Algorithm 1.
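The accept-or-refine decision at the heart of this recursion is a few lines. A sketch (names illustrative) matching steps 3(a)-(b) of Algorithm 1 below: given the kernel-value bounds for a block and the cached weight sums of its label sets, either return the block's summarised contribution or signal that a set must be refined.

```python
def approx_block(K_min, K_max, set_weight_sums, Z_min, eps):
    """set_weight_sums: cached values w_l = sum of w over each label set l."""
    if 0.5 * (K_max - K_min) > Z_min * eps:
        return None                           # too coarse: refine a label set
    W = 1.0
    for w_sum in set_weight_sums:
        W *= w_sum                            # prod_i (sum_{l_i in l_i} w_l_i)
    w_block = 0.5 * (K_max + K_min) * W       # block contribution, eq. (6)
    return w_block, Z_min + K_min * W         # and the updated lower bound
```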
2
We can also use multipole methods such as the Fast Gauss Transform [11] to efficiently compute alternate, potentially tighter bounds on the pairwise values.
MultiTree([$\mathbf{l}_1, ..., \mathbf{l}_d$])
1. For each pair of distributions $(i, j > i)$, use their bounding boxes to compute
   (a) $K_{max}^{(i,j)} \geq \max_{l_i \in \mathbf{l}_i, l_j \in \mathbf{l}_j} N(x_{l_i} - x_{l_j}; 0, \Lambda_{(i,j)})$
   (b) $K_{min}^{(i,j)} \leq \min_{l_i \in \mathbf{l}_i, l_j \in \mathbf{l}_j} N(x_{l_i} - x_{l_j}; 0, \Lambda_{(i,j)})$
2. Find $K_{max} = \prod_{(i,j>i)} K_{max}^{(i,j)}$ and $K_{min} = \prod_{(i,j>i)} K_{min}^{(i,j)}$
3. If $\frac{1}{2}(K_{max} - K_{min}) \leq Z_{min}\epsilon$, approximate this combination of label sets:
   (a) $\tilde{w}_\mathbf{L} = \frac{1}{2}(K_{max} + K_{min}) \prod_i (w_{\mathbf{l}_i})$, where $w_{\mathbf{l}_i} = \sum_{l_i \in \mathbf{l}_i} w_{l_i}$ is cached by the KD-trees
   (b) $Z_{min} = Z_{min} + K_{min} \prod_i (w_{\mathbf{l}_i})$
   (c) $\hat{Z} = \hat{Z} + \tilde{w}_\mathbf{L}$
4. Otherwise, refine one of the label sets:
   (a) Find $\arg\max_{(i,j)} K_{max}^{(i,j)}/K_{min}^{(i,j)}$ such that range($\mathbf{l}_i$) $\geq$ range($\mathbf{l}_j$).
   (b) Call recursively:
      i. MultiTree([$\mathbf{l}_1, ...,$ Nearer(Left($\mathbf{l}_i$), Right($\mathbf{l}_i$), $\mathbf{l}_j$)$, ..., \mathbf{l}_d$])
      ii. MultiTree([$\mathbf{l}_1, ...,$ Farther(Left($\mathbf{l}_i$), Right($\mathbf{l}_i$), $\mathbf{l}_j$)$, ..., \mathbf{l}_d$])
   where Nearer (Farther) returns the nearer (farther) of its first two arguments to the third.
Algorithm 1: Recursive multi-tree algorithm for approximately evaluating the partition function $Z$ of the product of $d$ Gaussian mixture densities represented by KD-trees. $Z_{min}$ denotes a running lower bound on the partition function, while $\hat{Z}$ is the current estimate. Initialize $Z_{min} = \hat{Z} = 0$.
Given the final partition function estimate $\hat{Z}$, repeat Algorithm 1 with the following modifications:
3. (c) If $\hat{c} \leq \hat{Z}u_j < \hat{c} + \tilde{w}_\mathbf{L}$ for any $j$, draw $L \in \mathbf{L}$ by sampling $l_i \in \mathbf{l}_i$ with weight $w_{l_i}/w_{\mathbf{l}_i}$
3. (d) $\hat{c} = \hat{c} + \tilde{w}_\mathbf{L}$
Algorithm 2: Recursive multi-tree algorithm for approximate sampling. $\hat{c}$ denotes the cumulative sum of weights $\tilde{w}_\mathbf{L}$. Initialize by sorting $N$ uniform $[0, 1]$ samples $\{u_j\}$, and set $Z_{min} = \hat{c} = 0$.
Note that all of the quantities required by this algorithm may be stored within the KD-trees, avoiding searches over the sets $\mathbf{l}_i$. At the algorithm's termination, the total error is bounded by
$$|Z - \hat{Z}| \leq \sum_L |w_L - \tilde{w}_L| \leq \sum_\mathbf{L} \frac{1}{2}(K_\mathbf{L}^+ - K_\mathbf{L}^-) \prod_i w_{\mathbf{l}_i} \leq Z\epsilon \sum_\mathbf{L} \prod_i w_{\mathbf{l}_i} \leq Z\epsilon \quad (7)$$
where the last inequality follows because each input mixture's weights are normalized. This guarantees that our estimate $\hat{Z}$ is within a fractional tolerance $\epsilon$ of its true value.
4.2.2 Approximate Sampling from the Cumulative Distribution
To use the partition function estimate $\hat{Z}$ for approximate sampling, we repeat the approximation process in a manner similar to the exact sampler: draw $N$ sorted uniform random variables, and then locate these samples in the cumulative distribution. We do not explicitly construct the cumulative distribution, but instead use the same approximate partial weight sums used to determine $\hat{Z}$ (see equation (6)) to find the block of labels $\mathbf{L} = \mathbf{l}_1 \times \cdots \times \mathbf{l}_d$ associated with each sample. Since all labels $L \in \mathbf{L}$ within this block have approximately equal distance contribution $K_L \approx \bar{K}_\mathbf{L}$, we independently sample a label $l_i$ within each set $\mathbf{l}_i$ proportionally to the weight $w_{l_i}$.
This procedure is shown in Algorithm 2. Note that, to be consistent about when approximations are made and thus produce weights $\tilde{w}_L$ which still sum to $\hat{Z}$, we repeat the procedure for computing $\hat{Z}$ exactly, including recomputing the running lower bound $Z_{min}$. This algorithm is guaranteed to sample each label $L$ with probability $\hat{p}_L \in [p_L - \epsilon, p_L + \epsilon]$:
    |p̂_L − p_L| = | ŵ_ℒ/Ẑ − w_L/Z | ≤ 2ε/(1 − ε) = ε̃    (8)

Proof: From our bounds on the error of K̄_ℒ,
| w_L/Z − ŵ_ℒ/Z | = |K_L − K̄_ℒ| (Π_i w(l_i)) / Z ≤ ε (Π_i w(l_i)) ≤ ε, and
| ŵ_ℒ/Z − ŵ_ℒ/Ẑ | = (ŵ_ℒ/Ẑ) |1 − Ẑ/Z| ≤ (ŵ_ℒ/Ẑ) |1 − 1/(1−ε)| ≤ ε(1+ε)/(1−ε).
Thus, the estimated probability of choosing label L has at most error
| w_L/Z − ŵ_ℒ/Ẑ | ≤ | w_L/Z − ŵ_ℒ/Z | + | ŵ_ℒ/Z − ŵ_ℒ/Ẑ | ≤ 2ε/(1−ε).
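The sketch below illustrates the cumulative-distribution step under a simplifying assumption: the
(ŵ_ℒ, nodes) blocks accepted by the Algorithm 1 recursion are recorded in visit order, rather than
recomputed on the fly as Algorithm 2 does. It reuses the KDNode fields from the previous sketch;
approx_sample and draw_label are hypothetical names.

import numpy as np

def approx_sample(blocks, Z_hat, N, rng=np.random.default_rng(0)):
    # blocks: list of (w_hat, nodes) pairs in the order the recursion accepted them;
    # Z_hat: the final estimate they sum to.
    u = np.sort(rng.uniform(size=N)) * Z_hat     # sorted targets in [0, Z_hat)
    out, c_hat, k = [], 0.0, 0
    for w_hat, nodes in blocks:
        c_hat += w_hat                           # cumulative sum of block weights
        while k < N and u[k] < c_hat:
            out.append([draw_label(n, rng) for n in nodes])  # independent per dimension
            k += 1
    # (floating-point stragglers, if any, would be assigned to the final block)
    return out

def draw_label(node, rng):
    # Descend the KD-tree, picking children with probability proportional to their
    # cached weights, until a single kernel label is reached.
    while node.left is not None:
        if rng.uniform() * node.weight < node.left.weight:
            node = node.left
        else:
            node = node.right
    return node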
5
Computational Examples
5.1 Products of One-Dimensional Gaussian Mixtures
In this section, we compare the sampling methods discussed in this paper on three challenging
one-dimensional examples, each involving products of mixtures of 100 Gaussians
(see Figure 4). We measure performance by drawing 100 samples, constructing a kernel
density estimate using likelihood cross-validation [1], and calculating the KL divergence
from the true product density. We repeat this test 250 times for each of a range of parameter
settings of each algorithm, and plot the average KL divergence versus computation time.
For the product of three mixtures in Figure 4(a), the multiscale (MS) Gibbs samplers dramatically outperform standard Gibbs sampling. In addition, we see that sequential Gibbs
sampling is more accurate than parallel. Both of these differences can be attributed to the
bimodal product density. However, the most effective algorithm is the ε-exact sampler,
which matches exact sampling's performance in far less time (0.05 versus 2.75 seconds).
For a product of five densities (Figure 4(b)), the cost of exact sampling increases to 7.6
hours, but the ε-exact sampler matches its performance in less than one minute. Even
faster, however, is the sequential MS Gibbs sampler, which takes only 0.3 seconds.
For the previous two examples, mixture importance sampling (IS) is nearly as accurate
as the best multiscale methods (Gaussian IS seems ineffective). However, in cases where
all of the input densities have little overlap with the product density, mixture IS performs
very poorly (see Figure 4(c)). In contrast, multiscale samplers perform very well in such
situations, because they can discard large numbers of low weight product density kernels.
5.2 Tracking an Object using Nonparametric Belief Propagation
NBP [2] solves inference problems on non-Gaussian graphical models by propagating the
results of local sampling computations. Using our multiscale samplers, we applied NBP
to a simple tracking problem in which we observe a slowly moving object in a sea of randomly shifting clutter. Figure 5 compares the posterior distributions of different samplers
two time steps after an observation containing only clutter. ε-exact sampling matches the
performance of exact sampling, but takes half as long. In contrast, a standard particle
filter [7], allowed ten times more computation, loses track. As in the previous section,
multiscale Gibbs sampling is much more accurate than standard Gibbs sampling.
6
Discussion
For products of a few mixtures, the ε-exact sampler is extremely fast, and is guaranteed to
give good performance. As the number of mixtures grows, ε-exact sampling may become
overly costly, but the sequential multiscale Gibbs sampler typically produces accurate samples with only a few iterations. We are currently investigating the performance of these
algorithms on large-scale nonparametric belief propagation applications.
References
[1] B. W. Silverman. Density Estimation for Statistics and Data Analysis. Chapman & Hall, 1986.
[2] E. B. Sudderth, A. T. Ihler, W. T. Freeman, and A. S. Willsky. Nonparametric belief propagation.
In CVPR, 2003.
[3] M. Isard. PAMPAS: Real-valued graphical models for computer vision. In CVPR, 2003.
[4] G. E. Hinton. Training products of experts by minimizing contrastive divergence. Technical
Report 2000-004, Gatsby Computational Neuroscience Unit, 2000.
[5] K. Deng and A. W. Moore. Multiresolution instance-based learning. In IJCAI, 1995.
[6] A. G. Gray and A. W. Moore. Very fast multivariate kernel density estimation. In JSM, 2003.
[7] A. Doucet, N. de Freitas, and N. Gordon, editors. Sequential Monte Carlo Methods in Practice.
Springer-Verlag, New York, 2001.
[8] S. Thrun, J. Langford, and D. Fox. Monte Carlo HMMs. In ICML, pages 415-424, 1999.
[9] S. Geman and D. Geman. Stochastic relaxation, Gibbs distributions, and the Bayesian restoration of images. IEEE Trans. PAMI, 6(6):721-741, November 1984.
[10] J. S. Liu and C. Sabatti. Generalised Gibbs sampler and multigrid Monte Carlo for Bayesian
computation. Biometrika, 87(2):353-369, 2000.
[11] J. Strain. The fast Gauss transform with variable scales. SIAM J. SSC, 12(5):1131-1139, 1991.
[Figure 4 appears here: three panels (a)-(c). Each shows the input mixtures, the product mixture,
and KL divergence versus computation time (sec) for the Exact, MS ε-Exact, MS Seq. Gibbs, MS Par.
Gibbs, Seq. Gibbs, Par. Gibbs, Gaussian IS, and Mixture IS samplers.]

Figure 4: Comparison of average sampling accuracy versus computation time for different algorithms
(see text). (a) Product of 3 mixtures (exact requires 2.75 sec). (b) Product of 5 mixtures (exact
requires 7.6 hours). (c) Product of 2 mixtures (exact requires 0.02 sec).
[Figure 5 appears here: posterior plots marked with the target location. (a) Observations and
exact NBP. (b) ε-exact NBP and a particle filter. (c) MS Seq. Gibbs NBP and Seq. Gibbs NBP.]

Figure 5: Object tracking using NBP. Plots show the posterior distributions two time steps after an
observation containing only clutter. The particle filter and Gibbs samplers are allowed equal
computation. (a) Latest observations, and exact sampling posterior. (b) ε-exact sampling is very
accurate, while a particle filter loses track. (c) Multiscale Gibbs sampling leads to improved
performance.
1,580 | 2,436 |
1,580 | 2,436 | One microphone blind dereverberation
based on quasi-periodicity of speech signals
Tomohiro Nakatani, Masato Miyoshi, and Keisuke Kinoshita
Speech Open Lab., NTT Communication Science Labs., NTT Corporation
2-4, Hikaridai, Seika-cho, Soraku-gun, Kyoto, Japan
{nak,miyo,kinoshita}@cslab.kecl.ntt.co.jp
Abstract
Speech dereverberation is desirable with a view to achieving, for example, robust speech recognition in the real world. However, it is still a challenging problem, especially when using a single microphone. Although
blind equalization techniques have been exploited, they cannot deal with
speech signals appropriately because their assumptions are not satisfied
by speech signals. We propose a new dereverberation principle based
on an inherent property of speech signals, namely quasi-periodicity. The
present methods learn the dereverberation filter from a lot of speech data
with no prior knowledge of the data, and can achieve high quality speech
dereverberation especially when the reverberation time is long.
1
Introduction
Although numerous studies have been undertaken on robust automatic speech recognition
(ASR) in the real world, long reverberation is still a serious problem that severely degrades
the ASR performance [1]. One simple way to overcome this problem is to dereverberate
the speech signals prior to ASR, but this is also a challenging problem, especially when
using a single microphone. For example, certain blind equalization methods, including
independent component analysis (ICA), can estimate the inverse filter of an unknown impulse response convolved with target signals when the signals are statistically independent
and identically distributed sequences [2]. However, these methods cannot appropriately
deal with speech signals because speech signals have inherent properties, such as periodicity and formant structure, making their sequences statistically dependent. This approach
inevitably destroys such essential properties of speech. Another approach that uses the
properties of speech has also been proposed [3]. The basic idea involves adaptively detecting time regions in which signal-to-reverberation ratios become small, and attenuating
speech signals in those regions. However, the precise separation of the signal and reverberation durations is difficult, therefore, this approach has achieved only moderate results so
far.
In this paper, we propose a new principle for estimating an inverse filter by using an essential property of speech signals, namely quasi-periodicity, as a clue. In general, voiced
segments in an utterance have approximate periodicity in each local time region while the
period gradually changes. Therefore, when a long reverberation is added to a speech signal,
signals in different time regions with different periods are mixed, thus degrading the periodicity of the signals in local time regions. By contrast, we show that we can estimate an
inverse filter for dereverberating a signal by enhancing the periodicity of the signal in each
local time region. The estimated filter can dereverberate both the periodic and non-periodic
parts of speech signals with no prior knowledge of the target signals, even though only the
periodic parts of the signals are used for the estimation.
2
Quasi-periodicity based dereverberation
We propose two dereverberation methods, referred to as Harmonicity based dEReverBeration (HERB) methods, based on the features of quasi-periodic signals: one based on an Average Transfer Function (ATF) that transforms reverberant signals into quasi-periodic components (ATF-HERB), and the other based on the Minimum Mean Squared Error (MMSE)
criterion that evaluates the quasi-periodicity of target signals (MMSE-HERB). First, we
briefly explain the features of quasi-periodic signals, and then describe the two methods.
2.1
Features of quasi-periodic signals
When a source signal s(n) is recorded in a reverberant room¹, the obtained signal x(n) is
represented as x(n) = h(n) ∗ s(n), where h(n) is the impulse response of the room and '∗'
is a convolution operation. The goal of the dereverberation is to estimate a dereverberation
filter, w(n), for −N < n < N, that dereverberates x(n), and to obtain the dereverberated
signal y(n) by:

    y(n) = w(n) ∗ x(n) = (w(n) ∗ h(n)) ∗ s(n) = q(n) ∗ s(n).    (1)
where q(n) = w(n) ∗ h(n) is referred to as a dereverberated impulse response. Here, we
assume s(n) is a quasi-periodic signal², which has the following features:
1. In each local time region around n0 (n0 − δ < n < n0 + δ, for ∀n0), s(n) is
approximately a periodic signal whose period is T(n0).
2. Outside the region (|n − n0| > δ), s(n) is also a periodic signal within its
neighboring time region, but often has another period that is different from T (n0 ).
These features make x(n) a non-periodic signal even within local time regions when h(m)
contains non-zero values for |m| > δ. This is because more than two periodic signals,
s(n) and s(n − m), that have different periods, are added to x(n) with weights of h(0)
and h(m). Inversely, the goal of our dereverberation is to estimate w(n) that makes y(n)
a periodic signal in each local time region. Once such a filter is obtained, q(m) must have
zero values for |m| > δ, and thus, reverberant components longer than δ are eliminated
from y(n).
An important additional feature of a quasi-periodic signal is that quasi-periodic components
in a source signal can be enhanced by an adaptive harmonic filter. An adaptive harmonic
filter is a time-varying linear filter that enhances frequency components whose frequencies
correspond to multiples of the fundamental frequency (F0 ) of the target signal, while preserving their phases and amplitudes. The filter values are adaptively modified according to
F0 . For example, a filter, F (f0 (n))[?], can be implemented as follows:
x
?(n)
=
=
F (f0 (n))[x(n)],
g2 (n ? n0 )Re{x(n) ? (g1 (n)
exp(j2?kf0 (n0 )n/fs ))},
n0
(2)
(3)
k
where n0 is the center time of each frame, f0 (n0 ) is the fundamental frequency (F0 ) of
the signal at the frame, k is a harmonics index, g1 (n) and g2 (n) are analysis window
1
In this paper, time domain and frequency domain signals are represented by non-capitalized
and capitalized symbols, respectively. Arguments ?(?)? that represent the center frequencies of the
discrete Fourier transformation bins are often omitted from frequency domain signals.
2
Later, this assumption is extended so that s(n) is composed of quasi-periodic components and
non-periodic components in the case of speech signals.
S(?)
H(?)
X(?)
W(?)
F(f0)
^
E(X/X)
Y(?)
^
X(?)
Figure 1: Diagram of ATF-HERB
functions, and fs is the sampling frequency. Even when x(n) contains a long reverberation,
the reverberant components that have different frequencies from s(n) are reduced by the
harmonic filter, and thus, the quasi-periodic components can be enhanced.
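As a rough illustration of eqs. (2)-(3), the following Python sketch re-synthesizes only the
harmonic components of each frame and overlap-adds the results. The Hanning window, frame length,
hop size, and harmonic count are arbitrary choices standing in for g1(n), g2(n), and the paper's
actual settings, and f0 is assumed to be a per-sample F0 track.

import numpy as np

def harmonic_filter(x, f0, fs, frame_len=512, hop=256, n_harm=20):
    # Per frame, keep only components at multiples of the local F0, preserving
    # their amplitudes and phases; overlap-add the filtered frames.
    win = np.hanning(frame_len)
    y = np.zeros(len(x))
    norm = np.zeros(len(x)) + 1e-8
    for start in range(0, len(x) - frame_len, hop):
        n0 = start + frame_len // 2            # center time of the frame
        seg = x[start:start + frame_len] * win
        t = np.arange(start, start + frame_len)
        acc = np.zeros(frame_len)
        for k in range(1, n_harm + 1):
            f = k * f0[n0]
            if f <= 0 or f >= fs / 2:
                break
            e = np.exp(-2j * np.pi * f * t / fs)
            amp = 2 * np.sum(seg * e) / win.sum()   # complex amplitude at harmonic k
            acc += np.real(amp * np.conj(e))        # re-synthesize that harmonic
        y[start:start + frame_len] += acc * win
        norm[start:start + frame_len] += win ** 2
    return y / norm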
2.2
ATF-HERB: average transfer function based dereverberation
Figure 1 is a diagram of ATF-HERB, which uses the average transfer function from reverberant
signals to quasi-periodic signals. A speech signal, S(ω), can be modeled by the
sum of the quasi-periodic components, or voiced components, Sh(ω), and non-periodic
components, or unvoiced components, Sn(ω), as eq. (4). The reverberant observed signal,
X(ω), is then represented by the product of S and the transfer function, H(ω), of a room as
eq. (5). The transfer function, H, can also be divided into two functions, D(ω) and R(ω).
The former transforms S into the direct signal, DS, and the latter into the reverberation
part, RS, as shown in eq. (6). X is also represented by the sum of the direct signal of the
quasi-periodic components, DSh, and the other components as eq. (7).

    S(ω) = Sh(ω) + Sn(ω),    (4)
    X(ω) = H(ω)S(ω)    (5)
         = (D(ω) + R(ω))S(ω)    (6)
         = DSh + (RSh + HSn).    (7)
Of these components, DSh can approximately be extracted from X by harmonic filtering.
Although the frequencies of quasi-periodic components change dynamically according to
the changes in their fundamental frequency (F0 ), their reverberation remains unchanged at
the same frequency. Therefore, direct quasi-periodic components, DSh , can be enhanced
by extracting frequency components located at multiples of its F0. This approximated
direct signal X̂(ω) can be modeled as follows:

    X̂(ω) = D(ω)Sh(ω) + ( R̃(ω)Sh(ω) + Ñ(ω) ),    (8)
where R̃(ω)Sh(ω) and Ñ(ω) are part of the reverberation of Sh, and part of the direct signal
and reverberation of Sn, which unexpectedly remain in X̂ after the harmonic filtering³. We
assume that all the estimation errors in X̂ in eq. (8) are caused by R̃Sh and Ñ.
The goal of ATF-HERB is to estimate O(R̃(ω)) = (D(ω) + R̃(ω))/H(ω), referred to as
a 'dereverberation operator.' This is because the signal DS + R̃S, which can be obtained
by multiplying O(R̃) by X, becomes in a sense a dereverberated signal:

    O(R̃(ω))X(ω) = D(ω)S(ω) + R̃(ω)S(ω),    (9)
[Footnote 3: Strictly speaking, R̃ cannot be represented as a linear transformation because the
reverberation included in X̂ depends on the time pattern of X. We introduce this approximation for
simplicity.]
[Figure 2 appears here: block diagram of MMSE-HERB. S(ω) passes through H(ω) to give X(ω); the
harmonic filter F(f0) produces X̂(ω), and W(ω) is chosen by the MMSE criterion between
Y(ω) = W(ω)X(ω) and X̂(ω).]

Figure 2: Diagram of MMSE-HERB
where the right side of eq. (9) is composed of a direct signal, DS, and certain parts of the
reverberation, R̃S. The rest of the reverberation included in X (= DS + RS), or (R − R̃)S,
is eliminated by the dereverberation operator.
To estimate the dereverberation operator, we use the output of the harmonic filter, X̂. Suppose a number of X values are obtained and X̂ values are calculated from individual X
values. Then, the dereverberation operator, O(R̃), can be approximated as the average of
X/X̂, or W(ω) = E(X/X̂). W(ω) is shown to be a good estimate of O(R̃) by substituting
E(X/X̂) for eqs. (4), (5) and (8) as eq. (11).

    W(ω) = E(X/X̂)    (10)
         = O(R̃(ω)) E( 1 / (1 + Sn/Sh) ) + E( 1 / (1 + (X̂ − Ñ)/Ñ) )    (11)
         ≈ O(R̃(ω)) P(|Sh(ω)| > |Sn(ω)|),    (12)
where P(·) is a probability function. The arguments of the two average functions in eq. (11)
have the form of a complex function, f(z) = 1/(1 + z). E(f(z)) is easily proven to equal
P(|z| < 1), using the residue theorem, if it is assumed that the phase of z is uniformly
distributed, the phase of z and |z| are independent, and |z| ≠ 1. Based on this property, the
second term of eq. (11) approximately equals zero because Ñ is a non-periodic component
that the harmonic filter unexpectedly extracts, and thus the magnitude of Ñ almost always
has a smaller value than (X̂ − Ñ) if a sufficiently long analysis window is used. Therefore,
W(ω) can be approximated by eq. (12); that is, W(ω) has the value of the dereverberation
operator multiplied by the probability that the harmonic components of speech have a larger
magnitude than the non-periodic components.
Once the dereverberation operator is calculated from the periodic parts of speech signals
for almost all the frequency ranges, it can dereverberate both the periodic and non-periodic
parts of the signals because the inverse transfer function is independent of the source signal
characteristics. Instead, the gain of W(ω) tends to decrease with frequency when using
our method. This is because the magnitudes of the non-periodic components relative to
the periodic components tend to increase with frequency for a speech signal, and thus
the P(|Sh| > |Sn|) value becomes smaller as ω increases. To compensate for this decreasing
gain, it may be useful to use the average attributes of speech on the probability,
P(|Sh| > |Sn|). In our experiments in section 4, however, W(ω) itself was used as the
dereverberation operator without any compensation.
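A minimal sketch of the ATF-HERB estimate of eq. (10) might accumulate the ratio X/X̂ over many
utterances as follows. It reuses the harmonic_filter sketch above as the F(f0)[·] stage; the
zero-padded whole-utterance FFTs are a simplification of a proper framed analysis, and all names
are illustrative.

import numpy as np

def estimate_atf_herb(utterances, f0_tracks, fs, n_fft=8192):
    # W(w) = E( X(w) / X_hat(w) ), averaged over many reverberant utterances.
    acc = np.zeros(n_fft // 2 + 1, dtype=complex)
    cnt = np.zeros(n_fft // 2 + 1)
    for x, f0 in zip(utterances, f0_tracks):
        X = np.fft.rfft(x, n_fft)
        X_hat = np.fft.rfft(harmonic_filter(x, f0, fs), n_fft)
        ok = np.abs(X_hat) > 1e-8        # skip bins with no harmonic energy
        acc[ok] += X[ok] / X_hat[ok]
        cnt[ok] += 1
    return acc / np.maximum(cnt, 1)

def apply_operator(x, W, n_fft=8192):
    # Dereverberate: Y(w) = W(w) X(w), i.e. a long FIR filter given by irfft(W).
    return np.fft.irfft(W * np.fft.rfft(x, n_fft), n_fft)[:len(x)]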
2.3
MMSE-HERB: minimum mean squared error criterion based dereverberation
As discussed in section 2.1, quasi-periodic signals can be dereverberated simply by enhancing their quasi-periodicity. To implement this principle directly, we introduce a cost
function, referred to as the minimum mean squared error (MMSE) criterion, to evaluate the
quasi-periodicity of the signals as follows:
    C(w) = Σ_n ( y(n) − F(f0(n))[y(n)] )² = Σ_n ( w(n) ∗ x(n) − F(f0(n))[w(n) ∗ x(n)] )²,    (13)
where y(n) = w(n) ∗ x(n) is a target signal that should be dereverberated by controlling
w(n), and F (f0 (n))[y(n)] is a signal obtained by applying a harmonic filter to y(n). When
y(n) is a quasi-periodic signal, y(n) approximately equals F (f0 (n))[y(n)] because of the
feature of quasi-periodic signals, and thus, the above cost function is expected to have the
minimum value. Inversely, the filter, w(n), that minimizes C(w) is expected to enhance
the quasi-periodicity of x(n). Such filter parameters can, for example, be obtained using optimization algorithms such as a hill-climbing method using the derivatives of C(w)
calculated as follows:
    ∂C(w)/∂w(l) = 2 Σ_n ( y(n) − F(f0(n))[y(n)] ) ( x(n − l) − F(f0(n))[x(n − l)] ),    (14)
where F(f0(n))[x(n − l)] is a signal obtained by applying the adaptive harmonic filter to
x(n − l)⁴.
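As an illustration only, one hill-climbing step using eq. (14) could be sketched as follows. It
reuses the earlier harmonic_filter stand-in, np.roll is a crude circular stand-in for the shift
x(n − l), and the per-tap loop makes plain why this direct approach is costly (problem 3 below).

import numpy as np

def mmse_gradient_step(w, x, f0, fs, lr=1e-3):
    y = np.convolve(x, w, mode='same')                  # y(n) = w(n) * x(n)
    e = y - harmonic_filter(y, f0, fs)                  # deviation from quasi-periodicity
    grad = np.empty_like(w)
    half = len(w) // 2
    for l in range(-half, len(w) - half):
        x_shift = np.roll(x, l)                         # approximates x(n - l)
        g = x_shift - harmonic_filter(x_shift, f0, fs)  # note: f0(n) is NOT shifted
        grad[l + half] = 2 * np.sum(e * g)              # eq. (14)
    return w - lr * grad                                # descend the cost C(w)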
There are, however, several problems involved in directly using eq. (13) as the cost function.
1. As discussed in section 2.1, the values of the dereverberated impulse response,
q(n), are expected to become zero using this method where |n| > δ; however,
the values are not specifically determined where |n| < δ. This may cause unexpected spectral modification of the dereverberated signal. Additional constraints
are required in order to specify these values.
2. The cost function has a self-evident solution, that is, w(l) = 0 for all l values. This
solution means that the signal, y(n), is always zero instead of being
dereverberated, and therefore, should be excluded. Some constraints, such as Σ_l w(l)² = 1,
may be useful for solving this problem.
3. The complexity of the computing needed to minimize the cost function based on
repetitive estimation increases as the dereverberation filter becomes longer. The
longer the reverberation becomes, the longer the dereverberation filter should be.
To overcome these problems, we simplify the cost function in this paper. The new cost
function is defined as follows:
    C(W(ω)) = E( (Y(ω) − X̂(ω))² ) = E( (W(ω)X(ω) − X̂(ω))² ),    (15)
where Y(ω), X(ω), and X̂(ω) are discrete Fourier transformations of y(n), x(n), and
F (f0 (n))[x(n)], respectively. The new cost function evaluates the quasi-periodicity not in
the time domain but in the frequency domain, and uses a fixed quasi-periodic signal X̂(ω)
as the desired signal, instead of using the non-fixed quasi-periodic signal, F (f0 (n))[y(n)].
This modification allows us to solve the above problems. The use of the fixed desired
signals specifically provides the dereverberated impulse response, q(n), with the desired
values, even in the time region |n| < δ. In addition, the self-evident solution, w(l) = 0, can
no longer be optimal in terms of the cost function. Furthermore, the computing complexity
is greatly reduced because the solution can be given analytically as follows:
    W(ω) = E( X̂(ω) X*(ω) ) / E( X(ω) X*(ω) ).    (16)
A diagram of this simplified MMSE-HERB is shown in Fig. 2.
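The closed-form solution of eq. (16) is just a ratio of accumulated cross- and auto-spectra, as in
this sketch (again reusing the earlier harmonic_filter stand-in; names are illustrative):

import numpy as np

def estimate_mmse_herb(utterances, f0_tracks, fs, n_fft=8192):
    # W(w) = E( X_hat(w) X*(w) ) / E( X(w) X*(w) ), eq. (16).
    num = np.zeros(n_fft // 2 + 1, dtype=complex)
    den = np.zeros(n_fft // 2 + 1)
    for x, f0 in zip(utterances, f0_tracks):
        X = np.fft.rfft(x, n_fft)
        X_hat = np.fft.rfft(harmonic_filter(x, f0, fs), n_fft)
        num += X_hat * np.conj(X)        # cross-spectrum with the quasi-periodic part
        den += np.abs(X) ** 2            # auto-spectrum of the observation
    return num / np.maximum(den, 1e-12)

Note the Wiener-filter-like form: the operator is large only in bins where the observation
correlates strongly with its harmonic part.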
[Footnote 4: F(f0(n))[x(n − l)] is not the same signal as x̂(n − l). When calculating
F(f0(n))[x(n − l)], x(n) is time-shifted by l points while f0(n) of the adaptive harmonic filter
is not time-shifted.]
[Figure 3 appears here: two-step processing flow. STEP 1: F0 estimation on the input X, adaptive
harmonic filtering to obtain X̂1, estimation of the dereverberation operator O(R̃1), and
dereverberation by O(R̃1). STEP 2: the same procedure applied to O(R̃1)X, yielding X̂2, O(R̃2),
and the final output O(R̃2)O(R̃1)X.]

Figure 3: Processing flow of dereverberation.
When we assume the model of X̂ in eq. (8), and E(Sh Sn*) = E(Sn Sh*) = E(Ñ Sh*) = 0,
it is shown that the resulting W in eq. (16) again approaches the dereverberation operator,
O(R̃), presented in section 2.2:

    W(ω) = O(R̃(ω)) E(Sh Sh*) / ( E(Sh Sh*) + E(Sn Sn*) ) + (1/H) E(Ñ Sn*) / ( E(Sh Sh*) + E(Sn Sn*) )    (17)
         ≈ O(R̃(ω)) E(Sh Sh*) / ( E(Sh Sh*) + E(Sn Sn*) ).    (18)
Because Ñ represents non-periodic components that are included unexpectedly and at random in the output of the harmonic filter, the absolute value of the second term in eq. (17)
is expected to be sufficiently small compared with that of the first term; therefore, we
disregard this term. Then, W(ω) in eq. (16) becomes the dereverberation operator multiplied by the ratio of the expected power of the quasi-periodic components in the signals to that of the whole signals. As with the speech signals discussed in section 2.2, the
E(Sh Sh*)/(E(Sh Sh*) + E(Sn Sn*)) value becomes smaller as ω increases, and thus, the
gain of W(ω) tends to decrease. Therefore, the same frequency compensation scenario as
found in section 2.2 may again be useful for the MMSE based dereverberation scheme.
3
Processing flow
Based on the above two methods, we constructed a dereverberation algorithm composed of
two steps as shown in Fig. 3. Both methods are implemented in the same processing flow
except that the methods used to calculate the dereverberation operator are different. The
flow is summarized as follows:
1. In the first step, F0 is estimated from the reverberant signal, X. Then the harmonic
components included in X are estimated as X̂1 based on adaptive harmonic filtering.
The dereverberation operator O(R̃1) is then calculated by ATF-HERB or
MMSE-HERB for a number of reverberant speech signals. Finally, the dereverberated signal is obtained by multiplying O(R̃1) by X.
2. The second step employs almost the same procedures as the first step except that
the speech data dereverberated by the first step are used as the input signal. The
use of this dereverberated input signal means that reverberant components, R̃2X̂2,
inevitably included in eq. (8) can be attenuated. Therefore, a more effective dereverberation can be achieved in step 2.
In our preliminary experiments, however, repeating STEP 2 did not always improve the
quality of the dereverberated signals. This is because the estimation error of the dereverberation operators accumulates in the dereverberated signals when the signals are multiplied
by more than one dereverberation operator. Therefore, in our experiments, we used STEP 2
only once. A more detailed explanation of these processing steps is also presented in [4].
[Figure 4 appears here: four panels of power (dB) versus time (sec.) reverberation curves, for
rtime = 1.0, 0.5, 0.2, and 0.1 sec.]
Figure 4: Reverberation curves of the original impulse responses (thin line) and dereverberated impulse responses (male: thick dashed line, female: thick solid line) for different
reverberation times (rtime).
Accurate F0 estimation is very important in terms of achieving effective dereverberation
with our methods in this processing flow. However, this is a difficult task, especially for
speech with a long reverberation using existing F0 estimators. To cope with this problem,
we designed a simple filter that attenuates a signal that continues at the same frequency, and
used it as a preprocessor for the F0 estimation [5]. In addition, the dereverberation operator,
O(R̃1), itself is a very effective preprocessor for an F0 estimator because the reverberation
of the speech can be directly reduced by the operator. This mechanism is already included
in step 2 of the dereverberation procedure; that is, F0 estimation is applied to O(R̃1)X.
Therefore, more accurate F0 can be obtained in step 2 than in step 1.
4
Experimental results
We examined the performance of the proposed dereverberation methods. Almost the same
results were obtained with the two methods, and so we only describe those obtained with
ATF-HERB. We used 5240 Japanese word utterances provided by a male and a female
speaker (MAU and FKM, 12 kHz sampling) included in the ATR database as source signals,
S(ω). We used four impulse responses measured in a reverberant room whose reverberation
times were about 0.1, 0.2, 0.5, and 1.0 sec, respectively. Reverberant signals, X(ω), were
obtained by convolving S(ω) with the impulse responses.
Figure 4 depicts the reverberation curves⁵ of the original impulse responses and the dereverberated impulse responses obtained with ATF-HERB. The figure shows that the proposed
methods could effectively reduce the reverberation in the impulse responses for the female
speaker when the reverberation time (rtime) was longer than 0.1 sec. For the male speaker,
the reverberation effect in the lower time region was also effectively reduced. This means
that strong reverberant components were eliminated, and we can expect the intelligibility
of the signals to be improved [6].
Figure 5 shows spectrograms of reverberant and dereverberated speech signals when rtime
was 1.0 sec. As shown in the figure, the reverberation of the signal was effectively reduced,
and the formant structure of the signal was restored. Similar spectrogram features were
observed under other reverberation conditions, and an improvement in sound quality could
clearly be recognized by listening to the dereverberated signals [7]. We also evaluated the
quality of the dereverberated speech in terms of speaker dependent word recognition rates
[Footnote 5: The reverberation curve shows the reduction in the energy of a room impulse response
with time [6].]

[Figure 5 appears here: two spectrograms, frequency (kHz) versus time (sec.), for the reverberant
and dereverberated speech.]
Figure 5: Spectrogram of reverberant (left) and dereverberated (right) speech of a male
speaker uttering 'ba-ku-da-i'.
with an ASR system, and could achieve more than 95 % recognition rates under all the
reverberation conditions with acoustic models trained using dereverberated speech signals.
Detailed information on the ASR experiments is also provided in [4].
5
Conclusion
A new blind dereverberation principle based on the quasi-periodicity of speech signals was
proposed. We presented two types of dereverberation method, referred to as harmonicity based dereverberation (HERB) method: one estimates the average filter function that
transforms reverberant signals into quasi-periodic signals (ATF-HERB) and the other minimizes the MMSE criterion that evaluates the quasi-periodicity of signals (MMSE-HERB).
We showed that ATF-HERB and a simplified version of MMSE-HERB are both capable
of learning the dereverberation operator that can reduce reverberant components in speech
signals. Experimental results showed that a dereverberation operator trained with 5240
Japanese word utterances could achieve very high quality speech dereverberation. Future
work will include an investigation of how such high quality speech dereverberation can be
achieved with fewer speech data.
References
[1] Baba, A., Lee, A., Saruwatari, H., and Shikano, K., "Speech recognition by reverberation
adapted acoustic model," Proc. of ASJ general meeting, pp. 27-28, Akita, Japan, Sep. 2002.
[2] Amari, S., Douglas, S. C., Cichocki, A., and Yang, H. H., "Multichannel blind deconvolution
and equalization using the natural gradient," Proc. IEEE Workshop on Signal Processing Advances
in Wireless Communications, Paris, pp. 101-104, April 1997.
[3] Yegnanarayana, B., and Murthy, P. S., "Enhancement of reverberant speech using LP residual
signal," IEEE Trans. SAP, vol. 8, no. 3, pp. 267-281, 2000.
[4] Nakatani, T., Miyoshi, M., and Kinoshita, K., "Implementation and effects of single channel
dereverberation based on the harmonic structure of speech," Proc. IWAENC-2003, Sep. 2003.
[5] Nakatani, T., and Miyoshi, M., "Blind dereverberation of single channel speech signal based
on harmonic structure," Proc. ICASSP-2003, vol. 1, pp. 92-95, Apr. 2003.
[6] Yegnanarayana, B., and Ramakrishna, B. S., "Intelligibility of speech under nonexponential
decay conditions," JASA, vol. 58, pp. 853-857, Oct. 1975.
[7] http://www.kecl.ntt.co.jp/icl/signal/nakatani/sound-demos/dm/derev-demos.html
1,581 | 2,437 | Applying Metric-Trees to Belief-Point POMDPs
Joelle Pineau, Geoffrey Gordon
School of Computer Science
Carnegie Mellon University
Pittsburgh, PA 15213
{jpineau,ggordon}@cs.cmu.edu
Sebastian Thrun
Computer Science Department
Stanford University
Stanford, CA 94305
[email protected]
Abstract
Recent developments in grid-based and point-based approximation algorithms for POMDPs have greatly improved the tractability of POMDP
planning. These approaches operate on sets of belief points by individually learning a value function for each point. In reality, belief points
exist in a highly-structured metric simplex, but current POMDP algorithms do not exploit this property. This paper presents a new metric-tree
algorithm which can be used in the context of POMDP planning to sort
belief points spatially, and then perform fast value function updates over
groups of points. We present results showing that this approach can reduce computation in point-based POMDP algorithms for a wide range of
problems.
1
Introduction
Planning under uncertainty is a central problem in the field of robotics as well as many
other AI applications. In terms of representational effectiveness, the Partially Observable
Markov Decision Process (POMDP) is among the most promising frameworks for this
problem. However the practical use of POMDPs has been severely limited by the computational requirement of planning in such a rich representation. POMDP planning is difficult
because it involves learning action selection strategies contingent on all possible types of
state uncertainty. This means that whenever the robot's world state cannot be observed,
the planner must maintain a belief (namely a probability distribution over possible states)
to summarize the robot?s recent history of actions taken and observations received. The
POMDP planner then learns an optimal future action selection for each possible belief. As
the planning horizon grows (linearly), so does the number of possible beliefs (exponentially), which causes the computational intractability of exact POMDP planning.
In recent years, a number of approximate algorithms have been proposed which overcome
this issue by simply refusing to consider all possible beliefs, and instead selecting (and
planning for) a small set of representative belief points. During execution, should the robot
encounter a belief for which it has no plan, it finds the nearest known belief point and
follows its plan. Such approaches, often known as grid-based [1, 4, 13], or point-based [8,
9] algorithms, have had significant success with increasingly large planning domains. They
formulate the plan optimization problem as a value iteration procedure, and estimate the
cost/reward of applying a sequence of actions from a given belief point. The value of
each action sequence can be expressed as an α-vector, and a key step in many algorithms
consists of evaluating many candidate α-vectors (set Γ) at each belief point (set B).
These B × Γ (point-to-vector) comparisons, which are typically the main bottleneck in
scaling point-based algorithms, are reminiscent of many M × N comparison problems
that arise in statistical learning tasks, such as kNN, mixture models, kernel regression, etc.
Recent work has shown that for these problems, one can significantly reduce the number of
necessary comparisons by using appropriate metric data structures, such as KD-trees and
ball-trees [3, 6, 12]. Given this insight, we extend the metric-tree approach to POMDP
planning, with the specific goal of reducing the number of B ? ? comparisons. This paper
describes our algorithm for building and searching a metric-tree over belief points.
In addition to improving the scalability of POMDP planning, this approach features a number of interesting ideas for generalizing metric-tree algorithms. For example, when using
trees for POMDPs, we move away from point-to-point search procedures for which the
trees are typically used, and leverage metric constraints to prune point-to-vector comparisons. We show how it is often possible to evaluate the usefulness of an α-vector over an
entire sub-region of the belief simplex without explicitly evaluating it at each belief point
in that sub-region. While our new metric-tree approach offers significant potential for all
point-based approaches, in this paper we apply it in the context of the PBVI algorithm [8],
and show that it can effectively reduce computation without compromising plan quality.
2
Partially Observable Markov Decision Processes
We adopt the standard POMDP formulation [5], defining a problem by the n-tuple:
{S, A, Z, T, O, R, γ, b0}, where S is a set of (discrete) world states describing the problem domain, A is a set of possible actions, and Z is a set of possible observations providing (possibly noisy and/or partial) state information. The distribution T(s, a, s') describes state-to-state transition probabilities; distribution O(s, a, z) describes observation
emission probabilities; function R(s, a) represents the reward received for applying action
a in state s; γ represents the discount factor; and b0 specifies the initial belief distribution. An |S|-dimensional vector, bt, represents the agent's belief about the state of the
world at time t, and is expressed as a probability distribution over states. This belief is
updated after each time step, to reflect the latest pair (a_{t−1}, z_t), using a Bayesian filter:

    b_t(s') := c O(s', a_{t−1}, z_t) Σ_{s∈S} T(s, a_{t−1}, s') b_{t−1}(s),

where c is a normalizing constant.

The goal of POMDP planning is to find a sequence of actions maximizing the expected
sum of rewards E[Σ_t γ^t R(s_t, a_t)], for all beliefs. The corresponding value function can be
formulated as a Bellman equation:

    V(b) = max_{a∈A} [ R(b, a) + γ Σ_{b'∈B} T(b, a, b') V(b') ]
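For concreteness, the belief filter can be written in a few lines of Python, assuming the model is
stored as dense arrays T[a, s, s'] and O[a, s', z] (an illustrative layout, not prescribed by the
paper):

import numpy as np

def belief_update(b, a, z, T, O):
    # b_t(s') = c * O(s', a, z) * sum_s T(s, a, s') * b_{t-1}(s)
    b_next = O[a, :, z] * (b @ T[a])   # predict forward, weight by observation likelihood
    return b_next / b_next.sum()       # normalization supplies the constant c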
By definition there exist an infinite number of belief points. However when optimized exactly, the value function is always piecewise linear and convex in the belief (Fig. 1a). After
n value iterations, the solution consists of a finite set of α-vectors: Vn = {α0, α1, ..., αm}.
Each α-vector represents an |S|-dimensional hyper-plane, and defines the value function
over a bounded region of the belief: Vn(b) = max_{α∈Vn} Σ_{s∈S} α(s)b(s). When performing
exact value updates, the set of α-vectors can (and often does) grow exponentially with the
planning horizon. Therefore exact algorithms tend to be impractical for all but the smallest
problems. We leave out a full discussion of exact POMDP planning (see [5] for more) and
focus instead on the much more tractable point-based approximate algorithm.
3
Point-based value iteration for POMDPs
The main motivation behind the point-based algorithm is to exploit the fact that most beliefs are never, or very rarely, encountered, and thus resources are better spent planning
for those beliefs that are most likely to be reached. Many classical POMDP algorithms
do not exploit this insight. Point-based value iteration algorithms on the other hand apply value backups only to a finite set of pre-selected (and likely to be encountered) belief
points B = {b0, b1, ..., bq}. They initialize a separate α-vector for each selected point, and
repeatedly update the value of that α-vector. As shown in Figure 1b, by maintaining a full
α-vector for each belief point, we can preserve the piecewise linearity and convexity of
the value function, and define a value function over the entire belief simplex. This is an
approximation, as some vectors may be missed, but by appropriately selecting points, we
can bound the approximation error (see [8] for details).
[Figure 1 appears here: (a) a value function V = {α0, α1, α2, α3} over the belief simplex;
(b) the point-based approximation V = {α0, α1, α3} maintained at belief points b0, ..., b3.]
Figure 1: (a) Value iteration with exact updates. (b) Value iteration with point-based updates.
There are generally two phases to point-based algorithms. First, a set of belief points is selected, and second, a series of backup operations are applied over ?-vectors for that set of
points. In practice, steps of value iteration and steps of belief set expansion can be repeatedly interleaved to produce an anytime algorithm that can gradually trade-off computation
time and solution quality. The question of how to best select belief points is somewhat
orthogonal to the ideas in this paper and is discussed in detail in [8]. We therefore focus
on describing how to do point-based value backups, before showing how this step can be
significantly accelerated by the use of appropriate metric data structures.
The traditional value iteration POMDP backup operation is formulated as a dynamic program, where we build the n-th horizon value function V from the previous solution V':

    V(b) = max_{a∈A} [ Σ_{s∈S} R(s,a)b(s) + γ Σ_{z∈Z} max_{α'∈V'} Σ_{s∈S} Σ_{s'∈S} T(s,a,s') O(z,s',a) α'(s') b(s) ]
         = max_{a∈A} Σ_{z∈Z} max_{α'∈V'} Σ_{s∈S} [ R(s,a)/|Z| + γ Σ_{s'∈S} T(s,a,s') O(z,s',a) α'(s') ] b(s)    (1)
To plan for a finite set of belief points B, we can modify this operation such that only
one α-vector per belief point is maintained and therefore we only consider V(b) at points
b ∈ B. This is implemented using three steps. First, we take each vector in V' and project
it backward (according to the model) for a given action, observation pair. In doing so, we
generate intermediate sets Γ^{a,z}, ∀a ∈ A, ∀z ∈ Z:

    Γ^{a,z} ∋ α_i^{a,z}(s) = R(s,a)/|Z| + γ Σ_{s'∈S} T(s,a,s') O(z,s',a) α'_i(s'),  ∀α'_i ∈ V'    (Step 1)    (2)
Second, for each b ∈ B, we construct Γ^a_b (∀a ∈ A). This sum over observations¹ includes
the maximum α^{a,z} (at a given b) from each Γ^{a,z}:

    Γ^a_b = Σ_{z∈Z} argmax_{α∈Γ^{a,z}} (α · b)    (Step 2)    (3)

[Footnote 1: In exact updates, this step requires taking a cross-sum over observations, which is
O(|S| |A| |V'|^{|Z|}). By operating over a finite set of points, the cross-sum reduces to a simple
sum, which is the main reason behind the computational speed-up obtained in point-based
algorithms.]
Finally, we find the best action for each belief point:

    V ← argmax_{Γ^a_b, ∀a∈A} ( Γ^a_b · b ),  ∀b ∈ B    (Step 3)    (4)
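The three steps translate directly into the brute-force backup sketched below; the array shapes are
illustrative, and the inner argmax loops are exactly the B × Γ comparisons that the metric-tree of
Section 4 is designed to prune.

import numpy as np

def point_based_backup(B, V_prev, R, T, O, gamma):
    # One point-based backup (Steps 1-3). Illustrative dense layout:
    # B: [n_b, S] beliefs, V_prev: [n_v, S] alpha-vectors,
    # R: [A, S] rewards, T: [A, S, S] transitions, O: [A, S, Z] observations.
    A, S = R.shape
    Z = O.shape[2]
    n_v = V_prev.shape[0]
    Gamma = np.empty((A, Z, n_v, S))
    for a in range(A):
        for z in range(Z):
            M = T[a] * O[a, :, z]                           # M[s, s'] = T(s,a,s') O(z,s',a)
            Gamma[a, z] = R[a] / Z + gamma * V_prev @ M.T   # Step 1, eq. (2)
    new_V = np.empty_like(B)
    for i, b in enumerate(B):
        Gab = np.stack([sum(Gamma[a, z][np.argmax(Gamma[a, z] @ b)]  # Step 2, eq. (3)
                            for z in range(Z))
                        for a in range(A)])
        new_V[i] = Gab[np.argmax(Gab @ b)]                  # Step 3, eq. (4)
    return new_V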
The main bottleneck in applying point-based algorithms to larger POMDPs is in step 2
where we perform a B × Γ comparison²: for every b ∈ B, we must find the best vector
from a given set Γ^{a,z}. This is usually implemented as a sequential search, exhaustively
comparing α · b for every b ∈ B and every α ∈ Γ^{a,z}, in order to find the best α at
each b (with overall time-complexity O(|A| |Z| |S| |B| |V'|)). While this is not entirely
unreasonable, it is by far the slowest step. It also completely ignores the highly structured
nature of the belief space.
Belief points exist in a metric space and there is much to be gained from exploiting this
property. For example, given the piecewise linearity and convexity of the value function, it
is more likely that two nearby points will share similar values (and policies) than points that
are far away. Consequently it could be much more efficient to evaluate an α-vector over
sets of nearby points, rather than by exhaustively looking at all the points separately. In the
next section, we describe a new type of metric-tree which structures data points based on a
distance metric over the belief simplex. We then show how this kind of tree can be used to
efficiently evaluate α-vectors over sets of belief points (or belief regions).
4
Metric-trees for belief spaces
Metric data structures offer a way to organize large sets of data points according to distances
between the points. By organizing the data appropriately, it is possible to satisfy many
different statistical queries over the elements of the set, without explicitly considering all
points. Instances of metric data structures such as KD-trees, ball-trees and metric-trees have
been shown to be useful for a wide range of learning tasks (e.g. nearest-neighbor, kernel
regression, mixture modeling), including some with high-dimensional and non-Euclidean
spaces. The metric-tree [12] in particular offers a very general approach to the problem of
structural data partitioning. It consists of a hierarchical tree built by recursively splitting the
set of points into spatially tighter subsets, assuming only that the distance between points
is a metric.
4.1
Building a metric-tree from belief points
Each node η in a metric-tree is represented by its center ηc, its radius ηr, and a set of points
ηB that fall within its radius. To recursively construct the tree, starting with node η and
building children nodes η1 and η2, we first pick two candidate centers (one per child) at
the extremes of η's region: η1c = argmax_{b∈ηB} D(ηc, b), and η2c = argmax_{b∈ηB} D(η1c, b). In
a single-step approximation to k-nearest-neighbor (k=2), we then re-allocate each point in
ηB to the child with the closest center (ties are broken randomly):

    b → η1B  if D(η1c, b) < D(η2c, b),
    b → η2B  if D(η1c, b) > D(η2c, b).    (5)
Finally we update the centers and calculate the radius for each child:
    η1c = Center{η1B},    η2c = Center{η2B}    (6)
    η1r = max_{b∈η1B} D(η1c, b),    η2r = max_{b∈η2B} D(η2c, b)    (7)
[Footnote 2: Step 1 projects all vectors α' ∈ V' for any (a, z) pair. In the worst case, this has
time-complexity O(|A| |Z| |S|² |V'|); however, most problems have very sparse transition matrices
and this is typically much closer to O(|A| |Z| |S| |V'|). Step 3 is also relatively efficient at
O(|A| |Z| |S| |B|).]
The general metric-tree algorithm allows a variety of ways to calculate centers and distances. For the centers, the most common choice is the centroid of the points and this is
what we use when building a tree over belief points. We have tried other options, but with
negligible impact. For the distance metric, we select the max-norm: D(ηc, b) = ||ηc − b||∞,
which allows for fast searching as described in the next section. While the radius determines the size of the region enclosed by each node, the choice of distance metric determines its shape (e.g. with Euclidean distance, we would get hyper-balls of radius ηr). In
the case of the max-norm, each node defines an |S|-dimensional hyper-cube of side length 2ηr.
Figure 2 shows how the first two-levels of a tree are built, assuming a 3-state problem.
[Figure 2 appears here: (a) belief points bi, bj in the simplex over P(s1), P(s2); (b) the top
node n0 with center nc and radius nr; (c) the level-1 children n1 and n2; (d) the corresponding
tree.]
Figure 2: (a) Belief points. (b) Top node. (c) Level-1 left and right nodes. (d) Corresponding tree
While we need to compute the center and radius for each node to build the tree, there are
additional statistics which we also store about each node. These are specific to using trees
in the context of belief-state planning, and are necessary to evaluate α-vectors over regions
of the belief simplex. For a given node η containing data points ηB, we compute ηmin and
ηmax, the vectors containing respectively the min and max belief in each dimension:

    ηmin(s) = min_{b∈ηB} b(s), ∀s ∈ S,    ηmax(s) = max_{b∈ηB} b(s), ∀s ∈ S    (8)
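A compact sketch of the construction, including the ηmin/ηmax statistics of eq. (8), might look as
follows (ties here go deterministically to the second child rather than being broken randomly, and
BeliefNode is an illustrative name):

import numpy as np

class BeliefNode:
    # Metric-tree node over belief points under the max-norm; also stores the
    # per-dimension min/max statistics (eta_min, eta_max of eq. (8)).
    def __init__(self, points):
        self.center = points.mean(axis=0)                  # centroid center
        d = np.max(np.abs(points - self.center), axis=1)   # max-norm to center
        self.radius = d.max()
        self.b_min = points.min(axis=0)
        self.b_max = points.max(axis=0)
        self.points, self.children = points, []
        if len(points) > 1 and np.ptp(points, axis=0).max() > 0:
            c1 = points[np.argmax(d)]                      # farthest point from center
            d1 = np.max(np.abs(points - c1), axis=1)
            c2 = points[np.argmax(d1)]                     # farthest point from c1
            to_c1 = d1 < np.max(np.abs(points - c2), axis=1)
            for mask in (to_c1, ~to_c1):
                self.children.append(BeliefNode(points[mask]))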
4.2
Searching over sub-regions of the simplex
Once the tree is built, it can be used for fast statistical queries. In our case, the goal is to
compute argmax_{α∈Γ^{a,z}} (α · b) for all belief points. To do this, we consider the α-vectors
one at a time, and decide whether a new candidate αi is better than any of the previous
vectors {α0 ... α_{i−1}}. With the belief points organized in a tree, we can often assess this
over sets of points by consulting a high-level node η, rather than by assessing this for each
belief point separately.
We start at the root node of the tree. There are four different situations we can encounter
as we traverse the tree: first, there might be no single previous α-vector that is best for all
belief points below the current node (Fig. 3a). In this case we proceed to the children of the
current node without performing any tests. In the other three cases there is a single dominant α-vector at the current node; the cases are that the newest vector αi dominates it
(Fig. 3b), is dominated by it (Fig. 3c), or neither (Fig. 3d). If we can prove that αi dominates or is dominated by the previous one, we can prune the search and avoid checking the
current node's children; otherwise we must check the children recursively.
We seek an efficient test to determine whether one vector, αi, dominates another, αj, over
the belief points contained within a node. The test must be conservative: it must never
erroneously say that one vector dominates another. It is acceptable for the test to miss
some pruning opportunities (the consequence is an increase in run-time as we check more
nodes than necessary) but this is best avoided if possible. The most thorough test would
check whether φ · b is positive or negative at every belief sample b under the current node
[Figure 3 appears here: four panels (a)-(d) showing a new vector αi against a node η with center
ηc and radius ηr, for the split, dominant, dominated, and partially dominant cases.]

Figure 3: Possible scenarios when evaluating a new vector αi at a node η, assuming a 2-state
domain. (a) η is a split node. (b) αi is dominant. (c) αi is dominated. (d) αi is partially
dominant.
(where α = α_i − α_j). All positive would mean that α_i dominates α_j, all negative the
reverse, and mixed signs would mean that neither dominates the other. Of course, this test
renders the tree useless, since all points are checked individually. Instead, we test whether
α · b is positive or negative over a convex region R which includes all of the belief samples
that belong to the current node. The smaller the region, the more accurate our test will be;
on the other hand, if the region is too complicated we won't be able to carry out the test
efficiently. (Note that we can always test any region R by solving one linear program to
find l = min_{b ∈ R} b · α, another to find h = max_{b ∈ R} b · α, and testing whether
l < 0 < h. But this is expensive, and we prefer a more efficient test.)
[Figure 4 diagram: candidate convex regions drawn over a node's belief points in the 3-state simplex: (a) the axis-parallel bounding box between β_min and β_max; (b) the simplex anchored at β_min; (c) the simplex anchored at β_max; (d) the intersection of the bounding box with the plane Σ_s b(s) = 1.]
Figure 4: Several possible convex regions over subsets of belief points, assuming a 3-state domain.
We tested several types of region. The simplest type is an axis-parallel bounding box
(Fig. 4a), β_min ≤ b ≤ β_max for vectors β_min and β_max (as defined in Eq. 8). We also
tested the simplex defined by b ≥ β_min and Σ_{s ∈ S} b(s) = 1 (Fig. 4b), as well as the
simplex defined by b ≤ β_max and Σ_{s ∈ S} b(s) = 1 (Fig. 4c). The most effective test we
discovered assumes R is the intersection of the bounding box β_min ≤ b ≤ β_max with
the plane Σ_{s ∈ S} b(s) = 1 (Fig. 4d). For each of these shapes, minimizing or maximizing
b · α takes time O(d) (where d is the number of states): for the box (Fig. 4a) we check each
dimension independently, and for the simplices (Figs. 4b, 4c) we check each corner
exhaustively. For the last shape (Fig. 4d), maximizing with respect to b amounts to computing
a threshold λ such that b(s) = β_min(s) if α(s) < λ and b(s) = β_max(s) if α(s) > λ. We can
find λ in expected time O(d) using a modification of the quick-median algorithm. In practice,
not all O(d) algorithms are equivalent: empirical results show that checking the corners of
regions (b) and (c) and taking the tightest bounds provides the fastest algorithm. This is what
we used for the results presented below.
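As a rough sketch, the cheapest of these tests, the bounding box of Fig. 4a, can serve as the region_bounds helper assumed in the earlier traversal sketch (names are ours):

```python
import numpy as np

def region_bounds(node, alpha):
    """Conservative bounds on alpha . b over the axis-parallel box
    beta_min <= b <= beta_max of Fig. 4a; each dimension contributes
    its extreme independently, so the cost is O(d)."""
    lo = alpha @ np.where(alpha >= 0, node.beta_min, node.beta_max)
    hi = alpha @ np.where(alpha >= 0, node.beta_max, node.beta_min)
    return lo, hi
```

The tighter region of Fig. 4d would clip these bounds further by also enforcing Σ_s b(s) = 1.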
5  Results and Discussion
We have conducted a set of experiments to test the effectiveness of the tree structure in
reducing computations. While still preliminary, these results illustrate a few interesting
properties of metric-trees when used in conjunction with point-based POMDP planning.
Figure 5 presents results for six well-known POMDP problems, ranging in size from 4 to
870 states (for problem descriptions see [2], except for Coffee [10] and Tag [8]). While all
these problems have been successfully solved by previous approaches, it is interesting to
observe the level of speed-up that can be obtained by leveraging metric-tree data structures.
In Fig. 5(a)-(f) we show the number of b · α (point-to-vector) comparisons required, with
and without a tree, for different numbers of belief points. In Fig. 5(g)-(h) we show the
computation time (as a function of the number of belief points) required for two of the
problems. The No-Tree results were generated by applying the original PBVI algorithm
(Section 2, [8]). The Tree results (which count comparisons on both internal and leaf
nodes) were generated by embedding the tree-searching procedure described in Section 4.2
within the same point-based POMDP algorithm. For some of the problems, we also show
performance using an ε-tree, where the test for vector dominance can reject (i.e., declare α_i
dominated, Fig. 3c) a new vector that is within ε of the current best vector.
[Figure 5 plots: number of b · α comparisons (panels a-f) and computation time in seconds (panels g-h)
versus the number of belief points, comparing the No-Tree, Tree, and Epsilon-Tree variants.
Panels: (a) Hanks, |S|=4; (b) SACI, |S|=12; (c) Coffee, |S|=32; (d) Tiger-grid, |S|=36;
(e) Hallway, |S|=60; (f) Tag, |S|=870; (g) SACI, |S|=12; (h) Tag, |S|=870.]
Figure 5: Results of PBVI algorithm with and without metric-tree.
These early results show that, in various proportions, the tree can cut down on the number
of comparisons. This illustrates how the use of metric-trees can effectively reduce the
POMDP computational load. The ε-tree is particularly effective at reducing the number of
comparisons in some domains (e.g. SACI, Tag). The much smaller effect shown in the other
problems may be attributed to a poorly tuned ε (we used ε = 0.01 in all experiments). The
question of how to set ε such that we most reduce computation, while maintaining good
control performance, tends to be highly problem-dependent.
In keeping with other metric-tree applications, our results show that computational savings
increase with the number of belief points. What is more surprising is to see the trees paying
off with so few data points (most applications of KD-trees start seeing benefits with 1000+
data points). This may be partially attributed to the compactness of our convex test region
(Fig. 4d), and to the fact that we do not search on split nodes (Fig. 3a); however, it is
most likely due to the nature of our search problem: many α vectors are accepted/rejected
before visiting any leaf nodes, which is different from typical metric-tree applications. We
are particularly encouraged to see trees having a noticeable effect with very few data points
because, in some domains, good control policies can also be extracted with few data points.
We notice that the effect of using trees is negligible in some larger problems (e.g. Tiger-grid),
while still pronounced in others of equal or larger size (e.g. Coffee, Tag). This is
likely due to the intrinsic dimensionality of each problem.3 Metric-trees often perform
well in high-dimensional datasets with low intrinsic dimensionality; this also appears to be
true of metric-trees applied to vector sorting. While this suggests that our current algorithm
is not as effective in problems with intrinsically high dimensionality, a slightly different tree
structure or search procedure may well help in those cases. Recent work has proposed new
kinds of metric-trees that can better handle point-based searches in high dimensions [7],
and some of this may be applicable to the POMDP α-vector sorting problem.
6  Conclusion
We have described a new type of metric-tree which can be used for sorting belief points
and accelerating value updates in POMDPs. Early experiments indicate that the tree structure,
by appropriately pruning unnecessary α-vectors over large regions of the belief simplex, can
accelerate planning for a range of problems. The promising performance of the approach on
the Tag domain opens the door to larger experiments.
Acknowledgments
This research was supported by DARPA (MARS program) and NSF (ITR initiative).
References
[1] R. I. Brafman. A heuristic variable grid solution method for POMDPs. In Proceedings of the
Fourteenth National Conference on Artificial Intelligence (AAAI), pages 727-733, 1997.
[2] A. Cassandra. http://www.cs.brown.edu/research/ai/pomdp/examples/index.html.
[3] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An algorithm for finding best matches in
logarithmic expected time. ACM Transactions on Mathematical Software, 3(3):209-226, 1977.
[4] M. Hauskrecht. Value-function approximations for partially observable Markov decision processes. Journal of Artificial Intelligence Research, 13:33-94, 2000.
[5] L. P. Kaelbling, M. L. Littman, and A. R. Cassandra. Planning and acting in partially observable
stochastic domains. Artificial Intelligence, 101:99-134, 1998.
[6] A. W. Moore. Very fast EM-based mixture model clustering using multiresolution KD-trees. In
Advances in Neural Information Processing Systems (NIPS), volume 11, 1999.
[7] A. W. Moore. The anchors hierarchy: Using the triangle inequality to survive high dimensional
data. Technical Report CMU-RI-TR-00-05, Carnegie Mellon, 2000.
[8] J. Pineau, G. Gordon, and S. Thrun. Point-based value iteration: An anytime algorithm for
POMDPs. In International Joint Conference on Artificial Intelligence (IJCAI), 2003.
[9] K.-M. Poon. A fast heuristic algorithm for decision-theoretic planning. Master's thesis, The
Hong Kong University of Science and Technology, 2001.
[10] P. Poupart and C. Boutilier. Value-directed compression of POMDPs. In Advances in Neural
Information Processing Systems (NIPS), volume 15, 2003.
[11] N. Roy and G. Gordon. Exponential family PCA for belief compression in POMDPs. In
Advances in Neural Information Processing Systems (NIPS), volume 15, 2003.
[12] J. K. Uhlmann. Satisfying general proximity/similarity queries with metric trees. Information
Processing Letters, 40:175-179, 1991.
[13] R. Zhou and E. A. Hansen. An improved grid-based approximation algorithm for POMDPs. In
Proceedings of the 17th International Joint Conference on Artificial Intelligence (IJCAI), 2001.
3
The coffee domain is known to have an intrinsic dimensionality of 7 [10]. We do not know the
intrinsic dimensionality of the Tag domain, but many robot applications produce belief points that
exist in sub-dimensional manifolds [11].
1,582 | 2,438 | Semidefinite relaxations for approximate
inference on graphs with cycles
Martin J. Wainwright
Electrical Engineering and Computer Science
UC Berkeley, Berkeley, CA 94720
[email protected]
Michael I. Jordan
Computer Science and Statistics
UC Berkeley, Berkeley, CA 94720
[email protected]
Abstract
We present a new method for calculating approximate marginals for
probability distributions defined by graphs with cycles, based on a Gaussian entropy bound combined with a semidefinite outer bound on the
marginal polytope. This combination leads to a log-determinant maximization problem that can be solved by efficient interior point methods [8]. As with the Bethe approximation and its generalizations [12], the
optimizing arguments of this problem can be taken as approximations to
the exact marginals. In contrast to Bethe/Kikuchi approaches, our variational problem is strictly convex and so has a unique global optimum.
An additional desirable feature is that the value of the optimal solution
is guaranteed to provide an upper bound on the log partition function. In
experimental trials, the performance of the log-determinant relaxation is
comparable to or better than the sum-product algorithm, and by a substantial margin for certain problem classes. Finally, the zero-temperature
limit of our log-determinant relaxation recovers a class of well-known
semidefinite relaxations for integer programming [e.g., 3].
1  Introduction
Given a probability distribution defined by a graphical model (e.g., Markov random field,
factor graph), a key problem is the computation of marginal distributions. Although highly
efficient algorithms exist for trees, exact solutions are prohibitively complex for more general graphs of any substantial size. This difficulty motivates the use of algorithms for
computing approximations to marginal distributions, a problem to which we refer as approximate inference. One widely-used algorithm is the belief propagation or sum-product
algorithm. As shown by Yedidia et al. [12], it can be interpreted as a method for attempting
to solve a variational problem wherein the exact entropy is replaced by the Bethe approximation. Moreover, Yedidia et al. proposed extensions to the Bethe approximation based on
clustering operations.
An unattractive feature of the Bethe approach and its extensions is that with certain exceptions [e.g., 6], the associated variational problems are typically not convex, thus leading to
algorithmic complications, and also raising the possibility of multiple local optima. Secondly, in contrast to other variational methods (e.g., mean field [4]), the optimal values of
Bethe-type variational problems fail to provide bounds on the log partition function. This
function arises in various contexts, including approximate parameter estimation and large
deviations exponents, so that such bounds are of interest in their own right.
This paper introduces a new class of variational problems that are both convex and provide
upper bounds. Our derivation relies on a Gaussian upper bound on the discrete entropy of
a suitably regularized random vector, and a semidefinite outer bound on the set of valid
marginal distributions. The combination leads to a log-determinant maximization problem
with a unique optimum that can be found by efficient interior point methods [8]. As with the
Bethe/Kikuchi approximations and sum-product algorithms, the optimizing arguments of
this problem can be taken as approximations to the marginal distributions of the underlying
graphical model. Moreover, taking the "zero-temperature" limit recovers a class of well-known semidefinite programming relaxations for integer programming problems [e.g., 3].
2  Problem set-up

We consider an undirected graph G = (V, E) with n = |V| nodes. Associated with each
vertex s ∈ V is a random variable x_s taking values in the discrete space
X = {0, 1, . . . , m − 1}. We let x = {x_s | s ∈ V} denote a random vector taking values
in the Cartesian product space X^n. Our analysis makes use of the following exponential
representation of a graph-structured distribution p(x). For some index set I, we let
φ = {φ_α | α ∈ I} denote a collection of potential functions associated with the cliques of
G, and let θ = {θ_α | α ∈ I} be a vector of parameters associated with these potential
functions. The exponential family determined by φ is the following collection:

    p(x; θ) = exp{ Σ_α θ_α φ_α(x) − Φ(θ) }                                    (1a)

    Φ(θ)   = log Σ_{x ∈ X^n} exp{ Σ_α θ_α φ_α(x) }.                           (1b)

Here Φ(θ) is the log partition function that serves to normalize the distribution. In a minimal
representation, the functions {φ_α} are affinely independent, and d = |I| corresponds
to the dimension of the family. For example, one minimal representation of a binary-valued
random vector on a graph with pairwise cliques is the standard Ising model, in
which φ = {x_s | s ∈ V} ∪ {x_s x_t | (s, t) ∈ E}. Here the index set I = V ∪ E,
and d = n + |E|. In order to incorporate higher order interactions, we simply add higher
degree monomials (e.g., x_s x_t x_u for a third order interaction) to the collection of potential
functions. Similar representations exist for discrete processes on alphabets with m > 2
elements.
2.1  Duality and marginal polytopes

It is well known that Φ is convex in terms of θ, and strictly so for a minimal representation.
Accordingly, it is natural to consider its conjugate dual function, which is defined by the
relation:

    Φ*(μ) = sup_{θ ∈ R^d} { ⟨θ, μ⟩ − Φ(θ) }.                                  (2)

Here the vector of dual variables μ has the same dimension as the exponential parameter θ
(i.e., μ ∈ R^d). It is straightforward to show that the partial derivatives of Φ with respect to
θ correspond to cumulants of φ(x); in particular, the first order derivatives define mean
parameters:

    ∂Φ/∂θ_α (θ) = Σ_{x ∈ X^n} p(x; θ) φ_α(x) = E_θ[φ_α(x)].                   (3)

In order to compute Φ*(μ) for a given μ, we take the derivative with respect to θ of the
quantity within curly braces in Eqn. (2). Setting this derivative to zero and making use of
Eqn. (3) yields defining conditions for a vector θ(μ) attaining the optimum in Eqn. (2):

    μ_α = E_{θ(μ)}[φ_α(x)]    ∀α ∈ I                                          (4)

It can be shown [10] that Eqn. (4) has a solution if and only if μ belongs to the relative
interior of the set:

    MARG(G; φ) = { μ ∈ R^d | Σ_{x ∈ X^n} p(x) φ(x) = μ for some p(·) }        (5)

Note that this set is equivalent to the convex hull of the finite collection of vectors
{φ(x) | x ∈ X^n}; consequently, the Minkowski-Weyl theorem [7] guarantees that it can
be characterized by a finite number of linear inequality constraints. We refer to this set as
the marginal polytope1 associated with the graph G and the potentials φ.
In order to calculate an explicit form for Φ*(μ) for any μ ∈ MARG(G; φ), we substitute
the relation in Eqn. (4) into the definition of Φ*, thereby obtaining:

    Φ*(μ) = ⟨μ, θ(μ)⟩ − Φ(θ(μ)) = Σ_{x ∈ X^n} p(x; θ(μ)) log p(x; θ(μ)).      (6)

This relation establishes that for μ in the relative interior of MARG(G; φ), the value of the
conjugate dual Φ*(μ) is given by the negative entropy of the distribution p(x; θ(μ)), where
the pair θ(μ) and μ are dually coupled via Eqn. (4). For μ ∉ cl MARG(G; φ), it can be
shown [10] that the value of the dual is +∞.
Since Φ is lower semi-continuous, taking the conjugate twice recovers the original
function [7]; applying this fact to Φ* and Φ, we obtain the following relation:

    Φ(θ) = max_{μ ∈ MARG(G; φ)} { ⟨θ, μ⟩ − Φ*(μ) }.                           (7)

Moreover, we are guaranteed that the optimum is attained uniquely at the exact marginals
μ = {μ_α} of p(x; θ). This variational formulation plays a central role in our development
in the sequel.
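To make Eqns. (1b) and (3) concrete, here is a brute-force sketch for a small Ising model (our own construction, feasible only for small n); the resulting mean parameters μ are exactly the maximizer in Eqn. (7):

```python
import numpy as np
from itertools import product

def brute_force_ising(theta_s, theta_st):
    """Exact log partition function Phi(theta) and mean parameters for a
    small Ising model, by summing over all 2^n spin configurations.
    theta_s: length-n array; theta_st: symmetric n x n array, zero diagonal."""
    n = len(theta_s)
    logw, configs = [], []
    for x in product([-1.0, 1.0], repeat=n):
        x = np.array(x)
        logw.append(theta_s @ x + 0.5 * x @ theta_st @ x)  # each edge counted once
        configs.append(x)
    logw = np.array(logw)
    phi = np.log(np.sum(np.exp(logw - logw.max()))) + logw.max()
    p = np.exp(logw - phi)                                 # p(x; theta)
    mu_s = sum(pi * x for pi, x in zip(p, configs))        # E[x_s]
    mu_st = sum(pi * np.outer(x, x) for pi, x in zip(p, configs))  # E[x_s x_t]
    return phi, mu_s, mu_st
```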
2.2  Challenges with the variational formulation

There are two difficulties associated with the variational formulation (7). First of all,
observe that the (negative) entropy Φ*, as a function of only the mean parameters μ, is
implicitly defined; indeed, it is typically impossible to specify an explicit form for Φ*. Key
exceptions are trees and hypertrees, for which the entropy is well-known to decompose into
a sum of local entropies defined by local marginals on the (hyper)edges [1]. Secondly, for
a general graph with cycles, the marginal polytope MARG(G; φ) is defined by a number
of inequalities that grows rapidly in graph size [e.g., 2]. Trees and hypertrees again are
important exceptions: in this case, the junction tree theorem [e.g., 1] provides a compact
representation of the associated marginal polytopes.
The Bethe approach (and its generalizations) can be understood as consisting of two steps:
(a) replacing the exact entropy −Φ* with a tree (or hypertree) approximation; and (b)
replacing the marginal polytope MARG(G; φ) with constraint sets defined by tree (or
hypertree) consistency conditions. However, since the (hyper)tree approximations used do
not bound the exact entropy, the optimal values of Bethe-type variational problems do not
provide a bound on the value of the log partition function Φ(θ). Requirements for bounding
Φ are both an outer bound on the marginal polytope, as well as an upper bound on the
entropy −Φ*.
1
When φ_α corresponds to an indicator function, then μ_α is a marginal probability; otherwise, this
choice entails a minor abuse of terminology.
3  Log-determinant relaxation

In this section, we state and prove a set of upper bounds based on the solution of a
variational problem involving determinant maximization and semidefinite constraints. Although
the ideas and methods described here are more generally applicable, for the sake of clarity
in exposition we focus here on the case of a binary vector x ∈ {−1, +1}^n of "spins".
It is also convenient to define all problems with respect to the complete graph K_n (i.e.,
fully connected). We use the standard (minimal) Ising representation for a binary problem,
in terms of the potential functions φ = {x_s | s ∈ V} ∪ {x_s x_t | (s, t)}. On the complete
graph, there are d = n + (n choose 2) such potential functions in total. Of course, any
problem can be embedded into the complete graph by setting to zero a subset of the {θ_st}
parameters. (In particular, for a graph G = (V, E), we simply set θ_st = 0 for all pairs
(s, t) ∉ E.)
3.1  Outer bounds on the marginal polytope

We first focus on the marginal polytope MARG(K_n) ≡ MARG(K_n; φ) of valid dual
variables {μ_s, μ_st}, as defined in Eqn. (5). In this section, we describe a set of semidefinite
and linear constraints that any valid dual vector μ ∈ MARG(K_n) must satisfy.
3.1.1  Semidefinite constraints

Given an arbitrary vector μ ∈ R^d, consider the following (n + 1) × (n + 1) matrix:

                 [ 1       μ_1      μ_2     ...   μ_n    ]
                 [ μ_1     1        μ_12    ...   μ_1n   ]
    M1[μ]  :=    [ μ_2     μ_21     1       ...   μ_2n   ]        (8)
                 [ ...     ...      ...     ...   ...    ]
                 [ μ_n     μ_n1     μ_n2    ...   1      ]
The motivation underlying this definition is the following: suppose that the given dual
vector μ actually belongs to MARG(K_n), in which case there exists some distribution
p(x; θ) such that μ_s = Σ_x p(x; θ) x_s and μ_st = Σ_x p(x; θ) x_s x_t. Thus, if
μ ∈ MARG(K_n), the matrix M1[μ] can be interpreted as the matrix of second order moments
for the vector (1, x), as computed under p(x; θ). (Note in particular that the diagonal
elements are all one, since x_s^2 = 1 when x_s ∈ {−1, +1}.) Since any such moment matrix
must be positive semidefinite,2 we have established the following:
Lemma 1 (Semidefinite outer bound). The binary marginal polytope MARG(K_n) is
contained within the semidefinite constraint set:

    SDEF1 := { μ ∈ R^d | M1[μ] ⪰ 0 }        (9)
This semidefinite relaxation can be further strengthened by including higher order terms in
the moment matrices [5].
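A minimal sketch of this constraint check (helper names are ours):

```python
import numpy as np

def build_m1(mu_s, mu_st):
    """Assemble the (n+1)x(n+1) moment matrix M1[mu] of Eq. (8).
    mu_s: length-n vector of singleton means; mu_st: symmetric n x n
    matrix of pairwise means (its diagonal is ignored)."""
    n = len(mu_s)
    m = np.empty((n + 1, n + 1))
    m[0, 0] = 1.0
    m[0, 1:] = mu_s
    m[1:, 0] = mu_s
    m[1:, 1:] = mu_st
    np.fill_diagonal(m[1:, 1:], 1.0)  # x_s^2 = 1 for spins
    return m

def in_sdef1(mu_s, mu_st, tol=1e-9):
    """Check the outer-bound constraint of Lemma 1: M1[mu] >= 0."""
    return np.linalg.eigvalsh(build_m1(mu_s, mu_st)).min() >= -tol
```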
3.1.2  Additional linear constraints

It is straightforward to augment these semidefinite constraints with additional linear
constraints. Here we focus in particular on two classes of constraints, referred to as rooted and
unrooted triangle inequalities by Deza and Laurent [2], that are of especial relevance in the
graphical model setting.
2
To be explicit, letting z = (1, x), then for any vector a ∈ R^{n+1} we have a^T M1[μ] a =
a^T E[zz^T] a = E[(a^T z)^2], which is certainly non-negative.
Pairwise edge constraints: It is natural to require that the subset of mean parameters
associated with each pair of random variables (x_s, x_t), namely μ_s, μ_t and μ_st, specify
a valid pairwise marginal distribution. Letting (a, b) take values in {−1, +1}^2, consider
the set of four linear constraints of the following form:

    1 + a μ_s + b μ_t + ab μ_st ≥ 0.        (10)

It can be shown [11, 10] that these constraints are necessary and sufficient to guarantee the
existence of a consistent pairwise marginal. By the junction tree theorem [1], this pairwise
consistency guarantees that the constraints of Eqn. (10) provide a complete description
of the binary marginal polytope for any tree-structured graph. Moreover, for a general
graph with cycles, they are equivalent to the tree-consistent constraint set used in the Bethe
approach [12] when applied to a binary vector x ∈ {−1, +1}^n.
Triplet constraints: Local consistency can be extended to triplets {x_s, x_t, x_u}, and
even more generally to higher order subsets. For the triplet case, consider the following
set of constraints (and permutations thereof) among the pairwise mean parameters
{μ_st, μ_su, μ_tu}:

    μ_st + μ_su + μ_tu ≥ −1,        μ_st − μ_su − μ_tu ≥ −1.        (11)

It can be shown [11, 10] that these constraints, in conjunction with the pairwise
constraints (10), are necessary and sufficient to ensure that the collection of mean parameters
{μ_s, μ_t, μ_u, μ_st, μ_su, μ_tu} uniquely determines a valid marginal over the triplet
(x_s, x_t, x_u). Once again, by applying the junction tree theorem [1], we conclude that the
constraints (10) and (11) provide a complete characterization of the binary marginal polytope
for hypertrees of width two. It is worthwhile observing that this set of constraints is
equivalent to those that are implicitly enforced by any Kikuchi approximation [12] with
clusters of size three (when applied to a binary problem).
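For illustration, the four constraints of Eqn. (10) amount to a two-line check (a sketch, not from the paper):

```python
def pairwise_ok(mu_s, mu_t, mu_st):
    """The four linear constraints of Eq. (10): necessary and sufficient
    for a consistent pairwise marginal on a pair of spins."""
    return all(1 + a * mu_s + b * mu_t + a * b * mu_st >= 0
               for a in (-1, +1) for b in (-1, +1))
```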
3.2  Gaussian entropy bound

We now turn to the task of upper bounding the entropy. Our starting point is the familiar
interpretation of the Gaussian as the maximum entropy distribution subject to covariance
constraints:
Lemma 2. The (differential) entropy h(x̃) := −∫ p(x̃) log p(x̃) dx̃ is upper bounded by
the entropy (1/2) log det cov(x̃) + (n/2) log(2πe) of a Gaussian with matched covariance.
Of interest to us is the discrete entropy of a discrete-valued random vector x ∈ {−1, +1}^n,
whereas the Gaussian bound of Lemma 2 applies to the differential entropy of a
continuous-valued random vector. Therefore, we need to convert our discrete vector to the
continuous space. In order to do so, we define a new continuous random vector via
x̃ = (1/2)x + u, where u is a random vector independent of x, with each element independently
and identically distributed3 as u_s ~ U[−1/2, 1/2]. The motivation for rescaling x by 1/2 is
to pack the boxes together as tightly as possible.
Lemma 3. We have h(x̃) = H(x), where h and H denote the differential and discrete
entropies of x̃ and x respectively.
Proof. By construction, the differential entropy can be decomposed as a sum of integrals
over hyperboxes of unit volume, one for each configuration, over which the probability
density of x̃ is constant.
3
The notation U[a, b] denotes the uniform distribution on the interval [a, b].
3.3  Log-determinant relaxation
Equipped with these building blocks, we are now ready to state and prove a log-determinant
relaxation for the log partition function.
Theorem 1. Let x ∈ {−1, +1}^n, and let OUT(K_n) be any convex outer bound on
MARG(K_n) that is contained within SDEF1. Then there holds

    Φ(θ) ≤ max_{μ ∈ OUT(K_n)} { ⟨θ, μ⟩ + (1/2) log det[ M1(μ) + (1/3) blkdiag[0, I_n] ] } + (n/2) log(πe/2),   (12)

where blkdiag[0, I_n] is an (n+1) × (n+1) block-diagonal matrix. Moreover, the optimum
is attained at a unique μ̂ ∈ OUT(K_n).
Proof. For any μ ∈ MARG(K_n), let x be a random vector with these mean parameters.
Consider the continuous-valued random vector x̃ = (1/2)x + u. From Lemma 3, we have
H(x) = h(x̃); combining this equality with Lemma 2, we obtain the upper bound
H(x) ≤ (1/2) log det cov(x̃) + (n/2) log(2πe). Since x and u are independent and
u_s ~ U[−1/2, 1/2], we can write cov(x̃) = (1/4) cov(x) + (1/12) I_n. Next we use the Schur
complement formula [8] to express the log determinant as follows:

    log det cov(x̃) = log det[ M1[μ] + (1/3) blkdiag[0, I_n] ] + n log(1/4).   (13)

Combining Eqn. (13) with the Gaussian upper bound leads to the following expression:

    H(x) = −Φ*(μ) ≤ (1/2) log det[ M1[μ] + (1/3) blkdiag[0, I_n] ] + (n/2) log(πe/2).

Substituting this upper bound into the variational representation of Eqn. (7) and using the
fact that OUT(K_n) is an outer bound on MARG(G) yields Eqn. (12). By construction,
the cost function is strictly convex, so that the optimum is unique.
The inclusion OUT(K_n) ⊆ SDEF1 in the statement of Theorem 1 guarantees that the
matrix M1(μ) will always be positive semidefinite. Importantly, the optimization problem
in Eqn. (12) is a determinant maximization problem, for which efficient interior point
methods have been developed [e.g., 8].
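As a sketch of how the objective of Eqn. (12) would be evaluated at a candidate μ (using the build_m1 helper from the earlier sketch; the maximization itself would be delegated to a determinant-maximization solver):

```python
import numpy as np

def logdet_objective(theta_s, theta_st, mu_s, mu_st):
    """Value of the objective in Eq. (12) at a candidate mu.
    theta_s / theta_st hold singleton and (symmetric) pairwise parameters;
    the 0.5 factor below counts each edge once in <theta, mu>."""
    n = len(mu_s)
    inner = theta_s @ mu_s + 0.5 * np.sum(theta_st * mu_st)
    m = build_m1(mu_s, mu_st) + np.diag(np.r_[0.0, np.full(n, 1.0 / 3.0)])
    sign, logdet = np.linalg.slogdet(m)
    assert sign > 0, "mu must lie in SDEF1 for the bound to be finite"
    return inner + 0.5 * logdet + 0.5 * n * np.log(np.pi * np.e / 2)
```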
4  Experimental results
The relevance of the log-determinant relaxation for applications is two-fold: it provides
upper bounds on the log partition function, and the maximizing arguments μ̂ ∈ OUT(K_n)
of Eqn. (12) can be taken as approximations to the exact marginals of the distribution
p(x; θ). So as to test its performance in computing approximate marginals, we performed
extensive experiments on the complete graph (fully connected) and the 2-D nearest-neighbor
lattice model. We treated relatively small problems with 16 nodes so as to enable
comparison to the exact answer. For any given trial, we specified the distribution
p(x; θ) by randomly choosing θ as follows. The single node parameters were chosen
as θ_s ~ U[−0.25, 0.25] independently4 for each node. For a given coupling strength
d_coup > 0, we investigated three possible types of coupling: (a) for repulsive interactions,
θ_st ~ U[−2d_coup, 0]; (b) for mixed interactions, θ_st ~ U[−d_coup, +d_coup]; (c) for
attractive interactions, θ_st ~ U[0, 2d_coup].
For each distribution p(x; θ), we performed the following computations: (a) the exact
marginal probability p(x_s = 1; θ) at each node; and (b) approximate marginals computed
4
Here U[a, b] denotes the uniform distribution on [a, b].
from the Bethe approximation with the sum-product algorithm, or (c) log-determinant
approximate marginals from Theorem 1, using the outer bound OUT(K_n) given by the
first semidefinite relaxation SDEF1 in conjunction with the pairwise linear constraints in
Eqn. (10). We computed the exact marginal values either by exhaustive summation (complete
graph) or by the junction tree algorithm (lattices). We used the standard parallel
message-passing form of the sum-product algorithm with a damping factor5 λ = 0.05.
The log-determinant problem of Theorem 1 was solved using interior point methods [8].
For each graph (fully connected or grid), we examined a total of 6 conditions: 2 different
potential strengths (weak or strong) for each of the 3 types of coupling (attractive, mixed,
and repulsive). We computed the ℓ1-error (1/n) Σ_{s=1}^n |p(x_s = 1; θ) − μ̂_s|, where
μ̂_s was the approximate marginal computed either by SP or by LD.
Graph   Coupling   Strength (d_obs, d_coup)   Sum-product                Log-determinant
                                              Median   Range             Median   Range
Full    R          (0.25, 0.25)               0.035    [0.01, 0.10]      0.020    [0.01, 0.03]
Full    R          (0.25, 0.50)               0.066    [0.03, 0.20]      0.017    [0.01, 0.04]
Full    M†         (0.25, 0.25)               0.003    [0.00, 0.04]      0.019    [0.01, 0.03]
Full    M          (0.25, 0.50)               0.035    [0.01, 0.31]      0.010    [0.01, 0.06]
Full    A†         (0.25, 0.06)               0.021    [0.00, 0.08]      0.026    [0.01, 0.06]
Full    A          (0.25, 0.12)               0.422    [0.08, 0.86]      0.023    [0.01, 0.09]
Grid    R          (0.25, 1.0)                0.285    [0.04, 0.59]      0.041    [0.01, 0.12]
Grid    R          (0.25, 2.0)                0.342    [0.04, 0.78]      0.033    [0.00, 0.12]
Grid    M†         (0.25, 1.0)                0.008    [0.00, 0.20]      0.016    [0.01, 0.02]
Grid    M          (0.25, 2.0)                0.053    [0.01, 0.54]      0.032    [0.01, 0.11]
Grid    A          (0.25, 1.0)                0.404    [0.06, 0.90]      0.037    [0.01, 0.13]
Grid    A          (0.25, 2.0)                0.550    [0.06, 0.94]      0.031    [0.00, 0.12]

Table 1. Statistics of the ℓ1-approximation error for the sum-product (SP) and log-determinant (LD)
methods for the fully connected graph K_16, as well as the 4-nearest neighbor grid with 16 nodes,
with varying coupling and potential strengths.
Table 1 shows quantitative results for 100 trials performed in each of the 12 experimental
conditions, including only those trials for which SP converged. The potential strength is
given as the pair (d_obs, d_coup); note that d_obs = 0.25 in all trials. For each method, we show
the sample median, and the range [min, max] of the errors. Overall, the performance of
LD is better than that of SP, and often substantially so. The performance of SP is slightly
better in the regime of weak coupling and relatively strong observations (θ_s values); see the
entries marked with † in the table. In the remaining cases, the LD method outperforms SP,
and with a large margin for many examples with strong coupling. The two methods also
differ substantially in the ranges of the approximation error. The SP method exhibits some
instability, with the error for certain problems being larger than 0.5; for the same problems,
the LD error ranges are much smaller, with a worst case maximum error over all trials and
conditions of 0.13. In addition, the behavior of SP can change dramatically between the
weakly coupled and strongly coupled conditions, whereas the LD results remain stable.
5
More precisely, we updated messages in the log domain as λ log M_st^new + (1 − λ) log M_st^old.
5  Discussion
In this paper, we developed a new method for approximate inference based on the combination of a Gaussian entropy bound with semidefinite constraints on the marginal polytope.
The resultant log-determinant maximization problem can be solved by efficient interior
point methods [8]. In experimental trials, the log-determinant method was either comparable or better than the sum-product algorithm, and by a substantial margin for certain
problem classes. Of particular interest is that, in contrast to the sum-product algorithm,
the performance degrades gracefully as the interaction strength is increased. It can be
shown [11, 10] that in the zero-temperature limit, the log-determinant relaxation (12) reduces to a class of semidefinite relaxations that are widely used in combinatorial optimization. One open question is whether techniques for bounding the performance of such
semidefinite relaxations [e.g., 3] can be adapted to the finite temperature case.
Although this paper focused exclusively on the binary problem, the methods described here
can be extended to other classes of random variables. It remains to develop a deeper
understanding of the interaction between the two components of these approximations (i.e.,
the entropy bound and the outer bound on the marginal polytope), as well as how to tailor
approximations to particular graph structures. Finally, semidefinite constraints can be
combined with entropy approximations (preferably convex) other than the Gaussian bound
used in this paper, among them "convexified" Bethe/Kikuchi entropy approximations [9].
Acknowledgements: Thanks to Constantine Caramanis and Laurent El Ghaoui for helpful discussions. Work funded by NSF grant IIS-9988642, ARO MURI DAA19-02-1-0383, and a grant from
Intel Corporation.
References
[1] R. G. Cowell, A. P. Dawid, S. L. Lauritzen, and D. J. Spiegelhalter. Probabilistic networks and
expert systems. Statistics for Engineering and Information Science. Springer-Verlag, 1999.
[2] M. Deza and M. Laurent. Geometry of cuts and metric embeddings. Springer-Verlag, New
York, 1997.
[3] M. X. Goemans and D. P. Williamson. Improved approximation algorithms for maximum cut
and satisfiability problems using semidefinite programming. Journal of the ACM, 42:1115-1145, 1995.
[4] M. Jordan, editor. Learning in graphical models. MIT Press, Cambridge, MA, 1999.
[5] J. B. Lasserre. Global optimization with polynomials and the problem of moments. SIAM
Journal on Optimization, 11(3):796-817, 2001.
[6] R. J. McEliece and M. Yildirim. Belief propagation on partially ordered sets. In D. Gilliam and
J. Rosenthal, editors, Mathematical Theory of Systems and Networks. Institute for Mathematics
and its Applications, 2002.
[7] G. Rockafellar. Convex Analysis. Princeton University Press, Princeton, 1970.
[8] L. Vandenberghe, S. Boyd, and S. Wu. Determinant maximization with linear matrix inequality
constraints. SIAM Journal on Matrix Analysis and Applications, 19:499-533, 1998.
[9] M. J. Wainwright, T. S. Jaakkola, and A. S. Willsky. A new class of upper bounds on the log
partition function. In Uncertainty in Artificial Intelligence, volume 18, pages 536-543, August
2002.
[10] M. J. Wainwright and M. I. Jordan. Graphical models, exponential families, and variational
inference. Technical report, UC Berkeley, Department of Statistics, No. 649, 2003.
[11] M. J. Wainwright and M. I. Jordan. Semidefinite relaxations for approximate inference on
graphs with cycles. Technical report, UC Berkeley, UCB/CSD-3-1226, January 2003.
[12] J. S. Yedidia, W. T. Freeman, and Y. Weiss. Understanding belief propagation and its generalizations. Technical Report TR2001-22, Mitsubishi Electric Research Labs, January 2002.
1,583 | 2,439 | Learning Bounds for a Generalized Family of
Bayesian Posterior Distributions
Tong Zhang
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598
[email protected]
Abstract
In this paper we obtain convergence bounds for the concentration of
Bayesian posterior distributions (around the true distribution) using a
novel method that simplifies and enhances previous results. Based on the
analysis, we also introduce a generalized family of Bayesian posteriors,
and show that the convergence behavior of these generalized posteriors is
completely determined by the local prior structure around the true distribution. This important and surprising robustness property does not hold
for the standard Bayesian posterior in that it may not concentrate when
there exist "bad" prior structures even at places far away from the true
distribution.
1  Introduction
Consider a sample space X and a measure μ on X (with respect to some σ-field). In
statistical inference, nature picks a probability measure Q on X which is unknown. We
assume that Q has a density q with respect to μ. In the Bayesian paradigm, the statistician
considers a set of probability densities p(·|θ) (with respect to μ on X) indexed by θ ∈ Γ, and
makes an assumption1 that the true density q can be represented as p(·|θ) with θ randomly
picked from Γ according to a prior distribution π on Γ. Throughout the paper, all quantities
appearing in the derivations are assumed to be measurable.
Given a set of samples X = {X_1, . . . , X_n} ∈ X^n, where each X_i is independently drawn
from (the unknown distribution) Q, the optimal Bayesian method can be derived as the
optimal inference with respect to the posterior distribution. Although a Bayesian procedure
is optimal only when nature picks the same prior as the statistician (which is very
unlikely), it is known that procedures with desirable properties from the frequentist point
of view (such as minimaxity and admissibility) are often Bayesian [6]. From a theoretical
point of view, it is necessary to understand the behavior of Bayesian methods without
the assumption that nature picks the same prior as the statistician. In this respect,
the most fundamental issue in Bayesian analysis is whether the Bayesian inference based
on the posterior distribution will converge to the corresponding inference of the true (but
1
In this paper, we view the Bayesian paradigm as a method to generate statistical inference
procedures, and thus do not assume that the Bayesian prior assumption has to be true. In particular,
we do not even assume that q ∈ {p(·|θ) : θ ∈ Γ}.
unknown) distribution when the number of observations approaches infinity.
A more general question is whether the Bayesian posterior distribution will be concentrated around the true underlying distribution when the sample size is large. This is often
referred to as the consistency of Bayesian posterior distribution, which is certainly the most
fundamental issue for understanding the behavior of Bayesian methods. This problem has
drawn considerable attention in statistics. The classical results include average consistency
results such as Doob's consistency theorem and asymptotic convergence results such as the
Bernstein-von Mises theorem for parametric problems. For infinite-dimensional problems,
one has to choose the prior very carefully, or the Bayesian posterior may not concentrate
around the true underlying distribution, which leads to inconsistency [1, 2]. In [1], the
authors also gave conditions that guarantee the consistency of Bayesian posterior distributions, although convergence rates were not obtained. The convergence rates were studied
in two recent works [3, 8] by using heavy machineries from the empirical process theory.
The purpose of this paper is to develop finite-sample convergence bounds for Bayesian
posterior distributions using a novel approach that not only simplifies the analysis given
in [3, 8], but also leads to tighter bounds. At the heart of our approach are some new
posterior averaging bounds that are related to the PAC Bayes analysis appeared in some
recent machine learning works. These new bounds are of independent interests (though
we cannot fully explore their consequences here) since they can be used to obtain correct
convergence rates for other statistical estimation problems such as least squares regression.
Motivated by our learning bounds, we introduce a generalized family of Bayesian methods,
and show that their convergence behavior relies only on the prior mass in a small neighborhood around the true distribution. This is rather surprising when we consider the example
given in [1], which shows that for the (standard) Bayesian method, even if one puts a positive prior mass around the true distribution, one may still get an inconsistent posterior when
there exist undesirable prior structures far away from the true distribution.
2  The regularization formulation of Bayesian posterior measure
Assume we observe n samples X = {X_1, . . . , X_n} ∈ X^n, independently drawn from the
true underlying distribution Q. We shall call any probability density ŵ_X(θ) with respect to
π that depends on the observation X (and is measurable on X^n × Γ) a posterior distribution.
∀α ∈ (0, 1], we define a generalized Bayesian posterior π̂_α(θ|X) with respect to π as:

    π̂_α(θ|X) = Π_{i=1}^n p^α(X_i|θ) / ∫_Γ Π_{i=1}^n p^α(X_i|θ) dπ(θ).        (1)

We call π̂_α the α-Bayesian posterior. The standard Bayesian posterior is denoted as
π̂(θ|X) = π̂_1(θ|X). Given a probability density w(θ) on Γ with respect to π, we define
the KL-divergence KL(w dπ||dπ) as:

    KL(w dπ||dπ) = ∫_Γ w(θ) ln w(θ) dπ(θ).

Consider a real-valued function f(θ) on Γ; we denote by E_π f(θ) the expectation of f(θ)
with respect to π. Similarly, for a real-valued function ℓ(x) on X, we denote by E_q ℓ(x)
the expectation of ℓ(·) with respect to the true underlying distribution q. We also use E_X
to denote the expectation with respect to the observation X.
The key starting point of our analysis is the following simple observation, which relates the
Bayesian posterior to the solution of an entropy-regularized density estimation problem
(with respect to π). Under this formulation, techniques for analyzing regularized risk
minimization problems, such as those recently investigated by the author, can be applied to
obtain sample complexity bounds for Bayesian posterior distributions. The proof of the
following regularization formulation is straightforward, and we shall skip it due to space
limitations.
Proposition 2.1 For any density w on Γ with respect to π, let

    R̂_X^α(w) = −(α/n) Σ_{i=1}^n E_π w(θ) ln [ p(X_i|θ) / q(X_i) ] + (1/n) KL(w dπ||dπ).

Then R̂_X^α(π̂_α(·|X)) = inf_w R̂_X^α(w).
The above Proposition indicates that the generalized Bayesian posterior minimizes the
regularized empirical risk R̂_X^α(w) among all possible densities w with respect to the prior π.
We thus only need to study the behavior of this regularized empirical risk minimization
problem. One may define the true risk of w by replacing the empirical expectation Ê_X
with the expectation with respect to the true underlying distribution q:

    R_q^α(w) = α E_π w(θ) KL(q||p(·|θ)) + (1/n) KL(w dπ||dπ),        (2)

where KL(q||p) = E_q ln [q(x)/p(x)] is the KL-divergence between q and p, which is always a
non-negative number. This quantity is widely used to measure the closeness of two
distributions p and q. Clearly the Bayesian posterior is an approximate solution to (2) using the
empirical expectation. The first term of R_q^α(w) measures the average KL-divergence of q
and p under the w-density. Since both the first term and the second term are non-negative,
we know immediately that if R_q^α(w) → 0, then the distribution w is concentrated around q.
Using empirical process techniques, one would typically expect to bound R_q^α(w) in terms
of R̂_X^α(w). Unfortunately, this does not work in our case since KL(q||p) is not well-defined
for all p. This implies that as long as w has non-zero concentration around a density p with
KL(q||p) = +∞, then R_q^α(w) = +∞. Therefore we may have R_q^α(π̂(·|X)) = +∞ with
non-zero probability even when the sample size approaches infinity.
A remedy is to consider a distance function that is always well-defined. In statistics, one
often considers the α-divergence for α ∈ (0, 1), which is defined as:

    D_α(q||p) = (1 / (α(1 − α))) E_q [ 1 − (p(x)/q(x))^α ].        (3)

This divergence is always well-defined, and KL(q||p) = lim_{α→0} D_α(q||p). In the statistical
literature, convergence results are often specified under the Hellinger distance (α = 0.5).
We would also like to mention that our learning bound derived later becomes trivial when
α → 0. This is consistent with the above discussion, since R_q^α (corresponding to α = 0)
may not converge at all. However, under additional assumptions, such as the boundedness
of q/p, KL(q||p) exists and can be bounded using the α-divergence D_α(q||p).
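As a concrete sketch of Eqns. (1) and (3) on a finite parameter grid (all names are our own assumptions):

```python
import numpy as np

def alpha_posterior(log_lik, alpha, prior):
    """alpha-Bayesian posterior of Eq. (1) on a finite grid of parameters.
    log_lik[j, i] = ln p(X_i | theta_j); prior[j] is the prior mass of
    theta_j. Setting alpha = 1 recovers the standard posterior."""
    log_w = alpha * log_lik.sum(axis=1) + np.log(prior)
    log_w -= log_w.max()                 # stabilize before exponentiating
    w = np.exp(log_w)
    return w / w.sum()

def alpha_divergence_mc(alpha, q_sample, p_pdf, q_pdf):
    """Monte Carlo estimate of D_alpha(q||p) from Eq. (3), given draws
    q_sample from q and vectorized density callables p_pdf, q_pdf."""
    ratio = p_pdf(q_sample) / q_pdf(q_sample)
    return np.mean(1.0 - ratio**alpha) / (alpha * (1.0 - alpha))
```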
3  Posterior averaging bounds under entropy regularization
The following inequality follows directly from a well-known convex duality. For example,
see [5, 7] for an explanation.
Proposition 3.1 Assume that f(θ) is a measurable real-valued function on Γ, and w(θ) is
a density with respect to π. Then we have

    E_π w(θ) f(θ) ≤ KL(w dπ||dπ) + ln E_π exp(f(θ)).
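On a finite grid, this inequality can be checked numerically with a few lines (a sketch; names are ours):

```python
import numpy as np

def duality_gap(f, w, prior):
    """Gap of Proposition 3.1 on a finite grid Gamma = {theta_1..theta_m}.
    prior sums to 1; w is a density w.r.t. prior (sum(prior * w) == 1).
    The returned gap should always be >= 0."""
    lhs = np.sum(prior * w * f)                                # E_pi[w f]
    kl = np.sum(prior * w * np.log(np.clip(w, 1e-300, None)))  # KL(w dpi || dpi)
    rhs = kl + np.log(np.sum(prior * np.exp(f)))
    return rhs - lhs
```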
The main technical result, which forms the basis of the paper, is given by the following
lemma, where we assume that ŵ_X(θ) is a posterior (a density with respect to π that depends
on X and is measurable on X^n × Γ).
Lemma 3.1 Consider any posterior ŵ_X(θ). The following inequality holds for all
measurable real-valued functions L_X(θ) on X^n × Γ:

    E_X exp[ E_π ŵ_X(θ)(L_X(θ) − ln E_X e^{L_X(θ)}) − KL(ŵ_X dπ||dπ) ] ≤ 1,

where E_X is the expectation with respect to the observation X.
Proof. From Proposition 3.1, we obtain

    L̂(X) := E_π ŵ_X(θ)(L_X(θ) − ln E_X e^{L_X(θ)}) − KL(ŵ_X dπ||dπ)
           ≤ ln E_π exp(L_X(θ) − ln E_X e^{L_X(θ)}).

Now applying Fubini's theorem to interchange the order of integration, we have:

    E_X e^{L̂(X)} ≤ E_X E_π e^{L_X(θ) − ln E_X exp(L_X(θ))} = E_π E_X e^{L_X(θ) − ln E_X exp(L_X(θ))} = 1.   □
The following corollary is a straight-forward consequence of Lemma 3.1. Note that for the
Bayesian method, the loss `? (x) has a form of `(p(x|?)).
Theorem 3.1 (Posterior Averaging Bounds) Under the notation of Lemma 3.1. Let X =
{X1 , . . . , Xn } be n-samples that are independently drawn from q. Consider a measurable
function `? (x) : ? ? X ? R. Then ?t > 0 and real number ?, the following event holds
with probability at least 1 ? exp(?t):
Pn
? i=1 E? w
?X (?)`? (Xi ) + KL(w
?X d?||d?) + t
.
? E? w
?X (?) ln Eq exp(??`? (x))) ?
n
Moreover, we have the following expected risk bound:
Pn
? i=1 E? w
?X (?)`? (Xi ) + KL(w
?X d?||d?)
?EX E? w
?X (?) ln Eq exp(??`? (x))) ? EX
.
n
Proof Sketch. The first bound is a direct consequence of Markov inequality. The second
bound can be obtained by using the fact EX exp(?X ) ? exp(EX ?X ), which follows
from the Jensen?s inequality. 2
The above bounds are immediately applicable to Bayesian posterior distribution. The first
leads to an exponential tail inequality, and the second leads to an expected risk bound.
Before analyzing Bayesian methods in detail in the next section, we shall briefly compare
the above results to the so-called PAC-Bayes bounds, which can be obtained by estimating
the left-hand side using the Hoeffding?s inequality with an appropriately chosen ?. However, in the following, we shall estimate the left-hand side using a Bernstein style bound,
which is much more useful for general statistical estimation problems:
Corollary 3.1 Under the notation of Theorem 3.1, and assume that sup?,x1 ,x2 |`? (x1 ) ?
`? (x2 )| ? 1. Then ?t, ? > 0, with probability of at least 1 ? exp(?t):
n
1X
E? w
?X (?)`? (Xi )
E? w
?X (?)Eq `? (x) ? ??(?)E? w
?X (?)Varq `? (x) ?
n i=1
KL(w
?X d?||d?) + t
,
?n
where ?(x) = (exp(x) ? x ? 1)/x2 and Varq `? (x) = Eq (`? (x) ? Eq `? (x))2 .
+
Proof Sketch. We follow one of the standard derivations of Bernstein inequality outlined
below: it is well known that ?(x) is non-decreasing in x, which in turn implies that
ln Eq exp(??`? (x))) ? ??Eq `? (x) + ?2 ?(?)Eq (`? (x) ? Eq `? (x))2 .
Now applying this bound to the left hand side of Theorem 3.1, we finish the proof. 2
One may use the simple bound Varq `? (x) ? 1/4 and obtain2 .
n
X
`? (Xi )
??(?) KL(w
?X d?||d?) + t
E? w
?X (?)Eq `? (x) ? E? w
?X (?)
+
+
. (4)
n
4
?n
i=1
This inequality holds for any data-independent choice of ?. However, one may easily turn
it into a bound which allows ? to depend on the data using well-known techniques (see [5],
for example). After we optimize ?, the resulting bound
pbecomes similar to the PAC-Bayes
bound [4]. Typically the optimal ? is in the order of KL(w
?X d?||d?)/n,
and hence the
p
rate of convergence given on the right-hand side is no better than O( 1/n). However, the
more interesting case is when there exists a constant b ? 0 such that
Eq (`? (x) ? Eq `? (x))2 ? bEq `? (x).
(5)
This condition appears in the theoretical analysis of many statistical estimation problems,
such as least squares regression, and when the loss function is non-negative (such as classification). It also appears in some analysis of maximum-likelihood estimation (log-loss),
though as we shall see, log-loss can be much more directly handled in our framework using
Theorem 3.1. A modified version of this condition also occurs in some recent analysis of
classification problems even when the problem is not separable. We shall now assume that
(5) holds. It follows from Corollary 3.1 that ?? > 0 such that ??(?) ? 1/b, we have
Pn
?E? w
?X (?) i=1 `? (Xi ) + KL(w
?X d?||d?) + t
E? w
?X (?)Eq `? (x) ?
.
(6)
?(1 ? b??(?))n
Again the above inequality holds for any data-independent ?, but we can easily turn it into
a bound that allows ? to depend on X using standard techniques. However we shall not
list the final result here since this is not the purpose of the paper. The parameter ? can be
optimized, and it is not hard to check that the resulting bound is significantly better than (4)
Pn
i)
when E? w
?X (?) i=1 `? (X
? 0. The ?self-bounding? condition (5) holds in the theon
retical analysis of many statistical estimation problems. To obtain the correct convergence
behavior in such cases (including the Bayesian method which we are interested in here),
inequality (4) is inadequate, and it is essential to use a Bernstein-type bound such as (6). It
is also useful to point out that to analyze such problems, one actually only needs (6) with
an appropriately chosen data-independent ?, which will lead to the correct (minimax) rate
of convergence. Note that if we choose ? to be a constant, then it is possible to achieve a
bound that converges as fast as O(1/n). We shall point out that in [7], a KL-divergence
version of the PAC-Bayes bound was developed for the 0-1 loss using related techniques,
which can lead to a rate as fast as O(ln n/n) if we make near zero errors. However, the
Bernstein style bound given here is more generally applicable and is necessary for more
complicated statistical estimation problems such as least squares regression.
4
Convergence bounds for Bayesian posterior distributions
We shall now analyze the finite sample convergence behavior of Bayesian posterior distributions using Theorem 3.1. Although the exponential tail inequality provides more detailed
information, our discussion will be based on the expected risk bound for simplicity.
2
In this case, slightly tighter results can be obtained by applying the Hoeffding?s exponential
inequality directly to the left-hand side of Theorem 3.1, instead of the method used in Corollary 3.1.
To analyze the Bayesian method, we let `? (x) = ln(q(x)/p(x|?)) in Theorem 3.1. Consider ? ? (0, 1). We also let w
?X (?) be the Bayesian posterior ? ? (?|X) with parameter
? ? [?, 1] defined in (1). Consider an arbitrary data-independent density w(?) with respect
to ?, using (3), we can obtain from Theorem 3.1 the following chain of equations:
1
1 ? ?(1 ? ?)D? (q||p(?|?))
q(x)
= ? EX E? ? ? (?|X) ln Eq exp ?? ln
p(x|?)
#
"
n
X1
q(Xi )
KL(? ? (?|X)d?||d?)
?
ln
+
?EX ?E? ? (?|X)
n p(Xi |?)
n
i=1
"
#
n
n
X
1X
q(Xi )
KL(wd?||d?)
???
p(Xi |?)
?EX ?E? w(?)
ln
+
+
EX sup
ln
n i=1 p(Xi |?)
n
n
q(Xi )
? i=1
EX E? ? ? (?|X) ln
q
=R?
(w) +
n
X
???
p(Xi |?)
EX sup
,
ln
n
q(Xi )
? i=1
where Rq? (w) is defined in (2). Note that the first inequality follows from Theorem 3.1,
and the second inequality follows from Proposition 2.1. The empirical process bound in
the second term can be improved using a more precise bounding method, but we shall
skip it here due to the lack of space. It is not difficult to see (also see Proposition 2.1 and
Proposition 3.1) that (we skip the derivation due to the space limitation):
inf Rq? (w) = ?
w
1
ln E? exp(??nKL(q||p(?|?))).
n
Using the fact ? ln(1 ? x) ? x to simplify the left-hand side, we thus obtain:
EX E? ? ? (?|X)D? (q||p(?|?))
?
? ln E? e??nKL(q||p(?|?)) + (? ? ?)EX sup?
Pn
i=1
i |?)
ln p(X
q(Xi )
?(1 ? ?)n
.
(7)
In the following, we shall compare our analysis with previous results. To be consistent with
the concept used in these previous studies, we shall consider the following quantity:
?
m?,?
? (X, ) = E? ? (?|X)1(D? (q||p(?|?)) ? ),
where 1 is the set indicator function. Intuitively m?,?
? (X, ) is the probability mass of the
?-Bayesian posterior ? ? (?|X) in the region of p(?|?) that is at least -distance away from q
in D? -divergence. Using Markov inequality, we immediately obtain from (7) the following
bound for m?,?
(X):
Pn
i |?)
? ln E? e??nKL(q||p(?|?)) + (? ? ?)EX sup? i=1 ln p(X
q(Xi )
?,?
EX m? (X, ) ?
. (8)
?(1 ? ?)n
Next we would like to estimate the right-hand side of (8). Due to the limitation of space, we
shall only consider a simple truncation estimation, which leads to the correct convergence
rate for non-parametric problems but yields an unnecessary ln n factor for parametric problems (which can be correctly handled with a more precise estimation). We introduce the
following notation, which is essentially the prior measure of an -radius KL-ball around q:
M?KL () = ?(KL(q||p(?|?)) ? ) = E? 1(KL(q||p(?|?)) ? ).
Using this definition, we have E? e??nKL(q||p(?|?)) ? M?KL ()e??n . In addition, we shall
define the -upper bracketing of ? (introduced in [1]), denoted by N (?, ), as the minimum
number of non-negative functions {fi } on X with respect to ? such that Eq (fi /q) = 1 + ,
and ?? ? ?, ?i such that p(x|?) ? fi (x) a.e. [?]. We have
N (?,) P
n
f (X )
X
X
n
1
p(Xi |?) 1
ln j i
EX sup
? EX ln
e i=1 q(Xi )
ln
n
q(Xi )
n
? i=1
j=1
N (?,)
Pn
f (X )
X
1
ln N (?, )
ln j i
? ln
EX e i=1 q(Xi ) =
+ ln(1 + ).
n
n
j=1
Therefore we obtain from (8) that ?s > 0:
ln N (?, ) + n
1
ln M?KL () + (? ? ?)
.
n
n
The above bound immediately implies the following consistency and convergence rate theorem for Bayesian posterior distribution:
?(1 ? ?)sEX m?,?
? (X, s) ? ? ?
Theorem 4.1 Consider a sequence of Bayesian prior distributions ?n on a parameter
space ?n , which may be different for different sample sizes. Consider a sequence of positive
numbers {n } such that
?1
sup
ln M?KL
(n ) < ?,
(9)
n
n nn
then ?sn > 0 such that sn ? ?, and ?? ? (0, 1), m?,?
?n (X, sn n ) ? 0 in probability.
Moreover, if
ln N (?n , n )
< ?,
(10)
nn
n
then ?sn > 0 such that sn ? ?, and ?? ? (0, 1), m1,?
?n (X, sn n ) ? 0 in probability.
sup
The first claim implies that for all ? < 1, the ?-Bayesian posterior ? ? is concentrated in
an n ball around q in D? divergence, and the rate of convergence is Op (n ). Note that
n is determined only by the local property of ?n around the true distribution q. It also
immediately implies that as long as M?KL
() > 0 for all > 0, the ?-Bayesian method
n
with ? < 1 is consistent.
The second claim applies to the standard Bayesian method. Its consistency requires an
additional assumption (10), which depends on global properties of the prior ?n . This may
seem somewhat surprising at first, but the condition is necessary. In fact, the counterexample given in [1] shows that the standard Bayesian method can be inconsistent even
under the condition M?KL
() > 0 for all > 0. Therefore a standard Bayesian procedure
n
can be ill-behaved even if we put a sufficient amount of prior around the true distribution.
The consistency theorem given in [1] also relies on the upper entropy number N (?, ).
However, no convergence rates were established. Here we obtained a rate of convergence
result for the standard Bayesian method using their covering definitions. Other definitions
of covering (e.g. Hellinger covering) were used in more recent works to obtain rate of
convergence for non-parametric Bayesian methods [3, 8]. Although it is possible to derive bounds using those different covering definitions in our analysis, we shall not work
out the details here. However, we shall point out that these works made assumptions not
completely necessary. For example, in [3], the definition of M?KL () requires additional
assumptions that Eq ln(q/p(?|?))2 ? 2 . This stronger condition is not needed in our analysis. Finally we shall mention that the bound of the form in Theorem 4.1 is known to
produce optimal convergence rates for non-parametric problems (see [3, 8] for examples).
5
Conclusion
In this paper, we formulated an extended family of Bayesian algorithms as empirical logrisk minimization under entropy regularization. We then derived general posterior averaging bounds under entropy regularization that are suitable for analyzing Bayesian methods.
These new bounds are of independent interests since they lead to Bernstein style exponential inequalities, which are crucial for obtaining the correct convergence behavior for many
statistical estimation problems such as least squares regression.
Using the posterior averaging bounds, we obtain new convergence results for a generalized
family of Bayesian posterior distributions. Our results imply that the ?-Bayesian method
with ? < 1 is more robust than the standard Bayesian method since its convergence behavior is completely determined by the local prior density around the true distribution. Although the standard Bayesian method is ?optimal? in a certain averaging sense, its behavior
is heavily dependent on the regularity of the prior distribution globally. What happens is
that the standard Bayesian method can put too much emphasis on the difficult part of the
prior distribution, which degrades the estimation quality in the easier parts where we are
actually more interested in. Therefore even if one is able to guess the true distribution
by putting a large prior mass around its neighborhood, the Bayesian method can still illbehave if one accidentally makes bad choices elsewhere. It is thus difficult to design good
Bayesian priors. The new theoretical insights obtained here imply that unless one completely understands the impact of the prior, it is much safer to use an ?-Bayesian method.
Acknowledgments
The author would like to thank Andrew Barron, Ron Meir, and Matthias Seeger for helpful
discussions and comments.
References
[1] Andrew Barron, Mark J. Schervish, and Larry Wasserman. The consistency of posterior distributions in nonparametric problems. Ann. Statist., 27(2):536?561, 1999.
[2] Persi Diaconis and David Freedman. On the consistency of Bayes estimates. Ann.
Statist., 14(1):1?67, 1986. With a discussion and a rejoinder by the authors.
[3] Subhashis Ghosal, Jayanta K. Ghosh, and Aad W. van der Vaart. Convergence rates of
posterior distributions. Ann. Statist., 28(2):500?531, 2000.
[4] D. McAllester. PAC-Bayesian stochastic model selection. Machine Learning, 51(1):5?
21, 2003.
[5] Ron Meir and Tong Zhang. Generalization error bounds for Bayesian mixture algorithms. Journal of Machine Learning Research, 4:839?860, 2003.
[6] C. P. Robert. The Bayesian Choice: A Decision Theoretic Motivation. Springer Verlag,
New York, 1994.
[7] M. Seeger. PAC-Bayesian generalization error bounds for Gaussian process classification. JMLR, 3:233?269, 2002.
[8] Xiaotong Shen and Larry Wasserman. Rates of convergence of posterior distributions.
Ann. Statist., 29(3):687?714, 2001.
| 2439 |@word version:2 briefly:1 stronger:1 sex:1 pick:3 mention:2 boundedness:1 com:1 wd:6 surprising:3 guess:1 provides:1 ron:2 lx:6 zhang:2 height:1 direct:1 become:1 introduce:3 hellinger:2 expected:3 behavior:10 globally:1 decreasing:1 estimating:1 underlying:5 bounded:1 notation:3 mass:4 moreover:2 what:1 minimizes:1 developed:1 ghosh:1 guarantee:1 positive:2 before:1 local:3 consequence:3 analyzing:3 emphasis:1 studied:1 acknowledgment:1 procedure:4 empirical:8 significantly:1 get:1 cannot:1 undesirable:1 selection:1 put:3 risk:7 applying:3 optimize:1 measurable:6 center:1 attention:1 starting:1 independently:3 convex:1 shen:1 subhashis:1 simplicity:1 immediately:5 wasserman:2 insight:1 heavily:1 region:1 rq:9 complexity:1 depend:2 completely:4 basis:1 easily:2 represented:1 derivation:3 fast:2 neighborhood:2 widely:1 valued:4 statistic:2 vaart:1 varq:3 final:1 sequence:2 matthias:1 jayanta:1 achieve:1 convergence:26 regularity:1 produce:1 converges:1 derive:1 inferencing:1 develop:1 andrew:2 op:1 eq:19 skip:3 implies:5 concentrate:2 radius:1 correct:5 stochastic:1 mcallester:1 larry:2 generalization:2 proposition:7 tighter:2 hold:7 around:14 exp:15 claim:2 purpose:2 estimation:11 applicable:2 minimization:3 clearly:1 always:3 gaussian:1 modified:1 rather:1 pn:7 corollary:4 derived:3 elx:5 indicates:1 likelihood:1 check:1 seeger:2 sense:1 helpful:1 inference:4 dependent:1 el:1 unlikely:1 typically:2 doob:1 interested:2 issue:2 among:1 classification:3 ill:1 denoted:2 integration:1 field:1 beq:1 simplify:1 randomly:1 diaconis:1 divergence:9 statistician:3 interest:2 certainly:1 mixture:1 chain:1 necessary:4 machinery:1 unless:1 indexed:1 theoretical:3 nkl:4 inadequate:1 too:1 density:14 fundamental:2 von:1 again:1 choose:2 hoeffding:2 style:3 depends:3 later:1 view:3 picked:1 analyze:3 sup:8 bayes:5 complicated:1 square:4 yield:1 bayesian:61 straight:2 definition:5 proof:5 mi:1 persi:1 lim:1 carefully:1 actually:2 understands:1 appears:2 fubini:1 follow:1 improved:1 formulation:3 though:2 sketch:2 hand:7 replacing:1 lack:1 quality:1 behaved:1 concept:1 true:19 remedy:1 regularization:5 hence:1 self:1 covering:4 yorktown:1 generalized:7 theoretic:1 novel:2 recently:1 fi:3 tail:2 m1:1 counterexample:1 consistency:9 outlined:1 similarly:1 posterior:40 recent:4 inf:2 verlag:1 certain:1 inequality:16 watson:2 inconsistency:1 der:1 minimum:1 additional:3 somewhat:1 converge:2 paradigm:2 relates:1 desirable:1 technical:1 long:2 impact:1 regression:4 essentially:1 expectation:7 addition:1 bracketing:1 crucial:1 appropriately:2 comment:1 inconsistent:2 seem:1 call:2 near:1 bernstein:6 finish:1 gave:1 simplifies:2 whether:2 motivated:1 handled:2 york:1 useful:2 generally:1 detailed:1 amount:1 nonparametric:1 statist:4 concentrated:3 generate:1 meir:2 exist:2 correctly:1 shall:17 key:1 putting:1 drawn:4 schervish:1 tzhang:1 place:1 family:5 throughout:1 decision:1 bound:42 infinity:2 x2:3 xiaotong:1 separable:1 according:1 ball:2 slightly:1 happens:1 intuitively:1 heart:1 ln:41 equation:1 turn:3 needed:1 know:1 observe:1 away:3 barron:2 appearing:1 frequentist:1 robustness:1 include:1 classical:1 question:1 quantity:3 occurs:1 parametric:5 degrades:1 concentration:2 enhances:1 distance:3 thank:1 considers:2 trivial:1 difficult:3 unfortunately:1 robert:1 negative:4 design:1 unknown:3 upper:2 observation:5 markov:2 finite:2 extended:1 precise:2 arbitrary:1 ghosal:1 introduced:1 david:1 kl:35 specified:1 optimized:1 established:1 able:1 below:1 appeared:1 including:1 explanation:1 event:1 
suitable:1 regularized:3 indicator:1 minimax:1 imply:2 minimaxity:1 sn:6 prior:21 understanding:1 literature:1 asymptotic:1 fully:1 admissibility:1 expect:1 loss:5 interesting:1 limitation:3 rejoinder:1 sufficient:1 consistent:3 heavy:1 ibm:2 elsewhere:1 truncation:1 retical:1 accidentally:1 side:7 understand:1 aad:1 van:1 xn:3 qn:1 author:4 forward:2 interchange:1 made:1 far:2 approximate:1 global:1 assumed:1 unnecessary:1 xi:24 don:1 nature:3 robust:1 obtaining:1 investigated:1 main:1 bounding:2 motivation:1 freedman:1 x1:6 referred:1 ny:1 tong:2 exponential:4 jmlr:1 theorem:16 bad:2 pac:6 jensen:1 list:1 closeness:1 exists:2 essential:1 easier:1 entropy:5 explore:1 applies:1 springer:1 relies:2 formulated:1 ann:4 considerable:1 hard:1 safer:1 determined:3 infinite:1 averaging:6 lemma:4 called:1 duality:1 mark:1 ularized:1 reg:1 ex:28 |
1,584 | 244 | A Neural Network for Feature Extraction
A Neural Network for Feature Extraction
Nathan Intrator
Div. of Applied Mathematics, and
Center for Neural Science
Brown University
Providence, RI 02912
ABSTRACT
The paper suggests a statistical framework for the parameter estimation problem associated with unsupervised learning in a neural
network, leading to an exploratory projection pursuit network that
performs feature extraction, or dimensionality reduction.
1
INTRODUCTION
The search for a possible presence of some unspecified structure in a high dimensional space can be difficult due to the curse of dimensionality problem, namely
the inherent sparsity of high dimensional spaces. Due to this problem, uniformly
accurate estimations for all smooth functions are not possible in high dimensions
with practical sample sizes (Cox, 1984, Barron, 1988).
Recently, exploratory projection pursuit (PP) has been considered (Jones, 1983) as a
potential method for overcoming the curse of dimensionality problem (Huber, 1985),
and new algorithms were suggested by Friedman (1987), and by Hall (1988, 1989).
The idea is to find low dimensional projections that provide the most revealing
views of the full-dimensional data emphasizing the discovery of nonlinear effects
such as clustering.
Many of the methods of classical multivariate analysis turn out to be special cases
of PP methods. Examples are principal component analysis, factor analysis, and
discriminant analysis. The various PP methods differ by the projection index optimized.
719
720
Intrator
Neural networks seem promising for feature extraction, or dimensionality reduction,
mainly because of their powerful parallel computation. Feature detecting functions
of neurons have been studied in the past two decades (von der Malsburg, 1973, Nass
et al., 1973, Cooper et aI., 1979, Takeuchi and Amari, 1979). It has also been shown
that a simplified neuron model can serve as a principal component analyzer (Oja,
1982).
This paper suggests a statistical framework for the parameter estimation problem
associated with unsupervised learning in a neural network, leading to an exploratory
PP network that performs feature extraction, or dimensionality reduction, of the
training data set. The formulation, which is similar in nature to PP, is based on
a minimization of a cost function over a set of parameters, yielding an optimal
decision rule under some norm. First, the formulation of a single and a multiple
feature extraction are presented. Then a new projection index (cost function) that
favors directions possessing multimodality, where the multimodality is measured
in terms of the separability property of the data, is presented. This leads to the
synaptic modification equations governing learning in Bienenstock, Cooper, and
Munro (BCM) neurons (1982). A network is presented based on the multiple feature
extraction formulation, and both, the linear and nonlinear neurons are analysed.
2
SINGLE FEATURE EXTRACTION
We associate a feature with each projection direction. With the addition of a
threshold function we can say that an input posses a feature associated with that
direction if its projection onto that direction is larger than the threshold. In these
terms, a one dimensional projection would be a single feature extraction.
The approach proceeds as follows: Given a compact set of parameters, define a
family of loss functions, where the loss function corresponds to a decision made by
the neuron whether to fire or not for a given input. Let the risk be the averaged
loss over all inputs. Minimize the risk over all possible decision rules, and then
minimize the risk over the parameter set. In case the risk does not yield a meaningful
minimization problem, or when the parameter set over which the minimization takes
place can be restricted by some a-priori knowledge, a penalty, i.e. a measure on the
parameter set, may be added to the risk.
Define the decision problem (11, Fo, P, L, A), where 11 = (x(1), ... , x(n?), x(i) E R N ,
is a fixed set of input vectors, (11, Fo, P) the corresponding probability space, A =
{O, I} the decision space, and {Le }eEBM, Le : 11 x A t---+ R is the family of loss
functions. BM is a compact set in RM. Let 1) be the space of all decision rules.
The risk Re : 1) t---+ R, is given by:
n
Re(c5)
= L P(x(i?)Le(x(i), c5(x(i))).
(2.1)
i=l
For a fixed 8, the optimal decision c5e is chosen so that:
(2.2)
A Neural Network for Feature Extraction
Since the minimization takes place over a finite set, the minimizer exists. In particular, for a given XCi) the decision 88 (x(i?) is chosen so that L8(X(i),88(x(i?)) <
L8(x(i), 1- 88(x(i?)).
Now we find an optimal
Bthat
minimizes the risk, namely,
Bwill
be such that:
(2.3)
The minimum with respect to
f}
exits since BM is compact.
R8(88 ) becomes a function that depends only on
in RN, R8 can be viewed as a projection index.
3
f},
and when
f}
represents a vector
MULTI-DIMENSIONAL FEATURE EXTRACTION
In this case we have a single layer network of interconnected units, each performing
a single feature extraction. All units receive the same input and the interaction between the units is via lateral inhibition. The formulation is similar to single feature
extraction, with the addition of interaction between the single feature extractors.
Let Q be the number of features to be extracted from the data. The multiple de(8~1), ... ,8~Q?) takes values in A
{0,1}Q. The risk of node k
cision rule 88
is given by: R~k)(8) = l::=l P(x(i?)L~k)(x(i), 8(k)(x(i?)), and the total risk of the
=
=
network is R8(8) = l:~=l R~k)(8). Proceeding as before, we can minimize over the
decision rules 8 to get 88 , and then minimize over f} to get B, as in equation (2.3).
The coupling of the equations via the inhibition, and the relation between the
different features extracted is exhibited in the loss function for each node and will
become clear through the next example.
4
FINDING THE OPTIMAL
FUNCTION
4.1
f)
FOR A SPECIFIC LOSS
A SINGLE BCM NEURON - ONE FEATURE EXTRACTION
In this section, we present an exploratory PP method with a specific loss function.
The differential equations performing the optimization turn out to be a good approximation of the low governing synaptic weight modification in the BCM theory
for learning and memory in neurons. The formal presentation of the theory, and
some theoretical analysis is given in (Bienenstock, 1980, Bienenstock et al., 1982),
mean field theory for a network based on these neurons is presented in (Scofield
and Cooper, 1985, Cooper and Scofield, 1988), more recent analysis based on the
statistical viewpoint is in (Intrator 1990), computer simulations and the biological
relevance are discussed in (Saul et al., 1986, Bear et al., 1987, Cooper et al., 1988).
We start with a short review of the notations and definitions of BCM theory.
Consider a neuron with input vector x = (Xl, ... , XN), synaptic weights vector
m (ml' ... , mN), both in R N , and activity (in the linear region) c X . m.
=
=
721
722
Intrator
=
=
=
Define em
E[(x? m)2], ?(e, em) e2 - jee m , 4>(e, em) e2 - ~eem. The input
x, which is a stochastic process, is assumed to be of Type II t.p mixing, bounded, and
piecewise constant. The t.p mixing property specifies the dependency of the future
of the process on its past. These assumptions are needed for the approximation of
the resulting deterministic equation by a stochastic one and are discussed in detail
in (Intrator, 1990). Note that e represents the linear projection of x onto m, and
we seek an optimal projection in some sense.
The BCM synaptic modification equations are given by: m = JL(t)4>(x . m, em)x,
mo, where JL(t) is a global modulator which is assumed to take into account
all the global factors affecting the cell, e.g., the beginning or end of the critical
period, state of arousal, etc.
m(O)
=
Rewriting the modification equation as m = JL(t)(x . m)(x . m - ~!1m)X, we see
that unlike a classical Hebb-Stent rule, the threshold !1m is dynamic. This gives
the modification equation the desired stability, with no extra conditions such as
saturation of the activity, or normalization of II m II, and also yields a statistically
meaningful optimization.
=
Returning to the statistical formulation, we let !1
m be the parameter to be
estimated according to the above formulation and define an appropriate loss function
depending on the cell's decision whether to fire or not. The loss function represents
the intuitive idea that the neuron will fire when its activity is greater than some
threshold, and will not otherwise. We denote the firing of the neuron by a = 1.
Define K
= -JL JJe... ... ?(s, em)ds. Consider the following loss function:
L8(X, a) = Lm(x, a) =
_
(x? m) >
-JL .l(z.m)
e ... 4>A( s, em )d s,
.l{z.m) A
K - JL e... 4>(s, em)ds, (x? m) <
(x? m) ~
-JL .l(z.m)
e... 4>A( s, em )d s,
K - JL .l{z.m)
0...
4>A( s, em )d s, (x? m) >
e m,
em,
em,
em,
a=1
a=1
a=O
a=O
(4.1)
It follows from the definition of L8 and from the definition of 68 in (2.2) that
Lm(x, 6m ) = -JL
{(z.m)
Je ...
?(s, em)ds
= - JL {(x. m)3 -
E[(x . m)2](x . m)2}
(4.2)
3
The above definition of the loss function suggests that the decision of a neuron
whether to fire or not is based on a dynamic threshold (x . m) > em. It turns out
that the synaptic modification equations remain the same if the decision is based
on a fixed threshold. This is demonstrated by the following loss function, which
leads to the same risk as in equation (4.3): K = -JL Joje ... ?(s, em)ds,
L8(X, a)
= Lm(x, a) =
((z.m) A(
-JL Jo
4> s, em )d s,
K - JL J~z . m) ?(s, em)ds,
((z.m) A(
-JL Jo
4> s, em )d s,
(z.m)
K - JL Jo
4>(s, em ds,
A
_)
(x . m) ~ 0, a =
(x? m) < 0, a =
(x? m) ~ 0, a =
(x. m) > 0, a =
1
1
0
0
(4.1')
A Neural Network for Feature Extraction
The risk is given by:
(4.3)
The following graph represents the ? function and the associated loss function
Lm(x, 6m ) of the activity c.
THE
4>
FUNCTION
THE LOSS FUNCTION
em.
em, the loss
Fig. 1: The Function ? and the Loss Functions for a Fixed m and
From the graph of the loss function it follows that for any fixed m and
is small for a given input x, when either x . m is close to zero or negative, or when
x . m is larger than em. This suggests, that the preferred directions for a fixed 8m
will be such that the projected single dimensional distribution differs from normal
in the center of the distribution, in the sense that it has a multi-modal distribution
with a distance between the two peaks larger than 8m ? Rewriting (4.3) we get
Re(6e)
__ !!.. E[(x? m)3] _ 1
E2[(x . m)2] 3 {E2[(x . m)2]
}.
(4.4)
The term E[(x.m)3]/E2[(x.m)2] can be viewed as some measure of the skewness of
the distribution, which is a measure of deviation from normality and therefore an
interesting direction (Diaconis and Friedman, 1984), in accordance with Friedman
(1987) and Hall's (1988, 1989) argument that it is best to seek projections that
differ from the normal in the center of the distribution rather than in the tails.
Since the risk is continuously differentiable, its minimization can be done via the
gradient descent method with respect to m, namely:
( 4.5)
Notice that the resulting equation represents an averaged deterministic equation
of the stochastic BCM modification equations. It turns out that under suitable
conditions on the mixing of the input x and the global function IL, equation (4.5) is
a good approximation of its stochastic version.
When the nonlinearity of the neuron is emphasized, the neuron's activity is then
defined as c = 0'( X ? m), where 0' usually represents a smooth sigmoidal function.
em is then defined as E[0'2(x . m)], and the loss function is similar to the one
given by equation (4.1) except that (x? m) is replaced by O'(x, m). The gradient of
723
724
Intrator
the risk is given by: -VTn.Rnt(8m) = J-LE[?(O'(x, m), E>m ) 0" x], where
the derivative of 0' at the point (x . m). Note that
function, e.g. radial symmetric kernels.
4.2
0'
0"
represents
may represent any nonlinear
THE NETWORK - MULTIPLE FEATURE EXTRACTION
In this case we have Q identical nodes, which receive the same input and inhibit
each other. Let the neuronal activity be denoted by Ck = X ? mk. We define the
inhibited activity Ck = Ck - 11 Eitk ci' and the threshold e~ = E[cil. In a more
general case, the inhibition may be defined to take into account the spatial location
of adjacent neurons, namely, Ck = Ei Aikci, where Aik represents different types
of inhibitions, e.g. Mexican hat. Since the following calculations are valid for both
kinds of inhibition we shall introduce only the simpler one.
The loss function is similar to the one defined in a single feature extraction with the
exception that the activity C = X? m is replaced by C. Therefore the risk for node k is
given by: Rk = -~{E[c~] - (E[ci])2}, and the total risk is given by R = E~=l Rk.
The gradient of R is given by:
8R = -J-L[I-11(Q 8mk
l)lE[?(ck,e~)x].
(4.6)
Equation (4.6) demonstrates the ability of the network to perform exploratory projection pursuit in parallel, since the minimization of the risk involves minimization
of nodes 1, ... , Q, which are loosely coupled.
The parameter 11 represents the amount of lateral inhibition in the network, and
is related to the amount of correlation between the different features sought by
the network. Experience shows that when 11 ~ 0, the different units may all become selective to the simplest feature that can be extracted from the data. When
l1(Q - 1) ~ 1, the network becomes selective to those inputs that are very far apart
(under the l2 norm), yielding a classification of a small portion of the data, and
mostly unresponsiveness to the rest of the data. When 0 < 11( Q - 1) < 1, the network becomes responsive to substructures that may be common to several different
inputs, namely extract invariant features in the data. The optimal value of 11 has
been estimated by data driven techniques.
When the non linearity of the neuron is emphasized the activity is defined (as in
the single neuron case) as Ck = 0'( X ? mk)' Ck, e~, and Rk are defined as before. In
this case
= -l1O"(X' mi)x,
= O"(x? mk)x, and equation (4.6) becomes:
:!:
4.3
:!kk
OPTIMAL NETWORK SIZE
A major problem in network solutions to real world problems is optimal network
size. In our case, it is desirable to try and extract as many features as possible on
A Neural Network for Feature Extraction
one hand, but it is clear that too many neurons in the network will simply inhibit
each other, yielding sub-optimal results. The following solution was adopted: We
replace each neuron in the network with a group of neurons which all receive the
same input, and the same inhibition from adjacent groups. These neurons differ
from one another only in their initial synaptic weights. The output of each neuron
is replaced by the average group activity. Experiments show that the resulting
network is more robust to noise and outliers in the data. Furthermore, it is observed
that groups that become selective to a true feature in the data, posses a much
smaller inter-group variance of their synaptic weight vector than those which do
not become responsive to a coherent feature. We found that eliminating neurons
with large inter-group variance and retraining the network, may yield improved
feature extraction properties.
The network has been applied to speech segments, in an attempt to extract some
features from CV pairs of isolated phonemes (Seebach and Intrator, 1988).
5
DISCUSSION
The PP method based on the BCM modification function, has been found capable of
effectively discovering non linear data structures in high dimensional spaces. Using
a parallel processor and the presented network topology, the pursuit can be done
faster than in the traditional serial methods.
The projection index is based on polynomial moments, and is therefore computationally attractive. When only the nonlinear structure in the data is of interest, a
sphering transformation (Huber, 1981, Friedman, 1987), can be applied first to the
data for removal of all the location, scale, and correlational structure from the data.
When compared with other PP methods, the highlights of the presented method are
i) the projection index concentrates on directions where the separability property as
well as the non-normality of the data is large, thus giving rise to better classification
properties; ii) the degree of correlation between the directions, or features extracted
by the network can be regulated via the global inhibition, allowing some tuning of
the network to different types of data for optimal results; iii) the pursuit is done on
all the directions at once thus leading to the capability of finding more interesting
structures than methods that find only one projection direction at a time. iv) the
network's structure suggests a simple method for size-optimization.
Acknowledgements
I would like to thank Professor Basilis Gidas for many fruitful discussions.
Supported by the National Science Foundation, the Office of Naval Research, and
the Army Research Office.
References
Barron A. R. (1988) Approximation of densities by sequences of exponential families.
Submitted to Ann. Statist.
725
726
Intrator
Bienenstock E. L. (1980) A theory of the development of neuronal selectivity. Doctoral
dissertation, Brown University, Providence, RI
Bienenstock E. L., L. N Cooper, and P. W. Munro (1982) Theory for the development
of neuron selectivity: orientation specificity and binocular interaction in visual cortex.
J.Neurosci. 2:32-48
Bear M. F., L. N Cooper, and F. F. Ebner (1987) A Physiological Basis for a Theory of
Synapse Modification. Science 237:42-48
Cooper L. N, and F. Liberman, and E. Oja (1979) A theory for the acquisition and loss
of neurons specificity in visual cortex. Bioi. Cyb. 33:9-28
Cooper L. N, and C. L. Scofield (1988) Mean-field theory of a neural network. Proc. Natl.
Acad. Sci. USA 85:1973-1977
Cox D. D. (1984) Multivariate smoothing spline functions. SIAM J. Numer. Anal. 21
789-813
Diaconis P., and D. Freedman (1984) Asymptotics of Graphical Projection Pursuit. The
Annals of Statistics, 12 793-815.
Friedman J. H. (1987) Exploratory Projection Pursuit. Journal of the American Statistical
Association 82-397:249-266
Hall P. (1988) Estimating the Direction in which Data set is Most Interesting. Probab.
Theory ReI. Fields 80, 51-78
Hall P. (1989) On Polynomial-Based Projection Indices for Exploratory Projection Pursuit.
The Annals of Statistics, 17,589-605.
Huber P. J. (1981) Projection Pursuit. Research Report PJH-6, Harvard University, Dept.
of Statistics.
Huber P. J. (1985) Projection Pursuit. The Annal. of Stat. 13:435-475
Intrator N. (1990) An Averaging Result for Random Differential Equations. In Press.
Jones M. C. (1983) The Projection Pursuit Algorithm for Exploratory Data Analysis.
Unpublished Ph.D. dissertation, University of Bath, School of Mathematics.
von der Malsburg, C. (1973) Self-organization of orientation sensitivity cells in the striate
cortex. Kybernetik 14:85-100
Nass M. M., and L. N Cooper (1975) A theory for the development of feature detecting
cells in visual cortex. Bioi. Cybernetics 19:1-18
Oja E. (1982) A Simplified Neuron Model as a Principal Component Analyzer. J. Math.
Biology, 15:267-273
Saul A., and E. E. Clothiaux, 1986) Modeling and Simulation III: Simulation of a Model for
Development of Visual Cortical specificity. J . of Electrophysiological Techniques, 13:279306
Scofield C. L., and L. N Cooper (1985) Development and properties of neural networks.
Contemp. Phys. 26:125-145
Seebach B. S., and N. Intrator (1988) A learning Mechanism for the Identification of
Acoustic Features. (Society for Neuroscience).
Takeuchi A., and S. Amari (1979) Formation of topographic maps and columnar microstructures in nerve fields. Bioi. Cyb. 35:63-72
| 244 |@word cox:2 version:1 eliminating:1 polynomial:2 norm:2 retraining:1 simulation:3 seek:2 moment:1 reduction:3 initial:1 past:2 analysed:1 discovering:1 beginning:1 short:1 dissertation:2 detecting:2 math:1 node:5 location:2 sigmoidal:1 simpler:1 rnt:1 become:4 differential:2 multimodality:2 introduce:1 inter:2 huber:4 multi:2 curse:2 becomes:4 estimating:1 notation:1 bounded:1 linearity:1 kind:1 unspecified:1 minimizes:1 skewness:1 finding:2 transformation:1 returning:1 rm:1 demonstrates:1 unit:4 before:2 accordance:1 kybernetik:1 acad:1 firing:1 vtn:1 doctoral:1 studied:1 suggests:5 statistically:1 averaged:2 practical:1 differs:1 stent:1 asymptotics:1 revealing:1 projection:23 radial:1 specificity:3 get:3 onto:2 close:1 risk:16 fruitful:1 map:1 deterministic:2 xci:1 center:3 demonstrated:1 clothiaux:1 rule:6 stability:1 exploratory:8 annals:2 aik:1 associate:1 harvard:1 observed:1 region:1 inhibit:2 dynamic:2 cyb:2 segment:1 serve:1 exit:1 basis:1 various:1 formation:1 larger:3 say:1 amari:2 otherwise:1 favor:1 ability:1 statistic:3 topographic:1 sequence:1 differentiable:1 interconnected:1 interaction:3 bath:1 mixing:3 intuitive:1 seebach:2 coupling:1 depending:1 stat:1 measured:1 school:1 involves:1 differ:3 direction:11 concentrate:1 rei:1 stochastic:4 biological:1 considered:1 hall:4 normal:2 mo:1 lm:4 major:1 sought:1 estimation:3 proc:1 minimization:7 rather:1 ck:7 office:2 naval:1 mainly:1 sense:2 bienenstock:5 relation:1 selective:3 classification:2 orientation:2 denoted:1 priori:1 development:5 spatial:1 special:1 smoothing:1 field:4 once:1 extraction:19 identical:1 represents:9 biology:1 jones:2 unsupervised:2 future:1 report:1 spline:1 piecewise:1 inherent:1 inhibited:1 oja:3 diaconis:2 national:1 replaced:3 fire:4 friedman:5 attempt:1 organization:1 interest:1 numer:1 yielding:3 natl:1 accurate:1 capable:1 experience:1 iv:1 loosely:1 re:3 arousal:1 desired:1 isolated:1 theoretical:1 annal:1 mk:4 modeling:1 cost:2 deviation:1 too:1 providence:2 dependency:1 density:1 peak:1 siam:1 sensitivity:1 continuously:1 na:2 jo:3 von:2 american:1 derivative:1 leading:3 account:2 potential:1 de:1 depends:1 view:1 try:1 portion:1 start:1 parallel:3 capability:1 substructure:1 minimize:4 il:1 takeuchi:2 variance:2 phoneme:1 yield:3 identification:1 cybernetics:1 processor:1 submitted:1 fo:2 phys:1 synaptic:7 definition:4 acquisition:1 pp:8 e2:5 associated:4 mi:1 knowledge:1 dimensionality:5 electrophysiological:1 nerve:1 modal:1 improved:1 synapse:1 formulation:6 done:3 furthermore:1 governing:2 binocular:1 correlation:2 d:6 hand:1 ei:1 nonlinear:4 microstructures:1 usa:1 effect:1 brown:2 true:1 l1o:1 symmetric:1 attractive:1 adjacent:2 self:1 performs:2 l1:1 recently:1 possessing:1 common:1 jl:15 discussed:2 tail:1 association:1 ai:1 cv:1 tuning:1 mathematics:2 analyzer:2 nonlinearity:1 cortex:4 inhibition:8 etc:1 multivariate:2 recent:1 apart:1 driven:1 selectivity:2 der:2 minimum:1 greater:1 period:1 ii:4 multiple:4 desirable:1 full:1 smooth:2 faster:1 calculation:1 serial:1 kernel:1 normalization:1 represent:1 cell:4 receive:3 addition:2 affecting:1 extra:1 rest:1 unlike:1 posse:2 exhibited:1 seem:1 contemp:1 presence:1 iii:2 modulator:1 topology:1 idea:2 whether:3 munro:2 penalty:1 speech:1 clear:2 amount:2 statist:1 ph:1 simplest:1 specifies:1 notice:1 estimated:2 neuroscience:1 shall:1 group:6 threshold:7 rewriting:2 graph:2 powerful:1 place:2 family:3 decision:12 layer:1 activity:10 ri:2 nathan:1 argument:1 performing:2 sphering:1 according:1 remain:1 smaller:1 em:23 
separability:2 modification:9 outlier:1 restricted:1 invariant:1 gidas:1 computationally:1 equation:18 turn:4 mechanism:1 needed:1 end:1 adopted:1 pursuit:11 intrator:10 barron:2 appropriate:1 responsive:2 hat:1 clustering:1 graphical:1 malsburg:2 giving:1 classical:2 society:1 added:1 striate:1 traditional:1 div:1 gradient:3 regulated:1 distance:1 thank:1 lateral:2 sci:1 discriminant:1 index:6 kk:1 difficult:1 mostly:1 negative:1 rise:1 anal:1 ebner:1 perform:1 allowing:1 neuron:26 finite:1 descent:1 rn:1 overcoming:1 namely:5 pair:1 unpublished:1 optimized:1 bcm:7 coherent:1 acoustic:1 suggested:1 proceeds:1 usually:1 sparsity:1 saturation:1 memory:1 critical:1 suitable:1 mn:1 normality:2 coupled:1 extract:3 review:1 probab:1 discovery:1 l2:1 removal:1 acknowledgement:1 loss:20 bear:2 highlight:1 interesting:3 foundation:1 degree:1 jee:1 viewpoint:1 l8:5 supported:1 formal:1 scofield:4 saul:2 eem:1 dimension:1 xn:1 valid:1 world:1 cortical:1 made:1 c5:2 projected:1 simplified:2 bm:2 far:1 compact:3 cision:1 preferred:1 liberman:1 ml:1 global:4 assumed:2 search:1 decade:1 promising:1 nature:1 robust:1 neurosci:1 noise:1 freedman:1 neuronal:2 fig:1 je:1 hebb:1 cooper:11 cil:1 sub:1 exponential:1 xl:1 extractor:1 rk:3 emphasizing:1 specific:2 emphasized:2 r8:3 physiological:1 exists:1 effectively:1 ci:2 columnar:1 simply:1 army:1 visual:4 corresponds:1 minimizer:1 extracted:4 bioi:3 viewed:2 presentation:1 ann:1 replace:1 professor:1 except:1 uniformly:1 averaging:1 principal:3 mexican:1 total:2 correlational:1 meaningful:2 exception:1 relevance:1 dept:1 |
1,585 | 2,440 | Online Learning of Non-stationary Sequences
Claire Monteleoni and Tommi Jaakkola
MIT Computer Science and Artificial Intelligence Laboratory
200 Technology Square
Cambridge, MA 02139
{cmontel,tommi}@ai.mit.edu
Abstract
We consider an online learning scenario in which the learner can make
predictions on the basis of a fixed set of experts. We derive upper and
lower relative loss bounds for a class of universal learning algorithms involving a switching dynamics over the choice of the experts. On the basis
of the performance bounds we provide the optimal a priori discretization for learning the parameter that governs the switching dynamics. We
demonstrate the new algorithm in the context of wireless networks.
1 Introduction
We focus on the online learning framework in which the learner has access to a set of experts but possesses no other a priori information relating to the observation sequence. In
such a scenario the learner may choose to quickly identify a single best expert to rely on
[12], or switch from one expert to another in response to perceived changes in the observation sequence [8], thus making assumptions about the switching dynamics. The ability to
shift emphasis from one ?expert? to another, in response to changes in the observations, is
valuable in many applications, including energy management in wireless networks.
Many algorithms developed for universal prediction on the basis of a set of experts have
clear performance guarantees (e.g., [12, 6, 8, 14]). The performance bounds characterize
the regret relative to the best expert, or best sequence of experts, chosen in hindsight. Algorithms with such relative loss guarantees have also been developed for adaptive game
playing [5], online portfolio management [7], paging [3] and the k-armed bandit problem
[1]. Other relative performance measures for universal prediction involve comparing across
systematic variations in the sequence [4].
Here we extend the class of algorithms considered in [8], by learning the switching-rate
parameter online, at the optimal resolution. Our goal of removing the switching-rate as a
parameter is similar to Vovk?s in [14], though the approach and the comparison class for
the bounds differ. We provide upper and lower performance bounds, and demonstrate the
utility of these algorithms in the context of wireless networks.
2 Algorithms and performance guarantees
The learner has access to n experts, a1 , . . . , an , and each expert makes a prediction at each
time-step over a finite (known) time period t = 1, . . . , T . We denote the ith expert at
time t as ai,t to suppress any details about how the experts arrive at their predictions and
what information is available to facilitate the predictions. These details may vary from one
expert to another and may change over time. We denote the non-negative prediction loss
of expert i at time t as L(i, t), where the loss, a function of t, naturally depends on the
observation yt ? Y at time t. We consider here algorithms that provide a distribution p t (i),
i = 1, . . . , n, over the experts at each time point. The prediction loss of such an algorithm
is denoted by L(pt , t).
For the purpose of deriving learning algorithms such as Static-expert and Fixedshare described in [8], we associate the loss of each expert with a predictive probability so
that ? log p(yt |yt?1 , . . . , y1 , i) = L(i, t). We define the loss of any probabilistic prediction
to be the log-loss:
L(pt , t) = ? log
n
X
i=1
pt (i) p(yt |i, y1 , . . . , yt?1 ) = ? log
n
X
pt (i)e?L(i,t)
(1)
i=1
Many other definitions of the loss corresponding to pt (?) can be bounded by a scaled logloss [6, 8]. We omit such modifications here as they do not change the essential nature of
the algorithms nor their analysis.
The algorithms combining expert predictions can be now derived as simple Bayesian estimation methods calculating the distribution pt (i) = P (i|y1 , . . . , yt?1 ) over the experts on
the basis of the observations seen so far. p1 (i) = 1/n for any such method as any other initial bias could be detrimental in terms of relative performance guarantees. Updating p t (?)
involves assumptions about how the optimal choice of expert can change with time. For
simplicity, we consider here only a Markov dynamics, defined by p(i t |it?1 ; ?), where ?
parameterizes the one-step transition probabilities. Allowing switches at rate ?, we define 1
p(it |it?1 ; ?) = (1 ? ?)?(it , it?1 ) +
?
[1 ? ?(it , it?1 )]
n?1
(2)
which corresponds to the Fixed-share algorithm, and yields the Static-expert
algorithm when ? = 0. The Bayesian algorithm updating pt (?) is defined analogously to
forward propagation in generalized HMMs (allowing observation dependence on past):
pt (i; ?) =
n
1 X
pt?1 (j; ?)e?L(j,t?1) p(i|j; ?)
Zt j=1
(3)
where Zt normalizes the distribution. While we have made various probabilistic assumptions (e.g., conditional independence of expert predictions) in deriving the algorithm, the
algorithms can be used in a context where no such statistical assumptions about the observation sequence or the experts are warranted. The performance guarantees we provide
below for these algorithms do not require these assumptions.
2.1 Relative loss bounds
The existing upper bound on the relative loss of the Fixed-share algorithm [8] is expressed in terms of the loss of the algorithm relative to the loss of the best k-partition of
the observation sequence, where the best expert is assigned to each segment. We start by
providing here a similar guarantee but characterizing the regret relative to the best Fixedshare algorithm, parameterized by ?? , where ?? is chosen in hindsight after having seen
the observation sequence. Our proof technique is different from [8] and gives rise to simple
guarantees for a wider class of prediction methods, along with a lower bound on this regret.
1
where ?(?, ?) is the Kronecker delta.
PT
Lemma 1 Let LT (?) = t=1 L(pt;? , t), ? ? [0, 1], be the cumulative loss of the Fixedshare algorithm on an arbitrary sequence of observations. Then for any ?, ? ? :
i
h
? ? )?D(?k?)]
?
(4)
LT (?) ? LT (?? ) = ? log E??Q
e(T ?1)[D(?k?
?
Proof: The cumulative log-loss of the Bayesian algorithm can be expressed in terms of
negative log-probability of all the observations:
X
LT (?) = ? log[
?(~s)p(~s; ?)]
(5)
~
s
where ~s = {i1 , . . . , iT }, ?(~s) =
QT
t=1
e?L(it ,t) , and p(~s; ?) = p1 (i1 )
QT
t=2
p(it |it?1 ; ?).
Consequently, LT (?) ? LT (?? )
"
#
P
X ?(~s)p(~s; ?? ) p(~s; ?)
?(~s)p(~s; ?)
~
s
P
= ? log P
= ? log
r )p(~r; ?? )
r )p(~r; ?? ) p(~s; ?? )
~
r ?(~
~
r ?(~
~
s
#
"
#
"
X
X
p(~
s;?)
s; ?)
? log p(~s;?? )
? p(~
= ? log
Q(~s; ? )e
= ? log
Q(~s; ? )
p(~s; ?? )
~
s
~
s
"
#
X
1??
?
? s) log ?? +(1??(~
? s)) log 1??? )
= ? log
Q(~s; ?? )e(T ?1)(?(~
~
s
?
where Q(~s; ? ) is the posterior probability over the choices of experts along the sequence,
induced by the hindsight-optimal switching-rate ?? , and ?
? (~s) is the empirical fraction of
non-self-transitions in the selection sequence ~s. This can be rewritten as the expected value
of ?
? under distribution Q. 2
We obtain upper and lower bounds on regret by optimizing Q in Q, the set of all distributions over ?
? ? [0, 1], of the expression for regret.
2.1.1 Upper bound
? ? )?D(?k?)]
?
The upper bound follows from solving: maxQ?Q ? log E??Q
e(T ?1)[D(?k?
?
subject to the constraint that ?? has to be the hindsight-optimal switching-rate, i.e. that:
d
(LT (?) ? LT (?? ))|?=?? = 0
(C1) d?
Theorem 1 Let LT (?? ) = min? LT (?) be the loss of the best Fixed-share algorithm
chosen in hindsight. Then for any ? ? [0, 1], LT (?) ? LT (?? ) ? (T ? 1) D(?? k?), where
D(?? k?) is the relative entropy between Bernoulli distributions defined by ? ? and ?.
The bound vanishes when ? = ?? and does not depend directly on the number of experts.
The dependence on n may appear indirectly through ?? , however. While the regret appears
proportional to T , this dependence vanishes for any reasonable learning algorithm that is
guaranteed to find ? such that D(?? k?) ? O(1/T ), as we will show in Section 3.
Theorem 1 follows, as a special case, from an analogous result for algorithms based on
arbitrary first-order Markov transition dynamics. In the general case, the regret bound
is: (T ? 1) maxi D(P (j|i, ?? ) k P (j|i, ?)), where ?, ?? are now transition matrices, and
D(?k?) is the relative entropy between discrete distributions. For brevity, we provide only
the proof of the scalar case of Theorem 1.
d
Proof: Constraint (C1) can be expressed simply as d?
LT (?)|?=?? = 0, which is equiv?
alent to E??Q
{?
?
}
=
?
.
Taking
the
expectation
outside
the logarithm, in Equation 4,
?
results in the upper bound. 2
2.1.2 Lower bound
The relative losses obviously satisfy LT (?) ? LT (?? ) ? 0 providing a trivial lower bound.
Any non-trivial lower bound on the regret cannot be expressed only in terms of ? and ? ? ,
but needs to incorporate some additional information about the losses along the observation
sequence. We express the lower bound on the regret as a function of the relative quality ? ?
of the minimum ?? :
?? (1 ? ?? ) d2
?? =
LT (?)|?=??
(6)
T ? 1 d?2
where the normalization guarantees that ? ? ? 1. ? ? ? 0 for any ?? that minimizes LT (?).
? ? )?D(?k?)]
?
The lower bound is found by solving: min Q?Q ? log E??Q
e(T ?1)[D(?k?
?
? ? (T ?1)
d2
?
subject to both constraint (C1) and (C2) d?
2 (LT (?) ? LT (? ))|?=?? = ?? (1??? )
Theorem 2 Let ? ? and ?? be defined as above based on an arbitrary observation seT ?1 ?? ?1
T ?1 1??? ?1
and q0 = [1 + 1??
. Then
quence, and q1 = [1 + 1??
?
? 1??? ]
?? ]
i
h
? ? )?D(?k?)]
?
(7)
LT (?) ? LT (?? ) ? ? log E??Q
e(T ?1)[D(?k?
?
where Q(1) = q1 and Q((?? ? q1 )/(1 ? q1 )) = 1 ? q1 whenever ? ? ?? ; Q(0) = q0 and
Q(?? /(1 ? q0 )) = 1 ? q0 otherwise.
Proof omitted due to space constraints. The upper and lower bounds agree for all ?, ? ? ?
(0, 1) when ? ? ? 1. Thus there may exist observation sequences on which Fixedshare, using ? 6= ?? , must incur regret linear in T .
2.2 Algorithm Learn-?
We now give an algorithm to learn the switching-rate simultaneously to updating the probability weighting over the experts. Since the cumulative loss Lt (?) of each Fixedshare algorithm running with switching parameter ? can be interpreted as a negative
log-probability, the posterior distribution over the switching-rate becomes
pt (?) = P (?|yt?1 , . . . , y1 ) ? e?Lt?1 (?)
(8)
assuming a uniform prior over ? ? [0, 1]. As a predictive distribution p t (?) does not
include the observation at the same time point. We can view this algorithm as finding
the single best ??-expert,? where the collection of ?-experts is given by Fixed-share
algorithms running with different switching-rates, ?.
We will consider a finite resolution version of this algorithm, allowing only m possible
choices for the switching-rate, {?1 , . . . , ?m }. For a sufficiently large m and appropriately
chosen values {?j }, we expect to be able to always find ?j ? ?? and suffer only a minimal
additional loss due to not being able to represent the hindsight-optimal value exactly.
Let pt,j (i) be the distribution over experts defined by the j th Fixed-share algorithm
corresponding to ?j , and let ptop
t (j) be the top-level algorithm producing a weighting over
such Fixed-share experts. The top-level algorithm is given by
1 top
ptop
p (j)e?L(pt?1,j ,t?1)
(9)
t (j) =
Zt t?1
where ptop
1 (j) = 1/m, and the loss per time-step becomes
m
m X
n
X
X
?L(pt,j ,t)
?L(i,t)
= ? log
(10)
Ltop (ptop
ptop
ptop
t , t) = ? log
t (j)e
t (j)pt,j (i)e
j=1
as is appropriate for a hierarchical Bayesian method.
j=1 i=1
3 Relative loss and optimal discretization
We derive here the optimal choice of the discrete set {?1 , . . . , ?m } on the basis of the
upper bound on relative loss. We begin by extending Theorem 1 to provide an analogous
guarantee for the Learn-? algorithm.
Corollary to Theorem 1 Let Ltop
T be the cumulative loss of the hierarchical Learn-?
algorithm using {?1 , . . . , ?m }. Then
?
Ltop
min D(?? k?j )
(11)
T ? LT (? ) ? log(m) + (T ? 1)
j=1,...,m
The hierarchical algorithm involves two competing goals that manifest themselves in the
regret: 1) the ability to identify the best Fixed-share expert, which degrades for larger
m, and 2) the ability to find ?j whose loss is close to the optimal ? for that sequence, which
improves for larger m. The additional regret arising from having to consider a number of
non-optimal values of the parameter, in the search, comes from the relative loss bound
for the Static-Expert algorithm, i.e. the relative loss associated with tracking the
best single expert [8, 12]. This regret is simply log(m) in our context. More precisely,
the corollary follows directly from successive application of that single expert relative loss
bound, and then our Fixed-share relative loss bound (Theorem 1):
?
Ltop
(12)
T ? LT (? ) ? log(m) + min LT (?j )
j=1,...,m
? log(m) + (T ? 1)
min D(?? k?j )
j=1,...,m
(13)
3.1 Optimal discretization
We start by finding the smallest discrete set of switching-rate parameters so that any additional regret due to discretization does not exceed $(T-1)\delta$, for some threshold $\delta$. In other words, we find $m = m(\delta)$ values $\alpha_1, \ldots, \alpha_{m(\delta)}$ such that

$$\max_{\alpha^* \in [0,1]} \ \min_{j=1,\ldots,m(\delta)} D(\alpha^*\|\alpha_j) = \delta \qquad (14)$$
The resulting discretization, a function of $\delta$, can be found algorithmically as follows. First, we set $\alpha_1$ so that $\max_{\alpha^* \in [0,\alpha_1]} D(\alpha^*\|\alpha_1) = D(0\|\alpha_1) = \delta$, implying that $\alpha_1 = 1 - e^{-\delta}$. Each subsequent $\alpha_j$ is found conditionally on $\alpha_{j-1}$ so that

$$\max_{\alpha^* \in [\alpha_{j-1}, \alpha_j]} \ \min\{D(\alpha^*\|\alpha_{j-1}),\ D(\alpha^*\|\alpha_j)\} = \delta \qquad (15)$$

The maximizing $\alpha^*$ can be solved explicitly by equating the two relative entropies, giving

$$\alpha^* = \log\!\left(\frac{1-\alpha_{j-1}}{1-\alpha_j}\right) \left[\log\!\left(\frac{\alpha_j}{\alpha_{j-1}} \cdot \frac{1-\alpha_{j-1}}{1-\alpha_j}\right)\right]^{-1} \qquad (16)$$

which lies within $[\alpha_{j-1}, \alpha_j]$ and is an increasing function of the new point $\alpha_j$. Substituting this $\alpha^*$ back into one of the relative entropies, we can set $\alpha_j$ so that $D(\alpha^*\|\alpha_{j-1}) = \delta$. The relative entropy is an increasing function of $\alpha_j$ (through $\alpha^*$) and the solution is obtained easily via, e.g., bisection search. The iterative procedure of generating new values $\alpha_j$ can be stopped after the new point exceeds 1/2; the remaining levels can be filled in by symmetry, so long as we also include 1/2. The resulting discretization is not uniform but denser towards the edges; the spacing around the edges is $O(\delta)$, and $O(\sqrt{\delta})$ around 1/2. A sketch of this procedure is given below.
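The iterative construction can be sketched as follows, using the binary relative entropy $D(a\|b)$ and an illustrative bisection tolerance; this is a paraphrase of the procedure above, not the authors' code.

```python
# A sketch of the discretization of Section 3.1; clipping and the 100-step
# bisection are illustrative numerical choices.
import numpy as np

def D(a, b):
    eps = 1e-12
    a = np.clip(a, eps, 1 - eps); b = np.clip(b, eps, 1 - eps)
    return a * np.log(a / b) + (1 - a) * np.log((1 - a) / (1 - b))

def discretize(delta):
    alphas = [1 - np.exp(-delta)]          # alpha_1 from D(0||alpha_1) = delta
    while alphas[-1] < 0.5:
        prev = alphas[-1]
        lo, hi = prev, 1 - 1e-9
        for _ in range(100):               # bisection: D(alpha*||alpha_{j-1}) = delta
            mid = 0.5 * (lo + hi)
            # Eq. (16): the equalizing alpha* between prev and the trial point mid
            a_star = (np.log((1 - prev) / (1 - mid))
                      / np.log(mid * (1 - prev) / (prev * (1 - mid))))
            if D(a_star, prev) < delta:
                lo = mid
            else:
                hi = mid
        alphas.append(0.5 * (lo + hi))
    # fill in the upper half by symmetry, keeping 1/2
    left = [a for a in alphas if a < 0.5]
    return np.array(sorted(left + [0.5] + [1 - a for a in left]))
```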
For small values of $\delta$, the logarithm of the number of resulting discretization levels, or $\log m(\delta)$, closely approximates $-\frac{1}{2}\log\delta$. We can then optimize the regret bound (11): $-\frac{1}{2}\log\delta + (T-1)\delta$, yielding $\delta^* = 1/(2T)$ and $m(\delta^*) = \sqrt{2T}$. Thus we will need $O(\sqrt{T})$ settings of $\alpha$, as in the case of choosing the levels uniformly with spacing $\sqrt{\delta}$. The uniform discretization would not, however, possess the same regret guarantee, resulting in a higher than necessary loss due to discretization.
3.1.1 Optimized regret bound for Learn-α
The optimized regret bound for Learn-$\alpha(\delta^*)$ is thus (approximately) $\frac{1}{2}\log T + c$, which is comparable to analysis of universal coding for word-length $T$ [11]. The optimal discretization for learning the parameter is not affected by $n$, the number of original experts. Unlike regret bounds for Fixed-share, the value of the bound does not depend on the observation sequence. And notably, in comparison to the lower bound on Fixed-share's performance, Learn-$\alpha$'s regret is at most logarithmic in $T$.
4 Application to wireless networks
We applied the Learn-α algorithm to an open problem in computer networks: managing
the tradeoff between energy consumption and performance in wireless nodes of the IEEE
802.11 standard [9]. Since a node cannot receive packets while asleep, yet maintaining the
awake state drains energy, the existing standard uses a fixed polling time at which a node
should wake from the sleep state to poll its neighbors for buffered packets. Polling at fixed
intervals however, does not respond optimally to current network activity. This problem is
clearly an appropriate application for an online learning algorithm, such as Fixed-share
due to [8]. Since we are concerned with wireless, mobile nodes, there is no principled way
to set the switching-rate parameter a priori, as network activity varies not only over time,
but across location, and the location of the mobile node is allowed to change. We can
therefore expect an additional benefit from learning the switching-rate.
Previous work includes Krashinsky and Balakrishnan's [10] Bounded Slowdown algorithm, which uses an adaptive control loop to change polling time based on network conditions. This algorithm uses parameterized exploration intervals, and the tradeoff is not managed optimally. Steinbach applied reinforcement learning [13] to this problem, yet required an unrealistic assumption: that network activity possesses the Markov property.
We instantiate the experts as deterministic algorithms assuming constant polling times. Thus we use $n$ experts, each corresponding to a different but fixed polling time in milliseconds (ms): $T_i,\ i \in \{1, \ldots, n\}$. The experts form a discretization over the range of possible polling times. We then apply the Learn-$\alpha$ algorithm exactly as in our previous exposition, using the discretization defined by $\delta^*$, and thus running $m(\delta^*)$ sub-algorithms, each running Fixed-share with a different $\alpha_j$. In this application, the learning algorithm can only receive observations, and perform learning updates, when it is awake. So our subscript $t$ here signifies only wake times, not every time epoch at which bytes might arrive.
We define the loss function, L, to reflect the tradeoff inherent in the conflicting goals of
minimizing both the energy usage of the node, and the network latency it introduces by
sleeping. We propose a loss function that is one of many functions proportional to this
tradeoff. We define loss per expert i as:
$$\mathrm{Loss}(i, t) = \gamma\,\frac{I_t\, T_i^2}{2\, T_t} + \frac{1}{T_i} \qquad (17)$$

where $I_t$ is the observation the node receives, of how many bytes arrived upon awakening at time $t$, and $T_t$ is the length of time that the node just slept. The first term models the average latency introduced into the network by buffering those bytes, and scales $I_t$ to the number of bytes that would have arrived had the node slept for time $T_i$ instead of $T_t$, under the assumption that the bytes arrived at a uniform rate. The second term models the energy consumption of the node, based on the design that the node wakes only after an interval $T_i$ to poll for buffered bytes, and the fact that it consumes less energy when asleep than awake. The objective function is a sum of convex functions and thus admits a unique minimum. $\gamma > 0$ allows for scaling between the units of information and time, and the ability to encode a preference for the ratio between energy and latency that the user favors.
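A direct transcription of Eq. (17) follows, with γ and the 100ms-spaced expert grid taken from the experiments later in this section; the function and variable names are illustrative.

```python
# A sketch of the polling-time loss of Eq. (17).
import numpy as np

def polling_loss(I_t, T_slept, T_experts, gamma=1e-7):
    """Loss(i, t) = gamma * I_t * T_i**2 / (2 * T_slept) + 1 / T_i
    I_t:       bytes found buffered upon awakening
    T_slept:   length of the interval just slept (ms)
    T_experts: array of fixed polling times T_i (ms)
    """
    latency = gamma * I_t * T_experts**2 / (2.0 * T_slept)  # avg buffering delay
    energy = 1.0 / T_experts                                # wake-up (energy) cost
    return latency + energy

# e.g., n = 10 experts spanning 100ms..1000ms at 100ms spacing
experts = np.arange(100.0, 1001.0, 100.0)
losses = polling_loss(I_t=5000, T_slept=300.0, T_experts=experts)
```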
[Figure 1 appears here: four panels of cumulative-loss curves. Panels a) and b) plot cumulative loss versus α for Fixed-share(α), together with the IEEE 802.11 protocol, Static-expert, Learn-α(δ*), the best expert (100ms), and an arbitrary expert (500ms); panels c) and d) plot the cumulative loss of Learn-α versus 1/δ for n = 10 and n = 5.]
Figure 1: a) Cumulative loss of Fixed-share(α) as a function of α, compared to the cumulative loss on the same trace of the 802.11 protocol, Static-expert, and Learn-α(δ*). Figure b) zooms in on the first 0.002 of the α range. c) Cumulative loss of Learn-α(δ), as a function of 1/δ, when n = 10, and d) n = 5. Circles at 1/δ* = 2T.
4.0.2 Experiments
We used traces of real network activity from [2], a UC Berkeley home dial-up server that monitored users accessing HTTP files from home. Multiple overlapping connections, passing through a collection node over several days, were recorded by start and end times, and number of bytes transferred. Per connection, we smoothed the total number of bytes uniformly over 10ms intervals spanning its duration. We set $\gamma = 1.0 \times 10^{-7}$, calibrated to attain polling times within the range of the existing protocol.
Figure 1a) and b) compare cumulative loss of the various algorithms on a 4 hour trace, with observation epochs every 10ms. This corresponds to approximately 26,100 training iterations for the learning algorithms. In the typical online learning scenario, T, the number of learning iterations, i.e. the time horizon parameter in the loss bounds, is just the number of observation epochs. In this application, the number of training epochs need not match the number of observation epochs, since the application involves sleeping during many observation epochs, and learning is only done upon awakening. Since in these experiments the performance of the three learning algorithms is compared by each algorithm using n experts spanning the range of 1000ms at regularly spaced intervals of 100ms, to obtain a prior estimate of T, we assume a mean sleep interval of 550ms, the mean of the experts.
The Static-expert algorithm achieved lower cumulative loss than the best expert, since it can attain the optimal smoothed value over the desired range of polling times, whereas the expert values just form a discretization. On this trace, the optimal α for Fixed-share turns out to be extremely low. So for most settings of α, one would be better off using a Static-expert model, yet as the second graph shows, there is a value of α below which it is beneficial to use Fixed-share. This lends validity to our fundamental goal of being able to quantify the level of non-stationarity of a process, in order to better model it. Moreover, there is a clear advantage to using Learn-α, since without prior knowledge of the stochastic process to be observed, there is no optimal way to set α. Figure 1c) and d) show the cumulative loss of Learn-α as a function of 1/δ. We see that choosing $\delta = \frac{1}{2T}$ matches the point in the curve beyond which one cannot significantly reduce cumulative loss by decreasing δ. As expected, the performance of the algorithm levels off after the optimal δ that we can compute a priori. Our results also verify that the optimal δ is not significantly affected by the number of experts n.
5 Conclusion
We proved upper and lower bounds on the regret for a class of online learning algorithms, applicable to any sequence of observations. The bounds extend to richer models of non-stationary sequences, allowing the switching dynamics to be governed by an arbitrary transition matrix. We derived the regret-optimal discretization (including the overall resolution) for learning the switching-rate parameter in a simple switching dynamics, yielding an algorithm with stronger guarantees than previous algorithms. We exemplified the approach in the context of energy management in wireless networks. In future work, we hope to extend the online estimation of α and the optimal discretization to learning a full transition matrix.
References
[1] P. Auer, N. Cesa-Bianchi, Y. Freund, and R. E. Schapire. Gambling in a rigged casino: the adversarial multi-armed bandit problem. In Proc. of the 36th Annual Symposium on Foundations of Computer Science, pages 322-331, 1995.
[2] Berkeley. UC Berkeley home IP web traces. In http://ita.ee.lbl.gov/html/contrib/UCB.homeIP-HTTP.html, 1996.
[3] A. Blum, C. Burch, and A. Kalai. Finely-competitive paging. In IEEE 40th Annual Symposium on Foundations of Computer Science, page 450, New York, New York, October 1999.
[4] D. P. Foster and R. Vohra. Regret in the on-line decision problem. Games and Economic Behavior, 29:7-35, 1999.
[5] Y. Freund and R. Schapire. Adaptive game playing using multiplicative weights. Games and Economic Behavior, 29:79-103, 1999.
[6] D. Haussler, J. Kivinen, and M. K. Warmuth. Sequential prediction of individual sequences under general loss functions. IEEE Trans. on Information Theory, 44(5):1906-1925, 1998.
[7] D. P. Helmbold, R. E. Schapire, Y. Singer, and M. K. Warmuth. On-line portfolio selection using multiplicative updates. In International Conference on Machine Learning, pages 243-251, 1996.
[8] M. Herbster and M. K. Warmuth. Tracking the best expert. Machine Learning, 32:151-178, 1998.
[9] IEEE. Computer society LAN MAN standards committee. In IEEE Std 802.11: Wireless LAN Medium Access Control and Physical Layer Specifications, August 1999.
[10] R. Krashinsky and H. Balakrishnan. Minimizing energy for wireless web access with bounded slowdown. In MobiCom 2002, Atlanta, GA, September 2002.
[11] R. Krichevsky and V. Trofimov. The performance of universal encoding. IEEE Trans. on Information Theory, 27(2):199-207, 1981.
[12] N. Littlestone and M. K. Warmuth. The weighted majority algorithm. In IEEE Symposium on Foundations of Computer Science, pages 256-261, 1989.
[13] C. Steinbach. A reinforcement-learning approach to power management. In AI Technical Report, M.Eng Thesis, Artificial Intelligence Laboratory, MIT, May 2002.
[14] V. Vovk. Derandomizing stochastic prediction strategies. Machine Learning, 35:247-282, 1999.
1,586 | 2,441 | The doubly balanced network of spiking neurons: a memory model with high capacity
Yuval Aviel*
Interdisciplinary Center for Neural Computation
Hebrew University
Jerusalem, Israel 91904
[email protected]
David Horn
School of Physics
Tel Aviv University
Tel Aviv, Israel 69978
[email protected]
Moshe Abeles
Interdisciplinary Center for Neural Computation
Hebrew University
Jerusalem, Israel 91904
[email protected]
Abstract
A balanced network leads to contradictory constraints on memory
models, as exemplified in previous work on accommodation of
synfire chains. Here we show that these constraints can be
overcome by introducing a 'shadow' inhibitory pattern for each
excitatory pattern of the model. This is interpreted as a doublebalance principle, whereby there exists both global balance
between average excitatory and inhibitory currents and local
balance between the currents carrying coherent activity at any
given time frame. This principle can be applied to networks with
Hebbian cell assemblies, leading to a high capacity of the
associative memory. The number of possible patterns is limited by
a combinatorial constraint that turns out to be P=0.06N within the
specific model that we employ. This limit is reached by the
Hebbian cell assembly network. To the best of our knowledge this
is the first time that such high memory capacities are demonstrated
in the asynchronous state of models of spiking neurons.
1 Introduction
Numerous studies analyze the different phases of unstructured networks of spiking
neurons [1, 2]. These networks with random connectivity possess a phase of
asynchronous activity, the asynchronous state (AS), which is the most interesting
one from the biological perspective, since it is similar to physiological data.
Unstructured networks, however, do not hold information in their connectivity
matrix, and therefore do not store memories.
Binary networks with ordered connectivity matrices, or structured networks, and
their ability to store and retrieve memories, have been extensively studied in the
past [3-8]. Applicability of these results to biologically plausible neuronal models is
questionable. In particular, models of spiking neurons are known to have modes of
synchronous global oscillations. Avoiding such modes, and staying in an AS, is a
major constraint on networks of spiking neurons that is absent in most binary neural
networks. As we will show below, it is this constraint that imposes a limit on
capacity in our model. Existing associative memory models of spiking neurons have
not strived for maximal pattern capacity [3, 4, 8].
Here, using an integrate-and-fire model, we embed structured synaptic connections
in an otherwise unstructured network and study the capacity limit of the system. The
system is therefore macroscopically unstructured, but microscopically structured.
The unstructured network model is based on Brunel's [1] balanced network of
integrate-and-fire neurons. In his model, the network possesses different phases, one
of which is the AS. We replace his unstructured excitatory connectivity by a semistructured one, including a super-position of either synfire chains or Hebbian cell
assemblies.
The existence of a stable AS is a fundamental prerequisite of the system. There are
two reasons for that: First, physiological measurements of cortical tissues reveal an
irregular neuronal activity and an asynchronous population activity. These findings
match the properties of the AS. Second, in term of information content, the entropy
of the system is the highest when firing probability is uniformly distributed, as in an
AS. In general, embedding one or two patterns will not destabilize the AS.
Increasing the number of embedded patterns, however, will eventually destabilize
the AS, leading to global oscillations.
In previous work [9], we have demonstrated that the cause of AS instability is
correlations between neurons that result from the presence of structure in the
network. The patterns, be it Hebbian cell assemblies (HCA) or pools occurring in
synfire chains (SFC), have an important characteristic: neurons that are members of
the same pattern (or pool) share a large portion of their inputs. This common input
correlates neuronal activities both when a pattern is activated and when both
neurons are influenced by random activity. If too many patterns are embedded in the
network, too many neurons become correlated due to common inputs, leading to
globally synchronized deviations from mean activity.
A qualitative understanding of this state of affairs is provided by a simple model of
a threshold linear pair of neurons that receive n excitatory common, and correlated,
inputs, and K-n excitatory, as well as K inhibitory, non-common uncorrelated
inputs. Thinking of these neurons as belonging to a pattern or a pool within a
network, we can obtain an interesting self-consistent result by assuming the
correlation of the pair of neurons to be also the correlation in their common
correlated input (as is likely to be the case in a network loaded with HCA or SFC).
We find then [9] that there exists a critical pattern size, $n_c$, below which correlations decay but above which correlations are amplified. Furthermore, the following scaling was found to exist:

$$n_c = r_c \sqrt{K} \qquad (1)$$
Implications of this model for the whole network are that: (i) rc is independent of
N, the size of the network, (ii) below nc the AS is stable, and (iii) above nc the AS is
unstable.
Using extensive computer simulations we were able [9] to validate all these
predictions. In addition, keeping n<nc, we were able to observe traveling synfire
waves on top of global asynchronous activity.
The pattern's size $n$ is also limited from below, $n > n_{min}$, by the requirement that $n$ excitatory post-synaptic potentials (PSPs), on average, drive a neuron across its threshold. Since $N > K$ and typically $N \gg K$, together with Eq. (1) it follows that $N \gg (n_{min}/r_c)^2$. In this paper we propose a solution that enables small $n_{min}$ and large $r_c$ values, which in turn enables embedding a large number of patterns in much smaller networks. This is made possible by the doubly-balanced construction to be outlined below.
2 The double-balance principle
Counteracting the excitatory correlations with inhibitory ones is the principle that
will allow us to solve the problem. Since we deal with balanced networks, in which
the mean excitatory input is balanced by an inhibitory one, we note that this
principle imposes a second type of balancing condition, hence we refer to it as the
double- balance principle.
In the following, we apply this principle by introducing synaptic connections
between any excitatory pattern and its randomly chosen inhibitory pattern. These
inhibitory patterns, which we call shadow patterns, are activated after the excitatory
patterns fire, but have no special in-pattern connectivity or structured projections
onto other patterns. The premise is that correlations evolved in the excitatory
patterns will elicit correlated inhibitory activity, thus balancing the network's
average correlation level. The size of the shadow pattern has to be small enough, so
that the global network activity will not be quenched, yet large enough, so that the
excitatory correlation will be counteracted. A balanced network that is embedded
with patterns and their shadow patterns will be referred to as a doubly balanced
network (DBN), to be contrasted with the singly balanced network (SBN) where
shadow patterns are absent.
3 Application of the double-balance principle
3.1 The Network
We model neuronal activity with the Integrate and Fire [10] model. All neurons
have the same parameters: $\tau = 10$ ms, $\tau_{ref} = 2.5$ ms, $C = 250$ pF. PSPs are modeled by a delta function with fixed delay. The number of synapses on a neuron is fixed and set to $K_E$ excitatory synapses from the local network, $K_E$ excitatory synapses from external sources and $K_I$ inhibitory synapses from the local network. See Aviel et al. [9] for details. All synapses of each group will be given fixed values. It is allowed for one pre-synaptic neuron to make more than one connection to one post-synaptic neuron. The network possesses $N_E$ excitatory neurons and $N_I \equiv \gamma N_E$ inhibitory neurons. Connectivity is sparse, $K_E/N_E = K_I/N_I = \epsilon$ (we use $\epsilon = 0.1$).
A Poisson process with rate $\nu_{ext} = 10$ Hz models the external source. If a neuron of population $y$ innervates a neuron of population $x$, its synaptic strength $J_{xy}$ is defined as

$$J_{xE} \equiv \frac{J_0}{\sqrt{K_E}}, \qquad J_{xI} \equiv -\frac{g J_0}{\sqrt{K_I}}$$

with $J_0 = 10$ and $g = 5$. Note that $J_{xI} = -\frac{g}{\sqrt{\gamma}}\, J_{xE}$, hence $g/\sqrt{\gamma}$ controls the balance between the two populations.
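The bookkeeping implied by these definitions can be sketched as follows; note that the ratio γ = N_I/N_E is not fixed in this excerpt, so the value 0.25 used below is an assumption.

```python
# A sketch of the connectivity and synaptic scaling just described.
# gamma = 0.25 is an assumed value (the excerpt does not state N_I / N_E).
import numpy as np

def network_params(N_E, gamma=0.25, eps=0.1, J0=10.0, g=5.0):
    N_I = int(gamma * N_E)
    K_E = int(eps * N_E)          # K_E / N_E = eps
    K_I = int(eps * N_I)          # K_I / N_I = eps, so K_I = gamma * K_E
    J_E = J0 / np.sqrt(K_E)       # excitatory PSP amplitude
    J_I = -g * J0 / np.sqrt(K_I)  # inhibitory; J_I = -(g / sqrt(gamma)) * J_E
    return N_I, K_E, K_I, J_E, J_I
```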
Within an HCA pattern the neurons have high connection probability with one
another. Here it is achieved by requiring L of the synapses of a neuron in the
excitatory pattern to originate from within the pattern. Similarly, a neuron in the
inhibitory shadow pattern dedicates L of its synapses to the associated excitatory
pattern. In a SFC, each neuron in an excitatory pool is fed by L neurons from the
previous pool. This forms a feed forward connectivity. In addition, when shadow
pools are present, each neuron in a shadow pool is fed by L neurons from its
associated excitatory pool.
In both cases $L = C_L \sqrt{K_E}$, with $C_L = 2.5$. The size of the excitatory patterns (i.e. the number of neurons participating in a pattern) or pools, $n_E$, is also chosen to be proportional to $\sqrt{K_E}$ (see Aviel et al. 2003 [9]), $n_E = C_n \sqrt{K_E}$, where $C_n$ varies. This is a suitable choice, because of the behavior of the critical $n_c$ of Eq. (1), and is needed for the meaningful memory activity (of the HCA or SFC) to overcome synaptic noise.
The size of a shadow pattern is defined as $n_I \equiv \tilde{d}\, n_E$. This leads to the factor $d$, representing the relative strength of inhibitory and excitatory currents, due to a pattern or pool, affecting a neuron that is connected to both:

$$d \equiv -\frac{J_{xI}\, n_I}{J_{xE}\, n_E} = \frac{g J_0 \sqrt{K_E}}{J_0 \sqrt{K_I}}\, \tilde{d} = \frac{g \tilde{d}}{\sqrt{\gamma}} \qquad (2)$$

Thus it fixes $n_I = d\,(\sqrt{\gamma}/g)\, n_E$. In the simulations reported below, $d$ varied between 1 and 3.
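A sketch of the resulting size bookkeeping for one pattern and its shadow follows, again under the assumption γ = 0.25; the other constants are the ones quoted above.

```python
# Pattern size n_E, in-pattern synapses L, and shadow size n_I fixed by Eq. (2).
import numpy as np

def pattern_sizes(K_E, C_n=3.5, C_L=2.5, d=1.0, g=5.0, gamma=0.25):
    n_E = int(C_n * np.sqrt(K_E))              # excitatory pattern (or pool) size
    L = int(C_L * np.sqrt(K_E))                # in-pattern synapses per neuron
    n_I = int(d * np.sqrt(gamma) / g * n_E)    # n_I = d * (sqrt(gamma) / g) * n_E
    return n_E, L, n_I
```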
Wiring the network is done in two stages, first all excitatory patterns are wired, and
then random connections are added, complying with the fixed number of synapses.
A volley of w spikes, normally distributed over time with width of 1ms, is used to
ignite a memory pattern. In the case of SFC, the first pool is ignited, and under the
right conditions the volley propagates along the chain without fading away and
without destabilizing the AS.
3.2 Results
First we show that the AS remains stable when embedding HCAs in a small DBN,
whereas global oscillations take place if embedding is done without shadow pools.
Figure 1 displays clearly the sustained activity of an HCA in the DBN.
The same principle also enables embedding of SFCs in a small network. This is to
be contrasted with the conclusions drawn in Aviel et al [9], where it was shown that
otherwise very large networks are necessary to reach this goal.
Figure 1: HCAs are embedded in a balanced network without (left) and with (right)
shadow patterns. P=300 HCAs of size nE=194 excitatory neurons were embedded in
a network of NE=15,000 excitatory neurons. The eleventh pattern is externally
ignited at time t=100ms. A raster plot of 200ms is displayed. Without shadow
patterns the network exhibits global oscillations, but with shadow patterns the
network exhibits only minute oscillations, enabling the activity of the ignited
pattern to be sustained. The size of the shadow patterns is set according to Eq. (2)
with d=1. Neurons that participate in more than one HCA may appear more than
once on the raster plot, whose y-axis is ordered according to HCAs, and represents
every second neuron in each pattern.
Figure 2: SFCs embedded in a balanced network without (left) and with (right)
shadow patterns. The first pool is externally ignited at time t=100ms. d=0.5. The
rest of the parameters are as in Figure 1. Here again, without shadow pools, the
network exhibits global oscillations, but with shadow pools it has only minute
oscillation, enabling a stable propagation of the synfire wave.
3.3 Maximum Capacity
In this section we show that, within our DBN, it is the fixed number of synapses
(rather than dynamical constraints) that dictates the maximal number of patterns or
pools P that may be loaded onto the network. Let us start by noting that a neuron of
population $x$ (E or I) can participate in at most $m \equiv \lfloor K_E / L \rfloor$ patterns, hence $N_x m$ sets an upper bound on the number of neurons that participate in all patterns: $n_x P \le m \cdot N_x$. Next, defining $\alpha_x \equiv P / N_x$, we find that

$$\alpha_x \le \frac{m}{n_x} = \frac{\lfloor K_E / (C_L \sqrt{K_E}) \rfloor}{n_x} \qquad (3)$$

To leading order in $N_E$ this turns into

$$\alpha_x N_x \le \frac{\lfloor K_E / (C_L \sqrt{K_E}) \rfloor}{n_x}\, N_x = (C_n C_L D_x)^{-1} N_E - O\!\left(\sqrt{N_E}\right) \qquad (4)$$

where $D_x \equiv d\,(g\sqrt{\gamma})^{-1}$ if $x = I$, or $1$ for $x = E$.

Thus we conclude that synaptic combinatorial considerations lead to a maximal number of patterns $P$. If $D_I < 1$, including the case $D_I = 0$ of the SBN, the excitatory neurons determine the limit to be $P = (C_n C_L)^{-1} N_E$. If, as is the case in our DBN, $D_I > 1$, then the inhibitory bound is the smaller one, and the inhibitory neurons set the maximum value to $P = (C_n C_L D_I)^{-1} N_E$.
For example, setting $C_n = 3.5$, $C_L = 2.4$, $g = 3$ and $d = 3$ in Eq. (4), we get $P = 0.06 N_E$. In Figure 3 we use these parameters. The capacity of a DBN is compared to that of an SBN for different network sizes. The maximal load is defined by the presence of global oscillation strong enough to prohibit sustained activity of patterns. The DBN reaches the combinatorial limit, whereas the SBN does not increase with N and obviously does not reach its combinatorial limit.
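Under the form of Eq. (4) above, and assuming γ = 0.25, the quoted example can be checked numerically:

```python
# A sketch of the combinatorial capacity limit of Eq. (4); gamma = 0.25 is an
# assumption, under which the stated example reproduces P ~= 0.06 * N_E.
def capacity(N_E, C_n=3.5, C_L=2.4, d=3.0, g=3.0, gamma=0.25):
    D_I = d / (g * gamma**0.5)             # D_x for the inhibitory population
    p_exc = N_E / (C_n * C_L)              # bound set by excitatory neurons
    p_inh = N_E / (C_n * C_L * D_I)        # bound set by inhibitory neurons
    return min(p_exc, p_inh)

print(capacity(15000) / 15000)             # ~0.06
```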
[Figure 3 appears here: left, a raster plot of a maximally loaded DBN; right, P_max versus N_E for the DBN and SBN, together with the DBN and SBN combinatorial upper limits.]
Figure 3: A balanced network maximally loaded with HCAs. Left: A raster plot of a
maximally loaded DBN. P=408, NE=6,000. At time t=450ms, the seventh pattern is
ignited for a duration of 10ms, leading to termination of another pattern's activity
(upper stripe) and to sustained activity of the ignited pattern (lower stripe). Right:
P(NE) as inferred from simulations of a SBN ("o") and of a DBN ("*"). The DBN
realizes the combinatorial limit (dashed line) whereas the SBN does not realize its
limit (solid line). From this comparison it is clear that DBN is superior to the SBN
in terms of network capacity.
The simulations displayed in Figure 3 show that in the DBN the combinatorial P is
indeed realized, and the capacity of this DBN grows like 0.06NE. In the SBN,
dynamic interference prevents reaching the combinatorial limit.
We have tried, in many ways, to increase the capacity of SBN. Recently, we have
discovered [11] that only if the external rates are appropriately scaled, then SBN
capacity can be linear with NE with a pre-factor ? almost as high as that of a DBN.
Although under these conditions SBNs can have large capacity, we emphasize that
DBNs posses a clear advantage. Their structure guarantees high capacity under more
general conditions.
4 Discussion
In this paper we study memory patterns embedded in a balanced network of spiking
neurons. In particular, we focus on the maximal capacity of Hebbian cell
assemblies. Requiring stability of the asynchronous state of the network, that serves
as the background for memory activity, and further assuming that the neuronal
spiking process is noise-driven, we show that naively applying Hebb's architecture
leads to global oscillations. We propose the double-balance principle as the solution
to this problem. This double-balance is obtained by introducing shadow patterns, i.e.
inhibitory patterns that are associated with the excitatory ones and fed by them, but
do not have specific connectivity other than that.
The maximal load of our system is determined in terms of the available synaptic
resources, and is proportional to the size of the excitatory population, NE. For the
parameters used here it turns out to be P=0.06NE. This limit was estimated by a
combinatorial argument of synaptic availability, and shown to be realized by
simulations.
Synfire chains were also studied. DBNs allow for their embedding in relatively
small networks, as shown in Figure 2. Previous studies have shown that their
embedding in balanced networks without shadow pools require network sizes larger
by an order of magnitude [9]. The capacity P of a SFC is defined, in analogy with
the HCA case, as the number of pools embedded in the network. In this case we
cannot realize the theoretical limit in simulations. We believe that the feed-forward
structure of the SFC, which is absent in HCA, introduces further dynamical
interference. The feed-forward structure can amplify correlations and firing rates
more efficiently than the feedback structure within patterns of the HCA. Thus a
network embedded with SFCs may be more sensitive to spontaneously evolved
correlations than a network embedded with HCAs.
It is interesting to note that the addition of shadow patterns has an analogy in the
Hopfield model [5], where neurons in a pattern have both excitatory and inhibitory
couplings with the rest of the network. One may claim that the architecture proposed
here recovers the same effect via the shadow patterns. Accommodating the Hopfield
model in networks of spiking neurons was tried before [3, 4] without specific
emphasis on the question of capacity. In Gerstner and van Hemenn [4] the synaptic
matrix is constructed in the same way as in the Hopfield model, i.e. neurons can
have excitatory and inhibitory synapses. In [3, 8] the synaptic bonds of the Hopfield
model were replaced by strong excitatory connections within a pattern, and weak
excitatory connections among neurons in a patterns and those outside the pattern.
While the different types of connection are of different magnitude, they are all
excitatory. In contrast, here, excitation exists within a pattern as well as outside it,
but the pattern has a well-defined inhibitory effect on the rest of the network,
mediated by the shadow pattern. The resulting inhibitory correlated currents cancel
the excitatory correlated input. Since the firing process in a BN is driven by
fluctuations, it seems that negating excitatory correlations by inhibitory ones is
more akin to Hopfield's construction in a network of two populations.
Hertz [12] has argued that a capacity limit obtained in a network of integrate-and-fire neurons should be multiplied by $\pi^2/2$ to compare it with a network of binary neurons. Hence the $\alpha = 0.12$ obtained here is equivalent to $\alpha = 0.6$ in a binary model. It is not surprising that the last number is higher than 0.14, the limit of the original Hopfield model, since our model is sparse, as, e.g., the Tsodyks-Feigelman [7] model, where larger capacities were achieved.
Finally, let us point out again that whereas only DBNs can reach the combinatorial
capacity limit under the conditions specified in this paper, we have recently
discovered [11] that SBN can also reach this limit if additional scaling conditions
are imposed on the input. The largest capacities that we obtained under these
conditions were of order 0.1.
Acknowledgments
This work was supported in part by grants from GIF.
References
1. Brunel, N., Dynamics of sparsely connected networks of excitatory and inhibitory spiking neurons. J Comput Neurosci, 2000. 8(3): p. 183-208.
2. van Vreeswijk, C. and H. Sompolinsky, Chaotic balanced state in a model of cortical circuits. Neural Comput, 1998. 10(6): p. 1321-71.
3. Amit, D. J. and N. Brunel, Dynamics of a recurrent network of spiking neurons before and following learning. Network, 1997. 8: p. 373.
4. Gerstner, W. and L. van Hemmen, Associative memory in a network of 'spiking' neurons. Network, 1992. 3: p. 139-164.
5. Hopfield, J.J., Neural networks and physical systems with emergent collective computational abilities. PNAS, 1982. 79: p. 2554-58.
6. Willshaw, D.J., O.P. Buneman, and H.C. Longuet-Higgins, Non-holographic associative memory. Nature (London), 1969. 222: p. 960-962.
7. Tsodyks, M.V. and M.V. Feigelman, The enhanced storage capacity in neural networks with low activity level. Europhys. Lett., 1988. 6(2): p. 101.
8. Brunel, N. and X.-J. Wang, Effects of neuromodulation in a cortical network model of object working memory dominated by recurrent inhibition. J. of Computational Neuroscience, 2001. 11: p. 63-85.
9. Aviel, Y., et al., On embedding synfire chains in a balanced network. Neural Computation, 2003. 15(6): p. 1321-1340.
10. Tuckwell, H.C., Introduction to theoretical neurobiology. 1988, Cambridge: Cambridge University Press.
11. Aviel, Y., D. Horn, and M. Abeles, Memory Capacity of Balanced Networks. 2003: Submitted.
12. Hertz, J.A., Modeling synfire networks, in Neuronal Information Processing - From Biological Data to Modelling and Application, G. Burdet, P. Combe, and O. Parodi, Editors. 1999.
1,587 | 2,442 | Efficient and Robust Feature Extraction by Maximum Margin Criterion
Haifeng Li
Tao Jiang
Department of Computer Science
University of California
Riverside, CA 92521
{hli,jiang}@cs.ucr.edu
Keshu Zhang
Department of Electrical Engineering
University of New Orleans
New Orleans, LA 70148
[email protected]
Abstract
A new feature extraction criterion, maximum margin criterion (MMC),
is proposed in this paper. This new criterion is general in the sense that,
when combined with a suitable constraint, it can actually give rise to
the most popular feature extractor in the literature, linear discriminant
analysis (LDA). We derive a new feature extractor based on MMC using
a different constraint that does not depend on the nonsingularity of the
within-class scatter matrix Sw . Such a dependence is a major drawback
of LDA especially when the sample size is small. The kernelized (nonlinear) counterpart of this linear feature extractor is also established in this
paper. Our preliminary experimental results on face images demonstrate
that the new feature extractors are efficient and stable.
1 Introduction
In statistical pattern recognition, high dimensionality is a major cause of the practical limitations of many pattern recognition technologies. In the past several decades, many dimensionality reduction techniques have been proposed. Linear discriminant analysis (LDA, also called Fisher's Linear Discriminant) [1] is one of the most popular linear dimensionality reduction methods. In many applications, LDA has been proven to be very powerful.
LDA is given by a linear transformation matrix $W \in \mathbb{R}^{D \times d}$ maximizing the so-called Fisher criterion (a kind of Rayleigh coefficient)

$$J_F(W) = \frac{W^T S_b W}{W^T S_w W} \qquad (1)$$

where $S_b = \sum_{i=1}^{c} p_i (m_i - m)(m_i - m)^T$ and $S_w = \sum_{i=1}^{c} p_i S_i$ are the between-class scatter matrix and the within-class scatter matrix, respectively; $c$ is the number of classes; $m_i$ and $p_i$ are the mean vector and a priori probability of class $i$, respectively; $m = \sum_{i=1}^{c} p_i m_i$ is the overall mean vector; $S_i$ is the within-class scatter matrix of class $i$; $D$ and $d$ are the dimensionalities of the data before and after the transformation, respectively. To maximize (1), the transformation matrix $W$ must be constituted by the largest eigenvectors of $S_w^{-1} S_b$. The purpose of LDA is to maximize the between-class scatter
while simultaneously minimizing the within-class scatter. The two-class LDA has a close
connection to optimal linear Bayes classifiers. In the two-class case, the transformation
matrix W is just a vector, which is in the same direction as the discriminant in the corresponding optimal Bayes classifier. However, it has been shown that LDA is suboptimal for
multi-class problems [2]. A major drawback of LDA is that it cannot be applied when $S_w$ is singular due to the small sample size problem [3]. The small sample size problem arises whenever the number of samples is smaller than the dimensionality of samples. For example, a 64 × 64 image in a face recognition system has 4096 dimensions, which requires
more than 4096 training data to ensure that Sw is nonsingular. So, LDA is not a stable
method in practice when the training data are scarce.
In recent years, many researchers have noticed this problem and tried to overcome the computational difficulty with LDA. Tian et al. [4] used the pseudo-inverse matrix $S_w^+$ instead of the inverse matrix $S_w^{-1}$. For the same purpose, Hong and Yang [5] tried to add a singular value perturbation to $S_w$ to make it nonsingular. Neither of these methods is theoretically sound because Fisher's criterion is not valid when $S_w$ is singular. When $S_w$ is singular, any positive $S_b$ makes Fisher's criterion infinitely large. Thus, these naive attempts to calculate the (pseudo or approximate) inverse of $S_w$ may lead to arbitrary (meaningless) results. Besides, it is also known that an eigenvector can be very sensitive to small perturbation if its corresponding eigenvalue is close to another eigenvalue of the same matrix [6].

In 1992, Liu et al. [7] modified Fisher's criterion by using the total scatter matrix $S_t = S_b + S_w$ as the denominator instead of $S_w$. It has been proven that the modified criterion is exactly equivalent to Fisher's criterion. However, when $S_w$ is singular, the modified criterion reaches the maximum value (i.e., 1) no matter what the transformation $W$ is. Such an arbitrary transformation cannot guarantee the maximum class separability unless $W^T S_b W$ is maximized. Besides, this method still needs to calculate an inverse matrix, which is time consuming. In 2000, Chen et al. [8] proposed the LDA+PCA method. When $S_w$ is of full rank, the LDA+PCA method just calculates the maximum eigenvectors of $S_t^{-1} S_b$ to
form the transformation matrix. Otherwise, a two-stage procedure is employed. First, the
data are transformed into the null space V0 of Sw . Second, it tries to maximize the betweenclass scatter in V0 , which is accomplished by performing principal component analysis
(PCA) on the between-class scatter matrix in V0 . Although this method solves the small
sample size problem, it is obviously suboptimal because it maximizes the between-class
scatter in the null space of Sw instead of the original input space. Besides, the performance
of the LDA+PCA method drops significantly when $n - c$ is close to the dimensionality $D$,
where n is the number of samples and c is the number of classes. The reason is that the
dimensionality of the null space V0 is too small in this situation, and too much information
is lost when we try to extract the discriminant vectors in $V_0$. LDA+PCA also needs to calculate the rank of $S_w$, which is an ill-defined operation due to floating-point imprecision. Finally, this method is complicated and slow because too much calculation is involved. A rough sketch of the procedure follows.
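The following is a paraphrase of the two-stage LDA+PCA procedure just described, assuming the null space of S_w is nonempty; it is not code from [8], and the tolerance is an illustrative choice.

```python
# A rough sketch of LDA+PCA: project onto the null space of S_w, then run PCA
# on the between-class scatter inside that subspace.
import numpy as np

def lda_pca(Sb, Sw, d, tol=1e-10):
    evals, evecs = np.linalg.eigh(Sw)
    V0 = evecs[:, evals < tol]               # (approximate) null space of S_w
    Sb0 = V0.T @ Sb @ V0                     # between-class scatter inside V0
    e2, U = np.linalg.eigh(Sb0)
    W = V0 @ U[:, np.argsort(e2)[::-1][:d]]  # maximize between-class scatter in V0
    return W
```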
Kernel Fisher's Discriminant (KFD) [9] is a well-known nonlinear extension to LDA. The instability problem is more severe for KFD because $S_w$ in the (nonlinear) feature space $\mathcal{F}$ is always singular (the rank of $S_w$ is $n - c$). Similar to [5], KFD simply adds a perturbation $\mu I$ to $S_w$. Of course, it has the same stability problem as that in [5] because eigenvectors are sensitive to small perturbation. Although the authors also argued that this perturbation acts as some kind of regularization, i.e., a capacity control in $\mathcal{F}$, the real influence of this setting of regularization is not yet fully understood. Besides, it is hard to determine an optimal $\mu$ since there are no theoretical guidelines.
In this paper, a simpler, more efficient, and stable method is proposed to calculate the most
discriminant vectors based on a new feature extraction criterion, the maximum margin criterion (MMC). Based on MMC, new linear and nonlinear feature extractors are established.
It can be shown that MMC represents class separability better than PCA. As a connection
to Fisher?s criterion, we may also derive LDA from MMC by incorporating some suitable
constraint. On the other hand, the new feature extractors derived above (based on MMC)
do not suffer from the small sample size problem, which is known to cause serious stability problems for LDA (based on Fisher?s criterion). Different from LDA+PCA, the new
feature extractors based on MMC maximize the between-class scatter in the input space
instead of the null space of Sw . Hence, it has a better overall performance than LDA+PCA,
as confirmed by our preliminary experimental results.
2 Maximum Margin Criterion
Suppose that we are given empirical data

$$(x_1, y_1), \ldots, (x_n, y_n) \in \mathcal{X} \times \{C_1, \ldots, C_c\}$$

Here, the domain $\mathcal{X} \subseteq \mathbb{R}^D$ is some nonempty set that the patterns $x_i$ are taken from. The $y_i$'s are called labels or targets. By studying these samples, we want to predict the label $y \in \{C_1, \ldots, C_c\}$ of some new pattern $x \in \mathcal{X}$. In other words, we choose $y$ such that $(x, y)$ is in some sense similar to the training examples. For this purpose, some measure need be employed to assess similarity or dissimilarity. We want to keep such similarity/dissimilarity information as much as possible after the dimensionality reduction, i.e., transforming $x$ from $\mathbb{R}^D$ to $\mathbb{R}^d$, where $d \ll D$.
If some distance metric is used to measure the dissimilarity, we would hope that a pattern
is close to those in the same class but far from those in different classes. So, a good
feature extractor should maximize the distances between classes after the transformation.
Therefore, we may define the feature extraction criterion as
$$J = \frac{1}{2} \sum_{i=1}^{c} \sum_{j=1}^{c} p_i p_j\, d(C_i, C_j) \qquad (2)$$

We call (2) the maximum margin criterion (MMC). It is actually the summation of $\frac{1}{2}c(c-1)$ interclass margins. Like the weighted pairwise Fisher's criteria in [2], one may also define a weighted maximum margin criterion. Due to the page limit, we omit the discussion in this paper.
One may use the distance between mean vectors as the distance between classes, i.e.
$$d(C_i, C_j) = d(m_i, m_j) \qquad (3)$$

where $m_i$ and $m_j$ are the mean vectors of the class $C_i$ and the class $C_j$, respectively. However, (3) is not suitable since it neglects the scatter of classes. Even if the distance between the mean vectors is large, it is not easy to separate two classes that have a large spread and overlap with each other. By considering the scatter of classes, we define the interclass distance (or margin) as

$$d(C_i, C_j) = d(m_i, m_j) - s(C_i) - s(C_j) \qquad (4)$$
where s(Ci ) is some measure of the scatter of the class Ci . In statistics, we usually use the
generalized variance |Si | or overall variance tr(Si ) to measure the scatter of data. In this
paper, we use the overall variance tr(Si ) because it is easy to analyze. The weakness of the
overall variance is that it ignores covariance structure altogether. Note that, by employing
the overall/generalized variance, the expression (4) measures the "average margin" between
two classes while the minimum margin is used in support vector machines (SVMs) [10].
With (4) and $s(C_i)$ being $\mathrm{tr}(S_i)$, we may decompose (2) into two parts:

$$J = \frac{1}{2} \sum_{i=1}^{c} \sum_{j=1}^{c} p_i p_j \big( d(m_i, m_j) - \mathrm{tr}(S_i) - \mathrm{tr}(S_j) \big) = \frac{1}{2} \sum_{i=1}^{c} \sum_{j=1}^{c} p_i p_j\, d(m_i, m_j) - \frac{1}{2} \sum_{i=1}^{c} \sum_{j=1}^{c} p_i p_j \big( \mathrm{tr}(S_i) + \mathrm{tr}(S_j) \big)$$

The second part is easily simplified to $\mathrm{tr}(S_w)$:

$$\frac{1}{2} \sum_{i=1}^{c} \sum_{j=1}^{c} p_i p_j \big( \mathrm{tr}(S_i) + \mathrm{tr}(S_j) \big) = \sum_{i=1}^{c} p_i\, \mathrm{tr}(S_i) = \mathrm{tr}\!\left( \sum_{i=1}^{c} p_i S_i \right) = \mathrm{tr}(S_w) \qquad (5)$$
By employing the Euclidean distance, we may also simplify the first part to $\mathrm{tr}(S_b)$ as follows:

$$\frac{1}{2} \sum_{i=1}^{c} \sum_{j=1}^{c} p_i p_j\, d(m_i, m_j) = \frac{1}{2} \sum_{i=1}^{c} \sum_{j=1}^{c} p_i p_j (m_i - m_j)^T (m_i - m_j) = \frac{1}{2} \sum_{i=1}^{c} \sum_{j=1}^{c} p_i p_j (m_i - m + m - m_j)^T (m_i - m + m - m_j)$$

After expanding it, we can simplify the above equation to $\sum_{i=1}^{c} p_i (m_i - m)^T (m_i - m)$ by using the fact $\sum_{j=1}^{c} p_j (m - m_j) = 0$. So

$$\frac{1}{2} \sum_{i=1}^{c} \sum_{j=1}^{c} p_i p_j\, d(m_i, m_j) = \mathrm{tr}\!\left( \sum_{i=1}^{c} p_i (m_i - m)(m_i - m)^T \right) = \mathrm{tr}(S_b) \qquad (6)$$
Now we obtain

$$J = \mathrm{tr}(S_b - S_w) \qquad (7)$$

Since $\mathrm{tr}(S_b)$ measures the overall variance of the class mean vectors, a large $\mathrm{tr}(S_b)$ implies that the class mean vectors scatter in a large space. On the other hand, a small $\mathrm{tr}(S_w)$ implies that every class has a small spread. Thus, a large $J$ indicates that patterns are close to each other if they are from the same class but are far from each other if they are from different classes. Thus, this criterion may represent class separability better than PCA. Recall that PCA tries to maximize the total scatter after a linear transformation. But a data set with a large within-class scatter can also have a large total scatter even when it has a small between-class scatter, because $S_t = S_b + S_w$. Obviously, such data are not easy to classify. Compared with LDA+PCA, we maximize the between-class scatter in the input space rather than the null space of $S_w$ when $S_w$ is singular. So, our method can keep more discriminative information than LDA+PCA does. A simple illustration of computing this criterion is sketched below.
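A minimal sketch of computing the scatter matrices and the criterion J = tr(S_b − S_w) of Eq. (7) from labeled data; the array conventions (rows are samples) are illustrative.

```python
# X is (n, D); y holds the class label of each sample.
import numpy as np

def mmc_value(X, y):
    classes, counts = np.unique(y, return_counts=True)
    p = counts / len(y)
    m = X.mean(axis=0)                        # overall mean (= sum_i p_i m_i)
    Sb = np.zeros((X.shape[1],) * 2)
    Sw = np.zeros_like(Sb)
    for c, pc in zip(classes, p):
        Xi = X[y == c]
        mi = Xi.mean(axis=0)
        Sb += pc * np.outer(mi - m, mi - m)   # between-class scatter
        Sw += pc * np.cov(Xi.T, bias=True)    # within-class scatter of class c
    return np.trace(Sb - Sw)
```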
3 Linear Feature Extraction
When performing dimensionality reduction, we want to find a (linear or nonlinear) mapping
from the measurement space M to some feature space F such that J is maximized after the
transformation. In this section, we discuss how to find an optimal linear feature extractor.
In the next section, we will generalize it to the nonlinear case.
Consider a linear mapping $W \in \mathbb{R}^{D \times d}$. We would like to maximize

$$J(W) = \mathrm{tr}\big( S_b^W - S_w^W \big)$$

where $S_b^W$ and $S_w^W$ are the between-class scatter matrix and within-class scatter matrix in the feature space $\mathcal{F}$. Since $W$ is a linear mapping, it is easy to show $S_b^W = W^T S_b W$ and $S_w^W = W^T S_w W$. So, we have

$$J(W) = \mathrm{tr}\big( W^T (S_b - S_w) W \big) \qquad (8)$$

In this formulation, we have the freedom to multiply $W$ by some nonzero constant. Thus, we additionally require that $W$ be constituted by unit vectors, i.e. $W = [w_1\ w_2\ \ldots\ w_d]$ and $w_k^T w_k = 1$. This means that we need to solve the following constrained optimization:

$$\max \sum_{k=1}^{d} w_k^T (S_b - S_w) w_k \quad \text{subject to} \quad w_k^T w_k - 1 = 0, \quad k = 1, \ldots, d$$
Note that we may also use other constraints in the above. For example, we may require tr(W^T S_w W) = 1 and then maximize tr(W^T S_b W). It is easy to show that maximizing MMC with such a constraint in fact results in LDA. The only difference is that it involves a constrained optimization whereas the traditional LDA solves an unconstrained optimization. The motivation for using the constraint w_k^T w_k = 1 is that it allows us to avoid calculating the inverse of S_w and thus the potential small sample size problem.
To solve the above optimization problem, we may introduce a Lagrangian

L(w_k, \lambda_k) = \sum_{k=1}^{d} w_k^T (S_b - S_w) w_k - \lambda_k (w_k^T w_k - 1)    (9)

with multipliers \lambda_k. The Lagrangian L has to be maximized with respect to \lambda_k and w_k. The condition that the derivatives of L with respect to w_k must vanish at the stationary point,

\partial L(w_k, \lambda_k) / \partial w_k = ((S_b - S_w) - \lambda_k I) w_k = 0,   k = 1, ..., d    (10)

leads to

(S_b - S_w) w_k = \lambda_k w_k,   k = 1, ..., d    (11)
which means that the \lambda_k's are the eigenvalues of S_b - S_w and the w_k's are the corresponding eigenvectors. Thus

J(W) = \sum_{k=1}^{d} w_k^T (S_b - S_w) w_k = \sum_{k=1}^{d} \lambda_k w_k^T w_k = \sum_{k=1}^{d} \lambda_k    (12)
Therefore, J(W) is maximized when W is composed of the first d largest eigenvectors of S_b - S_w. Here, we need not calculate the inverse of S_w, which allows us to avoid the small sample size problem easily. We may also require W to be orthonormal, which may help preserve the shape of the distribution.
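A minimal sketch of this eigenvector computation follows (again our own illustration with invented names). Note that no S_w^{-1} appears anywhere, which is the point of the w_k^T w_k = 1 constraint:

    import numpy as np
    from scipy.linalg import eigh

    def mmc_fit(X, y, d):
        """Sketch of the linear MMC extractor: returns W (D x d) whose columns
        are the d largest eigenvectors of S_b - S_w; project with X @ W."""
        n, D = X.shape
        m = X.mean(axis=0)
        Sb = np.zeros((D, D))
        Sw = np.zeros((D, D))
        for c in np.unique(y):
            Xc = X[y == c]
            p = Xc.shape[0] / n
            mc = Xc.mean(axis=0)
            Sb += p * np.outer(mc - m, mc - m)
            Sw += p * (Xc - mc).T @ (Xc - mc) / Xc.shape[0]
        evals, evecs = eigh(Sb - Sw)       # symmetric; ascending eigenvalues
        return evecs[:, np.argsort(evals)[::-1][:d]]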
4 Nonlinear Feature Extraction with Kernel
In this section, we follow the approach of nonlinear SVMs [10] to kernelize the above linear feature extractor. More precisely, we first reformulate the maximum margin criterion in terms of only dot-products \langle \phi(x), \phi(y) \rangle of input patterns. Then we replace the dot-product by some positive definite kernel k(x, y), e.g. the Gaussian kernel e^{-\gamma \|x - y\|^2}.
Consider the maximum margin criterion in the feature space F:

J^\phi(W) = \sum_{k=1}^{d} w_k^T (S_b^\phi - S_w^\phi) w_k

where S_b^\phi and S_w^\phi are the between-class scatter matrix and within-class scatter matrix in F, i.e.

S_b^\phi = \sum_{i=1}^{c} p_i (m_i^\phi - m^\phi)(m_i^\phi - m^\phi)^T,    S_w^\phi = \sum_{i=1}^{c} p_i S_i^\phi,

and S_i^\phi = \frac{1}{n_i} \sum_{j=1}^{n_i} (\phi(x_j^{(i)}) - m_i^\phi)(\phi(x_j^{(i)}) - m_i^\phi)^T with m_i^\phi = \frac{1}{n_i} \sum_{j=1}^{n_i} \phi(x_j^{(i)}), m^\phi = \sum_{i=1}^{c} p_i m_i^\phi, and x_j^{(i)} is the j-th pattern of class C_i, which has n_i samples.
For us, an important fact is that each w_k lies in the span of \phi(x_1), \phi(x_2), ..., \phi(x_n). Therefore, we can find an expansion for w_k in the form w_k = \sum_{l=1}^{n} \alpha_l^{(k)} \phi(x_l). Using this expansion and the definition of m_i^\phi, we have

w_k^T m_i^\phi = \sum_{l=1}^{n} \alpha_l^{(k)} \Big( \frac{1}{n_i} \sum_{j=1}^{n_i} \langle \phi(x_l), \phi(x_j^{(i)}) \rangle \Big)

Replacing the dot-product by some kernel function k(x, y) and defining (\tilde{m}_i)_l = \frac{1}{n_i} \sum_{j=1}^{n_i} k(x_l, x_j^{(i)}), we get w_k^T m_i^\phi = \alpha_k^T \tilde{m}_i with (\alpha_k)_l = \alpha_l^{(k)}. Similarly, we have

w_k^T m^\phi = w_k^T \sum_{i=1}^{c} p_i m_i^\phi = \alpha_k^T \tilde{m}    with    \tilde{m} = \sum_{i=1}^{c} p_i \tilde{m}_i.

This means w_k^T (m_i^\phi - m^\phi) = \alpha_k^T (\tilde{m}_i - \tilde{m}). Thus

\sum_{k=1}^{d} w_k^T S_b^\phi w_k = \sum_{k=1}^{d} \sum_{i=1}^{c} p_i \big( w_k^T (m_i^\phi - m^\phi) \big) \big( w_k^T (m_i^\phi - m^\phi) \big)^T
 = \sum_{k=1}^{d} \sum_{i=1}^{c} p_i \alpha_k^T (\tilde{m}_i - \tilde{m})(\tilde{m}_i - \tilde{m})^T \alpha_k = \sum_{k=1}^{d} \alpha_k^T \tilde{S}_b \alpha_k

where \tilde{S}_b = \sum_{i=1}^{c} p_i (\tilde{m}_i - \tilde{m})(\tilde{m}_i - \tilde{m})^T.
Similarly, one can simplify W^T S_w^\phi W. First, we have w_k^T (\phi(x_j^{(i)}) - m_i^\phi) = \alpha_k^T (k_j^{(i)} - \tilde{m}_i) with (k_j^{(i)})_l = k(x_l, x_j^{(i)}). Considering w_k^T S_i^\phi w_k = \frac{1}{n_i} \sum_{j=1}^{n_i} \big( w_k^T (\phi(x_j^{(i)}) - m_i^\phi) \big) \big( w_k^T (\phi(x_j^{(i)}) - m_i^\phi) \big)^T, we have

w_k^T S_i^\phi w_k = \frac{1}{n_i} \sum_{j=1}^{n_i} \alpha_k^T (k_j^{(i)} - \tilde{m}_i)(k_j^{(i)} - \tilde{m}_i)^T \alpha_k
 = \frac{1}{n_i} \sum_{j=1}^{n_i} \alpha_k^T \tilde{S}_i (e_j - \frac{1}{n_i} 1_{n_i})(e_j - \frac{1}{n_i} 1_{n_i})^T \tilde{S}_i^T \alpha_k
 = \frac{1}{n_i} \sum_{j=1}^{n_i} \alpha_k^T \tilde{S}_i \big( e_j e_j^T - \frac{1}{n_i} e_j 1_{n_i}^T - \frac{1}{n_i} 1_{n_i} e_j^T + \frac{1}{n_i^2} 1_{n_i} 1_{n_i}^T \big) \tilde{S}_i^T \alpha_k
 = \frac{1}{n_i} \alpha_k^T \tilde{S}_i \big( I_{n_i \times n_i} - \frac{1}{n_i} 1_{n_i} 1_{n_i}^T \big) \tilde{S}_i^T \alpha_k

where (\tilde{S}_i)_{lj} = k(x_l, x_j^{(i)}), I_{n_i \times n_i} is the n_i \times n_i identity matrix, 1_{n_i} is the n_i-dimensional vector of 1's, and e_j is the canonical basis vector of n_i dimensions. Thus, we obtain
\sum_{k=1}^{d} w_k^T S_w^\phi w_k = \sum_{k=1}^{d} \sum_{i=1}^{c} p_i \frac{1}{n_i} \alpha_k^T \tilde{S}_i \big( I_{n_i} - \frac{1}{n_i} 1_{n_i} 1_{n_i}^T \big) \tilde{S}_i^T \alpha_k = \sum_{k=1}^{d} \alpha_k^T \tilde{S}_w \alpha_k

where \tilde{S}_w = \sum_{i=1}^{c} p_i \frac{1}{n_i} \tilde{S}_i \big( I_{n_i} - \frac{1}{n_i} 1_{n_i} 1_{n_i}^T \big) \tilde{S}_i^T. So the maximum margin criterion in the feature space F is

J(W) = \sum_{k=1}^{d} \alpha_k^T (\tilde{S}_b - \tilde{S}_w) \alpha_k    (13)

Similar to the observations in Section 3, the above criterion is maximized by the largest eigenvectors of \tilde{S}_b - \tilde{S}_w.
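To make the kernel bookkeeping concrete, here is a hedged sketch of the resulting computation (our own code; `kernel` is any positive definite kernel function, and all names are invented). It builds \tilde{m}_i, \tilde{S}_b and \tilde{S}_w from the Gram matrix and returns the leading eigenvectors \alpha_k:

    import numpy as np
    from scipy.linalg import eigh

    def kmmc_fit(X, y, d, kernel):
        """Sketch of criterion (13): columns of the result are the alpha_k."""
        y = np.asarray(y)
        n = len(X)
        K = np.array([[kernel(a, b) for b in X] for a in X])  # K[l, j] = k(x_l, x_j)
        classes = np.unique(y)
        p = np.array([np.mean(y == c) for c in classes])      # priors p_i
        M = np.stack([K[:, y == c].mean(axis=1) for c in classes])  # rows: m_tilde_i
        m = p @ M                                             # m_tilde
        Sb = sum(pi * np.outer(mi - m, mi - m) for pi, mi in zip(p, M))
        Sw = np.zeros((n, n))
        for pi, c in zip(p, classes):
            Si = K[:, y == c]                                 # S_tilde_i (n x n_i)
            ni = Si.shape[1]
            Cn = np.eye(ni) - np.ones((ni, ni)) / ni          # centering matrix
            Sw += pi / ni * Si @ Cn @ Si.T
        evals, evecs = eigh(Sb - Sw)
        return evecs[:, np.argsort(evals)[::-1][:d]]

A new point x is then projected through its empirical kernel map, i.e. the k-th feature is \sum_l (\alpha_k)_l k(x_l, x).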
Figure 1: Experimental results obtained using a linear SVM on the original data (RAW), and the data extracted by LDA+PCA, the linear feature extractor based on MMC (MMC) and the nonlinear feature extractor based on MMC (KMMC), which employs the Gaussian kernel with \gamma = 0.03125. (a) The comparison in terms of error rate. (b) The comparison in terms of training time (seconds).
5 Experiments
To evaluate the performance of our new methods (both linear and nonlinear feature extractors), we ran both LDA+PCA and our methods on the ORL face dataset [11]. The ORL dataset consists of 10 face images from 40 subjects for a total of 400 images, with some variation in pose, facial expression and details. The resolution of the images is 112 × 92, with 256 gray-levels. First, we resized the images to 28 × 23 to save experiment time. Then, we reduced the dimensionality of each image set to c − 1, where c is the number of classes. At last we trained and tested a linear SVM on the dimensionality-reduced data. As a control, we also trained and tested a linear SVM on the original data before its dimensionality was reduced.
In order to demonstrate the effectiveness and the efficiency of our methods, we conducted
a series of experiments and compared our results with those obtained using LDA+PCA.
The error rates are shown in Fig.1(a). When trained with 3 samples and tested with 7 other
samples for each class, our method is generally better than LDA+PCA. In fact, our method
is usually better than LDA+PCA on other numbers of training samples. To save space,
we do not show all the results here. Note that our methods can even achieve lower error
rates than a linear SVM on the original data (without dimensionality reduction). However,
LDA+PCA does not demonstrate such a clear superiority over RAW. Fig. 1(a) also shows
that the kernelized (nonlinear) feature extractor based on MMC is significantly better than
the linear one, in particular when the number of classes c is large.
Besides accuracy, our methods are also much more efficient than LDA+PCA in the sense
of the training time required. Fig. 1(b) shows that our linear feature extractor is about 4
times faster than LDA+PCA. The same speedup was observed on other numbers of training
samples. Note that our nonlinear feature extractor is also faster than LDA+PCA in this case
although it is very time-consuming to calculate the kernel matrix in general. An explanation
of the speedup is that the kernel matrix size equals the number of samples, which is pretty
small in this case.
Furthermore, our method performs much better than LDA+PCA when n − c is close to the
dimensionality D. Because the amount of training data was limited, we resized the images
to 168 dimensions to create such a situation. The experimental results are shown in Fig. 2.
In this situation, the performance of LDA+PCA drops significantly because the null space
of Sw has a small dimensionality. When LDA+PCA tries to maximize the between-class
scatter in this small null space, it loses a lot of information. On the other hand, our method
tries to maximize the between-class scatter in the original input space. From Fig. 2, we can
[Figure 2: Comparison between our new methods and LDA+PCA when n − c is close to D. (a) Each class contains three training samples. (b) Each class contains four training samples.]
see that LDA+PCA is ineffective in this situation because it is even worse than a random
guess. But our method still produced acceptable results. Thus, the experimental results
show that our method is better than LDA+PCA in terms of both accuracy and efficiency.
6 Conclusion
In this paper, we proposed both linear and nonlinear feature extractors based on the maximum margin criterion. The new methods do not suffer from the small sample size problem. The experimental results show that they are very efficient, accurate, and robust.
Acknowledgments
We thank D. Gunopulos, C. Domeniconi, and J. Peng for valuable discussions and comments. This
work was partially supported by NSF grants CCR-9988353 and ACI-0085910.
References
[1] R. A. Fisher. The use of multiple measurements in taxonomic problems. Annals of Eugenics, 7:179-188, 1936.
[2] M. Loog, R. P. W. Duin, and R. Haeb-Umbach. Multiclass linear dimension reduction by weighted pairwise Fisher criteria. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(7):762-766, 2001.
[3] K. Fukunaga. Introduction to Statistical Pattern Recognition. Academic Press, New York, 2nd edition, 1990.
[4] Q. Tian, M. Barbero, Z. Gu, and S. Lee. Image classification by the Foley-Sammon transform. Optical Engineering, 25(7):834-840, 1986.
[5] Z. Hong and J. Yang. Optimal discriminant plane for a small number of samples and design method of classifier on the plane. Pattern Recognition, 24(4):317-324, 1991.
[6] G. W. Stewart. Introduction to Matrix Computations. Academic Press, New York, 1973.
[7] K. Liu, Y. Cheng, and J. Yang. A generalized optimal set of discriminant vectors. Pattern Recognition, 25(7):731-739, 1992.
[8] L. Chen, H. Liao, M. Ko, J. Lin, and G. Yu. A new LDA-based face recognition system which can solve the small sample size problem. Pattern Recognition, 33(10):1713-1726, 2000.
[9] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller. Fisher discriminant analysis with kernels. In Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, editors, Neural Networks for Signal Processing IX, pages 41-48. IEEE, 1999.
[10] V. N. Vapnik. Statistical Learning Theory. John Wiley & Sons, New York, 1998.
[11] F. Samaria and A. Harter. Parameterisation of a stochastic model for human face identification. In Proceedings of the 2nd IEEE Workshop on Applications of Computer Vision, Sarasota, FL, 1994.
1,588 | 2,443 | Non-linear CCA and PCA by Alignment of Local Models
Jakob J. Verbeek (1), Sam T. Roweis (2), and Nikos Vlassis (1)
(1) Informatics Institute, University of Amsterdam
(2) Department of Computer Science, University of Toronto
Abstract
We propose a non-linear Canonical Correlation Analysis (CCA) method
which works by coordinating or aligning mixtures of linear models. In
the same way that CCA extends the idea of PCA, our work extends recent methods for non-linear dimensionality reduction to the case where
multiple embeddings of the same underlying low dimensional coordinates are observed, each lying on a different high dimensional manifold.
We also show that a special case of our method, when applied to only
a single manifold, reduces to the Laplacian Eigenmaps algorithm. As
with previous alignment schemes, once the mixture models have been
estimated, all of the parameters of our model can be estimated in closed
form without local optima in the learning. Experimental results illustrate
the viability of the approach as a non-linear extension of CCA.
1 Introduction
In this paper, we are interested in data that lies on or close to a low dimensional manifold
embedded, possibly non-linearly, in a Euclidean space of much higher dimension. Data
of this kind is often generated when our observations are very high dimensional but the
number of underlying degrees of freedom is small. A typical example are images of an
object under different conditions (e.g. pose and lighting). A simpler example is given in
Fig. 1, where we have data in IR3 which lies on a two dimensional manifold. We want
to recover the structure of the data manifold, so that we can ?unroll? the data manifold
and work with the data expressed in the underlying ?latent coordinates?, i.e. coordinates
on the manifold. Learning low dimensional latent representations may be desirable for
different reasons, such as compression for storage and communication, visualization of
high dimensional data, or as preprocessing for further data analysis or prediction tasks.
Recent work on unsupervised nonlinear feature extraction has pursued several complementary directions. Various nonparametric spectral methods, such as Isomap [1], LLE [2], Kernel PCA [3] and Laplacian Eigenmaps [4], have been proposed which reduce the dimensionality of a fixed training set in a way that maximally preserves certain inter-point relationships, but these methods do not generally provide functional mappings between the
high and low dimensional spaces that are valid both on and off the training data. In this
paper, we consider a method to integrate several local feature extractors into a single global
representation, similar to the approaches of [5, 6, 7, 8]. These methods, as well as ours,
deliver after training a functional mapping which can be used to convert previously unseen
high dimensional observations into their low dimensional global coordinates. Like most
of the above algorithms, our method performs non-linear feature extraction by minimizing
a convex objective function whose critical points can be characterized as eigenvectors of
some matrix. These algorithms are generally simple and efficient; one needs only to construct a matrix based on local feature analysis of the training data and then computes its
largest or smallest eigenvectors using standard numerical methods. In contrast, methods
like generative topographic mapping[9] and self-organizing maps[10] are prone to local
optima in the objective function.
Our method is based on the same intuitions as in earlier work: the idea is to learn a mixture
of latent variable density models on the original training data so that each mixture component acts as a local feature extractor. For example, we may use a mixture of factor analyzers
or a mixture of principal component analyzers (PCA). After this mixture has been learned,
the local feature extractors are ?coordinated? by finding, for each model, a suitable linear
mapping (and offset) from its latent variable space into a single ?global? low-dimensional
coordinate system. The local feature extractors together with the coordinating linear maps
provide a global non-linear map from the data space to the latent space and back. Learning
the mixture is driven by a density signal ? we want to place models near the training points,
while the post-coordination is driven by the idea that when two different models place significant weight on the same point, they should agree on its mapping into the global space.
Our algorithm, developed in the following section, builds upon recent work of coordination
methods. As in [6], we use a cross-entropy between a unimodal approximation and the true
posterior over global coordinates to encourage agreement. However we do not attempt to
simultaneously learn the mixture model and coordinate since this causes severe problems
with local minima. Instead, as in [7, 8], we fix a specific mixture and then study the computations involved in coordinating its local representations. We extend the latter works as
CCA extends PCA: rather than finding a projection of one set of points, we find projections
for two sets of corresponding points {xn } and {yn } (xn corresponding to yn ) into a single
latent space that project corresponding points in the two point sets as nearby as possible.
In this setting we begin by showing, in Section 3, how Laplacian Eigenmaps[4] are a special
case of the algorithms presented here when they are applied to only a single manifold.
We go on, in Section 4, to extend our algorithm to a setting in which multiple different
observation spaces are available, each one related to the same underlying global space but
through different nonlinear embeddings. This naturally gives rise to a nonlinear version of
weighted Canonical Correlation Analysis (CCA). We present results of several experiments
in the same section and we conclude the paper with a general discussion in Section 5.
2 Non-linear PCA by aligning local feature extractors
Consider a given data set X = {x_1, ..., x_N} and a collection of k local feature extractors, where f_s(x) is a vector containing the (zero or more) features produced by model s. Each feature extractor also provides an "activity signal" a_s(x), representing its confidence in modeling the point. We convert these activities into posterior responsibilities using a simple softmax: p(s|x) = exp(a_s(x)) / \sum_r exp(a_r(x)). If the experts are actually components of a
mixture, then setting the activities to the logarithm of the posteriors under the mixture will
recover exactly the same posteriors above.
Next, we consider the relationship between the given representation of the data and the
representation of the data in a global latent space, which we would like to find. Throughout,
we will use g to denote latent "global" coordinates for data. For the unobserved latent
coordinate g corresponding to a data point xn and conditioned on s, we assume the density:
p(g|x_n, s) = N(g; \kappa_s + A_s f_s(x_n), \sigma^2 I) = N(g; g_{ns}, \sigma^2 I),    (1)

where N(g; \mu, \Sigma) is a Gaussian distribution on g with mean \mu and covariance \Sigma. The mean, g_{ns}, of p(g|x_n, s) is the sum of the component offset \kappa_s in the latent space and a linear transformation, implemented by A_s, of f_s(x_n). From now on we will use homogeneous coordinates and write L_s = [A_s \kappa_s] and z_{ns} = [f_s(x_n)^T 1]^T, and thus g_{ns} = L_s z_{ns}. Consider the posterior distribution on latent coordinates given some data:
p(g|x) = \sum_s p(s, g|x) = \sum_s p(s|x) p(g|x, s).    (2)
Given a fixed set of local feature extractors and corresponding activities, we are interested in finding linear maps L_s that give rise to "consistent" projections of the data in the latent space. By "consistent", we mean that the p(g|x, s) are similar for components with large
posterior. If the predictions are in perfect agreement for a point xn , then all the gns are
equal and the posterior p(g|x) is Gaussian; in general p(g|x) is a mixture of Gaussians. To
measure the consistency, we define the following error function:
\Phi(\{L_1, ..., L_k\}) = \min_{\{Q_1, ..., Q_N\}} \sum_{n,s} q_{ns} D(Q_n(g) \| p(g|x_n, s)),    (3)
where we used q_{ns} as a shorthand for p(s|x_n) and Q_n is a Gaussian with mean g_n and covariance matrix \Sigma_n. The objective sums, for each data point x_n and model s, the Kullback-Leibler divergence D between a single Gaussian Q_n(g) and the component densities p(g|x, s), weighted by the posterior p(s|x_n). It is easy to derive that in order to minimize the objective \Phi w.r.t. g_n and \Sigma_n we obtain:

g_n = \sum_s q_{ns} g_{ns}    and    \Sigma_n = \sigma^2 I,    (4)
where I denotes the identity matrix. Skipping some additive and multiplicative constants with respect to the linear maps L_s, the objective \Phi then simplifies to:

\Phi = \sum_{n,s} q_{ns} \| g_n - g_{ns} \|^2 = \frac{1}{2} \sum_{n,s,t} q_{ns} q_{nt} \| g_{nt} - g_{ns} \|^2 \geq 0.    (5)
The main attraction with this setup is that our objective is a quadratic function of the linear
maps Ls , as in [7, 8]. Using some extra notation, we obtain a clearer form of the objective
as a function of the linear maps. Let:
u_n = [q_{n1} z_{n1}^T ... q_{nk} z_{nk}^T],    U = [u_1^T ... u_N^T]^T,    L = [L_1 ... L_k]^T.    (6)
Note that from (4) and (6) we have g_n = (u_n L)^T. The expected projection coordinates can thus be computed as G = [g_1 ... g_N]^T = UL. We define the block-diagonal matrix D with k blocks given by D_s = \sum_n q_{ns} z_{ns} z_{ns}^T. The objective can now be written as:

\Phi = Tr\{ L^T (D - U^T U) L \}.    (7)
The objective function is invariant to translation and rotation of the global latent space and
re-scaling the latent space changes the objective monotonically, c.f. (5). To make solutions
unique with respect to translation, rotation and scaling, we impose two constraints:
(transl.):  \bar{g} = \sum_n g_n / N = 0,    (rot. + scale):  \Sigma_g = \sum_n (g_n - \bar{g})(g_n - \bar{g})^T / N = I.
The columns of L minimizing \Phi are characterized as the generalized eigenvectors:

(D - U^T U) v = \lambda U^T U v    \Leftrightarrow    D v = (\lambda + 1) U^T U v.    (8)
The value of the objective function is given by the sum of the corresponding eigenvalues
\lambda. The smallest eigenvalue is always zero, corresponding to mapping all data into the same
[Figure 1: Data in R^3 with local charts indicated by the axes (left). Data representation in R^2 generated by optimizing our objective function; expected latent coordinates g_n are plotted (right).]
latent coordinate. This embedding is uninformative since it is constant, therefore we select
the eigenvectors corresponding to the second up to the (d + 1)st smallest eigenvalues to
obtain the best embedding in d dimensions. Note that, as mentioned in [7], this framework
enables us to use feature extractors that provide different numbers of features.
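As a sketch of how (8) might be solved in practice (our own illustration; all names are invented, and we add a small ridge so the generalized eigensolver tolerates a rank-deficient U^T U):

    import numpy as np
    from scipy.linalg import eigh

    def align_charts(Q, Z, d):
        """Q: N x k responsibilities q_ns. Z: list of k arrays (N x (f_s + 1))
        holding the homogeneous local coordinates z_ns. Returns G = UL (N x d)."""
        N, k = Q.shape
        U = np.hstack([Q[:, [s]] * Z[s] for s in range(k)])
        D = np.zeros((U.shape[1], U.shape[1]))
        ofs = 0
        for s in range(k):
            f = Z[s].shape[1]
            D[ofs:ofs + f, ofs:ofs + f] = (Q[:, [s]] * Z[s]).T @ Z[s]  # D_s
            ofs += f
        B = U.T @ U + 1e-9 * np.eye(U.shape[1])  # ridge for rank deficiency
        evals, V = eigh(D, B)                    # D v = (lambda + 1) B v
        L = V[:, np.argsort(evals)[1:d + 1]]     # skip the constant solution
        return U @ L                             # rows are the g_n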
In Fig. 1 we give an illustration of applying the above procedure to a simple manifold. The
plots show the original data presented to the algorithm (left) and the 2-dimensional latent coordinates g_n = \sum_s q_{ns} g_{ns} found by the algorithm (right).
3 Laplacian Eigenmaps as a special case
Consider the special case of the algorithm of Section 2, where no features are extracted.
The only information the mixture model provides is the posterior probabilities, collected in the matrix Q with [Q]_{ns} = q_{ns} = p(s|x_n). In that case:
U = Q,    L = [\kappa_1^T ... \kappa_k^T]^T,    g_{ns} = \kappa_s,    (9)

\Phi = Tr\{ L^T (D - A) L \} = \sum_{s,t} \| \kappa_s - \kappa_t \|^2 \sum_n q_{ns} q_{nt},    (10)

where A = Q^T Q is an adjacency matrix with [A]_{st} = \sum_n q_{ns} q_{nt}, and D is the diagonal degree matrix of A with [D]_{ss} = \sum_t A_{st} = \sum_n q_{ns}. Optimization under the constraints of zero mean and identity covariance leads to the generalized eigenproblem:

(D - A) v = \lambda A v    \Leftrightarrow    (D - A) v = \frac{\lambda}{1 + \lambda} D v    (11)
The optimization problem is exactly the Laplacian Eigenmaps algorithm[4], but applied
on the mixture components instead of the data points. Since we do not use any feature
extractors in this setting, it can be applied to mixture models that model data for which
it is hard to design feature extractors, e.g. data that has (both numerical and) categorical
features. Thus, we can use mixture densities without latent variables, e.g. mixtures of
multinomials, mixtures of Hidden Markov Models, etc. Notice that in this manner the
mixture model not only provides a soft grouping of the data through the posteriors, but also
an adjacency matrix between the groups.
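A hedged sketch of this special case (our own code, with invented names; it assumes every component receives some responsibility so the degree matrix is positive definite):

    import numpy as np
    from scipy.linalg import eigh

    def component_eigenmap(Q, d):
        """Q: N x k posteriors. Returns a k x d matrix whose rows are the
        latent positions kappa_s of the mixture components, via (11)."""
        A = Q.T @ Q                     # adjacency: [A]_st = sum_n q_ns q_nt
        Dg = np.diag(A.sum(axis=1))     # degrees: [D]_ss = sum_n q_ns
        evals, V = eigh(Dg - A, Dg)     # Laplacian eigenmaps on the components
        return V[:, np.argsort(evals)[1:d + 1]]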
4 Non-linear CCA by aligning local feature extractors
Canonical Correlation Analysis (CCA) is a data analysis method that finds correspondences
between two or more sets of measurements. The data are provided in tuples of corresponding measurements in the different spaces. The sets of measurements can be obtained by
employing different sensors to make measurements of some phenomenon. Our main interest in this paper is to develop a nonlinear extension of CCA which works when the different measurements come from separate nonlinear manifolds that share an underlying global
coordinate system. Non-linear CCA can be trained to find a shared low dimensional embedding for both manifolds, exploiting the pairwise correspondence provided by the data
set. Such models can then be used for different purposes, like sensor fusion, denoising,
filling in missing data, or predicting a measurement in one space given a measurement in
the other space. Another important aspect of this learning setup is that the use of multiple
sensors might also function as regularization helping to avoid overfitting, c.f. [11].
In CCA two (zero mean) sets of points are given: X = {x_1, ..., x_N} \subset R^p and Y = {y_1, ..., y_N} \subset R^q. The aim is to find linear maps a and b that map members of X and Y respectively on the real line, such that the correlation between the linearly transformed variables is maximized. This is easily shown to be equivalent to minimizing:

E = \frac{1}{2} \sum_n [a x_n - b y_n]^2    (12)
under the constraint that a [\sum_n x_n x_n^T] a^T + b [\sum_n y_n y_n^T] b^T = 1. The above is easily
generalized such that the sets do not need to be zero mean and allowing a translation as
well. We can also generalize by mapping to R^d instead of the real line, and then requiring
the sum of the covariance matrices of the projections to be identity. CCA can also be readily
extended to take into account more than two point sets, as we now show.
In the generalized CCA setting with multiple point-sets, allowing translations and linear
mappings to R^d, the objective is to minimize the squared distance between all pairs of projections under the same constraint as above. We denote the projection of the n-th point in the s-th point-set as g_{ns} and let g_n = \frac{1}{k} \sum_s g_{ns}. We then minimize the error function:

\Phi_{CCA} = \frac{1}{2k^2} \sum_{n,s,t} \| g_{ns} - g_{nt} \|^2 = \frac{1}{k} \sum_{n,s} \| g_{ns} - g_n \|^2.    (13)
The objective \Phi in equation (5) coincides with \Phi_{CCA} if q_{ns} = 1/k. The different constraints imposed upon the optimization by CCA and our objective of the previous sections
are equivalent. We can thus regard the alignment procedure as a weighted form of CCA.
This suggests using the coordination technique for non-linear CCA. This is achieved quite
easily, without modifying the objective function (5). We consider different point sets, each
having a mixture of locally valid linear projections into the "global" latent space that is now
shared by all mixture components and point sets. We minimize the weighted sum of the
squared distances between all pairs of projections, i.e. we have pairs of projections due to
the same point set and also pairs that combine projections from different point sets.
We use c as an index ranging over the C different observation spaces, and write q_{ns}^c for the posterior on component s for observation n in observation space c. Similarly, we use g_{ns}^c to denote the projection due to component s from space c. The average projection due to observation space c is then denoted by g_n^c = \sum_s q_{ns}^c g_{ns}^c. We use index r to range over all mixture components and observation spaces, so that q_{nr} = \frac{1}{C} p(s|x_n) if r corresponds to (c = 1, s) and q_{nr} = \frac{1}{C} p(s|y_n) if r corresponds to (c = 2, s), i.e. r \equiv (c, s). The overall average projection then becomes g_n = \frac{1}{C} \sum_c g_n^c = \sum_r q_{nr} g_{nr}. The objective (5) can now be rewritten as:
\Phi = \sum_{n,r} q_{nr} \| g_{nr} - g_n \|^2 = \frac{1}{C} \sum_{c,n} \| g_n - g_n^c \|^2 + \frac{1}{C} \sum_{c,n,s} q_{ns}^c \| g_n^c - g_{ns}^c \|^2.    (14)
Observe how in (14) the objective sums between point set consistency of the projections
(first summand) and within point set consistency of the projections (second summand).
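In code, nothing new is needed beyond the single-manifold solver: one simply stacks the components of all observation spaces, scaling the responsibilities by 1/C. A hedged usage sketch, reusing the hypothetical `align_charts` helper sketched in Section 2 (Qx, Qy, Zx, Zy are assumed given by the fitted mixtures):

    import numpy as np

    # Qx, Qy: N x k responsibility matrices for the two observation spaces;
    # Zx, Zy: corresponding lists of homogeneous local features, per component.
    Q = np.hstack([Qx, Qy]) / 2.0    # q_nr for r ~ (c, s), with C = 2 spaces
    Z = list(Zx) + list(Zy)          # one local model per column of Q
    G = align_charts(Q, Z, d=2)      # shared latent coordinates g_n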
[Figure 2: Data and charts, indicated by bars (left, middle). Latent coordinates (vertical) versus the coordinate on the generating curve (horizontal) (right).]
The above technique can also be used to get more stable results of the chart coordination
procedure for a single manifold discussed in Section 2. Robustness for variation in the
mixture fitting can be improved by using several sets of charts fitted to the same manifold. We can then align all these sets of charts by optimizing (14). This aligns the charts
within each set and at the same time makes sure the different sets of aligned charts are
aligned, providing important regularization, since now every point is modeled by several
local models.
Note that if the charts and responsibilities are obtained using a mixture of PCA or factor
analyzers, the local linear mappings to the latent space induce a Gaussian mixture in the
latent space. This mixture can be used to compute responsibilities on components given
latent coordinates. Also, for each linear map from the data to the latent space we can
compute a pseudo inverse projecting back. By averaging the individual back projections
with the responsibilities computed in latent space we obtain a projection from the latent
space to the data space. In total, we can thus map from one observation space into another.
This is how we generated the reconstructions in the experiments reported below. When
using linear CCA for data that is non-linearly embedded, reconstructions will be poor since
linear CCA can only map into a low dimensional linear subspace.
As an illustrative example of the non-linear CCA we used two point-sets in R^2. The first
point-set was generated on an S-shaped curve; the second point set was generated along an
arc, see Fig. 2. To both point sets we added Gaussian noise and we learned a 10 component
mixture model on both sets. In the rightmost panel of Fig. 2 the (clearly successfully) discovered latent coordinates are plotted against the coordinate on the generating curve.
Below, we describe three more challenging experiments.
In the first experiment we use two data sets which we know to share the same underlying
degrees of freedom. We use images of a face varying its gaze left-right and up-down. We
cut these images in half to obtain our two sets of images. We trained the system on 1500
image halves of 40 × 20 pixels each. Both image halves were modeled with a mixture of 40
components. In Fig. 3 some generated right half images based on the left half are shown.
The second experiment concerns appearance based pose estimation of an object. One point
set consists of a pixel representation of images of an object and the other point set contains
the corresponding pose of the camera w.r.t. the object. For the pose parameters we used the
identity to "extract" features (i.e. we just used one component for this space). The training data was collected^1 by moving a camera over the half-sphere centered at the object. A
mixture of 40 PCA's was trained on the image data and aligned with the pose parameters in
a 2-dimensional latent space. The right panel of Fig. 3 shows reconstructions of the images
conditioned on various pose inputs (left image of each pair is reconstruction based on pose
of right image). Going the other way, when we input an image and estimate the pose, the
absolute errors in the longitude (0°-360°) were under 10° in over 80% of the cases, and for latitude (0°-90°) they were under 5° in over 90% of the cases.
^1 Thanks to G. Peters for sharing the images used in [12] and recorded at the Institute for Neural Computation, Ruhr-University Bochum, Germany.
Figure 3: Right half of the images was generated given the left half using the trained model
(left). Image reconstructions given pose parameters (right).
In the third experiment we use the same images as in the second experiment, but replace
the direct (low dimensional) supervision signal of the pose parameters with (high dimensional) correspondences in the form of images of another object in corresponding poses.
We trained a mixture of 40 PCA's on both image sets (2000 images of 64 × 64 pixels in each
set) and aligned these in a 3-dimensional latent space. Comparing the pose of an object to
the pose of the nearest (in latent space) image from the other object, the std. dev. of the error in latitude is 2.0°. For longitude we found 4 errors of about 180° in our 500 test cases; the rest of the errors had std. dev. 3.9°. Given a view of one object we can reconstruct the
corresponding view of the second object; Fig. 4 shows some of the obtained reconstruction
results. All presented reconstructions were made for data not included in training.
5 Discussion
In this paper, we have extended alignment methods for single manifold nonlinear dimensionality reduction to perform non-linear CCA using measurements from multiple manifolds. We have also shown the close relationship with Laplacian Eigenmaps[4] in the
degenerate case of a single manifold and feature extractors of zero dimensionality.
In [7] a related method to coordinate local charts is proposed, which is based on the LLE
cost function as opposed to our cross-entropy term; this means that we need more than just
a set of local feature extractors and their posteriors: we also need to be able to compute
reconstruction weights, collected in an N × N weight matrix. The weights indicate how
we can reconstruct each data point from its nearest neighbors. Computing these weights
requires access to the original data directly, not just through the "interface" of the mixture model. Defining sensible weights and the "right" number of neighbors might not be
straightforward, especially for data in non-Euclidean spaces. Furthermore, computing the
weights costs in principle O(N^2) because we need to find nearest neighbors, whereas the
presented work has running time linear in the number of data points.
In [11] it is considered how to find low dimensional representations for multiple point sets
simultaneously, given few correspondences between the point sets. The generalization of
LLE presented there for this problem is closely related to our non-linear CCA model. The
work presented here can also be extended to the case where we know only for few points in
one set to which points they correspond in the other set. The use of multiple sets of charts
for one data set is similar in spirit to the self-correspondence technique of [11] where the
data is split into several overlapping sets used to stabilize the generalized LLE.
Figure 4: I1: image in the first set (a); I2: corresponding image in the second set (b); closest image in the second set (in latent space) to I1 (c); reconstruction of I2 given I1 (d).
Finally, it would be interesting to compare our approach with treating the data in the joint
(x, y) space and employing techniques for a single point set[8, 7, 6]. In this case, points
for which we do not have the correspondence can be treated as data with missing values.
Acknowledgments
JJV and NV are supported by the Technology Foundation STW (AIF4997) applied science
division of NWO and the technology program of the Dutch Ministry of Economic Affairs.
STR is supported in part by the Learning Project of IRIS Canada and by the NSERC.
References
[1] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319-2323, December 2000.
[2] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500):2323-2326, December 2000.
[3] B. Schölkopf, A. J. Smola, and K. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[4] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Advances in Neural Information Processing Systems, volume 14, 2002.
[5] C. Bregler and S. M. Omohundro. Surface learning with applications to lipreading. In Advances in Neural Information Processing Systems, volume 6, 1994.
[6] S. T. Roweis, L. K. Saul, and G. E. Hinton. Global coordination of local linear models. In Advances in Neural Information Processing Systems, volume 14, 2002.
[7] Y. W. Teh and S. T. Roweis. Automatic alignment of local representations. In Advances in Neural Information Processing Systems, volume 15, 2003.
[8] M. Brand. Charting a manifold. In Advances in Neural Information Processing Systems, volume 15, 2003.
[9] C. M. Bishop, M. Svensén, and C. K. I. Williams. GTM: the generative topographic mapping. Neural Computation, 10:215-234, 1998.
[10] T. Kohonen. Self-Organizing Maps. Springer, 2001.
[11] J. H. Ham, D. D. Lee, and L. K. Saul. Learning high dimensional correspondences from low dimensional manifolds. In ICML'03 Workshop on the Continuum from Labeled to Unlabeled Data in Machine Learning and Data Mining, 2003.
[12] G. Peters, B. Zitova, and C. von der Malsburg. How to measure the pose robustness of object views. Image and Vision Computing, 20(4):249-256, 2002.
1,589 | 2,444 | Laplace Propagation
Alex J. Smola, S.V.N. Vishwanathan
Machine Learning Group
ANU and National ICT Australia
Canberra, ACT, 0200
{smola, vishy}@axiom.anu.edu.au
Eleazar Eskin
Department of Computer Science
Hebrew University Jerusalem
Jerusalem, Israel, 91904
[email protected]
Abstract
We present a novel method for approximate inference in Bayesian models and regularized risk functionals. It is based on the propagation of
mean and variance derived from the Laplace approximation of conditional probabilities in factorizing distributions, much akin to Minka's
Expectation Propagation. In the jointly normal case, it coincides with
the latter and belief propagation, whereas in the general case, it provides
an optimization strategy containing Support Vector chunking, the Bayes
Committee Machine, and Gaussian Process chunking as special cases.
1 Introduction
Inference via Bayesian estimation can lead to optimization problems over rather large data
sets. Exact computation in these cases is often computationally intractable, which has led
to many approximation algorithms, such as variational approximation [5], or loopy belief
propagation. However, most of these methods still rely on the propagation of the exact
probabilities (upstream and downstream evidence in the case of belief propagation), rather
than an approximation. This approach becomes costly if the random variables are real
valued or if the graphical model contains large cliques.
To fill this gap, methods such as Expectation Propagation (EP) [6] have been proposed,
with explicit modifications to deal with larger cliques and real-valued variables. EP works
by propagating the sufficient statistics of an exponential family, that is, mean and variance
for the normal distribution, between various factors of the posterior. This is an attractive
choice only if we are able to compute the required quantities explicitly (this means that we
need to solve an integral in closed form).
Furthermore computation of the mode of the posterior (MAP approximation) is a legitimate
task in its own right ? Support Vector Machines (SVM) fall into this category. In the
following we develop a cheap version of EP which requires only the Laplace approximation
in each step and show how this can be applied to SVM and Gaussian Processes.
Outline of the Paper We describe the basic ideas of LP in Section 2, show how it applies
to Gaussian Processes (in particular the Bayes Committee Machine of [9]) in Section 3,
prove that SVM chunking is a special case of LP in Section 4, and finally demonstrate in
experiments the feasibility of LP (Section 5).
2 Laplace Propagation
Let X be a set of observations and denote by \theta a parameter we would like to infer by studying p(\theta|X). This goal typically involves computing expectations E_{p(\theta|X)}[\theta], which can only rarely be computed exactly. Hence we approximate

E_{p(\theta|X)}[\theta] \approx \arg\max_\theta \log p(\theta|X) =: \theta^*    (1)

Var_{p(\theta|X)}[\theta] \approx \big( -\partial_\theta^2 \log p(\theta|X) \big)^{-1} \big|_{\theta = \theta^*}    (2)
This is commonly referred to as the Laplace approximation. It is exact for normal distributions and works best if \theta is strongly concentrated around its mean. Solving for \theta^* can be costly. However, if p(\theta|X) has special structure, such as being the product of several simple terms, possibly each of them dependent only on a small number of variables at a time, computational savings can be gained. In the following we present an algorithm to take advantage of this structure by breaking up (1) into smaller pieces and optimizing over them separately.
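For concreteness, here is a minimal sketch of the Laplace approximation itself (our own wrapper code, assuming a smooth negative log posterior supplied by the user): the mode is found numerically and the covariance is read off the inverse Hessian at the mode.

    import numpy as np
    from scipy.optimize import minimize

    def hessian(f, x, eps=1e-5):
        """Central-difference Hessian; adequate for a low-dimensional sketch."""
        n = len(x)
        H = np.zeros((n, n))
        I = np.eye(n) * eps
        for i in range(n):
            for j in range(n):
                H[i, j] = (f(x + I[i] + I[j]) - f(x + I[i] - I[j])
                           - f(x - I[i] + I[j]) + f(x - I[i] - I[j])) / (4 * eps**2)
        return H

    def laplace_approx(neg_log_p, theta0):
        """Fit N(theta_star, Sigma) to p(theta|X) as in (1) and (2)."""
        theta_star = minimize(neg_log_p, theta0).x
        Sigma = np.linalg.inv(hessian(neg_log_p, theta_star))
        return theta_star, Sigma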
2.1 Approximate Inference
For the sake of simplicity in notation we drop the explicit dependency of \theta on X and, as in [6], we assume that

p(\theta) = \prod_{i=1}^{N} t_i(\theta).    (3)

Our strategy relies on the assumption that if we succeed in finding good approximations \tilde{t}_i(\theta) of each of the terms t_i(\theta), we will obtain an approximate maximizer \theta^* of p(\theta) by maximizing \tilde{p}(\theta) := \prod_i \tilde{t}_i(\theta). Key is a good approximation of each of the t_i at the maximum of p(\theta). This is ensured by maximizing

p_i(\theta) := t_i(\theta) \prod_{j=1, j \neq i}^{N} \tilde{t}_j(\theta),    (4)
and subsequent use of the Laplace approximation of t_i(\theta) at \theta_i^* := \arg\max_\theta p_i(\theta) as the new estimate \tilde{t}_i(\theta). This process is repeated until convergence. The following lemma shows that this strategy is valid:

Lemma 1 (Fixed Point of Laplace Propagation) For all second-order fixed points the following holds: \theta^* is a fixed point of Laplace propagation if and only if it is a local optimum of p(\theta).

Proof. Assume that \theta^* is a fixed point of the above algorithm. Then the first order optimality conditions require \partial_\theta \log p_i(\theta^*) = 0 for all i, and the Laplace approximation yields \partial_\theta \log \tilde{t}_i(\theta^*) = \partial_\theta \log t_i(\theta^*) and \partial_\theta^2 \log \tilde{t}_i(\theta^*) = \partial_\theta^2 \log t_i(\theta^*). Consequently, up to second order, the derivatives of \tilde{p}, p_i, and p agree at \theta^*, which implies that \theta^* is a local optimum.

Next assume that \theta^* is locally optimal. Then again, \partial_\theta \log p_i(\theta^*) has to vanish, since the Laplace approximation is exact up to second order. This means that all \tilde{t}_i will have an optimum at \theta^*, which means that \theta^* is a fixed point.
The next step is to establish methods for updating the approximations \tilde{t}_i of t_i. One option is to perform such updates sequentially, thereby improving only one \tilde{t}_i at a time. This is advantageous if we can process only one approximation at a time. For parallel processing, however, we will perform several operations at a time, that is, recompute several \tilde{t}_i(\theta) and merge the new approximations subsequently. We will see how the BCM is a one-step approximation of LP in the parallel case, whereas SV chunking is an exact implementation of LP in the sequential case.
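A schematic sequential LP loop might look as follows (purely illustrative; the `taylor2` helper, which returns a callable second-order Taylor expansion of a term at a given point, is assumed to be supplied by the user):

    import numpy as np
    from scipy.optimize import minimize

    def laplace_propagation(terms, taylor2, theta0, sweeps=10):
        """terms: callables for the negative log factors -log t_i(theta).
        Repeatedly minimizes one exact term plus the surrogates of the
        others, cf. (4), then refreshes that term's quadratic surrogate."""
        theta = np.asarray(theta0, dtype=float)
        tilde = [taylor2(t, theta) for t in terms]
        for _ in range(sweeps):
            for i, t_i in enumerate(terms):
                obj = lambda th: t_i(th) + sum(q(th) for j, q in enumerate(tilde)
                                               if j != i)
                theta = minimize(obj, theta).x
                tilde[i] = taylor2(t_i, theta)   # new expansion at the minimizer
        return theta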
2.2 Message Passing
Message passing [7] has been widely successful for inference in graphical models. Assume that we can split \theta into a (not necessarily disjoint) set of coordinates, say \theta_{C_1}, ..., \theta_{C_N}, such that

p(\theta) = \prod_{i=1}^{N} t_i(\theta_{C_i}).    (5)

Then the goal of computing a Laplace approximation of p_i reduces to computing a Laplace approximation for the subset of variables \theta_{C_i}, since these are the only coordinates t_i depends on.
[Figure: a directed graphical model with nodes \theta_1, ..., \theta_5, where \theta_1 and \theta_2 are parents of \theta_3, which in turn is the parent of \theta_4 and \theta_5.]

Note that an update in \theta_{C_i} means that only terms sharing variables with \theta_{C_i} are affected. For directed graphical models, these are the conditional probabilities governing the parents and children of \theta_{C_i}. Hence, to carry out calculations we only need to consider local information regarding \tilde{t}_i(\theta_{C_i}).
In the example above \theta_3 depends on (\theta_1, \theta_2), and (\theta_4, \theta_5) are conditionally independent of \theta_1 and \theta_2, given \theta_3. Consequently, we may write p(\theta) as

p(\theta) = p(\theta_1) p(\theta_2) p(\theta_3|\theta_1, \theta_2) p(\theta_4|\theta_3) p(\theta_5|\theta_3).    (6)

To find the Laplace approximation corresponding to the terms involving \theta_3 we only need to consider p(\theta_3|\theta_1, \theta_2) itself and its neighbors "upstream" and "downstream" of \theta_3 containing \theta_1, \theta_2, \theta_3 in their functional form.

This means that LP can be used as a drop-in replacement for exact inference in message passing algorithms. The main difference being, that now we are propagating mean and variance from the Laplace approximation rather than true probabilities (as in message passing) or true means and variances (as in expectation propagation).
3 Bayes Committee Machine
In this section we show that the Bayes Committee Machine (BCM) [9] corresponds to one step of LP in conjunction with a particular initialization, namely constant \tilde{t}_i. As a result, we extend BCM into an iterative method for improved precision of the estimates.
3.1 The Basic Idea
Let us assume that we are given a set of sets of observations, say Z_1, ..., Z_N, which are conditionally independent of each other, given a parameter \theta, as depicted in the figure. [Figure: a graphical model with \theta as the common parent of Z_1, Z_2, ..., Z_N.]
Repeated application of Bayes rule allows us to rewrite the conditional density p(\theta|Z) as

p(\theta|Z) \propto p(Z|\theta) p(\theta) = p(\theta) \prod_{i=1}^{N} p(Z_i|\theta) \propto p^{1-N}(\theta) \prod_{i=1}^{N} p(\theta|Z_i).    (7)
Finally, Tresp and coworkers [9] find Laplace approximations for p(\theta|Z_i) \propto p(Z_i|\theta) p(\theta) with respect to \theta. These results are then combined via (7) to come up with an overall estimate of p(\theta|X, Y).
3.2 Rewriting The BCM
The repeated invocation of Bayes rule seems wasteful, yet it was necessary in the context of the BCM formulation to explain how estimates from subsets could be combined in a committee-like fashion. To show the equivalence of the BCM with one step of LP, recall the third term of (7). We have

p(\theta|Z) = c \cdot p(\theta) \prod_{i=1}^{N} p(Z_i|\theta),    with t_0(\theta) := p(\theta) and t_i(\theta) := p(Z_i|\theta),    (8)

where c is a suitable normalizing constant. In Gaussian processes, we generally assume that p(\theta) is normal, hence t_0(\theta) is quadratic. This allows us to state the LP algorithm to find the mode and curvature of p(\theta|Z):
Algorithm 1 Iterated Bayes Committee Machine

Initialize \tilde{t}_0 \propto c p(\theta) and \tilde{t}_i(\theta) \propto const.
repeat
  Compute new approximations \tilde{t}_i(\theta) in parallel by finding Laplace approximations to p_i, as defined in (4). Since t_0 is normal, \tilde{t}_0(\theta) = t_0(\theta). For i \neq 0 we obtain

    p_i = t_i(\theta) \prod_{j=0, j \neq i}^{N} \tilde{t}_j(\theta) = p(\theta) p(Z_i|\theta) \prod_{j=1, j \neq i}^{N} \tilde{t}_j(\theta).    (9)

until convergence
Return \arg\max_\theta t_0(\theta) \prod_{i=1}^{N} \tilde{t}_i(\theta).
Note that in the first iteration (9) can be written as p_i \propto p(\theta) p(Z_i|\theta), since all remaining terms \tilde{t}_j are constant. This means that after the first update \tilde{t}_i is identical to the estimates obtained from the BCM.

Whereas the BCM stops at this point, we have the liberty to continue the approximation and also the liberty to choose whether we use a parallel or a sequential update regime, depending on the number of processing units available. As a side-effect, we obtain a simplified proof of the following:
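For Gaussian sub-posteriors p(\theta|Z_i) = N(\mu_i, \Lambda_i^{-1}) this first update has a closed form; a hedged sketch of the fusion rule implied by (7), written in natural parameters (our own code, with invented names):

    import numpy as np

    def bcm_combine(mus, lams, mu0, lam0):
        """Combine N Gaussian sub-posteriors N(mu_i, inv(lam_i)) with prior
        N(mu0, inv(lam0)) via (7); returns the fused mean and precision."""
        N = len(mus)
        lam = sum(lams) - (N - 1) * lam0          # the p^(1-N) correction
        eta = sum(L @ m for L, m in zip(lams, mus)) - (N - 1) * lam0 @ mu0
        return np.linalg.solve(lam, eta), lam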
Theorem 2 (Exact BCM [9]) For normal distributions the BCM is exact, that is, the Iterated BCM converges in one step.
Proof. For normal distributions all \tilde{t}_i are exact, hence p(\theta) = \prod_i t_i(\theta) = \prod_i \tilde{t}_i(\theta) = \tilde{p}(\theta), which shows that \tilde{p} = p.
Note that [9] formulates the problem as one of classification or regression, that is Z = (X, Y), where the labels Y are conditionally independent, given X and the parameter \theta. This, however, does not affect the validity of our reasoning.
4 Support Vector Machines
The optimization goals in Support Vector Machines (SVMs) are very similar to those in Gaussian processes: essentially the negative log posterior −log p(θ|Z) corresponds to the objective function of the SV optimization problem.
This gives hope that LP can be adapted to SVMs. In the following we show that SVM chunking [4] and parallel SVM training [2] are special cases of LP. Taking logarithms of (3) and defining ψi(θ) := −log ti(θ) (and ψ̃i(θ) := −log t̃i(θ) analogously) we obtain the following formulation of LP in log-space.
Algorithm 2 Logarithmic Version of Laplace Propagation
Initialize ψ̃i(θ).
repeat
  Choose index i ∈ {1, . . . , N}.
  Minimize ψi(θ) + Σ_{j=1, j≠i}^{N} ψ̃j(θ) and replace ψ̃i(θ) by a Taylor approximation at the minimum θi of the above expression.
until all ψ̃i agree
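For concreteness, the Taylor step above replaces ψ̃i by the second-order expansion of ψi around the minimizer θi (the obvious choice; any quadratic surrogate matching value, gradient, and curvature at θi would serve equally well):

```latex
\tilde{\psi}_i(\theta) = \psi_i(\theta_i)
 + \nabla\psi_i(\theta_i)^{\top}(\theta - \theta_i)
 + \tfrac{1}{2}(\theta - \theta_i)^{\top}\nabla^2\psi_i(\theta_i)(\theta - \theta_i).
```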
4.1 Chunking
To show that SV chunking is equivalent to LP in log-space, we briefly review the basic ideas of chunking. The standard SVM optimization problem is

minimize_{θ,b}  Φ(θ, b) := (1/2)‖θ‖² + C Σ_{i=1}^{m} c(xi, yi, f(xi))
subject to      f(xi) = ⟨θ, φ(xi)⟩ + b   (10)
Here φ(x) is the map into feature space such that k(x, x′) = ⟨φ(x), φ(x′)⟩, and c(x, y, f(x)) is a loss function penalizing the deviation between the estimate f(x) and the observation y. We typically assume that c is convex. For the rest of the derivation we let c(x, y, f(x)) = max(0, 1 − y f(x)) (the analysis still holds in the general case; however, it becomes considerably more tedious). The dual of (10) becomes
minimize_α  (1/2) Σ_{i,j=1}^{m} αi αj yi yj k(xi, xj) − Σ_{i=1}^{m} αi   s.t.  Σ_{i=1}^{m} yi αi = 0 and αi ∈ [0, C]   (11)
The basic idea of chunking is to optimize over only a subset of the vector α at a time. Denote by Sw the set of variables we are using in the current optimization step, let αw be the corresponding vector, and let αf be the variables which remain unchanged. Likewise denote by yw, yf the corresponding parts of y, and let

H = [ Hww  Hwf ; Hfw  Hff ]

be the quadratic matrix of (11), again split into terms depending on αw and αf respectively. Then (11), restricted to αw, can be written as [4]

minimize_{αw}  (1/2) αwᵀ Hww αw + αfᵀ Hfw αw − Σ_{i∈Sw} αi   s.t.  ywᵀ αw + yfᵀ αf = 0, αi ∈ [0, C]   (12)
4.2 Equivalence to LP
We now show that the correction terms arising from chunking are the same as those arising from LP. Denote by S1, . . . , SN a partition of {1, . . . , m} and define

ψ0(θ, b) := (1/2)‖θ‖²  and  ψi(θ, b) := C Σ_{j∈Si} c(xj, yj, f(xj)).   (13)
Then ψ̃0 = ψ0, since ψ0 is purely quadratic, regardless of where we expand ψ0. As for ψi (with i ≠ 0) we have

ψ̃i = Σ_{j∈Si} yj βj ⟨φ(xj), θ⟩ + Σ_{j∈Si} yj βj b = ⟨θi, θ⟩ + bi b   (14)

where βj ∈ C ∂c(xj, yj, f(xj)), θi := Σ_{j∈Si} yj βj φ(xj), and bi := Σ_{j∈Si} yj βj.^1 In this case minimization over ψi(θ) + Σ_{j≠i} ψ̃j(θ) amounts to minimizing
(1/2)‖θ‖² + C Σ_{j∈Si} c(xj, yj, f(xj)) + C Σ_{j∉Si} [⟨θj, θ⟩ + bj b]   s.t.  f(xj) = ⟨θ, φ(xj)⟩ + b.
Skipping technical details, the dual optimization problem is given by

minimize_α  (1/2) Σ_{j,l∈Si} αj αl yj yl k(xj, xl) − Σ_{j∈Si, l∉Si} αj βl yj yl k(xj, xl) − Σ_{j∈Si} αj
subject to  αj ∈ [0, C] and Σ_{j∈Si} yj αj − Σ_{j∉Si} yj βj = 0.   (15)
The latter is identical to (12), the optimization problem arising from chunking, provided that we perform the substitution βj = −αj for all j ∉ Si.
To show this last step, note that at optimality null has to be an element of the subdifferential of ψi(θ) + Σ_{j≠i} ψ̃j(θ) with respect to (θ, b). Taking derivatives of ψi + Σ_{j≠i} ψ̃j implies

θ ∈ −C Σ_{j∈Si} ∂c(xj, yj, f(xj)) φ(xj) − C Σ_{j≠i} θj.   (16)

Matching up terms in the expansion of θ we immediately obtain βj = −αj.
Finally, to start the approximation scheme we need to consider a proper initialization of ψ̃i. In analogy to the BCM setting we use ψ̃i = 0, which leads precisely to the SVM chunking method, where one optimizes over one subset at a time (denoted by Si), while the other sets are fixed, taking only their linear contribution into account.
LP does not require that all the updates of t̃i (or ψ̃i) be carried out sequentially. Instead, we can also consider parallel approximations similar to [2]. There the optimization problem is split into several small parts and each of them is solved independently. Subsequently the estimates are combined by averaging.
This is equivalent to one-step parallel LP: with the initialization ψ̃i = 0 for all i ≠ 0 and ψ̃0 = ψ0 = (1/2)‖θ‖² we minimize ψi + Σ_{j≠i} ψ̃j in parallel. This is equivalent to solving the SV optimization problem on the corresponding subset Si (as we saw in the previous section). Hence, the linear terms θi, bi arising from the approximation ψ̃i(θ, b) = C⟨θi, θ⟩ + C bi b lead to the overall approximation

ψ̃(θ, b) = Σ_i ψ̃i(θ, b) = (1/2)‖θ‖² + Σ_i ⟨θi, θ⟩,   (17)

with the joint minimizer being the average of the individual solutions.
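A minimal sketch of this one-step parallel scheme, using scikit-learn's SVC as an assumed stand-in for the SV solvers used in the paper, is shown below; the data split, kernel parameters, and the choice to average decision functions rather than dual variables are all illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def one_step_parallel_lp(X, y, n_blocks=5, C=1.0, gamma=0.1, seed=0):
    """Fit an SVM on each chunk S_i independently (one parallel LP step with
    psi~_i = 0), then combine the committee by averaging decision functions."""
    rng = np.random.default_rng(seed)
    blocks = np.array_split(rng.permutation(len(y)), n_blocks)
    models = [SVC(C=C, kernel="rbf", gamma=gamma).fit(X[b], y[b]) for b in blocks]
    def decision(X_test):
        # average of the individual solutions, cf. Equation (17)
        return np.mean([m.decision_function(X_test) for m in models], axis=0)
    return decision
```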
5 Experiments
To test our ideas we performed a set of experiments with the widely available Web and Adult datasets from the UCI repository [1]. All experiments were performed on a 2.4 GHz Intel Xeon machine with 1 GB RAM using MATLAB R13. We used an RBF kernel with σ² = 10 [8] to obtain comparable results.
We first tested the performance of Gaussian process training with Laplace propagation using a logistic loss function. The data was partitioned into chunks of roughly 500 samples each, and the maximum number of columns in the low-rank approximation [3] was set to 750.
^1 Note that we had to replace the equality with set inclusion because c is not everywhere differentiable; hence we used sub-differentials instead.
We summarize the performance of our algorithm in Table 1. TFactor refers to the time (in seconds) for computing the low-rank factorization, while TSerial and TParallel denote the Gaussian process training time with serial and parallel updates, respectively. We empirically observed that on all datasets the algorithm converges in less than 3 iterations using serial updates and in less than 6 iterations using parallel updates.
Dataset  TFactor  TSerial  TParallel    Dataset  TFactor  TSerial  TParallel
Adult1    16.38    25.72     53.90      Web1      20.33    34.33     93.47
Adult2    20.07    33.02     75.76      Web2      36.27    67.65     88.37
Adult3    24.41    47.05    106.88      Web3      37.09    92.36    212.04
Adult4    36.29    75.71    202.88      Web4      69.90   168.88    251.92
Adult5    56.82    97.57    169.79      Web5      68.15   225.13    249.15
Adult6    89.78   232.45    348.10      Web6     129.86   261.23    663.07
Adult7   119.39   293.45    559.23      Web7     213.54   483.52    838.36
Table 1: Gaussian process training with serial and parallel Laplace propagation.
We conducted another set of experiments to test the speedups obtained by seeding the SMO with values of α obtained by performing one iteration of Laplace propagation on the dataset. As before, we used an RBF kernel with σ² = 10. We partitioned the Adult1 and Web1 datasets into 5 chunks each, while the Adult4 and Web4 datasets were partitioned into 10 chunks each. The freely available SMOBR package was modified and used for our experiments. For simplicity we use the C-SVM and vary the regularization parameter. TParallel, TSerial and TNoMod refer to the times required by SMO to converge when using one iteration of parallel/serial/no LP on the dataset.
        Adult1                              Adult4
C      TParallel  TSerial   TNoMod      TParallel  TSerial    TNoMod
0.1        2.84     2.04     7.650         20.42    13.26     59.935
0.5        5.57     3.99     9.215         46.29    40.82     63.645
1.0        5.48     7.25    10.885         80.33    64.37    107.475
5.0      107.37   110.07   307.135       1921.19  1500.42   1427.925
Table 2: Performance of SMO Initialization on the Adult dataset.
        Web1                                Web4
C      TParallel  TSerial   TNoMod      TParallel  TSerial    TNoMod
0.1       21.36    15.65    27.34          63.76    77.05     95.10
0.5       34.64    35.66    60.12         140.61   149.80    156.525
1.0       61.15    38.56    63.745        254.84   298.59    232.120
5.0      224.15    62.41   519.67        1959.08  3188.75   2223.225
Table 3: Performance of SMO Initialization on the Web dataset.
As can be seen, our initialization significantly speeds up SMO in many cases, sometimes achieving up to a 4-fold speedup, although in some cases (especially for large values of C) our method seems to slow down the convergence of SMO. In general, serial updates seem to perform better than parallel updates. This is to be expected, since the serial algorithm uses the information from other blocks as soon as it becomes available, while the parallel algorithm ignores the other blocks completely.
6 Summary and Discussion
Laplace propagation fills the gap between Expectation Propagation, which requires exact computation of first and second order moments, and message passing algorithms when optimizing structured density functions. Its main advantage is that it only requires the Laplace approximation in each computational step, while being applicable to a wide range of optimization tasks. In this sense, it complements Minka's Expectation Propagation whenever exact expressions are not available.
As a side effect, we showed that Tresp's Bayes Committee Machine and Support Vector chunking methods are special instances of this strategy, which also sheds light on why simple averaging schemes for SVMs, such as the one of Collobert and Bengio, seem to work in practice.
The key point in our proofs was that we split the data into disjoint subsets. By the assumption of independent and identically distributed data, it followed that the variable assignments are conditionally independent of each other, given the parameter θ, which led to a favorable factorization property in p(θ|Z). It should be noted that LP allows one to perform chunking-style optimization in Gaussian processes, which effectively puts an upper bound on the amount of memory required for optimization purposes.
Acknowledgements We thank Nir Friedman, Zoubin Ghahramani and Adam Kowalczyk
for useful suggestions and discussions.
References
[1] C. L. Blake and C. J. Merz. UCI repository of machine learning databases, 1998.
[2] R. Collobert, S. Bengio, and Y. Bengio. A parallel mixture of SVMs for very large scale problems. In Advances in Neural Information Processing Systems. MIT Press, 2002.
[3] S. Fine and K. Scheinberg. Efficient SVM training using low-rank kernel representations. Journal of Machine Learning Research, 2:243–264, Dec 2001. http://www.jmlr.org.
[4] T. Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. J. C. Burges, and A. J. Smola, editors, Advances in Kernel Methods - Support Vector Learning, pages 169–184, Cambridge, MA, 1999. MIT Press.
[5] M. I. Jordan, Z. Ghahramani, T. S. Jaakkola, and L. K. Saul. An introduction to variational methods for graphical models. In M. I. Jordan, editor, Learning in Graphical Models, pages 105–162. Kluwer Academic, 1998.
[6] T. Minka. Expectation Propagation for approximate Bayesian inference. PhD thesis, MIT Media Lab, Cambridge, USA, 2001.
[7] J. Pearl. Probabilistic Reasoning in Intelligent Systems. Morgan Kaufmann, 1988.
[8] J. C. Platt. Sequential minimal optimization: A fast algorithm for training support vector machines. Technical Report MSR-TR-98-14, Microsoft Research, 1998.
[9] V. Tresp. A Bayesian committee machine. Neural Computation, 12(11):2719–2741, 2000.
| 2444 |
1,590 | 2,445 | Gene Expression Clustering with Functional Mixture Models
Darya Chudova,
Department of Computer Science
University of California, Irvine
Irvine CA 92697-3425
[email protected]
Christopher Hart
Division of Biology
California Institute of Technology
Pasadena, CA 91125
[email protected]
Eric Mjolsness
Department of Computer Science
University of California, Irvine
Irvine CA 92697-3425
[email protected]
Padhraic Smyth
Department of Computer Science
University of California, Irvine
Irvine CA 92697-3425
[email protected]
Abstract
We propose a functional mixture model for simultaneous clustering and
alignment of sets of curves measured on a discrete time grid. The model
is specifically tailored to gene expression time course data. Each functional cluster center is a nonlinear combination of solutions of a simple
linear differential equation that describes the change of individual mRNA
levels when the synthesis and decay rates are constant. The mixture of
continuous time parametric functional forms allows one to (a) account for
the heterogeneity in the observed profiles, (b) align the profiles in time by
estimating real-valued time shifts, (c) capture the synthesis and decay of
mRNA in the course of an experiment, and (d) regularize noisy profiles
by enforcing smoothness in the mean curves. We derive an EM algorithm for estimating the parameters of the model, and apply the proposed
approach to the set of cycling genes in yeast. The experiments show
consistent improvement in predictive power and within cluster variance
compared to regular Gaussian mixtures.
1 Introduction
Curve data arises naturally in a variety of applications. Each curve typically consists of a
sequence of measurements as a function of discrete time or some other independent variable. Examples of such data include trajectory tracks of individuals or objects (Gaffney
and Smyth, 2003) and biomedical measurements of response to drug therapies (James and
Sugar, 2003). In some cases, the curve data is measured on regular grids and the curves have
the same lengths. It is straightforward to treat such curves as elements of the corresponding
vector spaces, and apply traditional vector-based clustering methodologies such as k-means or mixtures of Gaussian distributions. Often, however, the curves are sampled irregularly, have varying lengths, lack proper alignment in the time domain, or the task requires interpolation
or inference at the off-grid locations. Such properties make vector-space representations
undesirable. Curve data analysis is typically referred to as "functional data analysis" in
the statistical literature (Ramsay and Silverman, 1997), where the observed measurements
are treated as samples from an assumed underlying continuous-time process. Clustering
in this context can be performed using mixtures of continuous functions such as splines
(James and Sugar, 2003) and polynomial regression models (DeSarbo and Cron, 1988;
Gaffney and Smyth, 2003). In this paper we focus on the specific problem of analyzing
gene expression time course data and extend the functional mixture modelling approach to
(a) cluster the data using plausible biological models for the expression dynamics, and (b)
align the expression profiles along the time axis.
Large scale gene expression profiling measures the relative abundance of tens of thousands
of mRNA molecules in the cell simultaneously. The goal of clustering in this context is to
discover groups of genes with similar dynamics and find sets of genes that participate in the
same regulatory mechanism. For the most part, clustering approaches to gene expression
data treat the observed curves as elements of the corresponding vector-space. A variety of
vector-based clustering algorithms have been successfully applied, ranging from hierarchical clustering (Eisen et al., 1998) to model-based methods (Yeung et al., 2001). However,
approaches operating in the observed "gridded" domain of discrete time are insensitive to
many of the constraints that the temporal nature of the data imposes, including
Continuity of the temporal process: The continuous-time nature of gene expression
dynamics are quite important from a scientific viewpoint. There has been some
previous work on continuous time models in this context, e.g., mixed effects mixtures of splines (Bar-Joseph et al., 2002) were applied to clustering and alignment
of the cell-cycle regulated genes in yeast and good interpolation properties were
demonstrated. However, such spline models are "black boxes" that can approximate virtually any temporal behavior; they do not take the specifics of gene
regulation mechanisms into account. In contrast, in this paper we propose specific
functional forms that are targeted at short time courses, in which fairly simple
reaction kinetics can describe the possible dynamics.
Alignment: Individual genes within clusters of co-regulated genes can exhibit variations
in the time of the onset of their characteristic behaviors or in their initial concentrations. Such differences can significantly increase within-cluster variability
and produce incorrect cluster assignments. We address this problem by explicitly
modelling the unknown real-valued time shifts between different genes.
Smoothing. The high noise levels of observed gene expression data imply the need for
robust estimation of mean behavior. Functional models (such as those that we
propose here) naturally impose smoothness in the learned mean curves, providing
implicit regularization for such data.
While some of these problems have been previously addressed individually, no prior work
handles all of them in a unified manner. The primary contributions of this paper are (a) a
new probabilistic model based on functional mixtures that can simultaneously cluster and
align sets of curves observed on irregular time grids, and (b) a proposal for a specific functional form that models changes in mRNA levels for short gene expression time courses.
2 Model Description
2.1 Generative Model
We describe a generative model that allows one to simulate heterogeneous sets of curves
from a mixture of functional curve models. Each generated curve Yi is a series of observations at a discrete set of values Xi of an independent variable. In many applications, and
for gene expression measurements in particular, the independent variable X is time.
We adopt the same general approach to functional curve clustering that is used in regression
mixture models (DeSarbo and Cron, 1988), random effects regression mixtures (Gaffney
and Smyth, 2003) and mixtures of spline models (James and Sugar, 2003). In all of these
models, the component densities are conditioned on the values of the independent variable
Xi , and the conditional likelihood of a set Y of N curves is defined as
P(Y|X, Θ) = ∏_{i=1}^{N} Σ_{k=1}^{K} P(Yi|Xi, θk) P(k)   (1)

Here P(k) is the component probability and Θ is the complete set of model parameters.
The clusters are defined by their mean curves, parametrized by a set of parameters βk: fk(x) = f(x, βk), and a noise model that describes the deviation from the mean functional form (described below in Section 2.2).
In contrast to standard Gaussian mixtures, the functional mixture is defined in continuous
time, allowing evaluation of the mean curves on a continuum of "off-grid" time points.
This allows us to extend the functional mixture models described above by incorporating
real-valued alignment of observed curves along the time axis. In particular, the precise time
grid Xi of observation i is assumed unknown and is allowed to vary from curve to curve.
This is common in practice when the measurement process cannot be synchronized from
curve to curve. For simplicity we assume (unknown) linear shifts of the curves along the
time axis. We fix the basic time grid X, but generate each curve on its own grid (X + φi) with a curve-specific time offset φi. We treat the offset corresponding to curve Yi as an
additional real-valued latent variable in the model. The conditional likelihood of a single
curve under component k is calculated by integrating out all possible offset values:
P (Yi |X, ?k ) =
?i
P (Yi |X + ?i , ?k )P (?i |?k )d?i
(2)
Finally, we assume that the measurements have additive Gaussian noise with zero mean
and diagonal covariance matrix Ck , and express the conditional likelihood as
P(Yi|X + φi, θk) ∼ N(Yi | fk(X + φi), Ck)   (3)
The full set of cluster parameters θk includes the mean curve parameters βk that define fk(x), the covariance matrix Ck, the cluster probability P(k), and the time shift probability P(φ|k): θk = {βk, Ck, P(k), P(φ|k)}.
2.2 Functional form of the mean curves
The generative model described above uses a generic functional form f(x, β) for the mean curves. In this section, we introduce a parametric representation of f(x, β) that is specifically tailored to short gene expression time courses.
To a first-order approximation, the raw mRNA levels {v1, . . . , vN} measured in gene expression experiments can be modeled via a system of differential equations with the following structure (see Gibson and Mjolsness (2001), eq. 1.19, and Mestl, Lemay, and Glass (1996)):

dvi/dt = λ g1,i(v1, . . . , vN) − γ vi g2,i(v1, . . . , vN)   (4)
The first term on the right-hand side is responsible for the synthesis of vi with maximal rate λ, and the second term represents decay with maximal fractional rate γ. In general, we don't know the specific coefficients or nonlinear saturating functions g1 and g2 that define the right-hand side of the equations. Instead, we make a few simplifying assumptions about
the equation and use it as a motivation for the parametric functional form that we propose
below. Specifically, suppose that
- the set of N heterogeneous variables can be divided into K groups of variables, whose production is driven by similar mechanisms;
- the synthesis and decay functions g1 and g2 are approximately piecewise constant in time for any given group;
- there are at most two regimes involved in the production of vi, each characterized by their own synthesis and decay rates (this is appropriate for short time courses);
- for each group there can be an unknown change point on the time axis where a relatively rapid switching between the two different regimes takes place, due to exogenous changes in the variables (v1, . . . , vN) outside the group.
Within the regions of constant synthesis and decay functions g1 and g2 , we can solve equation (4) analytically and obtain a family of simple exponential solutions parametrized by
β1 = {α, λ, γ}:

f^a(x, β1) = (α − λ/γ) e^{−γx} + λ/γ,   (5)
This motivates us to construct the functional forms for the mean curves by concatenating two parameterized exponents, with an unknown change point and a smooth transition
mechanism:
f(x, β) = f^a(x, β1)(1 − ξ(x, μ, s)) + f^a(x, β2) ξ(x, μ, s)   (6)

Here f^a(x, β1) and f^a(x, β2) represent the exponents to the left and right of the switching point, with different sets of initial conditions, synthesis, and decay rates denoted by the parameters β1 and β2. The nonlinear sigmoid transfer function ξ(x, μ, s) allows us to model switching between the two regimes at x = μ with slope s: ξ(x, μ, s) = (1 + e^{−s(x−μ)})^{−1}.
The random effects on the time grid allow us to time-shift each curve individually by replacing x with (x + φi) in Equation (6). There are other biologically plausible transformations on the curves in a cluster that we do not pursue in this paper, such as allowing μ to vary
with each curve, or representing minor differences in the regulatory functions g1,i and g2,i
which affect the timing of their transitions.
When learning these models from data, we restrict the class of functions in Equation (6) to
those with non-negative initial conditions, synthesis and decay rates, as well as enforcing
continuity of the exponents at the switching point: f^a(μ, β1) = f^a(μ, β2). Finally, given
that the log-normal noise model is well-suited to gene expression data (Yeung et al., 2001)
we use the logarithm of the functional forms proposed in Equation (6) as a general class of
functions that describe the mean behavior within the clusters.
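As a concrete illustration, here is a minimal Python sketch of the mean curve in Equations (5)-(6). The parameter names follow the 7-parameter form β = {α, λ1, γ1, λ2, γ2, μ, s} used later in Section 3; re-basing the second regime at the switch point is one simple way to respect the continuity constraint and is an assumption of this sketch rather than a detail taken from the paper.

```python
import numpy as np

def regime(x, a, lam, gam):
    """Solution of dv/dt = lam - gam * v with v(0) = a  (Equation (5))."""
    return (a - lam / gam) * np.exp(-gam * x) + lam / gam

def mean_curve(x, a, lam1, gam1, lam2, gam2, mu, s):
    """Equation (6): sigmoidal switch at time mu (slope s) between regimes."""
    xi = 1.0 / (1.0 + np.exp(-s * (x - mu)))   # transfer function
    a2 = regime(mu, a, lam1, gam1)             # level reached at the switch
    left = regime(x, a, lam1, gam1)
    right = regime(x - mu, a2, lam2, gam2)     # second regime restarts at mu
    return left * (1.0 - xi) + right * xi
```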
3 Parameter Estimation
We use the well-known Expectation-Maximization (EM) algorithm to simultaneously recover the full set of model parameters Θ = {θ1, . . . , θK}, as well as the joint posterior distribution of cluster membership Z and time offsets φ for each observed curve. Each cluster is characterized by the parameters of the mean curve, the noise variance, the cluster probability, and the time shift distribution: θk = {βk, Ck, P(k), P(φ|k)}.

[Figure 1 graphic: left, the estimated cluster mean curves (log-intensity versus time, -10 to 80 minutes); right, scatter of inverse switching slope versus switching time for K = 1, . . . , 5.]
Figure 1: A view of the cluster mean curves (left) and variation in the switching-point parameters across 10 cross-validation folds (right) using functional clustering with alignment (see Section 4 for full details).
- In the E-step, we find the posterior distribution of the cluster membership Zi and the time shift φi for each curve Yi, given the current cluster parameters Θ;
- In the M-step, we maximize the expected log-likelihood with respect to the posterior distribution of Z and φ by adjusting Θ.

Since the time shifts φ are real-valued, the E-step requires evaluation of the posterior distribution over a continuous domain of φ. Similarly, the M-step requires integration with respect to φ. We approximate the domain of φ with a finite sample from its prior distribution. The sample is kept fixed throughout the computation. The posterior probability of the sampled values is updated after each M-step to approximate the model distribution P(φ|k).
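A minimal sketch of the resulting E-step is shown below: the posterior over (cluster, sampled offset) pairs is computed on a fixed grid of offsets drawn from the prior. Diagonal Gaussian noise is assumed, as in Equation (3); the argument names are illustrative.

```python
import numpy as np

def e_step(Y, X, mean_fns, noise_vars, log_pk, offsets, log_p_off):
    """Posterior P(k, offset d | curve) for each of the N observed curves.
    Y: (N, T) curves; X: (T,) time grid; mean_fns[k](t) evaluates f_k;
    noise_vars[k]: (T,) diagonal noise; log_p_off: (K, D) offset log-probs."""
    N, K, D = Y.shape[0], len(mean_fns), len(offsets)
    logpost = np.empty((N, K, D))
    for k in range(K):
        for d, delta in enumerate(offsets):
            mu = mean_fns[k](X + delta)        # mean curve on the shifted grid
            ll = -0.5 * np.sum((Y - mu) ** 2 / noise_vars[k]
                               + np.log(2 * np.pi * noise_vars[k]), axis=1)
            logpost[:, k, d] = ll + log_pk[k] + log_p_off[k, d]
    logpost -= logpost.max(axis=(1, 2), keepdims=True)   # numerical stability
    post = np.exp(logpost)
    return post / post.sum(axis=(1, 2), keepdims=True)
```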
The M-step optimization problem does not allow closed-form solutions due to nonlinearities with respect to function parameters. We use conjugate gradient descent with
a pseudo-Newton step size selection. The step size selection issue is crucial in this problem, as the second derivatives with respect to different parameters of the model differ by
orders of magnitude. This indicates the presence of ridges and ravines on the likelihood
surface, which makes gradient descent highly sensitive to the step size and slow to converge. To speed up the EM algorithm, we initialize the coefficients of the mean functional
forms by approximating the mean vectors obtained using a standard vector-based Gaussian
mixture model on the same data. This typically produces a useful set of initial parameter
values which are then optimized by running the full EM algorithm for a functional mixture
model with alignment.
We use the EM algorithm in its maximum a posteriori (MAP) formulation, using a zero-mean Gaussian prior distribution on the time-shift parameter. The variance of the prior
distribution allows us to control the amount of shifting allowed in the model. We also use
conjugate prior distributions for the noise variance Ck to regularize the model and prohibit
degenerate solutions with near-zero covariance terms.
Figure 1 shows examples of mean curves (Equation (6)) that were learned from actual gene expression data. Each functional form has 7 free parameters: β = {α, λ1, γ1, λ2, γ2, μ, s}.
Note that, as with many time-course gene expression data sets, having so few points
presents an obvious problem for parameter estimation directly from a single curve. However, the curve-specific time shifts in effect provide a finer sampling grid that helps to
[Figure 2 graphic: per-point logP (left) and MSE (right) versus number of components (5-9), comparing the functional mixture model with the Gaussian mixture model.]
Figure 2: Cross-validated conditional logP scores (left) and cross-validated interpolation
mean-squared error (MSE) (right), as a function of the number of mixture components, for
the first cell cycle of the Cho et al. data set.
recover the parameters from observed data, in addition to the "pooling" effect of learning common functional forms for groups of curves. The right-hand side of Figure 1 shows a scatter plot of the switching parameters for 5 clusters estimated from 10 different cross-validation runs. The 5 clusters exhibit different dynamics (as indicated by the spread in
parameter space) and the algorithm finds qualitatively similar parameter estimates for each
cluster across the 10 different runs.
4 Experimental Evaluation
4.1 Gene expression data
We illustrate our approach using the immediate responses of yeast Saccharomyces cerevisiae when released from cell cycle arrest, using the raw data reported by Cho et al. (1998). Briefly, the CDC28 TS mutants were released from the cell cycle arrest by temperature shift. Cells were harvested and RNA was collected every 10 min for 170 min, spanning two cell cycles. The RNA was then analyzed using Affymetrix gene chip arrays. From these
data we select only the 416 genes which are reported to be actively regulated throughout
the cell cycle and are expressed for 30 continuous minutes above an Affymetrix absolute
level of 100 (a total of 385 genes pass these criteria). We normalize each gene expression
vector by its median expression value throughout the time course to reduce the influence of
probe-specific intensity biases.
4.2 Experimental results
In order to study the immediate cellular response we analyze only the first 8 time points of
this data set. We evaluate the cross-validated out-of-sample performance of the proposed
functional mixture model. A conventional Gaussian mixture model applied to observations
on the discrete time grid is used for baseline comparison. It is not at all clear a priori
that functional mixture models with a highly constrained parametric set of mean curves
should outperform Gaussian mixtures that impose no parametric assumptions and are free
to approximate any discrete grid observation. While one can expect that mixtures of splines
(Bar-Joseph et al., 2002) or functions with universal approximation capabilities can be
fitted to any mean behavior, the restricted class of functions that we proposed (based on the
simplified dynamics of the mRNA changes implied by the differential equation in Equation
(4)) is likely to fail if the true dynamics does not match the assumptions.
There are two main reasons to use the proposed restricted class of functional forms: (1)
[Figure 3 graphic: grids of MSE-versus-number-of-components plots (5-9 components) for the functional and regular mixture models; left, one-step-ahead prediction at T = 6, 7, 8; right, interpolation at T = 2 through 7.]
Figure 3: Cross-validated one-step-ahead prediction MSE (left) and cross-validated interpolation MSE (right) for the first cell cycle of the Cho et al. data set.
to be able to interpret the resulting mean curves in terms of the synthesis/decay rates at each of the regimes as well as the switching times; (2) to naturally incorporate alignment by real-valued shifts along the time axis.
In Figures 2 and 3, we present 5-fold cross-validated out-of-sample scores, as a function
of the number of clusters, for both the functional mixture model and the baseline Gaussian
mixture model. The conditional logP score (Figure 2, left panel) estimates the average
probability assigned to a single measurement at time points 6, 7, 8 within an unseen curve,
given the first five measurements of the same curve. Higher scores indicate a better fit.
The conditioning on the first few time points allows us to demonstrate the power of models
with random effects since estimation of alignment based on partial curves improves the
probability of the remainder of the curve.
The interpolation error in Figure 2 (right panel) shows the accuracy of recovering missing measurements. The observed improvement in this score is likely due to the effect of
aligning the test curves. To evaluate the interpolation error, we trained the models on the
full training curves, and then assumed a single measurement was missing from the test
curve (at time point 2 through 7). The model was then used to make a prediction at the
time point of the missing measurement, and the interpolation error was averaged for all
time points and test curves. The right panel of Figure 3 contains a detailed view of these
results: each subplot shows the mean error in recovering values at a particular time point.
While some time points are harder to approximate than the others (in particular, T = 2, 3),
the functional mixture models provide better interpolation properties overall. Difficulties
in approximating at T = 2, 3 can be attributed to the large changes in the intensities at
these time points, and possibly indicate the limitations of the functional forms chosen as
candidate mean curves.
Finally, the left panel of Figure 3 shows improvement in one-step-ahead prediction error.
Again, we trained the models on the full curves, and then used the models to make predictions for test curves at time T given all measurements up to T − 1 (T = 6, 7, 8). Figures
2 and 3 demonstrate a consistent improvement in the out-of-sample performance of the
functional mixtures.
The improvements seen in these plots result from integrating alignment along the time
axis into the clustering framework. We found that the functional mixture model without
alignment does not result in better out-of-sample performance than discrete-time Gaussian
mixtures. This may not be surprising given the constrained nature of the fitted functions.
In the experiments presented in this paper we used a Gaussian prior distribution on the time-
shift parameter to softly constrain the shifts to lie roughly within 1.5 time grid intervals.
The discrete grid alignment approaches that we proposed earlier in Chudova et al (2003)
can successfully align curves if one assumes offsets on the scale of multiple time grid
points. However, they are not designed to handle finer sub-grid alignments. Also worth
noting is the fact that continuous time mixtures can align curves sampled on non-uniform
time grids (such non-uniform sampling in time is relatively common in gene expression
time course data).
5 Conclusions
We presented a probabilistic framework for joint clustering and alignment of gene expression time course data using continuous-time cluster models. These models allow (1) real-valued off-grid alignment of unequally spaced measurements, (2) off-grid interpolation,
and (3) regularization by enforcing smoothness implied by the functional cluster forms.
We have demonstrated that a mixture of simple parametric functions with nonlinear transition between two exponential regimes can model a broad class of gene expression profiles
in a single cell cycle of yeast. Cross-validated performance scores show the advantages
of continuous time models over standard Gaussian mixtures. Possible extensions include
adding additional curve-specific parameters, incorporating other alignment methods, and
introducing periodic functional forms for multi-cycle data.
References
Bar-Joseph, Z., Gerber, G., Gifford, D., Jaakkola, T., and Simon, I. (2002). A new approach to analyzing gene expression time series data. In The Sixth Annual International Conference on (Research in) Computational (Molecular) Biology (RECOMB), pages 39–48, N.Y. ACM Press.
Cho, R. J., Campbell, M. J., Winzeler, E. A., Steinmetz, L., Conway, A., Wodicka, L., Wolfsberg, T. G., Gabrielian, A. E., Landsman, D., Lockhart, D. J., and Davis, R. W. (1998). A genome-wide transcriptional analysis of the mitotic cell cycle. Mol Cell, 2(1):65–73.
Chudova, D., Gaffney, S., Mjolsness, E., and Smyth, P. (2003). Mixture models for translation-invariant clustering of sets of multi-dimensional curves. In Proceedings of the Ninth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 79–88, Washington, DC.
DeSarbo, W. S. and Cron, W. L. (1988). A maximum likelihood methodology for clusterwise linear regression. Journal of Classification, 5(1):249–282.
Eisen, M. B., Spellman, P. T., Brown, P. O., and Botstein, D. (1998). Cluster analysis and display of genome-wide expression patterns. Proc Natl Acad Sci U S A, 95(25):14863–8.
Gaffney, S. J. and Smyth, P. (2003). Curve clustering with random effects regression mixtures. In Bishop, C. M. and Frey, B. J., editors, Proceedings of the Ninth International Workshop on Artificial Intelligence and Statistics, Key West, FL.
Gibson, M. and Mjolsness, E. (2001). Modeling the activity of single genes. In Bower, J. M. and Bolouri, H., editors, Computational Methods in Molecular Biology. MIT Press.
James, G. M. and Sugar, C. A. (2003). Clustering for sparsely sampled functional data. Journal of the American Statistical Association, 98:397–408.
Mestl, T., Lemay, C., and Glass, L. (1996). Chaos in high-dimensional neural and gene networks. Physica, 98:33.
Ramsay, J. and Silverman, B. W. (1997). Functional Data Analysis. Springer-Verlag, New York, NY.
Yeung, K. Y., Fraley, C., Murua, A., Raftery, A. E., and Ruzzo, W. L. (2001). Model-based clustering and data transformations for gene expression data. Bioinformatics, 17(10):977–987.
| 2445 |
1,591 | 2,446 | A Neuromorphic Multi-chip Model of a Disparity Selective Complex Cell
Eric K. C. Tsang and Bertram E. Shi
Dept. of Electrical and Electronic Engineering
Hong Kong University of Science and Technology
Kowloon, HONG KONG SAR
{eeeric,eebert}@ust.hk
Abstract
The relative depth of objects causes small shifts in the left and right retinal positions of these objects, called binocular disparity. Here, we
describe a neuromorphic implementation of a disparity selective complex cell using the binocular energy model, which has been proposed to
model the response of disparity selective cells in the visual cortex. Our
system consists of two silicon chips containing spiking neurons with
monocular Gabor-type spatial receptive fields (RF) and circuits that
combine the spike outputs to compute a disparity selective complex cell
response. The disparity selectivity of the cell can be adjusted by both
position and phase shifts between the monocular RF profiles, which are
both used in biology. Our neuromorphic system performs better with
phase encoding, because the relative responses of neurons tuned to different disparities by phase shifts are better matched than the responses
of neurons tuned by position shifts.
1 Introduction
The accurate perception of the relative depth of objects enables both biological organisms
and artificial autonomous systems to interact successfully with their environment. Binocular disparity, the positional shift between corresponding points in two eyes or cameras
caused by the difference in their vantage points, is one important cue that can be used to
infer depth.
In the mammalian visual system, neurons in the visual cortex combine signals from the
left and right eyes to generate responses selective for a particular disparity [1]. Ohzawa et
al.[2] proposed the binocular energy model to explain the responses of binocular complex
cells in the cat visual cortex, and found that the predictions of this model are in good
agreement with measured data. This model also matches data from the macaque [3].
In the energy model, a neuron achieves its particular disparity tuning by either a position
or a phase shift between its monocular receptive field (RF) profiles for the left and right
eyes. Based on an analysis of a population of binocular cells, Anzai et al. [4] suggest that
the cat primarily encodes disparity via a phase shift, although position shifts may play a
larger role at higher spatial frequencies. Computational studies show that it is possible to
estimate disparity from the relative responses of model complex cells tuned to different
disparities [5][6].
This paper describes a neuromorphic implementation of disparity tuned neurons constructed according to the binocular energy model. Section 2 reviews the binocular energy
model and the encoding of disparity by position and phase shifts. Section 3 describes our
implementation. Section 4 presents measured results from the system illustrating better
performance for neurons tuned by phase than by position. This preference arises because
the position-tuned neurons are more sensitive to the mismatch in the circuits on the Gabor-type filter chip than the phase-tuned neurons. We have characterized the mismatch on the chip, as well as its effect on the complex cell outputs, and found that the phase model is least sensitive to the parameters that vary most. Section 5 summarizes our results.
2 The Binocular Energy Model
Ohzawa et al. [2] proposed the binocular energy model to explain the response of binocular complex cells measured in the cat. Anzai et al. further refined the model in a series of
papers [4][7][8]. In this model, the response of a binocular complex cell is the linear combination of the outputs of four binocular simple cells, as shown in Figure 1. The response
of a binocular simple cell is computed by applying a linear binocular filter to the input
from the two eyes, followed by a half-squaring nonlinearity: r_s = (b(c_R, c_L, φ_R, φ_L)_+)², where b_+ = max{b, 0} is the positive half-wave rectifying nonlinearity. The linear binocular filter output is the sum of two monocular filter outputs
b(c_R, c_L, φ_R, φ_L) = m(c_R, φ_R, I_R) + m(c_L, φ_L, I_L)   (1)
where the monocular filters linearly combine image intensity, I(x) , with a Gabor receptive
field profile
m(c, φ, I) = Σ_x g(x, c, φ) I(x)
g(x, c, φ) = κ e^{−(1/2)(x−c)ᵀ C⁻¹ (x−c)} cos(Ωᵀ(x − c) + φ)
where x ∈ ℤ² indexes pixel position. The subscripts R and L denote parameters or image intensities from the right or left eye. The parameters Ω ∈ ℝ² and C ∈ ℝ²ˣ² control the spatial frequency and bandwidth of the filter, and κ controls the gain. These parameters are assumed to be the same in all of the simple cells that make up a complex cell. However, the center position c ∈ ℝ² and the phase φ ∈ ℝ vary, both between the two eyes and among the four simple cells.
[Figure 1 graphic: the left and right inputs pass through even and odd linear binocular filters; each filter output is half-squared with positive and negative sign, and the four binocular simple cell outputs are summed to form the binocular complex cell response Cx.]
Fig. 1: Binocular energy model of a complex cell.
While the response of simple cells depends heavily upon the stimulus phase and contrast,
the response of complex cells is largely independent of the phase and contrast. The binocular energy model posits that complex cells achieve this invariance by linearly combining
the outputs of four simple cell responses whose binocular filters are in quadrature phase,
being identical except that they differ in phase by π/2. Because filters that differ in phase by π are identical except for a change in sign, we only require two unique binocular filters, the four required simple cell outputs being obtained by positive and negative half
squaring their outputs.
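To make the construction concrete, here is a minimal one-dimensional Python sketch of a binocular energy complex cell built from one quadrature pair of binocular filters; the Gaussian envelope (rather than the chip's actual envelope), the parameter values, and the function names are assumptions of the sketch.

```python
import numpy as np

def gabor(x, c, phi, omega=0.5, sigma=4.0, kappa=1.0):
    """1-D Gabor RF profile; the 2-D version in the text adds orientation."""
    return kappa * np.exp(-0.5 * ((x - c) / sigma) ** 2) \
                 * np.cos(omega * (x - c) + phi)

def complex_cell(I_L, I_R, c_L, c_R, phi_L, phi_R, omega=0.5, sigma=4.0):
    """Sum of four half-squared simple cells from a quadrature pair (Fig. 1)."""
    x = np.arange(len(I_L))
    resp = 0.0
    for dphi in (0.0, np.pi / 2):                 # quadrature pair
        b = (gabor(x, c_L, phi_L + dphi, omega, sigma) @ I_L
             + gabor(x, c_R, phi_R + dphi, omega, sigma) @ I_R)
        resp += np.maximum(b, 0.0) ** 2 + np.maximum(-b, 0.0) ** 2
    return resp
```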
Complex cells constructed according to the binocular energy model respond to disparities
in the direction orthogonal to their preferred orientation. Their disparity tuning in this
direction depends upon the relative center positions and the relative phases of the monocular filters. A binocular complex cell whose monocular filters are shifted by
Δc = c_R − c_L and Δφ = φ_R − φ_L will respond maximally for an input disparity D_pref ≈ Δc − Δφ/Ω (i.e. I_R(x) ≈ I_L(x − D_pref)). Disparity is encoded by a position shift if Δc ≠ 0 and Δφ = 0. Disparity is encoded by a phase shift if Δc = 0 and Δφ ≠ 0. The cell uses a hybrid encoding if both Δc ≠ 0 and Δφ ≠ 0. Phase encoding and position encoding are equivalent for the zero-disparity tuned cell (Δc = 0 and Δφ = 0).
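Using the complex_cell sketch above (and assuming I_L, I_R are the intensity arrays from that sketch; the sign convention relating Δφ to D_pref is also an assumption), the two encodings correspond to different argument choices:

```python
# Phase-tuned cell preferring disparity D = 2 pixels (omega = 0.5):
r_phase = complex_cell(I_L, I_R, c_L=32, c_R=32, phi_L=0.0, phi_R=-0.5 * 2)
# Position-tuned cell preferring the same disparity: shift the center instead.
r_pos = complex_cell(I_L, I_R, c_L=32, c_R=34, phi_L=0.0, phi_R=0.0)
```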
3 Neuromorphic Implementation
Figure 2 shows a block diagram of our binocular cell system, which uses a combination of
analog and digital processing. At this time, we use a pattern generator to supply left and
right eye input. This gives us precise electronic control over spatial shift between the left
and right eye inputs to the orientation selective neurons. We plan to replace the pattern
generator with silicon retinae in the future. The left and right eye inputs are processed by
two Gabor-type chips that contain retinotopic arrays of spiking neuron circuits whose spatial RF profiles are even and odd symmetric Gabor-type functions. The address filters
extract spikes from four neurons in each chip whose output spike rates represent the positive and negative components of the odd and even symmetric filters centered at a desired
retinal location. These spike trains are combined in the binocular combination block to
implement the summation in (1). The complex cell computation block performs the half
squaring nonlinearity and linear summation. In the following, we detail the design of the
major building blocks.
[Figure 2 graphic: a pattern generator drives left- and right-eye Gabor chips (mixed A/D AER chips); AER address filters (Xilinx CPLDs) extract the e+/e-/o+/o- channels at the selected retina address; a binocular combination stage (with phase encoding selection) produces B1+/B1-/B2+/B2-, which the complex cell computation (MCU) turns into the complex cell response.]
Fig. 2: System block diagram of a neuromorphic complex cell. The opposite-direction arrows represent the AER handshaking protocol. The three groups of four parallel arrows represent spiking channels. The labels "e/o" and "+/-" represent EVEN/ODD and ON/OFF. The top labels indicate the type of hardware used to implement each stage.
3.1 Gabor-type filtering chip
Images from each eye are passed to a Gabor-type filtering chip [9] that implements the
monocular filtering required by the simple cells. Given a spike rate encoded 32 x 64 pixel image (I_L or I_R), each chip computes outputs (m(c_L, φ_L, I_L) or m(c_R, φ_R, I_R)) corresponding to a 32 x 64 array of center positions and two phases, 0 and −π/2. All filters are designed to have the same gain, spatial frequency tuning and bandwidth. We refer to the φ = 0 filter as the EVEN symmetric filter and the φ = −π/2 filter as the ODD symmetric filter. Figure 3 shows the RF profile of the EVEN and ODD filters, which differ from a
Gabor function because the function that modulates the cosine function is not a Gaussian;
it decays faster at the origin and slower at the tails. This difference should not affect the
resulting binocular complex cell responses significantly. Qian and Zhu [5] show that the
binocular complex cell responses in the energy model are insensitive to the exact shape of
the modulating envelope.
The positive and negative components of each filter output are represented by a spike rate
on separate ON and OFF channels. For example, for the right eye at center position c_R, the EVEN-ON spike rate is proportional to [m(c_R, 0, I_R)]₊ and the EVEN-OFF spike rate to [−m(c_R, 0, I_R)]₊, where [·]₊ denotes half-wave rectification. Spikes are encoded on a single asynchronous digital bus using the
address event representation (AER) communication protocol. The AER protocol signals
the occurrence of a spike in the array by placing an address identifying the cell that spiked
on the bus [10].
Fig. 3: The measured RF profile of the EVEN and ODD symmetric filters at the center
pixel.
3.2 AER Address Filter
Each AER address filter extracts only those spikes corresponding to the four neurons
whose RF profiles are centered at a desired retinal location and demultiplexes the spikes
as voltage pulses on four separate wires. In our addressing scheme, every neuron is
assigned a unique X (column) and Y (row) address. As addresses appear on the AER bus,
two latches latch the row and column address of each spike, which are compared with the
row and column address of the desired retinal location, which is encoded on bits 1-6 of the
address. Bit 0 (the LSB) encodes the type of filter: EVEN/ODD on the row address and
ON/OFF on the column address. Once the filter detects a spike from the desired retinal
location, it generates a voltage pulse which is demultiplexed onto one of four output lines,
depending upon the LSB of the latched row and column address.
To avoid losing events, we minimize the time the AER address filter requires to process
each address by implementing it using a Xilinx XC9500 series Complex Programmable
Logic Device (CPLD). We chose this series because of its speed and flexibility. The block
delay in each macrocell is 7ns. The series supports in system programming, enabling rapid
debugging during system design. Because the AER protocol is asynchronous, we paid particular attention to the timing in the signal path to ensure that addresses are latched correctly and to avoid glitches that could be interpreted as output spikes.
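In software terms, the decode performed by each address filter can be sketched as follows (our illustration; the real filter is combinational CPLD logic, and the polarity of the type bits is our assumption):

```python
def address_filter(row_addr, col_addr, want_y, want_x):
    """Classify one latched AER address. Bits 1-6 carry the retinal location;
    bit 0 carries EVEN/ODD on the row address and ON/OFF on the column address.
    Returns 'e+', 'e-', 'o+' or 'o-' for spikes from the selected location,
    else None (the spike is ignored)."""
    if ((row_addr >> 1) & 0x3F) != want_y or ((col_addr >> 1) & 0x3F) != want_x:
        return None
    sym = 'e' if (row_addr & 1) == 0 else 'o'   # assumed: 0 = EVEN
    sign = '+' if (col_addr & 1) == 0 else '-'  # assumed: 0 = ON
    return sym + sign                           # one of four output lines
```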
3.3 Binocular combination block
The binocular combination block combines eight spike trains to implement the summation
operation in Eq. (1) for two phase quadrature binocular filters. To compute the two binocular filter outputs required for a zero disparity tuned cell, we first set the AER address filters so that they extract spikes from monocular neurons with the same RF centers in the
left and right eyes (Δc = 0). To compute the output of the first binocular filter B1, the
binocular combination block sums the outputs of the left and right eye EVEN filters by
merging spikes from the left and right EVEN-ON channels onto a positive output line,
B1+ (shown in Fig. 2), and merging spikes from the left and right EVEN-OFF channels
onto a negative output line, B1-. The difference between the spike rates on B1+ and B1- encodes the B1 filter output. However, the B1+ and B1- spike rates do not represent the
ON (positive half-wave rectified) and OFF (negative half-wave rectified) components of
the binocular filter outputs, since they may both be non-zero at the same time. To compute
the output of the second filter, B2, the binocular combination block merges spikes from
the left and right ODD channels similarly.
The system can also implement binocular filter outputs for neurons tuned to non-zero disparities. For position encoding, we change the relative addresses selected by the AER
address filters to set Δc ≠ 0, but leave the binocular combination block unchanged. If we
fix the center location of the right eye RF to the center column of the chip (32), we can
detect position disparities between -31 and 32 in unit pixel steps. For phase encoding, we
leave the AER address filters unchanged and alter the routing in the binocular combination block. Because the RF profiles of the Gabor-type chips have two phase values, altering the routing as shown in Table 1 results in four distinct binocular filters with monocular
filter phase shifts of Δφ = −π/2, 0, π/2 and π, which correspond to the tuned far, tuned excitatory, tuned near and tuned inhibitory disparity cells identified by Poggio et al. [11].
The binocular combination block uses the same type of Xilinx CPLD as the AER filter.
Inputs control the monocular phase shift of the resulting binocular filter by modifying the
routing. For simplicity, we implement the merge using inclusive OR gates without arbitration. Although simultaneous spikes on the left and right channels will be merged into a
single spike, the probability that this will happen is negligible, since the width of the voltage pulse that represents each spike (~32 ns) is much smaller than the inter-spike intervals, which are on the order of milliseconds (e.g., at a 1 kHz spike rate, the chance that a given spike overlaps another is roughly 32 ns / 1 ms ≈ 3×10⁻⁵).
Table 1: Signal combinations for phase disparity encoding. Each table entry represents the combination of right/left eye inputs combined in a binocular output line to achieve a desired phase shift of Δφ. We abbreviate EVEN/ODD by e/o and ON/OFF by +/-.

             Binocular output line
  Δφ       B1+      B1-      B2+      B2-
  −π/2    e+/o-    e-/o+    o+/e+    o-/e-
   0      e+/e+    e-/e-    o+/o+    o-/o-
   π/2    e+/o+    e-/o-    o+/e-    o-/e+
   π      e+/e-    e-/e+    o+/o-    o-/o+
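Read as a routing map, Table 1 assigns each binocular output line a right/left pair of monocular channels. A direct transcription (our sketch; an OR-gate merge is approximated as adding rates, valid while collisions are rare):

```python
import numpy as np

# For each selectable phase shift dphi, the (right, left) channels merged
# onto each binocular line, transcribed from Table 1.
ROUTING = {
    -np.pi / 2: {'B1+': ('e+', 'o-'), 'B1-': ('e-', 'o+'), 'B2+': ('o+', 'e+'), 'B2-': ('o-', 'e-')},
    0.0:        {'B1+': ('e+', 'e+'), 'B1-': ('e-', 'e-'), 'B2+': ('o+', 'o+'), 'B2-': ('o-', 'o-')},
    np.pi / 2:  {'B1+': ('e+', 'o+'), 'B1-': ('e-', 'o-'), 'B2+': ('o+', 'e-'), 'B2-': ('o-', 'e+')},
    np.pi:      {'B1+': ('e+', 'e-'), 'B1-': ('e-', 'e+'), 'B2+': ('o+', 'o-'), 'B2-': ('o-', 'o+')},
}

def binocular_rates(left, right, dphi):
    # left/right: dicts of spike rates on the 'e+', 'e-', 'o+', 'o-' channels.
    return {line: right[r] + left[l] for line, (r, l) in ROUTING[dphi].items()}
```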
3.4 Complex cell output
Since the spike rates at the four outputs of the binocular combination block are relatively
low, e.g. 10-1000Hz, we implement the final steps using an 8051 microcontroller (MCU)
running at 24 MHz. Integrators count the number of spikes from each channel in a fixed
time window, e.g. T = 40ms , to estimate the average spike rate on each of the four lines.
We generate the four binocular simple cell responses by positive and negative half-squaring the spike rate differences (B1+ − B1−) and (B2+ − B2−), and sum them to obtain the binocular complex cell output. The MCU computes one set of four simple cell and one
complex cell outputs every T seconds, where T is the time window of the integration.
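The MCU's final computation amounts to the following (hypothetical variable names; note that summing the positive and negative half-squared differences is algebraically just b1² + b2²):

```python
def complex_cell_output(n_b1p, n_b1m, n_b2p, n_b2m, T=0.040):
    # Spike counts over the integration window T (seconds) give rate estimates.
    b1 = (n_b1p - n_b1m) / T
    b2 = (n_b2p - n_b2m) / T
    # Four simple cells: positive and negative half-squaring of each difference.
    simple = (max(b1, 0) ** 2, max(-b1, 0) ** 2, max(b2, 0) ** 2, max(-b2, 0) ** 2)
    return sum(simple)  # equals b1**2 + b2**2
```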
4 RESULTS
We use a pattern generator to supply the left and right eye inputs, which gives us precise
control over the input disparity. In a loose biological analogy, we directly stimulate the
optic nerve. The pattern generator simultaneously excites a pair of pixels in the left and
right Gabor-type chips. The two pixels lie in the same row but are displaced by half the
input disparity to the right of the center pixel in the right chip and by half the input disparity to the left of the center pixel in the left chip. The integration time window was 40ms.
Figure 4(a) shows the response of binocular complex cells tuned to three different disparities by phase encoding. The AER address filters selected spikes from the retina locations
(32,16) in both chips. Consistent with theoretical predictions, the peaks of the non-zero
disparity tuned cells are approximately the same height, but smaller than the peak of the
zero disparity tuned filter because of the smaller size of the side peaks in the ODD filter
response in comparison with the center peak in the EVEN filter. Figure 4(b) shows the
response of binocular complex cells tuned to similar disparities by position encoding. The
negative-disparity tuned cell combines the outputs of pixels (33,16) in the left chip and
(31,16) in the right chip. The positive-disparity tuned cell combines the outputs of pixel
(31,16) in the left chip and pixel (33,16) in the right chip. The zero-disparity tuned cells
for position and phase encoding are identical. Theoretically, the position model should
result in three identical peaks that are displaced in disparity. However, the measurements
show a wide variation in the peak sizes. The responses of the phase-tuned neurons exhibit
better matching, because they were all computed from the same two sets of pixel outputs.
In contrast, the three position-tuned neurons combine the responses of the Gabor-type chip
at six different pixels.
(Figure 4 plots complex cell response, and its standard deviation as a percentage of the mean, versus input disparity in pixels.)
Decreasing the time over which we integrate the spike outputs of the binocular combinations stage results in faster disparity update. However, Figure 4(c) shows that this also
increases variability in the response, when measured as a percentage of the mean
response.
Fig. 4: (a) Response of three binocular complex cells tuned to three different disparities by
phase encoding. (b) Response of three binocular complex cells tuned to three different
disparities by position encoding. (c) Standard deviation of the response of zero disparity
complex cell expressed as a percentage of the mean response at zero disparity for two
integration windows of T = 40ms (solid line) and T = 20ms (dashed line). Statistics
computed over 80 samples.
Although they are nominally identical, the gain, damping (bandwidth), spatial frequency
and offset of neurons from different retinal locations on the same chip vary due to transistor mismatch in the circuits used to implement them. We performed a numerical sensitivity analysis on the effect of variation in these parameters on the complex cell responses, by
examining how much variations in them affected the locations at which the disparity tuning curves for neurons tuned to left and right disparities crossed the disparity tuning curve
for the neuron tuned to zero disparity. These two locations form decision boundaries
between near, zero and far disparities if we classify stimuli according to disparity tuned
neuron with the maximum response. We found that the variation in the distance between
these points varied much more than their centroid. Figure 5(a) shows the sensitivity coefficients for the distance between these points, where the sensitivity coefficient is defined as
the percentage variation in the distance per percentage variation in a RF parameter. We
consider the response to be robust to variations if the sensitivity coefficient is less than 1.
In most cases, we find that the position model is less robust than the phase model.
In addition, we characterized the variability in the RF parameters for neurons from different positions on the chip. We probed the response of seven individual spiking neurons to
different spatial impulse inputs and fitted parameterized Gabor-type functions to the
responses. We then computed the standard deviation in the parameters across the neurons
probed, which we express as a percentage of the mean value. Figure 5(b) shows that the
phase model is least sensitive to variations in the parameters that vary the most.
(Bar charts: sensitivity coefficients of the position and phase models with respect to the ON-OFF gain, damping, spatial frequency, and offset parameters; chip-to-chip variation of each parameter shown as a percentage standard deviation.)
Fig. 5: (a) Sensitivity of the phase and position models to variations in the RF parameters
of the neurons. (b) A comparison of the sensitivity of phase model to the variability in the
RF parameters. The line indicates the percentage standard deviation in the RF parameters.
Error bars indicate the 95% confidence interval. Solid bars show the sensitivity of the
phase model from (a).
5 CONCLUSION
We have replicated the disparity selectivity of complex cells in the visual cortex in a neuromorphic system based upon the disparity energy model. This system contains four silicon chips containing retinotopic arrays of neurons which communicate via the AER
communication protocol, as well as circuits that combine the outputs of these chips to generate the response of a model binocular complex cell. We exploit the capability of the AER protocol for point-to-point communication, as well as its ability to reroute spikes.
Our measurements indicate that our binocular complex cells are disparity selective and
that their selectivity can be adjusted through both position and phase encoding. However,
the relative responses of neurons tuned by phase encoding exhibit better matching than the
relative responses of neurons tuned by position encoding, because neurons tuned to different disparities by position encoding integrate outputs from different pixels while neurons
tuned by phase encoding integrate output from the same pixels.
This implementation is an initial step towards the development of a multi-chip neuromorphic system capable of extracting depth information about the visual environment using
silicon neurons with physiologically-based functionality. The next step will be to extend
the system from a single disparity tuned neuron to a set of retinotopic arrays of disparity
tuned neurons. In order to do this, we will develop a mixed analog-digital chip whose
architecture will be similar to that of the orientation tuned chip, which will combine the
outputs from left and right eye orientation-tuned chips to compute an array of neurons
tuned to the same disparity but different retinal locations. The tuned disparity can be controlled by address remapping, so additional copies of the same chip could represent neurons tuned to other disparities. This chip will increase the number of neurons we compute
simultaneously, as well as decreasing the power consumption required to compute each
neuron. In the current implementation, the digital circuits required to combine the monocular responses consume 1.2W. In contrast, the Gabor chips and their associated external
bias and interface circuits consume only 62mW, with only about 4mW required for each
Gabor chip. We expect the power consumption of the binocular combination chip to be
comparable. Computing the neuron outputs in parallel will enable us to investigate the
roles of additional processing steps such as pooling [5], [6] and normalization [12], [13].
Acknowledgements
This work was supported in part by the Hong Kong Research Grants Council under Grant
HKUST6218/01E. It was inspired by a project with Y. Miyawaki at the 2002 Telluride
Neuromorphic Workshop. The authors would like to thank K. A. Boahen for helpful discussions and for supplying the receiver board used in this work, and T. Choi for his assistance in building the system.
References
[1] Barlow, H. B., Blakemore, C., & Pettigrew, J. D. (1967) The neural mechanism of binocular depth discrimination. J. Physiol. Lond., 193, 327-342.
[2] Ohzawa, I., DeAngelis, G. C., & Freeman, R. D. (1990) Stereoscopic depth discrimination in the visual cortex: neurons ideally suited as disparity detectors. Science, 249, 1037-1041.
[3] Cumming, B. G. & Parker, A. J. (1997) Responses of primary visual cortical neurons to binocular disparity without depth perception. Nature, 389, 280-283.
[4] Anzai, A., Ohzawa, I., & Freeman, R. D. (1999a) Neural mechanisms for encoding binocular disparity: position vs. phase. J. Neurophysiol., 82, 874-890.
[5] Qian, N., & Zhu, Y. (1997) Physiological computation of binocular disparity. Vision Res., 37, 1811-1827.
[6] Fleet, D. J., Wagner, H., & Heeger, D. J. (1996) Neural encoding of binocular disparity: energy models, position shifts and phase shifts. Vision Res., 36, 1839-57.
[7] Anzai, A., Ohzawa, I., & Freeman, R. D. (1999b) Neural mechanisms for processing binocular information I. Simple cells. J. Neurophysiol., 82, 891-908.
[8] Anzai, A., Ohzawa, I., & Freeman, R. D. (1999c) Neural mechanisms for processing binocular information II. Complex cells. J. Neurophysiol., 82, 909-924.
[9] Choi, T. Y. W., Shi, B. E., & Boahen, K. (2003) An Orientation Selective 2D AER Transceiver. Proceedings of the IEEE Intl. Conf. on Circuits and Systems, 4, 800-803.
[10] Boahen, K. A. (2000) Point-to-point connectivity between neuromorphic chips using address events. IEEE Transactions on Circuits and Systems-II: Analog and Digital Signal Processing, 47, 416-434.
[11] Poggio, G. F., Motter, B. C., Squatrito, S., & Trotter, Y. (1985) Responses of neurons in visual cortex (V1 and V2) of the alert macaque to dynamic random-dot stereograms. Vision Research, 25, 397-406.
[12] Albrecht, D. G. & Geisler, W. S. (1991) Motion selectivity and the contrast response functions of simple cells in the visual cortex. Visual Neuroscience, 7, 531-546.
[13] Heeger, D. J. (1992) Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9, 181-197.
Ranking on Data Manifolds
Dengyong Zhou, Jason Weston, Arthur Gretton,
Olivier Bousquet, and Bernhard Schölkopf
Max Planck Institute for Biological Cybernetics, 72076 Tuebingen, Germany
{firstname.secondname}@tuebingen.mpg.de
Abstract
The Google search engine has enjoyed huge success with its web page
ranking algorithm, which exploits global, rather than local, hyperlink
structure of the web using random walks. Here we propose a simple
universal ranking algorithm for data lying in the Euclidean space, such
as text or image data. The core idea of our method is to rank the data
with respect to the intrinsic manifold structure collectively revealed by a
great amount of data. Encouraging experimental results from synthetic,
image, and text data illustrate the validity of our method.
1 Introduction
The Google search engine [2] accomplishes web page ranking using the PageRank algorithm,
which exploits the global, rather than local, hyperlink structure of the web [1]. Intuitively,
it can be thought of as modelling the behavior of a random surfer on the graph of the web,
who simply keeps clicking on successive links at random and also periodically jumps to
a random page. The web pages are ranked according to the stationary distribution of the
random walk. Empirical results show PageRank is superior to the naive ranking method, in
which the web pages are simply ranked according to the sum of inbound hyperlinks, and
accordingly only the local structure of the web is exploited.
Our interest here is in the situation where the objects to be ranked are represented as vectors
in Euclidean space, such as text or image data. Our goal is to rank the data with respect
to the intrinsic global manifold structure [6, 7] collectively revealed by a huge amount of
data. We believe for many real world data types this should be superior to a local method,
which rank data simply by pairwise Euclidean distances or inner products.
Let us consider a toy problem to explain our motivation. We are given a set of points
constructed in two moons pattern (Figure 1(a)). A query is given in the upper moon, and the
task is to rank the remaining points according to their relevances to the query. Intuitively,
the relevant degrees of points in the upper moon to the query should decrease along the
moon shape. This should also happen for the points in the lower moon. Furthermore, all
of the points in the upper moon should be more relevant to the query than the points in the
lower moon. If we rank the points with respect to the query simply by Euclidean distance,
then the left-most points in the lower moon will be more relevant to the query than the
right-most points in the upper moon (Figure 1(b)). Apparently this result is not consistent
with our intuition (Figure 1(c)).
We propose a simple universal ranking algorithm, which can exploit the intrinsic manifold
Figure 1: Ranking on the two moons pattern. The marker sizes are proportional to the
ranking in the last two figures. (a) toy data set with a single query; (b) ranking by the
Euclidean distances; (c) ideal ranking result we hope to obtain.
structure of data. This method is derived from our recent research on semi-supervised learning [8]. In fact the ranking problem can be viewed as an extreme case of semi-supervised
learning, in which only positive labeled points are available. An intuitive description of our
method is as follows. We first form a weighted network on the data, and assign a positive
ranking score to each query and zero to the remaining points which are ranked with respect
to the queries. All points then spread their ranking score to their nearby neighbors via the
weighted network. The spread process is repeated until a global stable state is achieved,
and all points except queries are ranked according to their final ranking scores.
The rest of the paper is organized as follows. Section 2 describes the ranking algorithm in
detail. Section 3 discusses the connections with PageRank. Section 4 further introduces a
variant of PageRank, which can rank the data with respect to the specific queries. Finally,
Section 5 presents experimental results on toy data, on digit image, and on text documents,
and Section 6 concludes this paper.
2 Algorithm
Given a set of points X = {x1, ..., xq, xq+1, ..., xn} ⊂ ℝ^m, the first q points are the queries and the rest are the points that we want to rank according to their relevances to the queries. Let d: X × X → ℝ denote a metric on X, such as Euclidean distance, which assigns each pair of points xi and xj a distance d(xi, xj). Let f: X → ℝ denote a ranking function which assigns to each point xi a ranking value fi. We can view f as a vector f = [f1, ..., fn]^T. We also define a vector y = [y1, ..., yn]^T, in which yi = 1 if xi is a query, and yi = 0 otherwise. If we have prior knowledge about the confidences of the queries, then we can assign different ranking scores to the queries proportional to their respective confidences.
1. Sort the pairwise distances among points in ascending order. Repeatedly connect the two points with an edge according to this order until a connected graph is obtained.

2. Form the affinity matrix W defined by Wij = exp[−d²(xi, xj)/(2σ²)] if there is an edge linking xi and xj. Note that Wii = 0 because there are no loops in the graph.

3. Symmetrically normalize W by S = D^{−1/2} W D^{−1/2}, in which D is the diagonal matrix with (i, i)-element equal to the sum of the i-th row of W.

4. Iterate f(t + 1) = αSf(t) + (1 − α)y until convergence, where α is a parameter in [0, 1).

5. Let fi* denote the limit of the sequence {fi(t)}. Rank each point xi according to its ranking score fi* (largest ranked first).
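For concreteness, here is a minimal NumPy transcription of steps 2–5 (a sketch under our own naming; the connected-graph edge mask E from step 1 is assumed given):

```python
import numpy as np

def manifold_rank(X, E, y, sigma=1.0, alpha=0.99, n_iter=100):
    """X: (n, m) data points; E: (n, n) boolean adjacency from step 1;
    y: (n,) query indicator vector. Returns the ranking scores f*."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # squared distances
    W = np.where(E, np.exp(-d2 / (2.0 * sigma ** 2)), 0.0)
    np.fill_diagonal(W, 0.0)                              # Wii = 0, no loops
    d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
    S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]     # D^{-1/2} W D^{-1/2}
    f = np.zeros(len(y))
    for _ in range(n_iter):                               # step 4
        f = alpha * S @ f + (1.0 - alpha) * y
    return f                                              # step 5: sort descending
```

Equivalently, the limit derived below can be computed in one shot as np.linalg.solve(np.eye(len(y)) - alpha * S, y), up to the scaling factor 1 − α.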
This iteration algorithm can be understood intuitively. First, a connected network is formed in the first step. The network is weighted in the second step, and the weights are symmetrically normalized in the third step. The normalization in the third step is necessary to prove the algorithm's convergence. In the fourth step, all points spread their ranking score to their neighbors via the weighted network. The spread process is repeated until a global stable state is achieved, and in the fifth step the points are ranked according to their final ranking scores. The parameter α specifies the relative contributions to the ranking scores from neighbors and the initial ranking scores. It is worth mentioning that self-reinforcement is avoided since the diagonal elements of the affinity matrix are set to zero in the second step. In addition, the information is spread symmetrically since S is a symmetric matrix.
Regarding the convergence of this algorithm, we have the following theorem:

Theorem 1 The sequence {f(t)} converges to f* = β(I − αS)^{−1} y, where β = 1 − α.

See also [8] for the rigorous proof. Here we only demonstrate how to obtain such a closed form expression. Suppose f(t) converges to f*. Substituting f* for f(t + 1) and f(t) in the iteration equation f(t + 1) = αSf(t) + (1 − α)y, we have

f* = αSf* + (1 − α)y,    (1)

which can be transformed into

(I − αS)f* = (1 − α)y.

Since (I − αS) is invertible, we have

f* = (1 − α)(I − αS)^{−1} y.

Clearly, the scaling factor β = 1 − α makes no contribution to our ranking task. Hence the closed form is equivalent to

f* = (I − αS)^{−1} y.    (2)

We can use this closed form to compute the ranking scores of points directly. In large-scale real-world problems, however, we prefer to use the iteration algorithm. Our experiments show that a few iterations are enough to yield high quality ranking results.
3 Connections with Google

Let G = (V, E) denote a directed graph with n vertices. Let W denote the n × n adjacency matrix, in which Wij = 1 if there is a link in E from vertex xi to vertex xj, and Wij = 0 otherwise. Note that W is possibly asymmetric. Define a random walk on G determined by the following transition probability matrix:

P = (1 − ε)U + εD^{−1}W,    (3)

where U is the matrix with all entries equal to 1/n. This can be interpreted as a probability ε of transition to an adjacent vertex, and a probability 1 − ε of jumping to any point on the graph uniformly at random. Then the ranking scores over V computed by PageRank are given by the stationary distribution π of the random walk.
In our case, we only consider graphs which are undirected and connected. Clearly, W is symmetric in this situation. If we also rank all points without queries using our method, as is done by Google, then we have the following theorem:

Theorem 2 For the task of ranking data represented by a connected and undirected graph without queries, f* and PageRank yield the same ranking list.

Proof. We first show that the stationary distribution π of the random walk used in Google is proportional to the vertex degree if the graph G is undirected and connected. Let 1 denote the 1 × n vector with all entries equal to 1. We have

1DP = 1D[(1 − ε)U + εD^{−1}W] = (1 − ε)1DU + ε1DD^{−1}W
    = (1 − ε)1D + ε1W = (1 − ε)1D + ε1D = 1D.

Let vol G denote the volume of G, which is given by the sum of vertex degrees. The stationary distribution is then

π = 1D / vol G.    (4)

Note that π does not depend on ε. Hence π is also the stationary distribution of the random walk determined by the transition probability matrix D^{−1}W.
Now we consider the ranking result given by our method in the situation without queries. The iteration equation in the fourth step of our method becomes

f(t + 1) = Sf(t).    (5)

A standard result [4] of linear algebra states that if f(0) is a vector not orthogonal to the principal eigenvector, then the sequence {f(t)} converges to the principal eigenvector of S. Let 1 denote the n × 1 vector with all entries equal to 1. Then

SD^{1/2}1 = D^{−1/2}WD^{−1/2}D^{1/2}1 = D^{−1/2}W1 = D^{−1/2}D1 = D^{1/2}1.

Further, noticing that the maximal eigenvalue of S is 1 [8], we know the principal eigenvector of S is D^{1/2}1. Hence

f* = D^{1/2}1.    (6)

Comparing (4) with (6), it is clear that f* and π give the same ranking list. This completes our proof.
4 Personalized Google

Although PageRank is designed to rank all points without respect to any query, it is easy to modify for query-based ranking problems. Let P = D^{−1}W. The ranking scores given by PageRank are the elements of the convergence solution π* of the iteration equation

π(t + 1) = P^T π(t).    (7)

By analogy with the algorithm in Section 2, we can add a query term on the right-hand side of (7) for the query-based ranking:

π(t + 1) = αP^T π(t) + (1 − α)y.    (8)

This can be viewed as the personalized version of PageRank. We can show that the sequence {π(t)} converges to π* = (1 − α)(I − αP^T)^{−1} y as before, which is equivalent to

π* = (I − αP^T)^{−1} y.    (9)
Now let us analyze the connection between (2) and (9). Note that (9) can be transformed into

π* = [(D − αW)D^{−1}]^{−1} y = D(D − αW)^{−1} y.

In addition, f* can be represented as

f* = [D^{−1/2}(D − αW)D^{−1/2}]^{−1} y = D^{1/2}(D − αW)^{−1} D^{1/2} y.    (10)

Hence the main difference between π* and f* is that in the latter the initial ranking score yi of each query xi is weighted with respect to its degree.

The above observation motivates us to propose a more general personalized PageRank algorithm,

π(t + 1) = αP^T π(t) + (1 − α)D^k y,    (11)

in which we assign different importance to queries with respect to their degree. The closed form of (11) is given by

π* = (I − αP^T)^{−1} D^k y.    (12)

If k = 0, (12) is just (9); and if k = 1, we have

π* = (I − αP^T)^{−1} Dy = D(D − αW)^{−1} Dy,

which is almost the same as (10).
We can also use (12) for classification problems without any modification, besides setting the elements of y to 1 or −1 corresponding to the positive or negative classes of the labeled points, and 0 for the unlabeled data. This shows that the ranking and classification problems are closely related.
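A direct computation of (12) is straightforward (our sketch, reusing the notation above):

```python
import numpy as np

def personalized_pagerank(W, y, alpha=0.99, k=0):
    # pi* = (I - alpha P^T)^{-1} D^k y, with P = D^{-1} W.
    d = W.sum(axis=1)                      # vertex degrees
    P = W / d[:, None]
    n = len(y)
    return np.linalg.solve(np.eye(n) - alpha * P.T, (d ** k) * y)
```

Setting k = 0 recovers (9), while k = 1 weights each query's initial score by its degree, paralleling (10).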
We can do a similar analysis of the relations to Kleinberg's HITS [5], which is another
popular web page ranking algorithm. The basic idea of this method is also to iteratively
spread the ranking scores via the existing web graph. We omit further discussion of this
method due to lack of space.
5 Experiments

We validate our method using a toy problem and two real-world domains: image and text. In the following experiments we use the closed form expression, with α fixed at 0.99.
As a true labeling is known in these problems, i.e. the image and document categories
(which is not true in real-world ranking problems), we can compute the ranking error using
the Receiver Operator Characteristic (ROC) score [3] to evaluate ranking algorithms. The
returned score is between 0 and 1, a score of 1 indicating a perfect ranking.
5.1 Toy Problem
In this experiment we considered the toy ranking problem mentioned in the introduction
section. The connected graph described in the first step of our algorithm is shown in Figure
2(a). The ranking scores with different time steps: t = 5, 10, 50, 100 are shown in Figures
2(b)-(e). Note that the scores on each moon decrease along the moon shape away from the
query, and the scores on the moon containing the query point are larger than on the other
moon. Ranking by Euclidean distance is shown in Figure 2(f), which fails to capture the
two moons structure.
It is worth mentioning that simply ranking the data according to the shortest paths [7] on
the graph does not work well. In particular, we draw the reader?s attention to the long edge
in Figure 2(a) which links the two moons. It appears that shortest paths are sensitive to
small changes in the graph. The robust solution is to assemble all paths between two points, and weight them by a decreasing factor. This is exactly what we have done. Note that the closed form can be expanded as f* = Σ_{i=0}^{∞} α^i S^i y.
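This expansion is easy to verify numerically on a toy affinity matrix (our sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.random((20, 20))
W = (A + A.T) * (1.0 - np.eye(20))        # toy symmetric affinities, zero diagonal
d_inv_sqrt = 1.0 / np.sqrt(W.sum(axis=1))
S = d_inv_sqrt[:, None] * W * d_inv_sqrt[None, :]
y = np.zeros(20); y[0] = 1.0; alpha = 0.9
f_series = sum(alpha ** i * np.linalg.matrix_power(S, i) @ y for i in range(200))
f_closed = np.linalg.solve(np.eye(20) - alpha * S, y)
assert np.allclose(f_series, f_closed, atol=1e-6)
```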
5.2 Image Ranking
In this experiment we address a task of ranking on the USPS handwritten 16x16 digits
dataset. We rank digits from 1 to 6 in our experiments. There are 1269, 929, 824, 852, 716
and 834 examples for each class, for a total of 5424 examples.
Figure 2: Ranking on the pattern of two moons. (a) connected graph; (b)-(e) ranking with
the different time steps: t = 5, 10, 50, 100; (f) ranking by Euclidean distance.
(Six panels, query digits 1 through 6: ROC score versus number of queries, for manifold ranking and the Euclidean distance baseline.)
Figure 3: ROC on USPS for queries from digits 1 to 6. Note that these experimental results also provide indirect proof of the intrinsic manifold structure in USPS.
Figure 4: Ranking digits on USPS. The top-left digit in each panel is the query. The left
panel shows the top 99 by the manifold ranking; and the right panel shows the top 99 by
the Euclidean distance based ranking. Note that there are many more 2s with knots in the
right panel.
We randomly select examples from one class of digits to be the query set over 30 trials, and then rank the remaining digits with respect to these sets. We use an RBF kernel with width σ = 1.25 to construct the affinity matrix W, but the diagonal elements are set to zero. The Euclidean distance based ranking method is used as the baseline: given a query set {xs} (s ∈ S), the highest ranking is given to the point x with the lowest score min_{s∈S} ‖x − xs‖.
The results, measured as ROC scores, are summarized in Figure 3; each plot corresponds
to a different query class, from digit one to six respectively. Our algorithm is comparable
to the baseline when a digit 1 is the query. For the other digits, however, our algorithm
significantly outperforms the baseline. This experimental result also provides indirect proof
of the underlying manifold structure in the USPS digit dataset [6, 7].
The top ranked 99 images obtained by our algorithm and Euclidean distance, with a random
digit 2 as the query, are shown in Figure 4. The top-left digit in each panel is the query.
Note that there are some 3s in the right panel. Furthermore, there are many curly 2s in
the right panel, which do not match well with the query: the 2s in the left panel are more
similar to the query than the 2s in the right panel. This subtle superiority makes a great
deal of sense in the real-world ranking task, in which users are only interested in very few
leading ranking results. The ROC measure is too simple to reflect this subtle superiority
however.
5.3 Text Ranking
In this experiment, we investigate the task of text ranking using the 20-newsgroups dataset.
We choose the topic rec which contains autos, motorcycles, baseball and hockey from the
version 20-news-18828.
The articles are processed by the Rainbow software package with the following options:
(1) passing all words through the Porter stemmer before counting them; (2) tossing out
any token which is on the stoplist of the SMART system; (3) skipping any headers; (4)
ignoring words that occur in 5 or fewer documents. No further preprocessing was done.
Removing the empty documents, we obtain 3970 document vectors in a 8014-dimensional
space. Finally the documents are normalized into TFIDF representation.
We use the ranking method based on normalized inner product as the baseline. The affinity
matrix W is also constructed by inner product, i.e. linear kernel. The ROC scores for 100
randomly selected queries for each class are given in Figure 5.
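The preprocessing and affinity construction can be sketched as follows (our illustration of one common TFIDF variant; the exact weighting produced by the Rainbow package may differ):

```python
import numpy as np

def tfidf_affinity(counts):
    # counts: (n_docs, n_terms) raw term counts after stemming/stoplisting
    tf = counts / np.maximum(counts.sum(axis=1, keepdims=True), 1)
    idf = np.log(counts.shape[0] / np.maximum((counts > 0).sum(axis=0), 1))
    X = tf * idf
    X /= np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    return X @ X.T   # normalized inner products; zero the diagonal for W
```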
(Four scatter panels, (a) autos, (b) motorcycles, (c) baseball, (d) hockey: ROC of the inner product baseline versus ROC of manifold ranking.)

Figure 5: ROC score scatter plots of 100 random queries from the categories autos, motorcycles, baseball and hockey contained in the 20-newsgroups dataset.
6 Conclusion

Future research should address model selection. Potentially, if one were given a small labeled set or a query set of size greater than one, one could use standard cross-validation techniques. In addition, it may be possible to look to the theory of stability of algorithms to
choose appropriate hyperparameters. There are also a number of possible extensions to
the approach. For example one could implement an iterative feedback framework: as the
user specifies positive feedback this can be used to extend the query set and improve the
ranking output. Finally, and most importantly, we are interested in applying this algorithm
to wide-ranging real-world problems.
References
[1] R. Albert, H. Jeong, and A.-L. Barabási. Diameter of the world wide web. Nature, 401:130–131, 1999.
[2] S. Brin and L. Page. The anatomy of a large scale hypertextual web search engine. In Proc. 7th International World Wide Web Conf., 1998.
[3] R. Duda, P. Hart, and D. Stork. Pattern Classification. Wiley-Interscience, 2nd edition, 2000.
[4] G. H. Golub and C. F. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 1989.
[5] J. Kleinberg. Authoritative sources in a hyperlinked environment. JACM, 46(5):604–632, 1999.
[6] S. T. Roweis and L. K. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.
[7] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.
[8] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In 18th Annual Conf. on Neural Information Processing Systems, 2003.
Distributed Optimization in Adaptive Networks
Ciamac C. Moallemi
Electrical Engineering
Stanford University
Stanford, CA 94305
[email protected]
Benjamin Van Roy
Management Science and Engineering
and Electrical Engineering
Stanford University
Stanford, CA 94305
[email protected]
Abstract
We develop a protocol for optimizing dynamic behavior of a network
of simple electronic components, such as a sensor network, an ad hoc
network of mobile devices, or a network of communication switches.
This protocol requires only local communication and simple computations which are distributed among devices. The protocol is scalable to
large networks. As a motivating example, we discuss a problem involving optimization of power consumption, delay, and buffer overflow in a
sensor network.
Our approach builds on policy gradient methods for optimization of
Markov decision processes. The protocol can be viewed as an extension
of policy gradient methods to a context involving a team of agents optimizing aggregate performance through asynchronous distributed communication and computation. We establish that the dynamics of the protocol approximate the solution to an ordinary differential equation that
follows the gradient of the performance objective.
1 Introduction
This paper is motivated by the potential of policy gradient methods as a general approach
to designing simple scalable distributed optimization protocols for networks of electronic
devices. We offer a general framework for such protocols that builds on ideas from the policy gradient literature. We also explore a specific example involving a network of sensors
that aggregates data. In this context, we propose a distributed optimization protocol that
minimizes power consumption, delay, and buffer overflow.
The proposed approach for designing protocols based on policy gradient methods comprises one contribution of this paper. In addition, this paper offers fundamental contributions to the policy gradient literature. In particular, the kind of protocol we propose can be
viewed as extending policy gradient methods to a context involving a team of agents optimizing system behavior through asynchronous distributed computation and parsimonious
local communication. Our main theoretical contribution is to show that the dynamics of
our protocol approximate the solution to an ordinary differential equation that follows the
gradient of the performance objective.
2 A General Formulation
Consider a network consisting of a set of components V = {1, . . . , n}. Associated with
this network is a discrete-time dynamical system with a finite state space W. Denote the
state of the system at time k by w(k), for k = 0, 1, 2, . . .. There are n subsets W1 , . . . , Wn
of W, each consisting of states associated with events at component i. Note that these
subsets need not be mutually exclusive or totally exhaustive. At the kth epoch, there are
n control actions a1(k) ∈ A1, . . ., an(k) ∈ An, where each Ai is a finite set of possible actions that can be taken by component i. We sometimes write these control actions in vector form a(k) ∈ A = A1 × ⋯ × An. The actions are governed by a set of policies μ1^{θ1}, . . ., μn^{θn}, parameterized by vectors θ1 ∈ ℝ^{N1}, . . ., θn ∈ ℝ^{Nn}. Each ith action process only transitions when the state w(k) transitions to an element of Wi. At the time of transition, the probability that ai(k) becomes any ai ∈ Ai is given by μi^{θi}(ai|w(k)).

The state transitions depend on the prior state and action vector. In particular, let P(w0, a0, w) be a transition kernel defining the probability of state w given prior state w0 and action a0. Letting θ = (θ1, . . ., θn), we have

Pr{w(k) = w, a(k) = a | w(k − 1) = w0, a(k − 1) = a0, θ}
    = P(w0, a0, w) ∏_{i: w∈Wi} μi^{θi}(ai|w) ∏_{i: w∉Wi} 1{a0i = ai}.

Define Fk to be the σ-algebra generated by {(w(ℓ), a(ℓ)) | ℓ = 1, . . ., k}.
While the system is in state w ∈ W and action a ∈ A is applied, each component i receives a reward ri(w, a). The average reward received by the network is

r(w, a) = (1/n) Σ_{i=1}^{n} ri(w, a).
Assumption 1. For every θ, the Markov chain w(k) is ergodic (aperiodic, irreducible).

Given Assumption 1, for each fixed θ, there is a well-defined long-term average reward

λ(θ) = lim_{K→∞} (1/K) E[ Σ_{k=0}^{K−1} r(w(k), a(k)) ].
We will consider a stochastic approximation iteration

θi(k + 1) = θi(k) + εΔi(k).    (1)

Here, ε > 0 is a constant step size and Δi(k) is a noisy estimate of the gradient ∇_{θi}λ(θ(k)) computed at component i based on the component's historically observed states, actions, and rewards, in addition to communication with other components. Our goal is to develop an estimator Δi(k) that can be used in an adaptive, asynchronous, and decentralized context, and to establish the convergence of the resulting stochastic approximation scheme.
Our approach builds on policy gradient algorithms that have been proposed in recent years ([5, 7, 8, 3, 4, 2]). As a starting point, consider a gradient estimation method that is a decentralized variation of the OLPOMDP algorithm of [3, 4, 1]. In this algorithm, each component i maintains and updates an eligibility vector zi^β(k) ∈ ℝ^{Ni}, defined by

zi^β(k) = Σ_{ℓ=0}^{k} β^{k−ℓ} ( ∇_{θi} μi^{θi(ℓ)}(ai(ℓ)|w(ℓ)) / μi^{θi(ℓ)}(ai(ℓ)|w(ℓ)) ) 1{w(ℓ)∈Wi},    (2)

for some β ∈ (0, 1). The algorithm generates an estimate Δ̂i(k) = r(w(k), a(k)) zi^β(k) of the local gradient ∇_{θi}λ(θ(k)). Note that while the eligibility vector zi^β(k) can be computed using only local information, the gradient estimate Δ̂i(k) cannot be computed without knowledge of the global reward r(w(k), a(k)) at each time. In a fully decentralized environment, where components only have knowledge of their local rewards, this algorithm cannot be used.
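In pseudocode terms, the baseline update at component i reads as follows (a schematic sketch; the policy object and its score function ∇_θ log μ are our stand-ins):

```python
def olpomdp_step(z_i, theta_i, w, a_i, r_global, beta, eps, policy_i, event_at_i):
    """One step at component i. policy_i.score(theta, a, w) is assumed to
    return grad_theta log mu_i^theta(a | w), i.e. grad mu / mu as in (2).
    r_global is the network-average reward, which a fully decentralized
    component does not actually observe -- hence the protocol below."""
    z_i = beta * z_i
    if event_at_i:                     # w(k) entered W_i
        z_i = z_i + policy_i.score(theta_i, a_i, w)
    delta_i = r_global * z_i           # gradient estimate
    theta_i = theta_i + eps * delta_i  # stochastic approximation step (1)
    return z_i, theta_i
```

The protocol developed next replaces the unavailable global reward r_global with reward shares that arrive over time from across the network.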
In this paper, we present a simple scalable distributed protocol through which rewards
occurring locally at each node are communicated over time across the network and gradient
estimates are generated at each node based on local information. A fundamental issue
this raises is that rewards may incur large delays before being communicated across the
network. Moreover, these delays may be random and may correlated with the underlying
events that occur in operation of the network. We address this issue and establish conditions
for convergence. Another feature of the protocol is that it is completely decentralized
? there is no central processor that aggregates and disseminates rewards. As such, the
protocol is robust to isolated changes or failures in the network. In addition to design of the
protocol, a significant contribution is in the protocol?s analysis, which we believe requires
new ideas beyond what has been employed in the prior policy gradient literature.
3 A General Framework for Protocols

We will make the following assumption regarding the policies, which is common in the policy gradient literature ([7, 8, 3, 4, 2]).

Assumption 2. For all i and every w ∈ Wi, ai ∈ Ai, μi^{θi}(ai|w) is a continuously differentiable function of θi. Further, for every i, there exists a bounded function Li(w, ai, θi) such that for all w ∈ Wi, ai ∈ Ai, ∇_{θi}μi^{θi}(ai|w) = μi^{θi}(ai|w) Li(w, ai, θi).

The latter part of the assumption is satisfied, for example, if there exists a constant δ > 0 such that for each i, w ∈ Wi, ai ∈ Ai, either μi^{θi}(ai|w) = 0 for every θi or μi^{θi}(ai|w) ≥ δ for all θi.
Consider the following gradient estimator:

Δi(k) = zi^β(k) · (1/n) Σ_{j=1}^{n} Σ_{ℓ=0}^{k} d^γ_{ij}(ℓ, k) rj(ℓ),    (3)

where we use the shorthand rj(ℓ) = rj(w(ℓ), a(ℓ)). Here, the random variables {d^γ_{ij}(ℓ, k)}, with parameter γ ∈ (0, 1), represent an arrival process describing the communication of rewards across the network. Indeed, d^γ_{ij}(ℓ, k) is the fraction of the reward rj(ℓ) at component j that is learned by component i at time k ≥ ℓ. We will assume the arrival process satisfies the following conditions.
Assumption 3. For each i, j, ℓ, and γ ∈ (0, 1), the process {d^γ_{ji}(ℓ, k) | k = ℓ, ℓ + 1, ℓ + 2, . . .} satisfies:

1. d^γ_{ji}(ℓ, k) is Fk-measurable.

2. There exists a scalar ρ ∈ (0, 1) and a random variable c_ℓ such that for all k ≥ ℓ,

| d^γ_{ji}(ℓ, k) / ((1 − γ)γ^{k−ℓ}) − 1 | < c_ℓ ρ^{k−ℓ},

with probability 1. Further, we require that the distribution of c_ℓ given F_ℓ depend only on (w(ℓ), a(ℓ)), and that there exist a constant c̄ such that E[c_ℓ | w(ℓ) = w, a(ℓ) = a] < c̄ < ∞, with probability 1 for all initial conditions w ∈ W and a ∈ A.

3. The distribution of {d^γ_{ji}(ℓ, k) | k = ℓ, ℓ + 1, . . .} given F_ℓ depends only on w(ℓ) and a(ℓ).
The following result, proved in our appendix [9], establishes the convergence of the long-term sample averages of $\lambda_i(k)$ of the form (3) to an estimate of the gradient. This type of convergence is central to the convergence of the stochastic approximation iteration (1).
Theorem 1. Holding $\theta$ fixed, the limit
$$\tilde\nabla^{\alpha\beta}_{\theta_i} \lambda(\theta) = \lim_{K \to \infty} \frac{1}{K}\, \mathrm{E}\left[ \sum_{k=0}^{K-1} \lambda_i(k) \right]$$
exists. Further,
$$\lim_{\beta \uparrow 1}\, \limsup_{\alpha \uparrow 1}\, \left\| \tilde\nabla^{\alpha\beta}_{\theta_i} \lambda(\theta) - \nabla_{\theta_i} \lambda(\theta) \right\| = 0.$$
4  Example: A Sensor Network
In this section, we present a model of a wireless network of sensors that gathers and communicates data to a central base station. Our example is motivated by issues arising in the
development of sensor network technology being carried out by commercial producers of
electronic devices. However, we will not take into account the many complexities associated with real sensor networks. Rather, our objective is to pose a simplified model that
motivates and provides a context for discussion of our distributed optimization protocol.
4.1  System Description
Consider a network of n sensors and a central base station. Each sensor gathers packets of
data through observation of its environment, and these packets of data are relayed through
the network to the base station via multi-hop wireless communication. Each sensor retains
a queue of packets, each obtained either through sensing or via transmission from another
sensor. Packets in a queue are indistinguishable: each is of equal size and must be transferred to the central base station. We take the state of a sensor to be the number of packets in the queue and denote the state of the $i$th sensor at time $k$ by $x_i(k)$. The number of packets in a queue cannot exceed a finite buffer size, which we denote by $\bar{x}$.
A number of triggering events occur at any given device. These include (1) packetizing of an observation, (2) reception of a packet from another sensor, (3) transmission of a packet to another sensor, (4) awakening from a period of sleep, (5) termination of a period of attempted reception, and (6) termination of a period of attempted transmission. At the time of a triggering event, the sensor must decide on its next action. Possible actions include (1) sleep, (2) attempt transmission, and (3) attempt reception. When the buffer is full, options are limited to (1) and (2). When the buffer is empty, options are limited to (1) and (3). The action taken by the $i$th sensor at time $k$ is denoted by $a_i(k)$.
The base station will be thought of as a sensor that has an infinite buffer and perpetually
attempts reception. For each i, there is a set N(i) of entities with which the ith sensor can
directly communicate. If the ith sensor is attempting transmission of a packet and there
is at least one element of N(i) that is simultaneously attempting reception and is closer to
the base station than component i, the packet is transferred to the queue of that element. If
there are multiple such elements, one of them is chosen randomly. Note that if among the
elements of N(i) that are attempting reception, all are further away from the base station
than component i, no packet is transmitted.
Observations are made and packetized by each sensor at random times. If a sensor's buffer
is not full when an observation is packetized, an element is added to the queue. Otherwise,
the packet is dropped from the system.
4.2  Control Policies and Objective
Every sensor employs a control policy that selects an action based on its queue length
each time a triggering event occurs. The action is maintained until occurrence of the next
triggering event. The $i$th sensor's control policy is parameterized by a vector $\theta_i \in \mathbb{R}^2$. Given $\theta_i$, at an event time, if the $i$th sensor has a non-empty queue, it chooses to transmit with probability $\theta_{i1}$. If the $i$th sensor does not transmit and its queue is not full, it chooses to receive with probability $\theta_{i2}$. If the sensor does not transmit or receive, then it sleeps. In order to satisfy Assumption 2, we constrain $\theta_{i1}$ and $\theta_{i2}$ to lie in an interval $[\theta_\ell, \theta_h]$, where $0 < \theta_\ell < \theta_h < 1$.
Assume that each sensor has a finite power supply. In order to guarantee a minimum lifespan for the network, we will require that each sensor sleeps at least a fraction $f_s$ of the time. This is enforced by considering a time window of length $T_s$. If, at any given time, a sensor has not slept for a total fraction of at least $f_s$ of the preceding time $T_s$, it is forced to sleep and hence not allowed to transmit or receive.
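A minimal sketch of this randomized policy (function and variable names are assumed for illustration): it samples one action from the queue length, the parameters $(\theta_{i1}, \theta_{i2})$, and the duty-cycle constraint.

```python
import random

def choose_action(queue_len, theta, buffer_size, must_sleep):
    """Sample an action per the parameterized policy: transmit with
    probability theta[0] if the queue is non-empty; otherwise receive
    with probability theta[1] if the buffer is not full; otherwise sleep.
    `must_sleep` enforces the constraint that the sensor has slept a
    fraction f_s of the preceding window T_s."""
    if must_sleep:
        return "sleep"
    if queue_len > 0 and random.random() < theta[0]:
        return "transmit"
    if queue_len < buffer_size and random.random() < theta[1]:
        return "receive"
    return "sleep"

# Example: a sensor with 3 queued packets and theta in [0.05, 0.95]^2.
print(choose_action(queue_len=3, theta=(0.6, 0.3), buffer_size=20, must_sleep=False))
```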
The objective is to minimize a weighted sum of the average delay and average number of dropped packets per unit of time. Delay can be thought of as the amount of time a packet spends in the network before arriving at the base station. Hence, the objective is:
$$\max_{\theta_1,\ldots,\theta_n}\, \limsup_{K \to \infty}\, -\frac{1}{K} \sum_{k=0}^{K-1} \frac{1}{n} \sum_{i=1}^{n} \left( x_i(k) + \xi\, D_i(k) \right),$$
where $D_i(k)$ is the number of packets dropped by sensor $i$ at time $k$, and $\xi$ is a weight reflecting the relative importance of delay and dropped packets.
5  Distributed Optimization Protocol
We now describe a simple protocol by which components of the network can communicate rewards, in a fashion that satisfies the requirements of Theorem 1 and hence will produce good gradient estimates. This protocol communicates the rewards across the network over time using a distributed averaging procedure.
In order to motivate our protocol, consider a different problem. Imagine each component $i$ in the network is given a real value $R_i$. Our goal is to design an asynchronous distributed protocol through which each node will obtain the average $\bar{R} = \sum_{i=1}^{n} R_i / n$. To do this, define the vector $Y(0) \in \mathbb{R}^n$ by $Y_i(0) = R_i$ for all $i$. For each edge $(i,j)$, define a function $Q^{(i,j)} : \mathbb{R}^n \mapsto \mathbb{R}^n$ by
$$Q^{(i,j)}_\ell(Y) = \begin{cases} \dfrac{Y_i + Y_j}{2} & \text{if } \ell \in \{i, j\}, \\[4pt] Y_\ell & \text{otherwise.} \end{cases}$$
At each time $k$, choose an edge $(i,j)$, and set $Y(k+1) = Q^{(i,j)}(Y(k))$. If the graph is connected and every edge is sampled infinitely often, then $\lim_{k\to\infty} Y(k) = \bar{Y}$, where $\bar{Y}_i = \bar{R}$. To see this, note that the operators $Q^{(i,j)}$ preserve the average value of the vector, hence $\sum_{i=1}^{n} Y_i(k)/n = \bar{R}$. Further, for any $k$, either $Y(k+1) = Y(k)$ or $\|Y(k+1) - \bar{Y}\| < \|Y(k) - \bar{Y}\|$. Further, $\bar{Y}$ is the unique vector with average value $\bar{R}$ that is a fixed point for all operators $Q^{(i,j)}$. Hence, as long as the graph is connected and each edge is sampled infinitely often, $Y_i(k) \to \bar{R}$ as $k \to \infty$ and the components agree to the common average $\bar{R}$.
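The following sketch (an illustrative toy, with an assumed line graph) implements the pairwise operator $Q^{(i,j)}$ and checks numerically that random edge averaging drives every entry of $Y$ to $\bar{R}$:

```python
import random

def pairwise_average(Y, i, j):
    """The operator Q^{(i,j)}: replace Y_i and Y_j by their mean; all
    other entries are unchanged. Note the sum of Y is preserved."""
    Y = list(Y)
    Y[i] = Y[j] = (Y[i] + Y[j]) / 2.0
    return Y

R = [1.0, 5.0, 2.0, 8.0]            # initial values Y_i(0) = R_i
edges = [(0, 1), (1, 2), (2, 3)]    # a connected line graph
Y = list(R)
for _ in range(2000):               # every edge is sampled many times
    Y = pairwise_average(Y, *random.choice(edges))
print(Y, sum(R) / len(R))           # each Y_i is close to R_bar = 4.0
```

Because each $Q^{(i,j)}$ preserves the sum of the entries, the common limit must be $\bar{R}$.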
In the context of our distributed optimization protocol, we will assume that each component $i$ maintains a scalar value $Y_i(k)$ at time $k$ representing an estimate of the global reward. We will define a structure by which components communicate. Define $E$ to be the set of edges along which communication can occur. For an ordered set of distinct edges $S = \left( (i_1, j_1), \ldots, (i_{|S|}, j_{|S|}) \right)$, define a set $W_S \subset W$. Let $\Phi(E)$ be the set of all possible ordered sets of disjoint edges $S$, including the empty set. We will assume that the sets $\{W_S \,|\, S \in \Phi(E)\}$ are disjoint and together form a partition of $W$.
If $w(k) \in W_S$ for some set $S$, we will assume that the components along the edges in $S$ communicate in the order specified by $S$. Define $Q_S = Q^{(i_{|S|}, j_{|S|})} \circ \cdots \circ Q^{(i_1, j_1)}$, where the terms in the composition are taken in the order specified by $S$. Define $R(k) = (r_1(k), \ldots, r_n(k))$ to be the vector of rewards occurring at time $k$. The update rule for the vector $Y(k)$ is given by $Y(k+1) = R(k+1) + \alpha\, Q_{S(k+1)} Y(k)$, where
$$Q_{S(k+1)} = \sum_{S \in \Phi(E)} \mathbf{1}\{w(k+1) \in W_S\}\, Q_S.$$
Let $\bar{E} = \{(i,j) \mid (i,j) \in S \text{ for some } S \text{ with } W_S \neq \emptyset\}$. We will make the following assumption.
Assumption 4. The graph $(V, \bar{E})$ is connected.
Since the process (w(k), a(k)) is aperiodic and irreducible (Assumption 1), this assumption
guarantees that every edge on a connected subset of edges is sampled infinitely often.
Policy parameters are updated at each component according to the rule:
$$\theta_i(k+1) = \theta_i(k) + \epsilon\, z_i^\beta(k)\, (1-\alpha)\, Y_i(k). \qquad (4)$$
In relation to equations (1) and (3), we have
$$d^\alpha_{ji}(\ell, k) = n\, (1-\alpha)\, \alpha^{k-\ell} \left[ \tilde{Q}(\ell, k) \right]_{ij}, \qquad (5)$$
where $\tilde{Q}(\ell, k) = Q_{S(k-1)} \circ \cdots \circ Q_{S(\ell)}$.
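As a sketch of one step of this update in the same global (matrix) view, with assumed names and an assumed synchronous clock: $Q_S$ is applied as the composition of pairwise averages over the disjoint edges in $S$, after which $Y$ and the parameters are advanced per (4).

```python
import numpy as np

def protocol_step(Y, R_next, S, theta, z, alpha, epsilon):
    """One step of Y(k+1) = R(k+1) + alpha * Q_S Y(k), followed by the
    parameter update theta_i += epsilon * z_i * (1 - alpha) * Y_i of (4).
    S is an ordered list of disjoint edges (i, j); z[i] is component i's
    eligibility vector. Purely illustrative, not the asynchronous code."""
    Y = np.array(Y, dtype=float)
    for (i, j) in S:                          # apply Q^{(i,j)} in order
        Y[i] = Y[j] = (Y[i] + Y[j]) / 2.0
    Y = np.asarray(R_next, dtype=float) + alpha * Y
    theta = [th + epsilon * (1.0 - alpha) * Y[i] * z[i]
             for i, th in enumerate(theta)]
    return Y, theta
```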
The following theorem, which relies on a general stochastic approximation result from [6] together with custom analysis available in our appendix [9], establishes the convergence of the distributed stochastic iteration method defined by (4).
Theorem 2. For each $\epsilon > 0$, define $\{\theta^\epsilon(k)\,|\,k = 0, 1, \ldots\}$ as the result of the stochastic approximation iteration (4) with the fixed value of $\epsilon$. Assume the set $\{\theta^\epsilon(k)\,|\,k, \epsilon\}$ is bounded. Define the continuous time interpolation $\bar\theta^\epsilon(t)$ by setting $\bar\theta^\epsilon(t) = \theta^\epsilon(k)$ for $t \in [k\epsilon, k\epsilon + \epsilon)$. Then, for any sequence of processes $\{\bar\theta^\epsilon(t)\,|\,\epsilon > 0\}$ there exists a subsequence that weakly converges to $\tilde\theta(t)$ as $\epsilon \downarrow 0$, where $\tilde\theta(t)$ is a solution to the ordinary differential equation
$$\dot{\tilde\theta}(t) = \tilde\nabla^{\alpha\beta}_{\theta}\, \lambda(\tilde\theta(t)). \qquad (6)$$
Further, define $L$ to be the set of limit points of (6), and for a $\delta > 0$, $N_\delta(L)$ to be a neighborhood of radius $\delta$ about $L$. The fraction of time that $\bar\theta^\epsilon(t)$ spends in $N_\delta(L)$ over the time interval $[0, T]$ goes to 1 in probability as $\epsilon \downarrow 0$ and $T \to \infty$.
Note that since we are using a constant step-size $\epsilon$, this type of weak convergence is the strongest one would expect. The parameters will typically oscillate in the neighborhood of a limit point, and only weak convergence to a distribution centered around a limit point can be established. An alternative would be to use a decreasing step size $\epsilon(k) \downarrow 0$ in (4). In such instances, probability 1 convergence to a local optimum can often be established. However, with decreasing step sizes, the adaptation of parameters becomes very slow as $\epsilon(k)$ decays. We expect our protocol to be used in an online fashion, where it is ideal to be adaptive to long-term changes in network topology or dynamics of the environment. Hence, the constant step size case is more appropriate as it provides such adaptivity.
Also, a boundedness requirement on the iterates in Theorem 2 is necessary for the mathematical analysis of convergence. In practical numerical implementations, choices of the policy parameters $\theta_i$ would be constrained to bounded sets $H_i \subset \mathbb{R}^{N_i}$. In such an implementation, the iteration (4) would be replaced with an iteration projected onto the set $H_i$. The conclusions of Theorem 2 would continue to hold, but with the ODE (6) replaced with an appropriate projected ODE. See [6] for further discussion.
5.1  Relation to the Example
In the example of Section 4, one approach to implementing our distributed optimization protocol involves passing messages associated with the optimization protocol alongside normal network traffic, as we will now explain. Each sensor $i$ should maintain and update two vectors: a parameter vector $\theta_i(k) \in \mathbb{R}^2$ and an eligibility vector $z_i^\beta(k)$. If a triggering event occurs at sensor $i$ at time $k$, the eligibility vector is updated according to
$$z_i^\beta(k) = \beta\, z_i^\beta(k-1) + \frac{\nabla_{\theta_i} \mu_{\theta_i(k)}(a_i(k)\,|\,w(k))}{\mu_{\theta_i(k)}(a_i(k)\,|\,w(k))}.$$
Otherwise, $z_i^\beta(k) = \beta\, z_i^\beta(k-1)$. Furthermore, each sensor maintains an estimate $Y_i(k)$ of the global reward. At each time $k$, the $i$th sensor observes a reward (negative cost) of $r_i(k) = -x_i(k) - \xi\, D_i(k)$. If two neighboring sensors are both not asleep at a time $k$, they communicate their global reward estimates from the previous time. If the $i$th sensor is not involved in a reward communication event at that time, its global reward estimate is updated according to $Y_i(k) = \alpha\, Y_i(k-1) + r_i(k)$. On the other hand, at any time $k$ that there is a communication event, its global reward estimate is updated according to $Y_i(k) = r_i(k) + \alpha\, (Y_i(k-1) + Y_j(k-1))/2$, where $j$ is the index of the sensor with which communication occurs. If communication occurs with multiple neighbors, the corresponding global reward estimates are averaged pairwise in an arbitrary order. Clearly this update process can be modeled in terms of the sets $W_S$ introduced in the previous section. In this context, the graph $\bar{E}$ contains an edge for each pair of neighbors in the sensor network, where the neighborhood relations are captured by $N$, as introduced in Section 4. To optimize performance over time, each sensor would update its parameter values according to our stochastic approximation iteration (4).
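Pulling these local rules together, here is an illustrative per-sensor sketch (all names are assumptions): each sensor stores only $(\theta_i, z_i^\beta, Y_i)$ and updates them from its own reward and, on a communication event, one neighbor's previous estimate.

```python
class SensorState:
    """Illustrative local state for one sensor under the rules above."""
    def __init__(self, theta, alpha=0.99, beta=0.9, epsilon=1e-4):
        self.theta = list(theta)               # policy parameters theta_i
        self.z = [0.0] * len(theta)            # eligibility vector z_i^beta
        self.Y = 0.0                           # global-reward estimate Y_i
        self.alpha, self.beta, self.epsilon = alpha, beta, epsilon

    def step(self, reward, score=None, neighbor_Y=None):
        """reward: r_i(k) = -x_i(k) - xi * D_i(k).
        score: grad_{theta_i} log mu(a_i(k)|w(k)) if a triggering event
        occurred here, else None.  neighbor_Y: Y_j(k-1) if a communication
        event occurred, else None."""
        s = score if score is not None else [0.0] * len(self.z)
        self.z = [self.beta * zi + si for zi, si in zip(self.z, s)]
        if neighbor_Y is None:
            self.Y = self.alpha * self.Y + reward
        else:
            # average with the neighbor's previous estimate, then add reward
            self.Y = reward + self.alpha * (self.Y + neighbor_Y) / 2.0
        self.theta = [t + self.epsilon * zi * (1.0 - self.alpha) * self.Y
                      for t, zi in zip(self.theta, self.z)]
```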
To highlight the simplicity of this protocol, note that each sensor need only maintain and
update a few numerical values. Furthermore, the only communication required by the
optimization protocol is that an extra scalar numerical value be transmitted and an extra
scalar numerical value be received during the reception or transmission of any packet.
As a numerical example, consider the network topology in Figure 1. Here, at every time step, an observation arrives at a sensor with probability 0.02, and each sensor maintains a queue of up to 20 observations. Policy parameters $\theta_{i1}$ and $\theta_{i2}$ for each sensor $i$ are constrained to lie in the interval [0.05, 0.95]. (Note that for this set of parameters, the
chance of a buffer overflow is very small, and hence did not occur in our simulations.)
A baseline policy is defined by having leaf nodes transmit with maximum probability, and
interior nodes splitting their time roughly evenly between transmission and reception, when
not forced to sleep by the power constraint.
Applying our decentralized optimization method to this example, it is clear from Figure 2 that the performance of the network is quickly and dramatically improved. Over time,
the algorithm converges to the neighborhood of a local optimum as expected. Further,
the algorithm achieves qualitatively similar performance to gradient optimization using the
centralized OLPOMDP method of [3, 4, 1], hence decentralization comes at no cost.
6  Remarks and Further Issues
We are encouraged by the simplicity and scalability of the distributed optimization protocol
we have presented. We believe that this protocol represents both an interesting direction
for practical applications involving networks of electronic devices and a significant step in
the policy gradient literature. However, there is an important outstanding issue that needs
to be addressed to assess the potential of this approach: whether or not parameters can be
adapted fast enough for this protocol to be useful in applications. There are two dimensions
to this issue: (1) the variance of gradient estimates and (2) the convergence rate of the underlying ODE. Both should be explored through experimentation with models that capture practical contexts. Also, there is room for research that explores how variance can be reduced and the convergence rate of the ODE can be accelerated.

Figure 1: Example network topology. [Diagram: ten sensor nodes arranged in a tree rooted at the base station.]

Figure 2: Convergence of method. [Plot: long-term average reward versus iteration (x 10^6) for OLPOMDP, the decentralized protocol, and the baseline policy.]
Acknowledgements
The authors thank Abbas El Gamal, Abtin Keshavarzian, Balaji Prabhakar, and Elif Uysal
for stimulating conversations on sensor network models and applications. This research
was supported by NSF CAREER Grant ECS-9985229 and by the ONR under grant MURI N00014-00-1-0637. The first author was also supported by a Benchmark Stanford Graduate Fellowship.
References
[1] P. L. Bartlett and J. Baxter. Stochastic Optimization of Controlled Markov Decision Processes.
In IEEE Conference on Decision and Control, pages 124?129, 2000.
[2] P. L. Bartlett and J. Baxter. Estimation and Approximation Bounds for Gradient-Based Reinforcement Learning. Journal of Computer and System Sciences, 64:133?150, 2002.
[3] J. Baxter and P. L. Bartlett. Infinite-Horizon Gradient-Based Policy Search. Journal of Artificial
Intelligence Research, 15:319?350, 2001.
[4] J. Baxter, P. L. Bartlett, and L. Weaver. Infinite-Horizon Gradient-Based Policy Search: II. Gradient Ascent Algorithms and Experiments. Journal of Artificial Intelligence Research, 15:351?381,
2001.
[5] T. Jaakkola, S. P. Singh, and M. I. Jordan. Reinforcement Learning Algorithms for Partially
Observable Markov Decision Problems. In Advances in Neural Information Processing Systems
7, pages 345?352, 1995.
[6] H. J. Kushner and G. Yin. Stochastic Approximation Algorithms and Applications. SpringerVerlag, New York, NY, 1997.
[7] P. Marbach, O. Mihatsch, and J.N. Tsitsiklis. Call Admission Control and Routing in Integrated
Service Networks. In IEEE Conference on Decision and Control, 1998.
[8] P. Marbach and J.N. Tsitsiklis. Simulation?Based Optimization of Markov Reward Processes.
IEEE Transactions on Automatic Control, 46(2):191?209, 2001.
[9] C. C. Moallemi and B. Van Roy. Appendix to NIPS Submission. URL: http://www.
moallemi.com/ciamac/papers/nips-2003-appendix.pdf, 2003.
| 2448 |@word longterm:1 termination:2 simulation:2 boundedness:1 initial:1 contains:1 com:1 must:2 numerical:5 partition:1 j1:2 update:6 intelligence:2 leaf:1 device:6 ith:12 provides:2 iterates:1 node:5 relayed:1 mathematical:1 along:2 admission:1 differential:3 supply:1 shorthand:1 pairwise:1 expected:1 indeed:1 roughly:1 behavior:2 multi:1 decreasing:2 window:1 considering:1 totally:1 becomes:2 gamal:1 xx:1 bounded:3 moreover:1 underlying:2 what:1 kind:1 minimizes:1 spends:2 guarantee:2 every:8 control:9 unit:1 grant:2 before:2 service:1 engineering:3 local:8 dropped:4 limit:4 interpolation:1 reception:8 limited:2 graduate:1 averaged:1 unique:1 practical:3 yj:2 communicated:2 procedure:1 rnn:1 thought:2 cannot:3 interior:1 onto:1 operator:2 context:8 applying:1 www:1 measurable:1 optimize:1 go:1 starting:1 ergodic:1 simplicity:2 splitting:1 estimator:2 q:6 rule:2 variation:1 transmit:5 updated:4 imagine:1 commercial:1 designing:2 element:6 roy:2 balaji:1 submission:1 observed:1 electrical:2 capture:2 connected:4 observes:1 benjamin:1 environment:3 complexity:1 reward:24 dynamic:4 motivate:1 depend:2 raise:1 weakly:1 algebra:1 singh:1 incur:1 decentralization:1 completely:1 forced:2 distinct:1 describe:1 fast:1 artificial:2 aggregate:3 neighborhood:4 exhaustive:1 stanford:7 otherwise:3 noisy:1 online:1 hoc:1 sequence:1 differentiable:1 propose:2 product:1 adaptation:1 neighboring:1 description:1 ky:2 scalability:1 convergence:13 empty:3 transmission:7 extending:1 requirement:2 produce:1 r1:1 optimum:2 converges:2 prabhakar:1 develop:2 pose:1 ij:3 received:2 involves:1 come:1 direction:1 radius:1 aperiodic:2 stochastic:9 centered:1 packet:17 routing:1 implementing:1 require:2 extension:1 hold:1 around:1 credit:1 normal:1 achieves:1 estimation:2 establishes:2 weighted:1 clearly:1 sensor:45 rather:1 pn:2 mobile:1 jaakkola:1 baseline:2 el:1 nn:1 typically:1 integrated:1 a0:4 w:6 relation:3 selects:1 i1:4 issue:6 among:2 denoted:1 development:1 constrained:2 equal:1 having:1 hop:1 encouraged:1 represents:1 producer:1 irreducible:2 employ:1 randomly:1 few:1 simultaneously:1 preserve:1 replaced:2 consisting:2 maintain:2 attempt:3 centralized:1 message:1 custom:1 arrives:1 a0i:1 chain:1 edge:11 closer:1 moallemi:3 necessary:1 mihatsch:1 isolated:1 theoretical:1 instance:1 retains:1 ordinary:3 cost:2 subset:3 delay:7 dij:1 motivating:1 chooses:2 fundamental:2 explores:1 together:2 continuously:1 quickly:1 w1:1 central:5 satisfied:1 management:1 rn1:1 choose:1 li:2 account:1 potential:2 satisfy:1 ad:1 depends:1 root:1 sup:2 traffic:1 maintains:4 option:2 contribution:4 minimize:1 ass:1 variance:2 weak:2 disseminates:1 processor:1 explain:1 strongest:1 failure:1 involved:1 associated:4 di:3 sampled:3 proved:1 knowledge:2 lim:4 conversation:1 reflecting:1 improved:1 formulation:1 furthermore:2 until:1 hand:1 receives:1 believe:2 hence:8 i2:3 indistinguishable:1 during:1 eligibility:3 maintained:1 pdf:1 awakening:1 common:2 ji:5 significant:2 ai:25 automatic:1 fk:2 marbach:2 base:8 recent:1 optimizing:3 buffer:8 onr:1 continue:1 yi:11 transmitted:2 minimum:1 preceding:1 employed:1 period:3 ii:2 multiple:2 full:3 rj:4 offer:2 long:4 a1:3 controlled:1 scalable:3 involving:5 iteration:8 kernel:1 sometimes:1 represent:1 abbas:1 receive:3 addition:3 fellowship:1 ode:4 addressed:1 interval:3 extra:2 limk:2 ascent:1 jordan:1 call:1 ideal:1 exceed:1 enough:1 wn:1 baxter:4 switch:1 zi:10 topology:3 triggering:5 idea:2 regarding:1 whether:1 motivated:2 bartlett:4 url:1 f:2 queue:10 passing:1 oscillate:1 
york:1 action:14 remark:1 dramatically:1 useful:1 clear:1 amount:1 locally:1 lifespan:1 reduced:1 http:1 exist:1 nsf:1 arising:1 per:1 disjoint:2 discrete:1 write:1 packetized:2 graph:4 fraction:4 year:1 sum:1 enforced:1 parameterized:2 communicate:5 decide:1 electronic:4 parsimonious:1 decision:5 appendix:4 bound:1 hi:2 sleep:6 adapted:1 occur:4 constraint:1 constrain:1 ri:8 generates:1 attempting:3 transferred:2 according:5 across:4 wi:6 pr:1 taken:3 equation:4 mutually:1 agree:1 discus:1 describing:1 letting:1 available:1 operation:1 decentralized:6 experimentation:1 olpomdp:3 away:1 appropriate:2 occurrence:1 alternative:1 include:2 kushner:1 build:3 overflow:3 establish:3 objective:6 added:1 occurs:4 exclusive:1 gradient:27 kth:1 thank:1 entity:1 consumption:2 bvr:1 w0:4 evenly:1 length:2 index:1 modeled:1 holding:1 negative:1 design:2 implementation:2 motivates:1 policy:22 observation:6 markov:5 benchmark:1 finite:4 t:2 defining:1 communication:13 team:2 rn:4 station:8 arbitrary:1 introduced:2 pair:1 required:1 specified:2 learned:1 established:2 nip:2 address:1 beyond:1 alongside:1 dynamical:1 pnto:1 max:1 including:1 power:4 event:10 weaver:1 representing:1 scheme:1 historically:1 technology:1 carried:1 epoch:1 literature:5 prior:3 acknowledgement:1 relative:1 fully:1 expect:2 highlight:1 adaptivity:1 interesting:1 agent:2 rni:2 gather:2 supported:2 wireless:2 asynchronous:4 arriving:1 tsitsiklis:2 neighbor:2 distributed:15 van:2 dimension:1 transition:5 author:2 made:1 adaptive:3 perpetually:1 simplified:1 projected:2 qualitatively:1 ec:1 reinforcement:2 transaction:1 approximate:2 observable:1 global:7 xi:3 subsequence:1 continuous:1 search:2 robust:1 ca:2 career:1 protocol:32 did:1 pk:1 main:1 arrival:2 allowed:1 fashion:2 slow:1 ny:1 comprises:1 lie:2 governed:1 communicates:2 theorem:6 specific:1 ciamac:3 sensing:1 r2:2 decay:1 explored:1 exists:5 importance:1 occurring:2 horizon:2 yin:1 explore:1 infinitely:3 ordered:2 partially:1 scalar:4 satisfies:3 relies:1 chance:1 asleep:1 stimulating:1 viewed:2 goal:2 room:1 change:2 springerverlag:1 infinite:3 averaging:1 total:1 attempted:2 latter:1 slept:1 outstanding:1 accelerated:1 correlated:1 |
Training fMRI Classifiers to Discriminate
Cognitive States across Multiple Subjects
Xuerui Wang, Rebecca Hutchinson, and Tom M. Mitchell
Center for Automated Learning and Discovery
Carnegie Mellon University
5000 Forbes Avenue, Pittsburgh, PA 15213
{xuerui.wang, rebecca.hutchinson, tom.mitchell}@cs.cmu.edu
Abstract
We consider learning to classify cognitive states of human subjects,
based on their brain activity observed via functional Magnetic Resonance
Imaging (fMRI). This problem is important because such classifiers constitute ?virtual sensors? of hidden cognitive states, which may be useful
in cognitive science research and clinical applications. In recent work,
Mitchell, et al. [6,7,9] have demonstrated the feasibility of training such
classifiers for individual human subjects (e.g., to distinguish whether the
subject is reading an ambiguous or unambiguous sentence, or whether
they are reading a noun or a verb). Here we extend that line of research,
exploring how to train classifiers that can be applied across multiple human subjects, including subjects who were not involved in training the
classifier. We describe the design of several machine learning approaches
to training multiple-subject classifiers, and report experimental results
demonstrating the success of these methods in learning cross-subject
classifiers for two different fMRI data sets.
1  Introduction
The advent of functional Magnetic Resonance Imaging (fMRI) has made it possible to
safely, non-invasively observe correlates of neural activity across the entire human brain at
high spatial resolution. A typical fMRI session can produce a three dimensional image of
brain activation once per second, with a spatial resolution of a few millimeters, yielding
tens of millions of individual fMRI observations over the course of a twenty-minute session. This fMRI technology holds the potential to revolutionize studies of human cognitive
processing, provided we can develop appropriate data analysis methods.
Researchers have now employed fMRI to conduct hundreds of studies that identify which
regions of the brain are activated on average when a human performs a particular cognitive
task (e.g., reading, puzzle solving). Typical research publications describe summary statistics of brain activity in various locations, calculated by averaging together fMRI observations collected over multiple time intervals during which the subject responds to repeated
stimuli of a particular type.
Our interest here is in a different problem: training classifiers to automatically decode the
subject's cognitive state at a single instant or interval in time. If we can reliably train such
classifiers, we may be able to use these as "virtual sensors" of hidden cognitive states, to
observe previously hidden cognitive processes in the brain.
In recent work [6,7,9], Mitchell et al. have demonstrated the feasibility of training such
classifiers. Whereas their work focussed primarily on training a different classifier for each
human subject, our focus in this paper is on training a single classifier that can be used
across multiple human subjects, including humans not involved in the training process.
This is challenging because different brains have substantially different sizes and shapes,
and because different people may generate different brain activation given the same cognitive state. Below we briefly survey related work, describe a range of machine learning
approaches to this problem, and present experimental results showing statistically significant cross-subject classifier accuracies for two different fMRI studies.
2  Related Work
As noted above, Mitchell et al. [6,7,9] describe methods for training classifiers of cognitive states, focussing primarily on training subject-specific classifiers. More specifically,
they train classifiers that distinguish among a set of predefined cognitive states, based on a
single fMRI image or fixed window of fMRI images collected relative to the presentation
of a particular stimulus. For example, they report on successful classifiers to distinguish
whether the object presented to the subject is a sentence or a picture, whether the sentence
being viewed is ambiguous or unambiguous, whether an isolated word is a noun or a verb,
and whether an isolated noun is about a person, building, animal, etc. They used several
different classifiers, and report that dimensionality reduction methods are essential given
the high dimensional, sparse training data. They propose specific methods for dimensionality reduction that take advantage of data collected during rest periods between stimuli,
and demonstrate that these outperform standard methods for feature selection such as those
based on mutual information. Despite these positive results, there remain several limitations: classifiers are trained and applied over a fixed time window of data, classifiers are
trained only to discriminate among predefined classes of cognitive states, and they deal
only with single cognitive states rather than multiple states evolving over time.
In earlier work, Wagner et al. [11] report that they have been able to predict whether
a verbal experience will be remembered later, based on the magnitude of activity within
certain parts of left prefrontal and temporal cortices during that experience. Haxby et al.
[2] show that different patterns of fMRI activity are generated when a subject views a
photograph of a face versus a house, etc., and show that by dividing the fMRI data for each
photograph category into two samples, they could automatically match the data samples
related to the same category. Recent work on brain computer interfaces (see, e.g., [8]) also
seeks to decode observed brain activity (often EEG or direct neural recordings, rather than
fMRI) typically for the purpose of controlling external devices.
3  Approach

3.1  Learning Method
In this paper we explore the use of machine learning methods to approximate classification
functions of the following form
f : hI1 , ..., In i ? CognitiveState
where hI1 , ..., In i is a sequence of n fMRI images collected during a contiguous time interval and where CognitiveState is the set of cognitive states to be discriminated. We explore
a number of classifier training methods, including:
- Gaussian Naive Bayes (GNB). This classifier learns a class-conditional Gaussian generative model for each feature.1 New examples are classified using Bayes rule and the assumption that features are conditionally independent given the class (see, for instance, [5]).
- Support Vector Machine (SVM). We employ a linear kernel Support Vector Machine (see, for instance, [1]).
- k Nearest Neighbor (kNN). We use k Nearest Neighbor with a Euclidean distance metric, considering values of 1, 3, and 5 for k (see, for instance, [5]).
Classifiers were evaluated using a "leave one subject out" cross validation procedure, in which each of the $m$ human subjects was used as a test subject while training on the remaining $m-1$ subjects, and the mean accuracy over these held-out subjects was calculated.
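A minimal sketch of this evaluation loop, using scikit-learn's Gaussian naive Bayes as an assumed stand-in for the GNB classifier described above (the data structures are illustrative):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

def leave_one_subject_out(X_by_subject, y_by_subject):
    """X_by_subject[s]: (examples x features) array for subject s;
    y_by_subject[s]: the matching labels. Train on the other m-1
    subjects, test on the held-out one, return the mean accuracy."""
    accuracies = []
    subjects = list(X_by_subject)
    for held_out in subjects:
        X_train = np.vstack([X_by_subject[s] for s in subjects if s != held_out])
        y_train = np.concatenate([y_by_subject[s] for s in subjects if s != held_out])
        clf = GaussianNB().fit(X_train, y_train)
        accuracies.append(clf.score(X_by_subject[held_out], y_by_subject[held_out]))
    return float(np.mean(accuracies))
```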
3.2  Feature Extraction
In general, each input image may contain many thousands of voxels. We explored a variety
of approaches to reducing the dimensionality of the input feature vector, including methods
that select a subset of available features, methods that replace multiple feature values by
their mean, and methods that use both of these extractions. In the latter two cases, we
take means over values found within anatomically defined brain regions (e.g., dorsolateral
prefrontal cortex) which are referred to as Regions of Interest, or ROIs).
We considered the following feature extraction methods:
- Average. For each ROI, calculate the mean activity over all voxels in the ROI. Use these ROI means as the input features.
- ActiveAvg(n). For each ROI, select the n most active voxels,2 then calculate the mean of their values. Again, use these ROI means as the input features. Here the "most active" voxels are those whose activity while performing the task varies the most from their activity when the subject is at rest (see [7] for details).
- Active(n). Select the n most active voxels over the entire brain. Use only these n voxels as input features.
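For concreteness, an illustrative sketch of the three extraction methods; `data` (one activation value per voxel), `roi_of` (an ROI label per voxel), and `activity` (a per-voxel task-versus-rest score, e.g., a t-statistic) are assumed inputs, not names from the paper.

```python
import numpy as np

def average(data, roi_of):
    """Average: the mean activity over all voxels in each ROI."""
    return np.array([data[roi_of == r].mean() for r in np.unique(roi_of)])

def active_avg(data, roi_of, activity, n):
    """ActiveAvg(n): the mean of the n most active voxels in each ROI."""
    feats = []
    for r in np.unique(roi_of):
        idx = np.where(roi_of == r)[0]
        top = idx[np.argsort(activity[idx])[-n:]]   # n largest scores
        feats.append(data[top].mean())
    return np.array(feats)

def active(data, activity, n):
    """Active(n): the n most active voxels over the entire brain."""
    return data[np.argsort(activity)[-n:]]
```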
3.3  Registering Data from Multiple Subjects
Given the different sizes and shapes of different brains, it is not possible to directly map the
voxels in one brain to those in another. We considered two different methods for producing
representations of fMRI data for use across multiple subjects:
- ROI Mapping. Abstract the voxel data in each brain using the Average or ActiveAvg(n) feature extraction method described above. Because each brain contains the same set of anatomically defined ROIs, we can use the resulting representation of average activity per ROI as a canonical representation across subjects.
- Talairach coordinates. The coordinate system of each brain is transformed (geometrically morphed) into the coordinate system of a standard brain (known as the Talairach-Tournoux coordinate system [10]). After this transformation, each brain has the same shape and size, though the transformation is usually imperfect.
1 It is well known that the Gaussian model does not accurately fit fMRI data. Some non-Gaussian models, such as the generalized Gaussian model, which makes use of the kurtosis of the data, and the t-distribution, which is more heavy-tailed, are in our future plan.
2 The fMRI data used here are first preprocessed by FIASCO (http://www.stat.cmu.edu/~fiasco), and the active voxels are determined by a t-test.
There are significant differences in these two approaches. First, note they differ in their
spatial resolution and in the dimension of the resulting input feature vector. ROI Mapping results in just one feature per ROI (we work with at most 35 ROIs per brain) at each
timepoint, whereas Talairach coordinates retain the voxel-level resolution (on the order of
15,000 voxels per brain). Second, the approaches have different noise characteristics. ROI
Mapping reduces noise by averaging voxel activations, whereas the Talairach transformation effectively introduces new noise due to imperfections in the morphing transformation.
Thus, the approaches have complementary advantages and disadvantages. Notice both of
these transformations require background knowledge about brain anatomy in order to identify anatomical landmarks or ROIs.
4  Case Studies
This section describes two fMRI case studies used for training classifiers (detailed in [7]).
4.1  Sentence versus Picture Study
In this fMRI study [3], thirteen normal subjects performed a sequence of trials. During
each trial they were first shown a sentence and a simple picture, then asked whether the
sentence correctly described the picture. We used this data set to explore the feasibility of
training classifiers to distinguish whether the subject is examining a sentence or a picture
during a particular time interval.
In half of the trials the picture was presented first, followed by the sentence, which we will
refer to as PS data set. In the remaining trials, the sentence was presented first, followed
by the picture, which we will call SP data set. Pictures contained geometric arrangements
of two of the following symbols: +, *, $. Sentences were descriptions such as "It is true that the star is below the plus," or "It is not true that the star is above the plus."
The learning task we consider here is to train a classifier to determine, given a particular
16-image interval of fMRI data, whether the subject was viewing a sentence or a picture
during this interval. In other words, we wish to learn a classifier of the form:
$$f : \langle I_1, \ldots, I_{16} \rangle \to \{\mathrm{Picture}, \mathrm{Sentence}\}$$
where $I_1$ is the image captured at the time of stimulus (picture or sentence) onset. In this case we restrict the classifier input to the 7 most relevant ROIs3 determined by a domain expert.
4.2  Syntactic Ambiguity Study
In this fMRI study [4], subjects were presented with ambiguous and unambiguous sentences, and were asked to respond to a yes-no question about the content of each sentence.
The questions were designed to ensure that the subject was in fact processing the sentence.
Five normal subjects participated in this study, which we will refer to as SA data set.
We are interested here in learning a classifier that takes as input an interval of fMRI activity,
and determines whether the subject was currently reading an unambiguous or ambiguous
sentence. An example ambiguous sentence is "The experienced soldiers warned about the dangers conducted the midnight raid." An example of an unambiguous sentence is "The experienced soldiers spoke about the dangers before the midnight raid." We train classifiers
of the form
$$f : \langle I_1, \ldots, I_{16} \rangle \to \{\mathrm{Ambiguous}, \mathrm{Unambiguous}\}$$
3 They are pars opercularis of the inferior frontal gyrus, pars triangularis of the inferior frontal gyrus, intra-parietal sulcus, inferior temporal gyri and sulci, inferior parietal lobule, dorsolateral prefrontal cortex, and an area around the calcarine sulcus, respectively.
where $I_1$ is the image captured at the time when the sentence is first presented to the subject. In this case we restrict the classifier input to the 4 ROIs4 considered to be the most relevant.
5  Experimental Results
The primary goal of this work is to determine whether and how it is possible to train classifiers of cognitive states across multiple human subjects. We experimented using data
from the two case studies described above, measuring the accuracy of classifiers trained for
single subjects, as well as those trained for multiple subjects. Note we might expect the
multiple subject classification accuracies to be lower due to differences among subjects, or
to be higher due to the larger number of training examples available.
In order to test the statistical significance of our results, consider the 95% confidence intervals5 of the accuracies. Assuming that errors on test examples are i.i.d. Bernoulli(p) distributed, the number of observed correct classifications will follow a Binomial(n, p) distribution, where n is the number of test examples. Table 1 displays the lowest accuracies that are statistically significant at the 95% confidence level, where the expected accuracy due to chance is 0.5 given the equal number of examples from both classes. We will not report confidence intervals individually for each accuracy because they are very similar.
Table 1: The lowest accuracies that are significantly better than chance at the 95% level.

                      SP       PS       SP+PS    SA
    # of examples     520      520      1040     100
    Lowest accuracy   54.3%    54.3%    53.1%    59.7%
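These thresholds can be reproduced, to within about a tenth of a percent, with the normal approximation to the 95% confidence interval of a Bernoulli(0.5) mean; the exact construction used for Table 1 may differ slightly. A sketch:

```python
import math

def lowest_significant_accuracy(n, z=1.96):
    """Accuracies above 0.5 + z * sqrt(0.25 / n) lie outside the
    two-sided 95% interval around chance (p = 0.5) for n test examples."""
    return 0.5 + z * math.sqrt(0.25 / n)

for name, n in [("SP", 520), ("PS", 520), ("SP+PS", 1040), ("SA", 100)]:
    print(name, f"{100 * lowest_significant_accuracy(n):.1f}%")
# prints approximately 54.3%, 54.3%, 53.0%, 59.8%
```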
5.1  ROI Mapping
We first consider the ROI Mapping method for merging data from multiple subjects. Table
2 shows the classifier accuracies for the Sentence versus Picture study, when training across
subjects and testing on the subject withheld from the training set. For comparison, it also
shows (in parentheses) the average accuracy achieved by classifiers trained and tested on
single subjects. All results are highly significant compared to the 50% accuracy expected
by chance, demonstrating convincingly the feasibility of training classifiers to distinguish
cognitive states in subjects beyond the training set. In fact, the accuracy achieved on the left
out subject for the multiple-subject classifiers is often very close to the average accuracy of
the single-subject classifiers, and in several cases it is significantly better. This surprisingly
positive result indicates that the accuracy of the multiple-subject classifier, when tested on
new subjects outside the training set, is comparable to the average accuracy achieved when
training and testing using data from a single subject. Presumably this can be explained by
the fact that it is trained using an order of magnitude more training examples, from twelve
subjects rather than one. The increase in training set size apparently compensates for the
variability among subjects.
A second trend apparent in Table 2 is that the accuracies in SP or PS data sets are better
than the accuracies when using their union (SP+PS). Presumably this is due to the fact that
the context in which the stimulus (picture or sentence) appears is more consistent when we
restrict to data in which these stimuli are presented in the same sequence.
4 They include pars opercularis of the inferior frontal gyrus, pars triangularis of the inferior frontal gyrus, Wernicke's area, and the superior temporal gyrus.
5 Under cross validation, we learn m classifiers, and the accuracy we report is the mean accuracy of these classifiers. The size of the confidence interval we compute is an upper bound on the size of the true confidence interval of the mean accuracy, which can be shown using the Lagrangian method.
Table 2: Multiple-subject accuracies in the Sentence versus Picture study (ROI mapping). Numbers in parentheses are the corresponding mean accuracies of single-subject classifiers.

    METHOD          CLASSIFIER   SP              PS              SP+PS
    Average         GNB          88.8% (90.6%)   82.3% (79.6%)   74.3% (66.5%)
    Average         SVM          86.5% (89.0%)   77.1% (83.7%)   75.3% (69.8%)
    Average         1NN          84.8% (86.5%)   73.8% (61.9%)   63.7% (59.7%)
    Average         3NN          86.5% (87.5%)   75.8% (69.2%)   67.3% (59.7%)
    Average         5NN          88.7% (89.4%)   78.7% (74.6%)   68.3% (60.4%)
    ActiveAvg(20)   GNB          92.5% (95.4%)   87.3% (88.1%)   72.8% (75.4%)
    ActiveAvg(20)   1NN          91.5% (94.4%)   83.8% (82.5%)   66.0% (71.2%)
    ActiveAvg(20)   3NN          93.1% (95.4%)   86.2% (83.7%)   71.5% (73.2%)
    ActiveAvg(20)   5NN          93.8% (95.0%)   87.5% (86.2%)   72.0% (73.2%)
Table 3: Multiple-subject accuracies in the Syntactic Ambiguity study (ROI mapping). Numbers in parentheses are the corresponding mean accuracies of single-subject classifiers. To choose n in ActiveAvg(n), we explored all even numbers less than 50, reporting the best.

    METHOD          CLASSIFIER   ACCURACY
    Average         GNB          58.0% (61.0%)
    Average         SVM          54.0% (63.0%)
    Average         1NN          56.0% (54.0%)
    Average         3NN          57.0% (64.0%)
    Average         5NN          58.0% (60.0%)
    ActiveAvg(n)    GNB          64.0% (68.0%)
    ActiveAvg(n)    SVM          65.0% (71.0%)
    ActiveAvg(n)    1NN          64.0% (61.0%)
    ActiveAvg(n)    3NN          69.0% (60.0%)
    ActiveAvg(n)    5NN          62.0% (64.0%)
Classifier accuracies for the Syntactic Ambiguity study are shown in Table 3. Note accuracies above 59.7% are significantly better than chance. The accuracies for both single-subject and multiple-subject classifiers are lower than in the first study, perhaps due in part to the smaller number of subjects and training examples. Although we cannot draw strong conclusions from the results of this study, it provides modest additional support for the feasibility of training multiple-subject classifiers using ROI mapping. Note that accuracies of the multiple-subject classifiers are again comparable to those of single-subject classifiers.
5.2  Talairach Coordinates
Next we explore the Talairach coordinates method for merging data from multiple subjects.
Here we consider the Syntactic Ambiguity study only6 . Note one difficulty in utilizing the
Talairach transformation here is that slightly different regions of the brain were scanned for
different subjects. Figure 1 shows the portions of the brain that were scanned for two of the
subjects along with the intersection of these regions from all five subjects. In combining
data from multiple subjects, we used only the data in this intersection.
6 We experienced technical difficulties in applying the Talairach transformation software to the Sentence versus Picture study (see [3] for details).
Subject 1
Subject 2
Intersecting all subjects
Figure 1: The two leftmost panels show in color the scanned portion of the brain for two
subjects (Syntactic Ambiguity study) in Talairach space in sagittal view. The rightmost
panel shows the intersection of these scanned bands across all five subjects.
The results of training multiple-subject classifiers based on the Talairach coordinates
method are shown in Table 4. Notice the results are comparable to those achieved by the
earlier ROI Mapping method in Table 3. Based on these results, we cannot state that one
of these methods is significantly more accurate than the other. When using the Talairach
method, we found the most effective feature extraction approach was the Active(n) feature
selection approach, which chooses the n most active voxels from across the brain. Note
that it is not possible to use this feature selection approach with the ROI Mapping method,
because the individual voxels from different brains can only be aligned after performing
the Talairach transformation.
Table 4: Multiple-subject accuracies in the Syntactic Ambiguity study (Talairach coordinates). Numbers in parentheses are the mean accuracies of single-subject classifiers. For n in Active(n), we explored all even numbers less than 200, reporting the best.

    METHOD      CLASSIFIER   ACCURACY
    Active(n)   GNB          63.0% (72.0%)
    Active(n)   SVM          67.0% (71.0%)
    Active(n)   1NN          60.0% (64.0%)
    Active(n)   3NN          60.0% (69.0%)
    Active(n)   5NN          62.0% (69.0%)

6  Summary and Conclusions
The primary goal of this research was to determine whether it is feasible to use machine
learning methods to decode mental states across multiple human subjects. The successful
results for two case studies indicate that this is indeed feasible.
Two methods were explored to train multiple-subject classifiers based on fMRI data. ROI
mapping abstracts fMRI data by using the mean fMRI activity in each of several anatomically defined ROIs to map different brains in terms of ROIs. The transformation to Talairach coordinates morphs brains into a standard coordinate frame, retaining the approximate spatial resolution of the original data. Using these approaches, it was possible to
train classifiers to distinguish, e.g., whether the subject was viewing a picture or a sentence
describing a picture, and to apply these successfully to subjects outside the training set.
In many cases, the classification accuracy for subjects outside the training set equalled or
exceeded the accuracy achieved by training on data from just the single subject. The results using the two methods showed no statistically significant difference in the Syntactic
Ambiguity study.
It is important to note that while our empirical results demonstrate the ability to successfully
distinguish among a predefined set of states occurring at specific times while the subject
performs specific tasks, they do not yet demonstrate that trained classifiers can reliably detect cognitive states occurring at arbitrary times while the subject performs arbitrary tasks.
We intend to pursue this more general goal in future work. We foresee many opportunities
for future machine learning research in this area. For example, we plan to next learn models of temporal behavior, in contrast to the work reported here which considers only data
at a single time interval. Machine learning methods such as Hidden Markov Models and
Dynamic Bayesian Networks appear relevant. A second research direction is to develop
learning methods that take advantage of data from multiple studies, in contrast to the single
study efforts described here.
Acknowledgments
We are grateful to Marcel Just for providing the fMRI data for these experiments, and
for many valuable discussions and suggestions. We would like to thank Francisco Pereira
and Radu S. Niculescu for providing much code to run our experiments, and Vladimir
Cherkassky, Joel Welling, Erika Laing and Timothy Keller for their instruction on techniques related to Talairach transformation.
References
[1] Burges, C., A Tutorial on Support Vector Machines for Pattern Recognition, Journal of data
Mining and Knowledge Discovery, 2(2),121-167, 1998.
[2] Haxby, J., Gobbini, M., Furey, M., Ishai, A., Schouten, J., & Pietrini, P., Distributed and Overlapping Representations of Faces and Objects in Ventral Temporal Cortex, Science, 293, 2425-2430,
2001.
[3] Keller, T., Just, M., & Stenger, V., Reading Span and the Time-course of Cortical Activation in
Sentence-Picture Verification,Annual Convention of the Psychonomic Society, Orlando, FL, 2001.
[4] Mason, R., Just, M., Keller, T., & Carpenter, P., Ambiguity in the Brain: What Brain Imaging Reveals about the Processing of Syntactically Ambiguous Sentences, Journal of Experimental
Psychology: Learning, Memory, and Cognition, in press, 2003.
[5] Mitchell, T.M., Machine Learning, McGraw-Hill, 1997
[6] Mitchell, T.M., Hutchinson, R., Just, M., Niculescu, R., Pereira, F., & Wang, X., Classifying
Instantaneous Cognitive States from fMRI Data, The American Medical Informatics Association 2003
Annual Symposium, to appear, 2003
[7] Mitchell, T.M., Hutchinson, R., Niculescu, R., Pereira, F., Wang, X., Just, M., & Newman, S.,
Learning to Decode Cognitive States from Brain Images, Machine Learning: Special Issue on Data
Mining Lessons Learned, accepted, 2003
[8] NIPS 2001 Brain Computer Interface Workshop, Whistler, BC, Canada, December 2001.
[9] Pereira, F., Just, M., & Mitchell, T.M., Distinguishing Natural Language Processes on the Basis
of fMRI-measured Brain Activation, PKDD 2001, Freiburg, Germany, 2001.
[10] Talairach, J., & Tournoux, P., Co-planar Stereotaxic Atlas of the Human Brain, Thieme, New
York, 1988.
[11] Wagner, A., Schacter, D., Rotte, M., Koutstaal, W., Maril, A., Dale, A., Rosen, B., & Buckner,
R., Building Memories: Remembering and Forgetting of Verbal Experiences as Predicted by Brain
Activity, Science, 281, 1188-1191, 1998.
Analog Circuits for Constrained Optimization
John C. Platt 1
Computer Science Department, 256-80
California Institute of Technology
Pasadena, CA 91125
ABSTRACT
This paper explores whether analog circuitry can adequately perform constrained optimization. Constrained optimization circuits
are designed using the differential multiplier method. These circuits fulfill time-varying constraints correctly. Example circuits include a quadratic programming circuit and a constrained flip-flop.
1  INTRODUCTION
Converting perceptual and cognitive tasks into constrained optimization problems
is a useful way of generating neural networks to solve those tasks. Researchers have
used constrained optimization networks to solve the traveling salesman problem
[Durbin, 1987] [Hopfield, 1985], to perform object recognition [Gindi, 1988], and to
decode error-correcting codes [Platt, 1986].
Implementing constrained optimization in analog VLSI is advantageous, because an
analog VLSI chip can solve a large number of differential equations in parallel [Mead,
1989]. However, analog circuits only approximate the desired differential equations.
Therefore, we have built test circuits to determine whether analog circuits can fulfill
user-specified constraints.
2  THE DIFFERENTIAL MULTIPLIER METHOD
The differential multiplier method (DMM) is a method for creating differential equations that perform constrained optimization. The DMM was originally proposed
by [Arrow, 1958] as an economic model. It was used as a neural network by [Platt,
1987].
1 Current address: Synaptics, 2860 Zanker Road, Suite 105, San Jose, CA 95134
Figure 1. The architecture of the DMM. The $\vec{x}$ capacitor in the figure represents the $x_i$ neurons in the network. The $-\nabla f$ box computes the current needed for the neurons to minimize $f$. The rest of the circuitry causes the network to fulfill the constraint $g(\vec{x}) = 0$.

Figure 2. A circuit that implements quadratic programming. $x$, $y$, and $\lambda$ are voltages. "TC" refers to a transconductance amplifier.
A constrained optimization problem is to find an $\vec{x}$ such that $f(\vec{x})$ is minimized subject to a constraint $g(\vec{x}) = 0$. In order to find a constrained minimum, the DMM finds the critical points $(\vec{x}, \lambda)$ of the Lagrangian
$$\mathcal{L} = f(\vec{x}) + \lambda\, g(\vec{x}), \qquad (1)$$
by performing gradient descent on the variables $\vec{x}$ and gradient ascent on the Lagrange multiplier $\lambda$:
$$\frac{dx_i}{dt} = -\frac{\partial \mathcal{L}}{\partial x_i} = -\frac{\partial f}{\partial x_i} - \lambda \frac{\partial g}{\partial x_i}, \qquad \frac{d\lambda}{dt} = +\frac{\partial \mathcal{L}}{\partial \lambda} = g(\vec{x}). \qquad (2)$$
The DMM can be thought of as a neural network which performs gradient descent on a function $f(\vec{x})$, plus feedback circuitry to find the $\lambda$ that causes the neural network output to fulfill the constraint $g(\vec{x}) = 0$ (see Figure 1).
The gradient ascent on the $\lambda$ is necessary for stability. The stability can be examined by combining the two equations (2) to yield a set of second-order differential equations
$$\frac{d^2 x_i}{dt^2} + \sum_j \left( \frac{\partial^2 f}{\partial x_i\, \partial x_j} + \lambda \frac{\partial^2 g}{\partial x_i\, \partial x_j} \right) \frac{dx_j}{dt} + g(\vec{x})\, \frac{\partial g}{\partial x_i} = 0, \qquad (3)$$
which is analogous to the equations that govern a spring-mass-damping system. The differential equations (3) converge to the constrained minima if the damping matrix
$$A_{ij} = \frac{\partial^2 f}{\partial x_i\, \partial x_j} + \lambda \frac{\partial^2 g}{\partial x_i\, \partial x_j} \qquad (4)$$
is positive definite.
The DMM can be extended to satisfy multiple simultaneous constraints. The stability of the DMM can also be improved. See [Platt, 1987] for more details.
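To make the dynamics concrete, the following is a minimal numerical sketch of the DMM equations (2), integrated with forward Euler on a toy problem (the objective, constraint, and step size are illustrative assumptions, not values from this paper):

    import numpy as np

    # Forward-Euler integration of the DMM dynamics (2) on a toy problem:
    # minimize f(x) = x1^2 + x2^2 subject to g(x) = x1 + x2 - 1 = 0.
    grad_f = lambda x: 2.0 * x                      # gradient of f
    g      = lambda x: x[0] + x[1] - 1.0            # constraint function
    grad_g = lambda x: np.array([1.0, 1.0])         # gradient of g

    x, lam, dt = np.zeros(2), 0.0, 0.01
    for _ in range(5000):
        x = x - dt * (grad_f(x) + lam * grad_g(x))  # gradient descent on x
        lam = lam + dt * g(x)                       # gradient ascent on lambda
    print(x, lam)  # x approaches (0.5, 0.5); lam approaches -1

For this toy problem the damping matrix of (4) is 2I, which is positive definite, so the trajectory converges to the constrained minimum.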
3 QUADRATIC PROGRAMMING CIRCUIT
This section describes a circuit that solves a specific quadratic programming problem for two variables. A quadratic programming circuit is interesting, because the
basic differential multiplier method is guaranteed to find the constrained minimum.
Also, quadratic programming is useful: it is frequently a sub-problem in a more
complex task. A method of solving general nonlinear constrained optimization is
sequential quadratic programming [Gill, 1981].
We build a circuit to solve a time-dependent quadratic programming problem for two variables:

    min A(x - x₀)² + B(y - y₀)²,    (5)

subject to the constraint

    Cx + Dy + E(t) = 0.    (6)
[Plot omitted: "Constraint Fulfillment for Quadratic Programming"; axes: observed and target voltage (V) versus time (10⁻² sec).]
Figure 3. Plot of two input voltages of transconductance amplifier. The
dashed line is the externally applied voltage E(t). The solid line is the circuit's
solution of -Cx - Dy. The constraint depends on time: the voltage E(t) is a
square wave. The linear constraint is fulfilled when the two voltages are the same.
When E(t) changes suddenly, the circuit changes -Cx - Dy to compensate. The
unusually shaped noise is caused by digitization by the oscilloscope.
[Plot omitted: "Constraint Fulfillment with Ringing"; axes: observed and target voltage (V) versus time (10⁻² sec).]
4.0
Figure 4. Plot of two input voltages of transconductance amplifier: the constraint forces are increased, which causes the system to undergo damped oscillations
around the constraint manifold.
The basic differential multiplier method converts the quadratic programming problem into a system of differential equations:

    k₁ dx/dt = -2Ax + 2Ax₀ - Cλ,
    k₂ dy/dt = -2By + 2By₀ - Dλ,    (7)
    k₃ dλ/dt = Cx + Dy + E(t).
The first two equations are implemented with a resistor and capacitor (with a follower for zero output impedance). The third is implemented with resistors summing
into the negative input of a transconductance amplifier. The positive input of the
amplifier is connected to E(t).
The circuit in figure 2 implements the system of differential equations
(8)
where K is the transconductance of the transconductance amplifier. The two systems of differential equations (7) and (8) can match with suitably chosen constants.
The circuit in figure 2 actually performs quadratic programming. The constraint is
fulfilled when the voltages on the inputs of the transconductance amplifier are the
same. The 9 function is a difference between these voltages. Figure 3 is a plot of
-Cx - Dy and E(t) as a function of time: they match reasonably well. The circuit
in figure 2 therefore successfully fulfills the specified constraint.
Decreasing the capacitance C3 changes the spring constant of the second-order differential equation. The forces that push the system towards the constraint manifold
are increased without changing the damping. Therefore, the system becomes underdamped and the constraint is fulfilled with ringing (see figure 4).
The circuit in figure 2 can be easily expanded to solve general quadratic programming for N variables: simply add more x_i neurons and interconnect them with resistors.
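The behavior in figures 3 and 4 can be reproduced numerically by integrating equations (7) with a square-wave E(t). The sketch below uses illustrative constants; none of the values come from the actual hardware:

    import numpy as np

    # Forward-Euler integration of equations (7) with a square-wave E(t).
    A, B, C, D = 1.0, 1.0, 1.0, 1.0                 # illustrative constants
    x0, y0 = 0.1, -0.1
    k1 = k2 = k3 = 1e-3                             # sets the time constants
    dt, T = 1e-5, 0.02
    E = lambda t: 0.2 if (t % 0.01) < 0.005 else -0.2   # square wave

    x = y = lam = 0.0
    trace = []
    for step in range(int(T / dt)):
        t = step * dt
        dx = (-2 * A * (x - x0) - C * lam) / k1
        dy = (-2 * B * (y - y0) - D * lam) / k2
        dl = (C * x + D * y + E(t)) / k3
        x, y, lam = x + dt * dx, y + dt * dy, lam + dt * dl
        trace.append((t, -C * x - D * y, E(t)))     # -Cx - Dy tracks E(t)

Decreasing k₃ relative to k₁ and k₂ raises the constraint forces without changing the damping, which reproduces the underdamped ringing of figure 4.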
4 CONSTRAINED FLIP-FLOP
A flip-flop is two inverters hooked together in a ring. It is a bistable circuit: one
inverter is on while the other inverter is off. A flip-flop can also be considered the
simplest neural network: two neurons which inhibit each other.
If the inverters have infinite gain, then the flip-flop in figure 5 minimizes the function
Figure 5. A flip-flop. U1 and U2 are voltages.
Figure 6. A circuit for constraining a flip-flop. U₁, U₂, and λ are voltages.
[Plot omitted: "Constraint Satisfaction for Non-Quadratic f"; axes: observed and target voltage (V) versus time (10⁻² sec).]
Figure 7. Constraint fulfillment for a non-quadratic optimization function.
The plot consists of the two input voltages of the transconductance amplifier. Again,
E(t) is the dashed line and -Cx - Dy is the solid line. The constraint is fulfilled
when the two voltages are the same. As the constraint changes with time, the flip-flop changes state and the location of the constrained minimum changes abruptly.
After the abrupt change, the constraint is temporarily not fulfilled. However, the
circuit quickly fulfills the constraint. The temporary violation of the constraint
causes the transient spikes in the -Cx - Dy voltage.
Now, we can construct a circuit that minimizes the function in equation (9), subject to some linear constraint Cx + Dy + E(t) = 0, where x and y are the inputs to the inverters. The circuit diagram is shown in figure 6. Notice that this circuit is very similar to the quadratic programming circuit. Now, the x and y circuits are linked with a flip-flop, which adds non-quadratic terms to the optimization function.

The voltages -Cx - Dy and E(t) for this circuit are plotted in figure 7. For most of the time, -Cx - Dy is close to the externally applied voltage E(t). However, because G₁ ≠ G₄ and G₂ ≠ G₅, the flip-flop moves from one minimum to the other and the constraint is temporarily violated. But, the circuitry gradually enforces the constraint again. The temporary constraint violation can be seen in figure 7.
5 CONCLUSIONS
This paper examines real circuits that have been constrained with the differential multiplier method. The differential multiplier method seems to work, even when the underlying circuit is non-linear, as in the case of the constrained flip-flop. Other papers examine applications of the differential multiplier method [Platt, 1987] [Gindi, 1988]. These applications could be built with the same parallel analog hardware discussed in this paper.
Acknowledgement
This paper was made possible by funding from AT&T Bell Labs. Hardware was
provided by Carver Mead, and Synaptics, Inc.
References
Arrow, K., Hurwicz, L., Uzawa, H., [195S], Studies in Linear Nonlinear Programming, Stanford University Press, Stanford, CA.
Durbin, R., Willshaw, D., [19S7], "An Analogue Approach to the Travelling Salesman Problem," Nature, 326, 6S9-69l.
Gill, P. E., Murray, W., Wright, M. H., [19S1], Practical Optimization, Academic
Press, London.
Gindi, G, Mjolsness, E., Anandan, P., [19SS], "Neural Networks for Model Matching
and Perceptual Organization," Advances in Neural Information Processing Systems
I, 61S-625.
Hopfield, J. J., Tank, D. W., [19S5], "'Neural' Computation of Decisions in Optimization Problems," Bioi. Cyber., 52, 141-152.
Mead, C. A., [19S9], Analog VLSI and Neural Systems, Addison-Wesley, Reading,
MA.
Platt, J. C., Hopfield, J. J., [19S6], "Analog Decoding with Neural Networks,"
Neural Networks for Computing, Snowbird, UT, 364-369.
Platt, J. C., Barr, A., [19S7], "Constrained Differential Optimization," Neural Information and Processing Systems, 612-621.
1-norm Support Vector Machines
Ji Zhu, Saharon Rosset, Trevor Hastie, Rob Tibshirani
Department of Statistics
Stanford University
Stanford, CA 94305
{jzhu,saharon,hastie,tibs}@stat.stanford.edu
Abstract
The standard 2-norm SVM is known for its good performance in two-class classification. In this paper, we consider the 1-norm SVM. We argue that the 1-norm SVM may have some advantage over the standard 2-norm SVM, especially when there are redundant noise features. We also propose an efficient algorithm that computes the whole solution path of the 1-norm SVM, which facilitates adaptive selection of the tuning parameter for the 1-norm SVM.
1 Introduction
In standard two-class classification problems, we are given a set of training data (x₁, y₁), . . . , (x_n, y_n), where the input x_i ∈ R^p, and the output y_i ∈ {1, -1} is binary. We wish to find a classification rule from the training data, so that when given a new input x, we can assign a class y from {1, -1} to it.

To handle this problem, we consider the 1-norm support vector machine (SVM):

    min_{β₀,β} Σ_{i=1}^n [1 - y_i(β₀ + Σ_{j=1}^q β_j h_j(x_i))]_+    (1)
    s.t. ‖β‖₁ = |β₁| + · · · + |β_q| ≤ s,    (2)

where D = {h₁(x), . . . , h_q(x)} is a dictionary of basis functions, and s is a tuning parameter. The solution is denoted as β̂₀(s) and β̂(s); the fitted model is

    f̂(x) = β̂₀ + Σ_{j=1}^q β̂_j h_j(x).    (3)
The classification rule is given by sign[f̂(x)]. The 1-norm SVM has been successfully used in [1] and [9]. We argue in this paper that the 1-norm SVM may have some advantage over the standard 2-norm SVM, especially when there are redundant noise features.

To get a good fitted model f̂(x) that performs well on future data, we also need to select an appropriate tuning parameter s. In practice, people usually pre-specify a finite set of values for s that covers a wide range, then either use a separate validation data set or use cross-validation to select a value for s that gives the best performance among the given set.

In this paper, we illustrate that the solution path β̂(s) is piece-wise linear as a function of s (in the R^q space); we also propose an efficient algorithm to compute the exact whole solution path {β̂(s), 0 ≤ s ≤ ∞}, hence help us understand how the solution changes with s and facilitate the adaptive selection of the tuning parameter s. Under some mild assumptions, we show that the computational cost to compute the whole solution path β̂(s) is O(nq min(n, q)²) in the worst case and O(nq) in the best case.
Before delving into the technical details, we illustrate the concept of piece-wise linearity of the solution path β̂(s) with a simple example. We generate 10 training data in each of two classes. The first class has two standard normal independent inputs x₁, x₂. The second class also has two standard normal independent inputs, but conditioned on 4.5 ≤ x₁² + x₂² ≤ 8. The dictionary of basis functions is D = {√2·x₁, √2·x₂, √2·x₁x₂, x₁², x₂²}. The solution path β̂(s) as a function of s is shown in Figure 1. Any segment between two adjacent vertical lines is linear. Hence the right derivative of β̂(s) with respect to s is piece-wise constant (in R^q). The two solid paths are for x₁² and x₂², which are the two relevant features.
Figure 1: The solution path β̂(s) as a function of s.
In section 2, we motivate why we are interested in the 1-norm SVM. In section 3, we describe the algorithm that computes the whole solution path β̂(s). In section 4, we show some numerical results on both simulation data and real world data.
2 Regularized support vector machines
The standard 2-norm SVM is equivalent to fitting a model that solves

    min_{β₀,β} Σ_{i=1}^n [1 - y_i(β₀ + Σ_{j=1}^q β_j h_j(x_i))]_+ + λ‖β‖₂²,    (4)
where λ is a tuning parameter. In practice, people usually choose the h_j(x)'s to be the basis functions of a reproducing kernel Hilbert space. Then a kernel trick allows the dimension of the transformed feature space to be very large, even infinite in some cases (i.e. q = ∞), without causing extra computational burden ([2] and [12]). In this paper, however, we will concentrate on the basis representation (3) rather than a kernel representation.
Notice that (4) has the form loss + penalty, and λ is the tuning parameter that controls the tradeoff between loss and penalty. The loss (1 - yf)_+ is called the hinge loss, and the penalty is called the ridge penalty. The idea of penalizing by the sum-of-squares of the parameters is also used in neural networks, where it is known as weight decay. The ridge penalty shrinks the fitted coefficients β̂ towards zero. It is well known that this shrinkage has the effect of controlling the variances of β̂, hence possibly improves the fitted model's prediction accuracy, especially when there are many highly correlated features [6]. So from a statistical function estimation point of view, the ridge penalty could possibly explain the success of the SVM ([6] and [12]). On the other hand, computational learning theory has associated the good performance of the SVM to its margin maximizing property [11], a property of the hinge loss. [8] makes some effort to build a connection between these two different views.
In this paper, we replace the ridge penalty in (4) with the L₁-norm of β, i.e. the lasso penalty [10], and consider the 1-norm SVM problem:

    min_{β₀,β} Σ_{i=1}^n [1 - y_i(β₀ + Σ_{j=1}^q β_j h_j(x_i))]_+ + λ‖β‖₁,    (5)

which is an equivalent Lagrange version of the optimization problem (1)-(2).
The lasso penalty was first proposed in [10] for regression problems, where the response y is continuous rather than categorical. It has also been used in [1] and [9] for classification problems under the framework of SVMs. Similar to the ridge penalty, the lasso penalty also shrinks the fitted coefficients β̂_j's towards zero, hence (5) also benefits from the reduction in fitted coefficients' variances. Another property of the lasso penalty is that because of the L₁ nature of the penalty, making λ sufficiently large, or equivalently s sufficiently small, will cause some of the coefficients β̂_j's to be exactly zero. For example, when s = 1 in Figure 1, only three fitted coefficients are non-zero. Thus the lasso penalty does a kind of continuous feature selection, while this is not the case for the ridge penalty. In (4), none of the β̂_j's will be equal to zero.

It is interesting to note that the ridge penalty corresponds to a Gaussian prior for the β_j's, while the lasso penalty corresponds to a double-exponential prior. The double-exponential density has heavier tails than the Gaussian density. This reflects the greater tendency of the lasso to produce some large fitted coefficients and leave others at 0, especially in high dimensional problems. Recently, [3] consider a situation where we have a small number of training data, e.g. n = 100, and a large number of basis functions, e.g. q = 10,000. [3] argue that in the sparse scenario, i.e. only a small number of true coefficients β_j's are non-zero, the lasso penalty works better than the ridge penalty; while in the non-sparse scenario, e.g. the true coefficients β_j's have a Gaussian distribution, neither the lasso penalty nor the ridge penalty will fit the coefficients well, since there is too little data from which to estimate these non-zero coefficients. This is the curse of dimensionality taking its toll. Based on these observations, [3] further propose the bet on sparsity principle for high-dimensional problems, which encourages using the lasso penalty.
3 Algorithm
Section 2 gives the motivation why we are interested in the 1-norm SVM. To solve the 1-norm SVM for a fixed value of s, we can transform (1)-(2) into a linear programming problem and use standard software packages; but to get a good fitted model f̂(x) that performs well on future data, we need to select an appropriate value for the tuning parameter s. In this section, we propose an efficient algorithm that computes the whole solution path β̂(s), hence facilitates adaptive selection of s.
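For a fixed s, the linear programming transformation mentioned above is straightforward: split β into positive and negative parts and introduce slack variables for the hinge losses. A minimal sketch using scipy (our own rendering of the standard reduction, not the path algorithm developed below):

    import numpy as np
    from scipy.optimize import linprog

    def l1_svm_lp(H, y, s):
        """Solve the 1-norm SVM (1)-(2) for a fixed s as a linear program.
        H is the n x q matrix with H[i, j] = h_j(x_i); y is in {-1, +1}.
        Variables: beta0 (free), beta = bp - bm with bp, bm >= 0, and
        slacks xi_i >= max(0, 1 - y_i f_i)."""
        n, q = H.shape
        # variable order: [beta0, bp (q), bm (q), xi (n)]
        c = np.concatenate([[0.0], np.zeros(2 * q), np.ones(n)])
        # hinge constraints: -y_i*(beta0 + H_i (bp - bm)) - xi_i <= -1
        A1 = np.hstack([-y[:, None], -y[:, None] * H, y[:, None] * H,
                        -np.eye(n)])
        # L1 budget: sum(bp) + sum(bm) <= s
        A2 = np.concatenate([[0.0], np.ones(2 * q), np.zeros(n)])[None, :]
        res = linprog(c,
                      A_ub=np.vstack([A1, A2]),
                      b_ub=np.concatenate([-np.ones(n), [s]]),
                      bounds=[(None, None)] + [(0, None)] * (2 * q + n))
        beta0 = res.x[0]
        beta = res.x[1:q + 1] - res.x[q + 1:2 * q + 1]
        return beta0, beta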
3.1 Piece-wise linearity
If we follow the solution path β̂(s) of (1)-(2) as s increases, we will notice that since both Σ_i (1 - y_i f_i)_+ and ‖β‖₁ are piece-wise linear, the Karush-Kuhn-Tucker conditions will not change when s increases unless a residual (1 - y_i f̂_i) changes from non-zero to zero, or a fitted coefficient β̂_j(s) changes from non-zero to zero, which correspond to the non-smooth points of Σ_i (1 - y_i f̂_i)_+ and ‖β‖₁. This implies that the derivative of β̂(s) with respect to s is piece-wise constant, because when the Karush-Kuhn-Tucker conditions do not change, the derivative of β̂(s) will not change either. Hence it indicates that the whole solution path β̂(s) is piece-wise linear. See [13] for details.

Thus to compute the whole solution path β̂(s), all we need to do is to find the joints, i.e. the asterisk points in Figure 1, on this piece-wise linear path, then use straight lines to interpolate them, or equivalently, to start at β̂(0) = 0, find the right derivative of β̂(s), let s increase and only change the derivative when β̂(s) gets to a joint.
3.2 Initial solution (i.e. s = 0)
The following notation is used. Let V = {j : β̂_j(s) ≠ 0}, E = {i : 1 - y_i f̂_i = 0}, L = {i : 1 - y_i f̂_i > 0}, and let u denote the right derivative of β̂_V(s), with ‖u‖₁ = 1, where β̂_V(s) denotes the components of β̂(s) with indices in V. Without loss of generality, we assume #{y_i = 1} ≥ #{y_i = -1}; then β̂₀(0) = 1, β̂_j(0) = 0. To compute the path that β̂(s) follows, we need to compute the derivative of β̂(s) at 0. We consider a modified problem:

    min_{β₀,β_j} Σ_{y_i=1} (1 - y_i f_i)_+ + Σ_{y_i=-1} (1 - y_i f_i)    (6)
    s.t. ‖β‖₁ ≤ Δs,  f_i = β₀ + Σ_{j=1}^q β_j h_j(x_i).    (7)

Notice that if y_i = 1, the loss is still (1 - y_i f_i)_+; but if y_i = -1, the loss becomes (1 - y_i f_i). In this setup, the derivative of β̂(Δs) with respect to Δs is the same no matter what value Δs is, and one can show that it coincides with the right derivative of β̂(s) when s is sufficiently small. Hence this setup helps us find the initial derivative u of β̂(s). Solving (6)-(7), which can be transformed into a simple linear programming problem, we get the initial V, E and L. |V| should be equal to |E|. We also have:

    [β̂₀(Δs); β̂_V(Δs)] = [1; 0] + Δs · [u₀; u].    (8)

Δs starts at 0 and increases.
3.3 Main algorithm
The main algorithm that computes the whole solution path β̂(s) proceeds as follows:

1. Increase Δs until one of the following two events happens:
   - A training point hits E, i.e. 1 - y_i f_i ≠ 0 becomes 1 - y_i f_i = 0 for some i.
   - A basis function in V leaves V, i.e. β̂_j ≠ 0 becomes β̂_j = 0 for some j.
   Let the current β̂₀, β̂ and s be denoted by β̂₀^old, β̂^old and s^old.

2. For each j* ∉ V, we solve:

       u₀ + Σ_{j∈V} u_j h_j(x_i) + u_{j*} h_{j*}(x_i) = 0  for i ∈ E,
       Σ_{j∈V} sign(β̂_j^old) u_j + |u_{j*}| = 1,    (9)

   where u₀, the u_j and u_{j*} are the unknowns. We then compute:

       Δloss_{j*}/Δs = -Σ_{i∈L} y_i (u₀ + Σ_{j∈V} u_j h_j(x_i) + u_{j*} h_{j*}(x_i)).    (10)

3. For each i* ∈ E, we solve:

       u₀ + Σ_{j∈V} u_j h_j(x_i) = 0  for i ∈ E\{i*},
       Σ_{j∈V} sign(β̂_j^old) u_j = 1,    (11)

   where u₀ and the u_j are the unknowns. We then compute:

       Δloss_{i*}/Δs = -Σ_{i∈L} y_i (u₀ + Σ_{j∈V} u_j h_j(x_i)).    (12)

4. Compare the computed values of Δloss/Δs from step 2 and step 3. There are q - |V| + |E| = q + 1 such values. Choose the smallest negative Δloss/Δs. Hence,
   - If the smallest Δloss/Δs is non-negative, the algorithm terminates; else
   - If the smallest negative Δloss/Δs corresponds to a j* in step 2, we update

         V ← V ∪ {j*},  u ← (u; u_{j*}).    (13)

   - If the smallest negative Δloss/Δs corresponds to an i* in step 3, we update u and

         E ← E\{i*},  L ← L ∪ {i*} if necessary.    (14)

   In either of the last two cases, β̂(s) changes as:

       [β̂₀(s^old + Δs); β̂_V(s^old + Δs)] = [β̂₀^old; β̂_V^old] + Δs · [u₀; u],    (15)

   and we go back to step 1.

In the end, we get a path β̂(s), which is piece-wise linear.
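Since the output of the algorithm is the list of joints, the exact solution at any intermediate s is recovered by linear interpolation. A small sketch (the joint arrays are hypothetical placeholders for what a run of the algorithm would record):

    import numpy as np

    def beta_at(s, joint_s, joint_beta):
        """Evaluate the piece-wise linear path at s.
        joint_s:    increasing 1-d array of joint locations s_1 < ... < s_m;
        joint_beta: (m, q+1) array of [beta0_hat, beta_hat] at each joint."""
        k = np.searchsorted(joint_s, s, side="right") - 1
        k = int(np.clip(k, 0, len(joint_s) - 2))
        w = (s - joint_s[k]) / (joint_s[k + 1] - joint_s[k])
        return (1.0 - w) * joint_beta[k] + w * joint_beta[k + 1]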
3.4 Remarks
Due to the page limit, we omit the proof that this algorithm does indeed give the exact whole solution path β̂(s) of (1)-(2) (see [13] for a detailed proof). Instead, we explain a little what each step of the algorithm tries to do.

Step 1 of the algorithm indicates that β̂(s) gets to a joint on the solution path and the right derivative of β̂(s) needs to be changed if either a residual (1 - y_i f̂_i) changes from non-zero to zero, or the coefficient of a basis function β̂_j(s) changes from non-zero to zero, when s increases. Then there are two possible types of actions that the algorithm can take: (1) add a basis function into V, or (2) remove a point from E.

Step 2 computes the possible right derivative of β̂(s) if adding each basis function h_{j*}(x) into V. Step 3 computes the possible right derivative of β̂(s) if removing each point i* from E. The possible right derivative of β̂(s) (determined by either (9) or (11)) is such that the training points in E are kept in E when s increases, until the next joint (step 1) occurs. Δloss/Δs indicates how fast the loss will decrease if β̂(s) changes according to u. Step 4 takes the action corresponding to the smallest negative Δloss/Δs. When the loss can not be decreased, the algorithm terminates.
Table 1: Simulation results of 1-norm and 2-norm SVM

                       Test Error (SE)
    Simulation         1-norm          2-norm        No Penalty    |D|   # Joints
  1 No noise input     0.073 (0.010)   0.08 (0.02)   0.08 (0.01)     5    94 (13)
  2 2 noise inputs     0.074 (0.014)   0.10 (0.02)   0.12 (0.03)    14   149 (20)
  3 4 noise inputs     0.074 (0.009)   0.13 (0.03)   0.20 (0.05)    27   225 (30)
  4 6 noise inputs     0.082 (0.009)   0.15 (0.03)   0.22 (0.06)    44   374 (52)
  5 8 noise inputs     0.084 (0.011)   0.18 (0.03)   0.22 (0.06)    65   499 (67)

3.5 Computational cost
We have proposed an algorithm that computes the whole solution path β̂(s). A natural question is then: what is the computational cost of this algorithm? Suppose |E| = m at a joint on the piece-wise linear solution path; then it takes O(qm²) to compute step 2 and step 3 of the algorithm through the Sherman-Morrison updating formula. If we assume the training data are separable by the dictionary D, then all the training data are eventually going to have loss (1 - y_i f̂_i)_+ equal to zero. Hence it is reasonable to assume the number of joints on the piece-wise linear solution path is O(n). Since the maximum value of m is min(n, q) and the minimum value of m is 1, we get that the worst computational cost is O(nq min(n, q)²) and the best computational cost is O(nq). Notice that this is a rough calculation of the computational cost under some mild assumptions. Simulation results (section 4) actually indicate that the number of joints tends to be O(min(n, q)).
4 Numerical results
In this section, we use both simulation and real data results to illustrate the 1-norm SVM.
4.1 Simulation results
The data generation mechanism is the same as the one described in section 1, except that we generate 50 training data in each of two classes, and to make harder problems, we sequentially augment the inputs with additional two, four, six and eight standard normal noise inputs. Hence the second class almost completely surrounds the first, like the skin surrounding the orange, in a two-dimensional subspace. The Bayes error rate for this problem is 0.0435, irrespective of dimension. In the original input space, a hyperplane cannot separate the classes; we use an enlarged feature space corresponding to the 2nd degree polynomial kernel, hence the dictionary of basis functions is D = {√2·x_j, √2·x_j x_j', x_j², j, j' = 1, . . . , p}. We generate 1000 test data to compare the 1-norm SVM and the standard 2-norm SVM. The average test errors over 50 simulations, with different numbers of noise inputs, are shown in Table 1. For both the 1-norm SVM and the 2-norm SVM, we choose the tuning parameters to minimize the test error, to be as fair as possible to each method. For comparison, we also include the results for the non-penalized SVM.

From Table 1 we can see that the non-penalized SVM performs significantly worse than the penalized ones; the 1-norm SVM and the 2-norm SVM perform similarly when there is no noise input (line 1), but the 2-norm SVM is adversely affected by noise inputs (line 2 - line 5). Since the 1-norm SVM has the ability to select relevant features and ignore redundant features, it does not suffer from the noise inputs as much as the 2-norm SVM. Table 1 also shows the number of basis functions q and the number of joints on the piece-wise linear solution path. Notice that q < n and there is a striking linear relationship between |D| and # Joints (Figure 2). Figure 2 also shows the 1-norm SVM result for one simulation.
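For completeness, here is a sketch of the simulation design just described; class 2 is drawn by rejection sampling from the conditioning event, and the function and variable names are ours:

    import numpy as np

    def make_data(n_per_class=50, p_noise=4, rng=np.random.default_rng(0)):
        # class 1: standard bivariate normal
        x1 = rng.standard_normal((n_per_class, 2))
        # class 2: standard normal conditioned on 4.5 <= x1^2 + x2^2 <= 8
        x2 = np.empty((0, 2))
        while len(x2) < n_per_class:
            cand = rng.standard_normal((10 * n_per_class, 2))
            r2 = (cand ** 2).sum(axis=1)
            x2 = np.vstack([x2, cand[(r2 >= 4.5) & (r2 <= 8.0)]])
        x2 = x2[:n_per_class]
        X = np.vstack([x1, x2])
        # append pure-noise N(0, 1) inputs
        X = np.hstack([X, rng.standard_normal((2 * n_per_class, p_noise))])
        y = np.concatenate([np.ones(n_per_class), -np.ones(n_per_class)])
        return X, y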
[Plots omitted: left panel, solution path β̂(s) versus s; middle panel, test error versus s; right panel, number of joints versus number of basis functions.]
60
Figure 2: Left and middle panels: 1-norm SVM when there are 4 noise inputs. The left panel is the
?
piece-wise linear solution path ?(s).
The two upper paths correspond to x21 and x22 , which are the
relevant features. The middle panel is the test error along the solution path. The dash lines correspond
to the minimum of the test error. The right panel illustrates the linear relationship between the number
of basis functions and the number of joints on the solution path when q < n.
4.2 Real data results
In this section, we apply the 1-norm SVM to classification of gene microarrays. Classification of patient samples is an important aspect of cancer diagnosis and treatment. The 2-norm SVM has been successfully applied to microarray cancer diagnosis problems ([5] and [7]). However, one weakness of the 2-norm SVM is that it only predicts a cancer class label but does not automatically select relevant genes for the classification. Often a primary goal in microarray cancer diagnosis is to identify the genes responsible for the classification, rather than class prediction. [4] and [5] have proposed gene selection methods, which we call univariate ranking (UR) and recursive feature elimination (RFE) (see [14]), that can be combined with the 2-norm SVM. However, these procedures are two-step procedures that depend on external gene selection methods. On the other hand, the 1-norm SVM has an inherent gene (feature) selection property due to the lasso penalty. Hence the 1-norm SVM achieves the goals of classification of patients and selection of genes simultaneously.

We apply the 1-norm SVM to leukemia data [4]. This data set consists of 38 training data and 34 test data of two types of acute leukemia, acute myeloid leukemia (AML) and acute lymphoblastic leukemia (ALL). Each datum is a vector of p = 7,129 genes. We use the original input x_j, i.e. the jth gene's expression level, as the basis function, i.e. q = p. The tuning parameter is chosen according to 10-fold cross-validation, then the final model is fitted on all the training data and evaluated on the test data. The number of joints on the solution path is 104, which appears to be O(n), much less than O(q). The results are summarized in Table 2. We can see that the 1-norm SVM performs similarly to the other methods in classification and it has the advantage of automatically selecting relevant genes. We should notice that the maximum number of genes that the 1-norm SVM can select is upper bounded by n, which is usually much less than q in microarray problems.
5 Conclusion
We have considered the 1-norm SVM in this paper. We illustrate that the 1-norm SVM may have some advantage over the 2-norm SVM, especially when there are redundant features. The solution path β̂(s) of the 1-norm SVM is a piece-wise linear function in the tuning parameter s. We have proposed an efficient algorithm to compute the whole solution path β̂(s) of the 1-norm SVM, which facilitates adaptive selection of the tuning parameter s.

Table 2: Results on Microarray Classification

    Method            CV Error   Test Error   # of Genes
    2-norm SVM UR     2/38       3/34         22
    2-norm SVM RFE    2/38       1/34         31
    1-norm SVM        2/38       2/34         17
Acknowledgments
Hastie was partially supported by NSF grant DMS-0204162, and NIH grant R01-CA-72028-01. Tibshirani was partially supported by NSF grant DMS-9971405, and NIH grant R01-CA-72028.
References
[1] Bradley, P. & Mangasarian, O. (1998) Feature selection via concave minimization and support vector machines. In J. Shavlik (eds), ICML'98. Morgan Kaufmann.
[2] Evgeniou, T., Pontil, M. & Poggio, T. (1999) Regularization networks and support vector machines. Advances in Large Margin Classifiers. MIT Press.
[3] Friedman, J., Hastie, T., Rosset, S., Tibshirani, R. & Zhu, J. (2004) Discussion of "Consistency in boosting" by W. Jiang, G. Lugosi, N. Vayatis and T. Zhang. Annals of Statistics. To appear.
[4] Golub, T., Slonim, D., Tamayo, P., Huard, C., Gaasenbeek, M., Mesirov, J., Coller, H., Loh, M., Downing, J. & Caligiuri, M. (1999) Molecular classification of cancer: class discovery and class prediction by gene expression monitoring. Science 286, 531-536.
[5] Guyon, I., Weston, J., Barnhill, S. & Vapnik, V. (2002) Gene selection for cancer classification using support vector machines. Machine Learning 46, 389-422.
[6] Hastie, T., Tibshirani, R. & Friedman, J. (2001) The Elements of Statistical Learning. Springer-Verlag, New York.
[7] Mukherjee, S., Tamayo, P., Slonim, D., Verri, A., Golub, T., Mesirov, J. & Poggio, T. (1999) Support vector machine classification of microarray data. Technical Report AI Memo 1677, MIT.
[8] Rosset, S., Zhu, J. & Hastie, T. (2003) Boosting as a regularized path to a maximum margin classifier. Technical Report, Department of Statistics, Stanford University, CA.
[9] Song, M., Breneman, C., Bi, J., Sukumar, N., Bennett, K., Cramer, S. & Tugcu, N. (2002) Prediction of protein retention times in anion-exchange chromatography systems using support vector regression. Journal of Chemical Information and Computer Sciences, September.
[10] Tibshirani, R. (1996) Regression shrinkage and selection via the lasso. J.R.S.S.B. 58, 267-288.
[11] Vapnik, V. (1995) The Nature of Statistical Learning Theory. Springer-Verlag, New York.
[12] Wahba, G. (1999) Support vector machine, reproducing kernel Hilbert spaces and the randomized GACV. Advances in Kernel Methods - Support Vector Learning, 69-88, MIT Press.
[13] Zhu, J. (2003) Flexible statistical modeling. Ph.D. Thesis. Stanford University.
[14] Zhu, J. & Hastie, T. (2003) Classification of gene microarrays by penalized logistic regression. Biostatistics. Accepted.
An Infinity-sample Theory for Multi-category Large Margin Classification
Tong Zhang
IBM T.J. Watson Research Center
Yorktown Heights, NY 10598
[email protected]
Abstract
The purpose of this paper is to investigate infinity-sample properties of
risk minimization based multi-category classification methods. These
methods can be considered as natural extensions to binary large margin
classification. We establish conditions that guarantee the infinity-sample
consistency of classifiers obtained in the risk minimization framework.
Examples are provided for two specific forms of the general formulation,
which extend a number of known methods. Using these examples, we
show that some risk minimization formulations can also be used to obtain conditional probability estimates for the underlying problem. Such
conditional probability information will be useful for statistical inferencing tasks beyond classification.
1 Motivation
Consider a binary classification problem where we want to predict label y ∈ {±1} based on observation x. One of the most significant achievements for binary classification in machine learning is the invention of large margin methods, which include support vector machines and boosting algorithms. Based on a set of observations (X₁, Y₁), . . . , (X_n, Y_n), a large margin classification algorithm produces a decision function f̂_n by empirically minimizing a loss function that is often a convex upper bound of the binary classification error function. Given f̂_n, the binary decision rule is to predict y = 1 if f̂_n(x) ≥ 0, and to predict y = -1 otherwise (the decision rule at f̂_n(x) = 0 is not important). In the literature, the following form of large margin binary classification is often encountered: we minimize the empirical risk associated with a convex function φ in a pre-chosen function class C_n:

    f̂_n = arg min_{f∈C_n} (1/n) Σ_{i=1}^n φ(f(X_i) Y_i).    (1)

Originally such a scheme was regarded as a compromise to avoid computational difficulties associated with direct classification error minimization, which often leads to an NP-hard problem. The current view in the statistical literature interprets such methods as algorithms to obtain conditional probability estimates. For example, see [3, 6, 9, 11] for some related studies. This point of view allows people to show the consistency of various large margin methods: that is, in the large sample limit, the obtained classifiers achieve the optimal Bayes error rate. For example, see [1, 4, 7, 8, 10, 11]. The consistency of a learning method is certainly a very desirable property, and one may argue that a good classification method should be consistent in the large sample limit.
Although statistical properties of binary classification algorithms based on the risk minimization formulation (1) are quite well-understood due to many recent works such as
those mentioned above, there are much fewer studies on risk minimization based multi-category problems which generalize the binary large margin method (1). The complexity
of possible generalizations may be one reason. Another reason may be that one can always estimate the conditional probability for a multi-category problem using the binary
classification formulation (1) for each category, and then pick the category with the highest estimated conditional probability (or score).1 However, it is still useful to understand
whether there are more natural alternatives, and what kind of risk minimization formulation
which generalizes (1) can be used to yield consistent classifiers in the large sample limit.
An important step toward this direction has recently been taken in [5], where the authors
proposed a multi-category extension of the support vector machine that is Bayes consistent
(note that there were a number of earlier proposals that were not consistent). The purpose
of this paper is to generalize their investigation so as to include a much wider class of risk
minimization formulations that can lead to consistent classifiers in the infinity-sample limit.
We shall see that there is a rich structure in risk minimization based multi-category classification formulations. Multi-category large margin methods have started to draw more attention recently. For example, in [2], learning bounds for some multi-category convex risk
minimization methods were obtained, although the authors did not study possible choices
of Bayes consistent formulations.
2 Multi-category classification
We consider the following K-class classification problem: we would like to predict the label y ∈ {1, . . . , K} of an input vector x. In this paper, we only consider the simplest scenario with 0-1 classification loss: we have a loss of 0 for correct prediction, and loss of 1 for incorrect prediction.
In binary classification, the class label can be determined using the sign of a decision function. This can be generalized to the K class classification problem as follows: we consider K decision functions f_c(x) where c = 1, . . . , K, and we predict the label y of x as:

    T(f(x)) = arg max_{c∈{1,...,K}} f_c(x),    (2)

where we denote by f(x) the vector function f(x) = [f₁(x), . . . , f_K(x)].

Note that if two or more components of f achieve the same maximum value, then we may choose any of them as T(f). In this framework, f_c(x) is often regarded as a scoring function for category c that is correlated with how likely x belongs to category c (compared with the remaining K - 1 categories). The classification error is given by:

    ℓ(f) = 1 - E_X P(Y = T(f(X))|X).
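In code, the decision rule (2) and the empirical version of ℓ(f) amount to an argmax followed by a misclassification count; a minimal sketch under the assumed array conventions:

    import numpy as np

    # F is an n x K array with F[i, c-1] = f_c(x_i); labels y in {1,...,K}.
    def predict(F):
        return np.argmax(F, axis=1) + 1          # T(f(x)) = argmax_c f_c(x)

    def error_rate(F, y):
        return float(np.mean(predict(F) != y))   # empirical analogue of l(f)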
Note that only the relative strength of f_c compared with the alternatives is important. In particular, the decision rule given in (2) does not change when we add the same numerical quantity to each component of f(x). This allows us to impose one constraint on the vector f(x) which decreases the degree of freedom K of the K-component vector f(x) to K - 1.

¹This approach is often called one-versus-all or ranking in machine learning. Another main approach is to encode a multi-category classification problem into binary classification sub-problems. The consistency of such encoding schemes can be difficult to analyze, and we shall not discuss them.
For example, in the binary classification case, we can enforce f₁(x) + f₂(x) = 0, and hence f(x) can be represented as [f₁(x), -f₁(x)]. The decision rule in (2), which compares f₁(x) ≥ f₂(x), is equivalent to f₁(x) ≥ 0. This leads to the binary classification rule mentioned in the introduction.
In the multi-category case, one may also interpret the possible constraint on the vector function f, which reduces its degree of freedom from K to K - 1, based on the following reasoning. In many cases, we seek f_c(x) as a function of p(Y = c|x). Since we have a constraint Σ_{c=1}^K p(Y = c|x) = 1 (implying that the degree of freedom for p(Y = c|x) is K - 1), the degree of freedom for f is also K - 1 (instead of K). However, we shall point out that in the algorithms we formulate below, we may either enforce such a constraint that reduces the degree of freedom of f, or we do not impose any constraint, which keeps the degree of freedom of f to be K. The advantage of the latter is that it allows the computation of each f_c to be decoupled. It is thus much simpler both conceptually and numerically. Moreover, it directly handles multiple-label problems where we may assign each x to multiple labels of y ∈ {1, . . . , K}. In this scenario, we do not have a constraint.
In this paper, we consider an empirical risk minimization method to solve a multi-category problem, which is of the following general form:

    f̂_n = arg min_{f∈C_n} (1/n) Σ_{i=1}^n Ψ_{Y_i}(f(X_i)).    (3)
As we shall see later, this method is a natural generalization of the binary classification method (1). Note that one may consider an even more general form with Ψ_Y(f(X)) replaced by Ψ_Y(f(X), X), which we don't study in this paper.

From the standard learning theory, one can expect that with appropriately chosen C_n, the solution f̂_n of (3) approximately minimizes the true risk R(f̂) with respect to the unknown underlying distribution within the function class C_n,

    R(f) = E_{X,Y} Ψ_Y(f(X)) = E_X L(P(·|X), f(X)),    (4)

where P(·|X) = [P(Y = 1|X), . . . , P(Y = K|X)] is the conditional probability, and

    L(q, f) = Σ_{c=1}^K q_c Ψ_c(f).    (5)
In order to understand the large sample behavior of the algorithm based on solving (3), we first need to understand the behavior of a function f that approximately minimizes R(f). We introduce the following definition (also referred to as classification calibrated in [1]):

Definition 2.1 Consider Ψ_c(f) in (4). We say that the formulation is admissible (classification calibrated) on a closed set Ω ⊂ [-∞, ∞]^K if the following conditions hold: ∀c, Ψ_c(·): Ω → (-∞, ∞] is bounded below and continuous; ∩_c {f : Ψ_c(f) < ∞} is non-empty and dense in Ω; ∀q, if L(q, f*) = inf_f L(q, f), then f*_c = sup_k f*_k implies q_c = sup_k q_k.

Since we allow Ψ_c(f) = ∞, we use the convention that q_c Ψ_c(f) = 0 when q_c = 0 and Ψ_c(f) = ∞. The following result relates the approximate minimization of the Ψ risk to the approximate minimization of classification error:

Theorem 2.1 Let B be the set of all Borel measurable functions. For a closed set Ω ⊂ [-∞, ∞]^K, let B_Ω = {f ∈ B : ∀x, f(x) ∈ Ω}. If Ψ_c(·) is admissible on Ω, then for a Borel measurable distribution, R(f) → inf_{g∈B_Ω} R(g) implies ℓ(f) → inf_{g∈B} ℓ(g).
Proof Sketch. First we show that the admissibility implies that ∀ε > 0, ∃δ > 0 such that ∀q and x:

    inf {L(q, f) : f ∈ Ω, f_c = sup_k f_k, q_c ≤ sup_k q_k - ε} ≥ inf_{g∈Ω} L(q, g) + δ.    (6)

If (6) does not hold, then ∃ε > 0 and a sequence of (c^m, f^m, q^m) with f^m ∈ Ω such that f^m_{c^m} = sup_k f^m_k, q^m_{c^m} ≤ sup_k q^m_k - ε, and L(q^m, f^m) - inf_{g∈Ω} L(q^m, g) → 0. Taking a limit point of (c^m, f^m, q^m), and using the continuity of Ψ_c(·), we obtain a contradiction (technical details handling the infinity case are skipped). Therefore (6) must be valid.

Now we consider a vector function f(x) ∈ B_Ω. Let q(x) = P(·|x). Given X, if P(Y = T(q(X))|X) - P(Y = T(f(X))|X) ≥ ε, then equation (6) implies that L(q(X), f(X)) ≥ inf_{g∈Ω} L(q(X), g) + δ. Therefore

    ℓ(f) - inf_{g∈B} ℓ(g) = E_X [P(Y = T(q(X))|X) - P(Y = T(f(X))|X)]
    ≤ ε + E_X I(P(Y = T(q(X))|X) - P(Y = T(f(X))|X) > ε)
    ≤ ε + E_X [L_X(q(X), f(X)) - inf_{g∈B_Ω} L_X(q(X), g)] / δ
    = ε + [R(f) - inf_{g∈B_Ω} R(g)] / δ.

In the above derivation we use I to denote the indicator function. Since ε and δ are arbitrary, we obtain the theorem by letting ε → 0. □
Clearly, based on the above theorem, an admissible risk minimization formulation is suitable for multi-category classification problems. The classifier obtained from minimizing (3) can approach the Bayes error rate if we can show that with appropriately chosen function class C_n, approximate minimization of (3) implies approximate minimization of (4). Learning bounds of this form have been very well-studied in statistics and machine learning. For example, for large margin binary classification, such bounds can be found in [4, 7, 8, 10, 11, 1], where they were used to prove the consistency of various large margin methods. In order to achieve consistency, it is also necessary to take a sequence of function classes C_n (C₁ ⊂ C₂ ⊂ · · ·) such that ∪_n C_n is dense in the set of Borel measurable functions. The set C_n has the effect of regularization, which ensures that R(f̂_n) → inf_{f∈C_n} R(f) in probability. It follows that as n → ∞, R(f̂_n) → inf_{f∈B} R(f) in probability. Theorem 2.1 then implies that ℓ(f̂_n) → inf_{f∈B} ℓ(f) in probability.

The purpose of this paper is not to study similar learning bounds that relate approximate minimization of (3) to the approximate minimization of (4). See [2] for a recent investigation. We shall focus on the choices of Ψ that lead to admissible formulations. We pay special attention to the case that each Ψ_c(f) is a convex function of f, so that the resulting formulation becomes computationally more tractable. Instead of working with the general form of Ψ_c in (4), we focus on two specific choices listed in the next two sections.
3 Unconstrained formulations
We consider the unconstrained formulation with the following choice of Ψ:

    Ψ_c(f) = φ(f_c) + s(Σ_{k=1}^K t(f_k)),    (7)

where φ, s and t are appropriately chosen functions that are continuously differentiable. The first term, which has a relatively simple form, depends on the label c. The second term is independent of the label, and can be regarded as a normalization term. Note that this function is symmetric with respect to the components of f. This choice treats all potential classes equally. It is also possible to treat different classes differently (e.g. replacing φ(f_c) by φ_c(f_c)), which can be useful if we associate different classification loss to different kinds of errors.
3.1 Optimality equation and probability model
Using (7), the conditional true risk (5) can be written as:

    L(q, f) = Σ_{c=1}^K q_c φ(f_c) + s(Σ_{c=1}^K t(f_c)).

In the following, we study the property of the optimal vector f* that minimizes L(q, f) for a fixed q. Given q, the optimal solution f* of L(q, f) satisfies the following first order condition:

    q_c φ′(f*_c) + μ_{f*} t′(f*_c) = 0    (c = 1, . . . , K),    (8)

where the quantity μ_{f*} = s′(Σ_{k=1}^K t(f*_k)) is independent of c.

Clearly this equation relates q_c to f*_c for each component c. The relationship of q and f* defined by (8) can be regarded as the (infinite sample-size) probability model associated with the learning method (3) with Ψ given by (7).
The following result presents a simple criterion to check admissibility. We skip the proof for simplicity. Most of our examples satisfy the condition.

Proposition 3.1 Consider (7). Assume Ψ_c(f) is continuous on [-∞, ∞]^K and bounded below. If s′(u) ≥ 0 and, ∀p > 0, p φ′(f) + t′(f) = 0 has a unique solution f_p that is an increasing function of p, then the formulation is admissible.

If s(u) = u, the condition ∀p > 0 in Proposition 3.1 can be replaced by ∀p ∈ (0, 1).
3.2 Decoupled formulations
We let s(u) = u in (7). The optimality condition (8) becomes:

    q_c φ′(f*_c) + t′(f*_c) = 0    (c = 1, . . . , K).    (9)
This means that we have K decoupled equalities, one for each f_c. This is the simplest and, in the author's opinion, the most interesting formulation. Since the estimation problem in (3) is also decoupled into K separate equations, one for each component of f̂_n, this class of methods is computationally relatively simple and easy to parallelize. Although this method seems to be preferable for multi-category problems, it is not the most efficient way for the two-class problem (if we want to treat the two classes in a symmetric manner) since we have to solve two separate equations. We only need to deal with one equation in (1) due to the fact that an effective constraint f₁ + f₂ = 0 can be used to reduce the number of equations. This variable elimination has little impact if there are many categories.

In the following, we list some examples of multi-category risk minimization formulations. They all satisfy the admissibility condition in Proposition 3.1. We focus on the relationship of the optimal minimizer f*(q) and the conditional probability q. For simplicity, we focus on the choice φ(u) = -u.
3.2.1 φ(u) = -u and t(u) = e^u

We obtain the following probability model: q_c = e^{f*_c}. This formulation is closely related to the maximum-likelihood estimate with conditional model q_c = e^{f_c} / Σ_{k=1}^K e^{f_k} (logistic regression). In particular, if we choose a function class such that the normalization condition Σ_{k=1}^K e^{f_k} = 1 holds, then the two formulations are identical. However, they become different when we do not impose such a normalization condition.

Another very important and closely related formulation is the choice of φ(u) = -ln u and t(u) = u. This is an extension of the maximum-likelihood estimate with probability model q_c = f_c. The resulting method is identical to maximum-likelihood if we choose our function class such that Σ_k f_k = 1. However, the formulation also allows us to use function classes that do not satisfy the normalization constraint Σ_k f_k = 1. Therefore this method is more flexible.
3.2.2 φ(u) = -u and t(u) = ln(1 + e^u)

This version uses the binary logistic regression loss, and we have the following probability model: q_c = (1 + e^{-f*_c})^{-1}. Again this is an unnormalized model.
3.2.3 φ(u) = -u and t(u) = (1/p)|u|^p  (p > 1)

We obtain the following probability model: q_c = sign(f*_c) |f*_c|^{p-1}. This means that at the solution, f*_c ≥ 0. One may modify it such that we allow f*_c ≤ 0 to model the conditional probability q_c = 0.
3.2.4 φ(u) = -u and t(u) = (1/p) max(u, 0)^p  (p > 1)

In this probability model, we have the following relationship: q_c = max(f*_c, 0)^{p-1}. The equation implies that we allow f*_c ≤ 0 to model the conditional probability q_c = 0. Therefore, with a fixed function class, this model is more powerful than the previous one. However, at the optimal solution, f*_c ≤ 1. This requirement can be further alleviated with the following modification.
3.2.5 φ(u) = -u and t(u) = (1/p) min(max(u, 0)^p, p(u - 1) + 1)  (p > 1)

In this probability model, we have the following relationship at the exact solution: q_c = min(max(f̂_c, 0), 1)^{p-1}. Clearly this model is more powerful than the previous model since the function value f*_c ≥ 1 can be used to model q_c = 1.
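The models in 3.2.1-3.2.5 each map a fitted score to a conditional probability estimate. The following sketch collects these maps; applied to an estimated f̂_c(x), the relationships hold exactly only at the population minimizer, so the outputs are approximate (and unnormalized) probability estimates:

    import numpy as np

    def q_exp(f):               # 3.2.1: t(u) = e^u,       q_c = exp(f_c)
        return np.exp(f)

    def q_logistic(f):          # 3.2.2: t(u) = ln(1+e^u), q_c = 1/(1+e^-f_c)
        return 1.0 / (1.0 + np.exp(-f))

    def q_power(f, p=2.0):      # 3.2.3: t(u) = |u|^p/p,   q_c = sign(f)|f|^(p-1)
        return np.sign(f) * np.abs(f) ** (p - 1.0)

    def q_truncated(f, p=2.0):  # 3.2.5: q_c = min(max(f,0),1)^(p-1)
        return np.minimum(np.maximum(f, 0.0), 1.0) ** (p - 1.0)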
3.3 Coupled formulations
In the coupled formulation with s(u) ≠ u, the probability model can be normalized in a certain way. We list a few examples.
3.3.1 φ(u) = -u, t(u) = e^u, and s(u) = ln(u)

This is the standard logistic regression model. The probability model is:

    q_c(x) = exp(f*_c(x)) (Σ_{c=1}^K exp(f*_c(x)))^{-1}.
The right hand side is always normalized (sums up to 1). Note that the model is not continuous at infinities, and thus not admissible in our definition. However, we may consider the region Ω = {f : sup_k f_k = 0}, and it is easy to check that this model is admissible on Ω. Let f′_c = f_c - sup_k f_k, so that f′ ∈ Ω; then f′ has the same decision rule as f and R(f) = R(f′). Therefore Theorem 2.1 implies that R(f) → inf_{g∈B} R(g) implies ℓ(f) → inf_{g∈B} ℓ(g).
3.3.2 φ(u) = -u, t(u) = |u|^p, and s(u) = (1/p′)|u|^{p′/p}  (p, p′ > 1)

The probability model is:

    q_c(x) = (Σ_{k=1}^K |f*_k(x)|^p)^{(p′-p)/p} sign(f*_c(x)) |f*_c(x)|^{p-1}.

We may replace t(u) by t(u) = max(0, u)^p, and the probability model becomes:

    q_c(x) = (Σ_{k=1}^K max(f*_k(x), 0)^p)^{(p′-p)/p} max(f*_c(x), 0)^{p-1}.

These formulations do not seem to have advantages over the decoupled counterparts. Note that if we let p′ → 1, then the sum of the p/(p-1)-th power of the right hand side → 1. In a way, this means that the model is normalized in the limit of p′ → 1.
4 Constrained formulations
As pointed out, one may impose constraints on possible choices of f. We may impose such a condition when we specify the function class C_n. However, for clarity, we shall directly impose a condition into our formulation. If we impose a constraint into (7), then its effect is rather similar to that of the second term in (7). In this section, we consider a direct extension of the binary large-margin method (1) to the multi-category case. The choice given below is motivated by [5], where an extension of SVM was proposed. We use a risk formulation that is different from (7), and for simplicity, we will consider a linear equality constraint only:

    Ψ_c(f) = Σ_{k=1, k≠c}^K φ(-f_k),  s.t. f ∈ Ω,    (10)

where we define Ω as:

    Ω = {f : Σ_{k=1}^K f_k = 0} ∪ {f : sup_k f_k = ∞}.
We may interpret the added constraint as a restriction on the function class C_n in (3) such that every f ∈ C_n satisfies the constraint. Note that with K = 2, this leads to the usual binary large-margin method. Using (10), the conditional true risk (5) can be written as:

L(q, f) = Σ_{c=1}^K (1 − q_c) φ(−f_c),   s.t. f ∈ Ω.   (11)
The following result provides a simple way to check the admissibility of (10).

Proposition 4.1 If φ is a convex function which is bounded below and φ′(0) < 0, then (10) is admissible on Ω.
Proof Sketch. The continuity condition is straightforward to verify. We may also assume that φ(·) ≥ 0 without loss of generality. Now let f achieve the minimum of L(q, ·). If f_c = ∞, then it is clear that q_c = 1 and thus q_k = 0 for k ≠ c. This implies that for k ≠ c, φ(−f_k) = inf_f φ(−f), and thus f_k < 0. If f_c = sup_k f_k < ∞, then the constraint implies f_c ≥ 0. It is easy to see that ∀k, q_c ≥ q_k, since otherwise we must have φ(−f_k) > φ(−f_c), and thus φ′(−f_k) > 0 and φ′(−f_c) < 0, implying that with sufficiently small δ > 0, φ(−(f_k + δ)) < φ(−f_k) and φ(−(f_c − δ)) < φ(−f_c). A contradiction. □
Using the above criterion, we can convert any admissible convex φ for the binary formulation (1) into an admissible multi-category classification formulation (10).
In [5] the special case of SVM (with loss function φ(u) = max(0, 1 − u)) was studied. The
authors demonstrated the admissibility by direct calculation, although no results similar to
Theorem 2.1 were established. Such a result is needed to prove consistency. The treatment
presented here generalizes their study. Note that for the constrained formulation, it is more
difficult to relate fc at the optimal solution to a probability model, since such a model will
have a much more complicated form compared with the unconstrained counterpart.
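For illustration, here is a hedged sketch (ours, with made-up numbers) of the constrained risk (10) instantiated with the SVM loss φ(u) = max(0, 1 − u) studied in [5]; the mean-subtraction is one simple way to enforce the linear equality constraint Σ_k f_k = 0.

```python
import numpy as np

def phi(u):
    # SVM hinge loss phi(u) = max(0, 1 - u)
    return np.maximum(0.0, 1.0 - u)

def psi(f, c):
    # Psi_c(f) = sum_{k != c} phi(-f_k), with f projected onto sum_k f_k = 0
    f = f - f.mean()
    return sum(phi(-f[k]) for k in range(len(f)) if k != c)

print(psi(np.array([2.0, -1.0, -1.0]), c=0))   # 0.0: the true class dominates
print(psi(np.array([-1.0, 2.0, -1.0]), c=0))   # 3.0: another class wins, large loss
```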
5 Conclusion
In this paper we proposed a family of risk minimization methods for multi-category classification problems, which are natural extensions of binary large margin classification methods. We established admissibility conditions that ensure the consistency of the obtained
classifiers in the large sample limit. Two specific forms of risk minimization were proposed and examples were given to study the induced probability models. As an implication
of this work, we see that it is possible to obtain consistent (conditional) density estimation
using various non-maximum likelihood estimation methods. One advantage of some of the
newly proposed methods is that they allow us to model zero density directly. Note that for
the maximum-likelihood method, near zero density may cause serious robustness problems
at least in theory.
References
[1] P.L. Bartlett, M.I. Jordan, and J.D. McAuliffe. Convexity, classification, and risk bounds. Technical Report 638, Statistics Department, University of California, Berkeley, 2003.
[2] Ilya Desyatnikov and Ron Meir. Data-dependent bounds for multi-category classification based on convex losses. In COLT, 2003.
[3] J. Friedman, T. Hastie, and R. Tibshirani. Additive logistic regression: A statistical view of boosting. The Annals of Statistics, 28(2):337-407, 2000. With discussion.
[4] W. Jiang. Process consistency for AdaBoost. The Annals of Statistics, 32, 2004. With discussion.
[5] Y. Lee, Y. Lin, and G. Wahba. Multicategory support vector machines, theory, and application to the classification of microarray data and satellite radiance data. Journal of the American Statistical Association, 2002. Accepted.
[6] Yi Lin. Support vector machines and the Bayes rule in classification. Data Mining and Knowledge Discovery, pages 259-275, 2002.
[7] G. Lugosi and N. Vayatis. On the Bayes-risk consistency of regularized boosting methods. The Annals of Statistics, 32, 2004. With discussion.
[8] Shie Mannor, Ron Meir, and Tong Zhang. Greedy algorithms for classification - consistency, convergence rates, and adaptivity. Journal of Machine Learning Research, 4:713-741, 2003.
[9] Robert E. Schapire and Yoram Singer. Improved boosting algorithms using confidence-rated predictions. Machine Learning, 37:297-336, 1999.
[10] Ingo Steinwart. Support vector machines are universally consistent. J. Complexity, 18:768-791, 2002.
[11] Tong Zhang. Statistical behavior and consistency of classification methods based on convex risk minimization. The Annals of Statistics, 32, 2004. With discussion.
1,598 | 2,452 | Learning a world model and planning with a
self-organizing, dynamic neural system
Marc Toussaint
Institut f?ur Neuroinformatik
Ruhr-Universität Bochum, ND 04
44780 Bochum, Germany
[email protected]
Abstract
We present a connectionist architecture that can learn a model of the
relations between perceptions and actions and use this model for behavior planning. State representations are learned with a growing self-organizing layer which is directly coupled to a perception and a motor
layer. Knowledge about possible state transitions is encoded in the lateral connectivity. Motor signals modulate this lateral connectivity and
a dynamic field on the layer organizes a planning process. All mechanisms are local and adaptation is based on Hebbian ideas. The model is
continuous in the action, perception, and time domain.
1 Introduction
Planning of behavior requires some knowledge about the consequences of actions in a
given environment. A world model captures such knowledge. There is clear evidence that
nervous systems use such internal models to perform predictive motor control, imagery,
inference, and planning in a way that involves a simulation of actions and their perceptual
implications [1, 2]. However, the level of abstraction, the representation, on which such
simulation occurs is hardly the level of physical coordinates. A tempting hypothesis is
that the representations the brain uses for reasoning and planning are particularly designed
(by adaptation or evolution) for just this purpose. To address such ideas we first need
a basic model for how a connectionist architecture can encode a world model and how
self-organization of inherent representations is possible.
In the field of machine learning, world models are a standard approach to handle behavior organization problems (for a comparison of model-based approaches to the classical,
model-free Reinforcement Learning see, e.g., [3]). The basic idea of using neural networks
to model the environment was given in [4, 5]. Our approach for a connectionist world
model (CWM) is functionally similar to existing Machine Learning approaches with self-organizing state space models [6, 7]. It is able to grow neural representations for different
world states and to learn the implications of actions in terms of state transitions. It differs
though from classical approaches in some crucial points:
- The model is continuous in the action, the perception, as well as the time domain.
Figure 1: Schema of the CWM architecture: a central layer of units i, j with activations x_i and lateral weights w_ji, coupled to the perceptive layer s via the kernels k_s(s_j, s) and to the motor layer a via the kernels k_a(a_ji, a).
- All mechanisms are based on local interactions. The adaptation mechanisms are largely derived from the idea of Hebbian plasticity. E.g., the lateral connectivity, which encodes knowledge about possible state transitions, is adapted by a variant of the temporal Hebb rule and allows local adaptation of the world model to local world changes.
- The coupling to the motor system is fully integrated in the architecture via a mechanism incorporating modulating synapses (comparable to shunting mechanisms).
- The two dynamic processes on the CWM, the "tracking" process estimating the current state and the planning process (similar to Dynamic Programming), will be realized by activation dynamics on the architecture, incorporating in particular lateral interactions, inspired by neural fields [8].
The outline of the paper is as follows: In the next section we describe our architecture,
the dynamics of activation and the couplings to perception and motor layers. In section 3
we introduce a dynamic process that generates, as an attractor, a value field over the layer
which is comparable to a state value function estimating the expected future return and allows for goal-oriented behavior organization. The self-organization process and adaptation
mechanisms are described in section 4. We demonstrate the features of the model on a
maze problem in section 5 and finally discuss the results and the model in general terms.
2 The model
The core of the connectionist world model (CWM) is a neural layer which is coupled to a
perceptual layer and a motor layer, see figure 1. Let us enumerate the units of the central
layer by i = 1, .., N . Lateral connections within the P
layer may exist and we denote a
connection from the i-th to j-th unit by (ji). E.g., ? (ji) ? means ?summing over all
existing connections (ji)?. To every unit we associate an activation xj ? R which is
governed by the dynamics
X
?x x? j = ?xj + ks (sj , s) + ?
ka (aji , a) wji xi ,
(1)
(ji)
which we will explain in detail in the following. First of all, xi are the time-dependent
activations and the dot-notation ?x x? = F (x) means a time derivative which we algorithmically implemented by a Euler integration step x(t) = x(t ? 1) + ?1x F (x(t ? 1)).
The first term in (1) induces an exponential relaxation while the second and third terms are
the inputs. ks (sj , s) is the forward excitation that unit j receives from the perceptive layer.
Here, sj is the codebook vector (receptive field) of unit j onto the perception layer which
is compared to the current stimulus s via the kernel function ks . We will choose Gaussian
kernels as it is the case, e.g., for typical Radial Basis function networks.
The third term, Σ_{(ji)} k_a(a_ji, a) w_ji x_i, describes the lateral interaction on the central layer. Namely, unit j receives lateral input from unit i iff there exists a connection (ji) from i to j. This lateral input is weighted by the connection's synaptic strength w_ji. Additionally
there is another term entering multiplicatively into this lateral interaction: Lateral inputs
are modulated depending on the current motor activation. We chose a modulation of the
following kind: To every existing connection (ji) we associate a codebook vector aji onto
the motor layer which is compared to the current motor activity a via a Gaussian kernel
function ka . Due to the multiplicative coupling, a connection contributes to lateral inputs
only when the current motor activity "matches" the codebook vector of this connection.
The modulation of information transmission by multiplicative or divisive interactions is a
fundamental principle in biological neural systems [9]. One example is shunting inhibition, where inhibitory synapses attach to regions of the dendritic tree near to the soma and thereby modulate the transmission of the dendritic input [10]. In our architecture, a shunting synapse, receiving input from the motor layer, might attach to only one branch of a (lateral) dendritic tree and thereby multiplicatively modulate the lateral inputs summed up
at this subtree.
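As a concrete reading of equation (1), the following Python sketch (ours; shapes, parameter values and the connection-list representation are assumptions, and the Gaussian kernels are left unnormalized for readability) performs one Euler step of the tracking dynamics.

```python
import numpy as np

def k_gauss(c, u, two_sigma_sq):
    # unnormalized Gaussian kernel comparing codebook vector c to current input u
    return np.exp(-np.sum((c - u) ** 2) / two_sigma_sq)

def step_x(x, S, conns, s, a, tau_x=2.0, two_ss=0.01, two_sa=0.5):
    # conns: list of (j, i, w_ji, a_ji); unit i excites j, gated by the motor kernel
    lateral = np.zeros_like(x)
    for (j, i, w_ji, a_ji) in conns:
        lateral[j] += k_gauss(a_ji, a, two_sa) * w_ji * x[i]
    feed = np.array([k_gauss(S[j], s, two_ss) for j in range(len(x))])
    dx = -x + feed + lateral                 # right-hand side of Eq. (1)
    return x + dx / tau_x                    # Euler step x(t) = x(t-1) + dx/tau_x

# toy usage: 3 units with 2-D receptive fields, one gated connection 0 -> 1
S = np.array([[0.0, 0.0], [0.1, 0.0], [0.2, 0.0]])
conns = [(1, 0, 1.0, np.array([1.0, 0.0]))]
x = np.zeros(3)
x = step_x(x, S, conns, s=np.array([0.05, 0.0]), a=np.array([1.0, 0.0]))
print(x)
```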
For the following it is helpful if we briefly discuss a certain relation between equation (1)
and a classical probabilistic approach. Let us assume normalized kernel functions
?(sj ? s)2
1
?(aji ? a)2
1
?
,
.
exp
k
(a
,
a)
=
exp
ks (sj , s) = ?
a
ji
2?s2
2?a2
2? ?s
2? ?a
These kernel functions can directly be interpreted as probabilities: k_s(s_j, s) represents the probability P(s|j) that the stimulus is s if j is active, and k_a(a_ji, a) the probability P(a|j, i) that the action is a if a transition i → j occurred. As for typical hidden Markov models we may derive the prior probability distribution P(j|a), given the action:

P(j|a, i) = P(a|j, i) P(j|i) / P(a|i) = k_a(a_ji, a) P(j|i) / P(a|i),

P(j|a) = Σ_i k_a(a_ji, a) (P(j|i) / P(a|i)) P(i).

P(a|i) can be computed by normalizing P(a|j, i) P(j|i) over j such that Σ_j P(j|a, i) = 1.
What we would like to point out here is that in equation (1), the lateral input Σ_{(ji)} k_a(a_ji, a) w_ji x_i can be compared to the prior P(j|a) under the assumption that x_i is proportional to P(i) and if we have an adaptation mechanism for w_ji which converges to a value proportional to P(j|i) and which also ensures normalization, i.e., Σ_j k_a(a_ji, a) w_ji = 1 for all i and a. This insight will help to judge some details of the next two sections. The probabilistic interpretation can be further exploited, e.g., comparing the input of a unit j (or, in the quasi-stationary case, x_j itself) to the posterior and deriving theoretically grounded adaptation mechanisms. But this is not within the scope of this paper.
3 The dynamics of planning
To organize goal-oriented behavior we assume that, in parallel to the activation dynamics (1), there exists a second dynamic process which can be motivated from classical approaches to Reinforcement Learning [11, 12]. Recall the Bellman equation
V^π(i) = Σ_a π(a|i) Σ_j P(j|i, a) [r(j) + γ V^π(j)],   (2)

yielded by the expectation V^π(i) of the discounted future return R(t) = Σ_{τ=1}^∞ γ^{τ−1} ρ(t+τ), which yields R(t) = ρ(t+1) + γ R(t+1), when situated in state i. Here, γ is the discount factor and we presumed that the received rewards ρ(t) actually depend only on the state and thus enter equation (2) only in terms of the reward function r(i) (we neglect here that rewards may directly depend on the action). Behavior is described by a stochastic policy π(a|i), the probability of executing action a in state i. Knowing the property (2) of V^π it is straightforward to define a recursion algorithm for an approximation V of V^π such that V converges to V^π. This recursion algorithm is called Value Iteration and reads

τ_v V̇(i) = −V(i) + Σ_a π(a|i) Σ_j P(j|i, a) [r(j) + γ V(j)],   (3)

with a "reciprocal learning rate" or time constant τ_v. Note that (2) is the fixed point equation of (3).
The practical meaning of the state-value function V is that it quantifies how desirable and
promising it is to reach a state i, also accounting for future rewards to be expected. In
particular, if one knows the current state i it is a simple and efficient rule of behavior to
choose that action a that will lead to the neighbor state j with maximal V (j) (the greedy
policy). In that sense, V (i) provides a smooth gradient towards desirable goals. Note
though that direct Value Iteration presumes that the state and action spaces are known and
finite, and that the current state and the world model P (j|i, a) is known.
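Under the greedy policy, the relaxation (3) settles on the familiar max-form Value Iteration update; the sketch below (ours, on a made-up two-state, two-action model) iterates that update.

```python
import numpy as np

def value_iteration(P, r, gamma=0.8, iters=200):
    # P[a, i, j] = P(j | i, a); greedy policy: maximize over actions a
    V = np.zeros(r.shape[0])
    for _ in range(iters):
        # Q[a, i] = sum_j P(j|i,a) [r(j) + gamma V(j)]
        Q = np.einsum('aij,j->ai', P, r + gamma * V)
        V = Q.max(axis=0)
    return V

P = np.array([[[0.9, 0.1], [0.1, 0.9]],     # action 0
              [[0.5, 0.5], [0.5, 0.5]]])    # action 1
r = np.array([0.0, 1.0])
print(value_iteration(P, r))
```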
How can we transfer these classical ideas to our model? We suppose that the CWM is
given a goal stimulus g from outside, i.e., it is given the command to reach a world state
that corresponds to the stimulus g. This stimulus induces a reward excitation ri = ks (si , g)
for each unit i. Now, besides the activations xi , we introduce another field over the CWM,
the value field vi , which is in analogy to the state-value function V (i). The dynamics is
τ_v v̇_i = −v_i + r_i + γ max_{(ji)} (w_ji v_j),   (4)
and is well comparable to (3): One difference is that v_i estimates the "current-plus-future" reward ρ(t) + γR(t) rather than the future reward only; in the upper notation this corresponds to the value iteration τ_v V̇(i) = −V(i) + r(i) + Σ_a π(a|i) Σ_j P(j|i, a) γ V(j). As it is commonly done for Value Iteration, we assumed π to be the greedy policy. More precisely, we considered only that action (i.e., that connection (ji)) that leads to the neighbor state j with maximal value w_ji v_j. In effect, the summations over a as well as over j can be replaced by a maximization over (ji). Finally we replaced the probability factor P(j|i, a) by w_ji; we will see in the next section how w_ji is learned and what it will converge to.
In practice, the value field will relax quickly to its fixed point v_i* = r_i + γ max_{(ji)} (w_ji v_j*) and stay there if the goal does not change and if the world model is not re-adapted (see the experiments). The quasi-stationary value field v_i together with the current (typically non-stationary) activations x_i allows the system to generate a motor signal that guides towards the goal. More precisely, the value field v_i determines for every unit i the "best" neighbor unit k_i = argmax_j w_ji v_j. The output motor signal is then the activation average

a = Σ_i x_i a_{k_i i}   (5)
of the motor codebook vectors a_{k_i i} that have been learned for the corresponding connections. Hence, the information flows between the central layer and the motor system in both directions: In the "tracking" process as given by equation (1) the information flows from
the motor layer to the central layer: Motor signals activate the corresponding connections
and cause lateral, predictive excitations. In the action selection process as given by equation (5) the signals flow from the central layer back to the motor layer to induce the motor
activity that should turn predictions into reality.
Depending on the specific problem and the representation of motor commands on the motor
layer, a post-processing of the motor signal a, e.g. a competition between contradictory
motor units, might be necessary. In our experiments we will have two motor units and will
always normalize the 2D vector a to unit length.
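Putting (4) and (5) together, a minimal planning sketch (ours; it iterates the fixed-point form v_i = r_i + γ max_{(ji)} w_ji v_j rather than the leaky dynamics, and reuses the (j, i, w_ji, a_ji) connection convention from the earlier sketch) looks as follows.

```python
import numpy as np

def relax_value(r, conns, gamma=0.8, iters=100):
    v = np.zeros_like(r, dtype=float)
    for _ in range(iters):
        best = np.zeros_like(v)
        for (j, i, w_ji, _) in conns:
            best[i] = max(best[i], w_ji * v[j])   # max over outgoing connections of i
        v = r + gamma * best
    return v

def motor_signal(x, v, conns, dim=2):
    a = np.zeros(dim)
    for i in set(c[1] for c in conns):
        out = [(j, w, a_ji) for (j, i2, w, a_ji) in conns if i2 == i]
        j, w, a_best = max(out, key=lambda t: t[1] * v[t[0]])  # k_i = argmax_j w_ji v_j
        a += x[i] * a_best                        # Eq. (5): activation-weighted average
    return a

# toy usage: a chain 0 -> 1 -> 2 where unit 2 matches the goal stimulus
r = np.array([0.0, 0.0, 1.0])
conns = [(1, 0, 1.0, np.array([1.0, 0.0])),
         (2, 1, 1.0, np.array([1.0, 0.0]))]
v = relax_value(r, conns)
print(v, motor_signal(np.array([1.0, 0.0, 0.0]), v, conns))
```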
4 Self-organization and adaptation
The self-organization process of the central layer combines techniques from standard self-organizing maps [13, 14] and their extensions w.r.t. growing representations [15, 16] and
the learning of temporal dependencies in lateral connections [17, 18]. The free variables
of a CWM subject to adaptation are (1) the number of neurons and the lateral connectivity
itself, (2) the codebook vectors si and aji to the perceptive and motor layers, respectively,
and (3) the weights wji of the lateral connections. The adaptation mechanisms we propose are based on three general principles: (1) the addition of units for representation of
novel states (novelty), (2) the fine tuning of the codebook vectors of units and connections (plasticity), and (3) the adaptation of lateral connections in favor of better prediction
performance (prediction).
Novelty. Mechanisms similar to those of FuzzyARTMAPs [15] or Growing Neural Gas
[16] account for the insertion of new units when novelty is detected. We detect novelty
in a straight-forward manner, namely when the difference between the actual perception
and the best matching unit becomes too large. To make this detection more robust, we
use a low-pass filter (leaky integrator). At a given time, let z be the best matching unit,
z = argmax_i x_i. For this unit we integrate the error measure e_z,

τ_e ė_z = −e_z + (1 − k_s(s_z, s)).

We normalize k_s(s_z, s) such that it equals 1 in the perfect matching case when s_z = s. Whenever this error measure exceeds a threshold called vigilance, e_z > ν, ν ∈ [0, 1], we generate a new unit j with the codebook vector equal to the current perception, s_j = s, and a connection from the last best matching unit z̄ with the codebook vector equal to the current motor signal, a_{j z̄} = a. The errors of both the new and the old unit are reset to zero, e_z ← 0, e_j = 0.
Plasticity. We use simple Hebbian plasticity to fine tune the representations of existing
units and connections. Over time, the receptive fields of units and connections become
more and more similar to the average stimuli that activated them. We use the update rules
τ_s ṡ_z = −s_z + s,   τ_a ȧ_{z z̄} = −a_{z z̄} + a,

with learning time constants τ_s and τ_a.
Prediction and a temporal Hebb rule. Although perfect prediction is not the actual objective of the CWM, the predictive power is a measure of the correctness of the learned
world model and good predictive power is one-to-one with good behavior planning. The
first and simple mechanism to adapt the predictive power is to grow a new lateral connection between two successive best matching units z̄ and z if it does not yet exist. The new connection is initialized with w_{z z̄} = 1 and a_{z z̄} = a. The second, more interesting mechanism addresses the adaptation of w_ji based on new experiences and can be motivated as
follows: The temporal Hebb rule strengthens a synapse if the pre- and post-synaptic neurons spike in sequence, depending on the inter-spike-interval, and is supposed to roughly
describe LTP and LTD (see, e.g.,[19]). In a population code model, this corresponds to a
measure of correlation between the pre-synaptic and the delayed post-synaptic activity. In
our case we additionally have to account for the action-dependence of a lateral connection.
We do so by considering the term ka (aji , a) xi instead of only the pre-synaptic activity.
As a measure of temporal correlation we choose to relate this term to the derivative ẋ_j of the post-synaptic unit instead of its delayed activation; this saves us from specifying an ad-hoc "typical" delay and directly reflects that, in equation (1), lateral inputs relate to the derivative of x_j. Hence, we consider the product ẋ_j k_a(a_ji, a) x_i as the measure of correlation. Our concrete implementation is a robust version of this idea:

τ_w ẇ_ji = ρ_ji [c_ji − w_ji ρ_ji],   where
τ_ρ ċ_ji = −c_ji + ẋ_j k_a(a_ji, a) x_i,   τ_ρ ρ̇_ji = −ρ_ji + k_a(a_ji, a) x_i.

Here, c_ji and ρ_ji are simply low-pass filters of ẋ_j k_a(a_ji, a) x_i and of k_a(a_ji, a) x_i. The term w_ji ρ_ji ensures convergence (assuming quasi-static c_ji and ρ_ji) of w_ji towards c_ji/ρ_ji. The time scale of adaptation is modulated by the recent activity ρ_ji of the connection.
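In code, the three coupled update rules amount to two leaky integrators and a weight relaxation; a sketch (ours, in per-connection scalar form, Euler steps with unit time step):

```python
# Hedged sketch of the weight adaptation for one connection (ji).
# c tracks xdot_j * k_a * x_i, rho tracks k_a * x_i, and w relaxes towards c/rho,
# faster for recently active connections (large rho).
def adapt_w(w, c, rho, x_dot_j, gate_x_i, tau_w=10.0, tau_rho=100.0):
    c += (-c + x_dot_j * gate_x_i) / tau_rho
    rho += (-rho + gate_x_i) / tau_rho
    w += rho * (c - w * rho) / tau_w          # fixed point: w = c / rho
    return w, c, rho

w, c, rho = 1.0, 0.0, 0.0
w, c, rho = adapt_w(w, c, rho, x_dot_j=0.2, gate_x_i=0.5)
print(w, c, rho)
```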
5 Experiments
To demonstrate the functionality of the CWM we consider a simple maze problem. The
parameters we used are:

τ_x = 2,  ν = 0.1,  2σ_s² = 0.01,  2σ_a² = 0.5,  τ_v = 2,  γ = 0.8,  τ_e = 10,  τ_s = 20,  τ_a = 5,  τ_w = 10,  τ_ρ = 100.
Figure 2a displays the geometry of the maze. The "agent" is allowed to move continuously in this maze. The motor signal is 2-dimensional and encodes the forces f in x- and y-directions; the agent has a momentum and friction according to ẍ = 0.2 (f − ẋ). As a stimulus, the CWM is given the 2D position x.
Figure 2a also displays the (lateral) topology of the central layer after 30 000 time steps of
self-organization, after which the system becomes quasi-stationary. The model is learned
from scratch, initialized with one random unit. During this first phase, behavior planning
is switched off and the maze is explored with a random walk that changes its direction only
with probability 0.1 at a time. In the illustration, the positions of the units correspond to
the codebook vectors that have been learned. The directedness and the codebook vectors
of the connections cannot be displayed.
After the self-organization phase we switched on behavior planning. A goal stimulus corresponding to a random position in the maze is given and changed every time the agent
reaches the goal. Generally, the agent has no problem finding a path to the goal. Figure 2b
already displays a more interesting example. The agent has reached goal A and now seeks
for goal B. However, we blocked the trespass 1. Starting at A the agent moves normally
until it reaches the blockade. It stays there and moves slowly up and down in front of the blockade for a while; this while is of the order of the low-pass filter time scale τ_ρ. During
this time, the lateral weights of the connections pointing to the left are depressed and after
about 150 time steps, this change of weights has enough influence on the value field dynamics (4) to let the agent choose the way around the bottom to goal B. Figure 2c displays
the next scene: Starting at B, the agent tries to reach goal C again via the blockade 1 (the
previous adaptation depressed only the connections from right to left). Again, it reaches the
blockade, stays there for a while, and then takes the way around to goal C. Figures 2d and
2e repeat this experiment with blockade 2. Starting at D, the agent reaches the blockade
2 and eventually chooses the way around to goal E. Then, seeking for goal F, the agent
reaches the blockade first from the left, thereafter from the bottom, then from the right,
then it tries from the bottom again, and finally learned that none of these paths are valid
anymore and chooses the way all around to goal F. Figures 2f shows that, once the world
model has re-adapted to account for these blockades, the agent will not forget about them:
Here, moving from G to H, it does not try to trespass block 2.
Figure 2: The CWM on a maze problem: (a) the outcome of self-organization; (b-c) agent movements from goal A to B to C; here, the trespass 1 was blocked, requiring re-adaptation of the world model; (d-f) agent movements that demonstrate adaptation to a second blockade. Please see the text for more explanations.
The reader is encouraged to also refer to the movies of these experiments, deposited
at www.marc-toussaint.net/03-cwm/, which visualize much better the dynamics of self-organization, the planning behavior, the dynamics of the value field, and the world model re-adaptation.
6 Discussion
The goal of this research is an understanding of how neural systems may learn and represent
a world model that allows for the generation of goal-directed behavioral sequences. In our
approach for a connectionist world model, a perceptual and a motor layer are coupled to self-organize a model of the perceptual implications of motor activity. A dynamical value field on the learned world model organizes behavior planning, a method in principle borrowed
from classical Value Iteration. A major feature of our model is its adaptability. The state
space model is developed in a self-organizing way and small world changes require only
little re-adaptation of the CWM. The system is continuous in the action, perception, and
time domain and all dynamics and adaptivity rely on local interactions only.
Future work will include the more rigorous probabilistic interpretations of CWMs which
we already indicated in section 2. Another, rather straight-forward extension will be to replace random-walk exploration by more directed, information seeking exploration methods
as they have already been developed for classical world models [20, 21].
Acknowledgments
I acknowledge support from the German Bundesministerium für Bildung und Forschung
(BMBF).
References
[1] G. Hesslow. Conscious thought as simulation of behaviour and perception. Trends in Cognitive Sciences, 6:242-247, 2002.
[2] Rick Grush. The emulation theory of representation: motor control, imagery, and perception. Behavioral and Brain Sciences, 2003. To appear.
[3] M.D. Majors and R.J. Richards. Comparing model-free and model-based reinforcement learning. Cambridge University Engineering Department Technical Report CUED/F-INFENG/TR.286, 1997.
[4] D.E. Rumelhart, P. Smolensky, J.L. McClelland, and G.E. Hinton. Schemata and sequential thought processes in PDP models. In D.E. Rumelhart and J.L. McClelland, editors, Parallel Distributed Processing, volume 2, pages 7-57. MIT Press, Cambridge, 1986.
[5] M. Jordan and D. Rumelhart. Forward models: Supervised learning with a distal teacher. Cognitive Science, 16:307-354, 1992.
[6] B. Kröse and M. Eecen. A self-organizing representation of sensor space for mobile robot navigation. In Proc. of Int. Conf. on Intelligent Robots and Systems (IROS 1994), 1994.
[7] U. Zimmer. Robust world-modelling and navigation in a real world. NeuroComputing, 13:247-260, 1996.
[8] S. Amari. Dynamics of pattern formation in lateral-inhibition type neural fields. Biological Cybernetics, 27:77-87, 1977.
[9] W.A. Phillips and W. Singer. In search of common foundations for cortical computation. Behavioral and Brain Sciences, 20:657-722, 1997.
[10] L.F. Abbott. Realistic synaptic inputs for network models. Network: Computation in Neural Systems, 2:245-258, 1991.
[11] D.P. Bertsekas and J.N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific, 1996.
[12] R.S. Sutton and A.G. Barto. Reinforcement Learning. MIT Press, Cambridge, 1998.
[13] C. von der Malsburg. Self-organization of orientation-sensitive cells in the striate cortex. Kybernetik, 15:85-100, 1973.
[14] T. Kohonen. Self-Organizing Maps. Springer, Berlin, 1995.
[15] G.A. Carpenter, S. Grossberg, N. Markuzon, J.H. Reynolds, and D.B. Rosen. Fuzzy ARTMAP: A neural network architecture for incremental supervised learning of analog multidimensional maps. IEEE Transactions on Neural Networks, 5:698-713, 1992.
[16] B. Fritzke. A growing neural gas network learns topologies. In G. Tesauro, D.S. Touretzky, and T.K. Leen, editors, Advances in Neural Information Processing Systems 7, pages 625-632. MIT Press, Cambridge MA, 1995.
[17] C.M. Bishop, G.E. Hinton, and I.G.D. Strachan. GTM through time. In Proc. of IEEE Fifth Int. Conf. on Artificial Neural Networks. Cambridge, 1997.
[18] J.C. Wiemer. The time-organized map algorithm: Extending the self-organizing map to spatiotemporal signals. Neural Computation, 15:1143-1171, 2003.
[19] P. Dayan and L.F. Abbott. Theoretical Neuroscience. MIT Press, 2001.
[20] J. Schmidhuber. Adaptive confidence and adaptive curiosity. Technical Report FKI-149-91, Technical University Munich, 1991.
[21] N. Meuleau and P. Bourgine. Exploration of multi-state environments: Local measures and back-propagation of uncertainty. Machine Learning, 35:117-154, 1998.
1,599 | 2,453 | Can We Learn to Beat the Best Stock
Allan Borodin1 Ran El-Yaniv2 Vincent Gogan1
Department of Computer Science
University of Toronto1 Technion - Israel Institute of Technology2
{bor,vincent}@cs.toronto.edu [email protected]
Abstract
A novel algorithm for actively trading stocks is presented. While traditional universal algorithms (and technical trading heuristics) attempt to
predict winners or trends, our approach relies on predictable statistical
relations between all pairs of stocks in the market. Our empirical results
on historical markets provide strong evidence that this type of technical trading can ?beat the market? and moreover, can beat the best stock
in the market. In doing so we utilize a new idea for smoothing critical
parameters in the context of expert learning.
1 Introduction: The Portfolio Selection Problem
The portfolio selection (PS) problem is a challenging problem for machine learning, online
algorithms and, of course, computational finance. As is well known (e.g. see Lugosi [1])
sequence prediction under the log loss measure can be viewed as a special case of portfolio selection, and perhaps more surprisingly, from a certain worst case minimax criterion,
portfolio selection is not essentially any harder (than prediction) as shown in [2] (see also
[1], Thm. 20 & 21). But there seems to be a qualitative difference between the practical
utility of "universal" sequence prediction and universal portfolio selection. Simply stated,
universal sequence prediction algorithms under various probabilistic and worst-case models work very well in practice whereas the known universal portfolio selection algorithms
do not seem to provide any substantial benefit over a naive investment strategy (see Sec. 4).
A major pragmatic question is whether or not a computer program can consistently outperform the market. A closer inspection of the interesting ideas developed in information
theory and online learning suggests that a promising approach is to exploit the natural
volatility in the market and in particular to benefit from simple and rather persistent statistical relations between stocks rather than to try to predict stock prices or "winners". We present a non-universal portfolio selection algorithm¹, which does not try to predict winners. The motivation behind our algorithm is the rationale behind constant rebalancing algorithms and the worst case study of universal trading introduced by Cover [3]. Not only does our proposed algorithm substantially "beat the market" on historical markets, it also beats the best stock. So why are we presenting this algorithm and not just simply making money? There are, of course, some caveats and obstacles to utilizing the algorithm. But for
large investors the possibility of a goose laying silver (if not golden) eggs is not impossible.
¹Any PS algorithm can be modified to be universal by investing any fixed fraction of the initial wealth in a universal algorithm.
Assume a market with m stocks. Let vt = (vt (1), . . . , vt (m)) be the closing prices of the
m stocks for the tth day, where vt (j) is the price of the jth stock. It is convenient to work
with relative prices x_t(j) = v_t(j)/v_{t−1}(j) so that an investment of $d in the jth stock just
before the tth period yields dxt (j) dollars. We let xt = (xt (1), . . . , xt (m)) denote the
market vector of relative prices corresponding to the tth day. A portfolio b is an allocation
of wealth in the stocks, specified by the proportions b = (b(1), . . . , b(m)) of current dollar wealth invested in each of the stocks, where b(j) ≥ 0 and Σ_j b(j) = 1. The daily return of a portfolio b w.r.t. a market vector x is b · x = Σ_j b(j) x(j) and the (compound) total return, ret_X(b_1, . . . , b_n), of a sequence of portfolios b_1, . . . , b_n w.r.t. a market sequence X = x_1, . . . , x_n is ∏_{t=1}^n b_t · x_t. A portfolio selection algorithm is any deterministic or randomized rule for specifying a sequence of portfolios.
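For concreteness, a small bookkeeping sketch (ours; the data is made up) of the compound total return of a sequence of portfolios:

```python
import numpy as np

def total_return(B, X):
    # X[t, j] is the price relative x_t(j); B[t] is the portfolio b_t used on day t
    return np.prod(np.sum(B * X, axis=1))    # prod_t (b_t . x_t)

X = np.array([[1.02, 0.97], [0.99, 1.05], [1.01, 1.00]])   # made-up relatives
B = np.full_like(X, 0.5)                                   # uniform daily portfolio
print(total_return(B, X))
```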
The simplest strategy is to "buy-and-hold" stocks using some portfolio b. We denote this strategy by BAH_b and let U-BAH denote the uniform buy-and-hold when b = (1/m, . . . , 1/m). We say that a portfolio selection algorithm "beats the market" when it outperforms U-BAH on a given market sequence, although in practice "the market" can be represented by some non-uniform BAH (e.g. DJIA). Buy-and-hold strategies rely on the
tendency of successful markets to grow. Much of modern portfolio theory focuses on how
to choose a good b for the buy-and-hold strategy. The seminal ideas of Markowitz in [4]
yield an algorithmic procedure for choosing the weights of the portfolio b so as to minimize the variance for any feasible expected return. This variance minimization is possible
by placing appropriate larger weights on subsets of anti-correlated stocks, an idea which
we shall also utilize. We denote the optimal in hindsight buy-and-hold strategy (i.e. invest
only in the best stock) by BAH*.
An alternative approach to the static buy-and-hold is to dynamically change the portfolio during the trading period. This approach is often called "active trading". One example of active trading is constant rebalancing; namely, fix a portfolio b and (re)invest your dollars each day according to b. We denote this constant rebalancing strategy by CBAL_b and let CBAL* denote the optimal (in hindsight) CBAL. A constant rebalancing strategy can often take advantage of market fluctuations to achieve a return significantly greater than that of BAH*. CBAL* is always at least as good as the best stock BAH* and in some real market sequences a constant rebalancing strategy will take advantage of market fluctuations and significantly outperform the best stock (see Table 1). For now, consider Cover and Gluss' [5] classic (but contrived) example of a market consisting of cash and one stock and the market sequence of price relatives (1, 1/2), (1, 2), (1, 1/2), (1, 2), . . . Now consider CBAL_b with b = (1/2, 1/2). On each odd day the daily return of CBAL_b is (1/2)·1 + (1/2)·(1/2) = 3/4 and on each even day it is 3/2. The total return over n days is therefore (9/8)^{n/2}, illustrating how a constant rebalancing strategy can yield exponential returns in a "no-growth market".
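A quick numerical check (our addition) of this example:

```python
import numpy as np

# Cash plus one volatile stock, rebalanced daily to b = (1/2, 1/2).
X = np.tile([[1.0, 0.5], [1.0, 2.0]], (10, 1))   # n = 20 days of alternating relatives
daily = X @ np.array([0.5, 0.5])                 # b . x_t each day: 0.75, 1.5, ...
print(np.prod(daily), (9 / 8) ** (len(X) / 2))   # both equal (9/8)^{n/2}
```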
Under the assumption that the daily market vectors are observations of identically and
independently distributed (i.i.d.) random variables, it is shown in [6] that CBAL* performs
at least as good (in the sense of expected total return) as the best online portfolio selection
algorithm. However, many studies (see e.g. [7]) argue that stock price sequences do have
long term memory and are not i.i.d.
A non-traditional objective (in computational finance) is to develop online trading strategies that are in some sense always guaranteed to perform well. Within a line of research
pioneered by Cover [5, 3, 2] one attempts to design portfolio selection algorithms that
can provably do well (in terms of their total return) with respect to some online or offline
benchmark algorithms. Two natural online benchmark algorithms are the uniform buy and
hold U-BAH, and the uniform constant rebalancing strategy U-CBAL, which is CBAL_b with b = (1/m, . . . , 1/m). A natural offline benchmark is BAH* and a more challenging offline benchmark is CBAL*.
Cover and Ordentlich's Universal Portfolios algorithm [3, 2], denoted here by UNIVERSAL, was proven to be universal against CBAL*, in the sense that for every market sequence X of m stocks over n days, it guarantees a sub-exponential (indeed polynomial) ratio in n,

ret_X(CBAL*)/ret_X(UNIVERSAL) ≤ O(n^{(m−1)/2}).   (1)

From a theoretical perspective this is surprising as the ratio is polynomial in n (for fixed m) whereas CBAL* is capable of exponential returns. From a practical perspective, while the ratio n^{(m−1)/2} is not very useful, the motivation that underlies the potential of CBAL algorithms is useful! We follow this motivation and develop a new algorithm which we call ANTICOR.
By attempting to systematically follow the constant rebalancing philosophy, ANTICOR is
capable of some extraordinary performance in the absence of transaction costs, or even
with very small transaction costs.
2 Trying to Learn the Winners
The most direct approach to expert learning and portfolio selection is a "(reward based) weighted average prediction" algorithm which adaptively computes a weighted average of experts by gradually increasing (by some multiplicative or additive update rule) the relative weights of the more successful experts. For example, in the context of the PS problem consider the "exponentiated gradient" EG(η) algorithm proposed by Helmbold et al. [8]. The EG(η) algorithm computes the next portfolio to be

b_{t+1}(j) = b_t(j) exp{η x_t(j)/(b_t · x_t)} / Σ_{j'=1}^m b_t(j') exp{η x_t(j')/(b_t · x_t)},

where η is a "learning rate" parameter. EG was designed to greedily choose the best portfolio for yesterday's market x_t while at the same time paying a penalty for moving far from yesterday's portfolio. For a universal bound on EG, Helmbold et al. set η = 2 x_min √(2(log m)/n), where x_min is a lower bound on any price relative.² It is easy to see that as n increases, η decreases to 0, so that we can think of η as being very small in order to achieve universality. When η = 0, the algorithm EG(η) degenerates to the uniform CBAL, which is not a universal algorithm. It is also the case that if each day the price relatives for all stocks were identical, then EG (as well as other PS algorithms) will converge to the uniform CBAL. Combining a small learning rate with a "reasonably balanced" market we expect the performance of EG to be similar to that of the uniform CBAL, and this is confirmed by our experiments (see Table 1).³
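A sketch (ours) of the EG(η) update as stated above:

```python
import numpy as np

def eg_update(b, x, eta=0.01):
    # multiplicative update towards yesterday's winners, then renormalize
    g = np.exp(eta * x / np.dot(b, x))
    b_new = b * g
    return b_new / b_new.sum()

b = np.array([0.5, 0.5])
print(eg_update(b, np.array([1.1, 0.9])))   # weight shifts slightly to stock 1
```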
Cover's universal algorithms adaptively learn each day's portfolio by increasing the weights of successful CBALs. The update rule for these universal algorithms is

b_{t+1} = ∫ b · ret_t(CBAL_b) dμ(b) / ∫ ret_t(CBAL_b) dμ(b),

where μ(·) is some prior distribution over portfolios. Thus, the weight of a possible portfolio is proportional to its total return ret_t(b) thus far times its prior. The particular universal algorithm we consider in our experiments uses the Dirichlet prior (with parameters (1/2, . . . , 1/2)) [2]. Within a constant factor, this algorithm attains the optimal ratio (1) with respect to CBAL*.⁴ The algorithm is equivalent to a particular static distribution over the
²Helmbold et al. show how to eliminate the need to know x_min and n. While EG can be made universal, its performance ratio is only sub-exponential (and not polynomial) in n.
³Following Helmbold et al. we fix η = 0.01 in our experiments.
⁴Experimentally (on our datasets) there is a negligible difference between the uniform universal algorithm in [3] and the above Dirichlet universal algorithm.
class of all CBALs. This equivalence helps to demystify the universality result and also
shows that the algorithm can never outperform CBAL*.
A different type of "winner learning" algorithm can be obtained from any sequence prediction strategy. For each stock, a (soft) sequence prediction algorithm provides a probability p(j) that the next symbol will be j ∈ {1, . . . , m}. We view this as a prediction that stock j will have the best price relative for the next day and set b_{t+1}(j) = p(j). We consider predictions made using the prediction component of the well-known Lempel-Ziv (LZ) lossless
compression algorithm [9]. This prediction component is nicely described in Langdon [10]
and in Feder [11]. As a prediction algorithm, LZ is provably powerful in various senses.
First it can be shown that it is asymptotically optimal with respect to any stationary and
ergodic finite order Markov source (Rissanen [12]). Moreover, Feder shows that LZ is also
universal in a worst case sense with respect to the (offline) benchmark class of all finite
state prediction machines. To summarize, the common approach to devising PS algorithms
has been to attempt to learn winners using winner-learning schemes.
3 The Anticor Algorithm
We propose a different approach, motivated by the CBAL "philosophy". How can we interpret the success of the uniform CBAL on the Cover and Gluss example of Sec. 1? Clearly,
the uniform CBAL here is taking advantage of price fluctuation by constantly transferring
wealth from the high performing stock to the anti-correlated low performing stock. Even
in a less contrived market, we should be able to take advantage when a stock is currently
outperforming other stocks especially if this strong performance is anti-correlated with the
performance of these other stocks. Our ANTICORw algorithm considers a short market history (consisting of two consecutive ?windows?, each of w trading days) so as to model
statistical relations between each pair of stocks. Let
LX1 = log(xt?2w+1 ), . . . , log(xt?w )T and LX2 = log(xt?w+1 ), . . . , log(xt )T ,
where log(xk ) denotes (log(xk (1)), . . . , log(xk (m))). Thus, LX1 and LX2 are the two
vector sequences (equivalently, two w ? m matrices) constructed by taking the logarithm
over the market subsequences corresponding to the time windows [t ? 2w + 1, t ? w]
and [t ? w + 1, t], respectively. We denote the jth column of LXk by LXk (j). Let
?k = (?k (1), . . . , ?k (m)), be the vectors of averages of columns of LXk (that is,
?k (j) = E{LXk (j)}). Similarly, let ?k , be the vector of standard deviations of columns
of LXk . The cross-correlation matrix (and its normalization) between column vectors in
LX1 and LX2 are defined as:
Mcov (i, j) = (LX1 (i) ? ?1 (i))T (LX2 (j) ? ?2 (j));
? Mcov (i,j)
?1 (i), ?2 (j) 6= 0;
?1 (i)?2 (j)
Mcor (i, j)
0
otherwise.
Mcor (i, j) ? [?1, 1] measures the correlation between log-relative prices of stock i over
the first window and stock j over the second window. For each pair of stocks i and j we
compute claimi?j , the extent to which we want to shift our investment from stock i to
stock j. Namely, there is such a claim iff ?2 (i) > ?2 (j) and Mcor (i, j) > 0 in which case
claimi?j = Mcor (i, j) + A(i) + A(j) where A(h) = |Mcor (h, h)| if Mcor (h, h) < 0,
else 0. Following our interpretation for the success of a CBAL, Mcor (i, j) > 0 is used
to predict that stocks i and j will be correlated in consecutive windows (i.e. the current window and the next window based on the evidence for the last two windows) and
Mcor (h, h) < 0 predicts that stock h will be anti-correlated with itself over consec? t (i) + P [transferj?i ? transferi?j ] where
utive windows. Finally, bt+1 (i) = b
j6=i
? t (i) ? claimi?j / P claimi?j and b
? t is the resulting portfolio just aftransferi?j = b
j
ter market closing (on day t).
SP500: Anticor vs. window size
NYSE: Anticor vs. window size
w
BAH(Anticor )
w
Anticor
12
8
w
Best Stock
Market Return
10
Total Return
Total Return (log?scale)
10
Anticorw
5
10
BAH(Anticorw)
Anticorw
Best Stock
Market
Best Stock
8
Anticorw
6
4
2
10
2
1
10
Best Stock
1
0
10
2
5
10
15
20
25
5
30
10
15
20
25
30
Window Size (w)
Window Size (w)
Figure 1: ANTICORw ?s total return (per $1 investment) vs. window size 2 ? w ? 30 for
NYSE (left) and SP500 (right).
Our ANTICORw algorithm has one critical parameter, the window size w. In Figure 1 we
depict the total return of ANTICORw on two historical datasets as a function of the window
size w = 2, . . . , 30. As we might expect, the performance of ANTICORw depends significantly on the window size. However, for all w, ANTICORw beats the uniform market and,
moreover, it beats the best stock using most window sizes. Of course, in online trading we
cannot choose w in hindsight. Viewing the ANTICORw algorithms as experts, we can try to
learn the best expert. But the windows, like individual stocks, induce a rather volatile set
of experts and standard expert combination algorithms [13] tend to fail.
Alternatively, we can adaptively learn and invest in some weighted average of all ANTICORw
algorithms with w less than some maximum W. The simplest case is a uniform investment
on all the windows; that is, a uniform buy-and-hold investment on the algorithms
ANTICORw, w ∈ [2, W], denoted by BAHW(ANTICOR). Figure 2 (left) graphs the total return
of BAHW(ANTICOR) as a function of W for all values of 2 ≤ W ≤ 50 with respect to the
NYSE dataset (see details below). Similar graphs for the other datasets we consider appear
qualitatively the same, and the choice W = 30 is clearly not optimal. However, for all
W ≥ 3, BAHW(ANTICOR) beats the best stock in all our experiments.
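The combination itself is a one-step aggregation. Below is a hedged sketch of how
BAHW(ANTICOR)'s total return follows from the individual experts' daily returns; the input
format and function name are our assumptions, not the paper's code.

```python
import numpy as np

def bah_over_experts(expert_daily_returns):
    """BAH_W(ANTICOR): invest $1 uniformly over the experts ANTICOR_w,
    w = 2..W, and buy-and-hold each share.

    expert_daily_returns: list of length-n arrays; entry k holds the
    daily return factors b_t . x_t of expert ANTICOR_{k+2}.
    """
    final_wealths = [np.prod(r) for r in expert_daily_returns]
    # Buy-and-hold: the average of the experts' final wealths.
    return float(np.mean(final_wealths))
```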
Figure 2: Left: BAHW(ANTICOR)'s total return (per $1 investment) as a function of the
maximal window W. Right: Cumulative returns for the last month of the DJIA dataset: stocks
(left panel); ANTICORw algorithms trading the stocks (denoted ANTICOR1, middle panel);
ANTICORw algorithms trading the ANTICOR algorithms (denoted ANTICOR2, right panel).
Since we now consider the various algorithms as stocks (whose prices are determined by
the cumulative returns of the algorithms), we are back to our original portfolio selection
problem, and if the ANTICOR algorithm performs well on stocks it may also perform well on
algorithms. We thus consider active investment in the various ANTICORw algorithms using
ANTICOR. We again consider all windows w ≤ W. Of course, we can continue to compound
the algorithm any number of times. Here we compound twice and then use a buy-and-hold
investment. The resulting algorithm is denoted BAHW(ANTICOR(ANTICOR)). One impact
of this compounding, depicted in Figure 2 (right), is to smooth out the anti-correlations
exhibited in the stocks. It is evident that after compounding twice the returns become
almost completely correlated, thus diminishing the possibility that additional compounding
will substantially help.⁵ This idea for eliminating critical parameters may be applicable in
other learning applications. The challenge is to understand the conditions and applications
in which the process of compounding algorithms will have this smoothing effect!
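Under stated assumptions, the two-fold compounding can be sketched as follows; the helper
anticor_run (returning one expert's daily return factors on a given matrix of relative
prices) is our own naming and is not part of the paper's implementation.

```python
import numpy as np

def compound_twice(relative_prices, W, anticor_run):
    """BAH_W(ANTICOR(ANTICOR)): run ANTICOR_w on the real market, treat
    each expert's daily returns as the relative prices of a synthetic
    stock, run ANTICOR_w again on those, then buy-and-hold uniformly."""
    # Stage 1: daily return factors of each ANTICOR_w expert (n x (W-1)).
    experts = np.column_stack(
        [anticor_run(relative_prices, w) for w in range(2, W + 1)])
    # Stage 2: the experts themselves become the market.
    second_level = np.column_stack(
        [anticor_run(experts, w) for w in range(2, W + 1)])
    # Final step: uniform buy-and-hold over the second-level experts.
    return float(np.mean(np.prod(second_level, axis=0)))
```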
4 Experimental Results
We present an experimental study of the ANTICOR algorithm and the three online learning
algorithms described in Sec. 2. We focus on BAH30(ANTICOR), abbreviated by ANTI1,
and BAH30(ANTICOR(ANTICOR)), abbreviated by ANTI2. Four historical datasets are used.
The first, the NYSE dataset, is the one used in [3, 2, 8, 14]. This dataset contains 5651 daily
prices for 36 stocks in the New York Stock Exchange (NYSE) for the twenty-two-year period July 3rd, 1962 to Dec 31st, 1984. The second, the TSE dataset, consists of 88 stocks from
the Toronto Stock Exchange (TSE) for the five-year period Jan 4th, 1994 to Dec 31st,
1998. The third dataset consists of the 25 stocks from the SP500 which (as of Apr. 2003) had
the largest market capitalization. This set spans 1276 trading days for the period Jan 2nd,
1998 to Jan 31st, 2003. The fourth dataset consists of the thirty stocks composing the Dow
Jones Industrial Average (DJIA) for the two-year period (507 days) from Jan 14th, 2001 to
Jan 14th, 2003.⁶
These four datasets are quite different in nature (the market returns for these datasets appear
in the first row of Table 1). While every stock in the NYSE increased in value, 32 of the
88 stocks in the TSE lost money, 7 of the 25 stocks in the SP500 lost money and 25 of
the 30 stocks in the "negative market" DJIA lost money. All these sets include only highly
liquid stocks with huge market capitalizations. In order to maximize the utility of these
datasets and yet present rather different markets, we also ran each market in reverse. This
is simply done by reversing the order and inverting the relative prices. The reverse datasets
are denoted by a "−1" superscript. Some of the reverse markets are particularly challenging.
For example, all of the NYSE⁻¹ stocks are going down. Note that the forward and reverse
markets (i.e., U-BAH) for the TSE are both increasing, but the TSE⁻¹ is also a challenging
market since so many stocks (56 of 88) are declining.
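A minimal sketch of this reversal, assuming the market is stored as an n × m NumPy array
of relative prices (our own storage convention):

```python
import numpy as np

def reverse_market(relative_prices):
    """Reverse a market: play the days in reverse order and invert each
    relative price (a stock that rose by factor x now falls by 1/x)."""
    return 1.0 / relative_prices[::-1]
```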
Table 1 reports the total returns of the various algorithms for all eight datasets. We see
that prediction algorithms such as LZ can do quite well, but the more aggressive ANTI1 and
ANTI2 have excellent and sometimes fantastic returns. Note that these active strategies beat
the best stock and even CBAL∗ in all markets, with the exception of the TSE⁻¹, in which
they still significantly outperform the market. The reader may well be distrustful of what
appear to be such unbelievable returns for ANTI1 and ANTI2, especially when applied to
the NYSE dataset. However, recall that the NYSE dataset consists of n = 5651 trading
days, and the value y such that y^n equals the total NYSE return is approximately 1.0029511
for ANTI1 (respectively, 1.0074539 for ANTI2); that is, the average daily increase is less
than 0.3% (respectively, 0.75%). Thus a transaction cost of 1% can present a significant
challenge to such active trading strategies (see also Sec. 5). We observe that UNIVERSAL
and EG have no substantial advantage over U-CBAL. Some previous expositions of these
algorithms highlighted particular combinations of stocks where the returns significantly
outperformed UNIVERSAL and the best stock. But the same can be said for U-CBAL.

⁵ This smoothing effect also allows for the use of simple prediction algorithms such as
"expert advice" algorithms [13], which can now better predict a good window size. We have
not explored this direction.
⁶ The four datasets, including their sources and individual stock compositions, can be
downloaded from http://www.cs.technion.ac.il/~rani/portfolios.
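The implied daily growth figure quoted above for ANTI1 is simple to verify numerically; a
short check in Python, with the constants taken from Table 1:

```python
# Geometric daily growth implied by ANTI1's total NYSE return (Table 1):
n = 5651                   # trading days in the NYSE dataset
R = 17_059_811.56          # ANTI1's total return per $1
print(R ** (1.0 / n))      # ~1.0029511, i.e. an average daily
                           # increase of less than 0.3%
```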
Algorithm       | NYSE           | TSE   | SP500 | DJIA | NYSE⁻¹  | TSE⁻¹ | SP500⁻¹ | DJIA⁻¹
----------------+----------------+-------+-------+------+---------+-------+---------+-------
MARKET (U-BAH)  | 14.49          | 1.61  | 1.34  | 0.76 | 0.11    | 1.67  | 0.87    | 1.43
BEST STOCK      | 54.14          | 6.27  | 3.77  | 1.18 | 0.32    | 37.64 | 1.65    | 2.77
CBAL∗           | 250.59         | 6.77  | 4.06  | 1.23 | 2.86    | 58.61 | 1.91    | 2.97
U-CBAL          | 27.07          | 1.59  | 1.64  | 0.81 | 0.22    | 1.18  | 1.09    | 1.53
ANTI1           | 17,059,811.56  | 26.77 | 5.56  | 1.59 | 246.22  | 7.12  | 6.61    | 3.67
ANTI2           | 238,820,058.10 | 39.07 | 5.88  | 2.28 | 1383.78 | 7.27  | 9.69    | 4.60
LZ              | 79.78          | 1.32  | 1.67  | 0.89 | 5.41    | 4.80  | 1.20    | 1.83
EG              | 27.08          | 1.59  | 1.64  | 0.81 | 0.22    | 1.19  | 1.09    | 1.53
UNIVERSAL       | 26.99          | 1.59  | 1.62  | 0.80 | 0.22    | 1.19  | 1.07    | 1.53
Table 1: Monetary returns in dollars (per $1 investment) of various algorithms for four
different datasets and their reversed versions. The winner and runner-up for each market
appear in boldface. All figures are truncated to two decimals.
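For reference, every entry of Table 1 is, in principle, the output of the same evaluation
loop: starting from $1, wealth is multiplied by b_t · x_t each day. A hedged sketch of that
loop (select_portfolio stands in for any of the algorithms; the name and interface are ours):

```python
import numpy as np

def total_return(relative_prices, select_portfolio):
    """Wealth per $1 after trading an n x m market sequence.

    select_portfolio(history) must return a length-m portfolio
    (non-negative, summing to 1) given the rows seen so far."""
    wealth = 1.0
    n, m = relative_prices.shape
    for t in range(n):
        b = select_portfolio(relative_prices[:t])
        wealth *= float(b @ relative_prices[t])
    return wealth

# Example: U-CBAL (uniform constant-rebalanced portfolio) on a market X:
# total_return(X, lambda hist: np.ones(X.shape[1]) / X.shape[1])
```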
5 Concluding Remarks
When handling a portfolio of m stocks our algorithm may perform up to m transactions per day. A major concern is therefore the commissions it will incur. Within
the proportional commission model (see e.g. [14] and [15], Sec. 14.5.4) there exists
a fraction γ ∈ (0, 1) such that an investor pays at a rate of γ/2 for each buy and for each
sell. Therefore, the return of a sequence b1, ..., bn of portfolios with respect to a market
sequence x1, ..., xn is

    ∏_t (b_t · x_t) (1 − Σ_j (γ/2) |b_t(j) − b̂_t(j)|),

where b̂_t = (b_t(1)x_t(1), ..., b_t(m)x_t(m)) / (b_t · x_t). Our investment algorithm in its
simplest form can tolerate very small proportional commission rates and still beat the best
stock.⁷ We
note that Blum and Kalai [14] showed that the performance guarantee of UNIVERSAL still
holds (and gracefully degrades) in the case of proportional commissions. Many current
online brokers only charge a small per-share commission rate. A related problem that one
must face when actually trading is the difference between bid and ask prices. These bid-ask
spreads (and the availability of stocks for both buying and selling) are typically functions
of stock liquidity and tend to be smaller for stocks with large market capitalizations; we
consider here only very large market-cap stocks. As a final caveat, we note that we assume
that any one portfolio selection algorithm has no impact on the market! But just like any
goose laying golden eggs, widespread use will soon lead to the end of the goose; that is,
the market will quickly react.
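A sketch of the commission-adjusted wealth computation under the proportional model
above; the initial-holdings convention and all names here are our assumptions, and the
commission at each step is charged on the move from the previous day's drifted holdings
to the day's target portfolio.

```python
import numpy as np

def wealth_with_commissions(portfolios, relative_prices, gamma):
    """Return per $1 of portfolios b_1..b_n on markets x_1..x_n when
    each buy and each sell is charged at rate gamma / 2."""
    wealth = 1.0
    m = relative_prices.shape[1]
    b_hat = np.ones(m) / m          # assumed holdings before day 1
    for b, x in zip(portfolios, relative_prices):
        # Commission for rebalancing from the drifted b_hat to b.
        cost = 1.0 - (gamma / 2.0) * np.abs(b - b_hat).sum()
        day_return = float(b @ x)
        wealth *= day_return * cost
        # Drift: after the market moves, holdings are reweighted by x.
        b_hat = (b * x) / day_return
    return wealth
```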
Any report of abnormal returns using historical markets should be suspected of "data
snooping". In particular, when a dataset is excessively mined by testing many strategies,
there is a substantial chance that one of the strategies will be successful simply by
overfitting. Another data-snooping hazard is stock selection. For example, the 36 stocks
selected for the NYSE dataset were all known to have survived for 22 years. Our ANTICOR
algorithms were fully developed using only the NYSE and TSE datasets. The DJIA and
SP500 sets were obtained (from public domain sources) after the algorithms were fixed.
Finally, our algorithm has one parameter (the maximal window size W). Our experiments
indicate that the algorithm's performance is robust with respect to W (see Figure 2).
⁷ For example, with γ = 0.1% we can typically beat the best stock. These results will be
presented in the full paper.
A number of well-respected works report on statistically robust "abnormal" returns for
simple "technical analysis" heuristics, which slightly beat the market. For example, the
landmark study of Brock et al. [16] applies 26 simple trading heuristics to the DJIA index
from 1897 to 1986 and provides strong support for technical analysis heuristics. While
consistently beating the market is considered a great (if not impossible) challenge, our
approach to portfolio selection indicates that beating the best stock is an achievable goal.
What is missing at this point in time is an analytical model which better explains why
our active trading strategies are so successful. In this regard, we are investigating various
"statistical adversary" models along the lines suggested by [17, 18]. Namely, we would
like to show that an algorithm performs well (relative to some benchmark) for any market
sequence that satisfies certain constraints on its empirical statistics.
References
[1] G. Lugosi. Lectures on prediction of individual sequences. URL: http://www.econ.upf.es/~lugosi/ihp.ps, 2001.
[2] T.M. Cover and E. Ordentlich. Universal portfolios with side information. IEEE Transactions on Information Theory, 42(2):348–363, 1996.
[3] T.M. Cover. Universal portfolios. Mathematical Finance, 1:1–29, 1991.
[4] H. Markowitz. Portfolio Selection: Efficient Diversification of Investments. John Wiley and Sons, 1959.
[5] T.M. Cover and D.H. Gluss. Empirical Bayes stock market portfolios. Advances in Applied Mathematics, 7:170–181, 1986.
[6] T.M. Cover and J.A. Thomas. Elements of Information Theory. John Wiley & Sons, Inc., 1991.
[7] A. Lo and C. MacKinlay. A Non-Random Walk Down Wall Street. Princeton University Press, 1999.
[8] D.P. Helmbold, R.E. Schapire, Y. Singer, and M.K. Warmuth. Portfolio selection using multiplicative updates. Mathematical Finance, 8(4):325–347, 1998.
[9] J. Ziv and A. Lempel. Compression of individual sequences via variable rate coding. IEEE Transactions on Information Theory, 24:530–536, 1978.
[10] G.G. Langdon. A note on the Lempel-Ziv model for compressing individual sequences. IEEE Transactions on Information Theory, 29:284–287, 1983.
[11] M. Feder. Gambling using a finite state machine. IEEE Transactions on Information Theory, 37:1459–1465, 1991.
[12] J. Rissanen. A universal data compression system. IEEE Transactions on Information Theory, 29:656–664, 1983.
[13] N. Cesa-Bianchi, Y. Freund, D. Haussler, D.P. Helmbold, R.E. Schapire, and M.K. Warmuth. How to use expert advice. Journal of the ACM, 44(3):427–485, May 1997.
[14] A. Blum and A. Kalai. Universal portfolios with and without transaction costs. Machine Learning, 30(1):23–30, 1998.
[15] A. Borodin and R. El-Yaniv. Online Computation and Competitive Analysis. Cambridge University Press, 1998.
[16] W. Brock, J. Lakonishok, and B. LeBaron. Simple technical trading rules and the stochastic properties of stock returns. Journal of Finance, 47:1731–1764, 1992.
[17] P. Raghavan. A statistical adversary for on-line algorithms. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 7:79–83, 1992.
[18] A. Chou, J.R. Cooperstock, R. El-Yaniv, M. Klugerman, and T. Leighton. The statistical adversary allows optimal money-making trading strategies. In Proceedings of the 6th Annual ACM-SIAM Symposium on Discrete Algorithms, 1995.