Online Learning of Assignments
Matthew Streeter
Google, Inc.
Pittsburgh, PA 15213
[email protected]

Daniel Golovin
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]

Andreas Krause
California Institute of Technology
Pasadena, CA 91125
[email protected]
Abstract
Which ads should we display in sponsored search in order to maximize our revenue?
How should we dynamically rank information sources to maximize the value of
the ranking? These applications exhibit strong diminishing returns: Redundancy
decreases the marginal utility of each ad or information source. We show that
these and other problems can be formalized as repeatedly selecting an assignment
of items to positions to maximize a sequence of monotone submodular functions
that arrive one by one. We present an efficient algorithm for this general problem
and analyze it in the no-regret model. Our algorithm possesses strong theoretical
guarantees, such as a performance ratio that converges to the optimal constant
of 1 − 1/e. We empirically evaluate our algorithm on two real-world online
optimization problems on the web: ad allocation with submodular utilities, and
dynamically ranking blogs to detect information cascades.
1 Introduction
Consider the problem of repeatedly choosing advertisements to display in sponsored search to
maximize our revenue. In this problem, there is a small set of positions on the page, and each time
a query arrives we would like to assign, to each position, one out of a large number of possible
ads. In this and related problems that we call online assignment learning problems, there is a set
of positions, a set of items, and a sequence of rounds, and on each round we must assign an item
to each position. After each round, we obtain some reward depending on the selected assignment,
and we observe the value of the reward. When there is only one position, this problem becomes
the well-studied multiarmed bandit problem [2]. When the positions have a linear ordering the
assignment can be construed as a ranked list of elements, and the problem becomes one of selecting
lists online. Online assignment learning thus models a central challenge in web search, sponsored
search, news aggregators, and recommendation systems, among other applications.
A common assumption made in previous work on these problems is that the quality of an assignment
is the sum of a function on the (item, position) pairs in the assignment. For example, online advertising
models with click-through-rates [6] make an assumption of this form. More recently, there have been
attempts to incorporate the value of diversity in the reward function [16]. Intuitively, even though the
best K results for the query "turkey" might happen to be about the country, the best list of K results
is likely to contain some recipes for the bird as well. This will be the case if there are diminishing
returns on the number of relevant links presented to a user; for example, if it is better to present each
user with at least one relevant result than to present half of the users with no relevant results and
half with two relevant results. We incorporate these considerations in a flexible way by providing
an algorithm that performs well whenever the reward for an assignment is a monotone submodular
function of its set of (item, position) pairs.
Our key contributions are: (i) an efficient algorithm, TabularGreedy, that provides a (1 − 1/e)
approximation ratio for the problem of optimizing assignments under submodular utility functions, (ii)
an algorithm for online learning of assignments, TGBandit, that has strong performance guarantees
in the no-regret model, and (iii) an empirical evaluation on two problems of information gathering on
the web.
2 The assignment learning problem
We consider problems where we have K positions (e.g., slots for displaying ads), and need to assign
to each position an item (e.g., an ad) in order to maximize a utility function (e.g., the revenue from
clicks on the ads). We address both the offline problem, where the utility function is specified in
advance, and the online problem, where a sequence of utility functions arrives over time, and we need
to repeatedly select a new assignment.
The Offline Problem. In the offline problem we are given sets P1, P2, . . . , PK, where Pk is the
set of items that may be placed in position k. We assume without loss of generality that these sets
are disjoint.¹ An assignment is a subset S ⊆ V, where V = P1 ∪ P2 ∪ · · · ∪ PK is the set of all
items. We call an assignment feasible if at most one item is assigned to each position (i.e., for all k,
|S ∩ Pk| ≤ 1). We use P to refer to the set of feasible assignments.
Our goal is to find a feasible assignment maximizing a utility function f : 2^V → R≥0. As we discuss
later, many important assignment problems satisfy submodularity, a natural diminishing-returns
property: assigning a new item to a position k increases the utility more if few elements have been
assigned yet, and less if many items have already been assigned. Formally, a utility function f is
called submodular if for all S ⊆ S′ and s ∉ S′ it holds that f(S ∪ {s}) − f(S) ≥ f(S′ ∪ {s}) − f(S′).
We will also assume f is monotone (i.e., for all S ⊆ S′, we have f(S) ≤ f(S′)). Our goal is thus,
for a given non-negative, monotone and submodular utility function f, to find a feasible assignment
S* of maximum utility, S* = arg max_{S∈P} f(S).
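To make the diminishing-returns condition concrete, it can be checked by brute force on a small ground set. The following sketch (assuming a toy coverage utility, where each item covers an invented set of users) verifies both monotonicity and submodularity:

```python
from itertools import combinations

# Toy coverage utility (invented data): each item "covers" a set of users,
# and f(S) is the number of users covered. Coverage functions are monotone
# submodular, so the check below should pass.
COVERS = {"a": {1, 2}, "b": {2, 3}, "c": {3, 4, 5}}

def f(S):
    covered = set()
    for item in S:
        covered |= COVERS[item]
    return len(covered)

def is_monotone_submodular(f, ground):
    """Brute-force check: f(S) <= f(S') and
    f(S + s) - f(S) >= f(S' + s) - f(S') for all S subset of S', s not in S'."""
    subsets = [set(c) for r in range(len(ground) + 1)
               for c in combinations(sorted(ground), r)]
    for S in subsets:
        for Sp in subsets:
            if not S <= Sp:
                continue
            if f(S) > f(Sp):                      # monotonicity
                return False
            for s in ground - Sp:                 # diminishing returns
                if f(S | {s}) - f(S) < f(Sp | {s}) - f(Sp):
                    return False
    return True

print(is_monotone_submodular(f, set(COVERS)))
```

The exhaustive check is exponential in |V| and is only meant as a sanity test on toy instances.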
This optimization problem is NP-hard. In fact, a stronger negative result holds:
Theorem 1 ([14]). For any ε > 0, any algorithm guaranteed to obtain a solution within a factor of
(1 − 1/e + ε) of max_{S∈P} f(S) requires exponentially many evaluations of f in the worst case.
In light of this negative result, we can only hope to efficiently obtain a solution that achieves a fraction
of (1 − 1/e) of the optimal value. In §3.2 we develop such an algorithm.
The Online Problem. The offline problem is inappropriate to model dynamic settings, where the
utility function may change over time, and we need to repeatedly select new assignments, trading
off exploration (experimenting with ad display to gain information about the utility function), and
exploitation (displaying ads which we believe will maximize utility). More formally, we face a
sequential decision problem, where, on each round (which, e.g., corresponds to a user query for a
particular term), we want to select an assignment St (ads to display). We assume that the sets P1 , P2 ,
. . . , PK are fixed in advance for all rounds. After we select the assignment we obtain reward ft (St )
for some non-negative monotone submodular utility function ft . We call the setting where we do
not get any information about ft beyond the reward the bandit feedback model. In contrast, in the
full-information feedback model we obtain oracle access to ft (i.e., we can evaluate ft on arbitrary
feasible assignments). Both models arise in real applications, as we show in §5.
The goal is to maximize the total reward we obtain, namely Σt ft(St). Following the multiarmed
bandit literature, we evaluate our performance after T rounds by comparing our total reward against
that obtained by a clairvoyant algorithm with knowledge of the sequence of functions hf1 , . . . , fT i,
but with the restriction that it must select the same assignment on each round. The difference between
the clairvoyant algorithm's total reward and ours is called our regret. The goal is then to develop an
algorithm whose expected regret grows sublinearly in the number of rounds; such an algorithm is
said to have (or be) no-regret. However, since sums of submodular functions remain submodular, the
clairvoyant algorithm has to solve an offline assignment problem with f(S) = Σt ft(S). Considering
Theorem 1, no polynomial-time algorithm can possibly hope to achieve a no-regret guarantee. To
accommodate this fact, we discount the reward of the clairvoyant algorithm by a factor of (1 − 1/e):
We define the (1 − 1/e)-regret of a random sequence ⟨S1, . . . , ST⟩ as

    (1 − 1/e) · max_{S∈P} { Σ_{t=1}^T ft(S) } − E[ Σ_{t=1}^T ft(St) ].
Our goal is then to develop efficient algorithms whose (1 ? 1/e)-regret grows sublinearly in T .
¹If the same item can be placed in multiple positions, simply create multiple distinct copies of it.
Subsumed Models. Our model generalizes several common models for sponsored search ad
selection, and web search results. These include models with click-through-rates, in which it is
assumed that each (ad, position) pair has some probability p(a, k) of being clicked on, and there is
some monetary reward b(a) that is obtained whenever ad a is clicked on. Often, the click-through-rates
are assumed to be separable, meaning p(a, k) has the functional form α(a) · β(k) for some
functions α and β. See [7, 12] for more details on sponsored search ad allocation. Note that in both
of these cases, the (expected) reward of a set S of (ad, position) pairs is Σ_{(a,k)∈S} g(a, k) for some
nonnegative function g. It is easy to verify that such a reward function is monotone submodular.
Thus, we can capture this model in our framework by setting Pk = A × {k}, where A is the set of
ads. Another subsumed model, for web search, appears in [16]; it assumes that each user is interested
in a particular set of results, and any list of results that intersects this set generates a unit of value;
all other lists generate no value, and the ordering of results is irrelevant. Again, the reward function
is monotone submodular. In this setting, it is desirable to display a diverse set of results in order to
maximize the likelihood that at least one of them will interest the user.
Our model is flexible in that we can handle position-dependent effects and diversity considerations
simultaneously. For example, we can handle the case that each user u is interested in a particular
set Au of ads and looks at a set Iu of positions, and the reward of an assignment S is any monotone-increasing
concave function g of |S ∩ (Au × Iu)|. If Iu = {1, 2, . . . , k} and g(x) = x, this models
the case where the quality is the number of relevant results that appear in the first k positions. If Iu
equals all positions and g(x) = min{x, 1} we recover the model of [16].
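As an illustration of why concave rewards favor diversity, the sketch below (with hypothetical toy users and ads; the names and interest sets are invented) evaluates g(x) = min{x, 1} on a redundant versus a diverse assignment:

```python
# Toy instance of the diversity model of [16] (invented users/ads): user u is
# interested in ads A_u and looks at positions I_u; the reward of assignment S
# is g(|S ∩ (A_u × I_u)|) with concave g(x) = min(x, 1), summed over users.
users = [
    {"ads": {"turkey_country"}, "positions": {1, 2}},
    {"ads": {"turkey_recipe"}, "positions": {1, 2}},
]

def reward(assignment):
    """assignment: set of (ad, position) pairs."""
    total = 0
    for u in users:
        hits = [(a, k) for (a, k) in assignment
                if a in u["ads"] and k in u["positions"]]
        total += min(len(hits), 1)  # at least one relevant result counts
    return total

redundant = {("turkey_country", 1), ("turkey_country", 2)}
diverse = {("turkey_country", 1), ("turkey_recipe", 2)}
print(reward(redundant), reward(diverse))  # the diverse list scores higher
```

Showing the same ad twice satisfies only the first user, while the diverse list satisfies both.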
3 An approximation algorithm for the offline problem

3.1 The locally greedy algorithm
A simple approach to the assignment problem is the following greedy procedure: the algorithm
steps through all K positions (according to some fixed, arbitrary ordering). For position k, it simply
chooses the item that increases the total value as much as possible, i.e., it chooses
    sk = arg max_{s∈Pk} { f({s1, . . . , sk−1} + s) },
where, for a set S and element e, we write S + e for S ∪ {e}. Perhaps surprisingly, no matter which
ordering over the positions is chosen, this so-called locally greedy algorithm produces an assignment
that obtains at least half the optimal value [8]. In fact, the following more general result holds. We
will use this lemma in the analysis of our improved offline algorithm, which uses the locally greedy
algorithm as a subroutine.
Lemma 2. Suppose f : 2^V → R≥0 is of the form f(S) = f0(S) + Σ_{k=1}^K fk(S ∩ Pk), where
f0 : 2^V → R≥0 is monotone submodular, and fk : 2^{Pk} → R≥0 is arbitrary for k ≥ 1. Let L be the
solution returned by the locally greedy algorithm. Then f(L) + f0(L) ≥ max_{S∈P} f(S).
The proof is given in an extended version of this paper [9]. Observe that in the special case where
fk ≡ 0 for all k ≥ 1, Lemma 2 says that f(L) ≥ (1/2) max_{S∈P} f(S). In [9] we provide a simple
example showing that this 1/2 approximation ratio is tight.
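A minimal sketch of the locally greedy procedure, using a toy coverage utility (the item names and covered-user sets are invented for illustration):

```python
# Toy coverage utility over (ad, position) items (invented data).
COVERS = {("ad1", 1): {1, 2}, ("ad2", 1): {2},
          ("ad1", 2): {2, 3}, ("ad3", 2): {4}}

def f(S):
    covered = set()
    for item in S:
        covered |= COVERS[item]
    return len(covered)

def locally_greedy(partitions, f):
    """Step through positions in a fixed order; for each position k, add the
    item in P_k with the largest marginal gain f(S + s) - f(S)."""
    S = set()
    for P_k in partitions:
        best = max(P_k, key=lambda s: f(S | {s}) - f(S))
        S.add(best)
    return S

P1 = [("ad1", 1), ("ad2", 1)]
P2 = [("ad1", 2), ("ad3", 2)]
L = locally_greedy([P1, P2], f)
print(L, f(L))  # on this instance the greedy value matches the optimum of 3
```

By Lemma 2 (with fk ≡ 0), the returned assignment obtains at least half the optimal value for any monotone submodular f.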
3.2 An algorithm with optimal approximation ratio
We now present an algorithm that achieves the optimal approximation ratio of 1 − 1/e, improving on
the 1/2 approximation for the locally greedy algorithm. Our algorithm associates with each partition
Pk a color ck from a palette [C] of C colors, where we use the notation [n] = {1, 2, . . . , n}. For any
set S ⊆ V × [C] and vector c⃗ = (c1, . . . , cK), define sample_c⃗(S) = ∪_{k=1}^K {x ∈ Pk : (x, ck) ∈ S}.
Given a set S of (item, color) pairs, which we may think of as labeling each item with one or more
colors, sample_c⃗(S) returns a set containing each item x that is labeled with whatever color c⃗ assigns
to the partition that contains x. Let F(S) denote the expected value of f(sample_c⃗(S)) when each
color ck is selected uniformly at random from [C]. Our TabularGreedy algorithm greedily
optimizes F, as shown in the following pseudocode.
Observe that when C = 1, there is only one possible choice for c⃗, and TabularGreedy is
simply the locally greedy algorithm from §3.1. In the limit as C → ∞, TabularGreedy can
intuitively be viewed as an algorithm for a continuous extension of the problem followed by a
Algorithm: TabularGreedy
Input: integer C, sets P1, P2, . . . , PK, function f : 2^V → R≥0 (where V = ∪_{k=1}^K Pk)

    set G := ∅
    for c from 1 to C do                                /* for each color     */
        for k from 1 to K do                            /* for each partition */
            set gk,c := arg max_{x∈Pk×{c}} F(G + x)     /* greedily pick gk,c */
            set G := G + gk,c
    for each k ∈ [K], choose ck uniformly at random from [C].
    return sample_c⃗(G), where c⃗ := (c1, . . . , cK).
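A compact sketch of TabularGreedy, estimating F by repeated sampling of colorings (as discussed below, the paper bounds the loss from such sampled estimates). The coverage utility in the usage example is invented; this is an illustration, not the authors' implementation:

```python
import random

def tabular_greedy(C, partitions, f, n_samples=200, seed=0):
    """Sketch of TabularGreedy. partitions[k] is P_k; f maps a set of items
    to a reward. G is a set of (item, color) pairs; F(G) is estimated by
    averaging f over randomly sampled colorings."""
    rng = random.Random(seed)
    K = len(partitions)

    def sample(G, colors):
        # items of G whose color matches the color drawn for their partition
        return {x for k in range(K) for (x, c) in G
                if x in partitions[k] and c == colors[k]}

    def F_hat(G):
        total = 0.0
        for _ in range(n_samples):
            colors = [rng.randrange(C) for _ in range(K)]
            total += f(sample(G, colors))
        return total / n_samples

    G = set()
    for c in range(C):                      # for each color
        for k in range(K):                  # greedily pick g_{k,c}
            best = max(((x, c) for x in partitions[k]),
                       key=lambda xc: F_hat(G | {xc}))
            G.add(best)
    colors = [rng.randrange(C) for _ in range(K)]
    return sample(G, colors)

# Invented coverage reward: f(S) = number of users covered by chosen items.
COVERS = {"a1": {1, 2}, "a2": {2}, "b1": {2, 3}, "b2": {4}}
f = lambda S: len(set().union(*[COVERS[x] for x in S])) if S else 0
result = tabular_greedy(2, [["a1", "a2"], ["b1", "b2"]], f)
print(result, f(result))
```

With C = 1 this reduces to the locally greedy algorithm of §3.1; the returned set is always feasible (one item per partition).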
rounding procedure, in the same spirit as Vondrák's continuous-greedy algorithm [4]. In our case, the
continuous extension is to compute a probability distribution Dk for each position k with support in
Pk (plus a special ?select nothing? outcome), such that if we independently sample an element xk
from Dk , E [f ({x1 , . . . , xK })] is maximized. It turns out that if the positions individually, greedily,
and in round-robin fashion, add infinitesimal units of probability mass to their distributions so as to
maximize this objective function, they achieve the same objective function value as if, rather than
making decisions in a round-robin fashion, they had cooperated and added the combination of K
infinitesimal probability mass units (one per position) that greedily maximizes the objective function.
The latter process, in turn, can be shown to be equivalent to a greedy algorithm for maximizing a
(different) submodular function subject to a cardinality constraint, which implies that it achieves
a 1 − 1/e approximation ratio [15]. TabularGreedy represents a tradeoff between these two
extremes; its performance is summarized by the following theorem. For now, we assume that the
arg max in the inner loop is computed exactly. In the extended version [9], we bound the performance
loss that results from approximating the arg max (e.g., by estimating F by repeated sampling).
Theorem 3. Suppose f is monotone submodular. Then F(G) ≥ β(K, C) · max_{S∈P} f(S), where
β(K, C) is defined as 1 − (1 − 1/C)^C − K(K−1)/(2C).
It follows that, for any ε > 0, TabularGreedy achieves a (1 − 1/e − ε) approximation factor
using a number of colors that is polynomial in K and 1/ε. The theorem will follow immediately
from the combination of two key lemmas, which we now prove. Informally, Lemma 4 analyzes the
approximation error due to the outer greedy loop of the algorithm, while Lemma 5 analyzes the
approximation error due to the inner loop.
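For intuition on how fast β(K, C) approaches 1 − 1/e, it can be evaluated directly (writing the binomial coefficient K(K−1)/2 via `math.comb`):

```python
import math

def beta(K, C):
    """Approximation factor of TabularGreedy (Theorem 3):
    1 - (1 - 1/C)^C - K(K-1)/(2C); math.comb(K, 2) == K(K-1)/2."""
    return 1 - (1 - 1 / C) ** C - math.comb(K, 2) / C

K = 5
for C in (10, 100, 1000, 10000):
    print(C, beta(K, C))
print("limit:", 1 - 1 / math.e)  # ≈ 0.6321
```

For small C the second correction term dominates (β can even be negative), which is why the number of colors must grow polynomially in K and 1/ε.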
Lemma 4. Let Gc = {g1,c, g2,c, . . . , gK,c}, and let G⁻c = G1 ∪ G2 ∪ · · · ∪ G_{c−1}. For each
color c, choose Ec ∈ R such that F(G⁻c ∪ Gc) ≥ max_{R∈Rc} F(G⁻c ∪ R) − Ec, where Rc :=
{R : ∀k ∈ [K], |R ∩ (Pk × {c})| = 1} is the set of all possible choices for Gc. Then

    F(G) ≥ β(C) · max_{S∈P} f(S) − Σ_{c=1}^C Ec,     (3.1)

where β(C) = 1 − (1 − 1/C)^C.
Proof (Sketch). We will refer to an element R of Rc as a row, and to c as the color of the row. Let
R[C] := ∪_{c=1}^C Rc be the set of all rows. Consider the function H : 2^{R[C]} → R≥0, defined as
H(R) = F(∪_{R∈R} R). We will prove the lemma in three steps: (i) H is monotone submodular, (ii)
TabularGreedy is simply the locally greedy algorithm for finding a set of C rows that maximizes
H, where the cth greedy step is performed with additive error Ec, and (iii) TabularGreedy obtains
the guarantee (3.1) for maximizing H, and this implies the same ratio for maximizing F.
To show that H is monotone submodular, it suffices to show that F is monotone submodular.
Because F(S) = E_c⃗[f(sample_c⃗(S))], and because a convex combination of monotone submodular
functions is monotone submodular, it suffices to show that for any particular coloring c⃗, the function
f(sample_c⃗(S)) is monotone submodular. This follows from the definition of sample and the fact
that f is monotone submodular.
The second claim is true by inspection. To prove the third claim, we note that the row colors for a set
of rows R can be interchanged with no effect on H(R). For problems with this special property, it is
known that the locally greedy algorithm obtains an approximation ratio of β(C) = 1 − (1 − 1/C)^C [15].
Theorem 6 of [17] extends this result to handle additive error, and yields

    F(G) = H({G1, G2, . . . , GC}) ≥ β(C) · max_{R⊆R[C] : |R|≤C} H(R) − Σ_{c=1}^C Ec.
To complete the proof, it suffices to show that max_{R⊆R[C] : |R|≤C} H(R) ≥ max_{S∈P} f(S). This
follows from the fact that for any assignment S ∈ P, we can find a set R(S) of C rows such that
sample_c⃗(∪_{R∈R(S)} R) = S with probability 1, and therefore H(R(S)) = f(S).
We now bound the performance of the inner loop of TabularGreedy.
Lemma 5. Let f* = max_{S∈P} f(S), and let Gc, G⁻c, and Rc be defined as in the statement of
Lemma 4. Then, for any c ∈ [C], F(G⁻c ∪ Gc) ≥ max_{R∈Rc} F(G⁻c ∪ R) − (K(K−1)/(2C²)) f*.
c
Proof (Sketch). Let N denote the number of partitions whose color (assigned by c⃗) is c. For R ∈ Rc,
let Δ_c⃗(R) := f(sample_c⃗(G⁻c ∪ R)) − f(sample_c⃗(G⁻c)), and let Fc(R) := F(G⁻c ∪ R) − F(G⁻c). By
definition, Fc(R) = E_c⃗[Δ_c⃗(R)] = P[N = 1] · E_c⃗[Δ_c⃗(R) | N = 1] + P[N ≥ 2] · E_c⃗[Δ_c⃗(R) | N ≥ 2],
where we have used the fact that Δ_c⃗(R) = 0 when N = 0. The idea of the proof is that
the first of these terms dominates as C → ∞, and that E_c⃗[Δ_c⃗(R) | N = 1] can be optimized
exactly simply by optimizing each element of Pk × {c} independently. Specifically, it can
be seen that E_c⃗[Δ_c⃗(R) | N = 1] = Σ_{k=1}^K fk(R ∩ (Pk × {c})) for suitable fk. Additionally,
f0(R) = P[N ≥ 2] · E_c⃗[Δ_c⃗(R) | N ≥ 2] is a monotone submodular function of a set of (item,
color) pairs, for the same reasons F is. Applying Lemma 2 with these {fk : k ≥ 0} yields
Fc(Gc) + P[N ≥ 2] · E_c⃗[Δ_c⃗(Gc) | N ≥ 2] ≥ max_{R∈Rc} Fc(R). To complete the proof, it suffices
to show P[N ≥ 2] ≤ K(K−1)/(2C²) and E_c⃗[Δ_c⃗(Gc) | N ≥ 2] ≤ f*. The first inequality holds
because, if we let M be the number of pairs of partitions that are both assigned color c, we have
P[N ≥ 2] = P[M ≥ 1] ≤ E[M] = K(K−1)/(2C²). The second inequality follows from the fact that for
any c⃗ we have Δ_c⃗(Gc) ≤ f(sample_c⃗(G⁻c ∪ Gc)) ≤ f*.
4 An algorithm for online learning of assignments
We now transform the offline algorithm of §3.2 into an online algorithm. The high-level idea behind
this transformation is to replace each greedy decision made by the offline algorithm with a no-regret
online algorithm. A similar approach was used in [16] and [18] to obtain an online algorithm for
different (simpler) online problems.
Algorithm: TGBandit (described in the full-information feedback model)
Input: integer C, sets P1, P2, . . . , PK

    for each k ∈ [K] and c ∈ [C], let Ek,c be a no-regret algorithm with action set Pk × {c}.
    for t from 1 to T do
        for each k ∈ [K] and c ∈ [C], let g^t_{k,c} ∈ Pk × {c} be the action selected by Ek,c
        for each k ∈ [K], choose ck uniformly at random from [C]. Define c⃗ = (c1, . . . , cK).
        select the set Gt = sample_c⃗({g^t_{k,c} : k ∈ [K], c ∈ [C]})
        observe ft, and let F̂t(S) := ft(sample_c⃗(S))
        for each k ∈ [K], c ∈ [C] do
            define G^{t−}_{k,c} := {g^t_{k′,c′} : k′ ∈ [K], c′ < c} ∪ {g^t_{k′,c} : k′ < k}
            for each x ∈ Pk × {c}, feed back F̂t(G^{t−}_{k,c} + x) to Ek,c as the reward for choosing x
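A sketch of TGBandit in the full-information model, using a simple Hedge (randomized weighted majority) learner as each subroutine Ek,c. The learning rate, toy reward function, and item names below are assumptions for illustration, not the paper's experimental setup:

```python
import math
import random

class Hedge:
    """Randomized weighted majority over a finite action set (full information)."""
    def __init__(self, actions, eta=0.5, seed=0):
        self.actions = list(actions)
        self.w = [1.0] * len(self.actions)
        self.eta, self.rng = eta, random.Random(seed)

    def select(self):
        r, acc = self.rng.random() * sum(self.w), 0.0
        for a, w in zip(self.actions, self.w):
            acc += w
            if r <= acc:
                return a
        return self.actions[-1]

    def update(self, rewards):  # rewards: dict action -> reward in [0, 1]
        self.w = [w * math.exp(self.eta * rewards[a])
                  for a, w in zip(self.actions, self.w)]

def tg_bandit(C, partitions, reward_fns, seed=0):
    """Sketch of TGBandit (full-information feedback). partitions[k] = P_k;
    reward_fns = f_1, ..., f_T, each mapping a set of items to [0, 1]."""
    rng = random.Random(seed)
    K = len(partitions)
    experts = {(k, c): Hedge(partitions[k], seed=seed + 1 + k * C + c)
               for k in range(K) for c in range(C)}
    total = 0.0
    for f_t in reward_fns:
        g = {kc: E.select() for kc, E in experts.items()}   # g^t_{k,c}
        colors = [rng.randrange(C) for _ in range(K)]       # c_k ~ Uniform([C])
        G_t = {g[(k, colors[k])] for k in range(K)}         # sample of G
        total += f_t(G_t)
        for (k, c), E in experts.items():
            # sampled prefix G^{t-}_{k,c}: earlier colors, plus earlier
            # partitions within the current color
            prefix = {g[(kp, colors[kp])] for kp in range(K)
                      if colors[kp] < c or (colors[kp] == c and kp < k)}
            E.update({x: f_t(prefix | ({x} if colors[k] == c else set()))
                      for x in partitions[k]})
    return total

# Invented stationary reward: normalized coverage, identical every round.
COVERS = {"a": {1, 2}, "b": {2}, "c": {3}, "d": set()}
cov = lambda S: len(set().union(*[COVERS[x] for x in S])) / 3 if S else 0.0
total = tg_bandit(C=2, partitions=[["a", "b"], ["c", "d"]],
                  reward_fns=[cov] * 50)
print(total / 50)  # average per-round reward, in [0, 1]
```

Each expert Ek,c only ever sees the marginal value of its own choice given the sampled prefix, which is exactly the feedback prescribed in the pseudocode above.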
The following theorem summarizes the performance of TGBandit.

Theorem 6. Let rk,c be the regret of Ek,c, and let β(K, C) = 1 − (1 − 1/C)^C − K(K−1)/(2C). Then

    E[ Σ_{t=1}^T ft(Gt) ] ≥ β(K, C) · max_{S∈P} { Σ_{t=1}^T ft(S) } − E[ Σ_{k=1}^K Σ_{c=1}^C rk,c ].
Observe that Theorem 6 is similar to Theorem 3, with the addition of the E[rk,c] terms. The idea
of the proof is to view TGBandit as a version of TabularGreedy that, instead of greedily
selecting single (element, color) pairs gk,c ∈ Pk × {c}, greedily selects (element vector, color) pairs
g⃗k,c ∈ Pk^T × {c} (here, Pk^T is the Tth power of the set Pk). We allow for the case that the greedy
decision is made imperfectly, with additive error rk,c ; this is the source of the extra terms. Once this
correspondence is established, the theorem follows along the lines of Theorem 3. For a proof, see the
extended version [9].
Corollary 7. If TGBandit is run with randomized weighted majority [5] as the subroutine, then

    E[ Σ_{t=1}^T ft(Gt) ] ≥ β(K, C) · max_{S∈P} { Σ_{t=1}^T ft(S) } − O( C · Σ_{k=1}^K √(T log |Pk|) ),

where β(K, C) = 1 − (1 − 1/C)^C − K(K−1)/(2C).
Optimizing for C in Corollary 7 yields (1 − 1/e)-regret Õ(K^{3/2} T^{1/4} √OPT), ignoring logarithmic
factors, where OPT := max_{S∈P} Σ_{t=1}^T ft(S) is the value of the static optimum.
Dealing with bandit feedback. TGBandit can be modified to work in the bandit feedback model.
The idea behind this modification is that on each round we "explore" with some small probability, in
such a way that on each round we obtain an unbiased estimate of the desired feedback values
F̂t(G^{t−}_{k,c} + x) for each k ∈ [K], c ∈ [C], and x ∈ Pk. This technique can be used to achieve a
bound similar to the one stated in Corollary 7, but with an additive regret term of
O((T² |V| C K)^{1/3} (log |V|)^{1/3}).
Stronger notions of regret. By substituting in different algorithms for the subroutines Ek,c , we can
obtain additional guarantees. For example, Blum and Mansour [3] consider online problems in which
we are given time-selection functions I1, I2, . . . , IM. Each time-selection function I : [T] → [0, 1]
associates a weight with each round, and defines a corresponding weighted notion of regret in the
natural way. Blum and Mansour's algorithm guarantees low weighted regret with respect to all
M time selection functions simultaneously. This can be used to obtain low regret with respect to
different (possibly overlapping) windows of time simultaneously, or to obtain low regret with respect
to subsets of rounds that have particular features. By using their algorithm as a subroutine within
TGBandit, we get similar guarantees, both in the full information and bandit feedback models.
5 Evaluation
We evaluate TGBandit experimentally on two applications: learning to rank blogs that are effective
in detecting cascades of information, and allocating advertisements to maximize revenue.
5.1 Online learning of diverse blog rankings
We consider the problem of ranking a set of blogs and news sources on the web. Our approach is
based on the following idea: A blogger writes a posting, and, after some time, other postings link to
it, forming cascades of information propagating through the network of blogs.
More formally, an information cascade is a directed acyclic graph of vertices (each vertex corresponds
to a posting at some blog), where edges are annotated by the time difference between the postings.
Based on this notion of an information cascade, we would like to select blogs that detect big
cascades (containing many nodes) as early as possible (i.e., we want to learn about an important
event before most other readers). In [13] it is shown how one can formalize this notion of utility
using a monotone submodular function that measures the informativeness of a subset of blogs.
Optimizing the submodular function yields a small set of blogs that ?covers? most cascades. This
utility function prefers diverse sets of blogs, minimizing the overlap of the detected cascades, and
therefore minimizing redundancy.
The work by [13] leaves two major shortcomings: Firstly, they select a set of blogs rather than a
ranking, which is of practical importance for the presentation on a web service. Secondly, they do
not address the problem of sequential prediction, where the set of blogs must be updated dynamically
over time. In this paper, we address these shortcomings.
[Figure 1: three plots; panels (a) "Blogs: Offline results", (b) "Blogs: Online results", (c) "Ad display: Online results".]

Figure 1: (a,b) Results for discounted blog ranking (γ = 0.8), in offline (a) and online (b) setting. (c)
Performance of TGBandit with C = 1, 2, and 4 colors for the sponsored search ad selection problem (each
round is a query). Note that C = 1 corresponds to the online algorithm of [16, 18].
Results on offline blog ranking. In order to model the blog ranking problem, we adopt the
assumption that different users have different attention spans: each user will only consider blogs
appearing in a particular subset of positions. In our experiments, we assume that the probability that
a user is willing to look at position k is proportional to γ^k, for some discount factor 0 < γ < 1.
More formally, let g be the monotone submodular function measuring the informativeness of any set
of blogs, defined as in [13]. Let Pk = B × {k}, where B is the set of blogs. Given an assignment
S ∈ P, let S^[k] = S ∩ (P1 ∪ P2 ∪ · · · ∪ Pk) be the assignment of blogs to positions 1 through k.
We define the discounted value of the assignment S as f(S) = Σ_{k=1}^K γ^k (g(S^[k]) − g(S^[k−1])). It
can be seen that f : 2^V → R≥0 is monotone submodular.
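The discounted objective can be sketched directly (with an invented toy informativeness function g given by cascade coverage):

```python
# Toy discounted blog-ranking objective (invented cascades): g is cascade
# coverage, and f(S) = sum_k gamma^k * (g(S[k]) - g(S[k-1])) for a ranking S.
GAMMA = 0.8
CASCADES = {"blogA": {1, 2, 3}, "blogB": {3, 4}, "blogC": {5}}

def g(blogs):
    covered = set()
    for b in blogs:
        covered |= CASCADES[b]
    return len(covered)

def discounted_value(ranking):
    """ranking[k-1] is the blog placed in position k."""
    total, prefix = 0.0, []
    for k, blog in enumerate(ranking, start=1):
        prev = g(prefix)
        prefix.append(blog)
        total += GAMMA ** k * (g(prefix) - prev)
    return total

print(discounted_value(["blogA", "blogB", "blogC"]))
```

Since later positions are discounted more, the objective rewards placing highly informative, non-redundant blogs early in the ranking.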
For our experiments, we use the data set of [13], consisting of 45,192 blogs, 16,551 cascades, and
2 million postings collected during 12 months of 2006. We use the population affected objective
of [13], and use a discount factor of γ = 0.8. Based on this data, we run our TabularGreedy
algorithm with varying numbers of colors C on the blog data set. Fig. 1(a) presents the results of this
experiment. For each value of C, we generate 200 rankings, and report both the average performance
and the maximum performance over the 200 trials. Increasing C leads to an improved performance
over the locally greedy algorithm (C = 1).
Results on online learning of blog rankings. We now consider the online problem where on
each round t we want to output a ranking St . After we select the ranking, a new set of cascades
occurs, modeled using a separate submodular function ft , and we obtain a reward of ft (St ). In
our experiments, we choose one assignment per day, and define ft as the utility associated with the
cascades occurring on that day. Note that ft allows us to evaluate the performance of any possible
ranking St, hence we can apply TGBandit in the full-information feedback model.
We compare the performance of our online algorithm using C = 1 and C = 4. Fig. 1(b) presents the
average cumulative reward gained over time by both algorithms. We normalize the average reward by
the utility achieved by the TabularGreedy algorithm (with C = 1) applied to the entire data set.
Fig. 1(b) shows that the performance of both algorithms rapidly (within the first 47 rounds) converges
to the performance of the offline algorithm. The TGBandit algorithm with C = 4 levels out at an
approximately 4% higher reward than the algorithm with C = 1.
5.2 Online ad display
We evaluate TGBandit for the sponsored search ad selection problem in a simple Markovian model
incorporating the value of diverse results and complex position-dependence among clicks. In this
model, each user u is defined by two sets of probabilities: pclick(a) for each ad a ∈ A, and pabandon(k)
for each position k ∈ [K]. When presented an assignment of ads {a1, a2, . . . , aK}, where ak
occupies position k, the user scans the positions in increasing order. For each position k, the user
clicks on ak with probability pclick(ak), leaving the results page forever. Otherwise, with probability
(1 − pclick(ak)) · pabandon(k), the user loses interest and abandons the results without clicking on
anything. Finally, with probability (1 − pclick(ak)) · (1 − pabandon(k)), the user proceeds to look at
position k + 1. The reward function ft is the number of clicks, which is either zero or one. We only
receive information about ft (St ) (i.e., bandit feedback).
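The Markovian user model described above can be simulated in a few lines; the ad names, two-position list, and probabilities below are toy assumptions:

```python
import random

def simulate_user(ads, p_click, p_abandon, rng):
    """One user scanning a ranked ad list under the Markovian cascade model:
    at each position, click (and leave), abandon, or continue scanning."""
    for k, ad in enumerate(ads):
        if rng.random() < p_click[ad]:
            return 1                      # click, then leave forever
        if rng.random() < p_abandon[k]:
            return 0                      # lose interest, abandon
    return 0                              # scanned everything, no click

rng = random.Random(0)
p_click = {"ad_other_type": 0.2, "ad_same_type": 0.5}
p_abandon = [0.5, 0.5]                    # a quickly-bored "type 2" user
ads = ["ad_other_type", "ad_same_type"]
n = 100_000
clicks = sum(simulate_user(ads, p_click, p_abandon, rng) for _ in range(n))
print(clicks / n)  # ≈ 0.2 + 0.8 * 0.5 * 0.5 = 0.40 expected click rate
```

The expected reward of an assignment is neither separable nor linear in the (ad, position) pairs, which is what puts this model outside the subsumed models of §2 but within the submodular framework.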
In our evaluation, there are 5 positions, 20 available ads, and two (equally frequent) types of users:
type 1 users interested in all positions (pabandon ≡ 0), and type 2 users that quickly lose interest
(pabandon ≡ 0.5). There are also two types of ads, half of type 1 and half of type 2, and users are
probabilistically more interested in ads of their own type than those of the opposite type. Specifically,
for both types of users we set pclick(a) = 0.5 if a has the same type as the user, and pclick(a) = 0.2
otherwise. In Fig. 1(c) we compare the performance of TGBandit with C = 4 to the online
algorithm of [16, 18], based on the average of 100 experiments. The latter algorithm is equivalent
to running TGBandit with C = 1. They perform similarly in the first 10⁴ rounds; thereafter the
former algorithm dominates.
It can be shown that with several different types of users with distinct pclick(·) functions the offline
problem of finding an assignment within 1 − 1/e + ε of optimal is NP-hard. This is in contrast to the
case in which pclick and pabandon are the same for all users; in this case the offline problem simply
requires finding an optimal policy for a Markov decision process, which can be done efficiently using
well-known algorithms. A slightly different Markov model of user behavior which is efficiently
solvable was considered in [1]. In that model, pclick and pabandon are the same for all users, and pabandon
is a function of the ad in the slot currently being scanned rather than its index.
6 Related Work
For a general introduction to the literature on submodular function maximization, see [19]. For
applications of submodularity to machine learning and AI see [11].
Our offline problem is known as maximizing a monotone submodular function subject to a (simple)
partition matroid constraint in the operations research and theoretical computer science communities.
The study of this problem culminated in the elegant (1 − 1/e) approximation algorithm of Vondrák [20]
and a matching unconditional lower bound of Mirrokni et al. [14]. Vondrák's algorithm, called
the continuous-greedy algorithm, has also been extended to handle arbitrary matroid constraints [4].
The continuous-greedy algorithm, however, cannot be applied to our problem directly, because it
requires the ability to sample f(·) on infeasible sets S ∉ P. In our context, this means it must have
the ability to ask (for example) what the revenue will be if ads a1 and a2 are placed in position #1
the ability to ask (for example) what the revenue will be if ads a1 and a2 are placed in position #1
simultaneously. We do not know how to answer such questions in a way that leads to meaningful
performance guarantees.
In the online setting, the most closely related work is that of Streeter and Golovin [18]. Like us, they
consider sequences of monotone submodular reward functions that arrive online, and develop an
online algorithm that uses multi-armed bandit algorithms as subroutines. The key difference from our
work is that, as in [16], they are concerned with selecting a set of K items rather than the more general
problem of selecting an assignment of items to positions addressed in this paper. Kakade et al. [10]
considered the general problem of using α-approximation algorithms to construct no-α-regret online
algorithms, and essentially proved it could be done for the class of linear optimization problems in
which the cost function has the form c(S, w) for a solution S and weight vector w, and c(S, w) is
linear in w. However, their result is orthogonal to ours, because our objective function is submodular
and not linear2 .
7 Conclusions
In this paper, we showed that important problems, such as ad display in sponsored search and
computing diverse rankings of information sources on the web, require optimizing assignments
under submodular utility functions. We developed an efficient algorithm, TABULAR G REEDY, which
obtains the optimal approximation ratio of (1 − 1/e) for this NP-hard optimization problem. We
also developed an online algorithm, TG BANDIT, that asymptotically achieves no (1 − 1/e)-regret
for the problem of repeatedly selecting informative assignments, under the full-information and
bandit-feedback settings. Finally, we demonstrated that our algorithm outperforms previous work on
two real world problems, namely online ranking of informative blogs and ad allocation.
Acknowledgments. This work was supported in part by Microsoft Corporation through a gift as well as
through the Center for Computational Thinking at Carnegie Mellon, by NSF ITR grant CCR-0122581 (The
Aladdin Center), and by ONR grant N00014-09-1-1044.
² One may linearize a submodular function by using a separate dimension for every possible function
argument, but this leads to exponentially worse convergence time and regret bounds for the algorithms in [10]
relative to TG BANDIT.
References
[1] Gagan Aggarwal, Jon Feldman, S. Muthukrishnan, and Martin Pál. Sponsored search auctions with
markovian users. In WINE, pages 621–628, 2008.
[2] Peter Auer, Nicolò Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed
bandit problem. SIAM Journal on Computing, 32(1):48–77, 2002.
[3] Avrim Blum and Yishay Mansour. From external to internal regret. Journal of Machine Learning Research,
8:1307–1324, 2007.
[4] Gruia Calinescu, Chandra Chekuri, Martin Pál, and Jan Vondrák. Maximizing a submodular set function
subject to a matroid constraint. SIAM Journal on Computing. To appear.
[5] Nicolò Cesa-Bianchi, Yoav Freund, David Haussler, David P. Helmbold, Robert E. Schapire, and Manfred K.
Warmuth. How to use expert advice. J. ACM, 44(3):427–485, 1997.
[6] Benjamin Edelman, Michael Ostrovsky, and Michael Schwarz. Internet advertising and the generalized
second price auction: Selling billions of dollars worth of keywords. American Economic Review, 97(1):242–259, 2007.
[7] Jon Feldman and S. Muthukrishnan. Algorithmic methods for sponsored search advertising. In Zhen Liu
and Cathy H. Xia, editors, Performance Modeling and Engineering. 2008.
[8] Marshall L. Fisher, George L. Nemhauser, and Laurence A. Wolsey. An analysis of approximations for
maximizing submodular set functions - II. Mathematical Programming Study, (8):73–87, 1978.
[9] Daniel Golovin, Andreas Krause, and Matthew Streeter. Online learning of assignments that maximize
submodular functions. CoRR, abs/0908.0772, 2009.
[10] Sham M. Kakade, Adam Tauman Kalai, and Katrina Ligett. Playing games with approximation algorithms.
In STOC, pages 546–555, 2007.
[11] Andreas Krause and Carlos Guestrin. Beyond convexity: Submodularity in machine learning. Tutorial at
ICML 2008. http://www.select.cs.cmu.edu/tutorials/icml08submodularity.html.
[12] Sébastien Lahaie, David M. Pennock, Amin Saberi, and Rakesh V. Vohra. Sponsored search auctions. In
Noam Nisan, Tim Roughgarden, Éva Tardos, and Vijay V. Vazirani, editors, Algorithmic Game Theory.
Cambridge University Press, New York, NY, USA, 2007.
[13] Jure Leskovec, Andreas Krause, Carlos Guestrin, Christos Faloutsos, Jeanne VanBriesen, and Natalie
Glance. Cost-effective outbreak detection in networks. In KDD, pages 420–429, 2007.
[14] Vahab Mirrokni, Michael Schapira, and Jan Vondrák. Tight information-theoretic lower bounds for welfare
maximization in combinatorial auctions. In EC, pages 70–77, 2008.
[15] George L. Nemhauser, Laurence A. Wolsey, and Marshall L. Fisher. An analysis of approximations for
maximizing submodular set functions - I. Mathematical Programming, 14(1):265–294, 1978.
[16] Filip Radlinski, Robert Kleinberg, and Thorsten Joachims. Learning diverse rankings with multi-armed
bandits. In ICML, pages 784–791, 2008.
[17] Matthew Streeter and Daniel Golovin. An online algorithm for maximizing submodular functions. Technical
Report CMU-CS-07-171, Carnegie Mellon University, 2007.
[18] Matthew Streeter and Daniel Golovin. An online algorithm for maximizing submodular functions. In
NIPS, pages 1577–1584, 2008.
[19] Jan Vondrák. Submodularity in Combinatorial Optimization. PhD thesis, Charles University, Prague,
Czech Republic, 2007.
[20] Jan Vondrák. Optimal approximation for the submodular welfare problem in the value oracle model. In
STOC, pages 67–74, 2008.
A Recurrent Neural Network for Word Identification
from Continuous Phoneme Strings
Candace A. Kamm
Bellcore
Morristown, NJ 07962-1910
Robert B. Allen
Bellcore
Morristown, NJ 07962-1910
Abstract
A neural network architecture was designed for locating word boundaries and
identifying words from phoneme sequences. This architecture was tested in
three sets of studies. First, a highly redundant corpus with a restricted
vocabulary was generated and the network was trained with a limited number of
phonemic variations for the words in the corpus. Tests of network performance
on a transfer set yielded a very low error rate. In a second study, a network was
trained to identify words from expert transcriptions of speech. On a transfer
test, error rate for correct simultaneous identification of words and word
boundaries was 18%. The third study used the output of a phoneme classifier as
the input to the word and word boundary identification network. The error rate
on a transfer test set was 49% for this task. Overall, these studies provide a first
step at identifying words in connected discourse with a neural network.
1 INTRODUCTION
During the past several years, researchers have explored the use of neural networks for
classifying spectro-temporal speech patterns into phonemes or other sub-word units (e.g.,
Harrison & Fallside, 1989; Kamm & Singhal, 1990; Waibel et al., 1989). Less effort has
focussed on the use of neural nets for identifying words from the phoneme sequences that
these spectrum-to-phoneme classifiers might produce. Several recent papers, however,
have combined the output of neural network phoneme recognizers with other techniques,
including dynamic time warping (DTW) and hidden Markov models (HMM) (e.g.,
Miyatake et al., 1990; Morgan & Bourlard, 1990).
Simple recurrent neural networks (Allen, 1990; Elman, 1990; Jordan, 1986) have been
shown to be able to recognize simple sequences of features and have been applied to
linguistic tasks such as resolution of pronoun reference (Allen, 1990). We consider
whether they can be applied to the recognition of words from phoneme sequences. This
paper presents the results of three sets of experiments using recurrent neural networks to
locate word boundaries and to identify words from phoneme sequences. The three
experiments differ primarily in the degree of similarity between the input phoneme
sequences and the input information that would typically be generated by a spectrum-tophoneme classifier.
2 NETWORK ARCHITECTURE
The network architecture is shown in Figure 1. Sentence-length phoneme sequences are
stepped past the network one phoneme at a time. The input to the network on a given
time step within a sequence consists of three 46-element vectors (corresponding to 46
phoneme classes) that identify the phoneme and the two subsequent phonemes. The
activation of state unit Si on the step at time t is a weighted sum of the activation of its
corresponding hidden unit (H) and the state unit's activation on the previous time step,
where β is the weighting factor for the hidden unit activation and μ is the state memory
weighting factor: Si,t = βHi,t−1 + μSi,t−1. In this research β = 1.0 and μ = 0.5. The output of
the network consists of one unit for each word in the lexicon and an additional unit
whose activation indicates the presence of a word boundary.
Weights from the hidden units to the word units were updated based on error observed
only at phoneme positions that corresponded to the end of a word (Allen, 1988). The end
of the phoneme sequence was padded with codes representing silence. State unit
activations were reset to zero at the end of each sentence. The network was trained using
a momentum factor of α = 0.9 and an average learning rate of η = 0.05. The learning rate
was adjusted for each output unit proportionally to the relative frequency of occurrence
of the word corresponding to that unit.
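The state-unit bookkeeping can be sketched as follows, using the update Si,t = βHi,t−1 + μSi,t−1 with β = 1.0 and μ = 0.5 (the exact time indexing of H is an assumption, reconstructed from the text) and the reset of state activations to zero between sentences:

```python
def run_state_units(hidden_seq, beta=1.0, mu=0.5):
    """hidden_seq[t][i] is hidden unit i's activation at step t of one sentence.
    Returns state activations S[t][i] = beta * H[t-1][i] + mu * S[t-1][i].
    State units are reset to zero at the end of each sentence, so S[0] = 0
    for every new sentence."""
    n_units = len(hidden_seq[0])
    s = [0.0] * n_units
    states = []
    for h in hidden_seq:
        states.append(list(s))
        # this value becomes the state at the next step
        s = [beta * hi + mu * si for hi, si in zip(h, s)]
    return states
```

With μ = 0.5 the state is an exponentially decaying sum of past hidden activations, giving the network a fading memory of the phonemes seen so far in the sentence.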
[Figure: network diagram. 138 input units (46 phoneme classes × 3 time steps, for the phonemes at t, t+1, and t+2) feed the hidden units; hidden units feed the recurrent state units; the outputs are word units 1…n plus a word-boundary unit.]
Figure 1: Recurrent Network for Word Identification
3 EXPERIMENT 1: DICTIONARY TRANSCRIPTIONS
3.1 PROCEDURE
A corpus was constructed from a vocabulary of 72 words. The words appeared in a
variety of training contexts across sentences and the sentences were constrained to a very
small set of syntactic constructions. The vocabulary set included a subset of rhyming
words. Transcriptions of each word were obtained from Webster's Seventh Collegiate
Dictionary and from an American-English orthographic-phonetic dictionary (Shoup,
1973). These transcriptions describe words in isolation only and do not reflect
coarticulations that occur when the sentences are spoken. For this vocabulary, 26 of the
words had one pronunciation, 19 had two variations, and the remaining 27 had from 3 to
17 variations. The corpus consisted of 18504 sentences of about 7 words each. 6000 of
these sentences were randomly selected and reserved as transfer sentences.
The input to the network was a sequence of 46-element vectors designed to emulate the
activations that might be obtained from a neural network phoneme classifier with 46
output classes (Kamm and Singhal, 1990). Since a phoneme classifier is likely to
generate a set of phoneme candidates for each position in a sequence, we modified the
activations in each input vector to mimic this expected situation. Confusion data were
obtained from a neural network classifier trained to map spectro-temporal input to an
output activation vector for the 46 phoneme classes. In this study, input activations for
phonemes that accounted for fewer than 5% of the confusions with the correct phoneme
remained set to 0.0, while input activations for phonemes accounting for higher
proportions of the total confusions with the correct phoneme were set to twice those
proportions, with an upper limit of 1.0. This resulted in relatively high activation levels
for one to three elements and activation of 0.0 for the others. Overall, the network had
138 (46x3) input units, 80 hidden units, 80 state units, and 73 output units (one for each
word and one boundary detection unit). The network was trained for 50000 sentence
sequences chosen randomly (with replacement) from the training corpus. Each sequence
was prepared with randomly selected transcriptions from the available dialectal
variations.
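The 5%-cutoff rule above can be sketched as follows; the confusion_row format (the proportion of the correct phoneme's confusions attributed to each class) is an assumed representation, since the paper's actual confusion statistics are not given:

```python
def confusion_input_vector(confusion_row, cutoff=0.05, scale=2.0):
    """Map one phoneme's confusion proportions to an input activation vector:
    classes accounting for less than the cutoff fraction of confusions stay at
    0.0; the rest get twice their proportion, capped at 1.0. This yields high
    activation on roughly one to three classes and zeros elsewhere."""
    return [min(scale * p, 1.0) if p >= cutoff else 0.0 for p in confusion_row]
```

For example, a row of proportions [0.6, 0.3, 0.04, 0.06] maps to activations of 1.0 and 0.6 for the first two classes, 0.0 for the third (below the cutoff), and 0.12 for the fourth.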
3.2 RESULTS
In all the experiments discussed in this paper, performance of the network was analyzed
using a sequential decision strategy. First, word boundaries were hypothesized at all
locations where the activation of the boundary unit exceeded a predefined threshold (0.5).
Then, the activations of the word units were scanned and the unit with the highest
activation was selected as the identified word. By comparing the locations of true word
boundaries with hypothesized boundaries, a false alarm rate (i.e., the number of spurious
boundaries divided by the number of non-boundaries) was computed. Word error rate
was then computed by dividing the number of incorrect words at correctly-detected word
boundaries by the total number of words in the transfer set. This word error rate includes
both deletions (i.e., missed boundaries) and word substitutions (i.e., incorrectly-identified
words at correct boundaries). Total error rate is obtained by summing the word error rate
and the false alarm rate.
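A sketch of this sequential decision strategy, returning raw error counts (the rates in the text divide false alarms by the number of non-boundary positions and word errors by the total word count):

```python
def score_sequence(boundary_act, word_act, true_boundaries, true_words,
                   threshold=0.5):
    """boundary_act[t] is the boundary unit's activation at phoneme position t;
    word_act[t] is the list of word-unit activations there. Hypothesize a
    boundary wherever boundary_act exceeds the threshold, then take the
    highest-activation word unit at each hypothesized boundary. Word errors
    count deletions (missed boundaries) plus substitutions (wrong word at a
    correctly detected boundary)."""
    hyp = {t for t, a in enumerate(boundary_act) if a > threshold}
    false_alarms = len(hyp - set(true_boundaries))
    word_errors = 0
    for t, w in zip(true_boundaries, true_words):
        if t not in hyp:
            word_errors += 1  # deletion: missed boundary
        elif max(range(len(word_act[t])), key=lambda i: word_act[t][i]) != w:
            word_errors += 1  # substitution: wrong word at a found boundary
    return false_alarms, word_errors
```

Summing the two resulting rates gives the total error rate reported throughout the paper.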
On the 6000 sentence test set, the network correctly located 99.3% of the boundaries,
with a word error rate of 1.7% and a false alarm rate of 0.3%. Overall, this yielded a total
error rate of 2.0%. To further test the robustness of this procedure to noisy input, three
networks were trained with the same procedures as above except that the input phoneme
sequences were distorted. In the first network, there was a 30% chance that input
phonemes would be duplicated. In a second network, there was a 30% chance that input
phonemes would be deleted. In the third network, there was a 70% chance that an input
phoneme would be substituted with another closely-related phoneme. Total error rates
were 11.7% for the insertion network, 20.9% for the deletion network, and 10.0% for the
substitution network.
Even with these fairly severe distortions, the network is moderately successful at the
boundary detection/word identification task. Experiment 2 was designed to study
network performance on a more diverse and realistic training set.
4 EXPERIMENT 2: EXPERT TRANSCRIPTIONS OF SPEECH
4.1 PROCEDURE
To provide a richer and more natural sample of transcriptions of continuous speech,
training sequences were derived from phonetic transcriptions of 207 sentences from the
DARPA Acoustic-Phonetic (AP) speech corpus (TIMIT) (Lamel et al., 1986). The
training set consisted of 4-5 utterances of each of 50 sentences, spoken by different
talkers. One other utterance of each sentence (spoken by different talkers) was used for
transfer tests. This corpus contained 277 unique words. For training, the transcripts were
also segmented into words. When a word boundary was spanned by a single phoneme
because of coarticulation (for example, the transcription /haejr/ for the phrase "had
your"), the coarticulated phoneme (in this example, /j/) was arbitrarily assigned to only
the first word of the phrase. These transcriptions differ from those used in Experiment 1
primarily in the amount of phonemic variation observed at word boundaries, and so
provide a more difficult boundary detection task for the network. As in Experiment 1,
the input to the network on any time step was a set of three 46-element vectors. The
original input vectors (obtained from the phonetic transcriptions) were modified based on
the phoneme confusion data described in Section 3.1. The network had 138 (3x46) input
units, 80 hidden units, 80 state units, and 278 output units.
4.2 RESULTS
After training on 80000 sentence sequences randomly selected from the 207 sentence
training set (approximately 320,000 weight updates), the network was tested on the 50
sentence transfer set. With a threshold for the boundary detection unit of 0.5, the
network was 87.5% correct at identifying word boundaries and had a false alarm rate of
2.3%. The word error rate was 15.5%. Thus, using the sequential decision strategy, the
total error rate was 17.8%.
Considering all word boundaries (i.e., not just the correctly-detected boundaries), word
identification was 90.3% correct when the top candidate only (i.e., the output unit with
the highest activation) was evaluated and 96.3% for the top three word choices. Because
there were instances where boundaries were not detected, but the word unit activations
indicated the correct word, a decision strategy that simultaneously considered both the
activation of the word boundary unit and the activation of the word units was also
explored. However, the distributions of the word unit activations at non-boundary
locations (Le., within words) and at word boundaries overlapped significantly, so this
strategy was unsuccessful at improving performance. In retrospect, this result is not very
surprising, since the network was not trained for word identification at non-boundaries.
Many of the transfer cases were similar to the training sentences, but some interesting
generalizations were observed. For example, the word "beautiful" always appeared in
the training set as /bjuf;)fi/, but its appearance in the test set as /bjuf;)t1J1/ was correctly
identified. That is, the variation in the final syllable did not prevent correct identification
of the word boundary or the word, despite the fact that the network had seen other
instances of the phoneme sequence /Ul/ in the words "woolen" and "football" (second
syllable). Of the 135 word transcriptions in the transfer set that were unique (Le., they
did not appear in the training set). the net correctly identified 72% based on the top
candidate and 85% within the top 3 candidates. Not surprisingly, performance for the
275 words in the transfer set with non-unique transcriptions was higher, with 96% correct
for the top candidate and 98% for the top 3 choices.
There was evidence that the network occasionally made use of phoneme context beyond
word boundaries to distinguish among confusable transcriptions. For example, the words
"an", "and", and "in" all appeared in the transfer set on at least one occasion as !~n/, but
each was correctly identified. However, many word identification errors were confusions
between words with similar final phonemes (e.g., confusing "she" with "be", "pewter"
with "order"). This result suggests that, in some instances, the model is not making
sufficient use of prior context.
5 EXPERIMENT 3: MACHINE-GENERATED TRANSCRIPTIONS
5.1 CORPUS AND PROCEDURE
In this experiment, the input to the network was obtained by postprocessing the output of
a spectrum-to-phoneme neural network classifier to produce sequences of phoneme
candidates. The spectrum-to-phoneme classifier (Kamm and Singhal, 1990) generates a
series of 46-element vectors of output activations corresponding to 46 phoneme classes.
The spectro-temporal input speech patterns are stepped past the classifier in 5-ms
increments, and the classifier generates output vectors on each step. Since phonemes
typically have average durations longer than 5 ms, a postprocessing stage was required to
compress the output of the classifier into a sequence of phoneme candidates appropriate
for use as input to the boundary detection/word identification neural network.
The training set was a subset of the DARPA A-P corpus consisting of 2 sentences spoken
by each of 106 talkers. The postprocessor provided output sequences for the training
sentences that were quite noisy, inserting 2233 spurious phonemes and deleting 581 of
the 7848 phonemes identified by expert transcription. Furthermore, in 2914 instances, the
phoneme candidate with highest average activation was not the correct phoneme.
However, this result was not unexpected, since the postprocessing heuristics are still
under development. The primary purpose for using the postprocessor output was to
provide a difficult input set for the boundary detection/word identification network.
After postprocessing, the highest-activation phoneme candidate sequences were mapped
to the input vectors for the boundary detection/word identification network as follows:
the vector elements corresponding to the three highest-activation phoneme candidate
classes were set to their corresponding average activation values, and the other 43
phoneme classes were set to 0.0 activation. The network had 138 (i.e., 46×3) input units,
40 hidden units, 40 state units and 22 output units (21 words and 1 boundary unit). The
network was trained for 40000 sentence sequences and then tested on a transfer set
consisting of each sentence spoken by a new set of 105 talkers. The sequences in the
transfer set were also quite noisy, with 2162 inserted phoneme positions and 775 of 7921
phonemes deleted. Further, the top candidate was not the correct phoneme in 3175
positions.
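The candidate-to-input mapping just described can be sketched as follows (written for a generic number of classes; the paper uses 46):

```python
def top3_input_vector(avg_activations, k=3):
    """Zero out all but the k highest-activation phoneme classes from the
    postprocessor, keeping the survivors at their average activation values."""
    order = sorted(range(len(avg_activations)),
                   key=lambda i: avg_activations[i], reverse=True)
    keep = set(order[:k])
    return [a if i in keep else 0.0 for i, a in enumerate(avg_activations)]
```

For a 46-class vector this leaves the three strongest candidate classes at their average activations and the other 43 classes at 0.0, exactly the sparsity described above.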
5.2 RESULTS
The boundary detection performance of this network was 56%, much poorer than for the
networks with less noisy input. Since the network sometimes identified the word
boundary at a slightly different phoneme position than had been initially identified, we
implemented a more lenient scoring criterion that scored a "correct" boundary detection
whenever the activation of the boundary unit exceeded the threshold criterion at the true
boundary position or at the position immediately preceding the true boundary. Even with
this looser scoring criterion, only 65% of the boundaries in the transfer set were correctly
detected using a boundary detection threshold of 0.5. The false alarm rate was 9% and
the word error rates were 40%, yielding a total error rate of 49%. This is much larger
than the error rate for the network in Experiment 2. This difference may be explained in
part by the presence of insertions in the input stream in this experiment as compared to
Experiment 2, which had no insertions. The results of Experiment 2 indicated that this
recurrent architecture has a limited capacity for considering past information (i.e., as
evidenced by substitution errors such as "she" for "be" and "pewter" for "order"). As a
result, poorer performance might be expected when the information required for word
boundary detection or word identification spans longer input sequences, as occurs when
the input contains extra vectors representing inserted phonemes.
5.3 NON-RECURRENT NETWORK
To evaluate the utility of the recurrent network architecture for this task, a simple nonrecurrent network was trained using the same training set. In addition to the t, t+1 and
t+2 input slots (Fig. 1), the non-recurrent network also included t-1 and t-2 slots, in an
attempt to match some of the information about past context that may be available to the
recurrent network through the state unit activations. Thus, the input required 230 (i.e.,
5×46) input units. The network had no state units, 40 hidden units and 22 output units,
and was trained through 40000 sentence sequences. On the transfer set, using a boundary
detection threshold of 0.5, 75% of the word boundaries were correctly detected, with a
false alarm rate of 31%. The word error rate was 60%. Thus, the recurrent net performed
consistently better than this non-recurrent network both in terms of fewer false alarms
and fewer word errors, despite the fact that the non-recurrent network had more weights.
These results suggest that recurrence is important for the boundary and word
identification task with this noisy input set.
6 DISCUSSION AND FUTURE DIRECTIONS
The current results suggest that this neural network model may provide a way of
integrating lower-level spectrum-to-phoneme classification networks and phoneme-to-word classification networks for automatic speech recognition. The results of these
initial experiments are moderately encouraging, demonstrating that this architecture can
be used successfully for boundary detection with moderately large (200-word) and noisy
corpora, although performance drops significantly when the input stream has many
inserted and deleted phonemes. Furthermore, these experiments demonstrate the
importance of recurrence.
Many unresolved questions about the application of this model for the word
boundary/word identification task remain. The performance of this model needs to be
compared with that of other techniques for locating and identifying words from phoneme
sequences (for example, the two-level dynamic programming algorithm described by
Levinson et al., 1990).
Word-identification performance of the model (based on the output class with highest
activation) is far from perfect, suggesting that additional strategies are needed to improve
performance. First, word identification errors substituting "she" for "be" and "pewter" for
"order" suggest that the network sometimes uses information only from one or two
previous time steps to make word choices. Efforts to extend the persistence of
infonnation in the state units beyond this limit may improve perfonnance, and may be
especially helpful when the input is corrupted by phoneme insertions. Another possible
strategy for improving performance would be to use the locations and identities of words
whose boundaries and identities can be hypothesized with high certainty as "islands of
reliability". These anchor points could then help determine whether the word choice at a
less certain boundary location is a reasonable one, based on features like word length (in
phonemes) or semantic or syntactic constraints. In addition, an algorithm that considers
more than just the top candidate at each hypothesized word position and that uses
semantic and syntactic constraints for reducing ambiguity might prove more robust
than the single-best-choice word identification strategy used in the current experiments.
Schemes that attempt to identify word sequences without specifically locating word
boundaries should be explored. The question of whether this network architecture will
scale to successfully handle still larger corpora and realistic applications also requires
further study. These unresolved issues notwithstanding, the current work demonstrates
the feasibility of an integrated neural-based system for performing several levels of
processing of speech, from spectro-temporal pattern classification to word identification.
References
Allen, R.B. Sequential connectionist networks for answering simple questions about a microworld.
Proceedings of the Cognitive Science Society, 489-495, 1988.
Allen, R.B. Connectionist language users. Connection Science, 2, 279-311, 1990.
Elman, J. L. Finding structure in time. Cognitive Science, 14, 179-211, 1990.
Harrison, T. and Fallside, F. A connectionist model for phoneme recognition in continuous speech.
Proc. ICASSP 89, 417-420, 1989.
Jordan, M. I. Serial order: A parallel distributed processing approach. (Tech. Rep. No. 8604). San
Diego: University of California, Institute for Cognitive Science, 1986.
Kamm, C. and Singhal, S. Effect of neural network input span on phoneme classification. Proc.
IJCNN, June 1990, 1, 195-200, 1990.
Lamel, L., Kassel, R. and Seneff, S. Speech database development: Design and analysis of the
acoustic-phonetic corpus. Proc. DARPA Speech Recognition Workshop, 100-109, 1986.
Levinson, S. E., Ljolje, A. and Miller, L. G. Continuous speech recognition from a phonetic
transcription. Proc. ICASSP-90, 93-96, 1990.
Miyatake, M., Sawai, H., Minami, Y. and Shikano, H. Integrated training for spotting Japanese
phonemes using large phonemic time-delay neural networks. Proc. ICASSP 90, 449-452, 1990.
Morgan, N. and Bourlard, H. Continuous speech recognition using multilayer perceptrons with
hidden Markov models. Proc. ICASSP 90, 413-416, 1990.
Shoup, J. E. American English Orthographic-Phonemic Dictionary. NTIS Report AD763784,
1973.
Waibel, A., Hanazawa, T., Hinton, G., Shikano, K. and Lang, K. Phoneme recognition using time-delay neural networks. IEEE Trans. ASSP, 37, 328-339, 1989.
Webster's Seventh Collegiate Dictionary. Springfield, MA: Merriam Company, 1972.
A Bayesian Analysis of Dynamics in Free Recall
Richard Socher
Department of Computer Science
Stanford University
Stanford, CA 94305
[email protected]
Samuel J. Gershman, Adler J. Perotte, Per B. Sederberg
Department of Psychology
Princeton University
Princeton, NJ 08540
{sjgershm,aperotte,persed}@princeton.edu
Kenneth A. Norman
Department of Psychology
Princeton University
Princeton, NJ 08540
[email protected]
David M. Blei
Department of Computer Science
Princeton University
Princeton, NJ 08540
[email protected]
Abstract
We develop a probabilistic model of human memory performance in free recall
experiments. In these experiments, a subject first studies a list of words and then
tries to recall them. To model these data, we draw on both previous psychological
research and statistical topic models of text documents. We assume that memories
are formed by assimilating the semantic meaning of studied words (represented
as a distribution over topics) into a slowly changing latent context (represented
in the same space). During recall, this context is reinstated and used as a cue for
retrieving studied words. By conceptualizing memory retrieval as a dynamic latent
variable model, we are able to use Bayesian inference to represent uncertainty and
reason about the cognitive processes underlying memory. We present a particle
filter algorithm for performing approximate posterior inference, and evaluate our
model on the prediction of recalled words in experimental data. By specifying the
model hierarchically, we are also able to capture inter-subject variability.
1 Introduction
Modern computational models of verbal memory assume that the recall of items is shaped by their
semantic representations. The precise nature of this relationship is an open question. To address
it, recent research has used information from diverse sources, such as behavioral data [14], brain
imaging [13] and text corpora [8]. However, a principled framework for integrating these different
types of information is lacking. To this end, we develop a model of human memory that encodes
probabilistic dependencies between multiple information sources and the hidden variables that couple
them. Our model lets us combine multiple sources of information and multiple related memory
experiments.
Our model builds on the Temporal Context Model (TCM) of [10, 16]. TCM was developed to explain
the temporal structure of human behavior in free recall experiments, where subjects are presented
with lists of words (presented one at a time) and then asked to recall them in any order. TCM posits a
slowly changing mental context vector whose evolution is driven by lexical input. At study, words
are bound to context states through learning; during recall, context information is used as a cue
to probe for stored words. TCM can account for numerous regularities in free recall data, most
prominently the finding that subjects tend to consecutively recall items that were studied close in
time to one another. (This effect is called the temporal contiguity effect.) TCM explains this effect
by positing that recalling an item also triggers recall of the context state that was present when the
item was studied; subjects can use this retrieved context state to access items that were studied close
in time to the just-recalled item. The fact that temporal contiguity effects in TCM are mediated
indirectly (via item-context associations) rather than directly (via item-item associations) implies that
temporal contiguity effects should persist when subjects are prevented from forming direct item-item
associations; for evidence consistent with this prediction, see [9].
Importantly, temporal structure is not the only organizing principle in free recall data: Semantic
relatedness between items also influences the probability of recalling them consecutively [11].
Moreover, subjects often recall semantically-related items that were not presented at study. (These are
called extra-list intrusions; see [15].) To capture this semantic structure, we will draw on probabilistic
topic models of text documents, specifically latent Dirichlet allocation (LDA) [3]. LDA is an
unsupervised model of document collections that represents the meaning of documents in terms of
a small number of ?topics,? each of which is a distribution over words. When fit to a corpus, the
most probable words of these distributions tend to represent the semantic themes (like ?sports? or
?chemistry?) that permeate the collection. LDA has been used successfully as a psychological model
of semantic representation [7].
We model free recall data by combining the underlying assumptions of TCM with the latent semantic
space provided by LDA. Specifically, we reinterpret TCM as a dynamic latent variable model where
the mental context vector specifies a distribution over topics. In other words, the human memory
component of our model represents the drifting mental context as a sequence of mixtures of topics, in
the same way that LDA represents documents. With this representation, the dynamics of the mental
context are determined by two factors: the posterior probability over topics given a studied or recalled
word (semantic inference) and the retrieval of previous contexts (episodic retrieval). These dynamics
let us capture both the episodic and semantic structure of human verbal memory.
The work described here goes beyond prior TCM modeling work in two ways: First, our approach
allows us to infer the trajectory of the context vector over time, which (in turn) allows us to predict
the item-by-item sequence of word recalls; by contrast, previous work (e.g., [10, 16]) has focused
on fitting the summary statistics of the data. Second, we model inter-subject variability using a
hierarchical model specification; this approach allows us to capture both common and idiosyncratic
features of the behavioral data.
The rest of the paper is organized as follows. In Section 2 we describe LDA and in Section 3 we
describe our model, which we refer to as LDA-TCM. In Section 4 we describe a particle filter for
performing posterior inference in this model. In Section 5.1 we present simulation results showing
how this model reproduces fundamental behavioral effects in free recall experiments. In Section
5.2 we present inference results for a dataset collected by Sederberg and Norman in which subjects
performed free recall of words.
2 Latent Dirichlet allocation
Our model builds on probabilistic topic models, specifically latent Dirichlet allocation. Latent
Dirichlet allocation (LDA) is a probabilistic model of document collections [3]. LDA posits a set
of K topics, each of which is a distribution over a fixed vocabulary, and documents are represented
as mixtures over these topics. Thus, each word is assumed to be drawn from a mixture model with
corpus-wide components (i.e., the topics) and document-specific mixture proportions. When fit to a
collection of documents, the topic distributions often reflect the themes that permeate the document
collection.
More formally, assume that there are K topics β_k, each of which is a distribution over words. (We
will call the K × W matrix β the word distribution matrix.) For each document, LDA assumes the
following generative process:
1. Choose topic proportions θ ∼ Dir(α).
2. For each of the N words w_n:
(a) Choose a topic assignment z_n ∼ Mult(θ).
(b) Choose a word w_n ∼ Mult(β_{z_n}).
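As an illustration, the generative process above can be sketched in a few lines; the 2-topic, 3-word matrix below is a toy example, not from the paper:

```python
import numpy as np

def sample_document(beta, alpha, n_words, rng):
    """Sample one document from the LDA generative process.

    beta:  K x W array; row k is topic k's distribution over the W-word vocabulary.
    alpha: length-K Dirichlet parameter for the topic proportions.
    """
    K, W = beta.shape
    theta = rng.dirichlet(alpha)          # 1. draw topic proportions
    words = []
    for _ in range(n_words):
        z = rng.choice(K, p=theta)        # 2a. draw a topic assignment
        w = rng.choice(W, p=beta[z])      # 2b. draw a word from that topic
        words.append(w)
    return theta, words

rng = np.random.default_rng(0)
beta = np.array([[0.9, 0.1, 0.0],   # topic 0 puts most mass on word 0
                 [0.0, 0.1, 0.9]])  # topic 1 puts most mass on word 2
theta, words = sample_document(beta, alpha=[1.0, 1.0], n_words=5, rng=rng)
```

Fitting LDA reverses this process, recovering topics and proportions from observed words.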
Figure 1: A graphical model of LDA-TCM.
Given a collection of documents, posterior inference in LDA essentially reverses this process to
decompose the corpus according to its topics and find the corresponding distributions over words.
Posterior inference is intractable, but many approximation algorithms have been developed [3, 7, 17].
In addition to capturing the semantic content of documents, recent psychological work has shown
that several aspects of LDA make it attractive as a model of human semantic representation [7]. In
our model of memory, the topic proportions θ play the role of a "mental context" that guides memory
retrieval by parameterizing a distribution over words to recall.
3 Temporal context and memory
We now turn to a model of human memory that uses the latent representation of LDA to capture the
semantic aspects of recall experiments. Our data consist of two types of observations: a corpus of
documents from which we have obtained the word distribution matrix,¹ and behavioral data from
free recall experiments, which are studied and recalled words from multiple subjects over multiple
runs of the experiment. Our goal is to model the psychological process of recall in terms of a drifting
mental context.
The human memory component of our model is based on the Temporal Context Model (TCM). There
are two core principles of TCM: (1) Memory retrieval involves reinstating a representation of context
that was active at the time of study; and (2) context change is driven by features of the studied stimuli
[10, 16, 14]. We capture these principles by representing the mental context drift of each subject
with a trajectory of latent variables θ_n. Our use of the same variable name (θ) and dimensionality
for the context vector and for topics reflects our key assertion: Context and topics reside in the same
meaning space.
The relationship between context and topics is specified in the generative process of the free recall
data. The generative process encompasses both the study phase and the recall phase of the memory
experiment. During study, the model specifies the distribution of the trajectory of internal mental
contexts of the subject. (These variables are important in the next phase when recalling words
episodically.) First, the initial mental context is drawn from a Gaussian:
θ_{s,0} ∼ N(0, σI),  (1)
where s denotes the study phase and I is the K × K identity matrix.² Then, for each studied word the mental context drifts according to
θ_{s,n} ∼ N(h_{s,n}, σI),  (2)
where
h_{s,n} = λ₁ θ_{s,n−1} + (1 − λ₁) log(p̂_{s,n}).  (3)
¹ For simplicity, we fix the word distribution matrix to one fit using the method of [3]. In future work, we will explore how the data from the free recall experiment could be used to constrain estimates of the word distribution matrix.
² More precisely, context vectors are log-transformed topic vectors (see [1, 2]). When generating words from the topics, we renormalize the context vector.
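Under the reconstructed notation, one study-phase drift step can be sketched numerically as follows; the function name, toy topic matrix, and parameter values are illustrative assumptions, not the paper's code:

```python
import numpy as np

def study_drift(theta_prev, word, beta, lam1, sigma, rng):
    """One study-phase drift step: theta_{s,n} ~ N(h_{s,n}, sigma*I), with
    h_{s,n} = lam1 * theta_{s,n-1} + (1 - lam1) * log(p_hat_{s,n})."""
    p_hat = beta[:, word] / beta[:, word].sum()  # posterior over topics given the word
    h = lam1 * theta_prev + (1.0 - lam1) * np.log(p_hat)
    return rng.normal(h, sigma)

rng = np.random.default_rng(1)
beta = np.array([[0.9, 0.1, 0.0],
                 [0.0, 0.1, 0.9]]) + 1e-6        # smoothed to avoid log(0)
theta0 = rng.normal(0.0, 1.0, size=2)            # initial context, theta_{s,0}
theta1 = study_drift(theta0, word=0, beta=beta, lam1=0.2, sigma=1e-5, rng=rng)
```

Studying word 0, which belongs almost entirely to topic 0, pulls the context vector strongly toward that topic.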
This equation identifies the two pulls on mental context drift when the subject is studying words: the previous context vector θ_{s,n−1}, and p̂_{s,n} ∝ β_{·,w_{s,n}}, the posterior probabilities of each topic given the current word and the topic distribution matrix. This second term captures the idea that mental context is updated with the meaning of the current word (see also [2] for a related treatment of topic dynamics in the context of text modeling). For example, if the studied word is "stocks" then the mental context might drift toward topics that also have words like "business", "financial", and "market" with high probability. (Note that this is where the topic model and memory model are coupled.) The parameter λ₁ controls the rate of drift, while σ controls its noisiness.
During recall, the model specifies a distribution over drifting contexts and recalled words. For each
time t, the recalled word is assumed to be generated from a mixture of two components. Effectively,
there are two "paths" to recalling a word: a semantic path and an episodic path.
The semantic path recalls words by "free associating" according to the LDA generative process:
Using the current context as a distribution over topics, it draws a topic randomly and then draws a
word from this topic (this is akin to thinking of a word that is similar in meaning to just-recalled
words). Formally, the probability of recalling a word via the semantic path is expressed as the
marginal probability of that word induced by the current context:
P_s(w) = π(θ_{r,t}) · β_{·,w},  (4)
where π is a function that maps real-valued vectors onto the simplex (i.e., positive vectors that sum to
one) and the index r denotes the recall phase.
The episodic path recalls words by drawing them exclusively from the set of studied words. This path
puts a high probability on words that were studied in a context that resembles the current context
(this is akin to remembering words that you studied when you were thinking about things similar to
what you are currently thinking about). Formally, the episodic distribution over words is expressed as
a weighted sum of delta functions (each corresponding to a word distribution that puts all its mass on
a single studied word), where the weight for a particular study word is determined by the similarity
of the context at recall to the state of context when the word was studied:
P_e(w) = u_{t,w} / Σ_i u_{t,i},  (5)
where
u_t = Σ_{n=1}^N δ_{s,w_{s,n}} / d(π(θ_{r,t}), π(θ_{s,n}))^τ.
Here d(·, ·) is a similarity function between distributions (here we use the negative KL-divergence)
and τ is a parameter controlling the curvature of the similarity function. We define {δ_{s,w_{s,n}}}_{n=1}^N to
be delta functions defined at study words. Because people tend not to repeatedly recall words, we
remove the corresponding delta function after a word is recalled.
Our model assumes that humans use some mixture of these two paths, determined by a mixing
proportion γ. Letting w_{r,t} ∼ Mult(φ_t), we have
φ_t(w) = γ P_s(w) + (1 − γ) P_e(w).  (6)
Intuitively, γ in Equation 6 controls the balance between semantic influences and episodic influences.
When γ approaches 1, we obtain a "pure semantic" model wherein words are recalled essentially by
free association (this is similar to the model used by [7] to model semantically-related intrusions in
free recall). When γ approaches 0, we obtain a "pure episodic" model wherein words are recalled
exclusively from the study list. An intermediate value of γ is essential to simultaneously explaining
temporal contiguity and semantic effects in memory.
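A minimal sketch of the two recall paths and their mixture, assuming a softmax simplex map and an inverse-divergence similarity weighting; the names, the exact similarity weighting, and all parameter values here are illustrative readings of the reconstructed equations, not the paper's implementation:

```python
import numpy as np

def softmax_simplex(theta):
    """Map a real-valued context vector onto the simplex (the pi(.) function)."""
    e = np.exp(theta - theta.max())
    return e / e.sum()

def recall_distribution(theta_r, study_thetas, study_words, beta, gamma, tau):
    """Mixture of the semantic and episodic recall paths.

    Hypothetical names: gamma = semantic/episodic mixing weight,
    tau = curvature of the context-similarity weighting.
    """
    K, W = beta.shape
    ctx = softmax_simplex(theta_r)
    # Semantic path: marginal word probabilities under the current context.
    p_sem = ctx @ beta
    p_sem = p_sem / p_sem.sum()
    # Episodic path: studied words weighted by how similar the current context
    # is to each word's study context (inverse KL divergence, one plausible reading).
    u = np.zeros(W)
    for theta_s, w in zip(study_thetas, study_words):
        q = softmax_simplex(theta_s)
        kl = np.sum(ctx * np.log(ctx / q))
        u[w] += 1.0 / (kl + 1e-8) ** tau
    p_epi = u / u.sum()
    return gamma * p_sem + (1.0 - gamma) * p_epi

beta = np.array([[0.8, 0.1, 0.1],
                 [0.1, 0.1, 0.8]])
study_thetas = [np.array([2.0, -2.0]), np.array([-2.0, 2.0])]
study_words = [0, 2]
phi = recall_distribution(np.array([2.0, -2.0]), study_thetas, study_words,
                          beta, gamma=0.3, tau=1.7)
```

Because the recall context matches the study context of word 0, the episodic path concentrates most of its mass there.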
Finally, the context drifts according to
θ_{r,t+1} ∼ N(h_{r,t}, σI),  (7)
where
h_{r,t} = λ₂ θ_{r,t} + λ₃ log(p̂_{r,t}) + λ₄ θ_{s,n(w_{r,t})}.  (8)
This is similar to how context drifts in the study phase, except that the context is additionally pushed
by the context that was present when the recalled word was studied. This is obtained mathematically
by defining n(wr,t ) to be a mapping from a recalled word to the index of the same word at study. For
Figure 2: Simulated and empirical recall data. Data replotted from [9]. (Left) Probability of first
recall curve. (Right) Conditional response probability curve.
example, if the recalled word is "cat" and cat was the sixth studied word then n(w_{r,t}) = 6. If there
is a false recall, i.e., the subject recalls a word that was not studied, then θ_{s,n(w_{r,t})} is set to the zero
vector.
This generative model is depicted graphically in Figure 1, where Ω = {λ_{1:4}, σ, γ, τ} represents the
set of model parameters and η is the set of hyperparameters.
To model inter-subject variability, we extend our model hierarchically, defining group-level prior
distributions from which subject-specific parameters are assumed to be drawn [6]. This approach
allows for inter-subject variability and, at the same time, it allows us to gain statistical strength from
the ensemble by coupling subjects in terms of higher-level hyperparameters. We choose our group
prior over subject i's parameters to factorize as follows:
P(λ^i_{1:4}, σ^i, γ^i, τ^i) = P(λ^i_1) P(λ^i_{2:4}) P(σ^i) P(γ^i) P(τ^i).  (9)
In more detail, the factors take on the following functional forms: λ^i_1 ∼ Beta(c, d), λ^i_{2:4} ∼ Dir(κ), σ^i ∼ Exp(ν), γ^i ∼ Beta(a, b), τ^i ∼ Gamma(ρ₁, ρ₂). Except where mentioned otherwise,
we used the following hyperparameter values: a = b = c = d = 1, κ = [1, 1, 1], ρ₁ = 1, ρ₂ = 1.
For some model variants (described in Section 5.2) we set the parameters to a fixed value rather than
inferring them.
Here, we use the model to answer the following questions about behavior in free recall experiments:
(1) Do both semantic and temporal factors influence recall, and if so what are their relative contributions; (2) What are the relevant dimensions of variation across subjects? In our model, semantic
and temporal factors exert their influence via the context vector, while variation across subjects is
expressed in the parameters drawn from the group prior. Thus, our goal in inference is to compute the
posterior distribution over the context trajectory and subject-specific parameters, given a sequence
of studied and recalled words. We can also use this posterior to make predictions about what words
will be recalled by a subject at each point during the recall phase. By comparing the predictive
performance of different model variants, we can examine what types of model assumptions (like the
balance between semantic and temporal factors) best capture human behavior.
4 Inference
We now describe an approximate inference algorithm for computing the posterior distribution. Letting
Φ = {θ_{s,0:N}, θ_{r,1:T}, Ω}, the posterior is:
P(Φ | W) = P(w_{r,1:T} | θ_{s,1:N}, θ_{r,1:T}, w_{s,1:N}) P(θ_{r,1:T} | θ_{s,1:N}) P(θ_{s,1:N} | w_{s,1:N}, θ_{s,0}) P(θ_{s,0}) P(Ω) / P(w_{s,1:N}, w_{r,1:T}).  (10)
Because computing the posterior exactly is intractable (the denominator involves a high-dimensional
integral that cannot be solved exactly), we approximate it with a set of C samples using the particle
filter algorithm [4], which can be summarized as follows. At time t > 0:
Figure 3: Factors contributing to context change during recall on a single list. (Left) Illustration of how
three successively recalled words influence context. Each column corresponds to a specific recalled
word (shown in the top row). The bars in each cell correspond to individual topics (specifically,
these are the top ten inferred topics at recall; the center legend shows the top five words associated
with each topic). Arrows schematically indicate the flow of influence between the components. The
context vector at recall (Middle Row) is updated by the posterior over topics given the recalled word
(Top Row) and also by retrieved study contexts (Bottom Row). (Right) Plot of the inferred context
trajectory at study and recall for a different list, in a 2-dimensional projection of the context space
obtained by principal components analysis.
1. Sample a recall context θ_{r,t}^{(c)} using (7).
2. Compute weights v_t^{(c)} ∝ P(w_{r,t} | θ_{r,t}^{(c)}) using (6).
3. Resample the particles according to their weights.
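These three steps can be sketched as a skeletal particle filter; the Gaussian transition and squared-error likelihood below are stand-ins for the model's transition and recall distributions, used only to make the sketch runnable:

```python
import numpy as np

def particle_filter_step(particles, weights_fn, transition_fn, rng):
    """One step of the particle filter described above.

    particles:     (C, K) array of context samples.
    transition_fn: propagates each particle (the transition model).
    weights_fn:    likelihood of the observed recall given a context.
    """
    C = particles.shape[0]
    particles = transition_fn(particles)              # 1. propagate
    v = np.array([weights_fn(p) for p in particles])  # 2. weight
    v = v / v.sum()
    idx = rng.choice(C, size=C, p=v)                  # 3. resample
    return particles[idx], v

rng = np.random.default_rng(3)
C, K = 100, 2
particles = rng.normal(0.0, 1.0, size=(C, K))
target = np.array([1.0, -1.0])                        # stand-in "true" context
transition = lambda ps: ps + rng.normal(0.0, 0.1, size=ps.shape)
likelihood = lambda p: np.exp(-np.sum((p - target) ** 2))
particles, v = particle_filter_step(particles, likelihood, transition, rng)
```

Resampling concentrates the particle cloud on contexts that explain the observed recall, which is what lets the filter track the latent context trajectory over successive recalls.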
Using this sample-based approximation, the posterior is approximated as a sum of the delta functions
placed at the samples:
P(Φ | W) ≈ (1/C) Σ_{c=1}^C δ(Φ − Φ^{(c)}).  (11)
5 Results
We evaluate our model in two ways. First, we generate data from the generative model and record
a number of common psychological measurements to assess to what extent the model reproduces
qualitative patterns of recall behavior. Second, we perform posterior inference and evaluate the
predictive performance of the model on a real dataset gathered by Sederberg and Norman.
5.1 Simulations
For the simulations, the following parameters were used: λ₁ = 0.2, λ₂ = 0.55, λ₃ = 0.05, σ =
0.00001, γ = 0.2, τ = 1.7. Note that these parameters have not been fit quantitatively to the data; here
we are simply trying to reproduce qualitative patterns. These values have been chosen heuristically
without a systematic search through the parameter space. The results are averaged over 400 random
study lists of 12 words each. In Figure 2, we compare our simulation results to data collected by [9].
Figure 2 (left) shows the probability of first recall (PFR) curve, which plots the probability of each
list position being the first recalled word. This curve illustrates how words in later positions are more
likely to be recalled first, a consequence (in our model) of initializing the recall context with the last
study context. Figure 2 (right) shows the lag conditional response probability (lag-CRP) curve, which
plots the conditional probability of recalling a word given the last recalled word as a function of the
lag (measured in terms of serial position) between the two. This curve demonstrates the temporal
Figure 4: (Left) Box-plot of average predictive log-probability of recalled words under different
models. S: pure semantic model; E: pure episodic model. Green line indicates chance. See text for
more detailed descriptions of these models. (Right) Box-plot of inferred parameter values across
subjects.
contiguity effect observed in human recall behavior: the increased probability of recalling words that
were studied nearby in time to the last-recalled word. As in TCM, this effect is present in our model
because items studied close in time to one another have similar context vectors; as such, cuing with
contextual information from time t will facilitate recall of other items studied in temporal proximity
to time t.
5.2 Modeling psychological data
The psychological data modeled here are from a not-previously-published dataset collected by
Sederberg and Norman. 30 participants studied 8 lists of words for a delayed free-recall task. Each
list was composed of 15 common nouns, chosen at random and without replacement from one of 28
categories, such as Musical Instruments, Sports, or Four-footed Animals. After fitting LDA to the
TASA corpus [5], we ran the particle filter with 1000 particles on the Sederberg and Norman dataset.
Our main interest here is comparing our model (which we refer to as the semantic-episodic model)
against various special hyperparameter settings that correspond to alternative psychological accounts
of verbal memory. The models being compared include:
1. Pure semantic: defined by drawing words exclusively from the semantic path, with ? = 1.
This type of model has been used by [7] to examine semantic similarity effects in free recall.
2. Pure episodic: defined by drawing words exclusively from the episodic path, with ? = 0.
3. Semantic-episodic: a = b = 1 (uniform beta prior on γ). This corresponds to a model in
which words are drawn from a mixture of the episodic and semantic paths.
We also compare against a null (chance) model in which all words in the vocabulary have an equal
probability of being recalled.
As a metric of model comparison, we calculate the model's predictive probability for the word
recalled at time t given words 1 to t − 1, for all t:
Σ_{t=1}^T − log p(w_{r,t} | w_{r,1:t−1}, w_{s,1:N}).  (12)
This metric is proportional to the accumulative prediction error [19], a variant of cross-validation
designed for time series models.
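A direct transcription of this metric as a hypothetical helper, where each entry of pred_probs is the model's predictive probability of the word actually recalled at that time step:

```python
import math

def accumulated_log_loss(pred_probs):
    """Sum over recall events of -log p(w_{r,t} | w_{r,1:t-1}, w_{s,1:N})  (Eq. 12)."""
    return -sum(math.log(p) for p in pred_probs)

# Two recall events predicted with probability 0.5 and 0.25 respectively.
loss = accumulated_log_loss([0.5, 0.25])
```

Lower totals indicate better online prediction; a model that assigned probability 1 to every recalled word would score 0.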
To assure ourselves that the particle filter we used does not suffer from weight degeneracy, we
also calculated the effective sample size, as recommended by [4]: ESS = (Σ_{c=1}^C (v^{(c)})²)^{−1}.
Conventionally, it is desirable that the effective sample size is at least half the number of particles.
This desideratum was satisfied for all the models we explored.
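The effective sample size is straightforward to compute from the normalized weights; a small sketch (uniform weights give the maximum ESS, equal to the number of particles, while a fully degenerate filter gives 1):

```python
import numpy as np

def effective_sample_size(weights):
    """ESS = 1 / sum_c (v^{(c)})^2 for normalized particle weights v."""
    v = np.asarray(weights, dtype=float)
    v = v / v.sum()
    return 1.0 / np.sum(v ** 2)

ess_uniform = effective_sample_size(np.ones(1000))           # maximal: particle count
ess_degenerate = effective_sample_size([1.0] + [0.0] * 999)  # all weight on one particle
```

The half-the-particles criterion mentioned above amounts to checking ESS >= C/2 after each weighting step.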
Before we present the quantitative results, it is useful to examine some examples of inferred context
change and how it interacts with word recall. Figure 3 shows the different factors at work in
generating context change during recall on a single trial, illustrating how semantic inference and
retrieved episodic memories combine to drive context change. The legend showing the top words in
each topic illustrates how these topics appear to capture some of the semantic structure of the recalled
words. On the right of Figure 3, we show another representation of context change (from a different
trial), where the context trajectory is projected onto the first two principal components of the context
vector. We can see from this figure how recall involves reinstatement of studied contexts: Recalling a
word pulls the inferred context vector in the direction of the (inferred) contextual state associated
with that word at study.
Figure 4 (left) shows the average predictive log-probability of recalled words for the models described
above. Overall, the semantic-episodic model outperforms the pure episodic and pure semantic
models in predictive accuracy (superiority over the closest competitor, the pure episodic model, was
confirmed by a paired-sample t-test, with p < 0.002). To gain deeper insight into this pattern of
results, consider the behavior of the different "pure" models with respect to extra-list intrusions
vs. studied list items. The pure episodic model completely fails to predict extra-list intrusions,
because it restricts recall to the study list (i.e., it assigns zero predictive probability to extra-list
items). Conversely, the pure semantic model does a poor job of predicting recall of studied list items,
because it does not scope recall to the study list. Thus, each of these models is hobbled by crucial (but
complementary) shortcomings. The semantic-episodic model, by occupying an intermediate position
between these two extremes, is able to capture both the semantic and temporal structure in free recall.
Our second goal in inference was to examine individual differences in parameter fits. Figure 4
(right) shows box-plots of the different parameters. In some cases there is substantial variability
across subjects, such as for the similarity parameter. Another pattern to notice is that the values of
the episodic-semantic trade-off parameter tend to cluster close to 0 (the episodic extreme of the
spectrum), consistent with the fact that the pure episodic and semantic-episodic models are fairly
comparable in predictive accuracy. Future work will assess the extent to which these across-subject
differences in parameter fits reflect stable individual differences in memory functioning.
6 Discussion
We have presented here LDA-TCM, a probabilistic model of memory that integrates semantic and
episodic influences on recall behavior. By formalizing this model as a probabilistic graphical model,
we have provided a common language for developing and comparing more sophisticated variants. Our
simulation and empirical results show that LDA-TCM captures key aspects of the experimental data
and provides good accuracy at making item-by-item recall predictions. The source code for learning
and inference and the experimental datasets are available at www.cs.princeton.edu/~blei.
There are a number of advantages to adopting a Bayesian approach to modeling free recall behavior.
First, it is easy to integrate more sophisticated semantic models such as hierarchical Dirichlet
processes [18]. Second, hierarchical model specification gives us the power to capture both common
and idiosyncratic behavioral patterns across subjects, thereby opening a window onto individual
differences in memory. Finally, this approach makes it possible to integrate other sources of data, such
as brain imaging data. In keeping with the graphical model formalism, we plan to augment LDA-TCM
with additional nodes representing variables measured with functional magnetic resonance imaging
(fMRI). Existing studies have used fMRI data to decode semantic states in the brain [12] and predict
recall behavior at the level of semantic categories [13]. Incorporating fMRI data into the model will
have several benefits: The fMRI data will serve as an additional constraint on the inference process,
thereby improving our ability to track subjects' mental states during encoding and recall; fMRI will
give us a new way of validating the model, since we will be able to measure the model's ability to predict
both brain states and behavior; also, by examining the relationship between latent context states and
fMRI data, we will gain insight into how mental context is instantiated in the brain.
Acknowledgements
RS acknowledges support from the Francis Robbins Upton Fellowship and the ERP Fellowship. This
work was done while RS was at Princeton University. PBS acknowledges support from National
Institutes of Health research grant MH080526.
8
References
[1] J. Aitchison. The statistical analysis of compositional data. Journal of the Royal Statistical Society, Series B (Methodological), pages 139-177, 1982.
[2] D.M. Blei and J.D. Lafferty. Dynamic topic models. In Proceedings of the 23rd International Conference on Machine Learning, pages 113-120. ACM, New York, NY, USA, 2006.
[3] D.M. Blei, A.Y. Ng, and M.I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[4] A. Doucet and N. de Freitas. Sequential Monte Carlo Methods in Practice. Springer, 2001.
[5] S.T. Dumais and T.K. Landauer. A solution to Plato's problem: The latent semantic analysis theory of acquisition, induction and representation of knowledge. Psychological Review, 104:211-240, 1997.
[6] A. Gelman and J. Hill. Data Analysis Using Regression and Multilevel/Hierarchical Models. Cambridge University Press, 2007.
[7] T.L. Griffiths, M. Steyvers, and J.B. Tenenbaum. Topics in semantic representation. Psychological Review, 114(2):211-244, 2007.
[8] M.W. Howard, B. Jing, K.M. Addis, and M.J. Kahana. Semantic structure and episodic memory. Handbook of Latent Semantic Analysis, pages 121-142, 2007.
[9] M.W. Howard and M.J. Kahana. Contextual variability and serial position effects in free recall. Journal of Experimental Psychology: Learning, Memory, and Cognition, 25(4):923, 1999.
[10] M.W. Howard and M.J. Kahana. A distributed representation of temporal context. Journal of Mathematical Psychology, 46:269-299, 2002.
[11] M.W. Howard and M.J. Kahana. When does semantic similarity help episodic retrieval? Journal of Memory and Language, 46(1):85-98, 2002.
[12] T.M. Mitchell, S.V. Shinkareva, A. Carlson, K. Chang, V.L. Malave, R.A. Mason, and M.A. Just. Predicting human brain activity associated with the meanings of nouns. Science, 320(5880):1191-1195, 2008.
[13] S.M. Polyn, V.S. Natu, J.D. Cohen, and K.A. Norman. Category-specific cortical activity precedes retrieval during memory search. Science, 310(5756):1963-1966, 2005.
[14] S.M. Polyn, K.A. Norman, and M.J. Kahana. A context maintenance and retrieval model of organizational processes in free recall. Psychological Review, 116(1):129, 2009.
[15] H.L. Roediger and K.B. McDermott. Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory, and Cognition, 21:803-814, 1995.
[16] P.B. Sederberg, M.W. Howard, and M.J. Kahana. A context-based theory of recency and contiguity in free recall. Psychological Review, 115(4):893-912, 2008.
[17] Y. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm for latent Dirichlet allocation. In Neural Information Processing Systems, 2006.
[18] Y.W. Teh, M.I. Jordan, M.J. Beal, and D.M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[19] E.J. Wagenmakers, P. Grünwald, and M. Steyvers. Accumulative prediction error and the selection of time series models. Journal of Mathematical Psychology, 50(2):149-166, 2006.
Noisy Generalized Binary Search
Robert Nowak
University of Wisconsin-Madison
1415 Engineering Drive, Madison WI 53706
[email protected]
Abstract
This paper addresses the problem of noisy Generalized Binary Search (GBS).
GBS is a well-known greedy algorithm for determining a binary-valued hypothesis through a sequence of strategically selected queries. At each step, a query is
selected that most evenly splits the hypotheses under consideration into two disjoint subsets, a natural generalization of the idea underlying classic binary search.
GBS is used in many applications, including fault testing, machine diagnostics,
disease diagnosis, job scheduling, image processing, computer vision, and active
learning. In most of these cases, the responses to queries can be noisy. Past work
has provided a partial characterization of GBS, but existing noise-tolerant versions of GBS are suboptimal in terms of query complexity. This paper presents
an optimal algorithm for noisy GBS and demonstrates its application to learning
multidimensional threshold functions.
1 Introduction
This paper studies learning problems of the following form. Consider a finite, but potentially very
large, collection of binary-valued functions H defined on a domain X . In this paper, H will be called
the hypothesis space and X will be called the query space. Each h ? H is a mapping from X to
{−1, 1}. Assume that the functions in H are unique and that one function, h* ∈ H, produces the
correct binary labeling. The goal is to determine h* through as few queries from X as possible. For
each query x ∈ X, the value h*(x), corrupted with independently distributed binary noise, is observed. If the queries were noiseless, then they are usually called membership queries to distinguish
them from other types of queries [Ang01]; here we will simply refer to them as queries. Problems
of this nature arise in many applications, including channel coding [Hor63], experimental design
[Rén61], disease diagnosis [Lov85], fault-tolerant computing [FRPU94], job scheduling [KPB99],
image processing [KK00], computer vision [SS93, GJ96], computational geometry [AMM+ 98], and
active learning [Das04, BBZ07, Now08].
Past work has provided a partial characterization of this problem. If the responses to queries are
noiseless, then selecting the optimal sequence of queries from X is equivalent to determining an
optimal binary decision tree, where a sequence of queries defines a path from the root of the tree
(corresponding to H) to a leaf (corresponding to a single element of H). In general the determination of the optimal tree is NP-complete [HR76]. However, there exists a greedy procedure
that yields query sequences that are within an O(log |H|) factor of the optimal search tree depth
[GG74, KPB99, Lov85, AMM+ 98, Das04], where |H| denotes the cardinality of H. The greedy
procedure is referred to as Generalized Binary Search (GBS) [Das04, Now08] or the splitting algorithm [KPB99, Lov85, GG74]), and it reduces to classic binary search in special cases [Now08].
The GBS algorithm is outlined in Figure 1(a). At each step GBS selects a query that results in
the most even split of the hypotheses under consideration into two subsets responding +1 and ?1,
respectively, to the query. The correct response to the query eliminates one of these two subsets
from further consideration. Since the hypotheses are assumed to be distinct, it is clear that GBS
terminates in at most |H| queries (since it is always possible to find a query that eliminates at least
Generalized Binary Search (GBS)
initialize: i = 0, H_0 = H.
while |H_i| > 1
  1) Select x_i = arg min_{x∈X} | Σ_{h∈H_i} h(x) |.
  2) Obtain response y_i = h*(x_i).
  3) Set H_{i+1} = {h ∈ H_i : h(x_i) = y_i}, i = i + 1.
(a)

Noisy Generalized Binary Search (NGBS)
initialize: p_0 uniform over H.
for i = 0, 1, 2, . . .
  1) x_i = arg min_{x∈X} | Σ_{h∈H} p_i(h)h(x) |.
  2) Obtain noisy response y_i.
  3) Bayes update p_i → p_{i+1}; Eqn. (1).
hypothesis selected at each step: ĥ_i := arg max_{h∈H} p_i(h)
(b)

Figure 1: Generalized binary search (GBS) algorithm and a noise-tolerant variant (NGBS).
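To make the two procedures in Figure 1 concrete, here is a small Python sketch over a finite query pool. The helper names and the toy one-dimensional threshold class are ours, not the paper's; on that class, GBS reduces to classic binary search:

```python
import random

def gbs(hypotheses, queries, oracle):
    """Noiseless GBS (Figure 1(a)). Hypotheses map queries to {-1, +1};
    oracle(x) returns the correct label h*(x)."""
    H = list(hypotheses)
    while len(H) > 1:
        # Most even split: minimize |sum_h h(x)| over the query pool.
        x = min(queries, key=lambda q: abs(sum(h(q) for h in H)))
        y = oracle(x)
        H = [h for h in H if h(x) == y]  # discard inconsistent hypotheses
    return H[0]

def ngbs_step(p, queries, beta, noisy_oracle):
    """One round of NGBS (Figure 1(b)) on a weight dict p: hypothesis -> mass."""
    x = min(queries, key=lambda q: abs(sum(p[h] * h(q) for h in p)))
    y = noisy_oracle(x)
    # Bayes update, Eqn. (1): agreement with y scales mass by (1 - beta),
    # disagreement by beta, followed by normalization.
    new_p = {h: p[h] * ((1 - beta) if h(x) == y else beta) for h in p}
    total = sum(new_p.values())
    return {h: w / total for h, w in new_p.items()}

# Toy class: thresholds h_t(x) = +1 iff x >= t on the query pool {0,...,7}.
hyps = [(lambda t: (lambda x, t=t: 1 if x >= t else -1))(t) for t in range(8)]
queries = list(range(8))
oracle = lambda x: 1 if x >= 5 else -1          # h* is the threshold at 5
assert gbs(hyps, queries, oracle)(4) == -1

random.seed(0)
noisy = lambda x: -oracle(x) if random.random() < 0.1 else oracle(x)
p = {h: 1.0 / len(hyps) for h in hyps}
for _ in range(200):
    p = ngbs_step(p, queries, beta=0.2, noisy_oracle=noisy)
best = max(p, key=p.get)
assert best(4) == -1 and best(5) == 1           # NGBS recovers h*
```

The NGBS run uses beta = 0.2 against a noise level of 0.1, matching the requirement beta > alpha discussed in Section 2.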
one hypothesis at each step). In fact, there are simple examples demonstrating that this is the best
one can hope to do in general [KPB99, Lov85, GG74, Das04, Now08]. However, it is also true that
in many cases the performance of GBS can be much better [AMM+ 98, Now08]. In general, the
number of queries required can be bounded in terms of a combinatorial parameter of H called the
extended teaching dimension [Ang01, Heg95] (also see [HPRW96] for related work). Alternatively,
there exists a geometric relation between the pair (X , H), called the neighborly condition, that is
sufficient to bound the number of queries needed [Now08].
The focus of this paper is noisy GBS. In many (if not most) applications it is unrealistic to assume
that the responses to queries are without error. Noise-tolerant versions of classic binary search have
been well-studied. The classic binary search problem is equivalent to learning a one-dimensional
binary-valued threshold function by selecting point evaluations of the function according to a bisection procedure. A noisy version of classic binary search was studied first in the context of channel
coding with feedback [Hor63]. Horstein's probabilistic bisection procedure [Hor63] was shown to
be optimal (optimal decay of the error probability) [BZ74] (also see [KK07]).
One straightforward approach to noisy GBS was explored in [Now08]. The idea is to follow the GBS
algorithm, but to repeat the query at each step multiple times in order to decide whether the response
is more probably +1 or −1. The strategy of repeating queries has been suggested as a general
approach for devising noise-tolerant learning algorithms [Kää06]. This simple approach has been
studied in the context of noisy versions of classic binary search and shown to be suboptimal [KK07].
Since classic binary search is a special case of the general problem, it follows immediately that the
approach proposed in [Now08] is suboptimal. This paper addresses the open problem of determining
an optimal strategy for noisy GBS. An optimal noise-tolerant version of GBS is developed here. The
number of queries an algorithm requires to confidently identify h? is called the query complexity of
the algorithm. The query complexity of the new algorithm is optimal, and we are not aware of any
other algorithm with this capability.
It is also shown that the optimal convergence rate and query complexity are achieved for a broad class
of geometrical hypotheses arising in image recovery and binary classification. Edges in images and
decision boundaries in classification problems are naturally viewed as curves in the plane or surfaces embedded in higher-dimensional spaces and can be associated with multidimensional threshold functions valued +1 and −1 on either side of the curve/surface. Thus, one important setting for
GBS is when X is a subset of d dimensional Euclidean space and the set H consists of multidimensional threshold functions. We show that our algorithm achieves the optimal query complexity for
actively learning multidimensional threshold functions in noisy conditions.
The paper is organized as follows. Section 2 describes the Bayesian algorithm for noisy GBS and
presents the main results. Section 3 examines the proposed method for learning multidimensional
threshold functions. Section 4 discusses an agnostic algorithm that performs well even if h? is not
in the hypothesis space H. Proofs are given in Section 5.
2 A Bayesian Algorithm for Noisy GBS
In noisy GBS, one must cope with erroneous responses. Specifically, assume that the binary response
y ∈ {−1, 1} to each query x ∈ X is an independent realization of the random variable Y satisfying
P(Y = h*(x)) > P(Y = −h*(x)), where h* ∈ H is fixed but unknown. In other words, the
response is only probably correct. If a query x is repeated more than once, then each response is
an independent realization of Y. Define the noise-level for the query x as α_x := P(Y = −h*(x)).
Throughout the paper we will let α := sup_{x∈X} α_x and assume that α < 1/2.
A Bayesian approach to noisy GBS is investigated in this paper. Let p_0 be a known probability measure
over H. That is, p_0 : H → [0, 1] and Σ_{h∈H} p_0(h) = 1. The measure p_0 can be viewed as an
initial weighting over the hypothesis class, expressing the fact that all hypotheses are equally reasonable prior to making queries. After each query and response (x_i, y_i), i = 0, 1, . . . , the distribution
is updated according to

    p_{i+1}(h) ∝ p_i(h) β^{(1−z_i(h))/2} (1 − β)^{(1+z_i(h))/2},    (1)

where z_i(h) = h(x_i)y_i, h ∈ H, β is any constant satisfying 0 < β < 1/2, and p_{i+1}(h) is
normalized to satisfy Σ_{h∈H} p_{i+1}(h) = 1. The update can be viewed as an application of Bayes rule
and its effect is simple; the probability masses of hypotheses that agree with the label y_i are boosted
relative to those that disagree. The parameter β controls the size of the boost. The hypothesis with
the largest weight is selected at each step: ĥ_i := arg max_{h∈H} p_i(h). If the maximizer is not unique,
one of the maximizers is selected at random. The goal of noisy GBS is to drive the error P(ĥ_i ≠ h*)
to zero as quickly as possible by strategically selecting the queries. A similar procedure has been
shown to be optimal for the noisy (classic) binary search problem [BZ74, KK07]. The crucial distinction
here is that GBS calls for a fundamentally different approach to query selection.
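The effect of the update (1) is easy to check numerically. In the sketch below (function name ours), z_i(h) = +1 multiplies the mass by (1 − β) and z_i(h) = −1 by β, before normalization:

```python
def bayes_update(p, votes, y, beta):
    """Eqn. (1): votes[h] = h(x_i) in {-1, +1}; y is the observed label.

    Agreement (z_i(h) = +1) scales mass by (1 - beta); disagreement
    (z_i(h) = -1) scales it by beta; the result is renormalized.
    """
    new_p = {h: p[h] * ((1 - beta) if votes[h] == y else beta) for h in p}
    total = sum(new_p.values())
    return {h: w / total for h, w in new_p.items()}

p = {"h1": 0.5, "h2": 0.5}
p = bayes_update(p, {"h1": 1, "h2": -1}, y=1, beta=0.25)
# h1 agreed with the response, so its mass becomes
# (1 - beta) / ((1 - beta) + beta) = 0.75 under a uniform prior.
assert abs(p["h1"] - 0.75) < 1e-9
```

Note that β plays the role of an assumed noise level: the analysis below only requires β to upper-bound the true noise level α, not to equal it.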
The query selection at each step must be informative with respect to the distribution p_i. For example,
if the weighted prediction Σ_{h∈H} p_i(h)h(x) is close to zero for a certain x, then a label at that point is
informative due to the large disagreement among the hypotheses. This suggests the following noise-tolerant variant of GBS outlined in Figure 1. This paper shows that a slight variation of the query
selection in the NGBS algorithm in Figure 1 yields an algorithm with optimal query complexity.
It is shown that as long as β is larger than the noise-level of each query, then NGBS produces
a sequence of hypotheses, ĥ_0, ĥ_1, . . . , such that P(ĥ_n ≠ h*) is bounded above by a monotonically
decreasing sequence (see Theorem 1). The main interest of this paper is an algorithm that drives the
error to zero exponentially fast, and this requires the query selection criterion to be modified slightly.
To see why this is necessary, suppose that at some step of the NGBS algorithm a single hypothesis
(e.g., h*) has the majority of the probability mass. Then the weighted prediction will be almost
equal to the prediction of that hypothesis (i.e., close to +1 or −1 for all queries), and therefore the
responses to all queries are relatively certain and non-informative. Thus, the convergence of the
algorithm could become quite slow in such conditions. A similar effect is true in the case of noisy
(classic) binary search [BZ74, KK07]. To address this issue, the query selection criterion is modified
via randomization so that the response to the selected query is always highly uncertain.
In order to state the modified selection procedure and the main results, observe that the query space
X can be partitioned into equivalence subsets such that every h ∈ H is constant for all queries in
each such subset. Let A denote the smallest such partition. Note that X = ∪_{A∈A} A. For every
A ∈ A and h ∈ H, the value of h(x) is constant (either +1 or −1) for all x ∈ A; denote this value
by h(A). As first noted in [Now08], A can play an important role in GBS. In particular, observe that
the query selection step in NGBS is equivalent to an optimization over A rather than X itself. The
randomization of the query selection step is based on the notion of neighboring sets in A.

Definition 1 Two sets A, A′ ∈ A are said to be neighbors if only a single hypothesis (and its
complement, if it also belongs to H) outputs a different value on A and A′.
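Both the partition A and the neighbor relation are simple to compute for a finite query pool. A sketch (helper names ours; for simplicity it ignores the complement clause of Definition 1, which only matters when both h and −h belong to H):

```python
from collections import defaultdict

def equivalence_partition(hypotheses, queries):
    """Group queries by their response pattern across all hypotheses;
    every h is constant on each resulting cell (the partition A)."""
    cells = defaultdict(list)
    for x in queries:
        cells[tuple(h(x) for h in hypotheses)].append(x)
    return list(cells.values())

def are_neighbors(hypotheses, cell_a, cell_b):
    """Definition 1 (ignoring complements): cells are neighbors if
    exactly one hypothesis labels them differently."""
    xa, xb = cell_a[0], cell_b[0]
    return sum(1 for h in hypotheses if h(xa) != h(xb)) == 1

# For 1-D thresholds h_t(x) = +1 iff x >= t, each query is its own cell
# and consecutive cells are neighbors (only h_{x+1} flips between them).
hyps = [(lambda t: (lambda x, t=t: 1 if x >= t else -1))(t) for t in range(5)]
cells = equivalence_partition(hyps, range(5))
assert len(cells) == 5
assert are_neighbors(hyps, cells[0], cells[1])
assert not are_neighbors(hyps, cells[0], cells[2])
```

On this toy class the neighborhood graph is a path, so the pair (X, H) is neighborly in the sense of Definition 2 below.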
The modified NGBS algorithm is outlined in Figure 2. Note that the query selection step is identical
to that of the original NGBS algorithm, unless there exist two neighboring sets with strongly bipolar
weighted responses. In the latter case, a query is randomly selected from one of these two sets with
equal probability, which guarantees a highly uncertain response.
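The randomized selection rule can be sketched as follows; the data-structure choices (cells as lists of queries, neighbors as index pairs) are ours:

```python
import random

def modified_ngbs_query(p, cells, neighbors):
    """Sketch of the Figure 2 selection rule.

    p: dict mapping hypotheses (functions, constant on each cell) to
    probability mass; cells: the partition A as lists of queries;
    neighbors: set of frozensets {i, j} of neighboring cell indices.
    """
    W = [sum(p[h] * h(cell[0]) for h in p) for cell in cells]
    b_star = min(abs(w) for w in W)
    # If a neighboring pair has strongly bipolar weighted predictions,
    # flip a fair coin between the two cells: the response there is
    # maximally uncertain even when one hypothesis dominates p.
    for pair in neighbors:
        i, j = tuple(pair)
        if min(W[i], W[j]) < -b_star and max(W[i], W[j]) > b_star:
            chosen = cells[i] if random.random() < 0.5 else cells[j]
            return random.choice(chosen)
    # Otherwise fall back to the most even split, as in plain NGBS.
    i_min = min(range(len(cells)), key=lambda k: abs(W[k]))
    return random.choice(cells[i_min])

# Two thresholds on {0, 1, 2}: weighted predictions are [-1, 0, +1].
ha = lambda x: 1 if x >= 1 else -1
hb = lambda x: 1 if x >= 2 else -1
p = {ha: 0.5, hb: 0.5}
cells = [[0], [1], [2]]
# No bipolar neighboring pair: falls back to the most even split (cell 1).
assert modified_ngbs_query(p, cells, {frozenset({0, 1}), frozenset({1, 2})}) == 1
# A bipolar neighboring pair: queries one of the two cells at random.
assert modified_ngbs_query(p, cells, {frozenset({0, 2})}) in (0, 2)
```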
Theorem 1 Let P denote the underlying probability measure (governing noise and algorithm randomization). If β > α, then both the NGBS and modified NGBS algorithms, in Figure 1(b) and
Figure 2, respectively, generate a sequence of hypotheses such that P(ĥ_n ≠ h*) ≤ a_n < 1, where
{a_n}_{n≥0} is a monotonically decreasing sequence.
The condition β > α ensures that the update (1) is not overly aggressive.

Modified NGBS
initialize: p_0 uniform over H.
for i = 0, 1, 2, . . .
  1) Let b* = min_{A∈A} | Σ_{h∈H} p_i(h)h(A) |. If there exist neighboring sets A and A′
     with Σ_{h∈H} p_i(h)h(A) > b* and Σ_{h∈H} p_i(h)h(A′) < −b*, then select x_i from
     A or A′ with probability 1/2 each. Otherwise select x_i from the set A_min =
     arg min_{A∈A} | Σ_{h∈H} p_i(h)h(A) |. In the case that the sets above are non-unique,
     choose at random any one satisfying the requirements.
  2) Obtain noisy response y_i.
  3) Bayes update p_i → p_{i+1}; Eqn. (1).
hypothesis selected at each step: ĥ_i := arg max_{h∈H} p_i(h)

Figure 2: Modified NGBS algorithm.

We now turn to the matter of sufficient conditions guaranteeing that P(ĥ_n ≠ h*) → 0 exponentially fast with n. The
exponential convergence rate of classic binary search hinges on the fact that the hypotheses can be
ordered with respect to X . In general situations, the hypothesis space cannot be ordered in such a
fashion, but the neighborhood graph of A provides a similar local structure.
Definition 2 The pair (X , H) is said to be neighborly if the neighborhood graph of A is connected
(i.e., for every pair of sets in A there exists a sequence of neighboring sets that begins at one of the
pair and ends with the other).
In essence, the neighborly condition simply means that each hypothesis is locally distinguishable
from all others. By ?local? we mean in the vicinity of points x where the output of the hypothesis
changes from +1 to ?1. The neighborly condition was first introduced in [Now08] in the analysis
of GBS. It is shown in Section 3 that the neighborly condition holds for the important case of
hypothesis spaces consisting of multidimensional threshold functions. If (X , H) is neighborly, then
the modified NGBS algorithm guarantees that P(ĥ_i ≠ h*) → 0 exponentially fast.

Theorem 2 Let P denote the underlying probability measure (governing noise and algorithm randomization). If β > α and (X, H) is neighborly, then the modified NGBS algorithm in Figure 2
generates a sequence of hypotheses satisfying

    P(ĥ_n ≠ h*) ≤ |H| (1 − λ)^n ≤ |H| e^{−λn},   n = 0, 1, . . . ,
with exponential constant λ = min{ (1 − c*)/2, 1/4 } ( 1 − √( α(1−β) / ((1−α)β) ) ), where

    c* := min_P max_{h∈H} | ∫_X h(x) dP(x) |.    (2)
The exponential convergence rate¹ is governed by the key parameter 0 ≤ c* < 1. The minimizer in
(2) exists because the minimization can be computed over the space of finite-dimensional probability
mass functions over the elements of A. As long as no hypothesis is constant over the whole of
X, the value of c* is typically a small constant much less than 1 that is independent of the size
of H (see [Now08, Now09] and the next section for concrete examples). In such situations, the
convergence rate of modified NGBS is optimal, up to constant factors. No other algorithm can solve
the noisy GBS problem with a lower query complexity. The query complexity of the modified NGBS
algorithm can be derived as follows. Let δ > 0 be a prespecified confidence parameter. The number
of queries required to ensure that P(ĥ_n ≠ h*) ≤ δ is n ≥ λ^{−1} log(|H|/δ) = O(log(|H|/δ)), which is the
optimal query complexity. Intuitively, O(log |H|) bits are required to encode each hypothesis. More
formally, the classic noisy binary search problem satisfies the assumptions of Theorem 2 [Now08],
and hence it is a special case of the general problem. It is known that the optimal query complexity
for noisy classic binary search is O(log(|H|/δ)) [BZ74, KK07].

¹Note that the factor 1 − √( α(1−β) / ((1−α)β) ) in the exponential rate parameter λ is a positive constant
strictly less than 1. For a noise level α this factor is maximized by a value β ∈ (α, 1/2) which tends to
(1/2 + α)/2 as α tends to 1/2.
We contrast this with the simple noise-tolerant GBS algorithm based on repeating each query in the
standard GBS algorithm of Figure 1(a) multiple times to control the noise (see [Kää06, Now08] for
related derivations). It follows from Chernoff's bound that the query complexity of determining the
correct label for a single query with confidence at least 1 − δ is O( log(1/δ) / |1/2 − α|² ). Suppose that GBS
requires n_0 queries in the noiseless situation. Then using the union bound, we require O( log(n_0/δ) / |1/2 − α|² )
queries at each step to guarantee that the labels determined for all n_0 queries are correct with probability 1 − δ. If (X, H) is neighborly, then GBS requires n_0 = O(log |H|) queries in noiseless
conditions [Now08]. Therefore, under the conditions of Theorem 2, the query complexity of the
simple noise-tolerant GBS algorithm is O( log |H| · log(log |H| / δ) ), a logarithmic factor worse than the
optimal query complexity.
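The gap can be made concrete with a back-of-the-envelope calculation; the particular values of |H|, δ, λ, and α below are illustrative choices of ours, and constants hidden by the O-notation are dropped:

```python
import math

def modified_ngbs_budget(num_hyp, delta, lam):
    """n >= (1/lambda) * log(|H| / delta) queries suffice for modified NGBS."""
    return math.ceil(math.log(num_hyp / delta) / lam)

def repeated_query_budget(num_hyp, delta, alpha):
    """Repeat-each-query GBS: roughly n0 = log2|H| steps, each repeated
    about log(n0/delta) / (1/2 - alpha)^2 times (constants dropped)."""
    n0 = math.ceil(math.log2(num_hyp))
    repeats = math.log(n0 / delta) / (0.5 - alpha) ** 2
    return math.ceil(n0 * repeats)

# With |H| = 2**20 and delta = 0.01, the repeated-query strategy pays
# both its extra logarithmic factor and the 1/(1/2 - alpha)^2 penalty.
direct = modified_ngbs_budget(2**20, 0.01, lam=0.05)
repeated = repeated_query_budget(2**20, 0.01, alpha=0.4)
assert direct < repeated
```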
3 Noisy GBS for Learning Multidimensional Thresholds
We now apply the theory and modified NGBS algorithm to the problem of learning multidimensional
threshold functions from point evaluations, a problem that arises commonly in computer vision
[SS93, GJ96, AMM+ 98], image processing [KK00], and active learning [Das04, BBZ07, CN08,
Now08]. In this case, the hypotheses are determined by (possibly nonlinear) decision surfaces in
d-dimensional Euclidean space (i.e., X is a subset of Rd ), and the queries are points in Rd . It
suffices to consider linear decision surfaces of the form ha,b (x) := sign(ha, xi + b), where a ? Rd ,
kak2 = 1, b ? R, |b| ? c for some constant c < ?, and ha, xi denotes the inner product in Rd .
Note that hypotheses of this form can be used to represent nonlinear decision surfaces by applying
a nonlinear mapping to the query space.
Theorem 3 Let H be a finite collection of hypotheses of form sign(ha, xi + b), for some constant
c < ?. Then the hypotheses selected by the modified NGBS algorithm with ? > ? satisfy
P(b
hn 6= h? ) ? |H| e??n ,
?(1??)
. Moreover, b
hn can be computed in time polynomial in |H|.
with ? = 41 1 ? ?(1??)
1?? ?
?
Based on the discussion at the end of the previous section, we conclude that the query complexity
of the modified NGBS algorithm is O(log |H|); this is the optimal up to constant factors. The only
other algorithm with this capability that we are aware of was analyzed in [BBZ07], and it is based
on a quite different approach tailored specifically to linear threshold problem.
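For intuition, here is a small Python sketch of the posterior update behind these results in the simplest 1-D special case (our own toy rendering, not the paper's algorithm): hypotheses are thresholds h_t(x) = sign(x − t), responses are flipped with probability α, agreeing hypotheses are reweighted by (1−β) and disagreeing ones by β as in update (1), and each query is placed at the weighted median, a crude stand-in for the NGBS query selection step.

```python
import random

def weighted_median(p):
    """Smallest threshold at which the cumulative posterior reaches 1/2."""
    acc = 0.0
    for t in sorted(p):
        acc += p[t]
        if acc >= 0.5:
            return t
    return max(p)

def ngbs_1d(thresholds, h_star, alpha, beta, n, rng):
    p = {t: 1.0 / len(thresholds) for t in thresholds}
    for _ in range(n):
        x = weighted_median(p)                              # most balanced query
        y = h_star(x) * (-1 if rng.random() < alpha else 1)  # noisy response
        # Update (1): boost agreeing hypotheses by (1-beta), shrink by beta.
        w = {t: p[t] * ((1 - beta) if (1 if x >= t else -1) == y else beta)
             for t in thresholds}
        z = sum(w.values())
        p = {t: w[t] / z for t in thresholds}
    return max(p, key=p.get)

rng = random.Random(1)
t_star = 37
best = ngbs_1d(list(range(100)), lambda x: 1 if x >= t_star else -1,
               alpha=0.2, beta=0.3, n=400, rng=rng)
print(best)
```

Theorem 3's rate suggests the posterior mass off h* shrinks roughly like e^{−ε_0 n/4}; with α = 0.2, β = 0.3 that is about e^{−0.048n}, so a few hundred queries suffice in this toy run.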
4 Agnostic Algorithms

We also mention the possibility of agnostic algorithms guaranteed to find the best hypothesis in H even if the optimal hypothesis h* is not in H and/or the assumptions of Theorem 2 or 3 do not hold. The best hypothesis in H is the one that minimizes the error with respect to a given probability measure on X, denoted by P_X. The following theorem, proved in [Now09], demonstrates an agnostic algorithm that performs almost as well as empirical risk minimization (ERM) in general, and has the optimal O(log |H|/λ) query complexity when the conditions of Theorem 2 hold.

Theorem 4 Let P_X denote a probability distribution on X and suppose we have a query budget of n. Let h1 denote the hypothesis selected by modified NGBS using n/3 of the queries and let h2 denote the hypothesis selected by ERM from n/3 queries drawn independently from P_X. Draw the remaining n/3 queries independently from P_Δ, the restriction of P_X to the set Δ ⊂ X on which h1 and h2 disagree, and let R̂_Δ(h1) and R̂_Δ(h2) denote the average number of errors made by h1 and h2 on these queries. Select ĥ = arg min{R̂_Δ(h1), R̂_Δ(h2)}. Then, in general,

E[R(ĥ)] ≤ min{E[R(h1)], E[R(h2)]} + √(3/n),

where R(h), h ∈ H, denotes the probability of error of h with respect to P_X and E denotes the expectation with respect to all random quantities. Furthermore, if the assumptions of Theorem 2 hold with noise bound α, then

P(ĥ ≠ h*) ≤ N e^{−λn/3} + 2e^{−n|1−2α|²/6}.
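The selection rule in Theorem 4 is simple to state in code. The sketch below (our own toy rendering with hypothetical helper names) takes two candidate classifiers, samples labels only from the region where they disagree, and keeps whichever errs less there, i.e., ĥ = arg min{R̂_Δ(h1), R̂_Δ(h2)}.

```python
import random

def select_on_disagreement(h1, h2, domain, label_oracle, m, rng):
    """Return h1 or h2, whichever makes fewer errors on m labeled points
    drawn from the disagreement set Delta = {x : h1(x) != h2(x)}."""
    delta_set = [x for x in domain if h1(x) != h2(x)]
    if not delta_set:          # identical on the domain; either one will do
        return h1
    err1 = err2 = 0
    for _ in range(m):
        x = rng.choice(delta_set)
        y = label_oracle(x)
        err1 += h1(x) != y
        err2 += h2(x) != y
    return h1 if err1 <= err2 else h2

# Toy example: candidate thresholds 30 (correct) and 60; labels flipped w.p. 0.1.
rng = random.Random(3)
truth = lambda x: 1 if x >= 30 else -1
noisy = lambda x: -truth(x) if rng.random() < 0.1 else truth(x)
h1 = lambda x: 1 if x >= 30 else -1
h2 = lambda x: 1 if x >= 60 else -1
chosen = select_on_disagreement(h1, h2, range(100), noisy, m=60, rng=rng)
print(chosen is h1)
```

Restricting the comparison to Δ is what makes the n/3 comparison queries effective: on Δ the two hypotheses always disagree, so every label counts against exactly one of them.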
5 Appendix: Proofs

5.1 Proof of Theorem 1

Let E denote expectation with respect to P, and define C_n := (1 − p_n(h*))/p_n(h*). Note that C_n ∈ [0, ∞) reflects the amount of mass that p_n places on the suboptimal hypotheses. First note that

P(ĥ_n ≠ h*) ≤ P(p_n(h*) < 1/2) = P(C_n > 1) ≤ E[C_n], by Markov's inequality.

Next, observe that

E[C_n] = E[(C_n/C_{n−1}) C_{n−1}] = E[ E[(C_n/C_{n−1}) C_{n−1} | p_{n−1}] ]
       = E[ C_{n−1} E[(C_n/C_{n−1}) | p_{n−1}] ] ≤ E[C_{n−1}] max_{p_{n−1}} E[(C_n/C_{n−1}) | p_{n−1}]
       ≤ C_0 ( max_{i=0,…,n−1} max_{p_i} E[(C_{i+1}/C_i) | p_i] )^n.

Note that because p_0 is assumed to be uniform, C_0 = |H| − 1. A similar conditioning technique is employed for interval estimation in [BZ74]. The rest of the proof entails showing that E[(C_{i+1}/C_i) | p_i] < 1, which proves the result, and requires a very different approach than [BZ74].

The precise form of p_1, p_2, … is derived as follows. Let π_i = (1 + Σ_h p_i(h) z_i(h))/2, the weighted proportion of hypotheses that agree with y_i. The factor that normalizes the updated distribution in (1) is related to π_i as follows. Note that

Σ_h p_i(h) β^{(1−z_i(h))/2} (1−β)^{(1+z_i(h))/2} = Σ_{h: z_i(h)=−1} p_i(h) β + Σ_{h: z_i(h)=1} p_i(h) (1−β) = (1−π_i)β + π_i(1−β).

Thus,

p_{i+1}(h) = p_i(h) β^{(1−z_i(h))/2} (1−β)^{(1+z_i(h))/2} / ((1−π_i)β + π_i(1−β)).

Denote the reciprocal of the update factor for p_{i+1}(h*) by

γ_i := ((1−π_i)β + π_i(1−β)) / (β^{(1−z_i(h*))/2} (1−β)^{(1+z_i(h*))/2}),   (3)

where z_i(h*) = h*(x_i) y_i, and observe that p_{i+1}(h*) = p_i(h*)/γ_i. Thus,

C_{i+1}/C_i = ((1 − p_i(h*)/γ_i) p_i(h*)) / ((p_i(h*)/γ_i)(1 − p_i(h*))) = (γ_i − p_i(h*))/(1 − p_i(h*)).
Now to bound max_{p_i} E[C_{i+1}/C_i | p_i] < 1 we will show that max_{p_i} E[γ_i | p_i] < 1. To accomplish this, we will assume that p_i is arbitrary.

For every A ∈ A and every h ∈ H let h(A) denote the value of h on the set A. Define δ⁺_A = (1 + Σ_h p_i(h) h(A))/2, the weighted proportion of hypotheses that take the value +1 on A. Note that for every A we have 0 < δ⁺_A < 1, since at least one hypothesis takes the value +1 and at least one takes the value −1 on A, and p(h) > 0 for all h ∈ H. Let A_i denote the set that x_i is selected from, and consider the four possible situations:

h*(x_i) = +1, y_i = +1 :  γ_i = ((1−δ⁺_{A_i})β + δ⁺_{A_i}(1−β)) / (1−β)
h*(x_i) = +1, y_i = −1 :  γ_i = (δ⁺_{A_i}β + (1−δ⁺_{A_i})(1−β)) / β
h*(x_i) = −1, y_i = +1 :  γ_i = ((1−δ⁺_{A_i})β + δ⁺_{A_i}(1−β)) / β
h*(x_i) = −1, y_i = −1 :  γ_i = (δ⁺_{A_i}β + (1−δ⁺_{A_i})(1−β)) / (1−β)

To bound E[γ_i | p_i] it is helpful to condition on A_i. Define q_i := P_{x,y|A_i}(h*(x) ≠ Y). If h*(A_i) = +1, then

E[γ_i | p_i, A_i] = ((1−δ⁺_{A_i})β + δ⁺_{A_i}(1−β))/(1−β) · (1−q_i) + (δ⁺_{A_i}β + (1−δ⁺_{A_i})(1−β))/β · q_i
                 = δ⁺_{A_i} + (1−δ⁺_{A_i}) [ β(1−q_i)/(1−β) + q_i(1−β)/β ].

Define γ_i⁺(A_i) := δ⁺_{A_i} + (1−δ⁺_{A_i}) [ β(1−q_i)/(1−β) + q_i(1−β)/β ]. Similarly, if h*(A_i) = −1, then

E[γ_i | p_i, A_i] = δ⁺_{A_i} [ β(1−q_i)/(1−β) + q_i(1−β)/β ] + (1−δ⁺_{A_i}) =: γ_i⁻(A_i).

By assumption q_i ≤ α < 1/2, and since α < β < 1/2 the factor β(1−q_i)/(1−β) + q_i(1−β)/β ≤ β(1−α)/(1−β) + α(1−β)/β < 1. Define

ε_0 := 1 − β(1−α)/(1−β) − α(1−β)/β,

to obtain the bounds

γ_i⁺(A_i) ≤ δ⁺_{A_i} + (1−δ⁺_{A_i})(1−ε_0),   (4)
γ_i⁻(A_i) ≤ δ⁺_{A_i}(1−ε_0) + (1−δ⁺_{A_i}).   (5)

Since both γ_i⁺(A_i) and γ_i⁻(A_i) are less than 1, it follows that E[γ_i | p_i] < 1.
5.2 Proof of Theorem 2

The proof amounts to obtaining upper bounds for γ_i⁺(A_i) and γ_i⁻(A_i), defined above in (4) and (5). For every A ∈ A and any probability measure p on H the weighted prediction on A is defined to be W(p, A) := Σ_{h∈H} p(h) h(A), where h(A) is the constant value of h for every x ∈ A. The following lemma plays a crucial role in the analysis of the modified NGBS algorithm.

Lemma 1 If (X, H) is neighborly, then for every probability measure p on H there either exists a set A ∈ A such that |W(p, A)| ≤ c*, or a pair of neighboring sets A, A′ ∈ A such that W(p, A) > c* and W(p, A′) < −c*.

Proof of Lemma 1: Suppose that min_{A∈A} |W(p, A)| > c*. Then there must exist A, A′ ∈ A such that W(p, A) > c* and W(p, A′) < −c*, otherwise c* cannot be the minimax moment of H. To see this suppose, for instance, that W(p, A) > c* for all A ∈ A. Then for every distribution P on X we have ∫_X Σ_{h∈H} p(h) h(x) dP(x) > c*. This contradicts the definition of c*, since ∫_X Σ_{h∈H} p(h) h(x) dP(x) ≤ Σ_{h∈H} p(h) |∫_X h(x) dP(x)| ≤ max_{h∈H} |∫_X h(x) dP(x)|. The neighborly condition guarantees that there exists a sequence of neighboring sets beginning at A and ending at A′. Since |W(p, ·)| > c* on every set and the sign of W(p, ·) must change at some point in the sequence, it follows that there exist neighboring sets satisfying the claim.

Now consider two distinct situations. Define b_i := min_{A∈A} |W(p_i, A)|. First suppose that there do not exist neighboring sets A and A′ with W(p_i, A) > b_i and W(p_i, A′) < −b_i. Then by Lemma 1, this implies that b_i ≤ c*, and according to the query selection step of the modified NGBS algorithm, A_i = arg min_A |W(p_i, A)|. Note that because |W(p_i, A_i)| ≤ c*, (1−c*)/2 ≤ δ⁺_{A_i} ≤ (1+c*)/2. Hence, both γ_i⁺(A_i) and γ_i⁻(A_i) are bounded above by 1 − ε_0(1−c*)/2.

Now suppose that there exist neighboring sets A and A′ with W(p_i, A) > b_i and W(p_i, A′) < −b_i. Recall that in this case A_i is randomly chosen to be A or A′ with equal probability. Note that δ⁺_A > (1+b_i)/2 and δ⁺_{A′} < (1−b_i)/2. If h*(A) = h*(A′) = +1, then applying (4) results in

E[γ_i | p_i, A_i ∈ {A, A′}] < 1/2 ( 1 + (1−b_i)/2 + ((1+b_i)/2)(1−ε_0) ) = 1/2 ( 2 − ε_0 (1+b_i)/2 ) ≤ 1 − ε_0/4,

since b_i > 0. Similarly, if h*(A) = h*(A′) = −1, then (5) yields E[γ_i | p_i, A_i ∈ {A, A′}] < 1 − ε_0/4. If h*(A) = −1 and h*(A′) = +1, then applying (5) on A and (4) on A′ yields

E[γ_i | p_i, A_i ∈ {A, A′}] ≤ 1/2 ( δ⁺_A(1−ε_0) + (1−δ⁺_A) + δ⁺_{A′} + (1−δ⁺_{A′})(1−ε_0) )
  = 1/2 ( 1 − δ⁺_A + δ⁺_{A′} + (1−ε_0)(1 + δ⁺_A − δ⁺_{A′}) )
  = 1/2 ( 2 − ε_0 (1 + δ⁺_A − δ⁺_{A′}) )
  = 1 − (ε_0/2)(1 + δ⁺_A − δ⁺_{A′}) ≤ 1 − ε_0/2,

since 0 ≤ δ⁺_A − δ⁺_{A′} ≤ 1. The final possibility is that h*(A) = +1 and h*(A′) = −1. Apply (4) on A and (5) on A′ to obtain

E[γ_i | p_i, A_i ∈ {A, A′}] ≤ 1/2 ( δ⁺_A + (1−δ⁺_A)(1−ε_0) + δ⁺_{A′}(1−ε_0) + (1−δ⁺_{A′}) )
  = 1/2 ( 1 + δ⁺_A − δ⁺_{A′} + (1−ε_0)(1 − δ⁺_A + δ⁺_{A′}) ).

Next, use the fact that because A and A′ are neighbors, δ⁺_A − δ⁺_{A′} = p_i(h*) − p_i(−h*); if −h* does not belong to H, then p_i(−h*) = 0. Hence,

E[γ_i | p_i, A_i ∈ {A, A′}] ≤ 1/2 ( 1 + p_i(h*) − p_i(−h*) + (1−ε_0)(1 − p_i(h*) + p_i(−h*)) )
  ≤ 1/2 ( 1 + p_i(h*) + (1−ε_0)(1 − p_i(h*)) ) = 1 − (ε_0/2)(1 − p_i(h*)),

since the bound is maximized when p_i(−h*) = 0. Now bound E[γ_i | p_i] by the maximum of the conditional bounds above to obtain

E[γ_i | p_i] ≤ max{ 1 − (ε_0/2)(1 − p_i(h*)), 1 − ε_0/4, 1 − (ε_0/2)(1 − c*) },

and thus it is easy to see that

E[C_{i+1}/C_i | p_i] = (E[γ_i | p_i] − p_i(h*)) / (1 − p_i(h*)) ≤ 1 − min{ (ε_0/2)(1 − c*), ε_0/4 }.
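As a numerical sanity check of the contraction constant in this bound (our own script, not part of the paper), the snippet below evaluates ε_0 = 1 − β(1−α)/(1−β) − α(1−β)/β over a grid and verifies that 0 < ε_0 < 1 whenever 0 ≤ α < β < 1/2, which is all the proof needs.

```python
def eps0(alpha, beta):
    """Contraction parameter from the proofs of Theorems 1-3."""
    return 1.0 - beta * (1.0 - alpha) / (1.0 - beta) - alpha * (1.0 - beta) / beta

all_ok = True
for i in range(0, 49):          # alpha = 0.00, 0.01, ..., 0.48
    for j in range(i + 1, 50):  # beta  = alpha + 0.01, ..., 0.49
        e = eps0(i / 100.0, j / 100.0)
        all_ok = all_ok and (0.0 < e < 1.0)
print(all_ok)
```

Note how ε_0 degrades as β approaches α (it vanishes at β = α) and as both approach 1/2, matching the footnoted remark about the maximizing choice of β.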
5.3 Proof of Theorem 3

First we show that the pair (R^d, H) is neighborly (Definition 2). Each A ∈ A is a polytope in R^d. These polytopes are generated by intersections of the halfspaces corresponding to the hypotheses. Any two polytopes that share a common face are neighbors (the hypothesis whose decision boundary defines the face, and its complement if it exists, are the only ones that predict different values on these two sets). Since the polytopes tessellate R^d, the neighborhood graph of A is connected.

Next consider the final bound in the proof of Theorem 2, above. We next show that the value of c*, defined in (2), is 0. Since the offsets b of the hypotheses are all less than c in magnitude, it follows that the distance from the origin to the nearest point of the decision surface of every hypothesis is at most c. Let P_r denote the uniform probability distribution on a ball of radius r centered at the origin in R^d. Then for every h of the form sign(⟨a, x⟩ + b),

|∫_{R^d} h(x) dP_r(x)| ≤ P_r(|⟨a, x⟩| ≤ c),

and lim_{r→∞} ∫_X h(x) dP_r(x) = 0, and so c* = 0.

Lastly, note that the modified NGBS algorithm involves computing Σ_{h∈H} p_i(h) h(A) for all A ∈ A at each step. The computational complexity of each step is therefore proportional to the cardinality of A, which is equal to the number of polytopes generated by intersections of half-spaces. It is known that |A| = Σ_{i=0}^{d} (|H| choose i) = O(|H|^d) [Buc43].
References

[AMM+98] E. M. Arkin, H. Meijer, J. S. B. Mitchell, D. Rappaport, and S. S. Skiena. Decision trees for geometric models. Intl. J. Computational Geometry and Applications, 8(3):343–363, 1998.
[Ang01] D. Angluin. Queries revisited. Springer Lecture Notes in Comp. Sci.: Algorithmic Learning Theory, pages 12–31, 2001.
[BBZ07] M.-F. Balcan, A. Broder, and T. Zhang. Margin based active learning. In Conf. on Learning Theory (COLT), 2007.
[Buc43] R. C. Buck. Partition of space. The American Math. Monthly, 50(9):541–544, 1943.
[BZ74] M. V. Burnashev and K. Sh. Zigangirov. An interval estimation problem for controlled observations. Problems in Information Transmission, 10:223–231, 1974.
[CN08] R. Castro and R. Nowak. Minimax bounds for active learning. IEEE Trans. Info. Theory, pages 2339–2353, 2008.
[Das04] S. Dasgupta. Analysis of a greedy active learning strategy. In Neural Information Processing Systems, 2004.
[FRPU94] U. Feige, P. Raghavan, D. Peleg, and E. Upfal. Computing with noisy information. SIAM J. Comput., 23(5):1001–1018, 1994.
[GG74] M. R. Garey and R. L. Graham. Performance bounds on the splitting algorithm for binary testing. Acta Inf., 3:347–355, 1974.
[GJ96] D. Geman and B. Jedynak. An active testing model for tracking roads in satellite images. IEEE Trans. PAMI, 18(1):1–14, 1996.
[Heg95] T. Hegedűs. Generalized teaching dimensions and the query complexity of learning. In 8th Annual Conference on Computational Learning Theory, pages 108–117, 1995.
[Hor63] M. Horstein. Sequential decoding using noiseless feedback. IEEE Trans. Info. Theory, 9(3):136–143, 1963.
[HPRW96] L. Hellerstein, K. Pillaipakkamnatt, V. Raghavan, and D. Wilkins. How many queries are needed to learn? J. ACM, 43(5):840–862, 1996.
[HR76] L. Hyafil and R. L. Rivest. Constructing optimal binary decision trees is NP-complete. Inf. Process. Lett., 5:15–17, 1976.
[Kää06] M. Kääriäinen. Active learning in the non-realizable case. In Algorithmic Learning Theory, pages 63–77, 2006.
[KK00] A. P. Korostelev and J.-C. Kim. Rates of convergence for the sup-norm risk in image models under sequential designs. Statistics & Probability Letters, 46:391–399, 2000.
[KK07] R. Karp and R. Kleinberg. Noisy binary search and its applications. In Proceedings of the 18th ACM-SIAM Symposium on Discrete Algorithms (SODA 2007), pages 881–890, 2007.
[KPB99] S. R. Kosaraju, T. M. Przytycka, and R. Borgstrom. On an optimal split tree problem. Lecture Notes in Computer Science: Algorithms and Data Structures, 1663:157–168, 1999.
[Lov85] D. W. Loveland. Performance bounds for binary testing with arbitrary weights. Acta Informatica, 22:101–114, 1985.
[Now08] R. Nowak. Generalized binary search. In Proceedings of the 46th Allerton Conference on Communications, Control, and Computing, pages 568–574, 2008.
[Now09] R. Nowak. The geometry of generalized binary search. 2009. Preprint available at http://arxiv.org/abs/0910.4397.
[Rén61] A. Rényi. On a problem in information theory. MTA Mat. Kut. Int. Közl., pages 505–516, 1961. Reprinted in Selected Papers of Alfréd Rényi, vol. 2, P. Turán, ed., pp. 631–638, Akadémiai Kiadó, Budapest, 1976.
[SS93] M. J. Swain and M. A. Stricker. Promising directions in active vision. Int. J. Computer Vision, 11(2):109–126, 1993.
Bootstrapping from Game Tree Search
Joel Veness
University of NSW and NICTA
Sydney, NSW, Australia 2052
[email protected]
David Silver
University of Alberta
Edmonton, AB Canada T6G2E8
[email protected]
William Uther
NICTA and the University of NSW
Sydney, NSW, Australia 2052
[email protected]
Alan Blair
University of NSW and NICTA
Sydney, NSW, Australia 2052
[email protected]
Abstract
In this paper we introduce a new algorithm for updating the parameters of a heuristic evaluation function, by updating the heuristic towards the values computed by
an alpha-beta search. Our algorithm differs from previous approaches to learning
from search, such as Samuel's checkers player and the TD-Leaf algorithm, in two
key ways. First, we update all nodes in the search tree, rather than a single node.
Second, we use the outcome of a deep search, instead of the outcome of a subsequent search, as the training signal for the evaluation function. We implemented
our algorithm in a chess program Meep, using a linear heuristic function. After
initialising its weight vector to small random values, Meep was able to learn high
quality weights from self-play alone. When tested online against human opponents, Meep played at a master level, the best performance of any chess program
with a heuristic learned entirely from self-play.
1 Introduction
The idea of search bootstrapping is to adjust the parameters of a heuristic evaluation function towards the value of a deep search. The motivation for this approach comes from the recursive nature
of tree search: if the heuristic can be adjusted to match the value of a deep search of depth D, then
a search of depth k with the new heuristic would be equivalent to a search of depth k + D with the
old heuristic.
Deterministic, two-player games such as chess provide an ideal test-bed for search bootstrapping.
The intricate tactics require a significant level of search to provide an accurate position evaluation;
learning without search has produced little success in these domains. Much of the prior work in
learning from search has been performed in chess or similar two-player games, allowing for clear
comparisons with existing methods.
Samuel (1959) first introduced the idea of search bootstrapping in his seminal checkers player. In
Samuel's work the heuristic function was updated towards the value of a minimax search in a subsequent position, after black and white had each played one move. His ideas were later extended
by Baxter et al. (1998) in their chess program Knightcap. In their algorithm, TD-Leaf, the heuristic
function is adjusted so that the leaf node of the principal variation produced by an alpha-beta search
is moved towards the value of an alpha-beta search at a subsequent time step.
Samuel's approach and TD-Leaf suffer from three main drawbacks. First, they only update one
node after each search, which discards most of the information contained in the search tree. Second,
their updates are based purely on positions that have actually occurred in the game, or which lie
on the computed line of best play. These positions may not be representative of the wide variety
of positions that must be evaluated by a search based program; many of the positions occurring in
[Figure 1 diagrams omitted: backup diagrams over successive time steps t and t+1 for TD, TD-Root and TD-Leaf (left panel) and for RootStrap(minimax) and TreeStrap(minimax) (right panel).]
Figure 1: Left: TD, TD-Root and TD-Leaf backups. Right: RootStrap(minimax) and TreeStrap(minimax).
large search trees come from sequences of unnatural moves that deviate significantly from sensible
play. Third, the target search is performed at a subsequent time-step, after a real move and response
have been played. Thus, the learning target is only accurate when both the player and opponent
are already strong. In practice, these methods can struggle to learn effectively from self-play alone.
Work-arounds exist, such as initializing a subset of the weights to expert provided values, or by
attempting to disable learning once an opponent has blundered, but these techniques are somewhat
unsatisfactory if we have poor initial domain knowledge.
We introduce a new framework for bootstrapping from game tree search that differs from prior
work in two key respects. First, all nodes in the search tree are updated towards the recursive
minimax values computed by a single depth limited search from the root position. This makes
full use of the information contained in the search tree. Furthermore, the updated positions are
more representative of the types of positions that need to be accurately evaluated by a search-based
player. Second, as the learning target is based on hypothetical minimax play, rather than positions
that occur at subsequent time steps, our methods are less sensitive to the opponent's playing strength.
We applied our algorithms to learn a heuristic function for the game of chess, starting from random
initial weights and training entirely from self-play. When applied to an alpha-beta search, our chess
program learnt to play at a master level against human opposition.
2 Background
The minimax search algorithm exhaustively computes the minimax value to some depth D, using a heuristic function H_θ(s) to evaluate non-terminal states at depth D, based on a parameter vector θ. We use the notation V_{s0}^D(s) to denote the value of state s in a depth D minimax search from root state s0. We define T_{s0}^D to be the set of states in the depth D search tree from root state s0. We define the principal leaf, l^D(s), to be the leaf state of the depth D principal variation from state s. We use the notation ← to indicate a backup that updates the heuristic function towards some target value.
Temporal difference (TD) learning uses a sample backup H_θ(s_t) ← H_θ(s_{t+1}) to update the estimated value at one time-step towards the estimated value at the subsequent time-step (Sutton, 1988).
Although highly successful in stochastic domains such as Backgammon (Tesauro, 1994), direct TD
performs poorly in highly tactical domains. Without search or prior domain knowledge, the target
value is noisy and improvements to the value function are hard to distinguish. In the game of chess,
using a naive heuristic and no search, it is hard to find checkmate sequences, meaning that most
games are drawn.
The quality of the target value can be significantly improved by using a minimax backup to update the heuristic towards the value of a minimax search. Samuel's checkers player (Samuel, 1959) introduced this idea, using an early form of bootstrapping from search that we call TD-Root. The parameters of the heuristic function, θ, were adjusted towards the minimax search value at the next complete time-step (see Figure 1), H_θ(s_t) ← V_{s_{t+1}}^D(s_{t+1}). This approach enabled Samuel's checkers program to achieve human amateur level play. Unfortunately, Samuel's approach was handicapped by tying his evaluation function to the material advantage, and not to the actual outcome from the position.
The TD-Leaf algorithm (Baxter et al., 1998) updates the value of a minimax search at one time-step towards the value of a minimax search at the subsequent time-step (see Figure 1). The parameters of the heuristic function are updated by gradient descent, using an update of the form V_{s_t}^D(s_t) ← V_{s_{t+1}}^D(s_{t+1}). The root value of minimax search is not differentiable in the parameters, as a small change in the heuristic value can result in the principal variation switching to a completely different path through the tree. The TD-Leaf algorithm ignores these non-differentiable boundaries by assuming that the principal variation remains unchanged, and follows the local gradient given that variation. This is equivalent to updating the heuristic function of the principal leaf, H_θ(l^D(s_t)) ← V_{s_{t+1}}^D(s_{t+1}). The chess program Knightcap achieved master-level play when trained using TD-Leaf against a series of evenly matched human opposition, whose strength improved at a similar rate to Knightcap's. A similar algorithm was introduced contemporaneously by Beal and Smith (1997), and was used to learn the material values of chess pieces. The world champion checkers program Chinook used TD-Leaf to learn an evaluation function that compared favorably to its hand-tuned heuristic function (Schaeffer et al., 2001).
Both TD-Root and TD-Leaf are hybrid algorithms that combine a sample backup with a minimax
backup, updating the current value towards the search value at a subsequent time-step. Thus the
accuracy of the learning target depends both on the quality of the players, and on the quality of the
search. One consequence is that these learning algorithms are not robust to variations in the training
regime. In their experiments with the chess program Knightcap (Baxter et al., 1998), the authors
found that it was necessary to prune training examples in which the opponent blundered or made
an unpredictable move. In addition, the program was unable to learn effectively from games of
self-play, and required evenly matched opposition. Perhaps most significantly, the piece values were
initialised to human expert values; experiments starting from zero or random weights were unable
to exceed weak amateur level. Similarly, the experiments with TD-Leaf in Chinook also fixed the
important checker and king values to human expert values.
In addition, both Samuel's approach and TD-Leaf only update one node of the search tree. This
does not make efficient use of the large tree of data, typically containing millions of values, that
is constructed by memory enhanced minimax search variants. Furthermore, the distribution of root
positions that are used to train the heuristic is very different from the distribution of positions that are
evaluated during search. This can lead to inaccurate evaluation of positions that occur infrequently
during real games but frequently within a large search tree; these anomalous values have a tendency
to propagate up through the search tree, ultimately affecting the choice of best move at the root.
In the following section, we develop an algorithm that attempts to address these shortcomings.
3 Minimax Search Bootstrapping

Our first algorithm, RootStrap(minimax), performs a minimax search from the current position s_t, at every time-step t. The parameters are updated so as to move the heuristic value of the root node towards the minimax search value, H_θ(s_t) ← V_{s_t}^D(s_t). We update the parameters by stochastic gradient descent on the squared error between the heuristic value and the minimax search value. We treat the minimax search value as a constant, to ensure that we move the heuristic towards the search value, and not the other way around.
δ_t = V_{s_t}^D(s_t) − H_θ(s_t)
Δθ = −(η/2) ∇_θ δ_t² = η δ_t ∇_θ H_θ(s_t)

where η is a step-size constant. RootStrap(αβ) is equivalent to RootStrap(minimax), except it uses the more efficient αβ-search algorithm to compute V_{s_t}^D(s_t).
For the remainder of this paper we consider heuristic functions that are computed by a linear combination H_θ(s) = φ(s)ᵀθ, where φ(s) is a vector of features of position s, and θ is a parameter vector specifying the weight of each feature in the linear combination. Although simple, this form of heuristic has already proven sufficient to achieve super-human performance in the games of Chess
Algorithm            Backup
TD                   H_θ(s_t) ← H_θ(s_{t+1})
TD-Root              H_θ(s_t) ← V_{s_{t+1}}^D(s_{t+1})
TD-Leaf              H_θ(l^D(s_t)) ← V_{s_{t+1}}^D(s_{t+1})
RootStrap(minimax)   H_θ(s_t) ← V_{s_t}^D(s_t)
TreeStrap(minimax)   H_θ(s) ← V_{s_t}^D(s), ∀s ∈ T_{s_t}^D
TreeStrap(αβ)        H_θ(s) ← [b_{s_t}^D(s), a_{s_t}^D(s)], ∀s ∈ T_t^{αβ}

Table 1: Backups for various learning algorithms.
Algorithm 1 TreeStrap(minimax)
  Randomly initialise θ
  Initialise t ← 1, s_1 ← start state
  while s_t is not terminal do
    V ← minimax(s_t, H_θ, D)
    Δθ ← 0
    for s ∈ search tree do
      δ ← V(s) − H_θ(s)
      Δθ ← Δθ + ηδφ(s)
    end for
    θ ← θ + Δθ
    Select a_t = argmax_{a∈A} V(s_t · a)
    Execute move a_t, receive s_{t+1}
    t ← t + 1
  end while
Algorithm 2 DeltaFromTransTbl(s, d)
  Initialise Δθ ← 0, t ← probe(s)
  if t is null or depth(t) < d then
    return Δθ
  end if
  if lowerbound(t) > H_θ(s) then
    Δθ ← Δθ + η(lowerbound(t) − H_θ(s))∇H_θ(s)
  end if
  if upperbound(t) < H_θ(s) then
    Δθ ← Δθ + η(upperbound(t) − H_θ(s))∇H_θ(s)
  end if
  for s′ ∈ succ(s) do
    Δθ ← Δθ + DeltaFromTransTbl(s′, d)
  end for
  return Δθ
(Campbell et al., 2002), Checkers (Schaeffer et al., 2001) and Othello (Buro, 1999). The gradient descent update for RootStrap(minimax) then takes the particularly simple form Δθ_t = η δ_t φ(s_t).
Our second algorithm, TreeStrap(minimax), also performs a minimax search from the current position s_t. However, TreeStrap(minimax) updates all interior nodes within the search tree. The parameters are updated, for each position s in the tree, towards the minimax search value of s, H_θ(s) ← V^D_{s_t}(s), ∀s ∈ T^D_{s_t}. This is again achieved by stochastic gradient descent,

δ_t(s) = V^D_{s_t}(s) − H_θ(s)
Δθ = −(η/2) ∇_θ Σ_{s∈T^D_{s_t}} δ_t(s)² = η Σ_{s∈T^D_{s_t}} δ_t(s) φ(s)

The complete algorithm for TreeStrap(minimax) is described in Algorithm 1.
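The summed update above can be sketched directly in code. This is a minimal sketch with our own data layout: each node of the current search tree is reduced to a (feature vector, minimax value) pair, assumed to be produced by an external minimax routine.

```python
import numpy as np

def treestrap_update(theta, tree_nodes, eta=1e-6):
    """One TreeStrap(minimax) step, summing contributions from every node.

    tree_nodes: iterable of (phi, v) pairs, where phi is the feature vector
    phi(s) and v is the minimax search value V^D_{s_t}(s) of that node.
    """
    delta_theta = np.zeros_like(theta)
    for phi, v in tree_nodes:
        delta = v - phi.dot(theta)          # delta_t(s)
        delta_theta += eta * delta * phi    # eta * delta_t(s) * phi(s)
    return theta + delta_theta
```

The point of the algorithm is visible here: a single search produces many training targets, one per tree node, instead of the single root target used by RootStrap.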
4 Alpha-Beta Search Bootstrapping
The concept of minimax search bootstrapping can be extended to αβ-search. Unlike minimax search, alpha-beta does not compute an exact value for the majority of nodes in the search tree. Instead, the search is cut off when the value of the node is sufficiently high or low that it can no longer contribute to the principal variation. We consider a depth D alpha-beta search from root position s_0, and denote the upper and lower bounds computed for node s by a^D_{s_0}(s) and b^D_{s_0}(s) respectively, so that b^D_{s_0}(s) ≤ V^D_{s_0}(s) ≤ a^D_{s_0}(s). Only one bound applies in cut off nodes: in the case of an alpha-cut we define b^D_{s_0}(s) to be −∞, and in the case of a beta-cut we define a^D_{s_0}(s) to be ∞. If no cut off occurs then the bounds are exact, i.e. a^D_{s_0}(s) = b^D_{s_0}(s) = V^D_{s_0}(s).
The bounded values computed by alpha-beta can be exploited by search bootstrapping, by using a one-sided loss function. If the value from the heuristic evaluation is larger than the a-bound of the deep search value, then it is reduced towards the a-bound, H_θ(s) ← a^D_{s_t}(s). Similarly, if the value from the heuristic evaluation is smaller than the b-bound of the deep search value, then it is increased towards the b-bound, H_θ(s) ← b^D_{s_t}(s). We implement this idea by gradient descent on the sum of one-sided squared errors:

δ^a_t(s) = a^D_{s_t}(s) − H_θ(s)  if H_θ(s) > a^D_{s_t}(s),  0 otherwise
δ^b_t(s) = b^D_{s_t}(s) − H_θ(s)  if H_θ(s) < b^D_{s_t}(s),  0 otherwise

giving

Δθ_t = −(η/2) ∇_θ Σ_{s∈T^{αβ}_t} (δ^a_t(s)² + δ^b_t(s)²) = η Σ_{s∈T^{αβ}_t} (δ^a_t(s) + δ^b_t(s)) φ(s)

where T^{αβ}_t is the set of nodes in the alpha-beta search tree at time t. We call this algorithm TreeStrap(αβ), and note that the update for each node s is equivalent to the TreeStrap(minimax) update when no cut-off occurs.
4.1 Updating Parameters in TreeStrap(αβ)
High performance αβ-search routines rely on transposition tables for move ordering, reducing the size of the search space, and for caching previous search results (Schaeffer, 1989). A natural way to compute Δθ for TreeStrap(αβ) from a completed αβ-search is to recursively step through the transposition table, summing any relevant bound information. We call this procedure DeltaFromTransTbl, and give the pseudo-code for it in Algorithm 2.
DeltaFromTransTbl requires a standard transposition table implementation providing the following routines:
• probe(s), which returns the transposition table entry associated with state s.
• depth(t), which returns the amount of search depth used to determine the bound estimates stored in transposition table entry t.
• lowerbound(t), which returns the lower bound stored in transposition entry t.
• upperbound(t), which returns the upper bound stored in transposition entry t.
In addition, DeltaFromTransTbl requires a parameter d ≥ 1, which limits updates to Δθ from transposition table entries based on a minimum search depth of d. This can be used to control the number of positions that contribute to Δθ during a single update, or to limit the computational overhead of the procedure.
4.2 The TreeStrap(αβ) algorithm
The TreeStrap(αβ) algorithm can be obtained by two straightforward modifications to Algorithm 1. First, the call to minimax(s_t, H_θ, D) must be replaced with a call to αβ-search(s_t, H_θ, D). Secondly, the inner loop computing Δθ is replaced by invoking DeltaFromTransTbl(s_t).
5 Learning Chess Program
We implemented our learning algorithms in Meep, a modified version of the tournament chess engine Bodo. For our experiments, the hand-crafted evaluation function of Bodo was removed and replaced by a weighted linear combination of 1812 features. Given a position s, a feature vector φ(s) can be constructed from the 1812 numeric values of each feature. The majority of these features are binary. φ(s) is typically sparse, with approximately 100 features active in any given position. Five well-known, chess-specific feature construction concepts: material, piece square tables, pawn structure, mobility and king safety were used to generate the 1812 distinct features. These features were a strict subset of the features used in Bodo, which are themselves simplistic compared to a typical tournament engine (Campbell et al., 2002).
The evaluation function H_θ(s) was a weighted linear combination of the features, i.e. H_θ(s) = φ(s)^T θ. All components of θ were initialised to small random numbers. Terminal positions were evaluated as −9999.0, 0 and 9999.0 for a loss, draw and win respectively. In the search tree, mate scores were adjusted inward slightly so that shorter paths to mate were preferred when giving mate, and vice-versa. When applying the heuristic evaluation function in the search, the heuristic estimates were truncated to the interval [−9900.0, 9900.0].
Meep contains two different modes: a tournament mode and a training mode. When in tournament
mode, Meep uses an enhanced alpha-beta based search algorithm. Tournament mode is used for
evaluating the strength of a weight configuration. In training mode however, one of two different
types of game tree search algorithms are used. The first is a minimax search that stores the entire
game tree in memory. This is used by the TreeStrap(minimax) algorithm. The second is a generic
alpha-beta search implementation, that uses only three well known alpha-beta search enhancements:
transposition tables, killer move tables and the history heuristic (Schaeffer, 1989). This simplified
search routine was used by the TreeStrap(αβ) and RootStrap(αβ) algorithms. In addition, to reduce
the horizon effect, checking moves were extended by one ply. During training, the transposition
table was cleared before the search routine was invoked.
Simplified search algorithms were used during training to avoid complicated interactions with the
more advanced heuristic search techniques (such as null move pruning) useful in tournament play.
It must be stressed that during training, no heuristic or move ordering techniques dependent on
knowing properties of the evaluation weights were used by the search algorithms.
Furthermore, a quiescence search (Beal, 1990) that examined all captures and check evasions was
applied to leaf nodes. This was to improve the stability of the leaf node evaluations. Again, no
knowledge based pruning was performed inside the quiescence search tree, which meant that the
quiescence routine was considerably slower than in Bodo.
6 Experimental Results
We describe the details of our training procedures, and then proceed to explore the performance
characteristics of our algorithms, RootStrap(αβ), TreeStrap(minimax) and TreeStrap(αβ), through
both a large local tournament and online play. We present our results in terms of Elo ratings. This is
the standard way of quantifying the strength of a chess player within a pool of players. A 300 to 500
Elo rating point difference implies a winning rate of about 85% to 95% for the higher rated player.
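For reference, the quoted winning rates follow from the standard Elo expected-score model (this formula is not given in the paper, but is consistent with the 85% to 95% range stated above):

```python
def elo_expected_score(diff):
    """Expected score of the higher-rated player, given a rating gap `diff`,
    under the standard Elo logistic model."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

# A 300-500 point gap corresponds to roughly an 85%-95% expected score:
print(round(elo_expected_score(300), 2))  # -> 0.85
print(round(elo_expected_score(500), 2))  # -> 0.95
```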
6.0.1 Training Methodology
At the start of each experiment, all weights were initialised to small random values. Games of self-play were then used to train each player. To maintain diversity during training, a small opening book was used. Once outside of the opening book, moves were selected greedily from the results of the search. Each training game was played within 1m 1s Fischer time controls. That is, both players start with a minute on the clock, and gain an additional second every time they make a move. Each training game would last roughly five minutes.
We selected the best step-size for each learning algorithm from a series of preliminary experiments: η = 1.0 × 10^−5 for TD-Leaf and RootStrap(αβ), η = 1.0 × 10^−6 for TreeStrap(minimax) and 5.0 × 10^−7 for TreeStrap(αβ). The TreeStrap variants used a minimum search depth parameter of d = 1. This meant that the target values were determined by at least one ply of full-width search, plus a varying amount of quiescence search.
6.1 Relative Performance Evaluation
We ran a competition between many different versions of Meep in tournament mode, each using
a heuristic function learned by one of our algorithms. In addition, a player based on randomly
initialised weights was included as a reference, and arbitrarily assigned an Elo rating of 250. The
best ratings achieved by each training method are displayed in Table 2.
We also measured the performance of each algorithm at intermediate stages throughout training.
Figure 2 shows the performance of each learning algorithm with increasing numbers of games on
a single training run. As each training game is played using the same time controls, this shows the
[Figure 2: "Learning from self-play: Rating versus Number of training games" — Rating (Elo) plotted against the number of training games for TreeStrap(alpha-beta), RootStrap(alpha-beta), TreeStrap(minimax), TD-Leaf and Untrained.]
Figure 2: Performance when trained via self-play starting from random initial weights. 95% confidence intervals are marked at each data point. The x-axis uses a logarithmic scale.
Algorithm            Elo
TreeStrap(αβ)        2157 ± 31
TreeStrap(minimax)   1807 ± 32
RootStrap(αβ)        1362 ± 59
TD-Leaf              1068 ± 36
Untrained             250 ± 63
Table 2: Best performance when trained by self play. 95% confidence intervals given.
performance of each learning algorithm given a fixed amount of computation. Importantly, the time
used for each learning update also took away from the total thinking time.
The data shown in Table 2 and Figure 2 was generated by BayesElo, a freely available program that
computes maximum likelihood Elo ratings. In each table, the estimated Elo rating is given along
with a 95% confidence interval. All Elo values are calculated relative to the reference player, and
should not be compared with Elo ratings of human chess players (including the results of online
play, described in the next section). Approximately 16000 games were played in the tournament.
The results demonstrate that learning from many nodes in the search tree is significantly more efficient than learning from a single root node. TreeStrap(minimax) and TreeStrap(αβ) learn effective
weights in just a thousand training games and attain much better maximum performance within the
duration of training. In addition, learning from alpha-beta search is more effective than learning
from minimax search. Alpha-beta search significantly boosts the search depth, by safely pruning
away subtrees that cannot affect the minimax value at the root. Although the majority of nodes now
contain one-sided bounds rather than exact values, it appears that the improvements to the search
depth outweigh the loss of bound information.
Our results demonstrate that the TreeStrap based algorithms can learn a good set of weights, starting
from random weights, from self-play in the game of chess. Our experiences using TD-Leaf in this
setting were similar to those described in (Baxter et al., 1998); within the limits of our training
scheme, learning occurred, but only to the level of weak amateur play. Our results suggest that
TreeStrap based methods are potentially less sensitive to initial starting conditions, and allow for
speedier convergence in self play; it will be interesting to see whether similar results carry across to
domains other than chess.
Algorithm        Training Partner   Rating
TreeStrap(αβ)    Self Play          1950–2197
TreeStrap(αβ)    Shredder           2154–2338
Table 3: Blitz performance at the Internet Chess Club
6.2 Evaluation by Internet Play
We also evaluated the performance of the heuristic function learned by TreeStrap(αβ), by using it in Meep to play against predominantly human opposition at the Internet Chess Club. We evaluated two heuristic functions, the first using weights trained by self-play, and the second using weights trained against Shredder, a grandmaster strength commercial chess program.
The hardware used online was a 1.8GHz Opteron, with 256MB of RAM being used for the transposition table. Approximately 350K nodes per second were seen when using the learned evaluation function. A small opening book was used to make the engine play a variety of different opening lines. Compared to Bodo, the learned evaluation routine was approximately 3 times slower, even though the evaluation function contained fewer features. This was due to a less optimised implementation, and the heavy use of floating point arithmetic.
Approximately 1000 games were played online, using 3m 3s Fischer time controls, for each heuristic
function. Although the heuristic function was fixed, the online rating fluctuates significantly over
time. This is due to the high K factor used by the Internet Chess Club to update Elo ratings, which
is tailored to human players rather than computer engines.
The online rating of the heuristic learned by self-play corresponds to weak master level play. The heuristic learned from games against Shredder was roughly 150 Elo stronger, corresponding to master level performance. Like TD-Leaf, TreeStrap also benefits from a carefully chosen opponent,
though the difference between self-play and ideal conditions is much less drastic. Furthermore,
a total of 13.5/15 points were scored against registered members who had achieved the title of
International Master.
We expect that these results could be further improved by using more powerful hardware, a more sophisticated evaluation function, or a better opening book. Furthermore, we used a generic alpha-beta search algorithm for learning. An interesting follow-up would be to explore the interaction between our learning algorithms and the more exotic alpha-beta search enhancements.
7 Conclusion
Our main result is demonstrating, for the first time, an algorithm that learns to play master level
Chess entirely through self play, starting from random weights. To provide insight into the nature
of our algorithms, we focused on a single non-trivial domain. However, the ideas that we have
introduced are rather general, and may have applications beyond deterministic two-player game tree
search.
Bootstrapping from search could, in principle, be applied to many other search algorithms.
Simulation-based search algorithms, such as UCT, have outperformed traditional search algorithms
in a number of domains. The TreeStrap algorithm could be applied, for example, to the heuristic
function that is used to initialise nodes in a UCT search tree with prior knowledge (Gelly & Silver,
2007). Alternatively, in stochastic domains the evaluation function could be updated towards the
value of an expectimax search, or towards the one-sided bounds computed by a *-minimax search
(Hauk et al., 2004; Veness & Blair, 2007). This approach could be viewed as a generalisation of approximate dynamic programming, in which the value function is updated from a multi-ply Bellman
backup.
Acknowledgments
NICTA is funded by the Australian Government as represented by the Department of Broadband,
Communications and the Digital Economy and the Australian Research Council through the ICT
Centre of Excellence program.
References
Baxter, J., Tridgell, A., & Weaver, L. (1998). KnightCap: a chess program that learns by combining TD(λ) with game-tree search. Proc. 15th International Conf. on Machine Learning (pp. 28–36). Morgan Kaufmann, San Francisco, CA.
Beal, D. F. (1990). A generalised quiescence search algorithm. Artificial Intelligence, 43, 85–98.
Beal, D. F., & Smith, M. C. (1997). Learning piece values using temporal differences. Journal of the International Computer Chess Association.
Buro, M. (1999). From simple features to sophisticated evaluation functions. First International Conference on Computers and Games (pp. 126–145).
Campbell, M., Hoane, A., & Hsu, F. (2002). Deep Blue. Artificial Intelligence, 134, 57–83.
Gelly, S., & Silver, D. (2007). Combining online and offline knowledge in UCT. 24th International Conference on Machine Learning (pp. 273–280).
Hauk, T., Buro, M., & Schaeffer, J. (2004). Rediscovering *-minimax search. Computers and Games (pp. 35–50).
Samuel, A. L. (1959). Some studies in machine learning using the game of checkers. IBM Journal of Research and Development, 3.
Schaeffer, J. (1989). The history heuristic and alpha-beta search enhancements in practice. IEEE Transactions on Pattern Analysis and Machine Intelligence, PAMI-11, 1203–1212.
Schaeffer, J., Hlynka, M., & Jussila, V. (2001). Temporal difference learning applied to a high performance game playing program. IJCAI, 529–534.
Sutton, R. (1988). Learning to predict by the method of temporal differences. Machine Learning, 3, 9–44.
Tesauro, G. (1994). TD-Gammon, a self-teaching backgammon program, achieves master-level play. Neural Computation, 6, 215–219.
Veness, J., & Blair, A. (2007). Effective use of transposition tables in stochastic game tree search. IEEE Symposium on Computational Intelligence and Games (pp. 112–116).
Anomaly Detection with Score functions based on Nearest Neighbor Graphs
Manqi Zhao
ECE Dept.
Boston University
Boston, MA 02215
[email protected]
Venkatesh Saligrama
ECE Dept.
Boston University
Boston, MA 02215
[email protected]
Abstract
We propose a novel non-parametric adaptive anomaly detection algorithm for high dimensional data based on score functions derived from nearest neighbor graphs on n-point nominal data. Anomalies are declared whenever the score of a test sample falls below α, which is supposed to be the desired false alarm level. The resulting anomaly detector is shown to be asymptotically optimal in that it is uniformly most powerful for the specified false alarm level, α, for the case when the anomaly density is a mixture of the nominal and a known density. Our algorithm is computationally efficient, being linear in dimension and quadratic in data size. It does not require choosing complicated tuning parameters or function approximation classes and it can adapt to local structure such as local change in dimensionality. We demonstrate the algorithm on both artificial and real data sets in high dimensional feature spaces.
1 Introduction
Anomaly detection involves detecting statistically significant deviations of test data from nominal
distribution. In typical applications the nominal distribution is unknown and generally cannot be
reliably estimated from nominal training data due to a combination of factors such as limited data
size and high dimensionality.
We propose an adaptive non-parametric method for anomaly detection based on score functions that map data samples to the interval [0, 1]. Our score function is derived from a K-nearest neighbor graph (K-NNG) on n-point nominal data. Anomaly is declared whenever the score of a test sample falls below α (the desired false alarm error). The efficacy of our method rests upon its close connection to multivariate p-values. In statistical hypothesis testing, p-value is any transformation of the feature space to the interval [0, 1] that induces a uniform distribution on the nominal data. When test samples with p-values smaller than α are declared as anomalies, false alarm error is less than α.
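This uniformity property can be illustrated in one dimension (a toy sketch of ours, not the paper's multivariate construction): empirical p-values computed on nominal data are approximately uniform, so thresholding them at α yields a false alarm rate close to α.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(size=5000)   # nominal training data (1-D toy example)
test = rng.normal(size=5000)    # nominal test data

# Empirical p-value of a test point: fraction of training samples at least as extreme.
pvals = np.array([(np.abs(train) >= abs(t)).mean() for t in test])

# Under the nominal distribution the p-values are close to uniform on [0, 1],
# so declaring an anomaly when p < alpha gives a false alarm rate of about alpha.
alpha = 0.05
false_alarm_rate = (pvals < alpha).mean()
```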
We develop a novel notion of p-values based on measures of level sets of likelihood ratio functions.
Our notion provides a characterization of the optimal anomaly detector, in that, it is uniformly most
powerful for a specified false alarm level for the case when the anomaly density is a mixture of the
nominal and a known density. We show that our score function is asymptotically consistent, namely,
it converges to our multivariate p-value as data length approaches infinity.
Anomaly detection has been extensively studied. It is also referred to as novelty detection [1, 2],
outlier detection [3], one-class classification [4, 5] and single-class classification [6] in the literature. Approaches to anomaly detection can be grouped into several categories. In parametric
approaches [7] the nominal densities are assumed to come from a parameterized family and generalized likelihood ratio tests are used for detecting deviations from nominal. It is difficult to use
parametric approaches when the distribution is unknown and data is limited. A K-nearest neighbor
1
(K-NN) anomaly detection approach is presented in [3, 8]. There an anomaly is declared whenever
the distance to the K-th nearest neighbor of the test sample falls outside a threshold. In comparison
our anomaly detector utilizes the global information available from the entire K-NN graph to detect
deviations from the nominal. In addition it has provable optimality properties. Learning theoretic
approaches attempt to find decision regions, based on nominal data, that separate nominal instances
from their outliers. These include the one-class SVM of Schölkopf et al. [9] where the basic idea
is to map the training data into the kernel space and to separate them from the origin with maximum margin. Other algorithms along this line of research include support vector data description
[10], linear programming approach [1], and single class minimax probability machine [11]. While
these approaches provide impressive computationally efficient solutions on real data, it is generally
difficult to precisely relate tuning parameter choices to desired false alarm probability.
Scott and Nowak [12] derive decision regions based on minimum volume (MV) sets, which does
provide Type I and Type II error control. They approximate (in appropriate function classes) level
sets of the unknown nominal multivariate density from training samples. Related work by Hero
[13] based on geometric entropic minimization (GEM) detects outliers by comparing test samples
to the most concentrated subset of points in the training sample. This most concentrated set is the
K-point minimum spanning tree(MST) for n-point nominal data and converges asymptotically to
the minimum entropy set (which is also the MV set). Nevertheless, computing K-MST for n-point
data is generally intractable. To overcome these computational limitations [13] proposes heuristic
greedy algorithms based on leave-one out K-NN graph, which while inspired by K-MST algorithm
is no longer provably optimal. Our approach is related to these latter techniques, namely, MV sets
of [12] and GEM approach of [13]. We develop score functions on K-NNG which turn out to be the
empirical estimates of the volume of the MV sets containing the test point. The volume, which is a
real number, is a sufficient statistic for ensuring optimal guarantees. In this way we avoid explicit
high-dimensional level set computation. Yet our algorithms lead to statistically optimal solutions
with the ability to control false alarm and miss error probabilities.
The main features of our anomaly detector are summarized. (1) Like [13] our algorithm scales
linearly with dimension and quadratically with data size, and can be applied to high dimensional feature
spaces. (2) Like [12] our algorithm is provably optimal in that it is uniformly most powerful for
the specified false alarm level, α, for the case that the anomaly density is a mixture of the nominal
and any other density (not necessarily uniform). (3) We do not require assumptions of linearity,
smoothness, continuity of the densities or the convexity of the level sets. Furthermore, our algorithm
adapts to the inherent manifold structure or local dimensionality of the nominal density. (4) Like [13]
and unlike other learning theoretic approaches such as [9, 12] we do not require choosing complex
tuning parameters or function approximation classes.
2 Anomaly Detection Algorithm: Score functions based on K-NNG
In this section we present our basic algorithm devoid of any statistical context. Statistical analysis appears in Section 3. Let S = {x_1, x_2, ..., x_n} be the nominal training set of size n belonging to the unit cube [0, 1]^d. For notational convenience we use η and x_{n+1} interchangeably to denote a test point. Our task is to declare whether the test point is consistent with nominal data or deviates from the nominal data. If the test point is an anomaly it is assumed to come from a mixture of the nominal distribution underlying the training data and another known density (see Section 3).
Let d(x, y) be a distance function denoting the distance between any two points x, y ∈ [0, 1]^d. For simplicity we denote the distances by d_{ij} = d(x_i, x_j). In the simplest case we assume the distance function to be Euclidean. However, we also consider geodesic distances to exploit the underlying manifold structure. The geodesic distance is defined as the shortest distance on the manifold. The Geodesic Learning algorithm, a subroutine in Isomap [14, 15], can be used to efficiently and consistently estimate the geodesic distances. In addition, by means of selective weighting of different coordinates, the distance function could also account for pronounced changes in local dimensionality. This can be accomplished for instance through Mahalanobis distances or as a by-product of local linear embedding [16]. However, we skip these details here and assume that a suitable distance metric is chosen.
Once a distance function is defined, our next step is to form a K-nearest-neighbor graph (K-NNG) or,
alternatively, an ε-neighbor graph (ε-NG). The K-NNG is formed by connecting each x_i to the K closest
points {x_{i1}, . . . , x_{iK}} in S − {x_i}. We then sort the K nearest distances for each x_i in increasing
order d_{i,i1} ≤ · · · ≤ d_{i,iK} and denote R_S(x_i) = d_{i,iK}, that is, the distance from x_i to its K-th
nearest neighbor. The ε-NG is constructed by connecting x_i and x_j if and only if d_ij ≤ ε. In this
case we define N_S(x_i) as the degree of point x_i in the ε-NG.
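As a concrete illustration, the two graph statistics, the K-th nearest-neighbor distance R_S(x_i) and the ε-graph degree N_S(x_i), can be computed directly from pairwise Euclidean distances. This is a sketch of our own; the function names are not from the paper:

```python
import numpy as np

def knn_radii(X, K):
    """R_S(x_i): distance from x_i to its K-th nearest neighbor in S - {x_i}."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)          # a point is not its own neighbor
    return np.sort(D, axis=1)[:, K - 1]  # K-th smallest distance per row

def eps_degrees(X, eps):
    """N_S(x_i): degree of x_i in the epsilon-neighbor graph."""
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    return (D <= eps).sum(axis=1)
```

Both statistics cost O(n^2 d) time when computed naively this way, matching the complexity discussion later in this section.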
For the simple case when the anomalous density is an arbitrary mixture of the nominal and uniform
density,¹ we consider the following two score functions associated with the two graphs K-NNG and
ε-NG, respectively. The score functions map the test data η to the interval [0, 1]:

K-LPE:  p̂_K(η) = (1/n) Σ_{i=1}^{n} I{R_S(η) ≤ R_S(x_i)}   (1)

ε-LPE:  p̂_ε(η) = (1/n) Σ_{i=1}^{n} I{N_S(η) ≥ N_S(x_i)}   (2)

where I{·} is the indicator function.
Finally, given a pre-defined significance level α (e.g., 0.05), we declare η to be anomalous if
p̂_K(η) ≤ α (respectively, p̂_ε(η) ≤ α). We call this algorithm the Localized p-value Estimation (LPE)
algorithm. This choice is motivated by its close connection to multivariate p-values (see Section 3).
The score function K-LPE (or ε-LPE) measures the relative concentration of the point η compared to
the training set. Section 3 establishes that the scores for nominally generated data are asymptotically
uniformly distributed on [0, 1], while scores for anomalous data cluster around 0. Hence, when scores
below level α are declared anomalous, the false alarm error is asymptotically smaller than α (since
the integral of a uniform distribution from 0 to α is α).
Figure 1: Left: Level sets of the nominal bivariate Gaussian mixture distribution used to illustrate the
K-LPE algorithm. Middle: Results of K-LPE with K = 6 and the Euclidean distance metric for m = 150 test
points drawn from an equal mixture of the 2D uniform and the (nominal) bivariate distributions. Scores for the test
points are based on 200 nominal training samples. Scores falling below the threshold level 0.05 are declared as
anomalies. The dotted contour corresponds to the exact bivariate Gaussian density level set at level α = 0.05.
Right: The empirical distribution of the test-point scores associated with the bivariate Gaussian appears to be
uniform, while scores for the test points drawn from the 2D uniform distribution cluster around zero.
Figure 1 illustrates the use of the K-LPE algorithm for anomaly detection when the nominal data is a
2D Gaussian mixture. The middle panel of Figure 1 shows that the detection results based on K-LPE are
consistent with the theoretical contour for significance level α = 0.05. The right panel of Figure 1
shows the empirical distribution (derived from a kernel density estimate) of the score function
K-LPE for the nominal (solid blue) and the anomalous (dashed red) data. We can see that the curve for
the nominal data is approximately uniform on the interval [0, 1] and the curve for the anomalous data
has a peak at 0. Therefore, choosing the threshold α = 0.05 will approximately control the Type I
error within 0.05 and minimize the Type II error. We also take note of the inherent robustness of our
algorithm: as seen from the figure (right), small changes in α lead to small changes in the actual false
alarm and miss levels.
¹When the mixing density is not uniform but, say, f1, the score functions must be modified to
p̂_K(η) = (1/n) Σ_{i=1}^{n} I{R_S^d(η) f1(η) ≤ R_S^d(x_i) f1(x_i)} and
p̂_ε(η) = (1/n) Σ_{i=1}^{n} I{N_S(η)/f1(η) ≥ N_S(x_i)/f1(x_i)}
for the two graphs K-NNG and ε-NG, respectively.
To summarize the above discussion, our LPE algorithm has three steps:
(1) Inputs: significance level α; distance metric (Euclidean, geodesic, weighted, etc.).
(2) Score computation: construct the K-NNG (or ε-NG) based on d_ij and compute the score function
K-LPE from Equation 1 (or ε-LPE from Equation 2).
(3) Make decision: declare η to be anomalous if and only if p̂_K(η) ≤ α (or p̂_ε(η) ≤ α).
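A minimal end-to-end sketch of the three steps, assuming the Euclidean metric and the K-LPE score of Equation 1 (the function names are ours, not from a released implementation):

```python
import numpy as np

def k_lpe(train, test, K, alpha=0.05):
    """K-LPE (Equation 1) with the alpha-level decision of step (3)."""
    # Step 2a: K-th nearest-neighbor distance of each training point within S.
    D = np.linalg.norm(train[:, None, :] - train[None, :, :], axis=-1)
    np.fill_diagonal(D, np.inf)
    R_train = np.sort(D, axis=1)[:, K - 1]
    # Step 2b: K-th nearest-neighbor distance of each test point to S.
    R_test = np.sort(np.linalg.norm(test[:, None, :] - train[None, :, :],
                                    axis=-1), axis=1)[:, K - 1]
    # Step 2c: score = fraction of training points at least as spread out.
    scores = (R_test[:, None] <= R_train[None, :]).mean(axis=1)
    # Step 3: declare a point anomalous iff its score falls at or below alpha.
    return scores, scores <= alpha
```

On a nominal Gaussian cloud, a far-away query receives a score near 0 and is flagged, while a query in the bulk of the data receives a high score and is retained as nominal.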
Computational complexity: Computing each pairwise distance requires O(d) operations, hence
O(n²d) operations for all the nodes in the training set. In the worst case, computing the K-NN graph
(for small K) and the functions R_S(·), N_S(·) requires O(n²) operations over all the nodes in the
training data. Finally, computing the score for each test point requires O(nd + n) operations (given that
R_S(·), N_S(·) have already been computed).
Remark: LPE is fundamentally different from non-parametric density estimation or level-set estimation
schemes (e.g., MV-set). Those approaches involve explicit estimation of high-dimensional
quantities and are thus hard to apply in high-dimensional problems. By computing scores for each test
sample we avoid high-dimensional computation. Furthermore, as we will see in the following section, the
scores are estimates of multivariate p-values, which turn out to be sufficient statistics for
optimal anomaly detection.
3 Theory: Consistency of LPE
A statistical framework for the anomaly detection problem is presented in this section. We establish
that anomaly detection is equivalent to thresholding p-values for multivariate data. We then
show that the score functions developed in the previous section are asymptotically consistent estimators of the p-values. Consequently, it follows that the strategy of declaring an anomaly when a
test sample has a low score is asymptotically optimal.
Assume that the data belongs to the d-dimensional unit cube [0, 1]^d and that the nominal data is sampled from a multivariate density f0(x) supported on [0, 1]^d. Anomaly
detection can be formulated as a composite hypothesis testing problem. Suppose the test data η comes
from the mixture distribution f(η) = (1 − π) f0(η) + π f1(η), where f1(η) is a mixing density
supported on [0, 1]^d. Anomaly detection involves testing the nominal hypothesis H0 : π = 0 versus
the alternative (anomaly) H1 : π > 0. The goal is to maximize the detection power subject to the false
alarm level α, namely, P(declare H1 | H0) ≤ α.
Definition 1. Let P0 be the nominal probability measure and let f1(·) be P0-measurable. Suppose the
likelihood ratio f1(x)/f0(x) does not have non-zero flat spots on any open ball in [0, 1]^d. Define
the p-value of a data point η as

p(η) = P0( x : f1(x)/f0(x) ≥ f1(η)/f0(η) ).

Note that the definition naturally accounts for singularities which may arise if the support of f0(·)
is a lower-dimensional manifold. In that case we encounter f1(η) > 0, f0(η) = 0, and the p-value
p(η) = 0; here an anomaly is always declared (low score).
The above formula can be thought of as a mapping η → [0, 1]. Furthermore, the distribution of
p(η) under H0 is uniform on [0, 1]. However, as noted in the introduction, there are other such transformations. To build intuition about this transformation and its utility, consider the following
example. When the mixing density is uniform, namely f1(η) = U(η) where U(·) is uniform over
[0, 1]^d, note that Ω_α = {η | p(η) ≥ α} is a density level set at level α. It is well known (see [12])
that such a density level set is equivalent to a minimum volume set of level α. The minimum volume
set at level α is known to be the uniformly most powerful decision region for testing H0 : π = 0
versus the alternative H1 : π > 0 (see [13, 12]). The generalization to arbitrary f1 is described next.
Theorem 1. The uniformly most powerful test for testing H0 : π = 0 versus the alternative
(anomaly) H1 : π > 0 at a prescribed significance level α, P(declare H1 | H0) ≤ α, is:

φ(η) = { H1, if p(η) ≤ α;  H0, otherwise. }
Proof. We provide the main idea of the proof. First, measure-theoretic arguments are used to
establish p(X) as a random variable over [0, 1] under both the nominal and anomalous distributions.
Next, when X ~ f0, i.e., distributed with the nominal density, it follows that the random variable
p(X) ~ U[0, 1]. When X ~ f = (1 − π) f0 + π f1 with π > 0, the random variable p(X) ~ g, where g(·)
is a monotonically decreasing PDF supported on [0, 1]. Consequently, the uniformly most powerful
test at significance level α is to declare p-values smaller than α as anomalies.
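The two distributional facts used in this proof can be checked numerically. For f0 = N(0, 1) and a uniform mixing density f1, the p-value reduces to p(η) = P0(f0(X) ≤ f0(η)) = 2(1 − Φ(|η|)). The sketch below is our own illustration, not part of the paper:

```python
import math
import random

def p_value(eta):
    """p(eta) for f0 = N(0,1) and uniform f1:
    P0(f0(X) <= f0(eta)) = P(|X| >= |eta|) = 2*(1 - Phi(|eta|)) = erfc(|eta|/sqrt(2))."""
    return math.erfc(abs(eta) / math.sqrt(2.0))

random.seed(0)
# Under H0 the p-values should look uniform on [0, 1] ...
nominal = [p_value(random.gauss(0.0, 1.0)) for _ in range(20000)]
# ... while anomalous draws from a broad uniform density pile up near 0.
anomalous = [p_value(random.uniform(-6.0, 6.0)) for _ in range(20000)]

mean_nominal = sum(nominal) / len(nominal)
frac_nominal_flagged = sum(p <= 0.05 for p in nominal) / len(nominal)
frac_anomalous_flagged = sum(p <= 0.05 for p in anomalous) / len(anomalous)
```

With these draws, mean_nominal is close to 0.5 and about 5% of the nominal p-values fall below α = 0.05, matching the false-alarm guarantee, while roughly two-thirds of the anomalous draws are flagged.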
Next we derive the relationship between the p-values and our score function. By definition, R_S(η)
and R_S(x_i) are correlated because the neighborhoods of η and x_i might overlap. We modify our
algorithm to simplify the analysis. We assume n is odd, say, and can be written as n = 2m + 1.
We divide the training set S into two parts:

S = S1 ∪ S2 = {x0, x1, . . . , xm} ∪ {x_{m+1}, . . . , x_{2m}}.

We modify ε-LPE to p̂_ε(η) = (1/m) Σ_{x_i ∈ S1} I{N_{S2}(η) ≥ N_{S1}(x_i)} (and K-LPE to
p̂_K(η) = (1/m) Σ_{x_i ∈ S1} I{R_{S2}(η) ≤ R_{S1}(x_i)}). Now R_{S2}(η) and R_{S1}(x_i) are independent.
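The split-sample modification is straightforward to implement. A sketch of the K-LPE variant (names ours; for simplicity we average over all points of S1):

```python
import numpy as np

def split_k_lpe_score(S, eta, K):
    """Split-sample K-LPE: S1 supplies the reference radii R_{S1}(x_i),
    while S2 supplies R_{S2}(eta), so the two quantities are independent."""
    m = (len(S) - 1) // 2                  # n = 2m + 1
    S1, S2 = S[: m + 1], S[m + 1 :]
    D1 = np.linalg.norm(S1[:, None, :] - S1[None, :, :], axis=-1)
    np.fill_diagonal(D1, np.inf)
    R_S1 = np.sort(D1, axis=1)[:, K - 1]   # K-th NN distances within S1
    R_S2_eta = np.sort(np.linalg.norm(S2 - eta, axis=1))[K - 1]
    return float((R_S2_eta <= R_S1).mean())
```

The behavior matches the unsplit score: distant test points score near 0 and points in the bulk of the nominal density score well above the significance level.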
Furthermore, we assume f0(·) satisfies the following two smoothness conditions:
1. the Hessian matrix H(x) of f0(x) is always dominated by a matrix with largest eigenvalue
λ_M, i.e., ∃M s.t. H(x) ⪯ M ∀x and λ_max(M) ≤ λ_M;
2. on the support of f0(·), its value is always lower bounded by some β > 0.
We have the following theorem.
Theorem 2. Consider the setup above, with the training data {x_i}_{i=1}^{n} generated i.i.d. from f0(x).
Let η ∈ [0, 1]^d be an arbitrary test sample. It follows that, for a suitable choice of K and under the
above smoothness conditions,

|p̂_K(η) − p(η)| → 0 almost surely as n → ∞, ∀η ∈ [0, 1]^d.
For simplicity, we limit ourselves to the case where f1 is uniform. The proof of Theorem 2 consists
of two steps:
• We show that the expectation E_{S1}[p̂_ε(η)] → p(η) as n → ∞ (Lemma 3). This result is then
extended to K-LPE (i.e., E_{S1}[p̂_K(η)] → p(η)) in Lemma 4.
• Next we show that p̂_K(η) → E_{S1}[p̂_K(η)] as n → ∞ via a concentration inequality (Lemma 5).
Lemma 3 (ε-LPE). By picking ε = m^{−3/(5d)} √(d/(2πe)), with probability at least 1 − e^{−λ m^{1/15}/2}
(for a constant λ > 0 depending on f0 and d),

l_m(η) ≤ E_{S1}[p̂_ε(η)] ≤ u_m(η)   (3)

where
l_m(η) = P0{x : (f0(η) − ε1)(1 − ε2) ≥ (f0(x) + ε1)(1 + ε2)} − e^{−λ m^{1/15}/2},
u_m(η) = P0{x : (f0(η) + ε1)(1 + ε2) ≥ (f0(x) − ε1)(1 − ε2)} + e^{−λ m^{1/15}/2},
ε1 = λ_M m^{−6/(5d)}/(2πe(d + 2)) and ε2 = 2m^{−1/6}.
Proof. We only prove the lower bound, since the upper bound follows along similar lines. By interchanging the expectation with the summation,

E_{S1}[p̂_ε(η)] = E_{S1}[ (1/m) Σ_{x_i ∈ S1} I{N_{S2}(η) ≥ N_{S1}(x_i)} ]
             = (1/m) Σ_{x_i ∈ S1} E_{x_i}[ E_{S1∖x_i}[ I{N_{S2}(η) ≥ N_{S1}(x_i)} ] ]
             = E_{x1}[ P_{S1∖x1}(N_{S2}(η) ≥ N_{S1}(x1)) ],

where the last equality follows from the symmetric structure of {x0, x1, . . . , xm}.
Clearly the objective of the proof is to show P_{S1∖x1}(N_{S2}(η) ≥ N_{S1}(x1)) → I{f0(η) ≥ f0(x1)}
as n → ∞. Skipping technical details, this can be accomplished in two steps. (1) Note that N_S(x1) is a binomial
random variable with success probability q(x1) := ∫_{B_ε} f0(x1 + t) dt. This relates P_{S1∖x1}(N_{S2}(η) ≥
N_{S1}(x1)) to I{q(η) ≥ q(x1)}. (2) We relate I{q(η) ≥ q(x1)} to I{f0(η) ≥ f0(x1)} based on the
smoothness condition. The details of these two steps are shown below.
Note that N_{S1}(x1) ~ Binom(m, q(x1)). By the Chernoff bound for the binomial distribution, we have

P_{S1∖x1}(N_{S1}(x1) − m q(x1) ≥ δ) ≤ e^{−δ²/(2 m q(x1))},

that is, N_{S1}(x1) is concentrated around m q(x1). This implies

P_{S1∖x1}(N_{S2}(η) ≥ N_{S1}(x1)) ≥ I{N_{S2}(η) ≥ m q(x1) + δ_{x1}} − e^{−δ_{x1}²/(2 m q(x1))}.   (4)
We choose δ_{x1} = q(x1) m^γ (γ will be specified later) and reformulate equation (4) as

P_{S1∖x1}(N_{S2}(η) ≥ N_{S1}(x1)) ≥ I{ N_{S2}(η)/(m Vol(B_ε)) ≥ (q(x1)/Vol(B_ε))(1 + m^{γ−1}) } − e^{−q(x1) m^{2γ−1}/2}.   (5)
Next, we relate q(x1) (i.e., ∫_{B_ε} f0(x1 + t) dt) to f0(x1) via a Taylor expansion and the smoothness
condition on f0:

| ∫_{B_ε} f0(x1 + t) dt / Vol(B_ε) − f0(x1) | ≤ (λ_M / (2 Vol(B_ε))) ∫_{B_ε} ‖t‖² dt = λ_M ε²/(2d(d + 2)),   (6)
and then equation (5) becomes

P_{S1∖x1}(N_{S2}(η) ≥ N_{S1}(x1)) ≥ I{ N_{S2}(η)/(m Vol(B_ε)) ≥ (f0(x1) + λ_M ε²/(2d(d+2)))(1 + m^{γ−1}) } − e^{−q(x1) m^{2γ−1}/2}.
By applying the same steps to N_{S2}(η) as in equation (4) (Chernoff bound) and equation (6) (Taylor
expansion), we have, with probability at least 1 − e^{−q(η) m^{2γ−1}/2},

E_{x1}[P_{S1∖x1}(N_{S2}(η) ≥ N_{S1}(x1))] ≥
P_{x1}( (f0(η) − λ_M ε²/(2d(d+2)))(1 − m^{γ−1}) ≥ (f0(x1) + λ_M ε²/(2d(d+2)))(1 + m^{γ−1}) ) − e^{−q(x1) m^{2γ−1}/2}.

Finally, by choosing ε² = m^{−6/(5d)} d/(2πe) and γ = 5/6, we prove the lemma.
Lemma 4 (K-LPE). By picking K = (1 − 2m^{−1/6}) m^{2/5} (f0(η) − ε1), with probability at least
1 − e^{−λ m^{1/15}/2},

l_m(η) ≤ E_{S1}[p̂_K(η)] ≤ u_m(η).   (7)
Proof. The proof is very similar to that of Lemma 3, and we only give a brief outline here. The objective
is now to show P_{S1∖x1}(R_{S2}(η) ≤ R_{S1}(x1)) → I{f0(η) ≥ f0(x1)} as n → ∞. The basic idea is to use
the result of Lemma 3. To accomplish this, we note that {R_{S2}(η) ≤ R_{S1}(x1)} contains the events
{N_{S2}(η) ≥ K} ∩ {N_{S1}(x1) ≤ K}, or equivalently

{N_{S2}(η) − q(η)m ≥ K − q(η)m} ∩ {N_{S1}(x1) − q(x1)m ≤ K − q(x1)m}.   (8)

By the tail probability of the binomial distribution, the probability of each of the above two events converges to
1 exponentially fast if K − q(η)m < 0 and K − q(x1)m > 0. By using the same two-step bounding
technique developed in the proof of Lemma 3, these two inequalities are implied by

K − m^{2/5}(f0(η) − ε1) < 0 and K − m^{2/5}(f0(x1) + ε1) > 0.

Therefore, if we choose K = (1 − 2m^{−1/6}) m^{2/5}(f0(η) − ε1), we have, with probability at least
1 − e^{−λ m^{1/15}/2},

P_{S1∖x1}(R_{S2}(η) ≤ R_{S1}(x1)) ≥ I{(f0(η) − ε1)(1 − ε2) ≥ (f0(x1) + ε1)(1 + ε2)} − e^{−λ m^{1/15}/2}.
Remark: Lemmas 3 and 4 were proved for specific choices of ε and K. However, these parameters can be
chosen from a range of values, leading to different lower and upper bounds. We will show
in Section 4 via simulation that our LPE algorithm is generally robust to the choice of the parameter K.
Lemma 5. Suppose K = c m^{2/5} and denote p̂_K(η) = (1/m) Σ_{x_i ∈ S1} I{R_{S2}(η) ≤ R_{S1}(x_i)}. We have

P0( |E_{S1}[p̂_K(η)] − p̂_K(η)| > ε ) ≤ 2 e^{−2 ε² m^{1/5} / (c² γ_d²)},

where γ_d is a constant, defined as the minimal number of cones centered at the origin of angle
π/6 that cover R^d.
Proof. We cannot apply the law of large numbers in this case because the indicators I{R_{S2}(η) ≤ R_{S1}(x_i)}
are correlated. Instead, we need a more general concentration-of-measure result such
as McDiarmid's inequality [17]. Denote F(x0, . . . , xm) = (1/m) Σ_{x_i ∈ S1} I{R_{S2}(η) ≤ R_{S1}(x_i)}. From
Corollary 11.1 in [18],

sup_{x0, . . . , xm, x'_i} |F(x0, . . . , x_i, . . . , xm) − F(x0, . . . , x'_i, . . . , xm)| ≤ K γ_d / m.   (9)

The lemma then follows directly from applying McDiarmid's inequality.
Theorem 2 directly follows from the combination of Lemma 4 and Lemma 5 and a standard application of the first Borel–Cantelli lemma. We have used the Euclidean distance in Theorem 2. When the
support of f0 lies on a lower-dimensional manifold (say of dimension d0 < d), adopting the geodesic metric leads
to faster convergence; it turns out that d0 replaces d in the expression for ε1 in Lemma 3.
4 Experiments
First, to test the sensitivity of K-LPE to parameter changes, we run K-LPE on the benchmark dataset Banana [19] with K varying from 2 to 12. We randomly pick 109 points with label "+1" and
regard them as the nominal training data. The test data comprises 108 "+1" points and 183 "−1"
points (ground truth), and the algorithm is supposed to predict "+1" data as nominal and "−1" data
as anomalous. The scores computed for the test set using Equation 1 are oblivious to the true f1 density (the "−1"
labels). The Euclidean distance metric is adopted for this experiment.
To control the false alarm rate at level α, points with score smaller than α are predicted as anomalies. The empirical false alarm and true positive rates are computed from the ground truth, and we vary α to obtain the empirical
ROC curve. The same procedure is followed for the rest of the experiments in this section. As
shown in Figure 2(a), the LPE algorithm is insensitive to K. For comparison we plot the empirical ROC
curve of the one-class SVM of [9]. For our OC-SVM implementation, for a fixed bandwidth c, we
obtain the empirical ROC curve by varying ν; we then vary the bandwidth c to obtain the best
(in terms of AUC) ROC curve. The optimal bandwidth turns out to be c = 1.5. In LPE, if we set
α = 0.05 we get empirical FA = 0.06, and for α = 0.08, empirical FA = 0.09. For the OC-SVM we
are unaware of any natural way of picking c and ν to control the FA rate based on training data.
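The empirical ROC procedure described above amounts to sweeping the threshold α over the observed scores (a sketch of our own; names are ours):

```python
def empirical_roc(nominal_scores, anomalous_scores):
    """Sweep the threshold alpha over all observed score values; a point is
    predicted anomalous when its score is <= alpha.  Returns (FA, TP) pairs."""
    roc = []
    for alpha in sorted(set(nominal_scores) | set(anomalous_scores)):
        # False alarm rate: nominal points wrongly flagged at this alpha.
        fa = sum(s <= alpha for s in nominal_scores) / len(nominal_scores)
        # True positive rate: anomalous points correctly flagged.
        tp = sum(s <= alpha for s in anomalous_scores) / len(anomalous_scores)
        roc.append((fa, tp))
    return roc
```

The resulting (FA, TP) pairs are monotone in α, so plotting them directly yields the empirical ROC curves reported in Figures 2 and 3.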
Next, we apply our K-LPE to a problem where the nominal and anomalous data are generated in
the following way:

f0 ~ (1/2) N( (8, 0)ᵀ, diag(1, 9) ) + (1/2) N( (−8, 0)ᵀ, diag(1, 9) ),   f1 ~ N( 0, diag(49, 49) ).   (10)
We call the ROC curve corresponding to the optimal Bayesian classifier the clairvoyant ROC (the
red dashed curve in Figure 2(b)). The other two curves are averaged (over 15 trials) empirical ROC
curves obtained via LPE. Here we set K = 6 and n = 40 or n = 160. We see that, for a relatively small
training set of size 160, the average empirical ROC curve is very close to the clairvoyant ROC curve.
Finally, we ran LPE on three real-world datasets: Wine, Ionosphere [20], and the US Postal
Service (USPS) database of handwritten digits. If there are more than 2 labels in a dataset, we
artificially regard points with one particular label as nominal and regard the points with the other labels
as anomalous. For example, for the USPS dataset, we regard instances of digit 0 as nominal and
instances of digits 1, . . . , 9 as anomalous. The data points are normalized to lie within [0, 1]^d and we
Figure 2: (a) Empirical ROC curve of K-LPE on the banana dataset with K = 2, 4, 6, 8, 10, 12 (with
n = 400) vs. the empirical ROC curve of the one-class SVM developed in [9]; (b) Empirical ROC curves of the
K-LPE algorithm vs. the clairvoyant ROC curve (f0 is given by Equation 10) for K = 6 and for n = 40 or 160.
use the geodesic distance [14]. The ROC curves are shown in Figure 3. The feature dimension of Wine
is 13, and we apply the ε-LPE algorithm with ε = 0.9 and n = 39; the test set is a mixture of
20 nominal points and 158 anomalous points. The feature dimension of Ionosphere is 34, and we
apply the K-LPE algorithm with K = 9 and n = 175; the test set is a mixture of 50 nominal points
and 126 anomalous points. The feature dimension of USPS is 256, and we apply the K-LPE algorithm
with K = 9 and n = 400; the test set is a mixture of 367 nominal points and 33 anomalous points.
On USPS, setting α = 0.05 yields an empirical miss rate of 6.1% and an empirical false alarm rate of 5.7%
(in contrast, FP = 7% and FA = 9% with ν = 5% for the OC-SVM, as reported in [9]). In practice we
find K-LPE preferable to ε-LPE, and as a rule of thumb setting K ≈ n^{2/5} is generally
effective.
Figure 3: ROC curves on real datasets via LPE. (a) Wine dataset with D = 13, n = 39, ε = 0.9; (b)
Ionosphere dataset with D = 34, n = 175, K = 9; (c) USPS dataset with D = 256, n = 400, K = 9.
5 Conclusion
In this paper, we proposed a novel non-parametric adaptive anomaly detection algorithm which leads
to a computationally efficient solution with provable optimality guarantees. Our algorithm takes a
K-nearest neighbor graph as an input and produces a score for each test point. Scores turn out to be
empirical estimates of the volume of minimum volume level sets containing the test point. While
minimum volume level sets provide an optimal characterization for anomaly detection, they are
high dimensional quantities and generally difficult to reliably compute in high dimensional feature
spaces. Nevertheless, a sufficient statistic for optimal tradeoff between false alarms and misses is
the volume of the MV set itself, which is a real number. By computing score functions we avoid
computing high dimensional quantities and still ensure optimal control of false alarms and misses.
The computational cost of our algorithm scales linearly in dimension and quadratically in data size.
References
[1] C. Campbell and K. P. Bennett, "A linear programming approach to novelty detection," in Advances in
Neural Information Processing Systems 13. MIT Press, 2001, pp. 395–401.
[2] M. Markou and S. Singh, "Novelty detection: a review – part 1: statistical approaches," Signal Processing,
vol. 83, pp. 2481–2497, 2003.
[3] R. Ramaswamy, R. Rastogi, and K. Shim, "Efficient algorithms for mining outliers from large data sets,"
in Proceedings of the ACM SIGMOD Conference, 2000.
[4] R. Vert and J. Vert, "Consistency and convergence rates of one-class SVMs and related algorithms," Journal
of Machine Learning Research, vol. 7, pp. 817–854, 2006.
[5] D. Tax and K. R. Müller, "Feature extraction for one-class classification," in Artificial Neural Networks
and Neural Information Processing, Istanbul, Turkey, 2003.
[6] R. El-Yaniv and M. Nisenson, "Optimal single-class classification strategies," in Advances in Neural Information Processing Systems 19. MIT Press, 2007.
[7] I. V. Nikiforov and M. Basseville, Detection of Abrupt Changes: Theory and Applications. Prentice-Hall,
New Jersey, 1993.
[8] K. Zhang, M. Hutter, and H. Jin, "A new local distance-based outlier detection approach for scattered
real-world data," March 2009, arXiv:0903.3257v1 [cs.LG].
[9] B. Schölkopf, J. C. Platt, J. Shawe-Taylor, A. J. Smola, and R. Williamson, "Estimating the support of a
high-dimensional distribution," Neural Computation, vol. 13, no. 7, pp. 1443–1471, 2001.
[10] D. Tax, "One-class classification: Concept-learning in the absence of counter-examples," Ph.D. dissertation, Delft University of Technology, June 2001.
[11] G. R. G. Lanckriet, L. El Ghaoui, and M. I. Jordan, "Robust novelty detection with single-class MPM,"
in Neural Information Processing Systems Conference, vol. 18, 2005.
[12] C. Scott and R. D. Nowak, "Learning minimum volume sets," Journal of Machine Learning Research,
vol. 7, pp. 665–704, 2006.
[13] A. O. Hero, "Geometric entropy minimization (GEM) for anomaly detection and localization," in Neural
Information Processing Systems Conference, vol. 19, 2006.
[14] J. B. Tenenbaum, V. de Silva, and J. C. Langford, "A global geometric framework for nonlinear dimensionality reduction," Science, vol. 290, pp. 2319–2323, 2000.
[15] M. Bernstein, V. de Silva, J. C. Langford, and J. B. Tenenbaum, "Graph approximations to geodesics on
embedded manifolds," 2000.
[16] S. T. Roweis and L. K. Saul, "Nonlinear dimensionality reduction by locally linear embedding," Science,
vol. 290, pp. 2323–2326, 2000.
[17] C. McDiarmid, "On the method of bounded differences," in Surveys in Combinatorics. Cambridge
University Press, 1989, pp. 148–188.
[18] L. Devroye, L. Györfi, and G. Lugosi, A Probabilistic Theory of Pattern Recognition. Springer-Verlag
New York, Inc., 1996.
[19] "Benchmark repository." [Online]. Available: http://ida.first.fhg.de/projects/bench/benchmarks.htm
[20] A. Asuncion and D. J. Newman, "UCI machine learning repository," 2007. [Online]. Available:
http://www.ics.uci.edu/~mlearn/MLRepository.html
Unsupervised Feature Selection for the
k-means Clustering Problem
Christos Boutsidis
Department of Computer Science
Rensselaer Polytechnic Institute
Troy, NY 12180
[email protected]
Michael W. Mahoney
Department of Mathematics
Stanford University
Stanford, CA 94305
[email protected]
Petros Drineas
Department of Computer Science
Rensselaer Polytechnic Institute
Troy, NY 12180
[email protected]
Abstract
We present a novel feature selection algorithm for the k-means clustering problem.
Our algorithm is randomized and, assuming an accuracy parameter ε ∈ (0, 1),
selects and appropriately rescales in an unsupervised manner Θ(k log(k/ε)/ε^2)
features from a dataset of arbitrary dimensions. We prove that, if we run any
γ-approximate k-means algorithm (γ ≥ 1) on the features selected using our
method, we can find a (1 + (1 + ε)γ)-approximate partition with high probability.
1
Introduction
Clustering is ubiquitous in science and engineering, with numerous and diverse application domains,
ranging from bioinformatics and medicine to the social sciences and the web [15]. Perhaps the most
well-known clustering algorithm is the so-called "k-means" algorithm or Lloyd's method [22], an
iterative expectation-maximization type approach, which attempts to address the following objective: given a set of points in a Euclidean space and a positive integer k (the number of clusters), split
the points into k clusters so that the total sum of the (squared Euclidean) distances of each point to
its nearest cluster center is minimized. This optimization objective is often called the k-means clustering objective. (See Definition 1 for a formal discussion of the k-means objective.) The simplicity
of the objective, as well as the good behavior of the associated algorithm (Lloyd's method [22, 28]),
have made k-means enormously popular in applications [32].
In recent years, the high dimensionality of the modern massive datasets has provided a considerable
challenge to k-means clustering approaches. First, the curse of dimensionality can make algorithms
for k-means clustering very slow, and, second, the existence of many irrelevant features may not
allow the identification of the relevant underlying structure in the data [14]. Practitioners addressed
such obstacles by introducing feature selection and feature extraction techniques. It is worth noting that feature selection selects a small subset of actual features from the data and then runs the
clustering algorithm only on the selected features, whereas feature extraction constructs a small set
of artificial features and then runs the clustering algorithm on the constructed features. Despite the
significance of the problem, as well as the wealth of heuristic methods addressing it (see Section
3), there exist no provably accurate feature selection methods and extremely few provably accurate
feature extraction methods for the k-means clustering objective (see Section 3.1 for the latter case).
Our work here addresses this shortcoming by presenting the first provably accurate feature selection
algorithm for k-means clustering. Our algorithm constructs a probability distribution for the feature
space, and then selects a small number of features (roughly k log(k), where k is the number of
clusters) with respect to the computed probabilities. (See Section 2 for a detailed description of
our algorithm.) Then, we argue that running k-means clustering algorithms on the selected features
returns a constant-factor approximate partition to the optimal. (See Theorem 1 in Section 2.)
We now formally define the k-means clustering problem using the so-called cluster indicator matrix.
Also, recall that the Frobenius norm of a matrix (denoted by ||·||_F) is equal to the square root of the
sum of the squares of its elements. (See also Section 4.1 for useful notation.)

Definition 1 [THE K-MEANS CLUSTERING PROBLEM]
Given a matrix A ∈ R^{n×d} (representing n points, i.e. rows, described with respect to d features, i.e.
columns) and a positive integer k denoting the number of clusters, find the n × k indicator matrix
X_opt such that

    X_opt = arg min_{X ∈ 𝒳} ||A − X X^T A||_F^2.    (1)

The optimal value of the k-means clustering objective is

    F_opt = min_{X ∈ 𝒳} ||A − X X^T A||_F^2 = ||A − X_opt X_opt^T A||_F^2.    (2)

In the above, 𝒳 denotes the set of all n × k indicator matrices X.
We briefly expand on the notion of an n × k indicator matrix X. Such matrices have exactly one
non-zero element per row, which denotes cluster membership. Equivalently, for all i = 1, . . . , n and
j = 1, . . . , k, the i-th row (point) of A belongs to the j-th cluster if and only if X_ij is non-zero;
in particular X_ij = 1/√s_j, where s_j is the number of points in the corresponding cluster (i.e. the
number of non-zero elements in the j-th column of X). Note that the columns of X are normalized
and pairwise orthogonal so that their Euclidean norm is equal to one, and X^T X = I_k, where I_k
is the k × k identity matrix. An example of such an indicator matrix X representing three points
(rows in X) belonging to two different clusters (columns in X) is given below; note that the points
corresponding to the first two rows of X belong to the first cluster (s_1 = 2) and the other point to
the second cluster (s_2 = 1):

        [ 1/√2    0   ]
    X = [ 1/√2    0   ] .
        [  0     1/√1 ]
The above definition of the k-means objective is exactly equivalent with the standard definition of
k-means clustering [28]. To see this, notice that ||A − X X^T A||_F^2 = Σ_{i=1}^n ||A_(i) − X_(i) X^T A||_2^2,
while for i = 1, ..., n, X_(i) X^T A denotes the centroid of the cluster the i-th point belongs to. In the
above, A_(i) and X_(i) denote the i-th rows of A and X, respectively.
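The objective in Definition 1 is easy to evaluate directly. As an illustration (not part of the paper; the function names are our own), here is a minimal NumPy sketch that builds the normalized indicator matrix X from a vector of cluster labels and computes ||A − X X^T A||_F^2:

```python
import numpy as np

def indicator_matrix(labels, k):
    """Build the n x k normalized cluster indicator matrix X of Definition 1:
    X[i, j] = 1/sqrt(s_j) if point i is in cluster j, so that X^T X = I_k."""
    X = np.zeros((len(labels), k))
    for j in range(k):
        members = np.flatnonzero(labels == j)
        if len(members) > 0:
            X[members, j] = 1.0 / np.sqrt(len(members))
    return X

def kmeans_objective(A, labels, k):
    """||A - X X^T A||_F^2: total squared distance of points to their centroids."""
    X = indicator_matrix(labels, k)
    return np.linalg.norm(A - X @ (X.T @ A), 'fro') ** 2

# Tiny check: two well-separated pairs of points in R^2. Each cluster centroid
# is the mean of its two points, so the objective is 4 * 0.5^2, i.e. about 1.
A = np.array([[0.0, 0.0], [0.0, 1.0], [10.0, 0.0], [10.0, 1.0]])
labels = np.array([0, 0, 1, 1])
print(kmeans_objective(A, labels, 2))   # approximately 1.0
```

Note that X @ (X.T @ A) replaces each row of A by the centroid of its cluster, exactly as stated in the text above.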
2
The feature selection algorithm and the quality-of-clustering results
Algorithm 1 takes as inputs the matrix A ∈ R^{n×d}, the number of clusters k, and an accuracy
parameter ε ∈ (0, 1). It first computes the top-k right singular vectors of A (columns of V_k ∈ R^{d×k}).
Using these vectors, it computes the so-called (normalized) leverage scores [4, 24]; for i = 1, ..., d
the i-th leverage score equals the square of the Euclidean norm of the i-th row of V_k (denoted by
(V_k)_(i)). The i-th leverage score characterizes the importance of the i-th feature with respect to the
k-means objective. Notice that these scores (see the definition of the p_i's in step 2 of Algorithm 1) form
a probability distribution over the columns of A, since Σ_{i=1}^d p_i = 1. Then, the algorithm chooses
a sampling parameter r that is equal to the number of (rescaled) features that we want to select.
In order to prove our theoretical bounds, r should be fixed to r = Θ(k log(k/ε)/ε^2) at this step
(see Section 4.4). In practice though, a small value of r, for example r = 10k, seems sufficient (see
Section 5). Having r fixed, Algorithm 1 performs r i.i.d. random trials where in each trial one column
of A is selected by the following random process: we throw a biased die with d faces, with each face
corresponding to a column of A, where for i = 1, ..., d the i-th face occurs with probability p_i. We
select the column of A that corresponds to the face we threw in the current trial. Finally, note that the
running time of Algorithm 1 is dominated by the time required to compute the top-k right singular
vectors of the matrix A, which is at most O(min{nd^2, n^2 d}).
Input: n × d matrix A (n points, d features), number of clusters k, parameter ε ∈ (0, 1).
1. Compute the top-k right singular vectors of A, denoted by V_k ∈ R^{d×k}.
2. Compute the (normalized) leverage scores p_i, for i = 1, . . . , d:
       p_i = ||(V_k)_(i)||_2^2 / k.
3. Fix a sampling parameter r = Θ(k log(k/ε)/ε^2).
4. For t = 1, . . . , r i.i.d. random trials:
   - keep the i-th feature with probability p_i and multiply it by the factor (r p_i)^{-1/2}.
5. Return the n × r matrix Ã containing the selected (rescaled) features.
Output: n × r matrix Ã, with r = Θ(k log(k/ε)/ε^2).
Algorithm 1: A randomized feature selection algorithm for the k-means clustering problem.
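Algorithm 1 is short enough to sketch in a few lines of NumPy. The sketch below is our own illustration (the function name and the choice r = 30 are ours, not the paper's); it samples r columns i.i.d. from the leverage-score distribution and rescales each by (r p_i)^{-1/2}:

```python
import numpy as np

def leverage_score_feature_selection(A, k, r, rng=None):
    """Sketch of Algorithm 1: sample r (rescaled) columns of A by leverage scores.

    Returns the n x r matrix of selected, rescaled features. For the theoretical
    guarantee r should be Theta(k log(k/eps)/eps^2); in practice r = 10k often
    suffices (Section 5)."""
    rng = np.random.default_rng(rng)
    # Top-k right singular vectors of A (columns of Vk).
    _, _, Vt = np.linalg.svd(A, full_matrices=False)
    Vk = Vt[:k].T                       # d x k
    p = np.sum(Vk ** 2, axis=1) / k     # normalized leverage scores, sum to 1
    # r i.i.d. trials: pick column i w.p. p_i, rescale by 1/sqrt(r * p_i).
    idx = rng.choice(A.shape[1], size=r, p=p)
    return A[:, idx] / np.sqrt(r * p[idx])

A = np.random.default_rng(0).standard_normal((50, 200))
A_tilde = leverage_score_feature_selection(A, k=3, r=30, rng=1)
print(A_tilde.shape)   # (50, 30)
```

The dominant cost is the truncated SVD, matching the O(min{nd^2, n^2 d}) bound stated above.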
In order to theoretically evaluate the accuracy of our feature selection algorithm, and provide some
a priori guarantees regarding the quality of the clustering after feature selection is performed, we
chose to report results on the optimal value of the k-means clustering objective (the Fopt of Definition 1). This metric of accuracy has been extensively used in the Theoretical Computer Science
community in order to analyze approximation algorithms for the k-means clustering problem. In
particular, existing constant factor or relative error approximation algorithms for k-means (see, for
example, [21, 1] and references therein) invariably approximate Fopt .
Obviously, Algorithm 1 does not return a partition of the rows of A. In a practical setting, it would
be employed as a preprocessing step. Then, an approximation algorithm for the k-means clustering
problem would be applied on Ã in order to determine the partition of the rows of A. In order to
formalize our discussion, we borrow a definition from the approximation algorithms literature.
Definition 2 [K-MEANS APPROXIMATION ALGORITHM]
An algorithm is a "γ-approximation" for the k-means clustering problem (γ ≥ 1) if it takes inputs
A and k, and returns an indicator matrix X_γ that satisfies with probability at least 1 − δ_γ,

    ||A − X_γ X_γ^T A||_F^2 ≤ γ min_{X ∈ 𝒳} ||A − X X^T A||_F^2.    (3)

In the above, δ_γ ∈ [0, 1) is the failure probability of the algorithm.
Clearly, when γ = 1, X_γ is the optimal partition, which is a well-known NP-hard objective. If
we allow γ > 1, then many approximation algorithms exist in the literature. For example, the work
of [21] achieves γ = 1 + ε', for some ε' ∈ (0, 1], in time linear in the size of the input. Similarly,
the k-means++ method of [1] achieves γ = O(log(k)) using the popular Lloyd's algorithm and a
sophisticated randomized seeding. Theorem 1 (see Section 4 for its proof) is our main quality-of-approximation result for our feature selection algorithm.
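For concreteness, the randomized seeding of k-means++ [1] mentioned above can be sketched as follows (our own illustration; it picks each new center with probability proportional to the squared distance to the nearest already-chosen center, so-called D^2 sampling):

```python
import numpy as np

def kmeans_pp_seeding(A, k, rng=None):
    """Sketch of the k-means++ seeding of [1]: D^2 sampling of k initial
    centers from the rows of A."""
    rng = np.random.default_rng(rng)
    n = A.shape[0]
    centers = [A[rng.integers(n)]]
    for _ in range(k - 1):
        # Squared distance of every point to its nearest chosen center.
        d2 = np.min([((A - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(A[rng.choice(n, p=d2 / d2.sum())])
    return np.array(centers)

A = np.random.default_rng(0).standard_normal((100, 4))
C = kmeans_pp_seeding(A, k=3, rng=1)
print(C.shape)   # (3, 4)
```

Running Lloyd's iterations from such a seeding yields the O(log k) expected approximation guarantee of [1].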
Theorem 1 Let the n × d matrix A and the positive integer k be the inputs of the k-means clustering
problem. Let ε ∈ (0, 1), and run Algorithm 1 with inputs A, k, and ε in order to construct the n × r
matrix Ã containing the selected features, where r = Θ(k log(k/ε)/ε^2).
If we run any γ-approximation algorithm (γ ≥ 1) for the k-means clustering problem, whose failure probability is δ_γ, on inputs Ã and k, the resulting cluster indicator matrix X̃_γ satisfies with
probability at least 0.5 − δ_γ,

    ||A − X̃_γ X̃_γ^T A||_F^2 ≤ (1 + (1 + ε)γ) min_{X ∈ 𝒳} ||A − X X^T A||_F^2.    (4)

The failure probability of the above theorem can be easily reduced using standard boosting methods.
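Theorem 1 suggests a simple experimental sanity check (our own illustration, not the paper's code): cluster once on the full matrix A and once on the sampled matrix Ã, and compare the k-means objective of both partitions, each evaluated on the full matrix A. The stand-in clustering routine below uses deterministic farthest-first seeding rather than a proven γ-approximation algorithm.

```python
import numpy as np

def lloyd(A, k, iters=20):
    """Bare-bones Lloyd's heuristic with deterministic farthest-first seeding
    (a stand-in for the gamma-approximation algorithm of Definition 2)."""
    idx = [0]
    for _ in range(k - 1):
        d2 = np.min(((A[:, None, :] - A[idx][None, :, :]) ** 2).sum(axis=2), axis=1)
        idx.append(int(d2.argmax()))
    centers = A[idx].astype(float)
    for _ in range(iters):
        d2 = ((A[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = A[labels == j].mean(axis=0)
    return labels

def cost(A, labels, k):
    """The k-means objective ||A - X X^T A||_F^2 for the given partition."""
    return sum(((A[labels == j] - A[labels == j].mean(axis=0)) ** 2).sum()
               for j in range(k) if np.any(labels == j))

# Synthetic data: three well-separated Gaussian blobs in 40 dimensions.
rng = np.random.default_rng(0)
k, n_per, d = 3, 20, 40
means = 10 * rng.standard_normal((k, d))
A = np.vstack([m + rng.standard_normal((n_per, d)) for m in means])

# Cluster on r = 10k leverage-score-sampled features; evaluate both
# partitions on the full matrix A, as Theorem 1 prescribes.
_, _, Vt = np.linalg.svd(A, full_matrices=False)
p = np.sum(Vt[:k].T ** 2, axis=1) / k
cols = rng.choice(d, size=10 * k, p=p)
A_tilde = A[:, cols] / np.sqrt(10 * k * p[cols])

cost_full = cost(A, lloyd(A, k), k)
cost_selected = cost(A, lloyd(A_tilde, k), k)
print(cost_full, cost_selected)
```

On well-separated data the two costs are comparable, as the theorem predicts for a small constant ε.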
3
Related work
Feature selection has received considerable attention in the machine learning and data mining communities. A large number of different techniques appeared in prior work, addressing the feature
selection within the context of both clustering and classification. Surveys include [13], as well
as [14], which reports the results of the NIPS 2003 challenge in feature selection. Popular feature
selection techniques include the Laplacian scores [16], the Fisher scores [9], or the constraint scores
[33]. In this section, we opt to discuss only a family of feature selection methods that are closely
related to the leverage scores of our algorithm. To the best of our knowledge, all previous feature
selection methods come with no theoretical guarantees of the form that we describe here.
Given as input an n × d object-feature matrix A and a positive integer k, feature selection for Principal Components Analysis (PCA) corresponds to the task of identifying a subset of k columns from
A that capture essentially the same information as do the top k principal components of A. Jolliffe [18] surveys various methods for the above task. Four of them (called B1, B2, B3, and B4
in [18]) employ the Singular Value Decomposition of A in order to identify columns that are somehow correlated with its top k left singular vectors. In particular, B3 employs exactly the leverage
scores in order to greedily select the k columns corresponding to the highest scores; no theoretical
results are reported. An experimental evaluation of the methods of [18] on real datasets appeared
in [19]. Another approach employing the matrix of the top k right singular vectors of A and a
Procrustes-type criterion appeared in [20]. From an applications perspective, [30] employed the
methods of [18] and [20] for gene selection in microarray data analysis. From a complementary
viewpoint, feature selection for clustering seeks to identify those features that have the most discriminative power among the set of all features. Continuing the aforementioned line of research,
many recent papers present methods that somehow employ the SVD of the input matrix in order
to select discriminative features; see, for example, [23, 5, 25, 26]. Finally, note that employing
the leverage scores in a randomized manner similar to Algorithm 1 has already been proven to be
accurate for least-squares regression [8] and PCA [7, 2].
3.1 Connections with the SVD

A well-known property connects the SVD of a matrix and k-means clustering. Recall Definition
1, and notice that X_opt X_opt^T A is a matrix of rank at most k. From the SVD optimality [11], we
immediately get that (see Section 4.1 for useful notation)

    ||A_{ρ−k}||_F^2 = ||A − A_k||_F^2 ≤ ||A − X_opt X_opt^T A||_F^2 = F_opt.    (5)
A more interesting connection between the SVD and k-means appeared in [6]. If the n × d matrix
A is projected on the subspace spanned by its top k left singular vectors, then the resulting n × k
matrix Â = U_k Σ_k corresponds to a mapping of the original d-dimensional space to the optimal
k-dimensional space. This process is equivalent to feature extraction: the top k left singular vectors
(the columns of U_k) correspond to the constructed features (Σ_k is a simple rescaling operator).
Prior to the work of [6], it was empirically known that running k-means clustering algorithms on
the low-dimensional matrix Â was a viable alternative to clustering the high-dimensional matrix A.
The work of [6] formally argued that if we let the cluster indicator matrix X̂_opt denote the optimal
k-means partition on Â, i.e.,

    X̂_opt = arg min_{X ∈ 𝒳} ||Â − X X^T Â||_F^2,    (6)

then using this partition on the rows of the original matrix A is a 2-approximation to the optimal
partition, a.k.a.,

    ||A − X̂_opt X̂_opt^T A||_F^2 ≤ 2 min_{X ∈ 𝒳} ||A − X X^T A||_F^2.    (7)
The above result is the starting point of our work here. Indeed, we seek to replace the k artificial
features that are extracted via the SVD with a small number (albeit slightly larger than k) of actual
features. On the positive side, an obvious advantage of feature selection vs. feature extraction is
the immediate interpretability of the former. On the negative side, our approximation accuracy is
slightly worse (2 + ε, see Theorem 1 with γ = 1) and we need slightly more than k features.
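The SVD-based feature extraction of [6], and the optimality property behind eqn. (5), can be checked numerically. The following sketch (our own illustration) verifies that ||A − A_k||_F^2 lower-bounds the objective of an arbitrary indicator matrix X, since X X^T A has rank at most k:

```python
import numpy as np

# Feature extraction via the truncated SVD, as in [6]: A_hat = U_k * Sigma_k
# maps the n points from d dimensions down to k dimensions.
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 5))
k = 2
U, s, Vt = np.linalg.svd(A, full_matrices=False)
A_hat = U[:, :k] * s[:k]        # n x k matrix of extracted features
A_k = A_hat @ Vt[:k]            # best rank-k approximation of A

# Eqn. (5): ||A - A_k||_F^2 <= ||A - X X^T A||_F^2 for every indicator
# matrix X, because X X^T A has rank at most k.
labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
X = np.zeros((8, 2))
for j in range(2):
    rows = np.flatnonzero(labels == j)
    X[rows, j] = 1.0 / np.sqrt(len(rows))
lhs = np.linalg.norm(A - A_k, 'fro') ** 2
rhs = np.linalg.norm(A - X @ (X.T @ A), 'fro') ** 2
print(lhs <= rhs)   # True, by SVD optimality
```

The same inequality holds for any choice of labels, which is exactly the content of eqn. (5).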
4
The proof of Theorem 1
This section gives the proof of Theorem 1. We start by introducing useful notation; then, we present
a preliminary lemma and the proof itself.
4.1 Notation

Given an n × d matrix A, let U_k ∈ R^{n×k} (resp. V_k ∈ R^{d×k}) be the matrix of the top k left (resp.
right) singular vectors of A, and let Σ_k ∈ R^{k×k} be a diagonal matrix containing the top k singular
values of A. If we let ρ be the rank of A, then A_{ρ−k} is equal to A − A_k, with A_k = U_k Σ_k V_k^T.
||A||_F and ||A||_2 denote the Frobenius and the spectral norm of a matrix A, respectively. A^+
denotes the pseudo-inverse of A, and ||A^+||_2 = σ_max(A^+) = 1/σ_min(A), where σ_max(X) and
σ_min(X) denote the largest and the smallest non-zero singular values of a matrix X, respectively.
A useful property of matrix norms is that for any two matrices X and Y, ||XY||_F ≤ ||X||_F ||Y||_2
and ||XY||_F ≤ ||X||_2 ||Y||_F; this is a stronger version of the standard submultiplicativity property
for matrix norms. We call P a projector matrix if it is square and P^2 = P. We use E[y] to take the
expectation of a random variable y and Pr[e] to take the probability of a random event e. Finally, we
abbreviate "independent identically distributed" to "i.i.d." and "with probability" to "w.p.".
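The two norm properties quoted above are easy to sanity-check numerically (our own illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 6))
Y = rng.standard_normal((6, 3))
fro = lambda M: np.linalg.norm(M, 'fro')
spec = lambda M: np.linalg.norm(M, 2)

# Strengthened submultiplicativity:
# ||XY||_F <= ||X||_F ||Y||_2 and ||XY||_F <= ||X||_2 ||Y||_F.
assert fro(X @ Y) <= fro(X) * spec(Y) + 1e-12
assert fro(X @ Y) <= spec(X) * fro(Y) + 1e-12

# A projector P (here, the orthogonal projection onto a random 2-dimensional
# column space) satisfies P @ P = P and never increases the Frobenius norm.
Q, _ = np.linalg.qr(rng.standard_normal((6, 2)))
P = Q @ Q.T
assert np.allclose(P @ P, P)
assert fro(P @ Y) <= fro(Y) + 1e-12
print("norm properties verified")
```

Both facts are used repeatedly in the proofs of Sections 4.3 and 4.4, typically to drop a projector or a V_k^T factor without increasing a norm.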
4.2 Sampling and rescaling matrices

We introduce a simple matrix formalism in order to conveniently represent the sampling and rescaling processes of Algorithm 1. Let S be a d × r sampling matrix that is constructed as follows: S
is initially empty. For all t = 1, . . . , r, in turn, if the i-th feature of A is selected by the random
sampling process described in Algorithm 1, then e_i (a column vector of all zeros, except for its i-th
entry, which is set to one) is appended to S. Also, let D be an r × r diagonal rescaling matrix constructed as follows: D is initially an all-zeros matrix. For all t = 1, . . . , r, in turn, if the i-th feature
of A is selected, then the next diagonal entry of D is set to 1/√(r p_i). Thus, by using the notation of
this paragraph, Algorithm 1 outputs the matrix Ã = ASD ∈ R^{n×r}.
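The matrices S and D are straightforward to materialize. The sketch below (our own, with uniform p_i purely for illustration) constructs them and confirms that A S D has the promised n × r shape:

```python
import numpy as np

def sampling_and_rescaling(d, r, p, rng=None):
    """Build the d x r sampling matrix S and the r x r diagonal rescaling
    matrix D of Section 4.2, so that A @ S @ D reproduces Algorithm 1."""
    rng = np.random.default_rng(rng)
    idx = rng.choice(d, size=r, p=p)      # r i.i.d. trials
    S = np.zeros((d, r))
    S[idx, np.arange(r)] = 1.0            # column t of S is e_{idx[t]}
    D = np.diag(1.0 / np.sqrt(r * p[idx]))
    return S, D

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 8))
p = np.full(8, 1 / 8)                     # uniform scores, for illustration only
S, D = sampling_and_rescaling(8, 4, p, rng=1)
A_tilde = A @ S @ D                       # the n x r output of Algorithm 1
print(A_tilde.shape)                      # (5, 4)
```

In practice one would use the leverage scores of Algorithm 1 for p rather than the uniform distribution shown here.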
4.3 A preliminary lemma and sufficient conditions

Lemma 1, presented below, gives upper and lower bounds for the largest and the smallest singular
values of the matrix V_k^T SD, respectively. This also implies that V_k^T SD has full rank. Finally, it
argues that the matrix ASD can be used to provide a very accurate approximation to the matrix A_k.
Lemma 1 provides four sufficient conditions for designing provably accurate feature selection algorithms for k-means clustering. To see this, notice that, in the proof of eqn. (4) given below, the
results of Lemma 1 are sufficient to prove our main theorem; the rest of the arguments apply to all
sampling and rescaling matrices S and D. Any feature selection algorithm, i.e. any sampling matrix
S and rescaling matrix D, that satisfies bounds similar to those of Lemma 1, can be employed to
design a provably accurate feature selection algorithm for k-means clustering. The quality of such
an approximation will be proportional to the tightness of the bounds on the three terms of Lemma
1 (||V_k^T SD||_2, ||(V_k^T SD)^+||_2, and ||E||_F). Where no rescaling is allowed in the selected features,
the bottleneck in the approximation accuracy of a feature selection algorithm would be to find a
sampling matrix S such that only ||(V_k^T S)^+||_2 is bounded from above. To see this, notice that, in
Lemma 1, for any S, ||V_k^T S||_2 ≤ 1, and (after applying the submultiplicativity property of Section
4.1 in eqn. (13)) ||E||_F ≤ ||(V_k^T S)^+||_2 ||A − A_k||_F. It is worth emphasizing that the same factor
||(V_k^T S)^+||_2 appeared to be the bottleneck in the design of provably accurate column-based low-rank approximations (see, for example, Theorem 1.5 in [17] and eqn. (3.19) in [12]). It is evident
from the above observations that other column sampling methods (see, for example, [17, 3, 2] and
references therein), satisfying similar bounds to those of Lemma 1, immediately suggest themselves
for the design of provably accurate feature selection algorithms for k-means clustering. Finally,
equations (101) and (102) of Lemma 4.4 in [31] suggest that a sub-sampled randomized Fourier
transform can be used for the design of a provably accurate feature extraction algorithm for k-means
clustering, since they provide bounds similar to those of Lemma 1 by replacing the matrices S and
D of our algorithm with a sub-sampled randomized Fourier transform matrix (see the matrix R of
eqn. (6) in [31]).
Lemma 1 Assume that the sampling matrix S and the rescaling matrix D are constructed using
Algorithm 1 (see also Section 4.2) with inputs A, k, and ε ∈ (0, 1). Let c_0 and c_1 be absolute
constants that will be specified later. If the sampling parameter r of Algorithm 1 satisfies

    r ≥ 2 c_1 c_0^2 k log(c_1 c_0^2 k / ε^2) / ε^2,

then all four statements below hold together with probability at least 0.5:

1. ||V_k^T SD||_2 = σ_max(V_k^T SD) ≤ √(1 + ε̃).
2. ||(V_k^T SD)^+||_2 = 1/σ_min(V_k^T SD) ≤ 1/√(1 − ε̃).
3. V_k^T SD is a full rank matrix, i.e. rank(V_k^T SD) = k.
4. A_k = (ASD)(V_k^T SD)^+ V_k^T + E, with ||E||_F ≤ ε̂ ||A − A_k||_F.

To simplify notation, we set ε̃ = ε √(36/c_1) and ε̂ = √(6/(2 c_1 c_0^2 log(c_1 c_0^2 k/ε^2))) + √(6 ε^2/(1 − ε̃)).
Proof: First, we will apply Theorem 3.1 of [29] for an appropriate random vector y. Toward that end,
for i = 1, ..., d, the i-th column of the matrix V_k^T is denoted by (V_k^T)^(i). We define the random vector
y ∈ R^k as follows: for i = 1, ..., d, Pr[y = y_i] = p_i, where y_i = (1/√p_i)(V_k^T)^(i) is a realization of
y. This definition of y and the definition of the sampling and rescaling matrices S and D imply that

    V_k^T SDDS^T V_k = (1/r) Σ_{t=1}^r y_t y_t^T.

Our choice of p_i = ||(V_k^T)^(i)||^2 / k implies that ||y||^2 ≤ k. Note also that

    E[y y^T] = Σ_{i=1}^d p_i ((1/√p_i)(V_k^T)^(i)) ((1/√p_i)(V_k^T)^(i))^T = V_k^T V_k = I_k.

Obviously, ||E[y y^T]||_2 = 1. Our choice of r allows us to apply Theorem 3.1 of [29],
which, combined with Markov's inequality on the random variable z = ||V_k^T SDDS^T V_k − I_k||_2,
implies that w.p. at least 1 − 1/6,

    ||V_k^T SDDS^T V_k − I_k||_2 ≤ √(6 c_0 k log(r)/r),

for a sufficiently large (unspecified in [29]) constant c_0. Standard matrix perturbation theory results [11] imply that for i = 1, ..., k,

    |σ_i^2(V_k^T SD) − 1| ≤ ||V_k^T SDDS^T V_k − I_k||_2 ≤ √(6 c_0 k log(r)/r).

Our choice of r and simple algebra suffice to show that log(r)/r ≤ ε^2/(c_1 c_0^2 k), which implies that
the first two statements of the Lemma hold w.p. at least 5/6. To prove the third statement, we
only need to show that the k-th singular value of V_k^T SD is positive. Our choice of ε ∈ (0, 1) and
the second condition of the Lemma imply that σ_k(V_k^T SD) > 0. To prove the fourth statement:

    ||A_k − ASD(V_k^T SD)^+ V_k^T||_F
        = ||A_k − A_k SD(V_k^T SD)^+ V_k^T − A_{ρ−k} SD(V_k^T SD)^+ V_k^T||_F    (8)
        ≤ ||A_k − A_k SD(V_k^T SD)^+ V_k^T||_F + ||A_{ρ−k} SD(V_k^T SD)^+ V_k^T||_F;    (9)

denote the two terms of eqn. (9) by θ_1 and θ_2, respectively. In the above, in eqn. (8) we replaced
A by A_k + A_{ρ−k}, and in eqn. (9) we used the triangle inequality. The first term of eqn. (9) is bounded by

    θ_1 = ||A_k − U_k Σ_k V_k^T SD(V_k^T SD)^+ V_k^T||_F    (10)
        = ||A_k − U_k Σ_k I_k V_k^T||_F = 0.    (11)

In the above, in eqn. (10) we replaced A_k by U_k Σ_k V_k^T, and in eqn. (11) we set
(V_k^T SD)(V_k^T SD)^+ = I_k, since V_k^T SD is a rank-k matrix w.p. at least 5/6. The second term of
eqn. (9) is bounded by

    θ_2 = ||U_{ρ−k} Σ_{ρ−k} V_{ρ−k}^T SD(V_k^T SD)^+ V_k^T||_F    (12)
        ≤ ||Σ_{ρ−k} V_{ρ−k}^T SD(V_k^T SD)^+||_F.    (13)

In the above, in eqn. (12) we replaced A_{ρ−k} by U_{ρ−k} Σ_{ρ−k} V_{ρ−k}^T, and in eqn. (13) U_{ρ−k} and V_k^T can
be dropped without increasing a unitarily invariant norm such as the Frobenius matrix norm. If the
first three statements of the lemma hold w.p. at least 5/6, then w.p. at least 1 − 1/3,

    ||Σ_{ρ−k} V_{ρ−k}^T SD(V_k^T SD)^+||_F ≤ ( √(6/(2 c_1 c_0^2 log(c_1 c_0^2 k/ε^2))) + √(6 ε^2/(1 − ε̃)) ) ||A − A_k||_F.

(The proof of this last argument is omitted from this extended abstract.) Finally, notice that the first
three statements have the same failure probability 1/6 and the fourth statement fails w.p. 1/3; the
union bound implies that all four statements hold together with probability at least 0.5. □
4.4 The proof of eqn. (4) of Theorem 1

We assume that Algorithm 1 fixes r to the value specified in Lemma 1; note that this does not violate
the asymptotic notation used in Algorithm 1. We start by manipulating the term ||A − X̃_γ X̃_γ^T A||_F^2
in eqn. (4). Replacing A by A_k + A_{ρ−k}, and using the Pythagorean theorem (the subspaces spanned
by the components A_k − X̃_γ X̃_γ^T A_k and A_{ρ−k} − X̃_γ X̃_γ^T A_{ρ−k} are perpendicular), we get

    ||A − X̃_γ X̃_γ^T A||_F^2 = ||(I − X̃_γ X̃_γ^T) A_k||_F^2 + ||(I − X̃_γ X̃_γ^T) A_{ρ−k}||_F^2;    (14)

denote the first term of eqn. (14) by θ_3^2 and the second by θ_4^2.
We first bound the second term of eqn. (14). Since I − X̃_γ X̃_γ^T is a projector matrix, it can be dropped
without increasing a unitarily invariant norm. Now eqn. (5) implies that

    θ_4^2 ≤ F_opt.    (15)

We now bound the first term of eqn. (14):

    θ_3 ≤ ||(I − X̃_γ X̃_γ^T) ASD(V_k^T SD)^+ V_k^T||_F + ||E||_F    (16)
        ≤ ||(I − X̃_γ X̃_γ^T) ASD||_F ||(V_k^T SD)^+||_2 + ||E||_F    (17)
        ≤ √γ ||(I − X_opt X_opt^T) ASD||_F ||(V_k^T SD)^+||_2 + ||E||_F    (18)
        ≤ √γ ||(I − X_opt X_opt^T) ASD(V_k^T SD)^+||_F ||V_k^T SD||_2 ||(V_k^T SD)^+||_2 + ||E||_F    (19)
        ≤ √γ ||(I − X_opt X_opt^T) ASD(V_k^T SD)^+ V_k^T||_F ||V_k^T SD||_2 ||(V_k^T SD)^+||_2 + ||E||_F;    (20)

denote the Frobenius-norm term of eqn. (20) by θ_5.
In eqn. (16) we used Lemma 1, the triangle inequality, and the fact that I − X̃_γ X̃_γ^T is a projector
matrix and can be dropped without increasing a unitarily invariant norm. In eqn. (17) we used
submultiplicativity (see Section 4.1) and the fact that V_k^T can be dropped without changing the
spectral norm. In eqn. (18) we replaced X̃_γ by X_opt and the factor √γ appeared in the first term. To
better understand this step, notice that X̃_γ gives a γ-approximation to the optimal k-means clustering
of the matrix ASD, and any other n × k indicator matrix (for example, the matrix X_opt) satisfies

    ||(I − X̃_γ X̃_γ^T) ASD||_F^2 ≤ γ min_{X ∈ 𝒳} ||(I − X X^T) ASD||_F^2 ≤ γ ||(I − X_opt X_opt^T) ASD||_F^2.

In eqn. (19) we first introduced the matrix (V_k^T SD)^+ (V_k^T SD) (using rank(V_k^T SD) = k) and then
we used submultiplicativity (see Section 4.1). In eqn. (20) we introduced V_k^T without changing the
Frobenius norm. We further manipulate the term θ_5 of eqn. (20):

    θ_5 ≤ ||(I − X_opt X_opt^T) A_k||_F + ||(I − X_opt X_opt^T) E||_F    (21)
        ≤ ||(I − X_opt X_opt^T) A V_k V_k^T||_F + ||E||_F    (22)
        ≤ (1 + ε̂) √F_opt.    (23)

In eqn. (21) we used Lemma 1 and the triangle inequality. In eqn. (22) we replaced A_k by A V_k V_k^T
and dropped I − X_opt X_opt^T from the second term (I − X_opt X_opt^T is a projector matrix and does not
increase the Frobenius norm). In eqn. (23) we dropped the projector matrix V_k V_k^T and used eqn. (5)
and Definition 1. Combining equations (20), (23), (5), Lemma 1, and the fact that γ ≥ 1, we get

    θ_3 ≤ √γ ( √((1 + ε̃)/(1 − ε̃)) (1 + ε̂) + ε̂ ) √F_opt;

denote the parenthesized factor by θ_6. Simple algebra suffices to show that for any ε ∈ (0, 1), for any positive
integer k ≥ 1, and for some sufficiently large constant c_1, it is

    θ_6 ≤ √(1 + ε),

thus

    θ_3^2 ≤ γ(1 + ε) F_opt.    (24)

Combining eqn. (24) with eqns. (14) and (15) concludes the proof of eqn. (4). Using asymptotic
notation, our choice of r satisfies r = Θ(k log(k/ε)/ε^2). Note that Theorem 1 fails only if Lemma 1
or the γ-approximation k-means clustering algorithm fail, which happens w.p. at most 0.5 + δ_γ.
             |  r = 5k   |  r = 10k  |  r = 20k  |    All
             |  P    F   |  P    F   |  P    F   |  P    F
NIPS (k = 3) | .847 .758 | .847 .751 | .859 .749 | .881 .747
Bio  (k = 3) | .742 .764 | .935 .726 |  1   .709 |  1   .709

Table 1: Numerics from our experiments (Leverage scores).
[Plot omitted: leverage scores of all 6314 NIPS features (x-axis: features 0-7000; y-axis: leverage scores 0-0.08), with the best set of r = 30 features highlighted; high-scoring terms include ruyter, hand, information, code, universality, sources, tishby, naftali, neural, center, and hebrew.]
Figure 1: Leverage scores for the NIPS dataset.
5
Empirical study
We present an empirical evaluation of Algorithm 1 on two real datasets. We show that it selects the
most relevant features (Figure 1) and that the clustering obtained after feature selection is performed
is very accurate (Table 1). It is important to note that the choice of r in the description of Algorithm
1 is a sufficient, not necessary, condition to prove our theoretical bounds. Indeed, a much smaller
choice of r, for example r = 10k, is often sufficient for good empirical results.

We first experimented with a NIPS documents dataset (see http://robotics.stanford.edu/~gal/
and [10]). The data consist of a 184 × 6314 document-term matrix A, with A_ij denoting the number of occurrences of the j-th term in the i-th document. Each document is a paper
that appeared in the proceedings of NIPS 2001, 2002, or 2003, and belongs to one of the following
three topic categories: (i) Neuroscience, (ii) Learning Theory, and (iii) Control and Reinforcement
Learning. Each term appeared at least once in one of the 184 documents. We evaluated the accuracy
of Algorithm 1 by running the Lloyd's heuristic¹ on the rescaled features returned by our method.
In order to drive down the failure probability of Algorithm 1, we repeated it 30 times (followed by
the Lloyd's heuristic each time) and kept the partition that minimized the objective value. We report
the percentage of correctly classified objects (denoted by P, 0 ≤ P ≤ 1), as well as the value of
the k-means objective (i.e., the value F = ||A − X̃_γ X̃_γ^T A||_F^2 / ||A||_F^2 of Theorem 1; the division by
||A||_F^2 is for normalization). Results are depicted in Table 1. Notice that only a small subset of
features suffices to approximately reproduce the partition obtained when all features were kept. In
Figure 1 we plotted the distribution of the leverage scores for the 6314 terms (columns) of A; we
also highlighted the features returned by Algorithm 1 when the sampling parameter r is set to 10k.
We observed that terms corresponding to the largest leverage scores had significant discriminative
power. In particular, ruyter appeared almost exclusively in documents of the first and third categories, hand appeared in documents of the third category, information appeared in documents
of the first category, and code appeared in documents of the second and third categories only. We
also experimented with microarray data showing the expression levels of 5520 genes (features) for
31 patients (objects) having three different cancer types [27]: 10 patients with gastrointestinal stromal tumor, 12 with leiomyosarcoma, and 9 with synovial sarcoma. Table 1 depicts the results from
our experiments by choosing k = 3. Note that the Lloyd's heuristic worked almost perfectly when r
was set to 10k and perfectly when r was set to 20k. Experimental parameters were set to the same values
as in the first experiment.
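The evaluation protocol of this section (repeat the randomized pipeline and keep the partition with the smallest normalized objective F) can be sketched as follows; this is our own illustration, with a trivial random clustering function standing in for Algorithm 1 followed by Lloyd's heuristic:

```python
import numpy as np

def normalized_F(A, labels, k):
    """The metric of Table 1: F = ||A - X X^T A||_F^2 / ||A||_F^2."""
    cost = sum(((A[labels == j] - A[labels == j].mean(axis=0)) ** 2).sum()
               for j in range(k) if np.any(labels == j))
    return cost / (A ** 2).sum()

def best_partition(A, k, cluster_fn, repeats=30, rng=None):
    """Repeat a randomized clustering pipeline `repeats` times and keep the
    partition that minimizes the normalized k-means objective."""
    rng = np.random.default_rng(rng)
    best_f, best_labels = None, None
    for _ in range(repeats):
        labels = cluster_fn(A, k, rng)
        f = normalized_F(A, labels, k)
        if best_f is None or f < best_f:
            best_f, best_labels = f, labels
    return best_f, best_labels

# Illustration with a trivial random "clustering" function; in the paper the
# pipeline is Algorithm 1 followed by Lloyd's heuristic.
data = np.random.default_rng(0).standard_normal((30, 6))
f, labels = best_partition(data, 3,
                           lambda A, k, rng: rng.integers(0, k, size=A.shape[0]))
print(f, labels.shape)
```

Because centering within each cluster can only reduce the sum of squares, the reported F always lies in [0, 1), matching the normalized values in Table 1.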
¹ We ran 30 iterations of the E-M step with 30 different random initializations and returned the partition that
minimized the k-means objective function, i.e. we ran kmeans(A, k, 'Replicates', 30, 'Maxiter', 30) in MatLab.
References
[1] D. Arthur and S. Vassilvitskii. k-means++: the advantages of careful seeding. In Proceedings of the 18th Annual ACM-SIAM Symposium
on Discrete Algorithms (SODA), pages 1027-1035, 2007.
[2] C. Boutsidis, M. W. Mahoney, and P. Drineas. Unsupervised feature selection for Principal Components Analysis. In Proceedings of the
14th Annual ACM SIGKDD Conference (KDD), pages 61-69, 2008.
[3] S. Chandrasekaran and I. Ipsen. On rank-revealing factorizations. SIAM Journal on Matrix Analysis and Applications, 15:592-622,
1994.
[4] S. Chatterjee and A. S. Hadi. Influential observations, high leverage points, and outliers in linear regression. Statistical Science, 1:379-393, 1986.
[5] Y. Cui and J. G. Dy. Orthogonal principal feature selection. Manuscript.
[6] P. Drineas, A. Frieze, R. Kannan, S. Vempala, and V. Vinay. Clustering in large graphs and matrices. In Proceedings of the 10th Annual
ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 291-299, 1999.
[7] P. Drineas, M. Mahoney, and S. Muthukrishnan. Relative-error CUR matrix decompositions. SIAM Journal on Matrix Analysis and
Applications, 30:844-881, 2008.
[8] P. Drineas, M. Mahoney, and S. Muthukrishnan. Sampling algorithms for ℓ2 regression and applications. In Proceedings of the 17th
Annual ACM-SIAM Symposium on Discrete Algorithms (SODA), pages 1127-1136, 2006.
[9] D. Foley and J. W. Sammon. An optimal set of discriminant vectors. IEEE Transactions on Computers, C-24(3):281-289, March 1975.
[10] A. Globerson, G. Chechik, F. Pereira, and N. Tishby. Euclidean embedding of co-occurrence data. The Journal of Machine Learning
Research, 8:2265-2295, 2007.
[11] G. Golub and C. Van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, 1989.
[12] S. A. Goreinov, E. E. Tyrtyshnikov, and N. L. Zamarashkin. A theory of pseudoskeleton approximations. Linear Algebra and Its
Applications, 261:1-21, 1997.
[13] I. Guyon and A. Elisseeff. An introduction to variable and feature selection. Journal of Machine Learning Research, 3:1157-1182, 2003.
[14] I. Guyon, S. Gunn, A. Ben-Hur, and G. Dror. Result analysis of the NIPS 2003 feature selection challenge. In Advances in Neural
Information Processing Systems (NIPS) 17, pages 545-552, 2005.
[15] J. A. Hartigan. Clustering Algorithms. John Wiley & Sons, Inc., New York, NY, USA, 1975.
[16] X. He, D. Cai, and P. Niyogi. Laplacian score for feature selection. In Advances in Neural Information Processing Systems (NIPS) 18,
pages 507-514, 2006.
[17] Y. P. Hong and C. T. Pan. Rank-revealing QR factorizations and the singular value decomposition. Mathematics of Computation,
58:213-232, 1992.
[18] I. Jolliffe. Discarding variables in a principal component analysis. I: Artificial data. Applied Statistics, 21(2):160-173, 1972.
[19] I. Jolliffe. Discarding variables in a principal component analysis. II: Real data. Applied Statistics, 22(1):21-31, 1973.
[20] W. Krzanowski. Selection of variables to preserve multivariate data structure, using principal components. Applied Statistics, 36(1):22-33, 1987.
[21] A. Kumar, Y. Sabharwal, and S. Sen. A simple linear time (1 + ε)-approximation algorithm for k-means clustering in any dimensions.
In Proceedings of the 45th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 454-462, 2004.
[22] S. P. Lloyd. Least squares quantization in PCM. Unpublished Bell Lab. Tech. Note, portions presented at the Institute of Mathematical
Statistics Meeting, Atlantic City, NJ, September 1957. Also, IEEE Trans. Inform. Theory (Special Issue on Quantization), vol. IT-28, pages
129-137, March 1982.
[23] Y. Lu, I. Cohen, X. S. Zhou, and Q. Tian. Feature selection using principal feature analysis. In Proceedings of the 15th International
Conference on Multimedia, pages 301-304, 2007.
[24] M. W. Mahoney and P. Drineas. CUR matrix decompositions for improved data analysis. In Proceedings of the National Academy of
Sciences, USA (PNAS), 106, pages 697-702, 2009.
[25] A. Malhi and R. Gao. PCA-based feature selection scheme for machine defect classification. IEEE Transactions on Instrumentation and
Measurement, 53(6):1517-1525, Dec. 2004.
[26] K. Mao. Identifying critical variables of principal components for unsupervised feature selection. IEEE Transactions on Systems, Man,
and Cybernetics, 35(2):339-344, April 2005.
[27] T. Nielsen et al. Molecular characterisation of soft tissue tumors: A gene expression study. Lancet, 359:1301-1307, 2002.
[28] R. Ostrovsky, Y. Rabani, L. J. Schulman, and C. Swamy. The effectiveness of Lloyd-type methods for the k-means problem. In
Proceedings of the 47th Annual IEEE Symposium on Foundations of Computer Science (FOCS), pages 165-176, 2006.
[29] M. Rudelson and R. Vershynin. Sampling from large matrices: An approach through geometric functional analysis. Journal of the ACM
(JACM), 54(4), July 2007.
[30] A. Wang and E. A. Gehan. Gene selection for microarray data analysis using principal component analysis. Stat Med, 24(13):2069-2087,
July 2005.
[31] F. Woolfe, E. Liberty, V. Rokhlin, and M. Tygert. A fast randomized algorithm for the approximation of matrices. Applied and Computational Harmonic Analysis, 25 (3): 335-366, 2008.
[32] X. Wu et al. Top 10 algorithms in data mining analysis. Knowl. Inf. Syst., 14(1):1?37, 2007.
[33] D. Zhang, S. Chen, and Z.-H. Zhou. Constraint score: A new filter method for feature selection with pairwise constraints. Pattern
Recognition, 41(5):1440?1451, 2008.
9
| 3724 |@word trial:4 version:1 briefly:1 norm:13 seems:1 stronger:1 c0:1 sammon:1 seek:2 decomposition:4 elisseeff:1 euclidian:1 score:20 exclusively:1 denoting:2 document:9 existing:1 atlantic:1 current:1 rpi:4 universality:1 john:2 partition:11 kdd:1 seeding:2 v:1 selected:9 provides:1 boosting:1 zhang:1 mathematical:1 constructed:5 symposium:5 ik:9 viable:1 focs:2 prove:6 paragraph:1 introduce:1 manner:2 theoretically:1 pairwise:2 indeed:2 roughly:1 themselves:1 behavior:1 gastrointestinal:1 actual:2 curse:1 increasing:3 provided:1 xx:8 underlying:1 notation:8 bounded:3 maxiter:1 unspecified:1 dror:1 gal:1 nj:1 guarantee:2 pseudo:1 exactly:3 ostrovsky:1 uk:7 bio:1 control:1 positive:7 engineering:1 dropped:6 sd:46 despite:1 ak:23 approximately:1 chose:1 therein:2 initialization:1 co:4 factorization:2 perpendicular:1 tian:1 practical:1 globerson:1 practice:1 union:1 xopt:28 empirical:3 bell:1 revealing:2 chechik:1 suggest:2 get:3 krzanowski:1 selection:38 operator:1 context:1 applying:1 equivalent:2 projector:5 center:2 attention:1 starting:1 survey:2 simplicity:1 identifying:2 immediately:2 borrow:1 spanned:2 embedding:1 notion:1 resp:2 massive:1 designing:1 element:3 satisfying:1 recognition:1 gunn:1 observed:1 wang:1 capture:1 zamarashkin:1 rescaled:3 highest:1 ran:2 algebra:3 division:1 triangle:3 drineas:6 easily:1 various:1 muthukrishnan:2 fast:1 shortcoming:1 describe:1 artificial:3 choosing:1 whose:1 heuristic:3 stanford:4 larger:1 tightness:1 niyogi:1 statistic:4 transform:2 itself:1 highlighted:1 obviously:2 advantage:2 cai:1 sen:1 relevant:2 combining:2 realization:1 academy:1 description:2 frobenius:5 qr:1 cluster:17 empty:1 mmahoney:1 object:3 avk:2 ben:1 stat:1 nearest:1 lowrank:1 received:1 throw:1 c:3 come:1 implies:6 liberty:1 sabharwal:1 closely:1 filter:1 argued:1 fix:2 suffices:3 preliminary:2 opt:5 hold:4 sufficiently:2 mapping:1 achieves:2 smallest:2 omitted:1 threw:1 knowl:1 largest:3 city:1 clearly:1 zhou:2 vk:25 nd2:1 rank:9 tech:1 
sigkdd:1 centroid:1 greedily:1 membership:1 initially:2 manipulating:1 expand:1 reproduce:1 selects:4 provably:8 tyrtyshnikov:1 issue:1 arg:2 classification:2 among:1 aforementioned:1 denoted:5 priori:1 special:1 submultiplicavity:2 equal:5 construct:3 once:1 extraction:6 having:2 sampling:16 unsupervised:4 minimized:3 report:3 np:1 simplify:1 few:1 employ:3 modern:1 frieze:1 preserve:1 national:1 replaced:5 connects:1 attempt:1 invariably:1 mining:2 multiply:1 evaluation:2 replicates:1 mahoney:5 golub:1 accurate:11 necessary:1 xy:2 arthur:1 orthogonal:2 euclidean:4 continuing:1 plotted:1 theoretical:5 column:18 formalism:1 obstacle:1 soft:1 maximization:1 introducing:2 addressing:2 subset:3 entry:2 tishby:2 reported:1 chooses:1 combined:1 vershynin:1 international:1 randomized:7 siam:5 michael:1 together:2 hopkins:1 squared:1 containing:3 worse:1 return:4 rescaling:9 syst:1 lloyd:8 b2:1 inc:1 rescales:1 satisfy:1 later:2 root:1 performed:2 lab:1 analyze:1 characterizes:1 portion:1 start:2 appended:1 square:6 accuracy:7 hadi:1 correspond:1 identify:2 identification:1 lu:1 worth:2 drive:1 cybernetics:1 tissue:1 classified:1 inform:1 definition:12 failure:5 boutsidis:2 obvious:1 associated:1 proof:9 petros:1 cur:2 sampled:2 dataset:3 popular:3 recall:2 knowledge:1 hur:1 dimensionality:2 ubiquitous:1 formalize:1 nielsen:1 sophisticated:1 sdds:4 manuscript:1 improved:1 april:1 evaluated:1 though:1 hand:2 eqn:29 web:1 ei:1 replacing:2 somehow:2 quality:4 perhaps:1 unitarily:3 b3:2 usa:2 normalized:3 former:1 i2:1 eqns:1 naftali:1 die:1 criterion:1 hong:1 presenting:1 evident:1 performs:1 argues:1 ranging:1 harmonic:1 novel:1 functional:1 empirically:1 cohen:1 b4:1 belong:1 he:2 significant:1 measurement:1 rd:3 mathematics:2 similarly:1 had:1 tygert:1 multivariate:1 recent:2 perspective:1 irrelevant:1 belongs:3 instrumentation:1 inf:1 inequality:4 meeting:1 yi:3 drinep:1 employed:3 determine:1 july:2 ii:2 full:2 violate:1 pnas:1 manipulate:1 molecular:1 laplacian:2 
regression:3 essentially:1 expectation:2 metric:1 patient:2 woolfe:1 iteration:1 represent:1 normalization:1 robotics:1 dec:1 c1:10 whereas:1 want:1 addressed:1 baltimore:1 wealth:1 singular:14 source:1 microarray:3 appropriately:1 biased:1 rest:1 fopt:8 med:1 effectiveness:1 integer:5 practitioner:1 call:1 noting:1 leverage:14 split:1 identically:1 iii:1 perfectly:2 submultiplicativity:2 regarding:1 bottleneck:2 vassilvitskii:1 expression:2 pca:3 returned:3 york:1 matlab:1 useful:4 detailed:1 procrustes:1 extensively:1 category:5 reduced:1 http:1 exist:2 xij:2 percentage:1 notice:8 neuroscience:1 per:1 yy:2 correctly:1 diverse:1 discrete:3 vol:1 four:4 yit:1 characterisation:1 changing:2 hartigan:1 asd:13 kept:2 graph:1 defect:1 sum:2 year:1 run:5 inverse:1 fourth:2 soda:3 family:1 almost:2 chandrasekaran:1 guyon:2 wu:1 dy:1 bound:10 followed:1 annual:6 constraint:3 worked:1 dominated:1 fourier:2 argument:2 extremely:1 min:11 optimality:1 kumar:1 rabani:1 vempala:1 department:3 influential:1 march:2 cui:1 belonging:1 smaller:1 slightly:3 son:1 pan:1 s1:1 happens:1 outlier:1 invariant:3 pr:2 equation:2 discus:1 turn:2 fail:1 jolliffe:3 end:1 apply:3 polytechnic:2 spectral:2 appropriate:1 occurrence:2 alternative:1 swamy:1 existence:1 original:2 denotes:4 top:11 clustering:40 running:4 include:2 rudelson:1 medicine:1 objective:14 already:1 occurs:1 diagonal:3 september:1 subspace:2 distance:1 topic:1 argue:1 discriminant:1 toward:1 kannan:1 assuming:1 code:2 goreinov:1 hebrew:1 equivalently:1 ipsen:1 statement:8 troy:2 negative:1 numerics:1 design:4 upper:1 observation:2 datasets:3 markov:1 vkt:60 immediate:1 extended:1 rn:4 perturbation:1 arbitrary:1 community:2 introduced:2 unpublished:1 required:1 specified:2 connection:2 nip:9 trans:1 address:2 below:4 pattern:1 appeared:12 challenge:3 interpretability:1 max:3 power:2 event:1 critical:1 indicator:9 abbreviate:1 representing:2 scheme:1 imply:3 numerous:1 concludes:1 foley:1 prior:2 literature:2 schulman:1 
geometric:1 relative:2 asymptotic:2 interesting:1 proportional:1 proven:1 foundation:2 sufficient:5 lancet:1 viewpoint:1 pi:12 row:10 cancer:1 last:1 aij:1 formal:1 allow:2 side:2 understand:1 institute:3 face:4 absolute:1 distributed:1 dimension:2 computes:2 made:1 reinforcement:1 preprocessing:1 projected:1 employing:2 social:1 transaction:3 sj:2 approximate:4 keep:1 gene:4 b1:1 discriminative:3 rensselaer:2 iterative:1 table:4 ruyter:2 ca:1 pseudoskeleton:1 vinay:1 domain:1 significance:1 main:2 s2:1 n2:1 allowed:1 complementary:1 repeated:1 enormously:1 depicts:1 ny:3 slow:1 wiley:1 christos:1 sub:2 fails:2 pereira:1 mao:1 third:4 theorem:15 rk:2 emphasizing:1 down:1 discarding:2 showing:1 experimented:2 sarcoma:1 consist:1 quantization:2 albeit:1 importance:1 chatterjee:1 chen:1 depicted:1 pcm:1 jacm:1 gao:1 conveniently:1 corresponds:3 satisfies:5 extracted:1 acm:5 stromal:1 identity:2 kmeans:1 careful:1 replace:1 fisher:1 considerable:2 hard:1 man:1 loan:1 except:1 principal:10 lemma:20 called:5 total:1 tumor:2 multimedia:1 experimental:2 svd:6 select:4 formally:2 rokhlin:1 bioinformatics:1 pythagorean:1 evaluate:1 correlated:1 |
3,007 | 3,725 | Bayesian Belief Polarization
Alan Jern
Department of Psychology
Carnegie Mellon University
[email protected]
Kai-min K. Chang
Language Technologies Institute
Carnegie Mellon University
[email protected]
Charles Kemp
Department of Psychology
Carnegie Mellon University
[email protected]
Abstract
Empirical studies have documented cases of belief polarization, where two people with opposing prior beliefs both strengthen their beliefs after observing the
same evidence. Belief polarization is frequently offered as evidence of human
irrationality, but we demonstrate that this phenomenon is consistent with a fully
Bayesian approach to belief revision. Simulation results indicate that belief polarization is not only possible but relatively common within the set of Bayesian
models that we consider.
Suppose that Carol has requested a promotion at her company and has received a score of 50 on an
aptitude test. Alice, one of the company's managers, began with a high opinion of Carol and became
even more confident of her abilities after seeing her test score. Bob, another manager, began with a
low opinion of Carol and became even less confident about her qualifications after seeing her score.
On the surface, it may appear that either Alice or Bob is behaving irrationally, since the same piece
of evidence has led them to update their beliefs about Carol in opposite directions. This situation is
an example of belief polarization [1, 2], a widely studied phenomenon that is often taken as evidence
of human irrationality [3, 4].
In some cases, however, belief polarization may appear much more sensible when all the relevant
information is taken into account. Suppose, for instance, that Alice was familiar with the aptitude
test and knew that it was scored out of 60, but that Bob was less familiar with the test and assumed
that the score was a percentage. Even though only one interpretation of the score can be correct,
Alice and Bob have both made rational inferences given their assumptions about the test.
Some instances of belief polarization are almost certain to qualify as genuine departures from rational inference, but we argue in this paper that others will be entirely compatible with a rational
approach. Distinguishing between these cases requires a precise normative standard against which
human inferences can be compared. We suggest that Bayesian inference provides this normative
standard, and present a set of Bayesian models that includes cases where polarization can and cannot emerge. Our work is in the spirit of previous studies that use careful rational analyses in order
to illuminate apparently irrational human behavior (e.g. [5, 6, 7]).
Previous studies of belief polarization have occasionally taken a Bayesian approach, but often the
goal is to show how belief polarization can emerge as a consequence of approximate inference in
a Bayesian model that is subject to memory constraints or processing limitations [8]. In contrast,
we demonstrate that some examples of polarization are compatible with a fully Bayesian approach.
Other formal accounts of belief polarization have relied on complex versions of utility theory [9],
or have focused on continuous hypothesis spaces [10] unlike the discrete hypothesis spaces usually
considered by psychological studies of belief polarization. We focus on discrete hypothesis spaces
and require no additional machinery beyond the basics of Bayesian inference.
We begin by introducing the belief revision phenomena considered in this paper and developing a
Bayesian approach that clarifies whether and when these phenomena should be considered irrational.
We then consider several Bayesian models that are capable of producing belief polarization and
illustrate them with concrete examples. Having demonstrated that belief polarization is compatible
[Figure 1 appears here: schematic plots of P(h1) before and after updating for individuals A and B, across panels (a)(i) divergence, (a)(ii) convergence, and (b) parallel updating.]
Figure 1: Examples of belief updating behaviors for two individuals, A (solid line) and B (dashed
line). The individuals begin with different beliefs about hypothesis h1 . After observing the same set
of evidence, their beliefs may (a) move in opposite directions or (b) move in the same direction.
with a Bayesian approach, we present simulations suggesting that this phenomenon is relatively
generic within the space of models that we consider. We finish with some general comments on
human rationality and normative models.
1
Belief revision phenomena
The term "belief polarization" is generally used to describe situations in which two people observe
the same evidence and update their respective beliefs in the directions of their priors. A study by
Lord et al. [1] provides one classic example in which participants read about two studies, one of
which concluded that the death penalty deters crime and another which concluded that the death
penalty has no effect on crime. After exposure to this mixed evidence, supporters of the death
penalty strengthened their support and opponents strengthened their opposition.
We will treat belief polarization as a special case of contrary updating, a phenomenon where two
people update their beliefs in opposite directions after observing the same evidence (Figure 1a).
We distinguish between two types of contrary updating. Belief divergence refers to cases in which
the person with the stronger belief in some hypothesis increases the strength of his or her belief
and the person with the weaker belief in the hypothesis decreases the strength of his or her belief
(Figure 1a(i)). Divergence therefore includes cases of traditional belief polarization. The opposite
of divergence is belief convergence (Figure 1a(ii)), in which the person with the stronger belief
decreases the strength of his or her belief and the person with the weaker belief increases the strength
of his or her belief. Contrary updating may be contrasted with parallel updating (Figure 1b), in
which the two people update their beliefs in the same direction. Throughout this paper, we consider
only situations in which both people change their beliefs after observing some evidence. All such
situations can be unambiguously classified as instances of parallel or contrary updating.
Parallel updating is clearly compatible with a normative approach, but the normative status of divergence and convergence is less clear. Many authors argue that divergence is irrational, and many
of the same authors also propose that convergence is rational [2, 3]. For example, Baron [3] writes
that "Normatively, we might expect that beliefs move toward the middle of the range when people
are presented with mixed evidence." (p. 210) The next section presents a formal analysis that challenges the conventional wisdom about these phenomena and clarifies the cases where they can be
considered rational.
2
A Bayesian approach to belief revision
Since belief revision involves inference under uncertainty, Bayesian inference provides the appropriate normative standard. Consider a problem where two people observe data d that bear on some
hypothesis h1 . Let P1 (·) and P2 (·) be distributions that capture the two people's respective beliefs.
Contrary updating occurs whenever one person's belief in h1 increases and the other person's belief
in h1 decreases, or when
[P1 (h1 |d) − P1 (h1 )] [P2 (h1 |d) − P2 (h1 )] < 0 .    (1)
[Figure 2 appears here: eight three-node Bayes nets over the variables H, D, and V, labeled (a)–(h) and grouped into Family 1 and Family 2.]
Figure 2: (a) A simple Bayesian network that cannot produce either belief divergence or belief
convergence. (b)–(h) All possible three-node Bayes nets subject to the constraints described in
the text. Networks in Family 1 can produce only parallel updating, but networks in Family 2 can
produce both parallel and contrary updating.
We will use Bayesian networks to capture the relationships between H, D, and any other variables
that are relevant to the situation under consideration. For example, Figure 2a captures the idea that
the data D are probabilistically generated from hypothesis H. The remaining networks in Figure 2
show several other ways in which D and H may be related, and will be discussed later.
We assume that the two individuals agree on the variables that are relevant to a problem and agree
about the relationships between these variables. We can formalize this idea by requiring that both
people agree on the structure and the conditional probability distributions (CPDs) of a network N
that captures relationships between the relevant variables, and that they differ only in the priors they
assign to the root nodes of N . If N is the Bayes net in Figure 2a, then we assume that the two people
must agree on the distribution P (D|H), although they may have different priors P1 (H) and P2 (H).
If two people agree on network N but have different priors on the root nodes, we can create a single
expanded Bayes net to simulate the inferences of both individuals. The expanded network is created
by adding a background knowledge node B that sends directed edges to all root nodes in N , and acts
as a switch that sets different root node priors for the two different individuals. Given this expanded
network, distributions P1 and P2 in Equation 1 can be recovered by conditioning on the value of the
background knowledge node and rewritten as
[P (h1 |d, b1 ) − P (h1 |b1 )] [P (h1 |d, b2 ) − P (h1 |b2 )] < 0    (2)
where P (?) represents the probability distribution captured by the expanded network.
Suppose that there are exactly two mutually exclusive hypotheses. For example, h1 and h0 might
state that the death penalty does or does not deter crime. In this case Equation 2 implies that contrary
updating occurs when
[P (d|h1 , b1 ) − P (d|h0 , b1 )] [P (d|h1 , b2 ) − P (d|h0 , b2 )] < 0 .    (3)
Equation 3 is derived in the supporting material, and leads immediately to the following result:
R1: If H is a binary variable and D and B are conditionally independent given
H, then contrary updating is impossible.
Result R1 follows from the observation that if D and B are conditionally independent given H, then
the product in Equation 3 is equal to (P (d|h1 ) − P (d|h0 ))², which cannot be less than zero.
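This sign argument is easy to verify numerically. The sketch below (an illustrative check, not code from the paper) samples random priors for two observers who share the likelihood P(d | h) of the two-node network in Figure 2a, and confirms that their belief changes never have opposite signs:

```python
import random

def posterior(prior_h1, lik_h1, lik_h0):
    """P(h1 | d) by Bayes' rule for the two-node net H -> D with binary H."""
    num = prior_h1 * lik_h1
    return num / (num + (1.0 - prior_h1) * lik_h0)

random.seed(0)
for _ in range(10000):
    # Shared likelihoods P(d | h1), P(d | h0); distinct priors for A and B.
    lik_h1, lik_h0 = random.random(), random.random()
    prior_a, prior_b = random.random(), random.random()
    delta_a = posterior(prior_a, lik_h1, lik_h0) - prior_a
    delta_b = posterior(prior_b, lik_h1, lik_h0) - prior_b
    # The product in Equation 3 equals a squared term, so it is never negative.
    assert delta_a * delta_b >= -1e-12
```

Both deltas carry the sign of P(d | h1) − P(d | h0), so the two observers always move in the same direction, exactly as R1 states.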
R1 implies that the simple Bayes net in Figure 2a is incapable of producing contrary updating, an
observation previously made by Lopes [11]. Our analysis may help to explain the common intuition
that belief divergence is irrational, since many researchers seem to implicitly adopt a model in which
H and D are the only relevant variables. Network 2a, however, is too simple to capture the causal
relationships that are present in many real world situations. For example, the promotion example at
the beginning of this paper is best captured using a network with an additional node that represents
the grading scale for the aptitude test. Networks with many nodes may be needed for some real
world problems, but here we explore the space of three-node networks.
We restrict our attention to connected graphs in which D has no outgoing edges, motivated by the
idea that the three variables should be linked and that the data are the final result of some generative
process. The seven graphs that meet these conditions are shown in Figures 2b?h, where the additional variable has been labeled V . These Bayes nets illustrate cases in which (b) V is an additional
                      Conventional wisdom   Family 1   Family 2
Belief divergence                                         X
Belief convergence             X                          X
Parallel updating              X              X           X
Table 1: The first column represents the conventional wisdom about which belief revision phenomena are normative. The models in the remaining columns include all three-node Bayes nets. This set
of models can be partitioned into those that support both belief divergence and convergence (Family
2) and those that support neither (Family 1).
piece of evidence that bears on H, (c) V informs the prior probability of H, (d)–(e) D is generated
by an intervening variable V , (f) V is an additional generating factor of D, (g) V informs both the
prior probability of H and the likelihood of D, and (h) H and D are both effects of V . The graphs
in Figure 2 have been organized into two families. R1 implies that none of the graphs in Family 1 is
capable of producing contrary updating. The next section demonstrates by example that all three of
the graphs in Family 2 are capable of producing contrary updating.
Table 1 compares the two families of Bayes nets to the informal conclusions about normative approaches that are often found in the psychological literature. As previously noted, the conventional
wisdom holds that belief divergence is irrational but that convergence and parallel updating are
both rational. Our analysis suggests that this position has little support. Depending on the causal
structure of the problem under consideration, a rational approach should allow both divergence and
convergence or neither.
Although we focus in this paper on Bayes nets with no more than three nodes, the class of all network
structures can be partitioned into those that can (Family 2) and cannot (Family 1) produce contrary
updating. R1 is true for Bayes nets of any size and characterizes one group of networks that belong
to Family 1. Networks where the data provide no information about the hypotheses must also fail
to produce contrary updating. Note that if D and H are conditionally independent given B, then
the left side of Equation 3 is equal to zero, meaning contrary updating cannot occur. We conjecture
that all remaining networks can produce contrary updating if the cardinalities of the nodes and the
CPDs are chosen appropriately. Future studies can attempt to verify this conjecture and to precisely
characterize the CPDs that lead to contrary updating.
3
Examples of rational belief divergence
We now present four scenarios that can be modeled by the three-node Bayes nets in Family 2.
Our purpose in developing these examples is to demonstrate that these networks can produce belief
divergence and to provide some everyday examples in which this behavior is both normative and
intuitive.
3.1
Example 1: Promotion
We first consider a scenario that can be captured by Bayes net 2f, in which the data depend on two
independent factors. Recall the scenario described at the beginning of this paper: Alice and Bob
are responsible for deciding whether to promote Carol. For simplicity, we consider a case where
the data represent a binary outcome (whether or not Carol's résumé indicates that she is included
in The Directory of Notable People) rather than her score on an aptitude test. Alice believes that
The Directory is a reputable publication but Bob believes it is illegitimate. This situation is represented by the Bayes net and associated CPDs in Figure 3a. In the tables, the hypothesis space H =
{"Unqualified" = 0, "Qualified" = 1} represents whether or not Carol is qualified for the promotion,
the additional factor V = {"Disreputable" = 0, "Reputable" = 1} represents whether The Directory
is a reputable publication, and the data variable D = {"Not included" = 0, "Included" = 1} represents whether Carol is featured in it. The actual probabilities were chosen to reflect the fact that only
an unqualified person is likely to pad their résumé by mentioning a disreputable publication, but that
[Figure 3 appears here: the four Bayes nets and their conditional probability tables, panels (a)–(d).]
Figure 3: The Bayes nets and conditional probability distributions used in (a) Example 1: Promotion,
(b) Example 2: Religious belief, (c) Example 3: Election polls, (d) Example 4: Political belief.
only a qualified person is likely to be included in The Directory if it is reputable. Note that Alice
and Bob agree on the conditional probability distribution for D, but assign different priors to V and
H. Alice and Bob therefore interpret the meaning of Carol?s presence in The Directory differently,
resulting in the belief divergence shown in Figure 4a.
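The divergence in this example can be reproduced in a few lines. The CPD values below are illustrative stand-ins consistent with the qualitative description above; the exact numbers in Figure 3a may differ:

```python
from itertools import product

# P(D=1 | V, H): only an unqualified person (H=0) pads a disreputable
# publication (V=0), and only a qualified person (H=1) is likely to
# appear in a reputable one (V=1).
P_D1 = {(0, 0): 0.5, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.9}

def posterior_h1(p_v1, p_h1):
    """P(H=1 | D=1) for the net of Figure 2f, with V and H independent a priori."""
    joint = {}
    for v, h in product((0, 1), repeat=2):
        p_v = p_v1 if v else 1 - p_v1
        p_h = p_h1 if h else 1 - p_h1
        joint[(v, h)] = p_v * p_h * P_D1[(v, h)]
    qualified = joint[(0, 1)] + joint[(1, 1)]
    return qualified / sum(joint.values())

alice = posterior_h1(p_v1=0.9, p_h1=0.6)   # trusts The Directory
bob = posterior_h1(p_v1=0.01, p_h1=0.4)    # considers it illegitimate
assert alice > 0.6 and bob < 0.4           # the same datum drives beliefs apart
```

With these stand-in numbers, Alice's belief rises from 0.6 to about 0.90 while Bob's falls from 0.4 to about 0.13, matching the qualitative pattern of Figure 4a.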
This scenario is one instance of a large number of belief divergence cases that can be attributed to two
individuals possessing different mental models of how the observed evidence was generated. For
instance, suppose now that Alice and Bob are both on an admissions committee and are evaluating a
recommendation letter for an applicant. Although the letter is positive, it is not enthusiastic. Alice,
who has less experience reading recommendation letters, interprets the letter as a strong endorsement.
Bob, however, takes the lack of enthusiasm as an indication that the author has some misgivings [12].
As in the promotion scenario, the differences in Alice's and Bob's experience can be effectively
represented by the priors they assign to the H and V nodes in a Bayes net of the form in Figure 2f.
3.2
Example 2: Religious belief
We now consider a scenario captured by Bayes net 2g. In our example for Bayes net 2f, the status
of an additional factor V affected how Alice and Bob interpreted the data D, but did not shape their
prior beliefs about H. In many cases, however, the additional factor V will influence both people?s
prior beliefs about H as well as their interpretation of the relationship between D and H. Bayes net
2g captures this situation, and we provide a concrete example inspired by an experiment conducted
by Batson [13].
Suppose that Alice believes in a "Christian universe:" she believes in the divinity of Jesus Christ and
expects that followers of Christ will be persecuted. Bob, on the other hand, believes in a "secular
universe." This belief leads him to doubt Christ's divinity, but to believe that if Christ were divine,
his followers would likely be protected rather than persecuted. Now suppose that both Alice and
Bob observe that Christians are, in fact, persecuted, and reassess the probability of Christ's divinity.
This situation is represented by the Bayes net and associated CPDs in Figure 3b. In the tables, the
hypothesis space H = {"Human" = 0, "Divine" = 1} represents the divinity of Jesus Christ, the
additional factor V = {"Secular" = 0, "Christian" = 1} represents the nature of the universe, and
the data variable D = {"Not persecuted" = 0, "Persecuted" = 1} represents whether Christians are
subject to persecution. The exact probabilities were chosen to reflect the fact that, regardless of
worldview, people will agree on a "base rate" of persecution given that Christ is not divine, but that
more persecution is expected if the Christian worldview is correct than if the secular worldview is
correct. Unlike in the previous scenario, Alice and Bob agree on the CPDs for both D and H, but
[Figure 4 appears here: prior and updated values of P(H = 1) for Alice (solid line) and Bob (dashed line), panels (a)–(d).]
Figure 4: Belief revision outcomes for (a) Example 1: Promotion, (b) Example 2: Religious belief,
(c) Example 3: Election polls, and (d) Example 4: Political belief. In all four plots, the updated
beliefs for Alice (solid line) and Bob (dashed line) are computed after observing the data described
in the text. The plots confirm that all four of our example networks can lead to belief divergence.
differ in the priors they assign to V . As a result, Alice and Bob disagree about whether persecution
supports or undermines a Christian worldview, which leads to the divergence shown in Figure 4b.
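This scenario can also be checked numerically. The CPDs below are illustrative stand-ins consistent with the description above (a shared base rate of persecution when Christ is not divine, more persecution under the Christian worldview than the secular one); the exact values in Figure 3b may differ:

```python
# V: 0 = secular universe, 1 = Christian universe.
# H: 0 = human, 1 = divine.  D = 1: Christians are persecuted.
P_H1_GIVEN_V = {0: 0.1, 1: 0.9}                 # P(H=1 | V)
P_D1 = {(0, 0): 0.4, (0, 1): 0.01,              # P(D=1 | V, H): shared base
        (1, 0): 0.4, (1, 1): 0.6}               # rate 0.4 when H=0

def beliefs(p_v1):
    """Return (prior, posterior) values of P(H=1) in the net of Figure 2g."""
    prior = post = total = 0.0
    for v in (0, 1):
        p_v = p_v1 if v else 1 - p_v1
        for h in (0, 1):
            p_h = P_H1_GIVEN_V[v] if h else 1 - P_H1_GIVEN_V[v]
            w = p_v * p_h * P_D1[(v, h)]
            total += w
            if h == 1:
                prior += p_v * p_h
                post += w
    return prior, post / total

a_prior, a_post = beliefs(p_v1=0.9)   # Alice: Christian worldview
b_prior, b_post = beliefs(p_v1=0.1)   # Bob: secular worldview
assert a_post > a_prior and b_post < b_prior   # same datum, opposite updates
```

Here the observed persecution raises Alice's belief in Christ's divinity (from about 0.82 to about 0.87 with these stand-in numbers) while lowering Bob's (from about 0.18 to about 0.14).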
This scenario is analogous to many real world situations in which one person has knowledge that
the other does not. For instance, in a police interrogation, someone with little knowledge of the case
(V ) might take a suspect's alibi (D) as strong evidence of their innocence (H). However, a detective
with detailed knowledge of the case may assign a higher prior probability to the subject's guilt
based on other circumstantial evidence, and may also notice a detail in the suspect's alibi that only
the culprit would know, thus making the statement strong evidence of guilt. In all situations of this
kind, although two people possess different background knowledge, their inferences are normative
given that knowledge, consistent with the Bayes net in Figure 2g.
3.3
Example 3: Election polls
We now consider two qualitatively different cases that are both captured by Bayes net 2h. The
networks considered so far have all included a direct link between H and D. In our next two
examples, we consider cases where the hypotheses and observed data are not directly linked, but are
coupled by means of one or more unobserved causal factors.
Suppose that an upcoming election will be contested by two Republican candidates, Rogers and
Rudolph, and two Democratic candidates, Davis and Daly. Alice and Bob disagree about the various
candidates? chances of winning, with Alice favoring the two Republicans and Bob favoring the two
Democrats. Two polls were recently released, one indicating that Rogers was most likely to win the
election and the other indicating that Daly was most likely to win. After considering these polls,
they both assess the likelihood that a Republican will win the election.
This situation is represented by the Bayes net and associated CPDs in Figure 3c. In the tables,
the hypothesis space H = {"Democrat wins" = 0, "Republican wins" = 1} represents the winning
party, the variable V = {"Rogers" = 0, "Rudolph" = 1, "Davis" = 2, "Daly" = 3} represents the
winning candidate, and the data variables D1 = D2 = {"Rogers" = 0, "Rudolph" = 1, "Davis" =
2, "Daly" = 3} represent the results of the two polls. The exact probabilities were chosen to reflect
the fact that the polls are likely to reflect the truth with some noise, but whether a Democrat or
Republican wins is completely determined by the winning candidate V . In Figure 3c, only a single
D node is shown because D1 and D2 have identical CPDs. The resulting belief divergence is shown
in Figure 4c.
Note that in this scenario, Alice's and Bob's different priors cause them to discount the poll that
disagrees with their existing beliefs as noise, thus causing their prior beliefs to be reinforced by the
mixed data. This scenario was inspired by the death penalty study [1] alluded to earlier, in which
a set of mixed results caused supporters and opponents of the death penalty to strengthen their
existing beliefs. We do not claim that people's behavior in this study can be explained with exactly
the model employed here, but our analysis does show that selective interpretation of evidence is
sometimes consistent with a rational approach.
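To make Example 3 concrete, the computation can be sketched in a few lines. The priors and poll noise level below are illustrative stand-ins (the exact CPDs in Figure 3c are not reproduced here), chosen only so that the qualitative effect appears.

```python
# Sketch of Example 3 with assumed CPDs (not the paper's exact values).
# V: winning candidate, indices 0-3 = Rogers, Rudolph, Davis, Daly.
# H = "Republican wins" is fully determined by V (indices 0 and 1).
# Each poll names the true winner with prob. 0.7, any other with 0.1.

CANDIDATES = range(4)
REPUBLICANS = {0, 1}

def poll_likelihood(poll, v):
    return 0.7 if poll == v else 0.1

def p_republican(prior_v, polls):
    # Posterior over V is proportional to prior(v) * prod_i P(D_i = d_i | v).
    post = list(prior_v)
    for d in polls:
        post = [post[v] * poll_likelihood(d, v) for v in CANDIDATES]
    z = sum(post)
    return sum(post[v] for v in REPUBLICANS) / z

alice_prior = [0.5, 0.3, 0.1, 0.1]  # leans Republican: P(H = 1) = 0.8
bob_prior   = [0.1, 0.1, 0.3, 0.5]  # leans Democrat:   P(H = 1) = 0.2

polls = [0, 3]  # one poll favors Rogers, the other Daly (mixed evidence)
alice_post = p_republican(alice_prior, polls)  # rises above 0.80
bob_post = p_republican(bob_prior, polls)      # falls below 0.20
```

Each person discounts the poll that disagrees with their favored candidates, so the mixed pair of polls pushes the two posteriors apart, mirroring the divergence in Figure 4c.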
3.4 Example 4: Political belief
We conclude with a second illustration of Bayes net 2h in which two people agree on the interpretation of an observed piece of evidence but disagree about the implications of that evidence. In
this scenario, Alice and Bob are two economists with different philosophies about how the federal
government should approach a major recession. Alice believes that the federal government should
increase its own spending to stimulate economic activity; Bob believes that the government should
decrease its spending and reduce taxes instead, providing taxpayers with more spending money. A
new bill has just been proposed and an independent study found that the bill was likely to increase
federal spending. Alice and Bob now assess the likelihood that this piece of legislation will improve
the economic climate.
This scenario can be modeled by the Bayes net and associated CPDs in Figure 3d. In the tables, the
hypothesis space H = {'Bad policy' = 0, 'Good policy' = 1} represents whether the new bill is
good for the economy and the data variable D = {'No spending' = 0, 'Spending increase' = 1}
represents the conclusions of the independent study. Unlike in previous scenarios, we introduce two
additional factors, V1 = {'Fiscally conservative' = 0, 'Fiscally liberal' = 1}, which represents the
optimal economic philosophy, and V2 = {'No spending' = 0, 'Spending increase' = 1}, which
represents the spending policy of the new bill. The exact probabilities in the tables were chosen to
reflect the fact that if the bill does not increase spending, the policy it enacts may still be good for
other reasons. A uniform prior was placed on V2 for both people, reflecting the fact that they have
no prior expectations about the spending in the bill. However, the priors placed on V1 for Alice and
Bob reflect their different beliefs about the best economic policy. The resulting belief divergence
behavior is shown in Figure 4d. The model used in this scenario bears a strong resemblance to the
probabilogical model of attitude change developed by McGuire [14] in which V1 and V2 might be
logical 'premises' that entail the 'conclusion' H.
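Example 4 can also be sketched directly. The CPD values below are assumptions made for illustration (Figure 3d's exact tables may differ): a policy is likely good when the bill's spending choice matches the optimal philosophy, and the study is a 90%-accurate read of the bill's spending.

```python
# Sketch of Example 4 with assumed CPDs (Figure 3d's exact values may differ).
# V1: optimal philosophy (0 = conservative, 1 = liberal); priors differ.
# V2: the bill's policy (0 = no spending, 1 = spending increase); uniform prior.
# H = "good policy" is likely (0.9) when V2 matches the optimal philosophy,
# unlikely (0.2) otherwise. D is a noisy (90% accurate) study of V2.

P_H_GOOD = {(0, 0): 0.9, (0, 1): 0.2, (1, 0): 0.2, (1, 1): 0.9}

def p_good(p_v1_liberal, observed_d=None):
    total = z = 0.0
    for v1 in (0, 1):
        p_v1 = p_v1_liberal if v1 == 1 else 1.0 - p_v1_liberal
        for v2 in (0, 1):
            w = p_v1 * 0.5  # uniform prior on V2
            if observed_d is not None:
                w *= 0.9 if observed_d == v2 else 0.1
            z += w
            total += w * P_H_GOOD[(v1, v2)]
    return total / z

alice_before, alice_after = p_good(0.8), p_good(0.8, observed_d=1)
bob_before, bob_after = p_good(0.2), p_good(0.2, observed_d=1)
# Both start at P(good) = 0.55; after the "spending increase" finding,
# Alice moves up (0.718) while Bob moves down (0.382).
```

Both economists interpret the study identically (it raises P(V2 = spending) for each), yet its implication for H flips with the prior on V1, producing divergence.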
4 How common is contrary updating?
We have now described four concrete cases where belief divergence is captured by a normative
approach. It is possible, however, that belief divergence is relatively rare within the Bayes nets
of Family 2, and that our four examples are exotic special cases that depend on carefully selected
CPDs. To rule out this possibility, we ran simulations to explore the space of all possible CPDs for
the three networks in Family 2.
We initially considered cases where H, D, and V were binary variables, and ran two simulations
for each model. In one simulation, the priors and each row of each CPD were sampled from a
symmetric Beta distribution with parameter 0.1, resulting in probabilities highly biased toward 0
and 1. In the second simulation, the probabilities were sampled from a uniform distribution. In
each trial, a single set of CPDs were generated and then two different priors were generated for
each root node in the graph to simulate two individuals, consistent with our assumption that two
individuals may have different priors but must agree about the conditional probabilities. 20,000
trials were carried out in each simulation, and the proportion of trials that led to convergence and
divergence was computed. Trials were only counted as instances of convergence or divergence if
|P(H = 1|D = 1) − P(H = 1)| > ε for both individuals, with ε = 1 × 10^-5.
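The sampling procedure can be sketched as follows. For concreteness the sketch uses one binary Family-2 structure (two root causes with V1, V2 → H and V2 → D, as in Example 4); the CPDs are shared between the two simulated individuals while each root prior is drawn separately, and a trial counts as contrary updating when the two belief changes exceed ε in opposite directions.

```python
import random

EPS = 1e-5

def contrary_fraction(n_trials, a, seed=0):
    rng = random.Random(seed)
    contrary = 0
    for _ in range(n_trials):
        # CPDs shared by both individuals: one Bernoulli parameter per
        # parent configuration, sampled from Beta(a, a).
        p_h = {(v1, v2): rng.betavariate(a, a) for v1 in (0, 1) for v2 in (0, 1)}
        p_d = {v2: rng.betavariate(a, a) for v2 in (0, 1)}
        deltas = []
        for _person in (0, 1):
            p1 = rng.betavariate(a, a)  # this person's prior P(V1 = 1)
            p2 = rng.betavariate(a, a)  # this person's prior P(V2 = 1)
            prior = post = z = 0.0
            for v1 in (0, 1):
                for v2 in (0, 1):
                    w = (p1 if v1 else 1 - p1) * (p2 if v2 else 1 - p2)
                    prior += w * p_h[(v1, v2)]
                    wd = w * p_d[v2]  # reweight by P(D = 1 | v2)
                    z += wd
                    post += wd * p_h[(v1, v2)]
            if z > 0.0:
                deltas.append(post / z - prior)
        if (len(deltas) == 2 and all(abs(d) > EPS for d in deltas)
                and deltas[0] * deltas[1] < 0):
            contrary += 1
    return contrary / n_trials

frac_biased = contrary_fraction(2000, 0.1)   # Beta(0.1, 0.1) case
frac_uniform = contrary_fraction(2000, 1.0)  # uniform case
```

The full simulations additionally vary the network structure and the arity of V; this sketch only reproduces the counting logic for one network.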
The results of these simulations are shown in Table 2. The supporting material proves that divergence and convergence are equally common, and therefore the percentages in the table show the
frequencies for contrary updating of either type. Our primary question was whether contrary updating is rare or anomalous. In all but the third simulation, contrary updating constituted a substantial
proportion of trials, suggesting that the phenomenon is relatively generic. We were also interested
in whether this behavior relied on particular settings of the CPDs. The fact that percentages for the
uniform distribution are approximately the same or greater than for the biased distribution indicates
that contrary updating appears to be a relatively generic behavior for the Bayes nets we considered.
More generally, these results directly challenge the suggestion that normative accounts are not suited
for modeling belief divergence.
The last two columns of Table 2 show results for two simulations with the same Bayes net, the
only difference being whether V was treated as 2-valued (binary) or 4-valued. The 4-valued case
is included because both Examples 3 and 4 considered multi-valued additional factor variables V .
[Bayes net diagrams for each column are omitted here; the last two columns share the same network.]

            Net 1    Net 2    Net 3 (2-valued V)    Net 3 (4-valued V)
Biased       9.6%    12.7%           0%                   23.3%
Uniform     18.2%    16.0%           0%                   20.0%
Table 2: Simulation results. The percentages indicate the proportion of trials that produced contrary updating using the specified Bayes net (column) and probability distributions (row). The
prior and conditional probabilities were either sampled from a Beta(0.1, 0.1) distribution (biased) or a Beta(1, 1) distribution (uniform). The probabilities for the simulation results shown
in the last column were sampled from a Dirichlet([0.1, 0.1, 0.1, 0.1]) distribution (biased) or a
Dirichlet([1, 1, 1, 1]) distribution (uniform).
In Example 4, we used two binary variables, but we could have equivalently used a single 4-valued
variable. Belief convergence and divergence are not possible in the binary case, a result that is
proved in the supporting material. We believe, however, that convergence and divergence are fairly
common whenever V takes three or more values, and the simulation in the last column of the table
confirms this claim for the 4-valued case.
Given that belief divergence seems relatively common in the space of all Bayes nets, it is natural
to explore whether cases of rational divergence are regularly encountered in the real world. One
possible approach is to analyze a large database of networks that capture everyday belief revision
problems, and to determine what proportion of networks lead to rational divergence. Future studies
can explore this issue, but our simulations suggest that contrary updating is likely to arise in cases
where it is necessary to move beyond a simple model like the one in Figure 2a and consider several
causal factors.
5 Conclusion
This paper presented a family of Bayes nets that can account for belief divergence, a phenomenon
that is typically considered to be incompatible with normative accounts. We provided four concrete
examples that illustrate how this family of networks can capture a variety of settings where belief
divergence can emerge from rational statistical inference. We also described a series of simulations
that suggest that belief divergence is not only possible but relatively common within the family of
networks that we considered.
Our work suggests that belief polarization should not always be taken as evidence of irrationality,
and that researchers who aim to document departures from rationality may wish to consider alternative phenomena instead. One such phenomenon might be called 'inevitable belief reinforcement'
and occurs when supporters of a hypothesis update their belief in the same direction for all possible
data sets d. For example, a gambler will demonstrate inevitable belief reinforcement if he or she
becomes increasingly convinced that a roulette wheel is biased towards red regardless of whether
the next spin produces red, black, or green. This phenomenon is provably inconsistent with any fully
Bayesian approach, and therefore provides strong evidence of irrationality.
Although we propose that some instances of polarization are compatible with a Bayesian approach,
we do not claim that human inferences are always or even mostly rational. We suggest, however,
that characterizing normative behavior can require careful thought, and that formal analyses are
invaluable for assessing the rationality of human inferences. In some cases, a formal analysis will
provide an appropriate baseline for understanding how human inferences depart from rational norms.
In other cases, a formal analysis will suggest that an apparently irrational inference makes sense once
all of the relevant information is taken into account.
References
[1] C. G. Lord, L. Ross, and M. R. Lepper. Biased assimilation and attitude polarization: The
effects of prior theories on subsequently considered evidence. Journal of Personality and
Social Psychology, 37(1):2098–2109, 1979.
[2] L. Ross and M. R. Lepper. The perseverance of beliefs: Empirical and normative considerations. In New directions for methodology of social and behavioral science: Fallible judgment
in behavioral research. Jossey-Bass, San Francisco, 1980.
[3] J. Baron. Thinking and Deciding. Cambridge University Press, Cambridge, 4th edition, 2008.
[4] A. Gerber and D. Green. Misperceptions about perceptual bias. Annual Review of Political
Science, 2:189–210, 1999.
[5] M. Oaksford and N. Chater. A rational analysis of the selection task as optimal data selection.
Psychological Review, 101(4):608–631, 1994.
[6] U. Hahn and M. Oaksford. The rationality of informal argumentation: A Bayesian approach
to reasoning fallacies. Psychological Review, 114(3):704–732, 2007.
[7] S. Sher and C. R. M. McKenzie. Framing effects and rationality. In N. Chater and M. Oaksford,
editors, The probabilistic mind: Prospects for Bayesian cognitive science. Oxford University
Press, Oxford, 2008.
[8] B. O'Connor. Biased evidence assimilation under bounded Bayesian rationality. Master's
thesis, Stanford University, 2006.
[9] A. Zimper and A. Ludwig. Attitude polarization. Technical report, Mannheim Research Institute for the Economics of Aging, 2007.
[10] A. K. Dixit and J. W. Weibull. Political polarization. Proceedings of the National Academy of
Sciences, 104(18):7351–7356, 2007.
[11] L. L. Lopes. Averaging rules and adjustment processes in Bayesian inference. Bulletin of the
Psychonomic Society, 23(6):509–512, 1985.
[12] A. Harris, A. Corner, and U. Hahn. 'Damned by faint praise': A Bayesian account. In A. D. De
Groot and G. Heymans, editors, Proceedings of the 31st Annual Conference of the Cognitive
Science Society, Austin, TX, 2009. Cognitive Science Society.
[13] C. D. Batson. Rational processing or rationalization? The effect of disconfirming information
on a stated religious belief. Journal of Personality and Social Psychology, 32(1):176–184,
1975.
[14] W. J. McGuire. The probabilogical model of cognitive structure and attitude change. In R. E.
Petty, T. M. Ostrom, and T. C. Brock, editors, Cognitive Responses in Persuasion. Lawrence
Erlbaum Associates, 1981.
Extending Phase Mechanism to Differential Motion Opponency for Motion Pop-Out
Yicong Meng and Bertram E. Shi
Department of Electronic and Computer Engineering
Hong Kong University of Science and Technology
Clear Water Bay, Kowloon, Hong Kong
{eeyicong, eebert}@ust.hk
Abstract
We extend the concept of phase tuning, a ubiquitous mechanism among
sensory neurons including motion and disparity selective neurons, to the
motion contrast detection. We demonstrate that the motion contrast can be
detected by phase shifts between motion neuronal responses in different
spatial regions. By constructing the differential motion opponency in
response to motions in two different spatial regions, varying motion contrasts
can be detected, where similar motion is detected by zero phase shifts and
differences in motion by non-zero phase shifts. The model can exhibit either
enhancement or suppression of responses by either different or similar
motion in the surrounding. A primary advantage of the model is that the
responses are selective to relative motion instead of absolute motion, which
could model neurons found in neurophysiological experiments responsible
for motion pop-out detection.
1 Introduction
Motion discontinuity or motion contrast is an important cue for the pop-out of salient moving
objects from contextual backgrounds. Although the neural mechanism underlying the motion
pop-out detection is still unknown, the center-surround receptive field (RF) organization is
considered as a physiological basis responsible for the pop-out detection.
The center-surround RF structure is simple and ubiquitous in cortical cells especially in neurons
processing motion and color information. Nakayama and Loomis [1] have predicted the existence
of motion selective neurons with antagonistic center-surround receptive field organization in 1974.
Recent physiological experiments [2][3] show that neurons with center-surround RFs have been
found in both middle temporal (MT) and medial superior temporal (MST) areas related to motion
processing. This antagonistic mechanism has been suggested to detect motion segmentation [4],
figure/ground segregation [5] and the differentiation of object motion from ego-motion [6].
There are many related works [7]-[12] on motion pop-out detection. Some works [7]-[9] are based
on spatio-temporal filtering outputs, but their motion neurons do not interact fully, either only
inhibiting similar motion [7] or only enhancing opposite motion [8]. Heeger et al. [7] proposed a
center-surround operator to eliminate the response dependence upon rotational motions, but their
model only shows a complete center-surround interaction for moving directions. With respect to
surrounding speed effects, the neuronal responses are suppressed by surrounds moving at the same
speed as the center motion but are not enhanced by other speeds. A similar problem exists in [8],
which only modeled the suppression of neuronal responses in the classical receptive field (CRF) by
similar motions in surrounding regions. Physiological experiments [10][11] show that many neurons in
visual cortex are sensitive to the motion contrast rather than depend upon the absolute direction and
speed of the object motion. Although pooling over motion neurons tuned to different velocities can
eliminate the dependence upon absolute velocities, it is computationally inefficient and still can't
give full interactions of both suppression and enhancement by similar and opposite surrounding
motions. The model proposed by Dellen, et al. [12] computed differential motion responses directly
from complex cells in V1 and didn't utilize responses from direction selective neurons.
In this paper, we propose an opponency model which directly responds to differential motions by
utilizing the phase shift mechanism. Phase tuning is a ubiquitous mechanism in sensory information
processing, including motion, disparity and depth detection. Disparity selective neurons in the
visual cortex have been found to detect disparities by adjusting the phase shift between the receptive
field organizations in the left and right eyes [13][14]. Motion sensitive cells have been modeled in
the similar way as the disparity energy neurons and detect image motions by utilizing the phase shift
between the real and imaginary parts of temporal complex valued responses, which are comparable
to images to the left and right eyes [15]. Therefore, the differential motion can be modeled by
exploring the similarity between images from different spatial regions and from different eyes.
The remainder of this paper is organized as following. Section 2 illustrates the phase shift motion
energy neurons which estimate image velocities by the phase tuning in the imaginary path of the
temporal receptive field responses. In section 3, we extend the concept of phase tuning to the
construction of differential motion opponency. The phase difference determines the preferred
velocity difference between adjacent areas in retinal images. Section 4 investigates properties of
motion pop-out detection by the proposed motion opponency model. Finally, in section 5, we relate
our proposed model to the neural mechanism of motion integration and motion segmentation in
motion related areas and suggest a possible interpretation for adaptive center-surround interactions
observed in biological experiments.
2 Phase Shift Motion Energy Neurons
Adelson and Bergen [16] proposed the motion energy model for visual motion perception by
measuring spatio-temporal orientations of image sequences in space and time. The motion energy
model posits that the responses of direction-selective V1 complex cells can be computed by a
combination of two linear spatio-temporal filtering stages, followed by squaring and summation.
The motion energy model was extended in [15] to be phase tuned by splitting the complex valued
temporal responses into real and imaginary paths and adding a phase shift on the imaginary path.
Figure 1(a) demonstrates the schematic diagram of the phase shift motion energy model. Here we
assume an input image sequence in two-dimensional space (x, y) and time t. The separable
spatio-temporal receptive field ensures the cascade implementation of RF with spatial and temporal
filters. Due to the requirement of a causal temporal RF, the phase shift motion energy model does
not adopt a Gabor filter for the temporal RF as it does for the spatial RF. The phase shift
spatio-temporal RF is modeled with a complex valued function f(x, y, t) = g(x, y) · h(t, φ), where
the spatial and temporal RFs are denoted by g(x, y) and h(t, φ) respectively,
g(x, y) = N(x, y | 0, C) exp(jΩ_x x + jΩ_y y)
h(t, φ) = h_real(t) + exp(jφ) h_imag(t)    (1)
and C is the covariance matrix of the spatial Gaussian envelope and φ is the phase tuning of the
motion energy neuron. The real and imaginary profiles of the temporal receptive field are Gamma
modulated sinusoidal functions with quadrature phases,
h_real(t) = G(t | κ, τ) cos(Ω_t t)
h_imag(t) = G(t | κ, τ) sin(Ω_t t)    (2)
The envelopes for the complex exponentials are Gaussian and Gamma distribution functions,

N(x, y | 0, C) = 1 / (2π σ_x σ_y) · exp(−x² / (2σ_x²) − y² / (2σ_y²))    (3)
Figure 1. (a) shows the diagram of the phase shift motion energy model adapted from [15]. (b)
draws the spatiotemporal representation of the phase shift motion energy neuron with the real
and imaginary receptive fields demonstrated by the two left pictures. (c) illustrates the
construction of differential motion opponency with a phase difference φ from two populations
of phase shift motion energy neurons in two spatial areas c and s. To avoid clutter, the space
location (x, y) is not explicitly shown in the phase tuned motion energies.
G(t | κ, τ) = 1 / (Γ(κ) τ^κ) · t^(κ−1) exp(−t / τ) u(t)    (4)
where Γ(κ) is the gamma function and u(t) is the unit step function. The parameters κ and τ
determine the temporal RF size. As derived in [15], the motion energy at location (x, y) can be
computed by
E_v(x, y, φ) = S + P cos(θ − φ)    (5)
where
S = |V_real|² + |V_imag|²
P = 2 |V_real V_imag*|
θ = arg(V_real V_imag*)    (6)
and complex valued responses in real and imaginary paths are obtained as,
V_real(x, y, t) = ∫∫∫ g(ξ, η) h_real(ρ) I(x − ξ, y − η, t − ρ) dξ dη dρ
V_imag(x, y, t) = ∫∫∫ g(ξ, η) h_imag(ρ) I(x − ξ, y − η, t − ρ) dξ dη dρ    (7)
The superscript * represents complex conjugation and the phase shift parameter φ controls the
spatio-temporal orientation tuning. To avoid clutter, the spatial location variables x and y for S, P,
θ, V_real and V_imag are not explicitly shown in Eq. (5) and (6). Figure 1(b) demonstrates the even and
odd profiles of the spatio-temporal RF tuned to a particular phase shift.
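The temporal receptive field of Eqs. (2) and (4) is simple to assemble. The sketch below uses illustrative parameter values (Ω_t, κ, τ are not taken from the paper) and checks that with φ = π/2 the filter reduces to the quadrature pair G(t) exp(jΩ_t t).

```python
import math

def gamma_envelope(t, kappa, tau):
    # G(t | kappa, tau) of Eq. (4): Gamma-distribution envelope, zero for t < 0.
    if t < 0:
        return 0.0
    return (t ** (kappa - 1.0)) * math.exp(-t / tau) / (math.gamma(kappa) * tau ** kappa)

def temporal_rf(t, omega_t, kappa, tau, phi):
    # h(t, phi) = h_real(t) + exp(j * phi) * h_imag(t), per Eqs. (1)-(2).
    g = gamma_envelope(t, kappa, tau)
    h_real = g * math.cos(omega_t * t)
    h_imag = g * math.sin(omega_t * t)
    return complex(h_real, 0.0) + complex(math.cos(phi), math.sin(phi)) * h_imag

# Illustrative parameters (not the paper's values).
omega_t, kappa, tau = 2.0 * math.pi / 16.0, 3.0, 2.0
samples = [temporal_rf(t, omega_t, kappa, tau, phi=math.pi / 2) for t in range(32)]
# At phi = pi/2 the imaginary path is in quadrature, so |h(t)| = G(t).
```

The causality requirement shows up directly: the Gamma envelope vanishes for t < 0, unlike a temporal Gabor.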
Figure 2. Two types of differential motion opponency constructions of (a) center-surrounding
interaction and (b) left-right interaction. Among cells in area MT with surrounding
modulations, 25% of cells have the antagonistic RF structure in the top row and another
50% of cells have the integrative RF structure as shown in the bottom row.
3 Extending Phase Mechanism to Differential Motion Opponency
Based on the above phase shift motion energy model, the local image velocity at each spatial
location can be represented by a phase shift which leads to the peak response across a population of
motion energy neurons. Across regions of different motions, there are clear discontinuities on the
estimated velocity map. The motion discontinuities can be detected by edge detectors on the
velocity map to segment different motions. However, this algorithm for motion discontinuities
detection can?t discriminate between the object motion and uniform motions in contextual
backgrounds.
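Under Eq. (5), reading out the local velocity amounts to locating the peak of the cosine tuning curve at φ = θ. A sketch with hypothetical filter outputs (the complex values below are made up, not computed from an image):

```python
import cmath
import math

def motion_energy(v_real, v_imag, phi):
    # |V_real + exp(j*phi) * V_imag|^2, which expands to S + P cos(theta - phi).
    return abs(v_real + cmath.exp(1j * phi) * v_imag) ** 2

# Hypothetical responses of the real and imaginary paths at one location.
v_real = 1.5 + 0.5j
v_imag = 0.8 - 1.2j

s = abs(v_real) ** 2 + abs(v_imag) ** 2           # S in Eq. (6)
p = 2.0 * abs(v_real * v_imag.conjugate())        # P in Eq. (6)
theta = cmath.phase(v_real * v_imag.conjugate())  # preferred phase shift

phis = [2.0 * math.pi * k / 64.0 - math.pi for k in range(64)]
energies = [motion_energy(v_real, v_imag, phi) for phi in phis]
peak_phi = phis[energies.index(max(energies))]  # grid point nearest theta
```

Scanning the population for the peak phase is exactly the readout that produces the velocity (phase) map discussed above.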
Here we propose a phase mechanism to detect differential motions inspired by the disparity energy
model and adopt the center-surround inhibition mechanism to pop out the object motion from
contextual background motions. The motion differences between different spatial locations can be
modeled in the similar way as the disparity model. The motion energies from two neighboring
locations are considered as the retinal images to the left and right eyes. Thus, we can construct a
differential motion opponency by placing two populations of phase shift motion energy neurons at
different spatial locations, and the energy E_Δv(φ) of the opponency is the squared modulus of the
averaged phase shift motion energies over space and phase,
E_Δv(φ) = | ∫∫∫ E_v(x, y, θ) w(x, y, θ | φ) dx dy dθ |²    (8)
where w(x, y, θ | φ) is the profile for the differential motion opponency and Δv is the velocity
difference between the two spatial regions defined by the kernel. Since w(x, y, θ | φ) is intended to
implement the functional role of spatial interactions, it is desired to be a separable function in the
space and phase domains and can be modeled by a phase tuned summation of two spatial kernels,
w(x, y, θ | φ) = w_c(x, y) e^{jθ} + e^{j(θ + φ)} w_s(x, y)    (9)
where w_c(x, y) and w_s(x, y) are Gaussian kernels of different spatial sizes σ_c and σ_s, and φ is
the phase difference representing the velocity difference between the two spatial regions c and s.
Substituting Eq. (9) into Eq. (8), the differential motion energy can be reformulated as
E_Δv(φ) = | K_c + e^{jφ} K_s |²    (10)
Figure 3. (a) Phase map and (b) peak magnitude map are obtained from stimuli of two patches
of random dots moving with different velocities. The two patches of stimuli are statistically
independent but share the same spatial properties: dot size of 2 pixels, dot density of 10% and
dot coherence level of 100%. The phase tuned population of motion energy neurons is
applied to each patch of random dots with RF parameters Ω_t = 2π/8, Ω_x = 2π/16, σ_x = 5
and τ = 5.5. For each combination of velocities from the left and right patches, averaged phase
shifts over space and time are computed, as are the magnitudes of the peak responses. The unit
for velocities is pixels per frame.
where
K_c = ∫∫∫ E_{v,c}(x, y, θ) exp(jθ) w_c(x, y) dx dy dθ
K_s = ∫∫∫ E_{v,s}(x, y, θ) exp(jθ) w_s(x, y) dx dy dθ    (11)
E_Δv,c(x, y, θ) and E_Δv,s(x, y, θ) are the phase shift motion energies at location (x, y) with phase
shift θ. Utilizing the results in Eqs. (5) and (6), Eqs. (10) and (11) generate similar results,

    E_Δv(φ) = S_opp + P_opp cos(ψ_opp − φ)        (12)

where

    S_opp = |K_c|² + |K_s|²
    P_opp = 2 |K_c K_s*|                          (13)
    ψ_opp = arg(K_c K_s*)
According to the above derivations, by varying the phase shift φ between −π and π, the relative motion
energy of the differential motion opponency can be modeled as population responses across a
population of phase tuned motion opponencies. The response is completely specified by the three
parameters S_opp, P_opp and ψ_opp.
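The identity between Eq. (10) and Eqs. (12)–(13) can be checked numerically; the complex values K_c and K_s below are arbitrary stand-ins for the integrals of Eq. (11):

```python
import numpy as np

# Hypothetical complex integrals K_c, K_s (stand-ins for Eq. (11)).
Kc = 1.5 * np.exp(1j * 0.3)
Ks = 0.8 * np.exp(1j * -1.1)

# Eq. (13): the three parameters of the phase-tuned population response.
S_opp = abs(Kc)**2 + abs(Ks)**2
P_opp = 2.0 * abs(Kc * np.conj(Ks))
psi_opp = np.angle(Kc * np.conj(Ks))

# Eq. (10) vs. Eq. (12): the squared modulus is a raised cosine in phi.
phis = np.linspace(-np.pi, np.pi, 9)
direct = np.abs(Kc + np.exp(1j * phis) * Ks)**2      # Eq. (10)
cosine = S_opp + P_opp * np.cos(psi_opp - phis)      # Eq. (12)
assert np.allclose(direct, cosine)
```

Expanding |K_c + e^{jφ}K_s|² gives |K_c|² + |K_s|² + 2 Re(K_c K_s* e^{−jφ}), which is exactly the raised-cosine form of Eq. (12).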
The schematic diagram of this opponency is illustrated in Figure 1(c). The differential motion
opponency is constituted by three stages. At the first stage, a population of phase shift motion
energy neurons selective to different velocities is applied. At the second stage, the motion
energies from the first stage are weighted by kernels tuned to different spatial locations and phase
shifts for the two spatial regions, and two differential motion signals in region c and region s are
obtained by integrating responses from these two regions over space and phase tuning.
Finally, the differential motion energy is computed as the squared modulus of the sum of the
integrated motion signal in region c and the phase shifted motion signal in region s. The subscripts c
and s represent two interacting spatial regions which are not limited to the center and surround
regions. The opponency could also be constructed from neighboring left and right spatial regions.

Figure 4. Demonstrations of center-surround differential motion opponency, where (a) shows
the excitation by opposite directions outside the CRF and (b) shows the inhibition by
surrounding motions in the same directions. The center-surround inhibition models by Petkov
et al. [8] and Heeger et al. [7] are shown in (c) and (d). Responses above 1 indicate enhancement
and responses below 1 indicate suppression.

Figure 2 shows two types of structures for the differential motion opponency. In
[17], the authors demonstrate that among cells in area MT with surrounding modulations, 25% of
cells have the antagonistic RF structure shown in Figure 2(a) and another 50% of cells have
the integrative RF structure shown in Figure 2(b).
The velocity difference tuning of the opponency is determined by the phase shift parameter φ
combined with the spatial and temporal frequency parameters of the motion energy neurons.
Larger phase shift magnitudes prefer larger velocity differences. This phase tuning of velocity
difference is consistent with the phase tuning of motion energy neurons. Figure 3 shows the phase
map obtained by using random dot stimuli with different velocities on two spatial patches (left and
right patches of 128 × 128 pixels each). Along the diagonal line, the velocities of the left and
right patches are equal to each other, and therefore the phase estimates are zero along this line.
Moving away from the diagonal toward the upper-left and lower-right, the phase magnitudes increase;
positive phases indicate larger left velocities and negative phases indicate larger right velocities.
The phase tuning can give a good classification of velocity differences.
4  Validation of Differential Motion Opponency

Our derivation and analysis above show that the phase shift between two neighboring spatial regions
is a good indicator of the motion difference between these two regions. In this section, we validate the
proposed differential motion opponency with two sets of experiments, which show the effects of both
surrounding directions and speeds on the center motion.
Figure 5. The insensitivity of the proposed opponency model to absolute center and
surrounding velocities is demonstrated in (a), where responses are enhanced for all center
velocities from -2 to 2 pixels per frame. In (b), the model by Heeger et al. [7] only shows
enhancement when the center speed matches the preferred speed of 1.2 pixels per frame.
Similarly, responses above 1 indicate enhancement and below 1 indicate suppression. In both
curves, the velocity difference between the center and surrounding regions is maintained at a
constant 3 pixels per frame.
Physiological experiments [2][3] have demonstrated that the neuronal activities in the classical
receptive field are suppressed by responses outside the CRF to stimuli with similar motions
including both directions and speeds on the center and surrounding regions. On the contrary, visual
stimuli of opposite directions or quite different speeds outside the CRF enhance the responses in the
CRF. In their experiments, they used a set of stimuli of random dots moving at different velocities,
where there are small patches of moving random dots on the center.
We tested the properties of the proposed opponency model for motion difference measurement by
using similar random dot stimuli. The random dots in the background move with different speeds and
in different directions but have the same statistical parameters: dot size of 2 pixels, dot density of
10% and motion coherence level of 100%. The small random dot patches are placed on the center
of the background stimuli to stimulate the neurons in the CRF. These small patches share the same
statistical parameters with the background random dots but move with a constant velocity of 1 pixel per
frame.
Figure 4 shows results for the enhanced and suppressed responses in the CRF with varying
surrounding directions. The phase shift motion energy neurons had the same spatial and temporal
frequencies and the same receptive field sizes, and were selective to vertical orientations. The
preferred spatial frequency was 2π/16 radians per pixel and the temporal frequency was 2π/16 radians
per frame. The sizes of the RF in the horizontal and vertical directions were 5 pixels and 10
pixels respectively, corresponding to a spatial bandwidth of 1.96 octaves. The time constant τ was 5.5
frames, which resulted in a temporal bandwidth of 1.96 octaves. As shown in Figure 4 (a) and (b), the
surrounding motion of opposite direction gives the largest response to the motion in the CRF for the
inhibitory interaction and the smallest response for the excitatory interaction.
Results demonstrated in Figure 4 are consistent with the physiological results reported in [3]. In Born's
paper, inhibitory cells show response enhancement and excitatory cells show response suppression
when surrounding motions are in opposite directions. The 3-dB bandwidth for the surrounding
moving direction is about 135 degrees in the physiological experiments, while the bandwidth is
about 180 degrees for the simulation results of our proposed model.
Models proposed by Petkov et al. [8] and Heeger et al. [7] also show clear inhibition between
opposite motions. Petkov's model achieves the surround suppression for each point in
(x, y, t) space by subtracting the surround response from that point's response, followed by
half-wave rectification,

    Ẽ_v,θ(x, y, t) = [ E_v,θ(x, y, t) − α · S_v,θ(x, y, t) ]⁺        (14)

where E_v,θ(x, y, t) is the motion energy at location (x, y) and time t for a given preferred speed v and
orientation θ, S_v,θ(x, y, t) is the average motion energy in the surround of the point (x, y, t),
Ẽ_v,θ(x, y, t) is the suppressed motion energy, and the factor α controls the inhibition strength. The
inhibition term is computed as the weighted motion energy

    S_v,θ(x, y, t) = E_v,θ(x, y, t) ∗ w_v,θ(x, y, t)        (15)

where w_v,θ(x, y, t) is the surround weighting function.
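A minimal sketch of Eqs. (14)–(15) for a single (v, θ) channel, assuming numpy and a small hand-rolled 2-D convolution (the uniform surround kernel and α = 1 are illustrative, not the paper's parameters):

```python
import numpy as np

def convolve2d_same(E, w):
    # Minimal 'same'-size 2-D convolution with zero padding (no dependencies).
    kh, kw = w.shape
    ph, pw = kh // 2, kw // 2
    Ep = np.pad(E, ((ph, ph), (pw, pw)))
    out = np.zeros_like(E, dtype=float)
    for i in range(E.shape[0]):
        for j in range(E.shape[1]):
            out[i, j] = np.sum(Ep[i:i + kh, j:j + kw] * w[::-1, ::-1])
    return out

def petkov_suppression(E, w_surround, alpha=1.0):
    # Eq. (15): S = E * w (weighted surround energy), then
    # Eq. (14): E~ = [E - alpha * S]_+ (half-wave rectification).
    S = convolve2d_same(E, w_surround)
    return np.maximum(E - alpha * S, 0.0)
```

With a uniform energy map and a normalized surround kernel, interior responses are fully suppressed (surround equals center), while border responses survive because the zero-padded surround is weaker.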
Heeger's model constructs the center-surround motion opponent by computing a weighted
sum of responses from motion selective cells,

    R_v,θ(t) = Σ_{x,y} ρ(x, y) [ E_v,θ(x, y, t) − E_−v,θ(x, y, t) ]        (16)

where ρ(x, y) is a center-surround weighting function, and the motion energy at each point should
be normalized across all cells with different tuning properties.
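Eq. (16) reduces to a weighted spatial sum of opponent (preferred minus anti-preferred direction) energies; a sketch, assuming numpy and a hypothetical difference-of-Gaussians weighting ρ(x, y):

```python
import numpy as np

def dog_weight(size, sigma_c=1.5, sigma_s=4.0):
    # Hypothetical center-surround weighting rho(x, y): difference of Gaussians
    # (positive center, negative surround); parameters are illustrative.
    r = np.arange(size) - size // 2
    xx, yy = np.meshgrid(r, r)
    g = lambda s: np.exp(-(xx**2 + yy**2) / (2 * s**2)) / (2 * np.pi * s**2)
    return g(sigma_c) - g(sigma_s)

def heeger_opponent(E_pref, E_anti, rho):
    # Eq. (16): weighted spatial sum of opponent motion energies.
    return float(np.sum(rho * (E_pref - E_anti)))
```

When the preferred- and opposite-direction energy maps are equal, the opponent response vanishes; any direction imbalance is read out through the center-surround weights.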
As shown in Figure 4 (c) and (d) for the results of Petkov's and Heeger's models, we replace the
conventional frequency tuned motion energy neuron with our proposed phase tuned neuron. The
model by Petkov et al. [8] is generally suppressive and only reproduces weaker suppression for
opposite motions, which is inconsistent with the results from [3]. The model by Heeger et al. [7] has
properties similar to our proposed model with respect to both excitatory and inhibitory
interactions.
To evaluate the sensitivity of the proposed opponency model to velocity differences, we ran
simulations using stimuli similar to those of the experiment in Figure 4, but maintaining a
constant velocity difference of 3 pixels per frame between the center and surrounding random dot
patches. As shown in Figure 5, by varying the velocities of the random dots in the center region, we
found that the responses of the proposed model are always enhanced, independent of the absolute
velocities of the center stimuli, whereas the responses of Heeger's model achieve enhancement only
at a center velocity of 1.2 pixels per frame and remain suppressed at other speeds.
5  Discussion
We proposed a new biologically plausible model of the differential motion opponency to model the
spatial interaction property of motion energy neurons. The proposed opponency model is motivated
by the phase tuning mechanism of disparity energy neurons, which infers the disparity information
from the phase difference between complex valued responses to left and right retinal images.
Hence, the two neighboring spatial areas can be considered as left and right images, and the motion
difference between these two spatial regions is detected by the phase difference between the
complex valued responses at these two regions. Our experimental results demonstrate a conclusion
consistent with physiological experiments: motions of opposite directions and different speeds
outside the CRF can have both inhibitive and excitatory effects on the CRF responses. The
inhibitive interaction helps to segment the moving object from the background when fed back to
low-level features such as edges, orientations and color information.
Besides providing a unifying phase mechanism for understanding neurons with different functional
roles in different brain areas, the proposed opponency model could also provide a way to
understand motion integration and motion segmentation. Integration and segmentation are two
opposite motion perception tasks, but they co-exist as two fundamental types of motion
processing. Segmentation is achieved by discriminating motion signals from different objects,
which is thought to be due to the antagonistic interaction between center and surrounding RFs.
Integration is obtained by utilizing the enhancing effect of surrounding areas on CRF areas. Both
types of processing have been found in motion related areas including areas MT and MST. Tadin et
al. [18] found that motion segmentation dominates at high stimulus contrast and gives way
to motion integration at low stimulus contrast. Huang et al. [19] suggest that the surrounding
modulation adapts to the visual stimulus, such as its contrast and noise level. Since our
proposed opponency model determines the functional role of neurons by the phase shift
parameter alone, it is an ideal candidate model for adaptive
surround modulation with phase tuning between two spatial regions.
References
[1] K. Nakayama and J. M. Loomis, "Optical velocity patterns, velocity-sensitive neurons, and space
perception: A hypothesis," Perception, vol. 3, pp. 63-80, 1974.
[2] K. Tanaka, K. Hikosaka, H. Saito, M. Yukie, Y. Fukada and E. Iwai, "Analysis of local and
wide-field movements in the superior temporal visual areas of the macaque monkey," Journal of
Neuroscience, vol. 6, pp. 134-144, 1986.
[3] R. T. Born and R. B. H. Tootell, "Segregation of global and local motion processing in primate
middle temporal visual area," Nature, vol. 357, pp. 497-499, 1992.
[4] J. Allman, F. Miezin and E. McGuinness, "Stimulus specific responses from beyond the classical
receptive field: Neurophysiological mechanisms for local-global comparisons in visual neurons,"
Annual Review of Neuroscience, vol. 8, pp. 407-430, 1985.
[5] V. A. F. Lamme, "The neurophysiology of figure-ground segregation in primary visual cortex,"
Journal of Neuroscience, vol. 15, pp. 1605-1615, 1995.
[6] D. C. Bradley and R. A. Andersen, "Center-surround antagonism based on disparity in primate area
MT," Journal of Neuroscience, vol. 18, pp. 7552-7565, 1998.
[7] D. J. Heeger, A. D. Jepson and E. P. Simoncelli, "Recovering observer translation with
center-surround operators," Proc. IEEE Workshop on Visual Motion, pp. 95-100, Oct. 1991.
[8] N. Petkov and E. Subramanian, "Motion detection, noise reduction, texture suppression, and contour
enhancement by spatiotemporal Gabor filters with surround inhibition," Biological Cybernetics, vol. 97,
pp. 423-439, 2007.
[9] M. Escobar and P. Kornprobst, "Action recognition with a bio-inspired feedforward motion
processing model: the richness of center-surround interactions," ECCV '08: Proceedings of the 10th
European Conference on Computer Vision, pp. 186-199, Marseille, France, 2008.
[10] B. J. Frost and K. Nakayama, "Single visual neurons code opposing motion independent of
direction," Science, vol. 200, pp. 744-745, 1983.
[11] A. Cao and P. H. Schiller, "Neural responses to relative speed in the primary visual cortex of rhesus
monkey," Visual Neuroscience, vol. 20, pp. 77-84, 2003.
[12] B. K. Dellen, J. W. Clark and R. Wessel, "Computing relative motion with complex cells," Visual
Neuroscience, vol. 22, pp. 225-236, 2005.
[13] I. Ohzawa, G. C. DeAngelis and R. D. Freeman, "Encoding of binocular disparity by complex cells
in the cat's visual cortex," Journal of Neurophysiology, vol. 77, pp. 2879-2909, 1997.
[14] D. J. Fleet, H. Wagner and D. J. Heeger, "Neural encoding of binocular disparity: energy models,
position shifts and phase shifts," Vision Research, vol. 26, pp. 1839-1857, 1996.
[15] Y. C. Meng and B. E. Shi, "Normalized phase shift motion energy neuron populations for image
velocity estimation," International Joint Conference on Neural Networks, Atlanta, GA, June 14-19,
2009.
[16] E. H. Adelson and J. R. Bergen, "Spatiotemporal energy models for the perception of motion," J.
Opt. Soc. Am. A, vol. 2, pp. 284-299, 1985.
[17] D. K. Xiao, S. Raiguel, V. Marcar, J. Koenderink and G. A. Orban, "The spatial distribution of the
antagonistic surround of MT/V5 neurons," Cerebral Cortex, vol. 7, pp. 662-677, 1997.
[18] D. Tadin, J. S. Lappin, L. A. Gilroy and R. Blake, "Perceptual consequences of centre-surround
antagonism in visual motion processing," Nature, vol. 424, pp. 312-315, 2003.
[19] X. Huang, T. D. Albright and G. R. Stoner, "Adaptive surround modulation in cortical area MT,"
Neuron, vol. 53, pp. 761-770, 2007.
The Wisdom of Crowds in the Recollection of Order Information
Mark Steyvers, Michael Lee, Brent Miller, Pernille Hemmer
Department of Cognitive Sciences
University of California Irvine
[email protected]
Abstract
When individuals independently recollect events or retrieve facts from
memory, how can we aggregate these retrieved memories to reconstruct the
actual set of events or facts? In this research, we report the performance of
individuals in a series of general knowledge tasks, where the goal is to
reconstruct from memory the order of historic events, or the order of items
along some physical dimension. We introduce two Bayesian models for
aggregating order information based on a Thurstonian approach and
Mallows model. Both models assume that each individual's reconstruction
is based on either a random permutation of the unobserved ground truth, or
on a pure guessing strategy. We apply MCMC to make inferences about the
underlying truth and the strategies employed by individuals. The models
demonstrate a "wisdom of crowds" effect, where the aggregated orderings
are closer to the true ordering than the orderings of the best individual.
1  Introduction
Many demonstrations have shown that aggregating the judgments of a number of individuals
results in an estimate that is close to the true answer, a phenomenon that has come to be
known as the "wisdom of crowds" [1]. This was demonstrated by Galton, who showed that
the estimated weight of an ox, when averaged across individuals, closely approximated the
true weight [2]. Similarly, on the game show Who Wants to be a Millionaire, contestants are
given the opportunity to ask all members of the audience to answer multiple choice
questions. Over several seasons of the show, the modal response of the audience
corresponded to the correct answer 91% of the time. More sophisticated aggregation
approaches have been developed for multiple choice tasks, such as Cultural Consensus
Theory, that additionally take differences across individuals and items into account [3]. The
wisdom of crowds idea is currently used in several real-world applications, such as
prediction markets [4], spam filtering, and the prediction of consumer preferences through
collaborative filtering. Recently, it was shown that a form of the wisdom of crowds
phenomenon also occurs within a single person [5]. Averaging multiple guesses from one
person provides better estimates than the individual guesses.
We are interested in applying this wisdom of crowds phenomenon to human memory
involving situations where individuals have to retrieve information more complex than
single numerical estimates or answers to multiple choice questions. We will focus here on
memory for order information. For example, we test individuals on their ability to
reconstruct from memory the order of historic events (e.g., the order of US presidents), or
the magnitude along some physical dimension (e.g., the order of largest US cities). We then
develop computational models that infer distributions over orderings to explain the observed
orderings across individuals. The goal is to demonstrate a wisdom of crowds effect where
the inferred orderings are closer to the actual ordering than the orderings produced by the
majority of individuals.
Aggregating rank order data is not a new problem. In social choice theory, a number of
systems have been developed for aggregating rank order preferences for groups (Marden,
1995). Preferential voting systems, where voters explicitly rank order their candidate
preferences, are designed to pick one or several candidates out of a field of many. These
systems, such as the Borda count, perform well in aggregating the individuals' rank order
data, but with an inherent bias towards determining the top members of the list. However, as
voting is a means for expressing individual preferences, there is no ground truth. The goal
for these systems is to determine an aggregate of preferences that is in some sense "fair" to
all members of the group. The rank aggregation problem has also been studied in machine
learning and information retrieval [6,7]. For example, if one is presented with a ranked list
of webpages from several search engines, how can these be combined to create a single
ranking that is more accurate and less sensitive to spam?
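As an illustration of one such heuristic, a minimal Borda count aggregator can be sketched as follows (a sketch, not the authors' implementation; each ranking lists items from first to last):

```python
from collections import defaultdict

def borda_count(rankings):
    # Each ranking lists items from best (position 0) to worst. An item earns
    # (n - 1 - position) points per ranking; the aggregate orders items by
    # total points, highest first.
    scores = defaultdict(int)
    n = len(rankings[0])
    for ranking in rankings:
        for pos, item in enumerate(ranking):
            scores[item] += n - 1 - pos
    return sorted(scores, key=lambda item: -scores[item])

# Three voters; 'a' is ranked first twice and second once, so it wins.
print(borda_count([['a', 'b', 'c'], ['a', 'c', 'b'], ['b', 'a', 'c']]))
# -> ['a', 'b', 'c']
```

Note the top-of-list bias mentioned above: points are linear in position, so agreement near the top of the rankings dominates the aggregate.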
Relatively little research has been done on the rank order aggregation problem with the goal
of approximating a known ground truth. In follow-ups to Galton's work, some experiments
were performed testing the ability of individuals to rank-order magnitudes in psychophysical
experiments [8]. Also, an informal aggregation model for rank order data was developed for
Cultural Consensus Theory, using factor analysis of the covariance structure of rank
order judgments [3]. This was used to (partially) recover the order of causes of death in the
US on the basis of the individual orderings.

We present empirical and theoretical research on the wisdom of crowds phenomenon for
rank order aggregation. No communication between people is allowed in these tasks, and
therefore the aggregation method operates on data produced by independent decision-makers.
Importantly, for all of the problems there is a known ground truth. We compare
several heuristic computational approaches, based on voting theory and existing models of
social choice, that analyze the individual judgments and provide a single answer as output,
which can be compared to the ground truth. We refer to these synthesized answers as
"group" answers because they capture the collective wisdom of the group, even though no
communication between group members occurred. We also apply probabilistic models based
on a Thurstonian approach and Mallows model. The Thurstonian model represents the group
knowledge about items as distributions on an interval dimension [9]. Mallows model is a
distance-based model that represents the group answer as a modal ordering of items, and
assumes each individual to have orderings that are more or less close to the modal ordering
[10]. Although Thurstonian and Mallows types of models have often been used to analyze
preference rankings [11], they have not been applied, as far as we are aware, to ordering
problems where there is a ground truth. We also present extensions of these models that
allow for the possibility of different response strategies: some individuals might be purely
guessing because they have no knowledge of the problem, and others might have partial
knowledge of the ground truth. We develop efficient MCMC algorithms to infer the latent
group orderings and the assignments of individuals to response strategies. The advantage of
the MCMC estimation procedure is that it gives a probability distribution over group orderings,
and we can therefore assess the likelihood of any particular group ordering.
2  Experiment

2.1  Method
Participants were 78 undergraduate students at the University of California, Irvine. The
experiment was composed of 17 questions involving general knowledge regarding:
population statistics (4 questions), geography (3 questions), dates, such as release dates for
movies and books (7 questions), U.S. Presidents, material hardness, the 10 Commandments,
and the first 10 Amendments of the U.S. Constitution. An interactive interface was presented
on a computer screen. Participants were instructed to order the presented items (e.g., "Order
these books by their first release date, earliest to most recent"), and responded by dragging
the individual items on the screen to the desired location in the ordering. The initial ordering
of the 10 items within a question was randomized across all questions and all participants.
Table 1: Unique orderings for each individual for the states and presidents ordering problems
A
B
C
D
E
F
G
H
I
J
0
2
A
B
C
D
E
F
G
I
H
J
1
1
A AA
B BB
C CC
DED
F D E
E F F
G GH
HH I
I I G
J J J
1 1 2
5 1 1
A
B
C
D
F
E
G
I
H
J
2
1
A AA
B BB
C CD
E EC
DF E
F D F
G GH
I H G
H I I
J J J
2 2 2
1 1 1
A
B
D
C
F
E
G
H
I
J
2
3
A AA
B BB
C CC
DD D
E E E
F GH
I H F
H I G
G F J
J J I
3 3 3
2 1 1
A
B
C
D
F
E
I
G
H
J
3
1
A AA
B BB
C CC
E F F
F DD
D E E
G GH
I I G
HH I
J J J
3 3 3
1 1 1
A
B
D
C
F
E
H
G
I
J
3
1
A
B
D
F
C
E
G
H
I
J
3
1
AA
B B
CD
E C
F F
GE
DH
I I
HG
J J
4 4
1 1
A
B
D
C
H
E
F
G
I
J
4
1
A
C
B
D
E
F
H
I
J
G
4
1
AA
E B
B C
CG
D F
F D
G E
I H
H I
J J
4 5
1 1
A
B
D
F
C
H
E
G
I
J
5
1
A
B
D
F
E
C
G
I
H
J
5
1
AA
B B
D E
F F
ED
C C
HG
GH
I I
J J
5 5
1 1
A
B
C
D
F
H
I
G
E
J
6
1
A
B
C
E
D
G
J
F
I
H
6
1
AA
B B
DD
CH
H C
F E
E F
I I
GG
J J
6 6
1 1
A
B
F
C
D
E
I
H
G
J
6
1
A
B
F
D
C
E
H
I
G
J
6
1
AA
B B
C C
DH
H F
I E
F D
EG
G I
J J
7 7
1 1
A
B
F
C
E
D
I
H
G
J
7
1
A
C
B
F
D
I
E
H
G
J
7
1
ABB
F AC
B CD
DEA
CH F
EDH
H I E
I F G
GG J
J J I
7 7 7
1 1 1
A
B
C
H
F
D
I
E
G
J
8
1
A AB
B GA
C CC
I B F
F D J
E F D
DE E
G I G
HH H
J J I
8 8 8
1 1 1
A
B
C
H
I
D
F
E
G
J
9
1
A BA
B AB
D F D
C CH
HH I
F D C
I I F
J E E
E GG
G J J
9 9 10
1 1 1
A
B
E
I
D
F
C
H
G
J
A
B
C
D
E
[Table 1: All unique orderings produced by participants for two problems, one ordering per column, with the correct ordering shown on the right and columns sorted by Kendall's τ distance. Left: U.S. states arranged by east-west location (A = Oregon, B = Utah, C = Nebraska, D = Iowa, E = Alabama, F = Ohio, G = Virginia, H = Delaware, I = Connecticut, J = Maine), with τ distances ranging up to 42. Right: U.S. presidents sorted by the time they served in office (A = George Washington, B = John Adams, C = Thomas Jefferson, D = James Monroe, E = Andrew Jackson, F = Theodore Roosevelt, G = Woodrow Wilson, H = Franklin D. Roosevelt, I = Harry S. Truman, J = Dwight D. Eisenhower), with τ distances ranging up to 28. Below each column, the first number is the τ distance and the second is the number of participants who produced that ordering.]
Results
To evaluate the performance of participants as well as models, we measured the distance
between the reconstructed and the correct ordering. A commonly used distance metric for
orderings is Kendall's τ. This distance metric counts the number of adjacent pairwise
disagreements between orderings. Values of τ range over 0 ≤ τ ≤ N(N-1)/2, where N is
the number of items in the order (10 for all of our questions). A value of zero means the
ordering is exactly right, a value of one means that the ordering is correct except for two
neighboring items being transposed, and so on up to the maximum possible value of 45.
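A minimal Python sketch of this distance (the function name and sequence representation are our own, not from the paper):

```python
from itertools import combinations

def kendall_tau_distance(order_a, order_b):
    """Number of item pairs on whose relative order the two orderings disagree."""
    pos_b = {item: i for i, item in enumerate(order_b)}
    # For every pair (x, y) with x before y in order_a, check whether
    # order_b reverses them; each reversed pair counts as one disagreement.
    return sum(1 for x, y in combinations(order_a, 2) if pos_b[x] > pos_b[y])

# Identical orderings give 0; one transposed neighboring pair gives 1;
# a complete reversal of 10 items gives the maximum 10*9/2 = 45.
```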
Table 1 shows all unique orderings, by column, that were produced for two problems:
arranging U.S. States by east-west location, and sorting U.S. Presidents by the time they
served in office. The correct ordering is shown on the right. The columns are sorted by
Kendall's τ distance. The first and second number below each ordering correspond to
Kendall's τ distance and the number of participants who produced the ordering respectively.
These two examples show that only a small number of participants reproduced the correct
ordering (in fact, for 11 out of 17 problems, no participant gave the correct answer). It also
shows that very few orderings are produced by multiple participants. For 8 out of 17
problems, each participant produced a unique ordering.
To summarize the results across participants, the column labeled PC in Table 2 shows the
proportion of individuals who got the ordering exactly right for each of the ordering task
questions. On average, about one percent of participants recreated the correct rank ordering
perfectly. The column τ shows the mean τ values over the population of participants for
each of the 17 sorting task questions. As this is a prior knowledge task, it is interesting to
note that the best performance overall was achieved on the Presidents, States from west to east,
Oscar movies, and Movie release dates tasks. These four questions relate to educational and
cultural knowledge that seems most likely to be shared by our undergraduate subjects.
Finally, an important summary statistic is the performance of the best individual. Instead of
picking the best individual separately for each problem, we find the individual who scores
best across all problems. Table 2, bottom row, shows that this individual has on average a τ
distance of 7.8. To demonstrate the wisdom of crowds effect, we have to show that the
synthesized group ordering outperforms the ordering, on average, of this best individual.
3 Modeling
We evaluated a number of aggregation models on their ability to reconstruct the ground truth
based on the group ordering inferred from individual orderings. First, we evaluate two
heuristic methods from social choice theory based on the mode and Borda counts. One
drawback of such heuristic aggregation models is that they create no explicit representation
of each individual's working knowledge. Therefore, even though such methods can aggregate
Table 2: Performance of the four models and human participants

                         Humans        Thurstonian Model    Mallows Model       Borda Counts        Mode
Problem                  PC      τ      C     τ    Rank      C     τ    Rank     C     τ    Rank     C     τ    Rank
books                    .000  12.3     0     5    91        0     5    91       0     7    82       0    12    40
city population europe   .000  16.9     0    11    81        0    12    77       0    11    81       0    17    42
city population us       .000  15.9     0     7    96        0     7    96       0    12    67       0    16    45
city population world    .000  19.3     0    16    73        0    16    73       0    15    77       0    19    44
country landmass         .000  10.9     0     5    95        0     5    95       0     5    95       0     7    76
country population       .000  14.6     0    12    74        0    11    82       0    11    82       0    15    53
hardness                 .000  15.3     0    14    64        0    14    64       0    11    91       0    15    46
holidays                 .051   8.9     0     4    78        0     5    77       0     4    78       1     0   100
movies releasedate       .013   7.3     0     2    95        0     2    95       0     2    95       0     2    95
oscar bestmovies         .013  11.2     0     4    90        0     4    90       0     3    97       0     3    97
oscar movies             .000  11.9     0     1   100        0     1   100       0     2    96       0     2    96
presidents               .064   7.5     0     2    87        0     1    94       0     3    79       1     0   100
rivers                   .000  16.1     0    13    77        0    14    67       0    11    91       0    16    42
states westeast          .026   8.2     0     2    88        0     2    88       0     3    78       0     1    97
superbowl                .000  18.6     0    16    65        0    15    71       0    10    96       0    19    40
ten amendments           .013  14.0     0     2    97        0     3    96       0     5    90       0     4    95
ten commandments         .000  16.8     0     8    90        0     7    91       0    12    74       0    17    51
AVERAGE                  .011  13.3   .00  7.29    84.8    .00  7.29    85.1   .00  7.47    85.3   .12  9.67    68.2
BEST INDIVIDUAL             0   7.8
the individual pieces of knowledge across individuals, they cannot explain why individuals
rank the items in a particular way. To address this potential weakness, we develop two
simple probabilistic models based on the Thurstonian approach [9] and Mallows model [10].
3.1 Heuristic Models
We tested two heuristic aggregation models. In the simplest heuristic, based on the mode, the
group answer is based on the most frequently occurring sequence of all observed sequences.
In cases where several different sequences correspond to the mode, a randomly chosen
modal sequence was picked. The second method uses the Borda count method, a widely used
technique from voting theory. In the Borda count method, weighted counts are assigned such
that the first choice "candidate" receives a count of N (where N is the number of candidates),
the second choice candidate receives a count of N-1, and so on. These counts are summed
across candidates and the candidate with the highest count is considered the "most
preferred". Here, we use the Borda count to create an ordering over all items by ordering the
Borda counts.
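Both heuristics are easy to state in code. A minimal sketch (the function names and the arbitrary tie-breaking in the mode are our choices):

```python
from collections import Counter

def mode_answer(orderings):
    """Group answer: the most frequently occurring ordering (ties broken arbitrarily)."""
    return Counter(tuple(o) for o in orderings).most_common(1)[0][0]

def borda_answer(orderings):
    """Group answer: items sorted by summed Borda counts, where the first-choice
    item in each ordering receives N points, the second N-1, and so on."""
    n = len(orderings[0])
    counts = Counter()
    for ordering in orderings:
        for rank, item in enumerate(ordering):
            counts[item] += n - rank
    return tuple(sorted(counts, key=counts.get, reverse=True))
```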
Table 2 reports the performance of all of the aggregation models. For each, we checked
whether the inferred group order is correct (C) and measured Kendall's τ. We also report in
the rank column the percentage of participants who perform worse or the same as the group
answer, as measured by τ. With the rank statistic, we can verify the wisdom of crowds effect.
In an ideal model, the aggregate answer should be as good as or better than all of the
individuals in the group. Table 2 shows the results separately for each problem, and averaged
across all the problems.
These results show that the mode heuristic leads to the worst performance overall in rank.
On average, the mode is as good or better of an estimate than 68% of participants. This
means that 32% of participants came up with better solutions individually. This is not
surprising, since, with an ordering of 10 items, it is possible that only a few participants will
agree on the ordering of items. The difficulty in inferring the mode makes it an unreliable
method for constructing a group answer. This problem will be exacerbated for orderings
involving more than 10 items, as the number of possible orderings grows combinatorially.
The Borda count method performs relatively well in terms of Kendall's τ and overall rank
performance. On average, these methods perform with ranks of 85%, indicating that the
group answers from these methods score amongst the best individuals. On average, the
Borda count has an average distance of 7.47, which outperforms the best individual over all
problems.
[Figure 1. Illustration of the extended Thurstonian model with a guessing component. Left panel, Thurstonian model (z = 1): interval representations for items A, B, and C, with sampled item values (x1, x2) producing the orderings y1: A < B < C and y2: A < C < B. Right panel, guessing model (z = 0): all item samples (x3, x4) come from a single distribution, producing the orderings y3: C < B < A and y4: C < A < B.]
3.2 A Thurstonian Model
In the Thurstonian approach, the overall item knowledge for the group is represented
explicitly as a set of coordinates on an interval dimension. The interval representation is
justifiable, at least for some of the problems in our study that involve one-dimensional
concepts, such as the relative timing of events, or the lengths of items. We will introduce an
extension of the Thurstonian approach where the orderings of some of the individuals are
drawn from a Thurstonian model and others are based on a guessing process with
no relation to the underlying interval representation.

To introduce the basic Thurstonian approach, let N be the number of items in the ordering
task and M the number of individuals ordering the items. Each item i is represented as a value
μ_i along this dimension, where i ∈ {1, …, N}. Each individual is assumed to have access to
this group-level information. However, individuals might not have precise knowledge about
the exact location of each item. We model each individual's location of the item by a single
sample from a Normal distribution, centered on the item's group location. Specifically, in
our model, when determining the order of items, a person j ∈ {1, …, M} samples a value for
each item i, x_ij ~ Normal(μ_i, σ_i). The standard deviation σ_i captures the uncertainty that
individuals have about item i, and the samples x_ij represent the mental representation for the
individual. The ordering for each individual is then based on the ordering of their samples.
Let y_j be the observed ordering of the items for individual j, so that y_j = (r_1, r_2, …, r_N) if and
only if x_{r_1 j} < x_{r_2 j} < … < x_{r_N j}. Figure 1 (left panel) shows an example of this basic
Thurstonian model with group-level information for three items, A, B, and C and two
individuals. In the illustration, there is a larger degree of overlap between the representations
for B and C, making it likely that items B and C are transposed (as illustrated for the second
individual).
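As a concrete illustration, the generative process can be simulated as follows (all parameter values below are invented for illustration, not fitted values from the paper):

```python
import random

def sample_ordering(mu, sigma, rng):
    """One simulated individual: draw x_i ~ Normal(mu_i, sigma_i) for every item,
    then report the items sorted by their sampled values (smallest first)."""
    x = [rng.gauss(m, s) for m, s in zip(mu, sigma)]
    return tuple(sorted(range(len(mu)), key=lambda i: x[i]))

# Three items A, B, C (indices 0, 1, 2): B and C overlap heavily, so the
# transposition (A, C, B) is common, while A almost always stays first.
rng = random.Random(0)
mu = [0.0, 1.0, 1.2]
sigma = [0.2, 0.5, 0.5]
orderings = [sample_ordering(mu, sigma, rng) for _ in range(1000)]
```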
We extend this basic Thurstonian model by incorporating a guessing component. We found
this to be a necessary extension because some individuals in the ordering tasks actually were
[Figure 2. Graphical model of the extended Thurstonian model (a) and Mallows model (b). In both, a latent assignment z_j selects the process that generates each individual's observed ordering y_j, for j = 1, …, M; panel (a) additionally shows the latent item samples x_j and the guessing parameters μ_0, σ_0.]
[Figure 3. Sample Thurstonian inferred distributions. The vertical order is the ground truth ordering, while the numbers in parentheses show the inferred group ordering. Recoverable panels: Presidents, ordered first to last (George Washington (1), John Adams (2), Thomas Jefferson (3), James Monroe (5), Andrew Jackson (4), Theodore Roosevelt (6), Woodrow Wilson (7), Franklin D. Roosevelt (9), Harry S. Truman (8), Dwight D. Eisenhower (10)); Country Landmass, ordered largest to smallest (Russia (1), Canada (4), China (2), United States (3), Brazil (7), Australia (5), India (6), Argentina (8), Kazakhstan (10), Sudan (9)); and Ten Amendments, ordered first to last (Freedom of speech & religion (1), Right to bear arms (2), No quartering of soldiers (4), No unreasonable searches (3), Due process (5), Trial by jury (6), Civil trial by jury (7), No cruel punishment (8), Right to non-specified rights (10), Power for the states & people (9)).]
not familiar with any of the items in the ordering tasks (such as the Ten Commandments or
ten amendments). In the extended Thurstonian model, the orderings of such cases are assumed
to originate from a single distribution, x_ij ~ Normal(μ_0, σ_0), where no distinction is made
between the different items; all samples come from the same distribution with parameters
μ_0, σ_0. Therefore, the orderings produced by the individuals under this model are
completely random. For example, Figure 1, right panel, shows two orderings produced from
this guessing model. We associate a latent state z_j with each individual that determines
whether the ordering from each individual is produced by the guessing model or the
Thurstonian model:

    x_ij | z_j, μ_i, σ_i, μ_0, σ_0  ~  Normal(μ_i, σ_i)   if z_j = 1
                                       Normal(μ_0, σ_0)   if z_j = 0.    (1)
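A sketch of the generative side of Eq. (1) (the mixing probability p_know and all parameter values are our illustrative assumptions; in the paper z_j is inferred, not fixed):

```python
import random

def sample_individual(mu, sigma, mu0, sigma0, p_know, rng):
    """Draw z_j, then sample every item from its own normal (z_j = 1) or from
    one shared 'guessing' normal that ignores item identity (z_j = 0)."""
    z = 1 if rng.random() < p_know else 0
    if z == 1:
        x = [rng.gauss(m, s) for m, s in zip(mu, sigma)]  # informed individual
    else:
        x = [rng.gauss(mu0, sigma0) for _ in mu]          # pure guessing
    return z, tuple(sorted(range(len(mu)), key=lambda i: x[i]))
```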
To complete the model, we placed a standard prior on all normal distributions. Figure 2a
shows the graphical model for the Thurstonian model. Although the model
looks straightforward as a hierarchical model, inference in this model has proven to be
difficult because the observed variable y_j is a deterministic ranking function (indicated by
the double bordered circle) of the underlying latent variable x_j. The basic Thurstonian model
was introduced by Thurstone in 1927, but only recently have MCMC methods been
developed for estimation [12]. We developed a simplified MCMC procedure as described in
the supplementary materials that allows for efficient estimation of the underlying true
ordering, as well as the assignment of individuals to response strategies.
The results of the extended Thurstonian model are shown in Table 2. The model performs
approximately as well as the Borda count method. The model does not recover the exact
answer for any of the 17 problems, based on the knowledge provided by the 78 participants.
It is possible that a larger sample size is needed in order to achieve perfect reconstructions of
the ground truth. However, the model, on average, has a τ distance of 7.29 from the actual
truth, which is better than the best individual over all problems.

One advantage of the probabilistic approach is that it gives insight into the difficulty of the
task and the response strategies of individuals. For some problems, such as the Ten
Commandments, 32% of individuals were assigned to the guessing strategy (z_j = 0). For
other problems, such as the US Presidents, only 16% of individuals were assigned to the
guessing strategy, indicating that knowledge about this domain was more widely distributed
in our group of individuals. Therefore, the extension of the Thurstonian model can eliminate
individuals who are purely guessing the answers.
An advantage of the representation underlying the Thurstonian model is that it allows a
visualization of group knowledge not only in terms of the order of items, but also in terms of
the uncertainty associated with each item on the interval scale. Figure 3 shows the inferred
distributions for four problems where the model performed relatively well. The crosses
correspond to the mean of μ_i across all samples, and the error bars represent the standard
deviations σ_i based on a geometric average across all samples. These visualizations are
intuitive, and show how some items are confused with others in the group population. For
instance, nearly all participants were able to identify Maine as the easternmost state in our
list, but many confused the central states. Likewise, there was a large agreement on the
proper placement of "the right to bear arms" in the amendments question; this amendment
is often popularly referred to as "The Second Amendment".
3.3 Mallows Model
One drawback of the Thurstonian model is that it gives an analog representation for each
item, which might be inappropriate for some problems. For example, it seems
psychologically implausible that the ten amendments or Ten Commandments are mentally
represented as coordinates on an interval scale. Therefore, we also applied probabilistic
models where the group answer is based on a pure rank ordering. One such model is
Mallows model [7, 9, 10], a distance-based model that assumes that observed orderings that
are close to the group ordering are more likely than those far away. One instantiation of
Mallows model is based on Kendall's distance to measure the number of pairwise
permutations between the group order and the individual order. Specifically, the probability
of any observed order y, given the group order ω, is:

    p(y | ω, θ) = (1 / ψ(θ)) exp(−θ d(y, ω))    (2)

where d is Kendall's τ distance. The scaling parameter θ determines how close the
observed orders are to the group ordering. As described by [7], the normalization function
ψ(θ) does not depend on ω and can be calculated efficiently by:

    ψ(θ) = ∏_{i=1}^{N} (1 − e^{−iθ}) / (1 − e^{−θ}).    (3)
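Eqs. (2) and (3) translate directly into code (a minimal sketch; function names are ours):

```python
import math

def mallows_psi(theta, n):
    """Normalization of Eq. (3): prod_{i=1..n} (1 - e^{-i*theta}) / (1 - e^{-theta})."""
    return math.prod((1 - math.exp(-i * theta)) / (1 - math.exp(-theta))
                     for i in range(1, n + 1))

def mallows_prob(d, theta, n):
    """Eq. (2): probability of an ordering at Kendall distance d from the group order."""
    return math.exp(-theta * d) / mallows_psi(theta, n)
```

Because ψ(θ) equals the sum of exp(−θ d) over all N! orderings, these probabilities sum to one.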
The model as stated in Eqs. (2) and (3) describes the standard Mallows model that has
often been used to model preference ranking data. We now introduce a simple variant of this
model that allows for contaminants. The idea is that some of the individuals' orderings do not
originate at all from some common group knowledge, and instead are based on a guessing
process. The extended model introduces a latent state z_j, where z_j = 1 if individual j
produced the ordering based on Mallows model and z_j = 0 if the individual is guessing. We
model guessing by choosing an ordering uniformly from all possible orderings of N items.
Therefore, in the extended model, we have

    p(y_j | ω, θ, z_j) = (1 / ψ(θ)) exp(−θ d(y_j, ω))   if z_j = 1
                         1 / N!                          if z_j = 0.    (4)
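The mixture likelihood of Eq. (4) then needs only one extra branch (a sketch; the function name is ours):

```python
import math

def extended_mallows_prob(d, theta, n, z):
    """Eq. (4): Mallows likelihood when z = 1, uniform guessing (1/N!) when z = 0."""
    if z == 1:
        psi = math.prod((1 - math.exp(-i * theta)) / (1 - math.exp(-theta))
                        for i in range(1, n + 1))
        return math.exp(-theta * d) / psi
    return 1.0 / math.factorial(n)
```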
To complete the model, we place a Bernoulli(1/2) prior over z_j. The MCMC inference
algorithm to estimate the distribution over ω, θ and z given the observed data is based on
earlier work [6]. We extended the algorithm to estimate z and also allow for the efficient
estimation of θ. The details of the inference procedure are described in the supplementary
materials.
The result of the inference algorithm is a probability distribution over group answers ω, of
which we take the mode as the single answer for a particular problem. Note that the inferred
group ordering does not have to correspond with an ordering of any particular individual.
The model just finds the ordering that is close to all of the observed orderings, except those
that can be better explained by a guessing process. Figure 4 illustrates the model solution
based on a single MCMC sample for the Ten Commandments and ten amendment sorting
tasks. The figure shows the distribution of distances from the inferred group ordering. Each
circle corresponds to an individual. Individuals assigned to Mallows model and the guessing
model are illustrated by filled and unfilled circles respectively. The solid and dashed red
lines show the expected distributions based on the model parameters. Note that although
Mallows model describes an exponential falloff in probability based on the distance from the
group ordering, the expected distributions also take into account the number of orderings
that exist at each distance (see [11], page 79, for a recursive algorithm to compute this).
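The count of orderings at each distance can be computed with a short recursion (a sketch using the standard insertion recursion for inversion counts; it may differ in presentation from the algorithm in [11]):

```python
def orderings_per_distance(n):
    """counts[d] = number of orderings of n items at Kendall distance d from a
    fixed reference order, for d = 0 .. n*(n-1)/2. Inserting the k-th item into
    an ordering of k-1 items creates between 0 and k-1 new pair disagreements."""
    counts = [1]  # n = 1: a single ordering at distance 0
    for k in range(2, n + 1):
        new = [0] * (len(counts) + k - 1)
        for d, c in enumerate(counts):
            for extra in range(k):
                new[d + extra] += c
        counts = new
    return counts
```

For N = 10 this gives 46 counts (distances 0 through 45) summing to 10!, with a mean distance of 22.5, the expected distance under pure guessing.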
[Figure 4. Distribution of distances from the group answer for two example problems (Ten Commandments and Ten Amendments): histograms of the number of individuals at each Kendall distance d(y_j, ω) from 0 to 45, with individuals assigned to the Mallows route (z_j = 1) and the guessing route (z_j = 0) marked separately.]
Figure 4 shows the distribution over individuals that are captured by the two routes in the
model. The individuals with a Kendall's τ below 15 tend to be assigned to the Mallows
route and all other individuals are assigned to the guessing route. Interestingly, the
distribution over distances appears to be bimodal, especially for the Ten Commandments.
The middle peak of the distribution occurs at 22, which is close to the expected value of 22.5
based on guessing. This result seems intuitively plausible -- not everybody has studied the
Ten Commandments, let alone the order in which they occur.
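The 22.5 figure is the expected Kendall distance of a uniformly random ordering of N = 10 items: each of the N(N-1)/2 = 45 pairs is inverted with probability 1/2, giving N(N-1)/4 = 22.5. A quick simulation confirms this (a sketch; names are ours):

```python
import random

def mean_guessing_distance(n, trials, seed=0):
    """Monte Carlo mean of the Kendall distance between a uniformly random
    ordering of n items and a fixed reference order."""
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)
        total += sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
    return total / trials
```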
Table 2 shows the results for the extended Mallows model across all 17 problems. The
overall performance, in terms of Kendall's τ and rank, is comparable to the Thurstonian
model and the Borda count method. Therefore, there does not appear to be any overall
advantage of this particular approach. For the Ten Commandments and ten amendment
sorting tasks, Mallows model performs the same or better than the Thurstonian model. This
suggests that for particular ordering tasks, where there is arguably no underlying analog
representation, a pure rank-ordering representation such as Mallows model might have an
advantage.
4 Conclusions
We have presented two heuristic aggregation approaches, as well as two probabilistic
approaches, for the problem of aggregating rank orders to uncover a ground truth. For each
problem, we found that there were individuals who performed better than the aggregation
models (although we cannot identify these individuals until after the fact). However, across
all problems, no person consistently outperformed the model. Therefore, for all aggregation
methods, except for the mode, we demonstrated a wisdom of crowds effect, where the
average performance of the model was better than the best individual over all problems.
We also presented two probabilistic approaches based on the classic Thurstonian and
Mallows approach. While neither of these models outperformed the simple Borda count
heuristic models, they do have some advantages over them. The Thurstonian model not only
extracts a group ordering, but also a representation of the uncertainty associated with the
ordering. This can be visualized to gain insight into mental representations and processes. In
addition, the Thurstonian and Mallows models were both extended with a guessing
component to allow for the possibility that some individuals simply do not know any of the
answers for a particular problem. Finally, although not explored here, the Bayesian approach
potentially offers advantages over heuristic approaches because the probabilistic model can
be easily expanded with additional sources of knowledge, such as confidence judgments
from participants and background knowledge about the items.
References
[1] Surowiecki, J. (2004). The Wisdom of Crowds. New York, NY: W. W. Norton & Company, Inc.
[2] Galton, F. (1907). Vox Populi. Nature, 75, 450-451.
[3] Romney, K. A., Batchelder, W. H., & Weller, S. C. (1987). Recent Applications of Cultural Consensus
Theory. American Behavioral Scientist, 31, 163-177.
[4] Dani, V., Madani, O., Pennock, D. M., Sanghai, S. K., & Galebach, B. (2006). An Empirical
Comparison of Algorithms for Aggregating Expert Predictions. In Proceedings of the Conference on
Uncertainty in Artificial Intelligence (UAI).
[5] Vul, E., & Pashler, H. (2008). Measuring the Crowd Within: Probabilistic Representations Within
Individuals. Psychological Science, 19(7), 645-647.
[6] Lebanon, G., & Lafferty, J. (2002). Cranking: Combining Rankings Using Conditional Models on
Permutations. In Proceedings of the 19th International Conference on Machine Learning.
[7] Lebanon, G., & Mao, Y. (2008). Non-Parametric Modeling of Partially Ranked Data. Journal of
Machine Learning Research, 9, 2401-2429.
[8] Gordon, K. (1924). Group Judgments in the Field of Lifted Weights. Journal of Experimental
Psychology, 7, 398-400.
[9] Thurstone, L. L. (1927). A Law of Comparative Judgement. Psychological Review, 34, 273-286.
[10] Mallows, C. L. (1957). Non-null Ranking Models. Biometrika, 44, 114-130.
[11] Marden, J. I. (1995). Analyzing and Modeling Rank Data. New York, NY: Chapman & Hall.
[12] Yao, G., & Böckenholt, U. (1999). Bayesian Estimation of Thurstonian Ranking Models Based on
the Gibbs Sampler. British Journal of Mathematical and Statistical Psychology, 52, 79-92.
Canonical Time Warping
for Alignment of Human Behavior
Fernando de la Torre
Robotics Institute
Carnegie Mellon University
[email protected]
Feng Zhou
Robotics Institute
Carnegie Mellon University
www.f-zhou.com
Abstract
Alignment of time series is an important problem to solve in many scientific disciplines. In particular, temporal alignment of two or more subjects performing
similar activities is a challenging problem due to the large temporal scale difference between human actions as well as the inter/intra subject variability. In this
paper we present canonical time warping (CTW), an extension of canonical correlation analysis (CCA) for spatio-temporal alignment of human motion between
two subjects. CTW extends previous work on CCA in two ways: (i) it combines
CCA with dynamic time warping (DTW), and (ii) it extends CCA by allowing
local spatial deformations. We show CTW?s effectiveness in three experiments:
alignment of synthetic data, alignment of motion capture data of two subjects performing similar actions, and alignment of similar facial expressions made by two
people. Our results demonstrate that CTW provides both visually and qualitatively
better alignment than state-of-the-art techniques based on DTW.
1
Introduction
Temporal alignment of time series has been an active research topic in many scientific disciplines
such as bioinformatics, text analysis, computer graphics, and computer vision. In particular, temporal alignment of human behavior is a fundamental step in many applications such as recognition
[1], temporal segmentation [2] and synthesis of human motion [3]. For instance, consider Fig. 1a,
which shows one subject walking with varying speed and different styles, and Fig. 1b, which shows
two subjects reading the same text.
Previous work on alignment of human motion has been addressed mostly in the context of recognizing human activities and synthesizing realistic motion. Typically, some models such as hidden
Markov models [4, 5, 6], weighted principal component analysis [7], independent component analysis [8, 9] or multi-linear models [10] are learned from training data and in the testing phase the
time series is aligned w.r.t. the learned dynamic model. In the context of computer vision a key
aspect for successful recognition of activities is building view-invariant representations. Junejo et
al. [1] proposed a view-invariant descriptor for actions making use of the affinity matrix between
time instances. Caspi and Irani [11] temporally aligned videos from two closely attached cameras.
Rao et al. [12, 13] aligned trajectories of two moving points using constraints from the fundamental
matrix. In the literature of computer graphics, Hsu et al. [3] proposed the iterative motion warping,
a method that finds a spatio-temporal warping between two instances of motion captured data. In the
context of data mining there have been several extensions of DTW [14] to align time series. Keogh
and Pazzani [15] used derivatives of the original signal to improve alignment with DTW. Listgarten
et al. [16] proposed continuous profile models, a probabilistic method for simultaneously aligning
and normalizing sets of time series.
A relatively unexplored problem in behavioral analysis is the alignment between the motion of the body or face in two or more subjects (e.g., Fig. 1).

Figure 1: Temporal alignment of human behavior. (a) One person walking in normal pose, slow speed, another viewpoint and exaggerated steps (clockwise). (b) Two people reading the same text.

Major challenges in solving human motion alignment problems are: (i) allowing alignment between different sets of multidimensional features (e.g., audio/video), (ii) introducing a feature selection or feature weighting mechanism to compensate for subject variability or irrelevant features, and (iii) execution rate [17]. To solve these problems, this
paper proposes canonical time warping (CTW) for accurate spatio-temporal alignment between two
behavioral time series. We pose the problem as finding the temporal alignment that maximizes the spatial correlation between two behavioral samples coming from two subjects. To accommodate for subject variability and take into account the difference in the dimensionality of the signals, CTW uses CCA as a measure of spatial alignment. To allow temporal changes, CTW incorporates DTW. CTW extends DTW by adding a feature weighting mechanism that is able to align signals of different dimensionality. CTW also extends CCA by incorporating time warping and allowing local spatial transformations.
The remainder of the paper is organized as follows. Section 2 reviews related work on dynamic time
warping and canonical correlation analysis. Section 3 describes the new CTW algorithm. Section 4
extends CTW to take into account local transformations. Section 5 provides experimental results.
2 Previous work
This section describes previous work on canonical correlation analysis and dynamic time warping.
2.1 Canonical correlation analysis
Canonical correlation analysis (CCA) [18] is a technique to extract common features from a pair of multivariate data. CCA identifies relationships between two sets of variables by finding the linear combinations of the variables in the first set¹ ($X \in \mathbb{R}^{d_x \times n}$) that are most correlated with the linear combinations of the variables in the second set ($Y \in \mathbb{R}^{d_y \times n}$). Assuming zero-mean data, CCA finds a combination of the original variables that minimizes:

$$J_{cca}(V_x, V_y) = \|V_x^T X - V_y^T Y\|_F^2 \quad \text{s.t.} \quad V_x^T X X^T V_x = V_y^T Y Y^T V_y = I_b, \qquad (1)$$

where $V_x \in \mathbb{R}^{d_x \times b}$ is the projection matrix for $X$ (similarly for $V_y$). The pair of canonical variates $(v_x^T X, v_y^T Y)$ is uncorrelated with other canonical variates of lower order. Each successive canonical variate pair achieves the maximum correlation orthogonal to the preceding pairs. Eq. 1 has a closed-form solution in terms of a generalized eigenvalue problem. See [19] for a unification of several component analysis methods and a review of numerical techniques to efficiently solve generalized eigenvalue problems.
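For illustration, the generalized eigenvalue formulation of Eq. 1 can be sketched numerically as follows (our own minimal version, not code from the paper; the ridge term `reg` is an assumption added for numerical stability on finite data):

```python
import numpy as np
from scipy.linalg import eigh

def cca(X, Y, b=1, reg=1e-8):
    """Solve Eq. 1 via its generalized eigenvalue problem (zero-mean data assumed)."""
    dx, dy = X.shape[0], Y.shape[0]
    A = np.zeros((dx + dy, dx + dy))
    A[:dx, dx:] = X @ Y.T                       # cross-covariance blocks
    A[dx:, :dx] = Y @ X.T
    B = np.zeros_like(A)
    B[:dx, :dx] = X @ X.T + reg * np.eye(dx)    # auto-covariance blocks
    B[dx:, dx:] = Y @ Y.T + reg * np.eye(dy)
    w, V = eigh(A, B)                           # ascending generalized eigenvalues
    V = V[:, np.argsort(-w)[:b]]                # keep the b leading eigenvectors
    return V[:dx], V[dx:]
```

For the leading eigenvector pair, the generalized eigenvalue equals the canonical correlation between the projected variates.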
¹Bold capital letters denote a matrix $X$; bold lower-case letters a column vector $x$. $x_i$ represents the $i$-th column of the matrix $X$, and $x_{ij}$ denotes the scalar in the $i$-th row and $j$-th column of $X$. All non-bold letters represent scalars. $1_{m \times n}, 0_{m \times n} \in \mathbb{R}^{m \times n}$ are matrices of ones and zeros, and $I_n \in \mathbb{R}^{n \times n}$ is an identity matrix. $\|x\| = \sqrt{x^T x}$ denotes the Euclidean distance, and $\|X\|_F^2 = \mathrm{Tr}(X^T X)$ designates the Frobenius norm. $X \circ Y$ and $X \otimes Y$ are the Hadamard and Kronecker products of matrices. $\mathrm{vec}(X)$ denotes the vectorization of matrix $X$. $\{i : j\}$ lists the integers $\{i, i+1, \cdots, j-1, j\}$.

In computer vision, CCA has been used for matching sets of images in problems such as activity recognition from video [20] and activity correlation from cameras [21]. Recently, Fischer et al. [22]
Figure 2: Dynamic time warping. (a) 1-D time series ($n_x = 7$ and $n_y = 9$). (b) DTW alignment. (c) Binary distance matrix. (d) Policy function at each node, where the three arrow symbols denote the policy $\pi(p_t) = [1, 0]^T, [1, 1]^T, [0, 1]^T$, respectively. The optimal alignment path is denoted in bold.
proposed an extension of CCA with parameterized warping functions to align protein expressions.
The learned warping function is a linear combination of hyperbolic tangent functions with nonnegative coefficients, ensuring monotonicity. Unlike our method, the warping function is unable to
deal with feature weighting.
2.2 Dynamic time warping
Given two time series, $X = [x_1, x_2, \cdots, x_{n_x}] \in \mathbb{R}^{d \times n_x}$ and $Y = [y_1, y_2, \cdots, y_{n_y}] \in \mathbb{R}^{d \times n_y}$, dynamic time warping [14] is a technique to optimally align the samples of $X$ and $Y$ such that the following sum-of-squares cost is minimized:

$$J_{dtw}(P) = \sum_{t=1}^{m} \|x_{p_t^x} - y_{p_t^y}\|^2, \qquad (2)$$

where $m$ is the number of indexes (or steps) needed to align both signals. The correspondence matrix $P$ can be parameterized by a pair of path vectors, $P = [p^x, p^y]^T \in \mathbb{R}^{2 \times m}$, in which $p^x \in \{1:n_x\}^{m \times 1}$ and $p^y \in \{1:n_y\}^{m \times 1}$ denote the composition of alignment in frames. For instance, the $i$-th frame in $X$ and the $j$-th frame in $Y$ are aligned iff there exists $p_t = [p_t^x, p_t^y]^T = [i, j]^T$ for some $t$. $P$ has to satisfy three additional constraints: boundary conditions ($p_1 = [1, 1]^T$ and $p_m = [n_x, n_y]^T$), continuity ($0 \le p_t - p_{t-1} \le 1$) and monotonicity ($t_1 \ge t_2 \Rightarrow p_{t_1} - p_{t_2} \ge 0$).

Although the number of possible ways to align $X$ and $Y$ is exponential in $n_x$ and $n_y$, dynamic programming [23] offers an efficient $O(n_x n_y)$ approach to minimize $J_{dtw}$ using Bellman's equation:

$$L^{\pi^*}(p_t) = \min_{\pi(p_t)} \|x_{p_t^x} - y_{p_t^y}\|^2 + L^{\pi^*}(p_{t+1}), \qquad (3)$$

where the cost-to-go value function, $L^{\pi^*}(p_t)$, represents the remaining cost, starting at the $t$-th step, incurred by following the optimal policy $\pi^*$. The policy function, $\pi : \{1:n_x\} \times \{1:n_y\} \to \{[1,0]^T, [0,1]^T, [1,1]^T\}$, defines the deterministic transition between consecutive steps, $p_{t+1} = p_t + \pi(p_t)$. Once the policy queue is known, the alignment steps can be recursively constructed from the starting point, $p_1 = [1,1]^T$. Fig. 2 shows an example of DTW aligning two 1-D time series.
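A minimal illustration of this dynamic program (our own sketch, not the authors' implementation), using an accumulated-cost table and backtracking:

```python
import numpy as np

def dtw(X, Y):
    """DTW between the columns of X (d x nx) and Y (d x ny).
    Returns the optimal cost and the alignment path as 0-based index pairs."""
    nx, ny = X.shape[1], Y.shape[1]
    D = ((X[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)  # pairwise squared distances
    C = np.full((nx + 1, ny + 1), np.inf)                   # accumulated-cost table
    C[0, 0] = 0.0
    for i in range(1, nx + 1):
        for j in range(1, ny + 1):                          # Bellman recursion (Eq. 3)
            C[i, j] = D[i - 1, j - 1] + min(C[i - 1, j - 1], C[i - 1, j], C[i, j - 1])
    path, (i, j) = [(nx - 1, ny - 1)], (nx, ny)
    while (i, j) != (1, 1):                                 # backtrack the optimal policy
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)], key=lambda s: C[s])
        path.append((i - 1, j - 1))
    return C[nx, ny], path[::-1]
```

The returned path respects the boundary, continuity and monotonicity constraints by construction.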
3 Canonical time warping (CTW)
This section describes the energy function and optimization strategies for CTW.
3.1 Energy function for CTW
In order to have a compact and compressible energy function for CTW, it is important to notice that Eq. 2 can be rewritten as:

$$J_{dtw}(W_x, W_y) = \sum_{i=1}^{n_x}\sum_{j=1}^{n_y} w_i^{x\,T} w_j^{y}\, \|x_i - y_j\|^2 = \|X W_x^T - Y W_y^T\|_F^2, \qquad (4)$$
where $W_x \in \{0, 1\}^{m \times n_x}$, $W_y \in \{0, 1\}^{m \times n_y}$ are binary selection matrices that need to be inferred to align $X$ and $Y$. In Eq. 4 the matrices $W_x$ and $W_y$ encode the alignment path. For instance, $w^x_{t, p_t^x} = w^y_{t, p_t^y} = 1$ assigns correspondence between the $p_t^x$-th frame in $X$ and the $p_t^y$-th frame in $Y$. For convenience, we denote $D_x = W_x^T W_x$, $D_y = W_y^T W_y$ and $W = W_x^T W_y$. Observe that Eq. 4 is very similar to CCA's objective (Eq. 1): CCA applies a linear transformation to the rows (features), while DTW applies binary transformations to the columns (time).
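As a quick sanity check (our own snippet, with an arbitrarily chosen small example), the equivalence between the sum in Eq. 2 and the Frobenius form in Eq. 4 can be verified numerically by building the selection matrices from an alignment path:

```python
import numpy as np

rng = np.random.default_rng(0)
X, Y = rng.standard_normal((3, 4)), rng.standard_normal((3, 5))
path = [(0, 0), (1, 1), (1, 2), (2, 3), (3, 4)]        # an arbitrary valid DTW path
m = len(path)
Wx, Wy = np.zeros((m, 4)), np.zeros((m, 5))
for t, (i, j) in enumerate(path):                      # w^x_{t,p^x_t} = w^y_{t,p^y_t} = 1
    Wx[t, i], Wy[t, j] = 1.0, 1.0
cost_sum = sum(((X[:, i] - Y[:, j]) ** 2).sum() for i, j in path)   # Eq. 2
cost_frob = np.linalg.norm(X @ Wx.T - Y @ Wy.T, "fro") ** 2         # Eq. 4
assert np.isclose(cost_sum, cost_frob)
```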
In order to accommodate for differences in style and subject variability, add a feature selection mechanism, and reduce the dimensionality of the signals, CTW adds a linear transformation ($V_x^T$, $V_y^T$) (as CCA) to the least-squares form of DTW (Eq. 4). Moreover, this transformation allows aligning temporal signals of different dimensionality (e.g., video and motion capture). CTW combines DTW and CCA by minimizing:

$$J_{ctw}(W_x, W_y, V_x, V_y) = \|V_x^T X W_x^T - V_y^T Y W_y^T\|_F^2, \qquad (5)$$

where $V_x \in \mathbb{R}^{d_x \times b}$, $V_y \in \mathbb{R}^{d_y \times b}$, $b \le \min(d_x, d_y)$, parameterize the spatial warping by projecting the sequences into the same coordinate system, while $W_x$ and $W_y$ warp the signals in time to achieve optimum temporal alignment. Similar to CCA, to make CTW invariant to translation, rotation and scaling, we impose the following constraints: (i) $X W_x^T 1_m = 0_{d_x}$, $Y W_y^T 1_m = 0_{d_y}$; (ii) $V_x^T X D_x X^T V_x = V_y^T Y D_y Y^T V_y = I_b$; and (iii) $V_x^T X W Y^T V_y$ must be a diagonal matrix. Eq. 5 is the main contribution of this paper: CTW is a direct and clean extension of CCA and DTW to align two signals $X$ and $Y$ in space and time. It extends previous work on CCA by adding temporal alignment, and on DTW by allowing a feature selection and dimensionality reduction mechanism for aligning signals of different dimensions.
3.2 Optimization for CTW

Algorithm 1: Canonical Time Warping
  input : $X$, $Y$
  output: $V_x$, $V_y$, $W_x$, $W_y$
  begin
    Initialize $V_x = I_{d_x}$, $V_y = I_{d_y}$
    repeat
      Use dynamic programming to compute $W_x$, $W_y$ aligning the sequences $V_x^T X$, $V_y^T Y$
      Set the columns of $V^T = [V_x^T, V_y^T]$ to be the leading $b$ generalized eigenvectors of:
      $$\begin{bmatrix} 0 & X W Y^T \\ Y W^T X^T & 0 \end{bmatrix} V = \begin{bmatrix} X D_x X^T & 0 \\ 0 & Y D_y Y^T \end{bmatrix} V \Lambda$$
    until $J_{ctw}$ converges
  end
Optimizing $J_{ctw}$ is a non-convex optimization problem with respect to the alignment matrices ($W_x$, $W_y$) and the projection matrices ($V_x$, $V_y$). We alternate between solving for $W_x$, $W_y$ using DTW and optimally computing the spatial projections using CCA. These steps monotonically decrease $J_{ctw}$, and since the function is bounded below it will converge to a critical point. Alg. 1 illustrates the optimization process (e.g., Fig. 3e). The algorithm starts by initializing $V_x$ and $V_y$ with identity matrices. Alternatively, PCA can be applied independently to each set and used as an initial estimate of $V_x$ and $V_y$ if $d_x \neq d_y$. In the case of high-dimensional data, the generalized eigenvalue problem is solved by regularizing the covariance matrices with an added scaled identity matrix. The dimension $b$ is selected to preserve 90% of the total correlation. We consider the algorithm to have converged when the difference between two consecutive values of $J_{ctw}$ is small.
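The alternation in Alg. 1 can be sketched compactly as follows (an illustrative implementation of ours, not the authors' code; it assumes both signals have the same dimension so that the identity initialization applies, the ridge term `reg` stands in for the covariance regularization mentioned above, and `eigh` normalizes the eigenvectors jointly rather than per-block as in constraint (ii)):

```python
import numpy as np
from scipy.linalg import eigh

def dtw_path(X, Y):
    """Compact DTW (cf. Sec. 2.2) returning the aligned column indices of X and Y."""
    nx, ny = X.shape[1], Y.shape[1]
    D = ((X[:, :, None] - Y[:, None, :]) ** 2).sum(axis=0)
    C = np.full((nx + 1, ny + 1), np.inf)
    C[0, 0] = 0.0
    for i in range(1, nx + 1):
        for j in range(1, ny + 1):
            C[i, j] = D[i - 1, j - 1] + min(C[i - 1, j - 1], C[i - 1, j], C[i, j - 1])
    px, py, i, j = [nx - 1], [ny - 1], nx, ny
    while (i, j) != (1, 1):
        i, j = min([(i - 1, j - 1), (i - 1, j), (i, j - 1)], key=lambda s: C[s])
        px.append(i - 1)
        py.append(j - 1)
    return px[::-1], py[::-1]

def ctw(X, Y, b=1, iters=20, reg=1e-6):
    """Alternate the temporal (DTW) and spatial (CCA-like) steps of Alg. 1."""
    assert X.shape[0] == Y.shape[0], "identity initialization assumes d_x = d_y"
    dx, dy = X.shape[0], Y.shape[0]
    Vx, Vy = np.eye(dx), np.eye(dy)
    for _ in range(iters):
        px, py = dtw_path(Vx.T @ X, Vy.T @ Y)       # temporal step
        Xs, Ys = X[:, px], Y[:, py]                 # X Wx^T and Y Wy^T
        A = np.zeros((dx + dy, dx + dy))
        A[:dx, dx:], A[dx:, :dx] = Xs @ Ys.T, Ys @ Xs.T
        B = np.zeros_like(A)
        B[:dx, :dx] = Xs @ Xs.T + reg * np.eye(dx)  # regularized covariances
        B[dx:, dx:] = Ys @ Ys.T + reg * np.eye(dy)
        w, V = eigh(A, B)                           # spatial step: generalized eigenproblem
        V = V[:, np.argsort(-w)[:b]]                # leading b generalized eigenvectors
        Vx, Vy = V[:dx], V[dx:]
    return Vx, Vy, px, py
```

On two identical signals this sketch recovers the diagonal alignment path and perfectly correlated projections, as expected.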
4 Local canonical time warping (LCTW)
In the previous section we have illustrated how CTW can align in space and time two time series of
different dimensionality. However, there are many situations (e.g., aligning long sequences) where
a global transformation of the whole time series is not accurate. For these cases, local models
have been shown to provide better performance [3, 24, 25]. This section extends CTW by allowing
multiple local spatial deformations.
4.1 Energy function for LCTW
Let us assume that the spatial transformation for each frame in $X$ and $Y$ can be modeled as a linear combination of $k_x$ or $k_y$ bases. Let $V_x = [V_1^{x\,T}, \cdots, V_{k_x}^{x\,T}]^T \in \mathbb{R}^{k_x d_x \times b}$, $V_y = [V_1^{y\,T}, \cdots, V_{k_y}^{y\,T}]^T \in \mathbb{R}^{k_y d_y \times b}$ and $b \le \min(k_x d_x, k_y d_y)$. LCTW allows for a more flexible spatial warping by minimizing:

$$J_{lctw}(W_x, W_y, V_x, V_y, R_x, R_y) = \sum_{i=1}^{n_x}\sum_{j=1}^{n_y} w_i^{x\,T} w_j^{y} \Big\| \sum_{c_x=1}^{k_x} r^x_{i c_x} V_{c_x}^{x\,T} x_i - \sum_{c_y=1}^{k_y} r^y_{j c_y} V_{c_y}^{y\,T} y_j \Big\|^2 + \sum_{c_x=1}^{k_x} \|F_x r^x_{c_x}\|^2 + \sum_{c_y=1}^{k_y} \|F_y r^y_{c_y}\|^2$$
$$= \big\| V_x^T \big[(1_{k_x} \otimes X) \circ (R_x^T \otimes 1_{d_x})\big] W_x^T - V_y^T \big[(1_{k_y} \otimes Y) \circ (R_y^T \otimes 1_{d_y})\big] W_y^T \big\|_F^2 + \|F_x R_x\|_F^2 + \|F_y R_y\|_F^2, \qquad (6)$$

where $R_x \in \mathbb{R}^{n_x \times k_x}$, $R_y \in \mathbb{R}^{n_y \times k_y}$ are the weighting matrices; $r^x_{i c_x}$ denotes the coefficient (or weight) of the $c_x$-th basis for the $i$-th frame of $X$ (similarly for $r^y_{j c_y}$). We further constrain the weights to be positive (i.e., $R_x, R_y \ge 0$) and the sum of weights to be one for each frame (i.e., $R_x 1_{k_x} = 1_{n_x}$, $R_y 1_{k_y} = 1_{n_y}$). The last two regularization terms, in which $F_x \in \mathbb{R}^{n_x \times n_x}$, $F_y \in \mathbb{R}^{n_y \times n_y}$ are 1st-order differential operators acting on $r^x_{c_x} \in \mathbb{R}^{n_x \times 1}$, $r^y_{c_y} \in \mathbb{R}^{n_y \times 1}$, encourage smooth solutions over time. Observe that $J_{ctw}$ is a special case of $J_{lctw}$ when $k_x = k_y = 1$.
4.2 Optimization for LCTW

Algorithm 2: Local Canonical Time Warping
  input : $X$, $Y$
  output: $W_x$, $W_y$, $V_x$, $V_y$, $R_x$, $R_y$
  begin
    Initialize $V_x = 1_{k_x} \otimes I_{d_x}$, $V_y = 1_{k_y} \otimes I_{d_y}$,
      $r^x_{i c_x} = 1$ for $\lfloor \tfrac{(c_x-1) n_x}{k_x} \rfloor < i \le \lfloor \tfrac{c_x n_x}{k_x} \rfloor$, $\quad r^y_{j c_y} = 1$ for $\lfloor \tfrac{(c_y-1) n_y}{k_y} \rfloor < j \le \lfloor \tfrac{c_y n_y}{k_y} \rfloor$
    repeat
      Denote $Z_x = (1_{k_x} \otimes X) \circ (R_x^T \otimes 1_{d_x})$, $Z_y = (1_{k_y} \otimes Y) \circ (R_y^T \otimes 1_{d_y})$,
        $Q_x = V_x^T (I_{k_x} \otimes X)$, $Q_y = V_y^T (I_{k_y} \otimes Y)$
      Use dynamic programming to compute $W_x$, $W_y$ between the sequences $V_x^T Z_x$, $V_y^T Z_y$
      Set the columns of $V^T = [V_x^T, V_y^T]$ to be the leading $b$ generalized eigenvectors of
      $$\begin{bmatrix} 0 & Z_x W Z_y^T \\ Z_y W^T Z_x^T & 0 \end{bmatrix} V = \begin{bmatrix} Z_x D_x Z_x^T & 0 \\ 0 & Z_y D_y Z_y^T \end{bmatrix} V \Lambda$$
      Set $r = \mathrm{vec}([R_x, R_y])$ to be the solution of the quadratic programming problem
      $$\min_r\; r^T \begin{bmatrix} (1_{k_x \times k_x} \otimes D_x) \circ Q_x^T Q_x + I_{k_x} \otimes F_x^T F_x & -(1_{k_x \times k_y} \otimes W) \circ Q_x^T Q_y \\ -(1_{k_y \times k_x} \otimes W^T) \circ Q_y^T Q_x & (1_{k_y \times k_y} \otimes D_y) \circ Q_y^T Q_y + I_{k_y} \otimes F_y^T F_y \end{bmatrix} r$$
      $$\text{s.t.}\quad \begin{bmatrix} 1_{k_x}^T \otimes I_{n_x} & 0 \\ 0 & 1_{k_y}^T \otimes I_{n_y} \end{bmatrix} r = 1_{n_x+n_y}, \qquad r \ge 0_{n_x k_x + n_y k_y}$$
    until $J_{lctw}$ converges
  end
As in the case of CTW, we use an alternating scheme for optimizing $J_{lctw}$, which is summarized in Alg. 2. In the initialization, we assume that each time series is divided into $k_x$ or $k_y$ equal parts, with identity matrices as the starting values for $V^x_{c_x}$, $V^y_{c_y}$ and block-structured matrices for $R_x$, $R_y$.

The main difference between the alternating schemes of Alg. 1 and Alg. 2 is that the alternation step is no longer unique. For instance, when fixing $V_x$, $V_y$, one can optimize either $W_x$, $W_y$ or $R_x$, $R_y$. Consider a simple example of warping $\sin(t_1)$ towards $\sin(t_2)$: one could shift the first sequence along the time axis by $\Delta t = t_2 - t_1$, or apply the linear transformation $a_{t_1}\sin(t_1) + b_{t_1}$, where $a_{t_1} = \cos(t_2 - t_1)$ and $b_{t_1} = \cos(t_1)\sin(t_2 - t_1)$. In order to better control the trade-off between time warping and spatial transformation, we propose a stochastic selection process. Let us denote by $p_{w|v}$ the conditional probability of optimizing $W$ when fixing $V$. Given the prior probabilities $[p_w, p_v, p_r]$, we can derive the conditional probabilities using Bayes' theorem and the fact that $[p_{r|w}, p_{r|v}, p_{v|r}] = 1 - [p_{v|w}, p_{w|v}, p_{w|r}]$: $[p_{v|w}, p_{w|v}, p_{w|r}]^T = A^{-1} b$, where

$$A = \begin{bmatrix} p_w & -p_v & 0 \\ p_w & 0 & p_r \\ 0 & -p_v & p_r \end{bmatrix}, \qquad b = \begin{bmatrix} 0 \\ p_w \\ -p_v + p_r \end{bmatrix}.$$

Fig. 3f (lower-right corner) shows the optimization strategy for $p_w = .5$, $p_v = .3$, $p_r = .2$, where the time warping process is optimized most often.
5 Experiments
This section demonstrates the benefits of CTW and LCTW against state-of-the-art DTW approaches
to align synthetic data, motion capture data of two subjects performing similar actions, and similar
facial expressions made by two people.
5.1 Synthetic data
In the first experiment we synthetically generated two spatio-temporal signals (3-D in space and 1-D
in time) to evaluate the performance of CTW and LCTW. The first two spatial dimensions and the
time dimension are generated as follows: X = UTx ZMTx and Y = UTy ZMTy , where Z ? R2?m is
a curve in two dimensions (Fig. 3a). Ux , Uy ? R2?2 are randomly generated affine transformation
matrices for the spatial warping and Mx ? Rnx ?m , My ? Rny ?m , m ? max(nx , ny ) are randomly
generated matrices for time warping2 . The third spatial dimension is generated by adding a (1 ? nx )
or (1 ? ny ) extra row to X and Y respectively, with zero-mean Gaussian noise (see Fig. 3a-b).
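For concreteness, the column-replication scheme described in footnote 2 for building a random time-warping matrix can be sketched as follows (our own minimal implementation; the function name and `rng` argument are illustrative):

```python
import numpy as np

def random_time_warp(n, m, rng):
    """Build M in R^{n x m}: start from I_n, replicate random columns, normalize rows."""
    M = np.eye(n)
    for _ in range(m - n):
        j = rng.integers(M.shape[1])
        M = np.insert(M, j, M[:, j], axis=1)      # replicate a column next to itself
    return M / M.sum(axis=1, keepdims=True)       # each row sums to one (M 1_m = 1_n)
```

Because each duplicated column is placed next to its original, the monotone temporal ordering of the frames is preserved, and the row normalization makes each warped frame an interpolation of the latent samples.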
We compared the performance of CTW and LCTW against three other methods: (i) dynamic time warping (DTW) [14], (ii) derivative dynamic time warping (DDTW) [15] and (iii) iterative motion warping (IMW) [3]. Recall that in the case of synthetic data we know the ground truth alignment matrix $W_{truth} = M_x M_y^T$. The error between the ground truth and a given alignment $W_{alg}$ is computed as the area enclosed between both paths (see Fig. 3g).

Fig. 3c-f show the spatial warping estimated by each algorithm. DDTW (Fig. 3c) cannot deal with this example because the feature derivatives do not capture the structure of the sequence well. IMW (Fig. 3d) warps one sequence towards the other by translating and re-scaling each frame in each dimension. Fig. 3h shows the testing error (space and time) for 100 newly generated time series. As can be observed, CTW and LCTW obtain the best performance. IMW has more parameters ($O(dn)$) than CTW ($O(db)$) and LCTW ($O(kdb + kn)$), and hence IMW is more prone to overfitting. IMW tries to fit the noisy dimension (the 3rd spatial component), biasing the alignment in time (Fig. 3g), whereas CTW and LCTW have a feature selection mechanism which effectively cancels out the third dimension. Observe that the null space of the projection matrices in CTW is $v_x = [.002, .001, -.067]^T$, $v_y = [-.002, -.001, -.071]^T$.
5.2 Motion capture data
In the second experiment we apply CTW and LCTW to align human motion with similar behavior.
The motion capture data is taken from the CMU-Multimodal Activity Database [26]. We selected
a pair of sub-sequences from subject 1 and subject 3 cooking brownies. Typically, each sequence
contains 500-1000 frames. For each instance we computed the quaternions for the 20 joints resulting
in a 60 dimensional feature vector that describes the body configuration. CTW and LCTW are
initialized as described in previous sections and optimized until convergence. The parameters of
LCTW are manually set to kx = 3, ky = 3 and pw = .5, pv = .3, pr = .2.
²The generation of the time transformation matrix $M_x$ (similarly for $M_y$) is initialized by setting $M_x = I_{n_x}$. Then, we randomly pick and replicate $m - n_x$ columns of $M_x$. We normalize each row, $M_x 1_m = 1_{n_x}$, so that each new frame is an interpolation of the $z_i$.
Figure 3: Example with synthetic data. Time series are generated by (a) spatio-temporal transformation of a 2-D latent sequence and (b) adding Gaussian noise in the 3rd dimension. The result of space warping is computed by (c) derivative dynamic time warping (DDTW), (d) iterative motion warping (IMW), (e) canonical time warping (CTW) and (f) local canonical time warping (LCTW). The energy function and the order of optimizing the parameters for CTW and LCTW are shown in the top-right and lower-right corners of the graphs. (g) Comparison of the alignment results for several methods. (h) Mean and variance of the alignment error.
Figure 4: Example of motion capture data alignment. (a) PCA. (b) CTW. (c) LCTW. (d) Alignment path. (e) Motion capture data: 1st row, subject one; remaining rows, aligned subject two.
Fig. 4 shows the alignment results for the action of opening a cabinet. The projection on the principal
components for both sequences can be seen in Fig. 4a. CTW and LCTW project the sequences in
a low dimensional space that maximizes the correlation (Fig. 4b-c). Fig. 4d shows the alignment
path. In this case, we do not have ground truth data, and we evaluated the results visually. The first
row of Fig. 4e shows few instances of the first subject, and the last three rows the alignment of the
third subject for DTW, CTW and LCTW. Observe that CTW and LCTW achieve better temporal
alignment.
5.3 Facial expression data
In this experiment we tested the ability of CTW and LCTW to align facial expressions. We took
29 subjects from the RU-FACS database [27] which consists of interviews with men and women
of varying ethnicity. The action units (AUs) in this database have been manually coded, and we
selected AU12 (smiling) to run our experiments. Each event of AU12 is coded with an onset (start),
peak and offset (end). We used person-specific AAMs [28] to track 66 landmark points on the face. For the alignment of AU12 we used only the 18 landmarks corresponding to the outline of the mouth, so for each frame we have a vector ($\mathbb{R}^{36 \times 1}$) of $(x, y)$ coordinates.
We took subjects 14 and 30 and ran CTW and LCTW on the segments where AU12 was coded.
The parameters of LCTW are manually set to kx = 3, ky = 3 and pw = .5, pv = .3, pr = .2. Fig. 5
shows the results of the alignment. Fig. 5b-c shows that the low dimensional projection obtained
with CTW and LCTW has better alignment than DTW in Fig. 5a. Fig. 5d shows the position of
the peak frame as the intersection of the two dotted lines. As we can observe from Fig. 5d, the
alignment paths found by CTW and LCTW are closer to the manually labeled peak than the ones
found by DTW. This shows that CTW and LCTW provide better alignment because the manually
labeled peaks in both sequences should be aligned. Fig. 5e shows several frames illustrating the
alignment.
Figure 5: Example of facial expression alignment. (a) PCA. (b) CTW. (c) LCTW. (d) Alignment path. (e) Frames from an AU12 event. The AU peaks are indicated by arrows.
6 Conclusions
In this paper we proposed CTW and LCTW for spatio-temporal alignment of time series. CTW
integrates the benefits of DTW and CCA into a clean and simple formulation. CTW extends DTW by
adding a feature selection mechanism and enables alignment of signals with different dimensionality.
CTW extends CCA by adding temporal alignment and allowing temporal local projections. We
illustrated the benefits of CTW for alignment of motion capture data and facial expressions.
7 Acknowledgements
This material is based upon work partially supported by the National Science Foundation under
Grant No. EEC-0540865.
References
[1] I. N. Junejo, E. Dexter, I. Laptev, and P. Pérez. Cross-view action recognition from temporal self-similarities. In ECCV, pages 293-306, 2008.
[2] F. Zhou, F. de la Torre, and J. K. Hodgins. Aligned cluster analysis for temporal segmentation of human motion. In FGR, pages 1-7, 2008.
[3] E. Hsu, K. Pulli, and J. Popovic. Style translation for human motion. In SIGGRAPH, 2005.
[4] M. Brand, N. Oliver, and A. Pentland. Coupled hidden Markov models for complex action recognition. In CVPR, pages 994-999, 1997.
[5] M. Brand and A. Hertzmann. Style machines. In SIGGRAPH, pages 183-192, 2000.
[6] G. W. Taylor, G. E. Hinton, and S. T. Roweis. Modeling human motion using binary latent variables. In NIPS, volume 19, page 1345, 2007.
[7] A. Heloir, N. Courty, S. Gibet, and F. Multon. Temporal alignment of communicative gesture sequences. J. Visual. Comp. Animat., 17(3-4):347-357, 2006.
[8] A. Shapiro, Y. Cao, and P. Faloutsos. Style components. In Graphics Interface, pages 33-39, 2006.
[9] G. Liu, Z. Pan, and Z. Lin. Style subspaces for character animation. J. Visual. Comp. Animat., 19(3-4):199-209, 2008.
[10] A. M. Elgammal and C.-S. Lee. Separating style and content on a nonlinear manifold. In CVPR, 2004.
[11] Y. Caspi and M. Irani. Aligning non-overlapping sequences. Int. J. Comput. Vis., 48(1):39-51, 2002.
[12] C. Rao, A. Gritai, M. Shah, and T. Fathima Syeda-Mahmood. View-invariant alignment and matching of video sequences. In ICCV, pages 939-945, 2003.
[13] A. Gritai, Y. Sheikh, C. Rao, and M. Shah. Matching trajectories of anatomical landmarks under viewpoint, anthropometric and temporal transforms. Int. J. Comput. Vis., 2009.
[14] L. Rabiner and B.-H. Juang. Fundamentals of Speech Recognition. Prentice Hall, 1993.
[15] E. J. Keogh and M. J. Pazzani. Derivative dynamic time warping. In SIAM ICDM, 2001.
[16] J. Listgarten, R. M. Neal, S. T. Roweis, and A. Emili. Multiple alignment of continuous time series. In NIPS, pages 817-824, 2005.
[17] Y. Sheikh, M. Sheikh, and M. Shah. Exploring the space of a human action. In ICCV, 2005.
[18] T. W. Anderson. An Introduction to Multivariate Statistical Analysis. Wiley-Interscience, 2003.
[19] F. de la Torre. A unification of component analysis methods. Handbook of Pattern Recognition and Computer Vision, 2009.
[20] T. K. Kim and R. Cipolla. Canonical correlation analysis of video volume tensors for action categorization and detection. IEEE Trans. Pattern Anal. Mach. Intell., 31:1415-1428, 2009.
[21] C. C. Loy, T. Xiang, and S. Gong. Multi-camera activity correlation analysis. In CVPR, 2009.
[22] B. Fischer, V. Roth, and J. Buhmann. Time-series alignment by non-negative multiple generalized canonical correlation analysis. BMC Bioinformatics, 8(10), 2007.
[23] D. P. Bertsekas. Dynamic Programming and Optimal Control. 1995.
[24] Z. Ghahramani and G. E. Hinton. The EM algorithm for mixtures of factor analyzers. University of Toronto Tech. Rep., 1997.
[25] J. J. Verbeek, S. T. Roweis, and N. A. Vlassis. Non-linear CCA and PCA by alignment of local models. In NIPS, 2003.
[26] F. de la Torre, J. K. Hodgins, J. Montano, S. Valcarcel, A. Bargteil, X. Martin, J. Macey, A. Collado, and P. Beltran. Guide to the Carnegie Mellon University Multimodal Activity (CMU-MMAC) Database. Carnegie Mellon University Tech. Rep., 2009.
[27] M. S. Bartlett, G. C. Littlewort, M. G. Frank, C. Lainscsek, I. Fasel, and J. R. Movellan. Automatic recognition of facial actions in spontaneous expressions. J. Multimed., 1(6):22-35, 2006.
[28] I. Matthews and S. Baker. Active appearance models revisited. Int. J. Comput. Vis., 60(2):135-164, 2004.
Nonparametric Bayesian Texture Learning and
Synthesis
Long (Leo) Zhu1   Yuanhao Chen2   William Freeman1   Antonio Torralba1
1 CSAIL, MIT   {leozhu, billf, antonio}@csail.mit.edu
2 Department of Statistics, UCLA   [email protected]
Abstract
We present a nonparametric Bayesian method for texture learning and synthesis.
A texture image is represented by a 2D Hidden Markov Model (2DHMM) where
the hidden states correspond to the cluster labeling of textons and the transition
matrix encodes their spatial layout (the compatibility between adjacent textons).
The 2DHMM is coupled with the Hierarchical Dirichlet Process (HDP), which allows the number of textons and the complexity of the transition matrix to grow as the input texture becomes irregular. The HDP makes use of a Dirichlet process prior which favors regular textures by penalizing model complexity. This framework (HDP-2DHMM) learns the texton vocabulary and its spatial layout jointly and automatically. The HDP-2DHMM results in a compact representation of textures which allows fast texture synthesis with rendering quality comparable to the state-of-the-art patch-based rendering methods. We also show that the HDP-2DHMM can be applied to perform image segmentation and synthesis. The preliminary results suggest that the HDP-2DHMM is generally useful for further applications in low-level vision problems.
1 Introduction
Texture learning and synthesis are important tasks in computer vision and graphics. Recent attempts
can be categorized into two different styles. The first style emphasizes the modeling and understanding problems and develops statistical models [1, 2] which are capable of representing texture using
textons and their spatial layout. But the learning is rather sensitive to the parameter settings and the
rendering quality and speed is still not satisfactory. The second style relies on patch-based rendering
techniques [3, 4] which focus on rendering quality and speed, but forego the semantic understanding
and modeling of texture.
This paper aims at texture understanding and modeling with fast synthesis and high rendering quality. Our strategy is to augment the patch-based rendering method [3] with nonparametric Bayesian
modeling and statistical learning. We represent a texture image by a 2D Hidden Markov Model
(2D-HMM) (see figure (1)) where the hidden states correspond to the cluster labeling of textons and
the transition matrix encodes the texton spatial layout (the compatibility between adjacent textons).
The 2D-HMM is coupled with the Hierarchical Dirichlet process (HDP) [5, 6] which allows the
number of textons (i.e. hidden states) and the complexity of the transition matrix to grow as more
training data is available or the randomness of the input texture becomes large. The Dirichlet process prior penalizes the model complexity to favor reusing clusters and transitions and thus regular
texture which can be represented by compact models. This framework (HDP-2DHMM) discovers
the semantic meaning of texture in an explicit way that the texton vocabulary and their spatial layout
are learnt jointly and automatically (the number of textons is fully determined by HDP-2DHMM).
Once the texton vocabulary and the transition matrix are learnt, the synthesis process samples the
latent texton labeling map according to the probability encoded in the transition matrix. The final
image is then generated by selecting the image patches based on the sampled texton labeling map.
Figure 1: The flow chart of texture learning and synthesis. The colored rectangles correspond to the index (labeling) of textons which are represented by image patches. The texton vocabulary shows the correspondence between the color (states) and the examples of image patches. The transition matrices show the probability (indicated by the intensity) of generating a new state (coded by the color of the top left corner rectangle), given the states of the left and upper neighbor nodes (coded by the top and left-most rectangles). The inferred texton map shows the state assignments of the input texture. The top-right panel shows the sampled texton map according to the transition matrices. The last panel shows the synthesized texture using image quilting according to the correspondence between the sampled texton map and the texton vocabulary.
Here, image quilting [3] is applied to search and stitch together all the patches so that the boundary
inconsistency is minimized. By contrast to [3], our method is only required to search a much smaller
set of candidate patches within a local texton cluster. Therefore, the synthesis cost is dramatically
reduced. We show that the HDP-2DHMM is able to synthesize texture in one second (25 times faster
than image quilting) with comparable quality. In addition, the HDP-2DHMM is less sensitive to the
patch size which has to be tuned over different input images in [3].
We also show that the HDP-2DHMM can be applied to perform image segmentation and synthesis.
The preliminary results suggest that the HDP-2DHMM is generally useful for further applications
in low-level vision problems.
2 Previous Work
Our primary interest is texture understanding and modeling. The FRAME model [7] provides a
principled way to learn Markov random field models according to the marginal image statistics.
This model is very successful in capturing stochastic textures, but may fail for more structured
textures due to lack of spatial modeling. Zhu et al. [1, 2] extend it to explicitly learn the textons and
their spatial relations which are represented by extra hidden layers. This new model is parametric
(the number of texton clusters has to be tuned by hand for different texture images), and model selection, which might be unstable in practice, is needed to avoid overfitting. Therefore, the learning
is sensitive to the parameter settings. Inspired by recent progress in machine learning, we extend
the nonparametric Bayesian framework of coupling 1D HMM and HDP [6] to deal with 2D texture
image. A new model (HDP-2DHMM) is developed to learn texton vocabulary and spatial layouts
jointly and automatically.
Since the HDP-2DHMM is designed to generate appropriate image statistics, but not pixel intensity,
a patch-based texture synthesis technique, called image quilting [3], is integrated into our system to
sample image patches. The texture synthesis algorithm has also been applied to image inpainting
[8].
[Figure 2 is not reproduced here: a graphical model showing hidden states z1, z2, z3, z4, ... connected to observations x1, x2, x3, x4, ....]
Figure 2: Graphical representation of the HDP-2DHMM. γ, α0, α are hyperparameters set by hand. β are state parameters. θ and π are emission and transition parameters, respectively. i is the index of nodes in the HMM. L(i) and T(i) are the two nodes on the left and top of node i. zi are the hidden states of node i. xi are the observations (features) of the image patch at position i.
Malik et al. [9, 10] and Varma and Zisserman [11] study the filter representations of textons which
are related to our implementations of visual features. But the interactions between textons are not
explicitly considered. Liu et al. [12, 13] address texture understanding by discovering regularity
without explicit statistical texture modeling.
Our work has partial similarities with the epitome [14] and jigsaw [15] models for non-texture
images which also tend to model appearance and spatial layouts jointly. The major difference is
that their models, which are parametric, cannot grow automatically as more data is available. Our
method is closely related to [16] which is not designed for texture learning. They use hierarchical
Dirichlet process, but the models and the image feature representations, including both the image
filters and the data likelihood model, are different. The structure of 2DHMM is also discussed in
[17]. Other work using Dirichlet prior includes [18, 19].
Tree-structured vector quantization [20] has been used to speed up existing image-based rendering
algorithms. While this is orthogonal to our work, it may help us optimize the rendering speed. The
meaning of ?nonparametric? in this paper is under the context of Bayesian framework which differs
from the non-Bayesian terminology used in [4].
3 Texture Modeling
3.1 Image Patches and Features
A texture image I is represented by a grid of image patches {x_i} with size 24 × 24 in this paper, where i denotes the location. {x_i} will be grouped into different textons by the HDP-2DHMM. We begin with a simplified model where the positions of textons represented by image patches are pre-determined by the image grid, and not allowed to shift. We will remove this constraint later.
Each patch x_i is characterized by a set of filter responses {w_i^{l,h,b}} which correspond to values b of image filter response h at location l. More precisely, each patch is divided into 6 by 6 cells (i.e. l = 1..36), each of which contains 4 by 4 pixels. For each pixel in cell l, we calculate 37 (h = 1..37) image filter responses, which include the 17 filters used in [21], Difference of Gaussian (DOG, 4 filters), Difference of Offset Gaussian (DOOG, 12 filters) and colors (R, G, B and L). w_i^{l,h,b} equals one if the averaged value of the filter responses of the 4 × 4 pixels covered by cell l falls into bin b (the response values are divided into 6 bins), and zero otherwise. Therefore, each patch x_i is represented by 7992 (= 37 × 36 × 6) dimensional feature responses {w_i^{l,h,b}} in total. We let q = 1..7992 denote the index of the responses of visual features.
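As a concrete sketch of this featurization, the following builds the binary indicator vector for one patch from per-cell averaged filter responses. The equally spaced bin edges are an assumption for illustration; the paper does not specify how the six bins are chosen.

```python
import numpy as np

def patch_indicator_features(responses, n_bins=6):
    """Binary indicator vector w_i^{l,h,b} for one patch.

    responses: (n_cells, n_filters) array of the averaged filter response
    of each 4x4-pixel cell (36 cells and 37 filters in the paper).  The
    equally spaced bin edges are an assumption, not taken from the paper.
    """
    n_cells, n_filters = responses.shape
    edges = np.linspace(responses.min(), responses.max(), n_bins + 1)[1:-1]
    bins = np.digitize(responses, edges)           # bin index in 0..n_bins-1
    w = np.zeros((n_cells, n_filters, n_bins), dtype=np.uint8)
    cells, filts = np.meshgrid(np.arange(n_cells), np.arange(n_filters),
                               indexing="ij")
    w[cells, filts, bins] = 1                      # one indicator per (l, h)
    return w.reshape(-1)

rng = np.random.default_rng(0)
w = patch_indicator_features(rng.normal(size=(36, 37)))
print(w.shape, int(w.sum()))  # (7992,) 1332
```

Exactly one bin fires per (cell, filter) pair, so the 7992-dimensional vector always contains 36 × 37 ones.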
It is worth emphasizing that our feature representation differs from standard methods [10, 2] where
k-means clustering is applied to form visual vocabulary first. By contrast, we skip the clustering
step and leave the learning of the texton vocabulary, together with spatial layout learning, to the HDP-2DHMM, which takes over the role of k-means.
3.2 HDP-2DHMM: Coupling Hidden Markov Model with Hierarchical Dirichlet Process
A texture is modeled by a 2D Hidden Markov Model (2DHMM) where the nodes correspond to the
image patches xi and the compatibility is encoded by the edges connecting 4 neighboring nodes.
See the graphical representation of 2DHMM in figure 2. For any node i, let L(i), T (i), R(i), D(i)
denote the four neighbors, left, upper, right and lower, respectively. We use zi to index the states
β ~ GEM(γ)
For each state z ∈ {1, 2, 3, ...}:
    θ_z ~ Dirichlet(α)
    π_z^L ~ DP(α0, β)
    π_z^T ~ DP(α0, β)
For each pair of states (zL, zT):
    π_{zL,zT} ~ DP(α0, β)
For each node i in the HMM:
    if L(i) ≠ ∅ and T(i) ≠ ∅: z_i | (z_{L(i)}, z_{T(i)}) ~ Multinomial(π_{zL,zT})
    if L(i) ≠ ∅ and T(i) = ∅: z_i | z_{L(i)} ~ Multinomial(π_{zL}^L)
    if L(i) = ∅ and T(i) ≠ ∅: z_i | z_{T(i)} ~ Multinomial(π_{zT}^T)
    x_i ~ Multinomial(θ_{z_i})
Figure 3: HDP-2DHMM for texture modeling
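The generative process in Figure 3 can be forward-simulated with a truncated stick-breaking approximation. Everything below is an illustrative sketch: the truncation level K, the merged boundary transition table, and all constants are assumptions rather than part of the paper's model.

```python
import numpy as np

def sample_hdp_2dhmm(rows, cols, gamma=10.0, alpha0=1.0, alpha=0.5,
                     n_features=20, K=15, seed=0):
    """Forward-sample a texton-state grid from a truncated HDP-2DHMM.

    Truncating the stick-breaking construction at K sticks replaces the
    countably infinite state space (an approximation; the paper's model
    is not truncated).  A single boundary transition table stands in for
    the separate row/column DPs of Figure 3.
    """
    rng = np.random.default_rng(seed)
    # beta ~ GEM(gamma): truncated stick-breaking weights, renormalized.
    v = rng.beta(1.0, gamma, size=K)
    beta = v * np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    beta /= beta.sum()
    # Emission parameters theta_z ~ Dirichlet(alpha) for each state.
    theta = rng.dirichlet(alpha * np.ones(n_features), size=K)
    # Transitions pi ~ DP(alpha0, beta), one per (left, top) state pair;
    # the small floor keeps the Dirichlet parameters strictly positive.
    pi_pair = rng.dirichlet(alpha0 * beta + 1e-6, size=(K, K))
    pi_edge = rng.dirichlet(alpha0 * beta + 1e-6, size=K)
    z = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            if r == 0 and c == 0:
                p = beta                         # top-left corner: marginal
            elif r == 0:
                p = pi_edge[z[r, c - 1]]         # top row: left neighbor only
            elif c == 0:
                p = pi_edge[z[r - 1, c]]         # left column: top neighbor only
            else:
                p = pi_pair[z[r, c - 1], z[r - 1, c]]
            z[r, c] = rng.choice(K, p=p)
    return z, theta

z, theta = sample_hdp_2dhmm(rows=8, cols=8)
print(z.shape, theta.shape)  # (8, 8) (15, 20)
```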
of node i which correspond to the cluster labeling of textons. The likelihood model p(x_i | z_i), which specifies the probability of the visual features, is defined by a multinomial distribution parameterized by θ_{z_i}, specific to the corresponding hidden state z_i:
x_i ~ Multinomial(θ_{z_i})    (1)
where θ_{z_i} specifies the weights of the visual features.
For a node i which is connected to nodes above and on the left (i.e. L(i) ≠ ∅ and T(i) ≠ ∅), the probability p(z_i | z_{L(i)}, z_{T(i)}) of its state z_i is determined only by the states (z_{L(i)}, z_{T(i)}) of the connected nodes. The distribution has the form of a multinomial distribution parameterized by π_{z_{L(i)}, z_{T(i)}}:
z_i ~ Multinomial(π_{z_{L(i)}, z_{T(i)}})    (2)
where π_{z_{L(i)}, z_{T(i)}} encodes the transition matrix and thus the spatial layout of textons.
For the nodes on the top row or the left-most column (i.e. L(i) = ∅ or T(i) = ∅), the distributions of their states are modeled by Multinomial(π_{z_{L(i)}}) or Multinomial(π_{z_{T(i)}}), which can be considered as simpler cases. We assume the top-left corner can be sampled from any state according to the marginal statistics of the states. Without loss of generality, we will skip the details of the boundary cases and focus on the nodes whose states are determined jointly by their top and left neighbors.
To obtain a nonparametric Bayesian representation, we need to allow the number of states z_i to be countably infinite and to put prior distributions over the parameters θ_{z_i} and π_{z_{L(i)}, z_{T(i)}}. We can achieve this by tying the 2DHMM together with the hierarchical Dirichlet process [5]. We define the prior of θ_z as a conjugate Dirichlet prior:
θ_z ~ Dirichlet(α)    (3)
where α is the concentration hyperparameter which controls how uniform the distribution of θ_z is (note that θ_z specifies the weights of the visual features): as α increases, it becomes more likely that the visual features have equal probability. Since the likelihood model p(x_i | z_i) is of multinomial form, the posterior distribution of θ_z has an analytic form, still a Dirichlet distribution.
The transition parameters π_{z_L, z_T} are modeled by a hierarchical Dirichlet process (HDP):
β ~ GEM(γ)    (4)
π_{z_L, z_T} ~ DP(α0, β)    (5)
where we first draw global weights β according to the stick-breaking prior distribution GEM(γ). The stick-breaking weights β specify the probabilities of states, which are globally shared among all nodes. The stick-breaking prior produces exponentially decaying weights in expectation, so that simple models with fewer representative clusters (textons) are favored given few observations, but there is always a low probability that small clusters are created to capture details revealed by large, complex textures. The concentration hyperparameter γ controls the sparseness of states: a larger γ leads to more states. The prior of the transition parameter π_{z_L, z_T} is modeled by a Dirichlet process DP(α0, β), which is a distribution over the other distribution β. α0 is a hyperparameter which controls the variability of π_{z_L, z_T} over different states across all nodes: as α0 increases, the state transitions become more regular. Therefore, the HDP makes use of a Dirichlet process prior to place a soft bias towards simpler models (in terms of the number of states and the regularity of state transitions) which explain the texture.
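The exponential decay of the stick-breaking weights, and how the concentration parameter γ controls it, can be seen in a small simulation (truncated at 50 sticks purely for illustration):

```python
import numpy as np

def gem_weights(gamma, K=50, seed=0):
    """Truncated stick-breaking sample beta ~ GEM(gamma).

    Truncation at K sticks is an illustration device, not part of the
    model: the true construction has countably many sticks.
    """
    rng = np.random.default_rng(seed)
    v = rng.beta(1.0, gamma, size=K)
    remaining = np.concatenate(([1.0], np.cumprod(1.0 - v[:-1])))
    return v * remaining  # weight of stick k = v_k * prod_{j<k} (1 - v_j)

few = gem_weights(gamma=0.5)    # small gamma: mass concentrates on few sticks
many = gem_weights(gamma=10.0)  # large gamma: mass spreads over many sticks
# Number of sticks needed to cover 95% of the sampled mass.
print((np.cumsum(few) < 0.95).sum() + 1, (np.cumsum(many) < 0.95).sum() + 1)
```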
The generative process of the HDP-2DHMM is described in figure (3). We now have the full representation of the HDP-2DHMM. But this simplified model does not allow the textons (image patches) to be shifted. We remove this constraint by introducing two hidden variables (u_i, v_i) which indicate the displacements of the textons associated with node i. We only need to adjust the correspondence between the image features x_i and the hidden states z_i: x_i is modified to x_{u_i,v_i}, which refers to the image features located at displacement (u_i, v_i) from position i. The random variables (u_i, v_i) are connected only to the observation x_i (not shown in figure 2). (u_i, v_i) have a uniform prior, but are limited to a small neighborhood of i (a maximum 10% shift on one side).
4 Learning HDP-2DHMM
In a Bayesian framework, the task of learning the HDP-2DHMM (also called Bayesian inference) is to compute the posterior distribution p(θ, π, z | x). It is trivial to sample the hidden variables (u, v) because of their uniform prior; for simplicity, we skip the details of sampling u, v. Here, we present an inference procedure for the HDP-2DHMM based on Gibbs sampling. Our procedure alternates between two sampling stages: (i) sampling the state assignments z, (ii) sampling the global weights β. Given fixed values for z and β, the posterior of θ can be easily obtained by aggregating statistics of the observations assigned to each state; the posterior of θ is Dirichlet. For more details on Dirichlet processes, see [5].
We first instantiate a random hidden state labeling and then iteratively repeat the following two steps.
Sampling z. In this stage we sample a state for each node. The probability of node i being assigned state t is given by:
P(z_i = t | z^{-i}, β) ∝ f_t^{-x_i}(x_i) · P(z_i = t | z_{L(i)}, z_{T(i)}) · P(z_{R(i)} | z_i = t, z_{T(R(i))}) · P(z_{D(i)} | z_{L(D(i))}, z_i = t)    (6)
The first term f_t^{-x_i}(x_i) denotes the posterior probability of observation x_i given all other observations assigned to state t, and z^{-i} denotes all state assignments except z_i. Let n_{qt} be the number of observations of feature w^q with state t. f_t^{-x_i}(x_i) is calculated by:
f_t^{-x_i}(x_i) = ∏_q [ (n_{qt} + α_q) / (∑_{q'} n_{q't} + ∑_{q'} α_{q'}) ]^{w_i^q}    (7)
where α_q is the weight for visual feature w^q.
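Equation (7) is the standard Dirichlet-multinomial posterior predictive. A small numeric sketch follows; the counts, prior, and feature vector are toy values chosen for illustration:

```python
import numpy as np

def log_f_t(x_i, n_qt, alpha_q):
    """log f_t^{-x_i}(x_i) of Eq. (7): Dirichlet-multinomial posterior
    predictive of the binary feature vector x_i under state t.

    n_qt holds the feature counts for state t with x_i's own contribution
    removed; alpha_q are the Dirichlet pseudo-counts.
    """
    p = (n_qt + alpha_q) / (n_qt.sum() + alpha_q.sum())
    return float(x_i @ np.log(p))  # only fired features (w_i^q = 1) contribute

n_qt = np.array([5.0, 0.0, 2.0])     # toy counts for state t
alpha_q = np.full(3, 0.5)            # symmetric prior
x_i = np.array([1, 0, 1])
f = np.exp(log_f_t(x_i, n_qt, alpha_q))
print(f)  # (5.5/8.5) * (2.5/8.5) ≈ 0.1903
```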
The next term P(z_i = t | z_{L(i)} = r, z_{T(i)} = s) is the probability of state t, given the states of the nodes on the left and above, i.e. L(i) and T(i). Let n_{rst} be the number of observations with state t whose left and upper neighbor nodes' states are r for L(i) and s for T(i). The probability of generating state t is given by:
P(z_i = t | z_{L(i)} = r, z_{T(i)} = s) = (n_{rst} + α0 β_t) / (∑_{t'} n_{rst'} + α0)    (8)
where β_t refers to the weight of state t. This calculation follows the properties of the Dirichlet distribution [5].
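Equation (8) can likewise be evaluated directly from the counts. In the toy example below the probabilities sum to one over the states, as they must; the counts and weights are made up for illustration:

```python
import numpy as np

def p_transition(t, n_rs, alpha0, beta):
    """P(z_i = t | z_{L(i)} = r, z_{T(i)} = s) from Eq. (8).

    n_rs is the count vector n_{rst'} over successor states t' for this
    particular (left, top) pair, with node i itself excluded.
    """
    return (n_rs[t] + alpha0 * beta[t]) / (n_rs.sum() + alpha0)

n_rs = np.array([3.0, 1.0, 0.0])        # toy counts
beta = np.array([0.5, 0.3, 0.2])        # global state weights
probs = [p_transition(t, n_rs, alpha0=2.0, beta=beta) for t in range(3)]
print(probs)  # [4/6, 1.6/6, 0.4/6]; sums to 1 by construction
```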
The last two terms, P(z_{R(i)} | z_i = t, z_{T(R(i))}) and P(z_{D(i)} | z_{L(D(i))}, z_i = t), are the probabilities of the states of the right and lower neighbor nodes (R(i), D(i)) given z_i. These two terms can be computed in a form similar to equation (8).
Sampling β. In the second stage, given the assignments z = {z_i}, we sample β using the Dirichlet distribution, as described in [5].
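Putting Eq. (6) together, one Gibbs sweep over the grid looks roughly as follows. This is a simplified sketch: the transition table `trans` is held fixed instead of being the HDP posterior of Eq. (8), the likelihood term is precomputed, and factors that fall off the grid boundary are simply dropped.

```python
import numpy as np

def gibbs_sweep(z, loglike, trans, rng):
    """One Gibbs sweep resampling every z_i from the product in Eq. (6).

    trans[r, s] is a fixed table P(. | left=r, top=s) and loglike[r, c, t]
    stands in for log f_t(x_i); both are stand-ins for the quantities the
    paper recomputes from counts during sampling.
    """
    R, C = z.shape
    for r in range(R):
        for c in range(C):
            logp = loglike[r, c].copy()                  # log f_t(x_i)
            if r > 0 and c > 0:                          # own transition
                logp += np.log(trans[z[r, c - 1], z[r - 1, c]])
            if c + 1 < C and r > 0:                      # right-neighbor factor
                logp += np.log(trans[:, z[r - 1, c + 1], z[r, c + 1]])
            if r + 1 < R and c > 0:                      # lower-neighbor factor
                logp += np.log(trans[z[r + 1, c - 1], :, z[r + 1, c]])
            p = np.exp(logp - logp.max())
            z[r, c] = rng.choice(len(p), p=p / p.sum())
    return z

rng = np.random.default_rng(1)
K, R, C = 3, 5, 5
trans = rng.dirichlet(np.ones(K), size=(K, K))           # (K, K, K)
loglike = np.log(rng.dirichlet(np.ones(K), size=(R, C))) # (R, C, K)
z = gibbs_sweep(rng.integers(0, K, size=(R, C)), loglike, trans, rng)
print(z.shape)  # (5, 5)
```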
Figure 4: The color of rectangles in columns 2 and 3 correspond to the index (labeling) of textons which
are represented by 24*24 image patches. The synthesized images are all 384*384 (16*16 textons /patches).
Our method captures both stochastic textures (the last two rows) and more structured textures (the first three
rows, see the horizontal and gridded layouts). The inferred texton maps for structured textures are simpler (fewer states/textons) and more regular (less cluttered texton maps) than those for stochastic textures.
5 Texture Synthesis
Once the texton vocabulary and the transition matrix are learnt, the synthesis process first samples
the latent texton labeling map according to the probability encoded in the transition matrix. But the
HDP-2DHMM is generative only for image features, but not image intensity. To make it practical
for image synthesis, image quilting [3] is integrated with the HDP-2DHMM. The final image is then
generated by selecting image patches according to the texton labeling map. Image quilting is applied
to select and stitch together all the patches in a top-left-to-bottom-right order so that the boundary
inconsistency is minimized. The width of the overlap edge is 8 pixels. In contrast to [3], which needs to search over all image patches to ensure high rendering quality, our method is only required
to search the candidate patches within a local cluster. The HDP-2DHMM is capable of producing
high rendering quality because the patches have been grouped based on visual features. Therefore,
the synthesis cost is dramatically reduced. We show that the HDP-2DHMM is able to synthesize a texture image with size of 384*384 and with comparable quality in one second (25 times faster than image quilting).
Figure 5: More synthesized texture images (for each pair, left is input texture, right is synthesized).
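The synthesis stage described above (sample a texton map in raster order from the learnt transitions, then render each cell with a patch from that texton's cluster) can be sketched as follows; the fallback to the global weights β on the first row and column is a simplification of the paper's separate boundary transitions:

```python
import numpy as np

def synthesize_map(trans, beta, rows, cols, seed=0):
    """Sample a texton labeling map in raster order from the learnt
    transition table; a renderer would then paste one patch from each
    sampled texton's cluster, with image quilting smoothing the seams.
    """
    rng = np.random.default_rng(seed)
    K = len(beta)
    z = np.zeros((rows, cols), dtype=int)
    for r in range(rows):
        for c in range(cols):
            if r == 0 or c == 0:
                p = beta                              # boundary fallback
            else:
                p = trans[z[r, c - 1], z[r - 1, c]]   # P(. | left, top)
            z[r, c] = rng.choice(K, p=p)
    return z

rng = np.random.default_rng(2)
K = 4
trans = rng.dirichlet(np.ones(K), size=(K, K))
z = synthesize_map(trans, beta=np.ones(K) / K, rows=16, cols=16)
print(z.shape)  # (16, 16): one texton index per 24x24 patch slot
```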
6 Experimental Results
6.1 Texture Learning and Synthesis
We use the texture images in [3]. The hyperparameters {γ, α0, α} are set to 10, 1, and 0.5, respectively. The image patch size is fixed to 24*24. All the parameter settings are identical for all images.
The learning runs with 10 random initializations each of which takes about 30 sampling iterations
to converge. A computer with 2.4 GHz CPU was used. For each image, it takes 100 seconds for
learning and 1 second for synthesis (almost 25 times faster than [3]).
Figure (4) shows the inferred texton labeling maps, the sampled texton maps and the synthesized
texture images. More synthesized images are shown in figure (5). The rendering quality is visually
comparable with [3] (not shown) for both structured textures and stochastic textures. It is interesting
to see that the HMM-HDP captures different types of texture patterns, such as vertical, horizontal
and grided layouts. It suggests that our method is able to discover the semantic texture meaning by
learning texton vocabulary and their spatial relations.
Figure 6: Image segmentation and synthesis. The first three rows show that the HDP-2DHMM is able to segment images with a mixture of textures and synthesize new textures. The last row shows a failure example where the texton is not well aligned.
6.2 Image Segmentation and Synthesis
We also apply the HDP-2DHMM to perform image segmentation and synthesis. Figure (6) shows
several examples of natural images which contain a mixture of textured regions. The segmentation
results are represented by the inferred state assignments (the texton map). In figure (6), one can see
that our method successfully divides images into meaningful regions and the synthesized images
look visually similar to the input images. These results suggest that the HDP-2DHMM framework
is generally useful for low-level vision problems. The last row in figure (6) shows a failure example
where the texton is not well aligned.
7 Conclusion
This paper describes a novel nonparametric Bayesian method for texture learning and synthesis. The 2D Hidden Markov Model (HMM) is coupled with the hierarchical Dirichlet process (HDP), which allows the number of textons and the complexity of the transition matrix to grow as the input texture becomes irregular. The HDP makes use of a Dirichlet process prior which favors regular textures
by penalizing the model complexity. This framework (HDP-2DHMM) learns the texton vocabulary and their spatial layout jointly and automatically. We demonstrated that the resulting compact
representation obtained by the HDP-2DHMM allows fast texture synthesis (under one second) with
comparable rendering quality to the state-of-the-art image-based rendering methods. Our results
on image segmentation and synthesis suggest that the HDP-2DHMM is generally useful for further
applications in low-level vision problems.
Acknowledgments. This work was supported by NGA NEGI-1582-04-0004, MURI Grant N00014-06-1-0734, ARDA VACE, and gifts from Microsoft Research and Google. Thanks to the anonymous
reviewers for helpful feedback.
References
[1] Y. N. Wu, S. C. Zhu, and C.-E. Guo, "Statistical modeling of texture sketch," in ECCV '02: Proceedings of the 7th European Conference on Computer Vision-Part III, 2002, pp. 240-254.
[2] S.-C. Zhu, C.-E. Guo, Y. Wang, and Z. Xu, "What are textons?" International Journal of Computer Vision, vol. 62, no. 1-2, pp. 121-143, 2005.
[3] A. A. Efros and W. T. Freeman, "Image quilting for texture synthesis and transfer," in Siggraph, 2001.
[4] A. Efros and T. Leung, "Texture synthesis by non-parametric sampling," in International Conference on Computer Vision, 1999, pp. 1033-1038.
[5] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei, "Hierarchical Dirichlet processes," Journal of the American Statistical Association, 2006.
[6] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen, "The infinite hidden Markov model," in NIPS, 2002.
[7] S. C. Zhu, Y. Wu, and D. Mumford, "Filters, random fields and maximum entropy (FRAME): Towards a unified theory for texture modeling," International Journal of Computer Vision, vol. 27, pp. 1-20, 1998.
[8] A. Criminisi, P. Perez, and K. Toyama, "Region filling and object removal by exemplar-based inpainting," IEEE Trans. on Image Processing, 2004.
[9] J. Malik, S. Belongie, J. Shi, and T. Leung, "Textons, contours and regions: Cue integration in image segmentation," IEEE International Conference on Computer Vision, vol. 2, 1999.
[10] T. Leung and J. Malik, "Representing and recognizing the visual appearance of materials using three-dimensional textons," International Journal of Computer Vision, vol. 43, pp. 29-44, 2001.
[11] M. Varma and A. Zisserman, "Texture classification: Are filter banks necessary?" IEEE Computer Society Conference on Computer Vision and Pattern Recognition, vol. 2, 2003.
[12] Y. Liu, W.-C. Lin, and J. H. Hays, "Near regular texture analysis and manipulation," ACM Transactions on Graphics (SIGGRAPH 2004), vol. 23, no. 1, pp. 368-376, August 2004.
[13] J. Hays, M. Leordeanu, A. A. Efros, and Y. Liu, "Discovering texture regularity as a higher-order correspondence problem," in 9th European Conference on Computer Vision, May 2006.
[14] N. Jojic, B. J. Frey, and A. Kannan, "Epitomic analysis of appearance and shape," in ICCV, 2003, pp. 34-41.
[15] A. Kannan, J. Winn, and C. Rother, "Clustering appearance and shape by learning jigsaws," in Advances in Neural Information Processing Systems. MIT Press, 2007.
[16] J. J. Kivinen, E. B. Sudderth, and M. I. Jordan, "Learning multiscale representations of natural scenes using Dirichlet processes," IEEE International Conference on Computer Vision, 2007.
[17] J. Domke, A. Karapurkar, and Y. Aloimonos, "Who killed the directed model?" in IEEE Computer Society Conference on Computer Vision and Pattern Recognition, 2008.
[18] L. Cao and L. Fei-Fei, "Spatially coherent latent topic model for concurrent object segmentation and classification," in Proceedings of IEEE International Conference on Computer Vision, 2007.
[19] X. Wang and E. Grimson, "Spatial latent Dirichlet allocation," in NIPS, 2007.
[20] L.-Y. Wei and M. Levoy, "Fast texture synthesis using tree-structured vector quantization," in SIGGRAPH '00: Proceedings of the 27th annual conference on Computer graphics and interactive techniques, 2000, pp. 479-488.
[21] J. Winn, A. Criminisi, and T. Minka, "Object categorization by learned universal visual dictionary," in Proceedings of the Tenth IEEE International Conference on Computer Vision, 2005.
Sequential Adaptation of Radial Basis Function
Neural Networks and its Application to
Time-series Prediction
V. Kadirkamanathan   M. Niranjan   F. Fallside
Engineering Department
Cambridge University
Cambridge CB2 1PZ, UK
Abstract
We develop a sequential adaptation algorithm for radial basis function
(RBF) neural networks of Gaussian nodes, based on the method of successive F-Projections. This method makes use of each observation efficiently
in that the network mapping function so obtained is consistent with that
information and is also optimal in the least L2-norm sense. The RBF
network with the F-Projections adaptation algorithm was used for predicting a chaotic time-series. We compare its performance to an adaptation scheme based on the method of stochastic approximation, and show
that the F-Projections algorithm converges to the underlying model much
faster.
1 INTRODUCTION
Sequential adaptation is important for signal processing applications such as timeseries prediction and adaptive control in nonstationary environments. With increasing computational power, complex algorithms that can offer better performance
can be used for these tasks. A sequential adaptation scheme, called the method
of successive F-Projections [Kadirkamanathan & Fallside, 1990], makes use of each
observation efficiently in that the function so obtained is consistent with that observation and is the optimal posterior in the least L2-norm sense.
In this paper we present an adaptation algorithm based on this method for the
radial basis function (RBF) network of Gaussian nodes [Broomhead & Lowe, 1988].
It is a memory less adaptation scheme since neither the information about the past
samples nor the previous adaptation directions are retained. Also, the observations
are presented only once. The RBF network employing this adaptation scheme
was used for predicting a chaotic time-series. The performance of the algorithm is
compared to a memoryless sequential adaptation scheme based on the method of
stochastic approximation.
2 METHOD OF SUCCESSIVE F-PROJECTIONS
The principle of F-Projection [Kadirkamanathan et al., 1990] is a general method of choosing a posterior function estimate of an unknown function f*, when there exists a prior estimate and new information about f* in the form of constraints. The principle states that, of all the functions that satisfy the constraints, one should choose the posterior f_n that has the least L2-norm ||f_n − f_{n−1}||, where f_{n−1} is the prior estimate of f*, viz.,

f_n = arg min_f ||f − f_{n−1}||   such that   f_n ∈ H_f        (1)
where H_f is the set of functions that satisfy the new constraints, and

||f − f_{n−1}||² = ∫_{x∈C} |f(x) − f_{n−1}(x)|² dx = D(f, f_{n−1})        (2)

where x is the input vector and dx is the infinitesimal volume in the input space domain C.
In functional analysis theory, the metric D(·, ·) describes the L2-normed linear space of square integrable functions. Since an inner product can be defined in this space, it is also the Hilbert space of square integrable functions [Linz, 1984]. Constraints of the form y_n = f(x_n) are linear in this space, and the functions that satisfy the constraint lie in a hyperplane subspace H_f. The posterior f_n, obtained from the principle, can be seen to be a projection of f_{n−1} onto the subspace H_f containing the underlying function that generates the observation set, and hence is optimal (i.e., best possible choice); see Figure 1.
Figure 1: Principle of F-Projection
Neural networks can be viewed as constructing a function in its input space. The
structure of the neural network and the finite number of parameters restrict the class
of functions that can be constructed to a subset of functions in the Hilbert space.
Neural networks, therefore, approximate the underlying function that describes the set of observations. Hence, the principle of F-Projection now yields a posterior f(θ_n) ∈ H_f that is an approximation of f_n (see Figure 1).

The method of successive F-Projections is the application of the principle of F-Projection on a sequence of observations or information [Kadirkamanathan et al., 1990]. For neural networks, the method gives an algorithm that has the following two steps.
• Initialise parameters with random values or values based on a priori knowledge.
• For each pattern (x_i, y_i), i = 1 ... n, determine the posterior parameter estimate

θ_i = arg min_θ ∫_{x∈C} ||f(x, θ) − f(x, θ_{i−1})||² dx   such that f(x_i, θ_i) = y_i,

where (x_i, y_i), for i = 1 ... n, constitutes the observation set, θ is the neural network parameter set and f(x, θ) is the function constructed by the neural network.
3 F-PROJECTIONS FOR AN RBF NETWORK
The class of radial basis function (RBF) neural networks was first introduced by Broomhead & Lowe [1988]. One such network is the RBF network of Gaussian nodes. The function constructed at the output node of the RBF network of Gaussian nodes, f(x), is derived from a set of basis functions of the form

φ_i(x) = exp( −(x − μ_i)ᵀ C_i⁻¹ (x − μ_i) ),   i = 1 ... m        (3)

Each basis function φ_i(x) is described at the output of each hidden node and is centered on μ_i in the input space. φ_i(x) is a function of the radial weighted distance between the input vector x and the node centre μ_i. In general, C_i is diagonal with elements [σ_i1, σ_i2, ..., σ_iN]. f(x) is a linear combination of the m basis functions,

f(x) = Σ_{i=1}^{m} a_i φ_i(x)        (4)

and θ = [..., a_i, μ_i, σ_i, ...] is then the parameter vector for the RBF network.
There are two reasons for developing the sequential adaptation algorithm for the
RBF network of Gaussian nodes. Firstly, the method of successive F-Projections is based on minimizing the hypervolume change in the hypersurface when learning new patterns. The RBF network of Gaussian nodes constructs a localized hypersurface and therefore the changes will also be local. This results in the adaptation of a few nodes and therefore the algorithm is quite stable. Secondly, the L2-norm measure of
the hypervolume change can be solved analytically for the RBF network of Gaussian
nodes.
The method of successive F-Projections is developed under deterministic noise-free conditions. When the observations are noisy, the constraint that f(x_i, θ_i) = y_i must be relaxed to

||f(x_i, θ_i) − y_i||² ≤ ε        (5)

Hence, the sequential adaptation scheme is modified to

θ_i = arg min_θ J(θ)        (6)

J(θ) = ∫_{x∈C} ||f(x, θ) − f(x, θ_{i−1})||² dx + c_i ||f(x_i, θ) − y_i||²        (7)
c_i is the penalty parameter that trades off between the importance of learning the new pattern and losing the information of the past patterns. This minimization can be performed by the gradient descent procedure. The minimization procedure is halted when the change ΔJ falls below a threshold. The complete adaptation algorithm is as follows:
• Choose θ_0 randomly
• For each pattern (i = 1 ... P):
  – Set θ_i^(0) = θ_{i−1}
  – Repeat (k-th iteration):
      θ_i^(k) = θ_i^(k−1) − η ∇J |_{θ = θ_i^(k−1)}
    until ΔJ^(k) < t_th

where ∇J is the gradient vector of J(θ) with respect to θ, ΔJ^(k) = J(θ_i^(k)) − J(θ_i^(k−1)) is the change in the cost function and t_th is a threshold. Note that a_i, μ_i, σ_i for i = 1 ... m are all adapted. The details of the algorithm can be found in the report by Kadirkamanathan et al. [Kadirkamanathan, Niranjan & Fallside, 1991].
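In code, one sequential F-Projections update looks as follows. This is a simplified sketch of ours, not the paper's derivation: the integral in eqn (7) is approximated by a sum over a fixed grid, and the gradient is taken by finite differences instead of analytically; all names and parameter values are illustrative.

```python
import numpy as np

def adapt_step(f, theta, x_i, y_i, grid, c=1.0, eta=0.1, tol=1e-6, max_iter=200):
    """One F-Projections adaptation step (eqns 6-7) by gradient descent.

    J(theta) ~= sum over grid of |f(x, theta) - f(x, theta_prev)|^2 * dx
                + c * (f(x_i, theta) - y_i)^2
    """
    theta_prev = theta.copy()
    dx = grid[1] - grid[0]
    f_prev = np.array([f(x, theta_prev) for x in grid])

    def J(th):
        f_now = np.array([f(x, th) for x in grid])
        return float(np.sum((f_now - f_prev) ** 2) * dx
                     + c * (f(x_i, th) - y_i) ** 2)

    last, h = J(theta), 1e-5
    for _ in range(max_iter):
        grad = np.zeros_like(theta)
        for j in range(len(theta)):          # numerical gradient of J
            e = np.zeros_like(theta)
            e[j] = h
            grad[j] = (J(theta + e) - J(theta - e)) / (2 * h)
        theta = theta - eta * grad
        now = J(theta)
        if abs(last - now) < tol:            # halt once the change in J is small
            break
        last = now
    return theta

# Single Gaussian node f(x) = a * exp(-(x - mu)^2 / s^2).
def f(x, th):
    a, mu, s = th
    return a * np.exp(-((x - mu) ** 2) / (s ** 2))

theta0 = np.array([0.1, 0.0, 1.0])
grid = np.linspace(-2.0, 2.0, 41)
theta = adapt_step(f, theta0, x_i=0.5, y_i=1.0, grid=grid)
```

Since the integral term of J vanishes at θ = θ_prev, J initially equals the squared fit error on the new pattern; any decrease in J therefore reduces that error while penalizing movement away from the previous mapping.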
4 TIME SERIES PREDICTION
An area of application for sequential adaptation of neural networks is the prediction
of time-series in nonstationary environments, where the underlying model generating the time-series is time-varying. The adaptation algorithm must also result in
the convergence of the neural network to the underlying model under stationary
conditions. The usual approach to predicting time-series is to train the neural network on a set of training data obtained from the series [Lapedes & Farber, 1987;
Farmer & Sidorowich, 1988; Niranjan, 1991]. Our sequential adaptation approach
differs from this in that the adaptation takes place for each sample.
In this work, we examine the performance of the F-Projections adaptation algorithm
for the RBF network of Gaussian nodes in predicting a deterministic chaotic series.
The chaotic series under investigation is the logistic map [Lapedes & Farber, 1987],
whose dynamics is governed by the equation,
x_n = 4 x_{n−1} (1 − x_{n−1})        (8)
This is a first order nonlinear process where only the previous sample determines
the value of the present sample. Since neural networks offer the capability of constructing any arbitrary mapping to a sufficient accuracy, a network with input nodes
equal to the process order will find the underlying model. Hence, we use the RBF
network of Gaussian nodes with a single input node. We are thus able to compare
the map the RBF network constructed with that of the actual map given by eqn
(8).
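The logistic-map series of eqn (8) is trivial to generate for replication purposes; the seed value below is an arbitrary choice of ours, since the paper does not state its initial condition:

```python
def logistic_map(x0, n):
    """Generate n samples of the chaotic logistic map x_t = 4 x_{t-1} (1 - x_{t-1})."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(4.0 * xs[-1] * (1.0 - xs[-1]))
    return xs

series = logistic_map(0.3, 100)
```

Any seed in (0, 1) other than the fixed points keeps the iterates inside [0, 1] while behaving chaotically.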
First, an RBF network with 2 input nodes and 8 Gaussian nodes was used to predict
the logistic map chaotic series of 100 samples. Each sample was presented only once
for training. The training was temporarily halted after 0, 20, 40, 60, 80 and 100
samples, and in each case the prediction error residual was found. This is given in
Figure 2 where the increasing darkness of the curves stand for the increasing number
of patterns used for training. It is evident from this figure that the prediction model
improves very quickly from the initial state and then slowly keeps on improving as
the number of training patterns used is increased.
Figure 2: Evolution of prediction error residuals
In order to compare the performance of the sequential adaptation algorithm, a memoryless adaptation scheme was also used to predict the chaotic series. The scheme is
the LMS or stochastic approximation (sequential back propagation [White, 1987]), where for each sample one iteration takes place. The iteration is given by

θ_i = θ_{i−1} − η ∇J |_{θ = θ_{i−1}}        (9)

where

J(θ) = ||f(x_i, θ) − y_i||²        (10)

and J(θ) is the squared prediction error for the present sample.
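For contrast, the stochastic-approximation update of eqns (9) and (10) touches only the current sample's squared error; a finite-difference sketch of ours, with illustrative names:

```python
import numpy as np

def lms_step(f, theta, x_i, y_i, eta=0.1, h=1e-5):
    """One stochastic-approximation iteration (eqn 9) on the
    single-sample cost J(theta) = (f(x_i, theta) - y_i)^2 (eqn 10)."""
    grad = np.zeros_like(theta)
    for j in range(len(theta)):
        e = np.zeros_like(theta)
        e[j] = h
        jp = (f(x_i, theta + e) - y_i) ** 2
        jm = (f(x_i, theta - e) - y_i) ** 2
        grad[j] = (jp - jm) / (2 * h)
    return theta - eta * grad

def f(x, th):  # single Gaussian node, as in the previous sketch
    a, mu, s = th
    return a * np.exp(-((x - mu) ** 2) / (s ** 2))

theta0 = np.array([0.1, 0.0, 1.0])
theta1 = lms_step(f, theta0, x_i=0.5, y_i=1.0)
```

Unlike eqn (7), nothing here penalizes movement away from the previous mapping, which is why repeated single-sample updates can unlearn earlier patterns.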
Next, the RBF network with a single input node and 8 Gaussian units was used
to predict the chaotic series. The F-Projections and the stochastic approximation
adaptation algorithms were used for training this network on 60 samples. Results
on the map constructed by a network trained by each of these schemes for 0, 20 and
60 samples and the samples used for training are shown in Figure 3. Again, each
sample was presented only once for training.
Figure 3: Map f(x) constructed by the RBF network. (a) F-Projections (b) stochastic approximation.
The stochastic approximation algorithm fails to construct a close-fit mapping of
the underlying function after training on 60 samples. The F-Projections algorithm, however, provides a close-fit map after training on 20 samples. It also shows stability
by maintaining the map up to training on 60 samples. The speed of convergence
achieved, in terms of the number of samples used for training, is much higher for
the F-Projections.
Comparing the cost functions being minimized for the F-Projections and the
stochastic approximation algorithms, given by eqns (7) and (10), it is clear that
the difference is only an additional integral term in eqn (7). This term is not a
function of the present observation, but is a function of the a priori parameter
values. The addition of such a term is to incorporate a priori knowledge of the
network to that of the present observation in determining the posterior parameter
values. The faster convergence result for the F-Projections indicates the importance of the extended cost function. Even though the cost term for the F-Projections
was developed for a recursive estimation algorithm, it can be applied to a block
estimation method as well. The cost function given by eqn (7) can be seen to be an
extension of the nonlinear least squared error to incorporate a priori knowledge.
5 CONCLUSIONS
The principle of F-Projection proposed by Kadirkamanathan et al. [1990] provides an optimal posterior estimate of a function from the prior estimate and new information. Based on it, they propose a sequential adaptation scheme called the method of successive F-Projections. We have developed a sequential adaptation algorithm for the RBF network of Gaussian nodes based on this method.
Applying the RBF network with the F-Projections algorithm to the prediction of a chaotic series, we have found that the RBF network was able to map the underlying function. The prediction error residuals at the end of training with different numbers of samples indicate that, after a substantial reduction in the error in the initial stages, the error steadily decreased as the number of samples presented for training increased. By comparing with the performance of the stochastic approximation algorithm, we show the superior convergence achieved by the F-Projections.

Comparing the cost functions being minimized for the F-Projections and the stochastic approximation algorithms reveals that the F-Projections uses both the prediction error for the current sample and the a priori values of the parameters, whereas the stochastic approximation algorithms use only the prediction error. We also point out that such a cost term that includes a priori knowledge of the network can be used for training a trained network upon receipt of further information.
References
[1] Broomhead, D.S. & Lowe, D. (1988), "Multi-variable Interpolation and Adaptive Networks", RSRE Memo No. 4148, Royal Signals and Radar Establishment, Malvern.
[2] Farmer, J.D. & Sidorowich, J.J. (1988), "Exploiting chaos to predict the future and reduce noise", Technical Report, Los Alamos National Laboratory.
[3] Kadirkamanathan, V. & Fallside, F. (1990), "F-Projections: A nonlinear recursive estimation algorithm for neural networks", Technical Report CUED/F-INFENG/TR.53, Cambridge University Engineering Department.
[4] Kadirkamanathan, V., Niranjan, M. & Fallside, F. (1991), "Adaptive RBF network for time-series prediction", Technical Report CUED/F-INFENG/TR.56, Cambridge University Engineering Department.
[5] Lapedes, A.S. & Farber, R. (1987), "Non-linear signal processing using neural networks: Prediction and system modelling", Technical Report, Los Alamos National Laboratory, Los Alamos, New Mexico 87545.
[6] Linz, P. (1984), "Theoretical Numerical Analysis", John Wiley, New York.
[7] Niranjan, M. (1991), "Implementing threshold autoregressive models for time series prediction on a multilayer perceptron", Technical Report CUED/F-INFENG/TR.50, Cambridge University Engineering Department.
[8] White, H. (1987), "Some asymptotic results for learning in single hidden layer feedforward network models", Technical Report, Department of Economics, University of California, San Diego.
3,013 | 3,730 | Streaming Pointwise Mutual Information
Ashwin Lall
Georgia Institute of Technology
Atlanta, GA 30332, USA
Benjamin Van Durme
University of Rochester
Rochester, NY 14627, USA
Abstract
Recent work has led to the ability to perform space efficient, approximate counting
over large vocabularies in a streaming context. Motivated by the existence of data
structures of this type, we explore the computation of associativity scores, otherwise known as pointwise mutual information (PMI), in a streaming context. We
give theoretical bounds showing the impracticality of perfect online PMI computation, and detail an algorithm with high expected accuracy. Experiments on news
articles show our approach gives high accuracy on real world data.
1 Introduction
Recent work has led to the ability to perform space efficient counting over large vocabularies [Talbot,
2009; Van Durme and Lall, 2009]. As online extensions to previous work in randomized storage
[Talbot and Osborne, 2007], significant space savings are enabled if your application can tolerate a
small chance of false positive in lookup, and you do not require the ability to enumerate the contents
of your collection.1 Recent interest in this area is motivated by the scale of available data outpacing
the computational resources typically at hand.
We explore what a data structure of this type means for the computation of associativity scores,
or pointwise mutual information, in a streaming context. We show that approximate k-best PMI
rank lists may be maintained online, with high accuracy, both in theory and in practice. This result
is useful both when storage constraints prohibit explicitly storing all observed co-occurrences in a
stream, as well as in cases where accessing such PMI values would be useful online.
2 Problem Definition and Notation
Throughout this paper we will assume our data is in the form of pairs ⟨x, y⟩, where x ∈ X and y ∈ Y. Further, we assume that the sets X and Y are so large that it is infeasible to explicitly maintain precise counts for every such pair on a single machine (e.g., X and Y are all the words in the English language).
We define the pointwise mutual information (PMI) of a pair x and y to be
PMI(x, y) ≡ lg [ P(x, y) / (P(x) P(y)) ]
where these (empirical) probabilities are computed over a particular data set of interest.2 Now, it is
often the case that we are not interested in all such pairs, but instead are satisfied with estimating the
subset of Y with the k largest PMIs with each x ? X. We denote this set by PMIk (x).
Our goal in this paper is to estimate these top-k sets in a streaming fashion, i.e., where there is only
a single pass allowed over the data and it is infeasible to store all the data for random access. This
[1] This situation holds in language modeling, such as in the context of machine translation.
[2] As is standard, lg refers to log2.
model is natural for a variety of reasons, e.g., the data is being accessed by crawling the web and it
is infeasible to buffer all the crawled results.
As mentioned earlier, there has been considerable work in keeping track of the counts of a large
number of items succinctly. We explore the possibility of using these succinct data structures to
solve this problem. Suppose there is a multi-set M = {m_1, m_2, m_3, ...} of word pairs from X × Y. Using an approximate counter data structure, it is possible to maintain in an online fashion the counts

c(x, y) = |{i | m_i = ⟨x, y⟩}|,
c(x) = |{i | m_i = ⟨x, y′⟩, for some y′ ∈ Y}|, and
c(y) = |{i | m_i = ⟨x′, y⟩, for some x′ ∈ X}|,

which allows us to estimate PMI(x, y) as

lg [ P(x, y) / (P(x) P(y)) ] = lg [ (c(x, y)/n) / ((c(x)/n)(c(y)/n)) ] = lg [ n·c(x, y) / (c(x) c(y)) ],

where n is the length of the stream. The challenge for this problem is determining how to keep track of the set PMI_k(x) for all x ∈ X in an online fashion.
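With exact counts, the estimate above and the sets PMI_k(x) are straightforward to compute offline (a sketch of ours; the streaming challenge is precisely that these counts keep changing as the stream grows):

```python
import heapq
import math
from collections import Counter

def pmi_topk(pairs, k):
    """Return {x: k-best list of (PMI(x, y), y)} from a list of (x, y) pairs,
    using PMI(x, y) = lg(n * c(x, y) / (c(x) * c(y)))."""
    n = len(pairs)
    cx, cy, cxy = Counter(), Counter(), Counter()
    for x, y in pairs:
        cx[x] += 1
        cy[y] += 1
        cxy[(x, y)] += 1
    best = {}
    for (x, y), c in cxy.items():
        best.setdefault(x, []).append((math.log2(n * c / (cx[x] * cy[y])), y))
    return {x: heapq.nlargest(k, lst) for x, lst in best.items()}

pairs = [("a", "b"), ("a", "b"), ("a", "c"), ("d", "c")]
top = pmi_topk(pairs, k=1)
```

Here `heapq.nlargest` keeps only the k highest-scoring items per x, which is the same role the per-x priority queues play later in Algorithm 1.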
3 Motivation
Pointwise mutual information underlies many experiments in computational (psycho-)linguistics,
going back at least to Church and Hanks [1990], who at the time referred to PMI as a mathematical
formalization of the psycholinguistic association score. We do not attempt to summarize this work
in its entirety, but give representative highlights below.
Trigger Models Rosenfeld [1994] was interested in collecting trigger pairs, ⟨A, B⟩, such that the presence of A in a document is likely to "trigger" an occurrence of B. There the concern was in finding the most useful triggers overall, and thus pairs were favored based on high average mutual information:

I(A, B) = P(AB) lg [P(AB)/(P(A)P(B))] + P(AB̄) lg [P(AB̄)/(P(A)P(B̄))]
        + P(ĀB) lg [P(ĀB)/(P(Ā)P(B))] + P(ĀB̄) lg [P(ĀB̄)/(P(Ā)P(B̄))].
As commented by Rosenfeld, the first term of his equation relates to the PMI formula given by
Church and Hanks [1990]. We might describe our work here as collecting terms y, triggered by
each x, once we know x to be present. As the number of possible terms is large,3 we limit ourselves
to the top-k items.
Associated Verbs Chambers and Jurafsky [2008], following work such as Lin [1998] and Chklovski
and Pantel [2004], introduced a probabilistic model for learning Schankian script-like structures
which they termed narrative event chains; for example, if in a given document someone pleaded,
admits and was convicted, then it is likely they were also sentenced, or paroled, or fired. Prior to
enforcing a temporal ordering (which does not concern us here), Chambers and Jurafsky acquired
clusters of related verb-argument pairs by finding those that shared high PMI.
Associativity in Human Memory Central to their rational analysis of human memory, Schooler and Anderson [1997] approximated the needs odds, n, of a memory structure S as the product of recency and context factors, where the context factor is the product of associative ratios between S and local cues:

n ≈ [ P(S | H_S) / P(S̄ | H_S) ] ∏_{q ∈ Q_S} P(Sq) / (P(S) P(q)).

If we take x to range over cues, and y to be a memory structure, then in our work here we are storing the identities of the top-k memory structures for a given cue x, as according to strength of associativity.4
4 Lower Bound
We first discuss the difficulty in solving the online PMI problem exactly. An obvious first attempt
at an algorithm for this problem is to use approximate counters to estimate the PMI for each pair in
[3] Rosenfeld: "... unlike in a bigram model, where the number of different consecutive word pairs is much less than [the vocabulary] V², the number of word pairs where both words occurred in the same document is a significant fraction of V²."
[4] Note that Frank et al. [2007] gave evidence suggesting PMI may be suboptimal for cue modeling, but to our understanding this result is limited to the case of novel language acquisition.
the stream and maintain the top-k for each x using a priority queue. This method does not work, as
illustrated by the examples below.
Example 1 (probability of y changes): Consider the stream

xy xy xy xz wz | wy wy wy wy wy

which we have divided in half. After the first half, y is best for x since PMI(x, y) = lg [(3/5) / ((4/5)(3/5))] = lg(5/4) and PMI(x, z) = lg [(1/5) / ((4/5)(2/5))] = lg(5/8). At the end of the second half of the stream, z is best for x since PMI(x, y) = lg [(3/10) / ((4/10)(8/10))] ≈ lg(0.94) and PMI(x, z) = lg [(1/10) / ((4/10)(2/10))] = lg(1.25). However, during the second half of the stream we never encounter x and hence never update its value. So, the naive algorithm behaves erroneously.
lg (1.25). However, during the second half of the stream we never encounter x and hence never
update its value. So, the naive algorithm behaves erroneously.
What this example shows is that not only does the naive algorithm fail, but also that the top-k PMI
of some x may change (because of the change in probability of y) without any opportunity to update
PMI_k(x).
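The failure in Example 1 is easy to check numerically by recomputing exact PMIs at both checkpoints (a small verification script of ours):

```python
import math

def pmi(stream, x, y):
    """Exact PMI(x, y) = lg(n * c(x, y) / (c(x) * c(y))) over a list of pairs."""
    n = len(stream)
    cx = sum(1 for p in stream if p[0] == x)
    cy = sum(1 for p in stream if p[1] == y)
    cxy = sum(1 for p in stream if p == (x, y))
    return math.log2(n * cxy / (cx * cy))

half = [("x", "y")] * 3 + [("x", "z"), ("w", "z")]
full = half + [("w", "y")] * 5

# After the first half y beats z for x; after the full stream z beats y,
# even though x never occurs in the second half.
assert pmi(half, "x", "y") > pmi(half, "x", "z")
assert pmi(full, "x", "z") > pmi(full, "x", "y")
```

The flip is driven entirely by c(y) growing from 3 to 8 while c(x, y) stays fixed, exactly as the example argues.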
Next, we show another example which illustrates the failure of the naive algorithm due to the fact
that it does not re-compute every PMI each time.
Example 2 (probability of x changes): Consider the stream

pd py py xy xd

in which we are interested in only the top PMI tuples for x. When we see xy in the stream, PMI(x, y) = lg [(1/4) / ((1/4)(3/4))] ≈ lg(1.33), and when we see xd in the stream, PMI(x, d) = lg [(1/5) / ((2/5)(2/5))] = lg(1.25). As a result, we retain xy but not xd. However, xy's PMI is now PMI(x, y) = lg [(1/5) / ((2/5)(3/5))] ≈ lg(0.833), which means that we should replace xy with xd. However, since we didn't re-compute PMI(x, y), we erroneously output xy.
We next formalize these intuitions into a lower bound showing why it might be hard to compute
every PMI_k(x) precisely. For this lower bound, we make the simplifying assumption that the size of the set X is much smaller than n (i.e., |X| ∈ o(n)), which is the usual case in practice.

Theorem 1: Any algorithm that explicitly maintains the top-k PMIs for all x ∈ X in a stream of length at most n (where |X| ∈ o(n)) in a single pass requires Ω(n|X|) time.

We will prove this theorem using the following lemma:

Lemma 1: Any algorithm that explicitly maintains the top-k PMIs of |X| = p + 1 items over a stream of length at most n = 2r + 2p + 1 in a single pass requires Ω(pr) time.
Proof of Lemma 1: Let us take the length of the stream to be n, where we assume without loss of generality that n is odd. Let X = {x_1, ..., x_{p+1}}, Y = {y_1, y_2} and let us consider the following stream:

x_1 y_1, x_2 y_1, x_3 y_1, ..., x_p y_1,
x_1 y_2, x_2 y_2, x_3 y_2, ..., x_p y_2,
x_{p+1} y_1,

followed by r doubled pairs alternating between y_2 and y_1:

x_{p+1} y_2, x_{p+1} y_2,
x_{p+1} y_1, x_{p+1} y_1,
...,
x_{p+1} y_{1+(r mod 2)}, x_{p+1} y_{1+(r mod 2)}.
Suppose that we are interested in maintaining only the top-PMI item for each x_i ∈ X (the proof easily generalizes to larger k). Let us consider the update cost for only the set X_p = {x_1, ..., x_p} ⊆ X. After x_{p+1} y_1 appears in the stream for the first time, it should be evident that all the elements of X_p have a higher PMI with y_2 than y_1. However, after we see two copies of x_{p+1} y_2, the PMI of y_1 is higher than that of y_2 for each x ∈ X_p. Similarly, the top PMI of each element of X_p alternates between y_1 and y_2 for the remainder of the stream. Now, the current PMI for each element of X_p must be correct at any point in the stream since the stream may terminate at any time. Hence, by construction, the top PMI of x_1, ..., x_p will change at least r times in the course of this stream, for a total of at least pr operations. The length of the stream is n = 2p + 2r + 1. This completes the proof of Lemma 1.
Proof of Theorem 1: Taking |X| = p + 1, we have in the construction of Lemma 1 that r = (n − 2p − 1)/2 = (n − 2|X| + 1)/2. Hence, there are at least pr = (|X| − 1)(n − 2|X| + 1)/2 = Ω(n|X| − |X|²) update operations required. Since we assumed that |X| ∈ o(n), this is Ω(n|X|) operations.
Hence, there must be a high update cost for any such algorithm. That is, on average, any algorithm must perform Ω(|X|) operations per item in the stream.
5 Algorithm
The lower bound from the previous section shows that, when solving the PMI problem, the best one
can do is effectively cross-check the PMI for every possible x ∈ X for each item in the stream. In
practice, this is far too expensive and will lead to online algorithms that cannot keep up with the rate
at which the input data is produced. To solve this problem, we propose a heuristic algorithm that
sacrifices some accuracy for speed in computation.
Besides keeping processing times in check, we have to be careful about the memory requirements of
any proposed algorithm. Recall that we are interested in retaining information for all pairs of x and
y, where each is drawn from a set of cardinality in the millions. Our algorithm uses approximate
counting to retain the counts of all pairs of items ⟨x, y⟩ in a data structure C_xy. We keep exact counts of all x and y since this takes considerably less space. Given these values, we can (approximately) estimate PMI(x, y) for any ⟨x, y⟩ in the stream.

We assume C_xy to be based on recent work in space efficient counting methods for streamed text
data [Talbot, 2009; Van Durme and Lall, 2009]. For our implementation we used TOMB counters [Van Durme and Lall, 2009] which approximate counts by storing values in log-scale. These
log-scale counts are maintained in unary within layers of Bloom filters [Bloom, 1970] (Figure 1)
that can be probabilistically updated using a small base (Figure 2); each occurrence of an item in the
stream prompts a probabilistic update to its value, dependent on the base. By tuning this base, one
can trade off between the accuracy of the counts and the space savings of approximate counting.
Figure 1: Unary counting with Bloom filters.
Figure 2: Transition by base b.
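The base-b probabilistic transition in Figure 2 is essentially a Morris-style approximate counter. A minimal sketch of ours, storing the log-scale value as a plain integer and omitting the layered Bloom-filter storage that TOMB adds on top; all names are illustrative:

```python
import random

class MorrisCounter:
    """Log-scale counter: keeps v and estimates the count as (b^v - 1) / (b - 1).

    Each increment succeeds with probability b^-v, matching the transition
    probabilities in Figure 2; the estimate is unbiased in expectation.
    """
    def __init__(self, base=2.0, seed=0):
        self.b = base
        self.v = 0
        self.rng = random.Random(seed)

    def increment(self):
        if self.rng.random() < self.b ** (-self.v):
            self.v += 1

    def estimate(self):
        return (self.b ** self.v - 1.0) / (self.b - 1.0)

c = MorrisCounter(base=2.0)
for _ in range(1000):
    c.increment()
```

Choosing a base closer to 1 trades space (more distinct values of v, hence more unary layers) for lower variance in the estimate, which is the accuracy/space trade-off described above.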
Now, to get around the problem of having stale PMI values because the count of x changes (i.e.,
the issue in Example 2 in the previous section), we divide the stream up into fixed-size buffers B
and re-compute the PMIs for all pairs seen within each buffer (see Algorithm 1).
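A compact sketch of this buffered scheme, in which exact Python dictionaries stand in for H_x, H_y, and for the approximate counter C_xy; the function name and buffer handling are our own:

```python
import heapq
import math
from collections import Counter

def find_online_pmi(stream, buffer_size, k):
    """Buffered top-k PMI in the spirit of Algorithm 1, with exact counts."""
    Hx, Hy, Cxy = Counter(), Counter(), Counter()
    ranks = {}  # x -> list of (PMI, y), kept at size <= k
    n = 0
    for start in range(0, len(stream), buffer_size):
        seen = set()
        for x, y in stream[start:start + buffer_size]:
            seen.add((x, y))
            Hx[x] += 1
            Hy[y] += 1
            Cxy[(x, y)] += 1
            n += 1
        # Re-compute L(x) from the current rank list plus pairs seen in B.
        cands = {}
        for x, lst in ranks.items():
            cands.setdefault(x, set()).update(y for _, y in lst)
        for x, y in seen:
            cands.setdefault(x, set()).add(y)
        for x, ys in cands.items():
            scored = [(math.log2(n * Cxy[(x, y)] / (Hx[x] * Hy[y])), y)
                      for y in ys]
            ranks[x] = heapq.nlargest(k, scored)
    return ranks

stream = [("x", "y")] * 3 + [("x", "z"), ("w", "z")] + [("w", "y")] * 5
buffered = find_online_pmi(stream, buffer_size=5, k=1)
offline = find_online_pmi(stream, buffer_size=len(stream), k=1)
```

On the stream of Example 1 the buffered run keeps y as x's top item while the single-buffer (offline) run correctly returns z: items that fell out of both the rank list and the current buffer are never reconsidered, which is exactly the residual error mode the misclassification analysis bounds.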
Updating counts for x, y and ⟨x, y⟩ is constant time per element in the stream. Insertion into a k-best priority queue requires O(lg k) operations. Per interval, we perform in the worst case one insertion per new element observed, along with one insertion for each element stored in the previous rank lists. As long as |B| ≥ |X|k, updating rank lists costs O(|B| lg k) per interval.5 The algorithm therefore requires O(n + n lg k) = O(n lg k) time, where n is the length of the stream. Note that when |B| = n we have the standard offline method for computing PMI across X and Y (notwithstanding approximate counters). When |B| < |X|k, we run afoul of the lower bound given by Theorem 2. Regarding space, |I| ≤ |B|. A benefit of our algorithm is that this can be kept significantly smaller than |X| × |Y|,6 since in practice, |Y| ≫ lg k.
[5] I.e., the extra cost for reinserting elements from the previous rank lists is amortized over the buffer length.
[6] E.g., the V² of Rosenfeld.
Algorithm 1 FIND-ONLINE-PMI
1: initialize hashtable counters H_x and H_y for exact counts
2: initialize an approximate counter C_xy
3: initialize rank lists, L, mapping x to a k-best priority queue storing ⟨y, PMI(x, y)⟩
4: for each buffer B in the stream do
5:   initialize I, mapping ⟨x, y⟩ to {0, 1}, denoting whether ⟨x, y⟩ was observed in B
6:   for ⟨x, y⟩ in B do
7:     set I(⟨x, y⟩) = 1
8:     increment H_x(x)   (initial value of 0)
9:     increment H_y(y)   (initial value of 0)
10:    insert ⟨x, y⟩ into C_xy
11:  end for
12:  for each x ∈ X do
13:    re-compute L(x) using current y ∈ L(x) and {y | I(⟨x, y⟩) = 1}
14:  end for
15: end for

5.1 Misclassification Probability Bound
Our algorithm removes problems due to the count of x changing, but does not solve the problem that the probability of y changes (i.e., the issue in Example 1 in the previous section). The PMI of a pair ⟨x, y⟩ may decrease considerably if there are many occurrences of y (and relatively few occurrences of ⟨x, y⟩) in the stream, leading to the removal of y from the true top-k list for x. We show in the
following that this is not likely to happen very often for the text data that our algorithm is designed
to work on.
In giving a bound on this error, we will make two assumptions: (i) the PMI for a given x follows
a Zipfian distribution (something that we observed in our data), and (ii) the items in the stream are
drawn independently from some underlying distribution (i.e., they are i.i.d.). Both these assumptions
together help us to sidestep the lower bound proved earlier and demonstrate that our single-pass
algorithm will perform well on real language data sets.
We first make the observation that, for any y in the set of top-k PMIs for x, if ⟨x, y⟩ appears in
the final buffer then we are guaranteed that y is correctly placed in the top-k at the end. This is
because we recompute PMIs for all the pairs in the last buffer at the end of the algorithm (line 13 of
Algorithm 1). The probability that ⟨x, y⟩ does not appear in the last buffer can be bounded using the
i.i.d. assumption to be at most

    (1 − c(x, y)/n)^{|B|} ≤ e^{−|B| c(x, y)/n} ≤ e^{−k|X| c(x, y)/n},

where for the last inequality we use the bound |B| ≥ |X|k that we assumed in the previous section.
Hence, in those cases where c(x, y) = Ω(n/(|X|k)), our algorithm correctly identifies y as being in
the top-k PMI for x with high probability. The proof for general c(x, y) is given next.
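To get a feel for this bound, one can evaluate it numerically; the counts below are illustrative, not taken from the paper:

```python
import math

def miss_probability(c_xy, n, buf):
    """Upper bound on Pr[<x, y> absent from the final buffer] under the
    i.i.d. model: (1 - c(x,y)/n)^|B| <= exp(-|B| c(x,y)/n)."""
    exact = (1 - c_xy / n) ** buf
    bound = math.exp(-buf * c_xy / n)
    assert exact <= bound + 1e-12  # the exponential really does dominate
    return bound
```

With n = 9 × 10⁸, |B| = 150,000, and c(x, y) = 6,000, the bound is exp(−1) ≈ 0.37; a tenfold larger pair count drives it below 10⁻⁴.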
We study the probability with which some y′ that is not in the top-k PMI for a fixed x can displace
some y in the top-k PMI for x. We do so by studying the last buffer in which ⟨x, y⟩ appears. The
only way that y′ can displace y in the top-k for x in our algorithm is if at the end of this buffer the
following holds true:

    ct(x, y′) / ct(y′) > ct(x, y) / ct(y),

where the t subscripts denote the respective counts at the end of the buffer. We will show that this
event occurs with very small probability. We do so by bounding the probability of the following
three unlikely events.
If we assume all c(x, y) are above some threshold m, then with only small probability (i.e., 1/2^m)
will the last buffer containing ⟨x, y⟩ appear before the midpoint of the stream. So, let us assume
that the buffer appears after the midpoint of the stream. Then, the probability that ⟨x, y′⟩ appears
more than (1 + δ)c(x, y′)/2 times by this point can be bounded by the Chernoff bound to be at most
exp(−c(x, y′)δ²/8). Similarly, the probability that y′ appears less than (1 − δ)c(y′)/2 times by this
point can be bounded by exp(−c(y′)δ²/4). Putting all these together, we get that

    Pr[ ct(x, y′)/ct(y′) > (1 + δ)c(x, y′) / ((1 − δ)c(y′)) ] < 1/2^m + exp(−c(x, y′)δ²/8) + exp(−c(y′)δ²/4).
We now make use of the assumption that the PMIs are distributed in a Zipfian manner. Let us take
the rank of the PMI of y′ to be i (and recall that the rank of the PMI of y is at most k). Then, by the
Zipfian assumption, we have that PMI(x, y) ≥ (i/k)^s PMI(x, y′), where s is the Zipfian parameter.
This can be re-written as

    c(x, y)/c(y) ≥ (c(x, y′)/c(y′)) · 2^{((i/k)^s − 1) PMI(x, y′)}.

We can now put all these results together to bound the probability of the event:

    Pr[ ct(x, y′)/ct(y′) > ct(x, y)/ct(y) ] ≤ 1/2^m + exp(−c(x, y′)δ²/8) + exp(−c(y′)δ²/4),

where we take δ = (2^{((i/k)^s − 1) PMI(x, y′)} − 1) / (2^{((i/k)^s − 1) PMI(x, y′)} + 1).
Hence, the probability that some low-ranked y′ will displace a y in the top-k PMI of x is low. Taking
a union bound across all possible y′ ∈ Y gives a bound of 1/2^m + |Y|(exp(−c(x, y′)δ²/8) +
exp(−c(y′)δ²/4)).⁷
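Plugging illustrative numbers into this bound makes its behavior concrete; the sketch below follows the derivation above, with δ computed from the Zipfian gap, and every argument value made up for illustration:

```python
import math

def displacement_bound(m, c_xy2, c_y2, i, k, s, pmi_xy2):
    """Bound on Pr[y' (true PMI rank i > k) displaces a top-k y for x],
    following the derivation above. Arguments: threshold m, counts
    c(x, y') and c(y'), rank i, list size k, Zipf parameter s, PMI(x, y')."""
    delta_exp = ((i / k) ** s - 1) * pmi_xy2         # exponent in 2^{...}
    r = 2.0 ** delta_exp
    delta = (r - 1) / (r + 1)
    return (0.5 ** m
            + math.exp(-c_xy2 * delta ** 2 / 8)
            + math.exp(-c_y2 * delta ** 2 / 4))
```

For m = 100, c(x, y′) = 200, c(y′) = 1000, i = 2k, s = 1, and PMI(x, y′) = 2, the bound is about 1.2 × 10⁻⁴, and it degrades as the threshold m shrinks.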
6 Experiments
We evaluated our algorithm for online, k-best PMI with a set of experiments on collecting verbal
triggers in a document collection. For each document, we considered all verb::verb pairs, non-stemmed; e.g., wrote::ruled, fighting::endure, argued::bore. For each unique verb x observed in the
stream, our goal was to recover the top-k verbs y with the highest PMI given x.⁸ Readers may peek
ahead to Table 2 for example results.
Experiments were based on 100,000 NYTimes articles taken from the Gigaword Corpus [Graff,
2003]. Tokens were tagged for part of speech (POS) using SVMTool [Giménez and Màrquez, 2004],
a POS tagger based on SVMlight [Joachims, 1999].
Our stream was constructed by considering all pairwise combinations of the roughly 82 (on average)
verb tokens occurring in each document. Where D ∈ 𝒟 is a document in the collection, let Dv
refer to the list of verbal tokens, not necessarily unique. The length of our stream, n, is therefore:

    n = Σ_{D ∈ 𝒟} (|Dv| choose 2).⁹
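The stream length follows directly from the per-document verb counts; a minimal sketch (the uniform counts below are illustrative, since the paper's documents vary in length around the 82-verb average):

```python
from math import comb

def stream_length(verb_counts):
    """n = sum over documents of C(|Dv|, 2) pairwise verb combinations."""
    return sum(comb(d, 2) for d in verb_counts)
```

With 100,000 documents of exactly 82 verbs each, this gives 332,100,000 pairs; the heavier tail of real document lengths pushes the reported figure toward 900 million.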
While research into methods for space-efficient, approximate counting has been motivated by a
desire to handle exceptionally large datasets (using limited resources), we restricted ourselves here
to a dataset that would allow for comparison to explicit, non-approximate counting (implemented
through use of standard hashtables).¹⁰ We will refer to such non-approximate counting as perfect
counting. Finally, to guard against spurious results arising from rare terms, we employed the same
c(x, y) > 5 threshold as used by Church and Hanks [1990].
We did not heavily tune our counting mechanism to this task, other than to experiment with a few
different bases (settling on a base of 1.25). As such, empirical results for approximate counting
should be taken as a lower bound, while the perfect counting results are the upper bound on what an
approximate counter might achieve.

⁷ For streams composed as described in our experiments, this bound becomes powerful as m approaches 100 or beyond (recalling that both c(x, y′), c(y′) > m). Experimentally we observed this to be conservative in that such errors appear unlikely even when using a smaller threshold (e.g., m = 5).
⁸ Unlike in the case of Rosenfeld [1994], we allowed for triggers to occur anywhere in a document, rather than exclusively in the preceding context. This can be viewed as a restricted version of the experiments of Chambers and Jurafsky [2008], where we consider all verb pairs, regardless of whether they are assumed to possess a co-referent argument.
⁹ For the experiments here, n = 869,641,588, or roughly 900 million, ⟨x, y⟩ pairs. If fully enumerated as text, this stream would have required 12GB of uncompressed storage. Vocabulary size, |X| = |Y|, was roughly 30 thousand (28,972) unique tokens.
¹⁰ That is, since our algorithm is susceptible to adversarial manipulation of the stream, it is important to establish the experimental upper bound that is possible assuming zero error due to the use of probabilistic counts.

Figure 3: 3(a): Normalized, mean PMI for top-50 y for each x. 3(b): Accuracy of top-5 rank lists using the
standard measurement, and when using an instrumented counter that had oracle access to which ⟨x, y⟩ were
above threshold.

Table 1: When using a perfect counter and a buffer of 50, 500 and 5,000 documents, for k = 1, 5, 10: the
accuracy of the resultant k-best lists when compared to the first k, k + 1 and k + 2 true values.

    Buffer |  k = 1 (ranks 1 / 2 / 3) |  k = 5 (ranks 5 / 6 / 7) |  k = 10 (ranks 10 / 11 / 12)
    50     |  94.10 / 98.75 / 99.45   |  97.25 / 99.13 / 99.60   |  98.05 / 99.26 / 99.63
    500    |  94.14 / 98.81 / 99.53   |  97.31 / 99.16 / 99.62   |  98.12 / 99.29 / 99.65
    5000   |  94.69 / 98.93 / 99.60   |  97.76 / 99.30 / 99.71   |  98.55 / 99.46 / 99.74
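A generic Morris-style approximate counter with base 1.25 can be sketched as below; this illustrates the idea of probabilistic counting with small state, and is not necessarily the exact counter construction of Van Durme and Lall [2009]:

```python
import random

class MorrisCounter:
    """Morris-style approximate counter with base b: stores only the
    exponent c, increments c with probability b**-c, and estimates the
    true count as (b**c - 1) / (b - 1)."""
    def __init__(self, base=1.25, rng=None):
        self.base = base
        self.c = 0
        self.rng = rng or random.Random(0)

    def increment(self):
        if self.rng.random() < self.base ** -self.c:
            self.c += 1

    def estimate(self):
        return (self.base ** self.c - 1) / (self.base - 1)
```

A smaller base trades memory (more exponent levels) for lower variance, which is why a base like 1.25 is attractive when counts feed ratio statistics such as PMI.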
We measured the accuracy of resultant k-best lists by first collecting the true top-50 elements for
each x, offline, to be used as a key. Then, for a proposed k-best list, accuracy was calculated
at different ranks of the gold standard. For example, the elements of a proposed 10-best list will
optimally fully intersect with the first 10 elements of the gold standard. If the list is not
perfect, we would hope that an element incorrectly positioned at, e.g., rank 9, should really be of
rank 12, rather than rank 50.
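The accuracy-at-rank computation can be sketched as follows (function and variable names are ours):

```python
def accuracy_at_rank(proposed, gold, rank):
    """Fraction of a proposed k-best list found within the first `rank`
    elements of the gold-standard list."""
    gold_prefix = set(gold[:rank])
    hits = sum(1 for y in proposed if y in gold_prefix)
    return hits / len(proposed)
```

For instance, a proposed 3-best list whose third element truly belongs at rank 4 scores 2/3 at rank 3 but 1.0 at rank 4, matching the k, k + 1, k + 2 columns of Table 1.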
Using this gold standard, Figure 3(a) shows the normalized, mean PMI scores according to rank.
This curve supports our earlier theoretical assumption that PMI over Y is a Zipfian distribution for
a given x.
6.1 Results
In Table 1 we see that when using a perfect counter, our algorithm succeeds in recovering almost
all top-k elements. For example, when k = 5, reading 500 documents at a time, our rank lists are
97.31% accurate. Further, of those collected triggers that are not truly in the top-5, most were either
in the top 6 or 7. As there appears to be minimal impact based on buffer size, we fixed |B| = 500
documents for the remainder of our experiments.¹¹ This result supports the intuition behind our
misclassification probability bound: while it is possible for an adversary to construct a stream that
would mislead our online algorithm, this seems to rarely occur in practice.
Shown in Figure 3(b) are the accuracy results when using an approximate counter and a buffer size of
500 documents, to collect top-5 rank lists. Two results are presented. The standard result is based on
comparing the rank lists to the key just as with the results when using a perfect counter. A problem
with this evaluation is that the hard threshold used for both generating the key, and the results for
perfect counting, cannot be guaranteed to hold when using approximate counts. It is possible that
¹¹ Strictly speaking, |B| is no larger than the maximum length interval in the stream resulting from enumerating the contents of, e.g., 500 consecutive documents.
Table 2: Top 5 verbs, y, for x = bomb, laughed and vetoed. Left columns are based on using a perfect
counter, while right columns are based on an approximate counter. Numeral prefixes denote rank of element in
true top-k lists. All results are with respect to a buffer of 500 documents.

    x = bomb                      | x = laughed                   | x = vetoed
    perfect        approximate    | perfect        approximate    | perfect        approximate
    1:detonate     1:detonate     | 1:tickle       -:panang       | 1:vetoing      1:vetoing
    2:assassinate  7:bombed       | 2:tickling     1:tickle       | 2:overridden   2:overridden
    3:bomb         2:assassinate  | 3:tickled      3:tickled      | 3:overrode     4:override
    4:plotting     4:plotting     | 4:snickered    2:tickling     | 4:override     5:latches
    5:plotted      8:expel        | 5:captivating  4:snickered    | 5:latches      7:vetoed
some ⟨x, y⟩ pair that occurs perhaps 4 or 5 times may be misreported as occurring 6 times or more.
In this case, the ⟨x, y⟩ pair will not appear in the key in any position, thus creating an artificial
upper bound on the possible accuracy as according to this metric. For purposes of comparison, we
instrumented the approximate solution to use a perfect counter in parallel. All PMI values were
computed as before, using approximate counts, but the perfect counter was used just in verifying
whether a given pair exceeded the threshold. In this way the approximate counting solution saw just
those elements of the stream as observed in the perfect counting case, allowing us to evaluate the
ranking error introduced by the counter, irrespective of issues in "dipping below" the threshold. As
seen in the instrumented curve, top-5 rank lists generated when using the approximate counter are
composed primarily of elements truly ranked 10 or below.
6.2 Examples
Table 2 contains the top-5 most associated verbs as according to our algorithm, both when using
a perfect and an approximate counter. As can be seen for the perfect counter, and as suggested by
Table 1, in practice it is possible to track PMI scores over buffered intervals with a very high degree
of accuracy. For the examples shown (and more generally throughout the results), the resultant
k-best lists are near perfect matches to those computed offline.

When using an approximate counter we continue to see reasonable results, with some error introduced due to the use of probabilistic counting. The rank 1 entry reported for x = laughed exemplifies the earlier referenced issue of the approximate counter being able to incorrectly dip below the
threshold for terms that the gold standard would never see.¹²
7 Conclusions
In this paper we provided the first study of estimating top-k PMI online. We showed that while a
precise solution comes at a high cost in the streaming model, there exists a simple algorithm that
performs well on real data. An avenue of future work is to drop the assumption that each of the
top-k PMI values is maintained explicitly and see whether there is an algorithm that is feasible for
the streaming version of the problem or if a similar lower bound still applies. Another promising
approach would be to apply the tools of two-way associations to this problem [Li and Church, 2007].

An experiment of Schooler and Anderson [1997] assumed words in NYTimes headlines operated as
cues for the retrieval of memory structures associated with co-occurring terms. Missing from that
report was how such cues might be accumulated over time. The work presented here can be taken as
a step towards modeling resource-constrained, online cue learning, where an appealing description
of our model involves agents tracking co-occurring events over a local temporal window (such as
a day), and regularly consolidating this information into long-term memory (when they "sleep").
Future work may continue this direction by considering data from human trials.
Acknowledgements Special thanks to Dan Gildea, as well as Rochester HLP/Jaeger-lab members
for ideas and feedback. The first author was funded by a 2008 Provost's Multidisciplinary Award
from the University of Rochester, and NSF grant IIS-0328849. The second author was supported in
part by the NSF grants CNS-0905169 and CNS-0910592, funded under the American Recovery and
Reinvestment Act of 2009 (Public Law 111-5), and by NSF grant CNS-0716423.
¹² I.e., the token panang, incorrectly tagged as a verb, is sparsely occurring.
References

[Bloom, 1970] Burton H. Bloom. Space/time trade-offs in hash coding with allowable errors. Communications of the ACM, 13:422–426, 1970.
[Chambers and Jurafsky, 2008] Nathanael Chambers and Dan Jurafsky. Unsupervised Learning of Narrative Event Chains. In Proceedings of ACL, 2008.
[Chklovski and Pantel, 2004] Timothy Chklovski and Patrick Pantel. VerbOcean: Mining the Web for Fine-Grained Semantic Verb Relations. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP-04), pages 33–40, Barcelona, Spain, 2004.
[Church and Hanks, 1990] Kenneth Church and Patrick Hanks. Word Association Norms, Mutual Information and Lexicography. Computational Linguistics, 16(1):22–29, March 1990.
[Frank et al., 2007] Michael C. Frank, Noah D. Goodman, and Joshua B. Tenenbaum. A Bayesian framework for cross-situational word learning. In Advances in Neural Information Processing Systems 20, 2007.
[Giménez and Màrquez, 2004] Jesús Giménez and Lluís Màrquez. SVMTool: A general POS tagger generator based on Support Vector Machines. In Proceedings of LREC, 2004.
[Graff, 2003] David Graff. English Gigaword. Linguistic Data Consortium, Philadelphia, 2003.
[Joachims, 1999] Thorsten Joachims. Making large-scale SVM learning practical. In B. Schölkopf, C. Burges, and A. Smola, editors, Advances in Kernel Methods - Support Vector Learning, chapter 11, pages 169–184. MIT Press, Cambridge, MA, 1999.
[Li and Church, 2007] Ping Li and Kenneth W. Church. A sketch algorithm for estimating two-way and multi-way associations. Computational Linguistics, 33(3):305–354, 2007.
[Lin, 1998] Dekang Lin. Automatic Retrieval and Clustering of Similar Words. In Proceedings of COLING-ACL, 1998.
[Rosenfeld, 1994] Ronald Rosenfeld. Adaptive Statistical Language Modeling: A Maximum Entropy Approach. PhD thesis, Computer Science Department, Carnegie Mellon University, April 1994.
[Schooler and Anderson, 1997] Lael J. Schooler and John R. Anderson. The role of process in the rational analysis of memory. Cognitive Psychology, 32(3):219–250, 1997.
[Talbot and Osborne, 2007] David Talbot and Miles Osborne. Randomised Language Modelling for Statistical Machine Translation. In Proceedings of ACL, 2007.
[Talbot, 2009] David Talbot. Succinct approximate counting of skewed data. In Proceedings of IJCAI, 2009.
[Van Durme and Lall, 2009] Benjamin Van Durme and Ashwin Lall. Probabilistic Counting with Randomized Storage. In Proceedings of IJCAI, 2009.
Predicting the Optimal Spacing of Study: A Multiscale Context Model of Memory

Michael C. Mozer*, Harold Pashler†, Nicholas Cepeda‡, Robert Lindsey*, & Ed Vul§
* Dept. of Computer Science, University of Colorado
† Dept. of Psychology, UCSD
‡ Dept. of Psychology, York University
§ Dept. of Brain and Cognitive Sciences, MIT
Abstract
When individuals learn facts (e.g., foreign language vocabulary) over multiple
study sessions, the temporal spacing of study has a significant impact on memory
retention. Behavioral experiments have shown a nonmonotonic relationship between spacing and retention: short or long intervals between study sessions yield
lower cued-recall accuracy than intermediate intervals. Appropriate spacing of
study can double retention on educationally relevant time scales. We introduce a
Multiscale Context Model (MCM) that is able to predict the influence of a particular study schedule on retention for specific material. MCM's prediction is based
on empirical data characterizing forgetting of the material following a single study
session. MCM is a synthesis of two existing memory models (Staddon, Chelaru,
& Higa, 2002; Raaijmakers, 2003). On the surface, these models are unrelated
and incompatible, but we show they share a core feature that allows them to be
integrated. MCM can determine study schedules that maximize the durability of
learning, and has implications for education and training. MCM can be cast either
as a neural network with inputs that fluctuate over time, or as a cascade of leaky
integrators. MCM is intriguingly similar to a Bayesian multiscale model of memory (Kording, Tenenbaum, & Shadmehr, 2007), yet MCM is better able to account
for human declarative memory.
1 Introduction
Students often face the task of memorizing facts such as foreign language vocabulary or state capitals. To retain such information for a long time, students are advised not to cram their study, but
rather to study over multiple, well-spaced sessions. This advice is based on a memory phenomenon
known as the distributed practice or spacing effect (Cepeda, Pashler, Vul, Wixted, & Rohrer, 2006).
The spacing effect is typically studied via a controlled experimental paradigm in which participants
are asked to study unfamiliar paired associates (e.g., English-Japanese vocabulary) in two sessions.
The time between sessions, known as the intersession interval or ISI, is manipulated across participants. Some time after the second study session, a cued-recall test is administered to the participants,
e.g., "What is 'rabbit' in Japanese?" The lag between the second session and the test is known as the
retention interval or RI.
Recall accuracy as a function of ISI follows a characteristic curve. The solid line of Figure 1a
sketches this curve, which we will refer to as the spacing function. The left edge of the graph corresponds to massed practice, when session two immediately follows session one. Recall accuracy
rises dramatically as the ISI increases, reaches a peak, and falls off gradually. The ISI corresponding
to the peak (the optimal ISI) depends strongly on RI: a meta-analysis by Cepeda et al. (2006) suggests a power-law relationship.

[Figure 1 appears here: panel (a) plots % recall against ISI; panel (b) shows pools 1 . . . N with traces m1, m2, . . . , mN.]
Figure 1: (a) The spacing function (solid line) depicts recall at test following two study sessions
separated by a given ISI; the forgetting function (dashed line) depicts recall as a function of the lag
between study and test. (b) A sketch of the Multiscale Context Model.

The optimal ISI almost certainly depends on the specific materials
being studied and the manner of study as well. For educationally relevant RIs on the order of weeks
and months, the effect of spacing can be tremendous: optimal spacing can double retention over
massed practice (Cepeda et al., in press).
The spacing function is related to another observable measure of retention, the forgetting function,
which characterizes recall accuracy following a single study session as a function of the lag between
study and test. For example, suppose participants in the experiment described above learned material
in study session 1, and were then tested on the material immediately prior to study session 2. As the
ISI increased, session 1 memories would decay. This decay is shown in the dashed line of Figure 1a.
Typical forgetting functions follow a generalized power-law decay, of the form P(recall) = A(1 + Bt)^{-C}, where A, B, and C are constants, and t is the study-test lag (Wixted & Carpenter, 2007).
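For reference, the generalized power-law forgetting function is a one-liner; the constants below are illustrative placeholders, not fitted values from any experiment:

```python
def recall_probability(t, A=1.0, B=0.1, C=0.5):
    """Generalized power-law forgetting: P(recall) = A * (1 + B*t)**(-C).
    A, B, C are illustrative constants; t is the study-test lag."""
    return A * (1 + B * t) ** (-C)
```

The function is 1 at t = 0 (for A = 1) and decays monotonically, but far more slowly than an exponential at long lags, which is the empirically important property.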
Our goal is to develop a model of long-term memory that characterizes the memory-trace strength
of items learned over two or more sessions. The model predicts recall accuracy as a function of the
RI, taking into account the study schedule (the ISI or set of ISIs determining the spacing of study sessions). We would like to use this model to prescribe the optimal study schedule.
The spacing effect is among the best known phenomena in cognitive psychology, and many theoretical explanations have been suggested. Two well developed computational models of human
memory have been elaborated to explain the spacing effect (Pavlik & Anderson, 2005; Raaijmakers,
2003). These models are necessarily complex: the brain contains multiple, interacting memory systems whose decay and interference characteristics depend on the specific content being stored and
its relationship to other content. Consequently, these computational theories are fairly flexible and
can provide reasonable post-hoc fits to spacing effect data, but we question their predictive value.
Rather than developing a general theory of memory, we introduce a model that specifically predicts
the shape of the spacing function. Because the spacing function depends not only on the RI, but also
on the nature of the material being learned, and the manner and amount of study, the model requires
empirical constraints. We propose a novel approach to obtaining a predictive model: we collect
behavioral data to determine the forgetting function for the specific material being learned. We then
use the forgetting function, which is based on a single study session, to predict the spacing function,
which is based on two or more study sessions. Such a predictive model has significant implications
for education and training. The model can be used to search for the ISI or set of ISIs that maximizes
expected recall accuracy for a fixed RI. Although the required RI is not known in practical settings,
one can instead optimize over RI as a random variable with an assumed distribution.
2 Accounts of the spacing effect
We review two existing theories proposed to explain the spacing effect, and then propose a synthesis
of these theories. The two theories appear to be unrelated and mutually exclusive on the surface,
but in fact share a core unifying feature. In contrast to most modeling work appearing in the NIPS
volumes, our model is cast at Marr's implementation level, not at the level of a computational theory.
However, after introducing our model and showing its predictive power, we discuss an intriguingly
similar Bayesian theory of memory adaptation (Kording et al., 2007). Although our model has a
strong correspondence with the Bayesian model, their points of difference seem to be crucial for
predicting behavioral phenomena of human declarative memory.
2.1 Encoding-variability theories
One class of theories proposed to explain the spacing effect focuses on the notion of encoding
variability. According to these theories, when an item is studied, a memory trace is formed that
incorporates the current psychological context. Psychological context includes conditions of study,
internal state of the learner, and recent experiences of the learner. Retrieval of a stored item depends
at least in part on the similarity of the contexts at the study and test. If psychological context is
assumed to fluctuate randomly over time, two study sessions close together in time will have similar
contexts. Consequently, at the time of a recall test, either both study contexts will match the test
context or neither will. Increasing the ISI can thus prove advantageous because the test context
will have higher likelihood of matching one study context or the other. Greater contextual variation
enhances memory on this account by making for less redundancy in the underlying memory traces.
However, increasing the ISI also incurs a retrieval cost because random drift makes the first-study
context increasingly less likely to match the test context. The optimal ISI depends on the tradeoff
between the retrieval benefit and cost at test.
Raaijmakers (2003) developed an encoding variability theory by incorporating time-varying contextual drift into the well-known Search of Associative Memory (SAM) model (Raaijmakers & Shiffrin,
1981), and explained a range of data from the spacing literature. In this model, the contextual state is
characterized by a high-dimensional binary vector. Each element of the vector indicates the presence
or absence of a particular contextual feature. The contextual state evolves according to a stochastic
process in which features flip from absent to present at rate β01 and from present to absent at rate
β10. If the context is sampled at two points in time with lag Δt, the probability that a contextual
feature will be present at both times is

P(feature present at time t and t + Δt) = ρ² + ρ(1 − ρ) exp(−Δt/τ),    (1)

where τ ≡ 1/(β01 + β10) and ρ ≡ β01 τ is the expected proportion of features present at any instant.
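Equation 1 can be verified numerically. The sketch below simulates the two-state feature process directly (exponential holding times between flips, started from the stationary distribution) and compares the empirical probability that a feature is present at both sample times against the closed form. The flip rates here are arbitrary illustrative values, not quantities from the paper.

```python
import math
import random

def both_on_prob(dt, b01, b10):
    """Equation 1: P(feature present at both t and t + dt)."""
    tau = 1.0 / (b01 + b10)
    rho = b01 * tau
    return rho ** 2 + rho * (1.0 - rho) * math.exp(-dt / tau)

def simulate_both_on(dt, b01, b10, n_samples=100_000, seed=0):
    """Direct simulation of the flip process with exponential holding times."""
    rng = random.Random(seed)
    rho = b01 / (b01 + b10)
    hits = 0
    for _ in range(n_samples):
        state = rng.random() < rho        # stationary state at time t
        present_at_start = state
        t = 0.0
        while True:
            t += rng.expovariate(b10 if state else b01)
            if t >= dt:
                break
            state = not state
        hits += present_at_start and state
    return hits / n_samples

b01, b10 = 0.2, 0.8   # illustrative flip rates
for dt in (0.1, 1.0, 10.0):
    print(f"dt={dt:>4}: closed form={both_on_prob(dt, b01, b10):.4f} "
          f"simulated={simulate_both_on(dt, b01, b10):.4f}")
```

As dt grows the co-occurrence probability decays from ρ toward the independence baseline ρ², matching the contextual-drift intuition in the text.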
To assist in understanding the mechanisms of SAM, we find it useful to recast the model as a neural
network. The input layer to this neural net is a pool of binary valued neurons that represent the
contextual state at the current time; the output layer consists of a set of memory elements, one per
item to be stored. To simplify notation throughout this paper, we'll describe this model and all
others in terms of a single-item memory, allowing us to avoid an explicit index term for the item
being stored or retrieved. The memory element for the item under consideration has an activation
level, m, which is a linear function of the context unit activities: m = Σ_j w_j c_j, where c_j is the
binary activation level of context unit j and w_j is the strength of connection from context unit j. The
probability of retrieval of the item is assumed to be monotonically related to m.
When an item is studied, its connection strengths are adjusted according to a Hebbian learning rule
with an upper limit on the connection strength:

Δw_j = min(1 − w_j, c_j m̂),    (2)

where m̂ = 1 if the item was just presented for study, or 0 otherwise. When an item is studied, the
weights for all contextual features present at the time of study will be strengthened. Later retrieval
is more likely if the context at test matches the context at study: the memory element receives
a contribution only when an input is active and its connection strength is nonzero. Thus, after a
single study and lag ?t, retrieval probability is directly related to Equation 1. When an item has
been studied twice, retrieval will be more robust if the two study opportunities strengthen different
weights, which occurs when the ISI is large and the contextual states do not overlap significantly.
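This retrieval mechanism can be sketched in a few lines. The simulation below is a deliberately minimal reading of the recast model: binary context features drift as independent two-state processes, each study sets weights via the capped Hebbian rule (Equation 2), and retrieval strength at test is the sum over active context units with nonzero weights. All constants are illustrative. Note that this stripped-down version captures only the coverage benefit of spacing (retrieval strength grows monotonically with ISI toward an asymptote here); producing an interior optimal ISI requires additional machinery such as the retrieval-dependent update discussed next in the text.

```python
import random

def sample_states(times, b01, b10, rng):
    """State (0/1) of one context feature at each of the given sorted
    times; exponential holding times, stationary start."""
    rho = b01 / (b01 + b10)
    state = rng.random() < rho
    next_flip = rng.expovariate(b10 if state else b01)
    out = []
    for target in times:
        while next_flip < target:
            state = not state
            next_flip += rng.expovariate(b10 if state else b01)
        out.append(state)
    return out

def retrieval_strength(isi, ri, n_features=20_000, b01=0.2, b10=0.8, seed=0):
    """Mean per-feature retrieval strength m at test, after studies at
    times 0 and isi and a test at isi + ri."""
    rng = random.Random(seed)
    m = 0
    for _ in range(n_features):
        c0, c1, c2 = sample_states((0.0, isi, isi + ri), b01, b10, rng)
        w = 1 if (c0 or c1) else 0   # capped Hebbian rule, Equation 2
        m += w * c2                  # weight contributes only if unit active
    return m / n_features

for isi in (0.1, 1.0, 5.0, 20.0):
    print(f"ISI={isi:>4}: retrieval strength at RI=5 = "
          f"{retrieval_strength(isi, 5.0):.4f}")
```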
One other feature of SAM is crucial for explaining spacing-effect data. After an item has been
studied at least once, SAM assumes that the memory trace resulting from further study is influenced
by whether the item is accessible to retrieval at the time of study. Specifically, SAM assumes that
the weights have effectively decayed to zero if recall fails. Other memory models similarly claim
that memory traces are weaker if an item is inaccessible to retrieval at the time of study (e.g., Pavlik
& Anderson, 2005), which we label as the retrieval-dependent update assumption.
We have described the key components of SAM that explain the spacing effect, but the model has
additional complexity, including a short-term memory store, inter-item interference, and additional
context based on associativity and explicit cues. Even with all this machinery, SAM has a serious
limitation. Spacing effects occur on many time scales (Cepeda et al., 2006). SAM can explain
effects on any one time scale (e.g., hours), but the same model cannot explain spacing effects on a
different time scale (e.g., months). The reason is essentially that the exponential decay in context
overlap bounds the time scale at which the model operates.
2.2 Predictive-utility theories
We now turn to another class of theories that has been proposed to explain the spacing effect. These
theories, which we will refer to as predictive-utility theories, are premised on the assumption that
memory is limited in capacity and/or is imperfect and allows intrusions. To achieve optimal performance, memories should therefore be erased if they are not likely to be needed in the future.
Anderson and Milson (1989) proposed a rational analysis of memory from which they estimated
the future need probability of a stored trace. When an item is studied multiple times with a given
ISI, the rational analysis suggests that the need probability drops off rapidly following the last study
once an interval of time greater than the ISI has passed. Consequently, increasing the ISI should
lead to a more persistent memory trace. Although this analysis yields a reasonable qualitative match
to spacing-effect data, no attempt was made to make quantitative predictions.
The notion of predictive utility is embedded in the multiple time-scale or MTS model of Staddon
et al. (2002). In MTS, each item to be stored is represented by a dedicated cascade of N leaky
integrators. The activation of integrator i, xi , decays over time according to:
x_i(t + Δt) = x_i(t) exp(−Δt/τ_i),    (3)
where τ_i is the decay time constant. The probability of retrieving the item is related to the total
trace strength, s_N, where s_k = Σ_{j=1}^{k} x_j. The integrators are ordered from shortest to longest time
constant, i.e., τ_i < τ_{i+1} for all i. When an item is studied, the integrators receive a bump in activity
according to a cascaded error-correction update,
Δx_i = ε max(0, 1 − s_{i−1}),    (4)

which is based on the idea that an integrator at some time scale τ_i receives a boost only if integrators
at shorter time scales fail to represent the item at the time it is studied. The constant ε is a step
size. When an item is repeatedly presented for study with short ISIs, the trace can successfully
be represented by the integrators with short time constants, and consequently, the trace will decay
rapidly. Increasing the spacing shifts the representation to integrators with slower decay rates.
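A compact simulation makes the MTS dynamics concrete. The sketch below implements Equations 3 and 4 for a three-integrator cascade with hypothetical time constants and step size (not the fitted values), taking the running sum in Equation 4 over already-updated integrators, which is one reading of the ambiguous update order. With these constants, the massed schedule yields a stronger trace at a short retention interval while the spaced schedule wins at a long one.

```python
import math

TAUS = (1.0, 10.0, 100.0)   # hypothetical time constants
EPS = 0.5                   # hypothetical step size

def study(x, eps=EPS):
    """Cascaded error-correction update (Equation 4)."""
    s = 0.0                  # running trace strength of integrators 1..i-1
    for i in range(len(x)):
        x[i] += eps * max(0.0, 1.0 - s)
        s += x[i]

def decay(x, dt):
    """Exponential decay of each integrator (Equation 3)."""
    return [xi * math.exp(-dt / tau) for xi, tau in zip(x, TAUS)]

def trace_after(isi, ri):
    """Total trace strength s_N after study, ISI, study, RI."""
    x = [0.0] * len(TAUS)
    study(x)
    x = decay(x, isi)
    study(x)
    x = decay(x, ri)
    return sum(x)

for ri in (1.0, 50.0):
    massed, spaced = trace_after(0.1, ri), trace_after(20.0, ri)
    print(f"RI={ri:>4}: massed={massed:.3f}  spaced={spaced:.3f}")
```

The crossover arises because closely spaced studies are absorbed by the fast integrator, which has decayed away by the long retention interval, whereas a long ISI forces the boosts onto slower integrators.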
MTS was designed to explain rate-sensitive habituation data from the animal learning literature: the
fact that recovery following spaced stimuli is slower than following massed. We tried fitting MTS
to human-memory data and were unable to obtain quantitatively accurate fits.
3 The multiscale context model (MCM)
SAM and MTS are motivated by quite different considerations, and appear to be unrelated mechanisms. Nonetheless, they share a fundamental property: both suppose an exponential decay of internal representations over time (compare Equations 1 and 3). When we establish a correspondence
between the mechanisms in SAM and MTS that produce exponential decay, we obtain a synthesis
of the two models that incorporates features of each. Essentially, we take from SAM the notion of
contextual drift and retrieval-dependent update, and from MTS the multiscale representation and the
cascaded error-correction memory update, and we obtain a new model which we call the Multiscale
Context Model or MCM. MCM can be described as a neural network whose input layer consists of
N pools of time-varying context units. Units in pool i operate with time constant τ_i. The relative
size of pool i is γ_i. MCM is thus like SAM with multiple pools of context units. MCM can also be
described in terms of N leaky integrators, where integrator i has time constant τ_i and activity scaled
by γ_i. MCM is thus like MTS with the addition of scaling factors.
Before formally describing MCM, we detour to explain the choice of the parameters {τ_i} and {γ_i}.
As the reader might infer from our description of SAM and MTS, these parameters characterize
memory decay, extending Equation 3 such that the total trace strength at time t is defined as:
s_N(t) = Σ_{i=1}^{N} γ_i exp(−t/τ_i) x_i(0).
If x_i(0) = 1 for all i (which is the integrator activity following the first study in MTS), the trace
strength as a function of time is a mixture of exponentials. To match the form of human forgetting
(Figure 1), this mixture must approximate a power function. We can show that a generalized power
function can be exactly expressed as an infinite mixture of exponentials:
A(1 + Bt)^(−C) = A ∫_0^∞ Inv-Gamma(τ; C, 1) exp(−Bt/τ) dτ,
where Inv-Gamma(τ; C, 1) is the inverse-gamma probability density function with shape parameter
C and scale 1, and the equality is valid for t ≥ 0 and C > 0. We have identified several finite
mixture-of-exponential formulations that empirically yield an extremely good approximation to arbitrary power functions over ten orders of magnitude. The formulation we prefer defines τ_i and γ_i
in terms of four primitive parameters:

τ_i = α β^i  and  γ_i = μ ν^i / Σ_{j=1}^{N} ν^j.    (5)

With β > 1 and ν < 1, the higher-order components (i.e., larger indices) represent exponentially
longer time scales with exponentially smaller weighting. As a result, truncating higher-order mixture
components has little impact on the approximation on shorter time scales. Consequently, we simply
need to pick a value of N that allows for a representation of many orders of magnitude of time.
Given N and human forgetting data collected in an experiment, we can search for the parameters
{α, β, μ, ν} that obtain a least squares fit to the data. Given the human forgetting function,
then, we can completely determine the {τ_i} and {γ_i}. In all simulation results we report, we fixed
N = 100, although equivalent results are obtained for N = 50 or N = 200.
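The inverse-gamma identity underlying this construction can be checked numerically. After the substitution u = 1/τ, the inverse-gamma mixture of exp(−Bt/τ) becomes a Gamma(C, 1)-weighted mixture of exp(−Btu), which the sketch below integrates by midpoint quadrature and compares to the generalized power function. Parameter values are arbitrary.

```python
import math

def power_law(t, A=1.0, B=1.0, C=1.0):
    """Generalized power-law forgetting curve."""
    return A * (1.0 + B * t) ** (-C)

def gamma_mixture(t, A=1.0, B=1.0, C=1.0, n=20000, u_max=50.0):
    """Quadrature of the mixture integral after substituting u = 1/tau."""
    du = u_max / n
    total = 0.0
    for k in range(n):
        u = (k + 0.5) * du                               # midpoint rule
        pdf = u ** (C - 1.0) * math.exp(-u) / math.gamma(C)  # Gamma(C,1)
        total += pdf * math.exp(-B * t * u) * du
    return A * total

for C in (1.0, 2.0):
    for t in (0.1, 1.0, 10.0, 100.0):
        print(f"C={C} t={t:>6}: power={power_law(t, C=C):.6f} "
              f"mixture={gamma_mixture(t, C=C):.6f}")
```

The agreement across four orders of magnitude of t illustrates why a modest number of exponential components with geometrically spaced time constants suffices in practice.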
3.1 Casting MCM as a cascade of leaky integrators
Assume that, as in MTS, a dedicated set of N leaky integrators holds the memory of each item
to be learned. Let x_i denote the activity of integrator i associated with the item, and let s_i be the
average strength of the first i integrators, weighted by the {γ_j} terms:

s_i = (1/Γ_i) Σ_{j=1}^{i} γ_j x_j,  where Γ_i = Σ_{j=1}^{i} γ_j.

The recall probability is simply related to the net strength of the item: P(recall) = min(1, s_N).
When an item is studied, its integrators receive a boost in activity. Integrator i receives a boost that
depends on how close the average strength of the first i integrators is to full strength, i.e.,
Δx_i = ε(1 − s_i),    (6)

where ε is a step size. We adopt the retrieval-dependent update assumption of SAM, and fix ε = 1
for an item that is unsuccessfully recalled at the time of study, and ε = ε_r > 1 for an item that is
successfully recalled.
This description of MCM is identical to MTS except the following. (1) MTS weighs all integrators
equally when combining the individual integrator activities. MCM uses a γ-weighted average. (2)
MTS provides no guidance in setting the τ and γ constants; MCM constrains these parameters based
on the human forgetting function. (3) The integrator update magnitude is retrieval dependent, as in
SAM. (4) The MCM update rule (Equation 6) is based on s_i, whereas the MTS rule (Equation 4)
is based on s_{i−1}. This modification is motivated by the neural net formulation of MCM, in which
using s_i allows the update to be interpreted as performing gradient ascent in prediction ability.
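Putting the pieces together, the leaky-integrator form of MCM fits in a short script. Everything below is a hedged sketch: the Equation 5 parameters are arbitrary illustrative values rather than quantities fitted to a forgetting function, and the all-or-none retrieval-dependent step size is replaced by its expectation so the simulation stays deterministic.

```python
import math

def make_mcm(alpha=1.0, beta=2.0, mu=0.5, nu=0.8, n=20):
    """Equation 5 form: geometric time constants, normalized geometric weights."""
    taus = [alpha * beta ** i for i in range(1, n + 1)]
    z = sum(nu ** j for j in range(1, n + 1))
    gammas = [mu * nu ** i / z for i in range(1, n + 1)]
    return taus, gammas

def weighted_means(x, gammas):
    """s_i: gamma-weighted mean of integrators 1..i (pre-update state)."""
    out, num, den = [], 0.0, 0.0
    for xi, gi in zip(x, gammas):
        num += gi * xi
        den += gi
        out.append(num / den)
    return out

def study(x, gammas, eps):
    """Cascaded error-correction boost (Equation 6)."""
    s = weighted_means(x, gammas)
    return [xi + eps * (1.0 - si) for xi, si in zip(x, s)]

def decay(x, taus, dt):
    return [xi * math.exp(-dt / ti) for xi, ti in zip(x, taus)]

def recall_prob(x, gammas):
    return min(1.0, sum(gi * xi for gi, xi in zip(gammas, x)))

def two_session_recall(isi, ri, eps_r=9.0):
    taus, gammas = make_mcm()
    x = study([0.0] * len(taus), gammas, eps=1.0)   # session 1
    x = decay(x, taus, isi)
    p = recall_prob(x, gammas)                      # recall at session 2
    eps = p * eps_r + (1.0 - p)   # expected retrieval-dependent step size
    x = decay(study(x, gammas, eps), taus, ri)      # session 2, then RI
    return recall_prob(x, gammas)

for isi in (0.5, 2.0, 8.0, 32.0, 128.0):
    print(f"ISI={isi:>6}: P(recall at RI=100) = "
          f"{two_session_recall(isi, 100.0):.3f}")
```

Sweeping the ISI for a fixed RI in this way traces out a spacing function; fitting the four parameters to an empirical forgetting function, as described in the text, is what makes the model's spacing-function predictions parameter free.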
3.2 Casting MCM as a neural network
The neural net conceptualization of MCM is depicted in Figure 1b. The input layer is like that of
SAM with the context units arranged in N pools, with γ_i being the relative size of pool i. The
activity of unit j in pool i is denoted c_ij. The context units are binary valued and units in pool i flip
with time constant τ_i. On average a fraction ρ are on at any time. (ρ has no effect on the model's
predictions, and is cancelled out in the formulation that follows.)
As depicted in Figure 1b, the model also includes a set of N memory elements for each item to
be learned. Memory elements are in one-to-one correspondence with context pools. Activation of
memory element i, denoted m_i, indicates strength of retrieval for the item based on context pools
1...i. The activation function is cascaded such that memory element i receives input from context
units in pool i as well as memory element i − 1:

m_i = m_{i−1} + Σ_j w_ij c_ij + b,

where w_ij is the connection weight from context unit j to memory element i, m_0 ≡ 0, and b =
−ρ/(1 − ρ) is a bias weight. The bias simply serves to offset spurious activity reaching the memory
elements, activity that is unrelated to the fact that the item was previously studied and stored. The
larger the fraction of context units that are on at any time (ρ), the more spurious activation there will
be that needs to be cancelled out. The probability of recalling the item is related to the activity of
memory element N: P(recall) = min(1, m_N).
When the item is studied, the weights from context units in pool i are adjusted according to an
update rule that performs gradient descent in an error measure E_i = e_i², where e_i = 1 − m_i/Γ_i.
This error is minimized when memory element i reaches activation level Γ_i (defined earlier as
the proportion of units in the entire context pool that contributes to activity at stage i). The weight
update that performs gradient descent in E_i is

Δw_ij = [ε / (Nρ(1 − ρ))] e_i c_ij,    (7)

where ε is a learning rate and the denominator of the first term is a normalization constant which
can be folded into the learning rate. As in SAM, ε is assumed to be contingent on retrieval success
at the start of the study trial, in the manner we described previously.
What is the motivation for minimizing the prediction error at every stage, versus minimizing the
prediction error just at the final stage, E_N? To answer this question, note that there are two consequences of minimizing the error E_i to zero for any i. First, reducing E_i will also likely serve to
reduce E_l for all l > i. Second, achieving this objective will allow the weights {w_lj : l > i} to all be set
to zero without any effect on the memory. Essentially, there is no need to store information for a
longer time scale than it is needed.
This description of MCM is identical to SAM except: (1) SAM has a single temporal scale of representation; MCM has a multiscale representation. (2) SAM's memory update rule can be interpreted
as Hebbian learning; MCM's update can be interpreted as error-correction learning.
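The claim that Equation 7 performs gradient descent can be checked with finite differences: for the cascaded activation m_i = m_{i−1} + Σ_j w_ij c_ij + b and error E_i = (1 − m_i/Γ_i)², where Γ_i denotes the target activation level of element i, the gradient is ∂E_i/∂w_ij = −(2e_i/Γ_i)c_ij, so the descent direction is proportional to e_i c_ij with the constant folded into the learning rate. The sketch below verifies this with arbitrary illustrative values.

```python
import random

def finite_diff_check(seed=0, n_units=8, rho=0.25, gamma_i=0.6,
                      m_prev=0.3, h=1e-6):
    """Compare the analytic gradient of E_i with forward finite differences.
    All values are illustrative, not fitted model parameters."""
    rng = random.Random(seed)
    b = -rho / (1.0 - rho)                       # bias term
    c = [1.0 if rng.random() < rho else 0.0 for _ in range(n_units)]
    w = [rng.uniform(-0.2, 0.6) for _ in range(n_units)]

    def error(weights):
        m_i = m_prev + sum(wj * cj for wj, cj in zip(weights, c)) + b
        e_i = 1.0 - m_i / gamma_i
        return e_i * e_i

    m_i = m_prev + sum(wj * cj for wj, cj in zip(w, c)) + b
    e_i = 1.0 - m_i / gamma_i
    for j in range(n_units):
        bumped = list(w)
        bumped[j] += h
        numeric = (error(bumped) - error(w)) / h
        analytic = -2.0 * e_i * c[j] / gamma_i   # dE_i / dw_ij
        assert abs(numeric - analytic) < 1e-4
    return True

assert finite_diff_check()
print("descent direction -dE_i/dw_ij is proportional to e_i * c_ij")
```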
3.3 Relating leaky integrator and neural net characterizations of MCM
To make contact with MTS, we have described MCM as a cascade of leaky integrators, and to make
contact with SAM, we have described MCM as a neural net. One can easily verify that the leaky-integrator and neural-net descriptions of MCM are equivalent via the following correspondence between variables of the two models, where E[.] denotes the expectation over context representations:

s_i = E[m_i]/Γ_i  and  x_i = (Σ_j E[w_ij c_ij] + b) / (Nρ(1 − ρ)).
4 Simulations
Cepeda and colleagues (Cepeda, Vul, Rohrer, Wixted, & Pashler, 2008; Cepeda et al., in press)
have recently conducted well-controlled experimental manipulations of spacing involving RIs on
educationally relevant time scales of days to months. Most research in the spacing literature involves
brief RIs, on the scale of minutes to an hour, and methodological concerns have been raised with
the few well-known studies involving longer RIs (Cepeda et al., 2006). In Cepeda's experiments,
participants study a set of paired associates over two sessions. In the first session, participants are
trained until they reach a performance criterion, ensuring that the material has been successfully
encoded. At the start of the second session, participants are tested via a cued-recall paradigm, and
then are given a fixed number of study passes through all the pairs. Following a specified RI, a final
cued-recall test is administered. Recall accuracy at the start of the second session provides the basic
forgetting function, and recall accuracy at test provides the spacing function.
[Figure 2 plots: % recall vs. ISI (days); panels (a) RI = 10 days, (b) RI = 168 days, (c) RI = 168 days, (d) RIs = 7, 35, 70, 350 days, (e) RIs = 7, 35, 70, 350 days.]
Figure 2: Modeling and experimental data of Cepeda et al. (in press): (a) Experiment 1 (Swahili-English), (b) Experiment 2a (obscure facts), and (c) Experiment 2b (object names). The four RI
conditions of Cepeda et al. (2008) are modeled using (d) MCM and (e) the Bayesian multiscale
model of Kording et al. (2007). In panel (e), the peaks of the model's spacing functions are indicated
by the triangle pointers.
For each experiment, we optimized MCM's parameters, {α, β, μ, ν}, to obtain a least squares fit to
the forgetting function. These four model parameters determine the time constants and weighting
coefficients of the mixture-of-exponentials approximation to the forgetting function (Equation 5).
The model has only one other free parameter, ε_r, the magnitude of update on a trial when an item is
successfully recalled (see Equation 6). We chose ε_r = 9 for all experiments, based on hand tuning
the parameter to fit the first experiment reported here. With ε_r set, MCM is fully constrained and can
make strong predictions regarding the spacing function.
Figure 2 shows MCM's predictions of Cepeda's experiments. Panels a–c show the forgetting function
data for the experiments (open blue squares connected by dotted lines), MCM's post-hoc fit to the
forgetting function (solid blue line), the spacing function data (solid green points connected by
dotted lines), and MCM's parameter-free prediction of the spacing function (solid green line). The
individual panels show the ISIs studied and the RI. For each experiment, MCM's prediction of the
peak of the spacing function is entirely consistent with the data, and for the most part, MCM's
quantitative predictions are excellent. (In panel c, MCM's predictions are about 20% too low across
the range of ISIs.) Interestingly, the experiments in panels b and c explored identical ISIs and RIs
with two different types of material. With the coarse range of ISIs explored, the authors of these
experiments concluded that the peak ISI was the same independent of the material (28 days). MCM
(It would be extremely surprising to psychologists if the peak were in general independent of the
material, as content effects pervade the memory literature.)
Panel d presents the results of a complex study involving a single set of items studied with 11 different ISIs, ranging from minutes to months, and four RIs, ranging from a week to nearly a year. We
omit the fit to the forgetting function to avoid cluttering the graph. The data and model predictions
[Figure 3 plot: log10(optimal ISI) vs. log10(RI); series: human data, MCM, MCM regression.]
Figure 3: A meta-analysis of the literature by
Cepeda et al. (2006). Each red circle represents
a single spacing experiment in which the ISI was
varied for a given RI. The optimal ISI obtained
in the experiment is plotted against the RI on a
log-log scale. (Note that the data are intrinsically
noisy because experiments typically examine only
a small set of ISIs, from which the "optimum" is
chosen.) The X's represent the mean from 1000
replications of MCM for a given RI with randomly
drawn parameter settings (i.e., random forgetting
functions), and the dashed line is the best regression fit to the X's. Both the experimental data and
MCM show a power law relationship between optimal ISI and RI.
are color coded by RI, with higher recall accuracy for shorter RIs. MCM predicts the spacing functions with absolutely spectacular precision, considering the predictions are fully constrained and
parameter free. Moreover, MCM anticipates the peaks of the spacing functions, with the curvature
of the peak decreasing with the RI, and the optimal ISI increasing with the RI.
In addition to these results, MCM also predicts the probability of recall at test conditional on successful or unsuccessful recall during the test at the start of the second study session. As explained
in Figure 3, MCM obtains a sensible parameter-free fit to a meta-analysis of the experimental literature by Cepeda et al. (2006). Finally, MCM is able to post-hoc fit classic studies from the spacing
literature (for which forgetting functions are not available).
5 Discussion
MCM's blind prediction of 7 different spacing functions is remarkable considering that the domain's
complexity (the content, manner and amount of study) is reduced to four parameters, which are fully
determined by the forgetting function. Obtaining empirical forgetting functions is straightforward.
Obtaining empirical evidence to optimize study schedules, especially when more than two sessions
are involved, is nearly infeasible. MCM thus offers a significant practical tool for educators in
devising study schedules. Optimizing study schedules with MCM is straightforward, and particularly
useful considering that MCM can optimize not only for a known RI but for RI as a random variable.
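As an illustration of that use-case, the sketch below optimizes the ISI against an assumed RI distribution. The recall model here is a stand-in toy (a small cascade of leaky integrators with made-up constants), not a fitted MCM, and the RI distribution is invented; the point is only the structure of the computation: evaluate expected recall under the RI distribution for each candidate ISI and take the best.

```python
import math

TAUS = (1.0, 10.0, 100.0)   # toy stand-in model, not a fitted MCM
EPS = 0.5

def recall_after(isi, ri):
    """Recall after two study sessions separated by isi, tested ri after
    the second session (toy cascade model)."""
    x = [0.0] * len(TAUS)
    for gap in (isi, ri):
        s = 0.0
        for i in range(len(x)):          # cascaded boost at study time
            x[i] += EPS * max(0.0, 1.0 - s)
            s += x[i]
        x = [xi * math.exp(-gap / tau) for xi, tau in zip(x, TAUS)]
    return min(1.0, sum(x))

RI_DIST = {5.0: 0.3, 30.0: 0.4, 90.0: 0.3}   # assumed RI distribution

def expected_recall(isi):
    return sum(p * recall_after(isi, ri) for ri, p in RI_DIST.items())

candidates = (0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0, 80.0)
best = max(candidates, key=expected_recall)
print(f"best ISI = {best}, expected recall = {expected_recall(best):.3f}")
```

Replacing the toy recall function with a fitted MCM turns this into exactly the study-schedule search described in the text, for either a fixed RI or an RI treated as a random variable.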
MCM arose from two existing models, MTS and SAM, and all three models are characterized at
Marr's implementation or algorithmic levels, not at the level of a computational theory. Kording et
al. (2007) have proposed a Bayesian memory model which has intriguing similarities to MCM, and
has the potential of serving as the complementary computational theory. The model is a Kalman
filter (KF) with internal state variables that decay exponentially at different rates. The state predicts
the appearance of an item in the temporal stream of experience. The dynamics of MCM can be
exactly mapped onto the KF, with τ related to the decay of a variable, and γ to its internal noise
level. However, the KF model has a very different update rule, based on the Kalman gain. We
have tried to fit experimental data with the KF model, but have not been satisfied with the outcome.
For example, Figure 2e shows a least-squares fit to the six free parameters of the KF model to the
Cepeda et al. (2008) data. (Two parameters determine the range of time scales; two specify internal
and observation noise levels; and two perform an affine transform from internal memory strength to
recall probability.) In terms of sum-squared error, the model shows a reasonable fit, but the model
clearly misses the peaks of the spacing functions, and in fact predicts a peak that is independent
of RI. Notably, the KF model is a post-hoc fit to the spacing functions, whereas MCM produces a
true prediction of the spacing functions, i.e., parameters of MCM are determined without peeking
at the spacing function. Exploring many parameterizations of the KF model, we find that the model
generally predicts decreasing or constant optimal ISIs as a function of the RI. In contrast, MCM
necessarily produces an increasing optimal ISI as a function of the RI, consistent with all behavioral
data. It remains an important and intriguing challenge to unify MCM and the KF model; each has
something to offer the other.
References
Anderson, J. R., & Milson, R. (1989). Human memory: An adaptive perspective. Psych. Rev., 96, 703–719.
Cepeda, N. J., Coburn, N., Rohrer, D., Wixted, J. T., Mozer, M. C., & Pashler, H. (in press). Optimizing distributed practice: Theoretical analysis and practical implications. Journal of Experimental Psychology.
Cepeda, N. J., Pashler, H., Vul, E., Wixted, J. T., & Rohrer, D. (2006). Distributed practice in verbal recall tasks: A review and quantitative synthesis. Psychological Bulletin, 132, 354–380.
Cepeda, N. J., Vul, E., Rohrer, D., Wixted, J. T., & Pashler, H. (2008). Spacing effects in learning: A temporal ridgeline of optimal retention. Psychological Science, 19, 1095–1102.
Kording, K. P., Tenenbaum, J. B., & Shadmehr, R. (2007). The dynamics of memory as a consequence of optimal adaptation to a changing body. Nature Neuroscience, 10, 779–786.
Pavlik, P. I., & Anderson, J. R. (2005). Practice and forgetting effects on vocabulary memory: An activation-based model of the spacing effect. Cognitive Science, 29(4), 559–586.
Raaijmakers, J. G. W. (2003). Spacing and repetition effects in human memory: application of the SAM model. Cognitive Science, 27, 431–452.
Raaijmakers, J. G. W., & Shiffrin, R. M. (1981). Search of associative memory. Psych. Rev., 88, 93–134.
Staddon, J. E. R., Chelaru, I. M., & Higa, J. J. (2002). Habituation, memory and the brain: The dynamics of interval timing. Behavioural Processes, 57, 71–88.
Wixted, J. T., & Carpenter, S. K. (2007). The Wickelgren power law and the Ebbinghaus savings function. Psychological Science, 18, 133–134.
On Invariance in Hierarchical Models
Jake Bouvrie, Lorenzo Rosasco, and Tomaso Poggio
Center for Biological and Computational Learning
Massachusetts Institute of Technology
Cambridge, MA USA
{jvb,lrosasco}@mit.edu, [email protected]
Abstract
A goal of central importance in the study of hierarchical models for object recognition, and indeed the mammalian visual cortex, is that of understanding quantitatively the trade-off between invariance and selectivity, and how invariance and discrimination properties contribute towards providing an improved representation
useful for learning from data. In this work we provide a general group-theoretic
framework for characterizing and understanding invariance in a family of hierarchical models. We show that by taking an algebraic perspective, one can provide
a concise set of conditions which must be met to establish invariance, as well
as a constructive prescription for meeting those conditions. Analyses in specific
cases of particular relevance to computer vision and text processing are given,
yielding insight into how and when invariance can be achieved. We find that the
minimal intrinsic properties of a hierarchical model needed to support a particular
invariance can be clearly described, thereby encouraging efficient computational
implementations.
1 Introduction
Several models of object recognition drawing inspiration from visual cortex have been developed
over the past few decades [3, 8, 6, 12, 10, 9, 7], and have enjoyed substantial empirical success. A
central theme found in this family of models is the use of Hubel and Wiesel's simple and complex cell ideas [5]. In the primary visual cortex, simple units compute features by looking for the occurrence of a preferred stimulus in a region of the input (the "receptive field"). Translation invariance is
then explicitly built into the processing pathway by way of complex units which pool locally over
simple units. The alternating simple-complex filtering/pooling process is repeated, building increasingly invariant representations which are simultaneously selective for increasingly complex stimuli.
In a computer implementation, the final representation can then be presented to a supervised learning
algorithm.
Following the flow of processing in a hierarchy from the bottom upwards, the layerwise representations gain invariance while simultaneously becoming selective for more complex patterns. A goal of
central importance in the study of such hierarchical architectures and the visual cortex alike is that of
understanding quantitatively this invariance-selectivity tradeoff, and how invariance and selectivity
contribute towards providing an improved representation useful for learning from examples. In this
paper, we focus on hierarchical models incorporating an explicit attempt to impose transformation
invariance, and do not directly address the case of deep layered models without local transformation
or pooling operations (e.g. [4]).
In a recent effort, Smale et al. [11] have established a framework which makes possible a more precise characterization of the operation of hierarchical models via the study of invariance and discrimination properties. However, Smale et al. study invariance in an implicit, rather than constructive,
fashion. In their work, two cases are studied: invariance with respect to image rotations and string
reversals, and the analysis is tailored to the particular setting. In this paper, we reinterpret and extend the invariance analysis of Smale et al. using a group-theoretic language towards clarifying and
unifying the general properties necessary for invariance in a family of hierarchical models. We show
that by systematically applying algebraic tools, one can provide a concise set of conditions which
must be met to establish invariance, as well as a constructive prescription for meeting those conditions. We additionally find that when one imposes the mild requirement that the transformations of
interest have group structure, a broad class of hierarchical models can only be invariant to orthogonal transformations. This result suggests that common architectures found in the literature might
need to be rethought and modified so as to allow for broader invariance possibilities. Finally, we
show that our framework automatically points the way to efficient computational implementations
of invariant models.
The paper is organized as follows. We first recall important definitions from Smale et al. Next, we
extend the machinery of Smale et al. to a more general setting allowing for general pooling functions, and give a proof for invariance of the corresponding family of hierarchical feature maps. This
contribution is key because it shows that several results in [11] do not depend on the particular choice
of pooling function. We then establish a group-theoretic framework for characterizing invariance in
hierarchical models expressed in terms of the objects defined here. Within this framework, we turn
to the problem of invariance in two specific domains of practical relevance: images and text strings.
Finally, we conclude with a few remarks summarizing the contributions and relevance of our work.
All proofs are omitted here, but can be found in the online supplementary material [2]. The reader
is assumed to be familiar with introductory concepts in group theory. An excellent reference is [1].
2 Invariance of a Hierarchical Feature Map
We first review important definitions and concepts concerning the neural response feature map presented in Smale et al. The reader is encouraged to consult [11] for a more detailed discussion. We
will draw attention to the conditions needed for the neural response to be invariant with respect
to a family of arbitrary transformations, and then generalize the neural response map to allow for
arbitrary pooling functions. The proof of invariance given in [11] is extended to this generalized
setting. The proof presented here (and in [11]) hinges on a technical "Assumption" which must be
verified to hold true, given the model and the transformations to which we would like to be invariant.
Therefore the key step to establishing invariance is verification of this Assumption. After stating the
Assumption and how it figures into the overall picture, we explore its verification in Section 3. There
we are able to describe, for a broad range of hierarchical models (including a class of convolutional
neural networks [6]), the necessary conditions for invariance to a set of transformations.
2.1 Definition of the Feature Map and Invariance
First consider a system of patches of increasing size associated to successive layers of the hierarchy,
v_1 ⊆ v_2 ⊆ · · · ⊆ v_n ⊆ S, with v_n taken to be the size of the full input. Here layer n is the top-most layer, and the patches are pieces of the domain on which the input data are defined. The set S could contain, for example, points in ℝ² (in the case of 2D graphics) or integer indices (the case of strings). Until Section 4, the data are seen as general functions, however it is intuitively helpful to think of the special case of images, and we will use a notation that is suggestive of this particular case. Next, we'll need spaces of functions on the patches, Im(v_i). In many cases it will only be necessary to work with arbitrary successive pairs of patches (layers), in which case we will denote by u the smaller patch, and v the next larger patch. We next introduce the transformation sets H_i, i = 1, . . . , n, intrinsic to the model. These are abstract sets in general, however here we will take them to be comprised of translations with h ∈ H_i defined by h : v_i → v_{i+1}. Note that by construction, the functions h ∈ H_i implicitly involve restriction. For example, if f ∈ Im(v_2) is an image of size v_2 and h ∈ H_1, then f ∘ h is a piece of the image of size v_1. The particular piece is determined by h. Finally, to each layer we also associate a dictionary of templates, Q_i ⊆ Im(v_i). The templates could be randomly sampled from Im(v_i), for example.
Given the ingredients above, the neural response N_m(f) and associated derived kernel K̂_m are defined as follows.
Definition 1 (Neural Response). Given a non-negative valued, normalized, initial reproducing kernel K̂_1, the m-th derived kernel K̂_m, for m = 2, . . . , n, is obtained by normalizing K_m(f, g) = ⟨N_m(f), N_m(g)⟩_{L²(Q_{m−1})}, where
N_m(f)(q) = max_{h∈H} K̂_{m−1}(f ∘ h, q),    q ∈ Q_{m−1},
with H = H_{m−1}.
Here a kernel is normalized by taking K̂(f, g) = K(f, g)/√(K(f, f) K(g, g)). Note that the neural
response decomposes the input into a hierarchy of parts, analyzing sub-regions at different scales.
The neural response and derived kernels describe in compact, abstract terms the core operations built
into the many related hierarchical models of object recognition cited above.
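As a concrete illustration of the recursion in Definition 1, the sketch below computes N_m and K̂_m for one-dimensional arrays with max pooling. The patch sizes, template dictionaries, and the normalized-correlation choice of initial kernel are hypothetical stand-ins, not prescribed by the model.

```python
import numpy as np

def k_hat(m, f, g, sizes, templates, k1):
    """Normalized derived kernel K-hat_m on patches of length sizes[m-1]."""
    if m == 1:
        return k1(f, g)
    k = lambda a, b: float(np.dot(neural_response(m, a, sizes, templates, k1),
                                  neural_response(m, b, sizes, templates, k1)))
    return k(f, g) / np.sqrt(k(f, f) * k(g, g))

def neural_response(m, f, sizes, templates, k1):
    """N_m(f)(q) = max over translations h of K-hat_{m-1}(f o h, q)."""
    u = sizes[m - 2]                      # length of the smaller patch u
    shifts = range(len(f) - u + 1)        # all translations h : u -> v
    return np.array([max(k_hat(m - 1, f[a:a + u], q, sizes, templates, k1)
                         for a in shifts)
                     for q in templates[m - 2]])
```

Note that this naive recursion recomputes lower-layer responses many times; a practical implementation would memoize them.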
We next define a set of transformations, distinct from the Hi above, to which we would like to be
invariant. Let r ∈ R_i, i ∈ {1, . . . , n − 1}, be transformations that can be viewed as mapping either v_i to itself or v_{i+1} to itself (depending on the context in which it is applied). We rule out the degenerate translations and transformations, h or r, mapping their entire domain to a single point. When it is necessary to identify transformations defined on a specific domain v, we will use the notation r_v : v → v. Invariance of the neural response feature map can now be defined.
Definition 2 (Invariance). The feature map N_m is invariant to the domain transformation r ∈ R if N_m(f) = N_m(f ∘ r) for all f ∈ Im(v_m), or equivalently, K̂_m(f ∘ r, f) = 1 for all f ∈ Im(v_m).
In order to state the invariance properties of a given feature map, a technical assumption is needed.
Assumption 1 (from [11]). Fix any r ∈ R. There exists a surjective map φ : H → H satisfying
r_v ∘ h = φ(h) ∘ r_u    (1)
for all h ∈ H.
This technical assumption is best described by way of an example. Consider images and rotations:
the assumption stipulates that rotating an image and then taking a restriction must be equivalent to
first taking a (different) restriction and then rotating the resulting image patch. As we will describe
below, establishing invariance will boil down to verifying Assumption 1.
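For images, this can be checked mechanically on arrays: rotating an n × n image and then cropping a k × k patch at offset (i, j) equals cropping at a suitably remapped offset and then rotating the patch. A small sketch, with illustrative offsets and numpy's counterclockwise rot90 playing the role of r:

```python
import numpy as np

def restrict(img, i, j, k=2):
    """The translation h_(i,j): restrict to the k x k patch at offset (i, j)."""
    return img[i:i + k, j:j + k]

rng = np.random.default_rng(1)
n, k, i, j = 6, 2, 1, 2
f = rng.random((n, n))                    # an image of size v

# r_v composed with h: rotate the whole image, then restrict.
lhs = restrict(np.rot90(f), i, j, k)
# phi(h) composed with r_u: restrict at the remapped offset (j, n - k - i),
# then rotate the patch; the offset remapping is the surjection of Assumption 1
# written in coordinates.
rhs = np.rot90(restrict(f, j, n - k - i, k))
```

The two patches `lhs` and `rhs` agree for every legal offset, which is exactly the content of the Assumption for this transformation.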
2.2 Invariance and Generalized Pooling
We next provide a generalized proof of invariance of a family of hierarchical feature maps, where
the properties we derive do not depend on the choice of the pooling function. Given the above
assumption, invariance can be established for general pooling functions of which the max is only
one particular choice. We will first define such general pooling functions, and then describe the
corresponding generalized feature maps. The final step will then be to state an invariance result for
the generalized feature map, given that Assumption 1 holds.
Let H = H_i, with i ∈ {1, . . . , n − 1}, and let B(ℝ) denote the Borel algebra of ℝ. As in Assumption 1, we define φ : H → H to be a surjection, and let Ψ : B(ℝ++) → ℝ++ be a bounded pooling function defined for Borel sets B ∈ B(ℝ) consisting of only positive elements. Here ℝ++ denotes the set of strictly positive reals. Given a positive functional F acting on elements of H, we define the set F(H) ∈ B(ℝ) as
F(H) = {F[h] | h ∈ H}.
Note that since φ is surjective, φ(H) = H, and therefore (F ∘ φ)(H) = F(H).
With these definitions in hand, we can define a more general neural response as follows. For H = H_{m−1} and all q ∈ Q = Q_{m−1}, let the neural response be given by
N_m(f)(q) = (Ψ ∘ F)(H),
where
F[h] = K̂_{m−1}(f ∘ h, q).
Given Assumption 1, we can now prove invariance of a neural response feature map built from the general pooling function Ψ.
Theorem 1. Given any function Ψ : B(ℝ++) → ℝ++, if the initial kernel satisfies K̂_1(f, f ∘ r) = 1 for all r ∈ R, f ∈ Im(v_1), then
N_m(f) = N_m(f ∘ r)
for all r ∈ R, f ∈ Im(v_m) and m ≤ n.
We give a few practical examples of the pooling function Ψ.
Maximum: The original neural response is recovered by setting Ψ(B) = sup B.
Averaging: We can consider average pooling by setting Ψ(B) = ∫_{x∈B} x dμ. If H has a measure μ_H, then a natural choice for μ is the induced push-forward measure μ_H ∘ F^{−1}. The measure μ_H may be simply uniform or, in the case of a finite set H, discrete. Similarly, we may consider more general weighted averages.
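Concretely, both pooling choices act on the finite set of kernel values {F[h] : h ∈ H}; the following sketch (function names are ours) spells them out:

```python
import numpy as np

def psi_max(values):
    """Max pooling: recovers the original neural response, Psi(B) = sup B."""
    return max(values)

def psi_average(values, weights=None):
    """Average pooling: Psi(B) = integral of x over B with respect to the
    push-forward of a measure on H; uniform weights by default."""
    values = np.asarray(values, dtype=float)
    if weights is None:
        weights = np.full(len(values), 1.0 / len(values))
    return float(np.dot(weights, values))
```

Passing non-uniform `weights` corresponds to the weighted averages mentioned above.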
3 A Group-Theoretic Invariance Framework
This section establishes general definitions and conditions needed to formalize a group-theoretic
concept of invariance. When Assumption 1 holds, then the neural response map can be made invariant to the given set of transformations. Proving invariance thus reduces to verifying that the
Assumption actually holds, and is valid. A primary goal of this paper is to place this task within
an algebraic framework so that the question of verifying the Assumption can be formalized and
explored in full generality with respect to model architecture, and the possible transformations. Formalization of Assumption 1 culminates in Definition 3 below, where purely algebraic conditions
are separated from conditions stemming from the mechanics of the hierarchy. This separation results in a simplified problem because one can then tackle the algebraic questions independent of and
untangled from the model architecture.
Our general approach is as follows. We will require that R is a subset of a group and then use
algebraic tools to understand when and how Assumption 1 can be satisfied given different instances
of R. If R is fixed, then the assumption can only be satisfied by placing requirements on the sets of
built-in translations Hi , i = 1, . . . , n. Therefore, we will make quantitative, constructive statements
about the minimal sets of translations associated to a layer required to support invariance to a set of
transformations. Conversely, one can fix Hi and then ask whether the resulting feature map will be
invariant to any transformations. We explore this perspective as well, particularly in the examples
of Section 4, where specific problem domains are considered.
3.1 Formulating Conditions for Invariance
Recall that v_i ⊆ S. Because it will be necessary to translate in S, it is assumed that an appropriate notion of addition between the elements of S is given. If G is a group, we denote the (left) action of G on S by A : G × S → S. Given an element g ∈ G, the notation A_g : S → S will be utilized. Since A is a group action, it satisfies (A_g ∘ A_{g′})(x) = A_{gg′}(x) for all x ∈ S and all g, g′ ∈ G. Consider an arbitrary pair of successive layers with associated patch sizes u and v, with u ⊆ v ⊆ S. Recall that the definition of the neural response involves the "built-in" translation functions h : u → v, for h ∈ H = H_u. Since S has an addition operation, we may parameterize h ∈ H explicitly as h_a(x) = x + a for x ∈ u and parameter a ∈ v such that (u + a) ⊆ v. The restriction behavior of the translations in H prevents us from simply generating a group out of the elements of H. To get around this difficulty, we will decompose the h ∈ H into a composition of two functions: a translation group action and an inclusion.
Let S generate a group of translations T by defining the injective map
S → T,  a ↦ t_a.    (2)
That is, to every element a ∈ S we associate a member of the group T whose action corresponds to translation in S by a: A_{t_a}(x) = x + a for x, a ∈ S. (Although we assume the specific case of translations throughout, the sets of intrinsic operations H_i may more generally contain other kinds of transformations. We assume, however, that T is abelian.) Furthermore, because the translations H can be parameterized by an element of S, one can apply Equation (2) to define an injective map τ : H → T by h_a ↦ t_a. Finally, we define ι_u : u ↪ S to be the canonical inclusion of u into S. We can now rewrite h_a : u → v as
h_a = A_{t_a} ∘ ι_u.
Note that because a satisfies (u + a) ⊆ v by definition, im(A_{t_a} ∘ ι_u) ⊆ v automatically.
In the statement of Assumption 1, the transformations r ∈ R can be seen as maps from u to itself, or from v to itself, depending on which side of Equation (1) they are applied. To avoid confusion we denoted the former case by r_u and the latter by r_v. Although r_u and r_v are the same "kind" of transformation, one cannot in general associate to each "kind" of transformation r ∈ R a single element of some group as we did in the case of translations above. The group action could very well be different depending on the context. We will therefore consider r_u and r_v to be distinct transformations, loosely associated to r. In our development, we will make the important assumption that the transformations r_u, r_v ∈ R can be expressed as actions of elements of some group, and denote this group by R. More precisely, for every r_u ∈ R, there is assumed to be a corresponding element ρ_u ∈ R whose action satisfies A_{ρ_u}(x) = r_u(x) for all x ∈ u, and similarly, for every r_v ∈ R, there is assumed to be a corresponding element ρ_v ∈ R whose action satisfies A_{ρ_v}(x) = r_v(x) for all x ∈ v. The distinction between ρ_u and ρ_v will become clear in the case of feature maps defined on functions whose domain is a finite set (such as strings). In the case of images, we will see that ρ_u = ρ_v.
Assumption 1 requires that r_v ∘ h = h′ ∘ r_u for h, h′ ∈ H, with the map φ : h ↦ h′ onto. We now restate this condition in group-theoretic terms. Define T̃ = τ(H_u) ⊆ T to be the set of group elements corresponding to H_u. Set h = h_a, h′ = h_b, and denote also by r_u, r_v the elements of the group R corresponding to the given transformation r ∈ R. The Assumption says in part that r_v ∘ h = h′ ∘ r_u for some h′ ∈ H. This can now be expressed as
A_{r_v} ∘ A_{t_a} ∘ ι_u = A_{t_b} ∘ ι_u ∘ A_{r_u} ∘ ι_u    (3)
for some t_b ∈ T̃. In order to arrive at a purely algebraic condition for invariance, we will need to understand and manipulate compositions of group actions. However, on the right-hand side of Equation (3) the translation A_{t_b} is separated from the transformation A_{r_u} by the inclusion ι_u. We will therefore need to introduce an additional constraint on R. This constraint leads to our first condition for invariance: if x ∈ u, then we require that A_{r_u}(x) ∈ u for all r ∈ R. One can now see that if this condition is met, then verifying Equation (3) reduces to checking that
A_{r_v} ∘ A_{t_a} = A_{t_b} ∘ A_{r_u},    (4)
and that the map t_a ↦ t_b is onto.
The next step is to turn compositions of actions A_x ∘ A_y into an equivalent action of the form A_{xy}. To do this, one needs R and T to be subgroups of the same group G so that the associativity property of group actions applies. A general way to accomplish this is to form the semidirect product
G = T ⋊ R.    (5)
Recall that the semidirect product G = X ⋊ Y is a way to put two subgroups X, Y together where X is required to be normal in G, and X ∩ Y = {1} (the usual direct product requires both subgroups to be normal). In our setting G is easily shown to be isomorphic to a group with normal subgroup T and subgroup R where each element may be written in the form g = tr for t ∈ T, r ∈ R. We will see below that we do not lose generality by requiring T to be normal. Note that although this construction precludes R from containing the transformations in T, allowing R to contain translations is an uninteresting case.
Consider now the action A_g for g ∈ G = T ⋊ R. Returning to Equation (4), we can apply the associativity property of actions and see that Equation (4) will hold as long as
r_v T̃ = T̃ r_u    (6)
for every r ∈ R. This is our second condition for invariance, and is a purely algebraic requirement concerning the groups R and T, distinct from the restriction-related conditions involving the patches u and v.
The two invariance conditions we have described thus far combine to capture the content of Assumption 1, but in a manner that separates group-related conditions from constraints due to restriction and the nested nature of an architecture's patch domains. We can summarize the invariance conditions in the form of a concise definition that can be applied to establish invariance of the neural response feature maps N_m(f), 2 ≤ m ≤ n, with respect to a set of transformations. Let R̃ ⊆ R be the set of transformations for which we would like to prove invariance, in correspondence with R.
Definition 3 (Compatible Sets). The subsets R̃ ⊆ R and T̃ ⊆ T are compatible if all of the following conditions hold:
1. For each r ∈ R̃, r_v T̃ = T̃ r_u. When r_u = r_v for all r ∈ R̃, this means that the normalizer of T̃ in R̃ is R̃.
2. Left transformations r_v never take a point in v outside of v, and right transformations r_u never take a point in u/v outside of u/v (respectively):
im A_{r_v} ∘ ι_v ⊆ v,   im A_{r_u} ∘ ι_u ⊆ u,   im A_{r_u} ∘ ι_v ⊆ v,
for all r ∈ R̃.
3. Translations never take a point in u outside of v:
im A_t ∘ ι_u ⊆ v
for all t ∈ T̃.
The final condition above has been added to ensure that any set of translations T̃ we might construct satisfies the implicit assumption that the hierarchy's translation functions h ∈ H are maps which respect the definition h : u → v.
If R̃ and T̃ are compatible, then for each t_a ∈ T̃ Equation (3) holds for some t_b ∈ T̃, and the map t_a ↦ t_b is surjective from T̃ → T̃ (by Condition (1) above). So Assumption 1 holds.
As will become clear in the following section, the tools available to us from group theory will
provide insight into the structure of compatible sets.
3.2 Orbits and Compatible Sets
Suppose we assume that R̃ is a subgroup (rather than just a subset), and ask for the smallest compatible T̃. We will show that the only way to satisfy Condition (1) in Definition 3 is to require that T̃ be a union of R̃-orbits under the action
(t, r) ↦ r_v t r_u^{−1}    (7)
for t ∈ T, r ∈ R̃. This perspective is particularly illuminating because it will eventually allow us to view conjugation by a transformation r as a permutation of T̃, thereby establishing surjectivity of the map φ defined in Assumption 1. For computational reasons, viewing T̃ as a union of orbits is also convenient.
If r_v = r_u = r, then the action (7) is exactly conjugation, and the R̃-orbit of a translation t ∈ T is the conjugacy class C_{R̃}(t) = {r t r^{−1} | r ∈ R̃}. Orbits of this form are also equivalence classes under the relation s ∼ s′ if s′ ∈ C_{R̃}(s), and we will require T̃ to be partitioned by the conjugacy classes induced by R̃.
The following Proposition shows that, given a set of candidate translations in H, we can construct a set of translations compatible with R̃ by requiring T̃ to be a union of R̃-orbits under the action of conjugation.
Proposition 1. Let Γ ⊆ T be a given set of translations, and assume the following: (1) G ≅ T ⋊ R; (2) for each r ∈ R̃, r = r_u = r_v; (3) R̃ is a subgroup of R. Then Condition (1) of Definition 3 is satisfied if and only if T̃ can be expressed as a union of orbits of the form
T̃ = ⋃_{t∈Γ} C_{R̃}(t).    (8)
An interpretation of the above Proposition is that when T̃ is a union of R̃-orbits, conjugation by r can be seen as a permutation of T̃. In general, a given T̃ may be decomposed into several such orbits, and the conjugation action of R̃ on T̃ may not necessarily be transitive.
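When R̃ is a finite rotation group, the union of orbits in Equation (8) is directly computable. The sketch below takes R̃ = C_4 acting on integer translation vectors (conjugating t_a by a rotation r yields t_{r(a)}) and closes a given translation set under the action; the function names are ours:

```python
def c4_orbit(a):
    """Orbit of a translation vector under the cyclic rotation group C_4:
    conjugating t_a by a rotation r gives t_{r(a)}."""
    x, y = a
    orbit = set()
    for _ in range(4):
        orbit.add((x, y))
        x, y = -y, x          # rotate the vector by 90 degrees
    return orbit

def close_under_c4(translations):
    """Smallest union of C_4-orbits containing the given translations,
    i.e., Equation (8) with R-tilde = C_4."""
    closed = set()
    for a in translations:
        closed |= c4_orbit(a)
    return closed
```

Closure is idempotent, as expected of a union of orbits: applying it to an already-closed set returns the set itself.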
4 Analysis of Specific Invariances
We continue with specific examples relevant to image processing and text analysis.
4.1 Isometries of the Plane
Consider the case where G is the group M of planar isometries, u ⊆ v ⊆ S = ℝ², and H involves translations in the plane. Let O_2 be the group of orthogonal operators, and let t_a ∈ T denote a translation represented by the vector a ∈ ℝ². In this section we assume the standard basis and work with matrix representations of G when it is convenient.
We first need that T ◁ M, a property that will be useful when verifying Condition (1) of Definition 3. Indeed, from the First Isomorphism Theorem [1], the quotient space M/T is isomorphic to O_2, giving the following commutative triangle: Φ : M → O_2 factors as Φ = Φ̄ ∘ π through the canonical projection π : M → M/T, where the isomorphism Φ̄ : M/T → O_2 is given by Φ̄(mT) = Φ(m) and π(m) = mT. We recall that the kernel of a group homomorphism Φ : G → G′ is a normal subgroup of G, and that normal subgroups N of G are invariant under the operation of conjugation by elements g of G. That is, gNg^{−1} = N for all g ∈ G. With this picture in mind, the following Lemma establishes that T ◁ M, and further shows that M is isomorphic to T ⋊ R with R = O_2, and T a normal subgroup of M.
Lemma 1. For each m ∈ M and t_a ∈ T, m t_a = t_b m for some unique element t_b ∈ T.
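Lemma 1 can be checked numerically: writing an isometry as a pair m = (R, c) acting by x ↦ Rx + c, conjugating a translation t_a by m yields the translation t_b with b = Ra. The sketch below (illustrative values, not from the paper) verifies m ∘ t_a = t_{Ra} ∘ m for a sample rotation:

```python
import numpy as np

def compose(m1, m2):
    """Composition of planar isometries m = (R, c), acting as x -> R x + c."""
    R1, c1 = m1
    R2, c2 = m2
    return (R1 @ R2, R1 @ c2 + c1)

def translation(a):
    """The translation t_a as an isometry pair."""
    return (np.eye(2), np.asarray(a, dtype=float))

theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
m = (R, np.array([2.0, -1.0]))          # an arbitrary isometry
a = np.array([3.0, 4.0])

lhs = compose(m, translation(a))         # m . t_a
rhs = compose(translation(R @ a), m)     # t_b . m with b = R a
```

Uniqueness of t_b follows since two translations agreeing as maps must have equal offsets.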
We are now in a position to verify the Conditions of Definition 3 for the case of planar isometries.
Proposition 2. Let H be the set of translations associated to an arbitrary layer of the hierarchical feature map, and define the injective map τ : H → T by h_a ↦ t_a, where a is a parameter characterizing the translation. Set Γ = {τ(h) | h ∈ H}. Take G = M ≅ T ⋊ O_2 as above. The sets
R̃ = O_2,    T̃ = ⋃_{t∈Γ} C_{R̃}(t)
are compatible.
This proposition states that the hierarchical feature map may be made invariant to isometries, however one might reasonably ask whether the feature map can be invariant to other transformations.
The following Proposition confirms that isometries are the only possible transformations, with group
structure, to which the hierarchy may be made invariant in the exact sense of Definition 2.
Proposition 3. Assume that the input spaces {Im(v_i)}_{i=1}^{n−1} are endowed with a norm inherited from Im(v_n) by restriction. Then at all layers, the group of orthogonal operators O_2 is the only group of transformations to which the neural response can be invariant.
Figure 1: Example illustrating construction of an appropriate H. Suppose H initially contains the translations Γ = {h_a, h_b, h_c}. Then to be invariant to rotations, the condition on H is that H must also include the translations defined by the R̃-orbits O_{R̃}(t_a), O_{R̃}(t_b) and O_{R̃}(t_c). In this example R̃ = SO_2, and the orbits are translations to points lying on a circle in the plane.
The following Corollary is immediate:
Corollary 1. The neural response cannot be scale invariant, even if K̂_1 is.
We give a few examples illustrating the application of the Propositions above.
Example 1. If we choose the group of rotations of the plane by setting R̃ = SO_2 ◁ O_2, then the orbits O_{R̃}(a) are circles of radius ‖a‖. See Figure 1. Therefore rotation invariance is possible as long as the set T̃ (and therefore H, since we can take H = τ^{−1}(T̃)) includes translations to all points along the circle of radius ‖a‖, for each element t_a ∈ T̃. In particular, if H includes all possible translations, then Assumption 1 is verified, and we can apply Theorem 1: N_m will be invariant to rotations as long as K̂_1 is. A similar argument can be made for reflection invariance, as any rotation can be built out of the composition of two reflections.
Example 2. Analogous to the previous example, we may also consider finite cyclic groups C_n describing rotations by θ = 2π/n. In this case the construction of an appropriate set of translations is similar: we require that T̃ include at least the conjugacy classes with respect to the group C_n, C_{C_n}(t) for each t ∈ Γ = τ(H).
Example 3. Consider a simple convolutional neural network [6] consisting of two layers, one filter at the first convolution layer, and downsampling at the second layer defined by summation over all distinct k × k blocks. In this case, Proposition 2 and Theorem 1 together say that if the filter kernel is rotation invariant, then the output representation will be invariant to global rotation of the input image. This is so because convolution implies the choice K_1(f, g) = ⟨f, g⟩_{L²}, average pooling, and H = H_1 containing all possible translations. If the convolution filter z is rotation invariant, z ∘ r = z for all rotations r, then K_1(f ∘ r, z) = K_1(f, z ∘ r^{−1}) = K_1(f, z). So we can conclude invariance of the initial kernel.
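Example 3 can be verified numerically. The sketch below uses a filter that is invariant under 90-degree rotations (the subgroup of rotations realizable on a square pixel grid) and global sum pooling, the one-block case of the block summation above; image and filter values are arbitrary:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_valid(img, filt):
    """Valid 2-D correlation (equivalent to convolution here, since the
    filter is symmetric)."""
    windows = sliding_window_view(img, filt.shape)
    return np.einsum('ijkl,kl->ij', windows, filt)

# A filter z invariant under 90-degree rotations: z o r = z.
filt = np.array([[0., 1., 0.],
                 [1., 2., 1.],
                 [0., 1., 0.]])

rng = np.random.default_rng(0)
img = rng.random((8, 8))

out1 = conv2d_valid(img, filt).sum()             # convolve, then pool globally
out2 = conv2d_valid(np.rot90(img), filt).sum()   # rotate the input first
```

Because the filter is fixed by rotation, rotating the input merely permutes the window responses, so the pooled output is unchanged.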
4.2 Strings, Reflections, and Finite Groups
We next consider the case of finite length strings defined on a finite alphabet. One of the advantages
group theory provides in the case of string data is that we need not work with permutation representations. Indeed, we may equivalently work with group elements which act on strings as abstract
objects. The definition of the neural response given in Smale et al. involves translating an analysis
window over the length of a given string. Clearly translations over a finite string do not constitute a
group as the law of composition is not closed in this case. We will get around this difficulty by first
considering closed words formed by joining the free ends of a string. Following the case of circular
data where arbitrary translations are allowed, we will then consider the original setting described in
Smale et al. in which strings are finite non-circular objects.
Taking a geometric standpoint sheds light on groups of transformations applicable to strings. In
particular, one can interpret the operation of the translations in H as a circular shift of a string
followed by truncation outside of a fixed window. The cyclic group of circular shifts of an n-string
is readily seen to be isomorphic to the group of rotations of an n-sided regular polygon. Similarly,
reversal of an n-string is isomorphic to reflection of an n-sided polygon, and describes a cyclic group
of order two. As in Equation (5), we can combine rotation and reflection via a semidirect product
D_n ≅ C_n ⋊ C_2    (9)
where C_k denotes the cyclic group of order k. The resulting product group has a familiar presentation. Let t, r be the generators of the group, with r corresponding to reflection (reversal) and t corresponding to a rotation by angle 2π/n (leftward circular shift by one character). Then the group of symmetries of a closed n-string is described by the relations
D_n = ⟨t, r | t^n, r², rtrt⟩.    (10)
These relations can be seen as describing the ways in which an n-string can be left unchanged. The
first says that circularly shifting an n-string n times gives us back the original string. The second says
that reflecting twice gives back the original string, and the third says that left-shifting then reflecting
is the same as reflecting and then right-shifting. In describing exhaustively the symmetries of an
n-string, we have described exactly the dihedral group Dn of symmetries of an n-sided regular
polygon. As manipulations of a closed n-string and an n-sided polygon are isomorphic, we will use
geometric concepts and terminology to establish invariance of the neural response defined on strings
with respect to reversal. In the following discussion we will abuse notation and at times denote by u
and v the largest index associated with the patches u and v.
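The relations (10) can be checked directly on closed strings, implementing t as a leftward circular shift and r as reversal, and reading a group word right-to-left as function composition:

```python
def t(s):
    """Generator t: leftward circular shift by one character."""
    return s[1:] + s[0]

def r(s):
    """Generator r: reversal of the closed string."""
    return s[::-1]

def power(f, k, s):
    """Apply f to s exactly k times."""
    for _ in range(k):
        s = f(s)
    return s
```

For a string of length n, t^n, r², and rtrt each act as the identity, and rtr acts as the rightward shift t^{−1}.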
In the case of reflections of strings, ru is quite distinct from rv . The latter reflection, rv , is the
usual reflection of an v-sided regular polygon, whereas we would like ru to reflect a smaller u-sided
polygon. To build a group out of such operations, however, we will need to ensure that ru and rv
both apply in the context of v-sided polygons. This can be done by extending Aru to v by defining
ru to be the composition of two operations: one which reflects the u portion of a string and leaves
the rest fixed, and another which reflects the remaining (v ? u)-substring while leaving the first
u-substring fixed. In this case, one will notice that ru can be written in terms of rotations and the
usual reflection rv :
ru = rv t?u = tu rv .
(11)
This also implies that for any x ? T ,
{rxr?1 | r ? hrv i} = {rxr?1 | r ? hrv , ru i},
where we have used the fact that T is abelian, and applied the relations in Equation (10). We can
now make an educated guess as to the form of T? by starting with Condition (1) of Definition 3 and
applying the relations appearing in Equation (10). Given x ? T?, a reasonable requirement is that
there must exist an x0 ? T? such that rv x = x0 ru . In this case
x0 = rv xru = rv xrv t?u = x?1 rv rv t?u = x?1 t?u ,
(12)
where the second equality follows from Equation (11), and the remaining equalities follow from
the relations (10). The following Proposition confirms that this choice of T̃ is compatible with the
reflection subgroup of G = Dv , and closely parallels Proposition 2.
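The relation in Equation (11) is easy to check concretely on strings. Below is a minimal sketch; the conventions are our assumptions (strings as tuples, t taken as the cyclic right-shift, rv the full reversal, and composition applying the rightmost operation first), since the text leaves the shift direction implicit.

```python
# Concrete check of Equation (11), ru = rv t^{-u} = t^u rv, on closed strings.
# Assumed conventions: t is the cyclic right-shift; "g h" applies h first.

def rot_right(s, k):
    """Cyclic right-shift t^k of the tuple s (negative k shifts left)."""
    k %= len(s)
    return s[-k:] + s[:-k] if k else s

def reflect_v(s):
    """rv: full reversal of the closed v-string."""
    return s[::-1]

def reflect_u(s, u):
    """ru: reflect the first u symbols and the remaining v - u symbols separately."""
    return s[:u][::-1] + s[u:][::-1]

v, u = 7, 3
s = tuple(range(v))
assert reflect_u(s, u) == reflect_v(rot_right(s, -u))   # ru = rv t^{-u}
assert reflect_u(s, u) == rot_right(reflect_v(s), u)    # ru = t^u rv
```

The same check passes for any 0 < u < v, which is exactly the content of Equation (11).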
Proposition 4. Let H be the set of translations associated to an arbitrary layer of the hierarchical
feature map and define the injective map Ψ : H → T by ha ↦ ta , where a is a parameter characterizing the translation. Set Π = {Ψ(h) | h ∈ H}. Take G = Dn ≅ T ⋊ R, with T = Cn = ⟨t⟩ and
R = C2 = {r, 1}. The sets
R̃ = R,
T̃ = Π ∪ Π^{-1} t^{-u}
are compatible.
One may also consider non-closed strings, as in Smale et al., in which case substrings which would
wrap around the edges are disallowed. Proposition 4 in fact points to the minimum T̃ for reversals in
this scenario as well, noticing that the set of allowed translations is the same set above but with the
illegal elements removed. If we again take length u substrings of length v strings, this reduced set
of valid transformations in fact describes the symmetries of a regular (v − u + 1)-gon. We can thus
apply Proposition 4 working with the Dihedral group G = Dv−u+1 to settle the case of non-closed
strings.
5 Conclusion
We have shown that the tools offered by group theory can be profitably applied towards understanding invariance properties of a broad class of deep, hierarchical models. If one knows in advance the
transformations to which a model should be invariant, then the translations which must be built into
the hierarchy can be described. In the case of images, we showed that the only group to which a
model in the class of interest can be invariant is the group of planar orthogonal operators.
Acknowledgments
This research was supported by DARPA contract FA8650-06-C-7632, Sony, and King Abdullah
University of Science and Technology.
References
[1] M. Artin. Algebra. Prentice-Hall, 1991.
[2] J. Bouvrie, L. Rosasco, and T. Poggio. Supplementary material for "On Invariance
in Hierarchical Models". NIPS, 2009. Available online: http://cbcl.mit.edu/
publications/ps/978_supplement.pdf.
[3] K. Fukushima. Neocognitron: A self-organizing neural network model for a mechanism of
pattern recognition unaffected by shift in position. Biol. Cyb., 36:193–202, 1980.
[4] G.E. Hinton and R.R. Salakhutdinov. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
[5] D.H. Hubel and T.N. Wiesel. Receptive fields and functional architecture of monkey striate
cortex. J. Phys., 195:215–243, 1968.
[6] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document
recognition. Proc. of the IEEE, 86(11):2278–2324, November 1998.
[7] H. Lee, R. Grosse, R. Ranganath, and A. Ng. Convolutional deep belief networks for scalable unsupervised learning of hierarchical representations. In Proceedings of the Twenty-Sixth
International Conference on Machine Learning, 2009.
[8] B.W. Mel. SEEMORE: Combining color, shape, and texture histogramming in a neurally
inspired approach to visual object recognition. Neural Comp., 9:777–804, 1997.
[9] T. Serre, A. Oliva, and T. Poggio. A feedforward architecture accounts for rapid categorization.
Proceedings of the National Academy of Science, 104:6424–6429, 2007.
[10] T. Serre, L. Wolf, S. Bileschi, M. Riesenhuber, and T. Poggio. Robust object recognition with
cortex-like mechanisms. IEEE Trans. on Pattern Analysis and Machine Intelligence, 29:411–426, 2007.
[11] S. Smale, L. Rosasco, J. Bouvrie, A. Caponnetto, and T. Poggio. Mathematics of the neural response. Foundations of Computational Mathematics, June 2009. available online,
DOI:10.1007/s10208-009-9049-1.
[12] H. Wersing and E. Korner. Learning optimized features for hierarchical models of invariant
object recognition. Neural Comput., 7(15):1559–1588, July 2003.
Code-specific policy gradient rules for
spiking neurons
Henning Sprekeler* Guillaume Hennequin Wulfram Gerstner
Laboratory for Computational Neuroscience
École Polytechnique Fédérale de Lausanne
1015 Lausanne
Abstract
Although it is widely believed that reinforcement learning is a suitable tool for
describing behavioral learning, the mechanisms by which it can be implemented
in networks of spiking neurons are not fully understood. Here, we show that different learning rules emerge from a policy gradient approach depending on which
features of the spike trains are assumed to influence the reward signals, i.e., depending on which neural code is in effect. We use the framework of Williams
(1992) to derive learning rules for arbitrary neural codes. For illustration, we
present policy-gradient rules for three different example codes - a spike count
code, a spike timing code and the most general ?full spike train? code - and test
them on simple model problems. In addition to classical synaptic learning, we
derive learning rules for intrinsic parameters that control the excitability of the
neuron. The spike count learning rule has structural similarities with established
Bienenstock-Cooper-Munro rules. If the distribution of the relevant spike train
features belongs to the natural exponential family, the learning rules have a characteristic shape that raises interesting prediction problems.
1 Introduction
Neural implementations of reinforcement learning have to solve two basic credit assignment problems: (a) the temporal credit assignment problem, i.e., the question which of the actions that were
taken in the past were crucial to receiving a reward later and (b) the spatial credit assignment problem, i.e., the question, which neurons in a population were important for getting the reward and
which ones were not.
Here, we argue that an additional credit assignment problem arises in implementations of reinforcement learning with spiking neurons. Presume that we know that the spike pattern of one specific
neuron within one specific time interval was crucial for getting the reward (that is, we have already
solved the first two credit assignment problems). Then, there is still one question that remains:
Which feature of the spike pattern was important for the reward? Would any spike train with the
same number of spikes yield the same reward or do we need precisely timed spikes to get it? This
credit assignment problem is in essence the question which neural code the output neuron is (or
should be) using. It becomes particularly important, if we want to change neuronal parameters like
synaptic weights in order to maximize the likelihood of getting the reward again in the future. If
only the spike count is relevant, it might not be very effective to spend a lot of time and energy on
the difficult task of learning precisely timed spikes.
The most modest and probably most versatile way of solving this problem is not to make any assumption on the neural code but to assume that all features of the spike train were important. In
*E-Mail: [email protected]
this case, neuronal parameters are changed such that the likelihood of repeating exactly the same
spike train for the same synaptic input is maximized. This approach leads to a learning rule that
was derived in a number of recent publications [3, 5, 13]. Here, we show that a whole class of
learning rules emerges when prior knowledge about the neural code at hand is available. Using a
policy-gradient framework, we derive learning rules for neural parameters like synaptic weights or
threshold parameters that maximize the expected reward.
Our aims are to (a) develop a systematic framework that allows to derive learning rules for arbitrary
neural parameters for different neural codes, (b) provide an intuitive understanding how the resulting
learning rules work, (c) derive and test learning rules for specific example codes and (d) to provide
a theoretical basis why code-specific learning rules should be superior to general-purpose rules.
Finally, we argue that the learning rules contain two types of prediction problems, one related to
reward prediction, the other to response prediction.
2 General framework
2.1 Coding features and the policy-gradient approach
The basic setup is the following: let there be a set of different input spike trains X to a single postsynaptic neuron, which in response generates stochastic output spike trains Y . In the language of
partially observable Markov decision processes, the input spike trains are observations that provide
information about the state of the animal and the output spike trains are controls that influence the
action choice. Depending on both of these spike trains, the system receives a reward. The goal is to
adjust a set of parameters ?i of the postsynaptic neuron such that it maximizes the expectation value
of the reward.
Our central assumption is that the reward R does not depend on the full output spike train, but only
on a set of coding features Fj (Y ) of the output spike train: R = R(F, X). Which coding features F
the reward depends on is in fact a choice of a neural code, because all other features of the spike
train are not behaviorally relevant. Note that there is a conceptual difference to the notion of a neural
code in sensory processing, where the coding features convey information about input signals, not
about the output signal or rewards.
The expectation value of the reward is given by ⟨R⟩ = Σ_{F,X} R(F, X) P (F|X, θ) P (X), where
P (X) denotes the probability of the presynaptic spike trains and P (F|X, θ) the conditional probability of generating the coding feature F given the input spike train X and the neuronal parameters
θ. Note that the only component that explicitly depends on the neural parameters θi is the conditional probability P (F|X, θ). The reward is conditionally independent of the neural parameters θi
given the coding feature F. Therefore, if we want to optimize the expected reward by employing a
gradient ascent method, we get a learning rule of the form
∂t θi = η Σ_{F,X} R(F, X) P (X) ∂θi P (F|X, θ)    (1)
= η Σ_{F,X} P (X) P (F|X, θ) R(F, X) ∂θi ln P (F|X, θ) .    (2)
If we choose a small learning rate η, the average over presynaptic patterns X and coding features
F can be replaced by a time average. A corresponding online learning rule therefore results from
dropping the average over X and F :
∂t θi = η R(F, X) ∂θi ln P (F|X, θ) .    (3)
This general form of learning rule is well known in policy-gradient approaches to reinforcement
learning [1, 12].
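As a toy illustration of the score-function estimator in Equation (3), consider a "neuron" whose output is a Poisson count F with mean μ = exp(θ) and reward R(F) = F, so that d⟨R⟩/dθ = exp(θ). The model and all constants in the sketch below are our own choices for illustration, not part of the paper; the check confirms by Monte Carlo that the single-sample estimate R ∂θ ln P(F) is unbiased.

```python
# Monte-Carlo check that the single-sample estimator of Eq. (3) is unbiased.
# Toy model (an assumption): F ~ Poisson(mu) with mu = exp(theta), R(F) = F,
# so the exact gradient is d<R>/dtheta = exp(theta).
import math
import random

def sample_poisson(mu, rng):
    """Knuth's Poisson sampler; adequate for the small means used here."""
    L = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def grad_estimate(theta, rng):
    mu = math.exp(theta)
    F = sample_poisson(mu, rng)
    # d/dtheta ln P(F) = (F - mu)/mu * dmu/dtheta, and dmu/dtheta = mu here
    return F * (F - mu)          # R(F) times the score function

rng = random.Random(0)
theta = 0.5
n = 200_000
estimate = sum(grad_estimate(theta, rng) for _ in range(n)) / n
assert abs(estimate - math.exp(theta)) < 0.05   # analytic gradient is e^theta
```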
2.2 Learning rules for exponentially distributed coding features
The joint distribution of the coding features Fj can always be factorized into a set of conditional
distributions P (F|X) = ∏i P (Fi |X; F1 , ..., Fi−1 ). We now make the assumption that the conditional distributions belong to the natural exponential family (NEF): P (Fi |X; F1 , ..., Fi−1 , θ) =
h(Fi ) exp(Ci Fi − Ai (Ci )), where the Ci are parameters that depend on the input spike train X, the
coding features F1 , ..., Fi−1 and the neural parameters θi . h(Fi ) is a function of Fi and Ai (Ci ) is a
function that is characteristic for the distribution and depends only on the parameters Ci . Note that
the NEF is a relatively rich class of distributions, which includes many canonical distributions like
the Poisson, Bernoulli and the Gaussian distribution (the latter with fixed variance).
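For concreteness, the Poisson case written in the NEF form just given has h(N) = 1/N!, C = ln μ and A(C) = e^C = μ; this standard identity is checked numerically below.

```python
# The Poisson distribution in natural-exponential-family form:
# P(N) = h(N) exp(C N - A(C)) with h(N) = 1/N!, C = ln(mu), A(C) = exp(C) = mu.
import math

def poisson_pmf(n, mu):
    return mu ** n * math.exp(-mu) / math.factorial(n)

def nef_pmf(n, mu):
    C = math.log(mu)
    A = math.exp(C)                  # A(C) = e^C recovers the mean mu
    return (1.0 / math.factorial(n)) * math.exp(C * n - A)

for n in range(12):
    assert abs(poisson_pmf(n, 2.5) - nef_pmf(n, 2.5)) < 1e-12
```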
Under these assumptions, the learning rule (3) takes a characteristic shape:
∂t θi = η R(F, X) Σj (Fj − μj )/σj² ∂θi μj ,    (4)
where μi and σi² are the mean and the variance of the conditional distribution
P (Fi |X, F1 , ..., Fi−1 , θ) and therefore also depend on the input X, the coding features F1 , ..., Fi−1
and the parameters θ. Note that correlations between the coding features are implicitly accounted
for by the dependence of μi and σi on the other features. The summation over different coding
features arises from the factorization of the distribution, while the specific shape of the summands
relies on the assumption of natural exponential distributions [for a proof, cf. 12].
There is a simple intuition why the learning rule (4) performs gradient ascent on the mean reward.
The term Fj − μj fluctuates around zero on a trial-to-trial basis. If these fluctuations are positively
correlated with the trial fluctuations of the reward R, i.e., ⟨R(Fj − μj )⟩ > 0, higher values of Fj
lead to higher reward, so that the mean of the coding feature should be increased. This increase is
implemented by the term ∂θi μj , which changes the neural parameter θi such that μj increases.
3 Examples for Coding Features
In this section, we illustrate the framework by deriving policy-gradient rules for different neural
codes and show that they can solve simple computational tasks.
The neuron type we are using is a simple Poisson-type neuron model where the postsynaptic firing
rate is given by a nonlinear function ρ(u) of the membrane potential u. The membrane potential u,
in turn, is given by the sum of the EPSPs that are evoked by the presynaptic spikes, weighted with
the respective synaptic weights:
u(t) = Σ_{i,f} wi ε(t − t_i^f ) =: Σi wi PSPi (t) ,    (5)
where t_i^f denotes the time of the f -th spike in the i-th presynaptic neuron. ε(t − t_i^f ) denotes the
shape of the postsynaptic potential evoked by a single presynaptic spike at time t_i^f . For future use,
we have introduced PSPi as the postsynaptic potential that would be evoked by the i-th presynaptic
spike train alone, if the synaptic weight were unity.
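A direct transcription of Equation (5) in code, using an exponentially decaying PSP kernel ε(s) = exp(−s/τ) for s ≥ 0; the kernel shape and the time constant are illustrative choices of ours, since the paper leaves ε generic.

```python
# Sketch of Eq. (5): membrane potential as a weighted sum of PSPs.
# Assumed kernel: eps(s) = exp(-s/tau) for s >= 0, zero for s < 0.
import math

tau = 0.05                                     # PSP decay time constant (s)

def eps(s):
    return math.exp(-s / tau) if s >= 0 else 0.0

def membrane_potential(t, weights, spike_times):
    """u(t) = sum_i w_i * sum_f eps(t - t_i^f)."""
    return sum(w * sum(eps(t - tf) for tf in spikes)
               for w, spikes in zip(weights, spike_times))

weights = [2.0, -1.0]
spike_times = [[0.10, 0.30], [0.25]]           # spike times per input neuron
u = membrane_potential(0.20, weights, spike_times)
# only the first neuron's spike at t = 0.10 has arrived: u = 2 * exp(-0.1/0.05)
assert abs(u - 2.0 * math.exp(-2.0)) < 1e-12
```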
The parameters that one could optimize in this neuron model are (a) the synaptic weights and (b) parameters in the dependence of the firing rate ? on the membrane potential. The first case is the
standard case of synaptic plasticity, the second corresponds to a reward-driven version of intrinsic
plasticity [cf. 10].
3.1 Spike Count Codes: Synaptic plasticity
Let us first assume that the coding feature is the number N of spikes within a given time window [0, T ] and that the reward is delivered at the end of this period. The probability distribution for
the spike count is a Poisson distribution P (N ) = μ^N exp(−μ)/N ! with a mean μ that is given by
the integral of the firing rate ρ over the interval [0, T ]:
μ = ∫_0^T ρ(t′) dt′ .    (6)
The dependence of the distribution P (N ) on the presynaptic spike trains X and the synaptic
weights wi is hidden in the mean spike count μ, which naturally depends on those factors through
the postsynaptic firing rate ρ.
Because the Poisson distribution belongs to the NEF, we can derive a synaptic learning rule by using
equation (4) and calculating the particular form of the term ∂wi μ:
∂t wi = η R (N − μ)/μ ∫_0^T [∂u ρ](t′) PSPi (t′) dt′ .    (7)
This learning rule has structural similarities with the Bienenstock-Cooper-Munro (BCM) rule [2]:
The integral term has the structure of an eligibility trace that is driven by a simple Hebbian learning
rule. In addition, learning is modulated by a factor that compares the current spike count ("rate")
with the expected spike count ("sliding threshold" in BCM theory). Interestingly, the functional role
of this factor is very different from the one in the original BCM rule: It is not meant to introduce
selectivity [2], but rather to exploit trial fluctuations around the mean spike count to explore the
structure of the reward landscape.
We test the learning rule on a 2-armed bandit task (Figure 1A). An agent has the choice between
two actions. Depending on which of two states the agent is in, action a1 or action a2 is rewarded
(R = 1), while the other action is punished (R = −1). The state information is encoded in the rate
pattern of 100 presynaptic neurons. For each state, a different input pattern is generated by drawing
the firing rate of each input neuron independently from an exponential distribution with a mean of
10Hz. In each trial, the input spike trains are generated anew from Poisson processes with these
neuron- and state-specific rates. The agent chooses its action stochastically with probabilities that
are proportional to the spike counts of two output neurons: p(ak |s) = Nk /(N1 + N2 ). Because
the spike counts depend on the state via the presynaptic firing rates, the agent can choose different
actions for different states. Figure 1B and C show that the learning rule learns the task by suppressing
activity in the neuron that encodes the punished action.
In all simulations throughout the paper, the postsynaptic neurons have an exponential rate function
g(u) = exp (β(u − u0 )), where the threshold is u0 = 1. The sharpness parameter β is set to either
β = 1 (for the 2-armed bandit task) or β = 3 (for the spike latency task). Moreover, the postsynaptic
neurons have a membrane potential reset after each spike (i.e., relative refractoriness), so that the
assumption of a Poisson distribution for the spike counts is not necessarily fulfilled. It is worth
noting that this did not have an impeding effect on learning performance.
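To make rule (7) concrete without reproducing the full bandit, the sketch below runs it on a simpler spike-count task of our own devising: a single Poisson neuron with exponential rate ρ = exp(u − 1) learns its weights so that the spike count N tracks a target, with reward R = −(N − 5)², whose expectation is maximized at μ = 4.5. A running-mean reward baseline is subtracted for variance reduction (cf. Section 4). All constants (input rates, bin size, η, the rate cap) are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the spike-count rule (7), discretized in time.
# Task (our choice): maximize <R> with R = -(N - 5)^2; optimum at mu = 4.5.
import math
import random

rng = random.Random(0)
n_in, n_bins, dt = 10, 50, 0.02        # 10 inputs, 1 s split into 20 ms bins
rate_in = 10.0                         # input Poisson rate (Hz)
w = [0.1] * n_in
eta, r_bar = 0.002, 0.0                # learning rate, running reward baseline

def sample_poisson(mu):
    """Knuth's Poisson sampler."""
    L = math.exp(-mu)
    k, p = 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

mu_history = []
for trial in range(4000):
    # input spike indicators per neuron and time bin
    x = [[1 if rng.random() < rate_in * dt else 0 for _ in range(n_bins)]
         for _ in range(n_in)]
    rho = []
    for t in range(n_bins):
        u = sum(w[i] * x[i][t] for i in range(n_in))
        rho.append(min(math.exp(u - 1.0), 20.0))   # rate cap: numerical safety
    mu = sum(rho) * dt                 # expected spike count, Eq. (6) discretized
    N = sample_poisson(mu)
    R = -(N - 5) ** 2
    for i in range(n_in):
        elig = sum(rho[t] * x[i][t] for t in range(n_bins)) * dt  # d rho/du = rho
        w[i] += eta * (R - r_bar) * (N - mu) / mu * elig          # rule (7)
    r_bar += 0.05 * (R - r_bar)
    mu_history.append(mu)

final_mu = sum(mu_history[-200:]) / 200   # should settle near the optimum 4.5
```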
3.2 Spike Count Codes: Intrinsic plasticity
Let us now assume that the rate of the neuron is given by a function ρ(u) = g (β(u − u0 )) which
depends on the threshold parameters u0 and β. Typical choices for the function g would be an
exponential (as used in the simulations), a sigmoid or a threshold linear function g(x) = ln(1 +
exp(x)).
By intrinsic plasticity we mean that the parameters u0 and β are learned instead of or in addition
to the synaptic weights. The learning rules for these parameters are essentially the same as for the
synaptic weights, only that the derivative of the mean spike count is taken with respect to u0 and β,
respectively:
?t u0
?t ?
Z
N ??
N ?? T 0
= ?
?u0 ? = ??
?g (?(u(t) ? u0 )) dt
?
?
0
Z T
N ??
N ??
= ?
?? ? = ?
g 0 (?(u(t) ? u0 ))(u(t) ? u0 ) dt .
?
?
0
(8)
(9)
Here, g 0 = ?x g(x) denotes the derivative of the rate function g with respect to its argument.
3.3 First Spike-Latency Code: Synaptic plasticity
As a second coding scheme, let us assume that the reward depends only on the latency t∗ of the first
spike after stimulus onset. More precisely, we assume that each trial starts with the onset of the
presynaptic spike trains X and that a reward is delivered at the time of the first spike. The reward
depends on the latency of that spike, so that certain latencies are favored.
Figure 1: Simulations for code-specific learning rules. A 2-armed bandit task: The agent has to choose among
two actions a1 and a2 . Depending on the state (s1 or s2 ), a different action is rewarded (thick arrows). The
input states are modelled by different firing rate patterns of the input neurons. The probability of choosing the
actions is proportional to the spike counts of two output neurons: p(ak |s) = Nk /(N1 + N2 ). B Learning
curves of the 2-armed bandit. Blue: Spike count learning rule (7), Red: Full spike train rule (16). C Evolution
of the spike count in response to the two input states during learning. Both rewards (panel B) and spike counts
(panel C) are low-pass filtered with a time constant of 4000 trials. D Learning of first spike latencies with
the latency rule (11). Two different output neurons are to learn to fire their first spike at given target latencies
L1,2 . We present one of two fixed input spike train patterns ("stimuli") to the neurons in randomly interleaved
trials. The input spike train for each input neuron is drawn separately for each stimulus by sampling once from
a Poisson process with a rate of 10Hz. Reward is given by the negative squared difference between the target
latency (stimulus 1: L1 = 10ms, L2 = 30ms, stimulus 2: L1 = 30ms, L2 = 10ms) and the actual latency
of the trial, summed over the two neurons. The colored curves show that the first spike latencies of neuron 1
(green, red) and neuron 2 (purple, blue) converge to the target latencies. The black curve (scale on the right
axis) shows the evolution of the reward during learning.
The probability distribution of the spike latency is given by the product of the firing probability at
time t and the probability that the neuron did not fire earlier:
P (t) = ρ(t) exp(−∫_0^t ρ(t′) dt′) .    (10)
Using eq. (3) for this particular distribution, we get the synaptic learning rule:
∂t wi = η R ( [∂u ρ](t∗ ) PSPi (t∗ )/ρ(t∗ ) − ∫_0^{t∗} [∂u ρ](t′) PSPi (t′) dt′ ) .    (11)
In Figure 1D, we show that this learning rule can learn to adjust the weights of two neurons such
that their first spike latencies approximate a set of target latencies.
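Rule (11) can be sanity-checked numerically: for a discretized first-spike density (10) with ρ = exp(u), the analytic gradient of ln P(t∗) should match finite differences. The fixed input traces below stand in for PSPi(t) and, like the rate function and constants, are purely illustrative assumptions.

```python
# Numerical check of the first-spike-latency gradient, Eq. (11).
# Assumptions: rho = exp(u), toy PSP traces, discretized log-density (10)
# taken up to an additive constant (which does not affect the gradient).
import math

dt, n_bins = 0.01, 40
t_star = 25                                   # index of the first-spike bin
psp = [[math.sin(0.3 * t + i) ** 2 for t in range(n_bins)] for i in range(3)]

def log_p_first_spike(w):
    u = [sum(w[i] * psp[i][t] for i in range(3)) for t in range(n_bins)]
    rho = [math.exp(ut) for ut in u]
    return math.log(rho[t_star]) - sum(rho[t] for t in range(t_star)) * dt

def grad_rule_11(w):
    u = [sum(w[i] * psp[i][t] for i in range(3)) for t in range(n_bins)]
    rho = [math.exp(ut) for ut in u]           # d rho / du = rho for exp rate
    return [rho[t_star] * psp[i][t_star] / rho[t_star]
            - sum(rho[t] * psp[i][t] for t in range(t_star)) * dt
            for i in range(3)]

w0 = [0.3, -0.2, 0.5]
analytic = grad_rule_11(w0)
h = 1e-6
for i in range(3):
    wp = list(w0); wp[i] += h
    wm = list(w0); wm[i] -= h
    numeric = (log_p_first_spike(wp) - log_p_first_spike(wm)) / (2 * h)
    assert abs(numeric - analytic[i]) < 1e-6
```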
3.4 The Full Spike Train Code: Synaptic plasticity
Finally, let us consider the most general coding feature, namely, the full spike train. Let us start with
a time-discretized version of the spike train with a discretization that is sufficiently narrow to allow at
most one spike per time bin. In each time bin [t, t + Δt], the number of spikes Yt follows a Bernoulli
distribution with spiking probability pt , which depends on the input and on the recent history of the
neuron. Because the Bernoulli distribution belongs to the NEF, the associated policy-gradient rule
can be derived using equation (4):
∂t wi = η R Σt (Yt − pt )/(pt (1 − pt )) ∂wi pt .    (12)
The firing probability pt depends on the instantaneous firing rate ρt : pt = 1 − exp(−ρt Δt), yielding:
∂t wi = η R Σt (Yt − pt )/(pt (1 − pt )) [∂ρ pt ] [∂wi ρt ]    (13)
with ∂ρ pt = Δt (1 − pt ), so that
∂t wi = η R Σt (Yt − pt ) ([∂u ρt ]/pt ) PSPi (t) Δt .    (14)
This is the rule that should be used in discretized simulations. In the limit Δt → 0, pt can be
approximated by pt ≈ ρΔt, which leads to the continuous time version of the rule:
∂t wi = η R lim_{Δt→0} Σt (Yt /Δt − ρt ) ([∂u ρt ]/ρt ) PSPi (t) Δt    (15)
= η R ∫ (Y (t) − ρ(t)) ([∂u ρ](t)/ρ(t)) PSPi (t) dt .    (16)
Here, Y (t) = Σ_{ti} δ(t − ti ) is now a sum of δ-functions. Note that the learning rule (16) was already
proposed by Xie and Seung [13] and Florian [3] and, slightly modified for supervised learning, by
Pfister et al. [5].
Following the same line, policy gradient rules can also be derived for the intrinsic parameters of the
neuron, i.e., its threshold parameters (see also [3]).
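The discrete-time score in Equation (12) is simply the gradient of the Bernoulli log-likelihood of the emitted spike train, which makes it easy to verify against finite differences. The toy input traces and all constants below are our own assumptions for illustration.

```python
# Check that the discrete-time score of Eq. (12) is the gradient of the
# Bernoulli log-likelihood of a fixed output spike train Y, with
# p_t = 1 - exp(-rho_t * dt) and rho_t = exp(u_t) (assumed exponential rate).
import math
import random

rng = random.Random(3)
dt, n_bins, n_in = 0.005, 60, 4
psp = [[rng.random() for _ in range(n_bins)] for _ in range(n_in)]
Y = [1 if rng.random() < 0.1 else 0 for _ in range(n_bins)]   # fixed spike train

def probs(w):
    u = [sum(w[i] * psp[i][t] for i in range(n_in)) for t in range(n_bins)]
    return [1.0 - math.exp(-math.exp(ut) * dt) for ut in u]

def log_lik(w):
    p = probs(w)
    return sum(Y[t] * math.log(p[t]) + (1 - Y[t]) * math.log(1 - p[t])
               for t in range(n_bins))

def grad_rule_12(w):
    u = [sum(w[i] * psp[i][t] for i in range(n_in)) for t in range(n_bins)]
    rho = [math.exp(ut) for ut in u]
    p = [1.0 - math.exp(-r * dt) for r in rho]
    # dp_t/dw_i = (1 - p_t) * rho_t * dt * PSP_i(t), cf. Eq. (13)
    return [sum((Y[t] - p[t]) / (p[t] * (1 - p[t]))
                * (1 - p[t]) * rho[t] * dt * psp[i][t] for t in range(n_bins))
            for i in range(n_in)]

w0 = [0.4, -0.3, 0.2, 0.1]
analytic = grad_rule_12(w0)
h = 1e-6
for i in range(n_in):
    wp = list(w0); wp[i] += h
    wm = list(w0); wm[i] -= h
    numeric = (log_lik(wp) - log_lik(wm)) / (2 * h)
    assert abs(numeric - analytic[i]) < 1e-4
```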
4 Why use code-specific rules when more general rules are available?
Obviously, the learning rule (16) is the most general in the sense that it considers the whole spike
train as a coding feature. All possible other features are therefore captured in this learning rule. The
natural question is then: what is the advantage of using rules that are specialized for one specific
code?
Say, we have a learning rule for two coding features F1 and F2 , of which only F1 is correlated with
reward. The learning rule for a particular neuronal parameter ? then has the following structure:
F2 ? ?2 ??2
(F1 ? ?1 ) ??1
+
(17)
?t ? = ?R(F1 )
?12
??
?22
??
!
F1 ? ?1 ??1
F2 ? ?2 ??2
?R
(F
?
?
)
+
(18)
? ? R(?1 ) +
1
1
?F1 ?1
?12
??
?22
??
?R (F1 ? ?1 )2 ??1
?R (F1 ? ?1 )(F2 ? ?2 ) ??2
= ?
+?
(19)
?F1 ?1
?12
??
?F1 ?1
?22
??
+?R(?1 )
F1 ? ?1 ??1
?12
??
+ ?R(?1 )
F2 ? ?2 ??2
?22
??
(20)
Of the four terms in lines (19-20), only the first term has non-vanishing mean when taking the trial
average. The other terms are simply noise and therefore more hindrance than help when trying to
maximize the reward. When using the full learning rule for both features, the learning rate needs to
be decreased until an agreeable signal-to-noise ratio between the drift introduced by the first term
and the diffusion caused by the other terms is reached. Therefore, it is desirable for faster learning
to reduce the effects of these noise terms. This can be done in two ways:
• The terms in eq. (20) can be reduced by reducing R(μ1 ). This can be achieved by subtracting a suitable reward baseline from the current reward. Ideally, this should be done in a
stimulus-specific way (because μ1 depends on the stimulus), which leads to the notion of a
reward prediction error instead of a pure reward signal. This approach is in line with both
standard reinforcement learning theory [4] and the proposal that neuromodulatory signals
like dopamine represent reward prediction error instead of reward alone.
6
• The term in eq. (20) can be removed by skipping those terms in the original learning rule that are
related to coding feature F2 . This corresponds to using the learning rule for those features
that are in fact correlated with reward while suppressing those that are not correlated with
reward. The central argument for using code-specific learning rules is therefore the signalto-noise ratio. In extreme cases, where a very general rule is used for a very specific task,
a very large number of coding dimensions may merely give rise to noise in the learning
dynamics, while only one is relevant and causes systematic changes.
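The effect of the reward baseline can be made concrete with a minimal Monte Carlo sketch. The linear reward $R = 2 + 0.5F$ and the Gaussian coding feature below are illustrative assumptions, not from the text; the point is that subtracting a baseline leaves the mean of the score-function gradient estimate unchanged while shrinking its variance:

```python
import numpy as np

rng = np.random.default_rng(1)
mu, sigma = 0.3, 1.0
F = rng.normal(mu, sigma, size=100_000)        # samples of the coding feature
R = 2.0 + 0.5 * F                              # assumed reward, linear in F

# score-function gradient estimates, (F - mu)/sigma^2 times the reward ...
g_raw = R * (F - mu) / sigma**2                # ... without a baseline
g_base = (R - R.mean()) * (F - mu) / sigma**2  # ... with a reward baseline

print(g_raw.mean(), g_base.mean())   # both estimate the same drift (~0.5)
print(g_raw.var(), g_base.var())     # the baseline drastically cuts the variance
```

With these numbers the true drift is $0.5$; the baseline version concentrates far more tightly around it, which is exactly the signal-to-noise argument made above.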
These considerations suggest that the spike count rule (7) should outperform the full spike train
rule (16) in tasks where the reward is based purely on spike count. Unfortunately, we could not
yet substantiate this claim in simulations. As seen in Figure 1B, the performance of the two rules
is very similar in the 2-armed bandit task. This might be due to a noise bottleneck effect: there
are several sources of noise in the learning process, the strongest of which limits the performance.
Unless the "code-specific noise" is dominant, code-specific learning rules will have about the same
performance as general purpose rules.
5 Inherent Prediction Problems
As shown in section 4, the policy-gradient rule with a reduced amount of noise in the gradient
estimate is one that takes only the relevant coding features into account and subtracts the trial mean
of the reward:
$$\Delta_t\theta = \alpha\left(R - R(\mu_1, \mu_2, \ldots)\right)\sum_j \frac{F_j-\mu_j}{\sigma_j^2}\frac{\partial\mu_j}{\partial\theta} \qquad (21)$$
This learning rule has a conceptually interesting structure: learning takes place only when two conditions are fulfilled: the animal did something unexpected ($F_j \neq \mu_j$) and receives an unexpected reward ($R \neq R(\mu_1, \mu_2, \ldots)$). Moreover, it raises two interesting prediction problems: (a) the prediction of the trial average $\mu_j$ of the coding feature conditioned on the stimulus and (b) the reward that is expected if the coding feature takes its mean value.
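As a concrete illustration, the following sketch applies a rule of this form to a Bernoulli coding feature with a sigmoidal mean. The task (one feature value pays more than the other), the learning rate, and the running-average critic are all hypothetical choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def mu(theta):                       # mean of the Bernoulli coding feature
    return 1.0 / (1.0 + np.exp(-theta))

theta, alpha = 0.0, 0.5
r_bar = 0.0                          # running prediction of the mean reward
for _ in range(2000):
    m = mu(theta)
    F = float(rng.random() < m)      # sample the coding feature
    R = 1.0 if F == 1.0 else 0.2     # assumed: feature value 1 pays more
    sigma2 = m * (1.0 - m) + 1e-6    # variance of the Bernoulli feature
    dmu = m * (1.0 - m)              # d mu / d theta for the sigmoid
    # Eq. (21): unexpected feature (F - m) times unexpected reward (R - r_bar)
    theta += alpha * (R - r_bar) * (F - m) / sigma2 * dmu
    r_bar += 0.05 * (R - r_bar)      # simple critic for the reward prediction

print(mu(theta))   # driven toward the rewarded feature value, i.e. close to 1
```

Both factors of the rule appear explicitly: the update vanishes when the feature takes its expected value or when the reward matches its prediction.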
5.1 Prediction of the coding feature
In the cases where we could derive the learning rule analytically, the trial average of the coding
feature could be calculated from intrinsic properties of the neuron like its membrane potential. Unfortunately, it is not clear a priori that the information necessary for calculating this mean is always
available. This should be particularly problematic when trying to extend the framework to coding
features of populations, where the population would need to know, e.g., membrane properties of its
members.
An interesting alternative is that the trial mean is calculated by a prediction system, e.g., by top-down signals that use prior information or an internal world model to predict the expected value of the coding feature. Learning would in this case be modulated by the mismatch between a top-down prediction of the coding feature, represented by $\mu_j(X)$, and the real value of $F_j$, which is calculated in a bottom-up fashion. This interpretation bears interesting parallels to certain approaches in sensory coding, where the interpretation of sensory information is based on a comparison of the sensory input with an internally generated prediction from a generative model [cf. 6]. There is also some experimental evidence for neural stimulus prediction even in comparably low-level systems such as the retina [e.g. 8].
Another prediction system for the expected response could be a population coding scheme, in which
a population of neurons is receiving the same input and should produce the same output. Any
neuron of the population could receive the average population activity as a prediction of its own
mean response. It would be interesting to study the relation of such an approach with the one
recently proposed for reinforcement learning in populations of spiking neurons [11].
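A toy numerical sketch of the population idea (all numbers hypothetical): when a population shares the same input statistics, the per-trial population average is an unbiased, low-variance prediction of each neuron's mean response:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials, true_mean = 50, 200, 5.0
activity = rng.poisson(true_mean, size=(n_trials, n_neurons))  # shared rate

pop_prediction = activity.mean(axis=1)        # per-trial population average
# deviation of neuron 0 from the population prediction, a proxy for F_j - mu_j
deviation = activity[:, 0] - pop_prediction

print(pop_prediction.mean())   # close to the true mean rate of 5.0
```

Each neuron could thus obtain its own $\mu_j$ without access to the membrane properties of its neighbours.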
5.2 Reward prediction
The other quantity that should be predicted in the learning rule is the reward one would get if the coding feature took its mean value. If the distribution of the coding feature is sufficiently narrow so that, in the range $F$ takes for a given stimulus, the reward can be approximated by a linear function, the reward $R(\mu)$ at the mean is simply the expectation value of the reward given the stimulus:

$$R(\mu) \approx \langle R(F) \rangle_{F|X} \qquad (22)$$
The relevant quantity for learning is therefore a reward prediction error $R(F) - \langle R(F) \rangle_{F|X}$. In classical reinforcement learning, this term is often calculated in an actor-critic architecture, where some external module, the critic, learns the expected future reward either for states alone or for
state-action pairs. These values are then used to calculate the expected reward for the current state
or state-action pair. The difference between the reward that was really received and the predicted
reward is then used as a reward prediction error that drives learning. There is evidence that dopamine
signals in the brain encode prediction error rather than reward alone [7].
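A minimal sketch of such a critic (an illustrative stand-in, not the model discussed here): a stimulus-conditioned running average of the reward already yields a usable reward prediction error; the two stimuli and their fixed rewards are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

class Critic:
    """Stimulus-conditioned running reward average; its prediction error
    plays the role of R - <R> in the learning rule."""
    def __init__(self, n_stimuli, lr=0.1):
        self.rhat = np.zeros(n_stimuli)
        self.lr = lr

    def prediction_error(self, stimulus, reward):
        delta = reward - self.rhat[stimulus]      # reward prediction error
        self.rhat[stimulus] += self.lr * delta    # update the prediction
        return delta

critic = Critic(2)
for _ in range(500):
    s = int(rng.integers(2))
    critic.prediction_error(s, 1.0 if s == 0 else 0.2)  # assumed rewards

print(critic.rhat)   # converges to the per-stimulus rewards, about [1.0, 0.2]
```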
6 Discussion
We have presented a general framework for deriving policy-gradient rules for spiking neurons and
shown that different learning rules emerge depending on which features of the spike trains are assumed to influence the reward signals. Theoretical arguments suggest that code-specific learning
rules should be superior to more general rules, because the noise in the estimate of the gradient
should be smaller. More simulations will be necessary to check if this is indeed the case and in
which applications code-specific learning rules are advantageous.
For exponentially distributed coding features, the learning rule has a characteristic structure, which
allows a simple intuitive interpretation. Moreover, this structure raises two prediction problems,
which may provide links to other concepts: (a) the notion of using a reward prediction error to reduce
the variance in the estimate of the gradient creates a link to actor-critic architectures [9] and (b) the
notion of coding feature prediction is reminiscent of combined top-down/bottom-up approaches,
where sensory learning is driven by the mismatch of internal predictions and the sensory signal [6].
The fact that there is a whole class of code-specific policy-gradient learning rules opens the interesting possibility that neuronal learning rules could be controlled by metalearning processes that shape
the learning rule according to what neural code is in effect. From the biological perspective, it would
be interesting to compare spike-based synaptic plasticity in different brain regions that are thought
to use different neural codes and see if there are systematic differences.
References
[1] Baxter, J. and Bartlett, P. (2001). Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15(4):319-350.
[2] Bienenstock, E., Cooper, L., and Munro, P. (1982). Theory of the development of neuron selectivity: orientation specificity and binocular interaction in visual cortex. Journal of Neuroscience, 2:32-48. Reprinted in Anderson and Rosenfeld, 1990.
[3] Florian, R. V. (2007). Reinforcement learning through modulation of spike-timing-dependent synaptic plasticity. Neural Computation, 19:1468-1502.
[4] Greensmith, E., Bartlett, P., and Baxter, J. (2004). Variance reduction techniques for gradient estimates in reinforcement learning. Journal of Machine Learning Research, 5:1471-1530.
[5] Pfister, J.-P., Toyoizumi, T., Barber, D., and Gerstner, W. (2006). Optimal spike-timing-dependent plasticity for precise action potential firing in supervised learning. Neural Computation, 18:1309-1339.
[6] Rao, R. P. and Ballard, D. H. (1999). Predictive coding in the visual cortex: A functional interpretation of some extra-classical receptive-field effects. Nature Neuroscience, 2(1):79-87.
[7] Schultz, W., Dayan, P., and Montague, R. (1997). A neural substrate of prediction and reward. Science, 275:1593-1599.
[8] Schwartz, G., Harris, R., Shrom, D., and Berry II, M. (2007). Detection and prediction of periodic patterns by the retina. Nature Neuroscience, 10:552-554.
[9] Sutton, R. and Barto, A. (1998). Reinforcement Learning. MIT Press, Cambridge.
[10] Triesch, J. (2007). Synergies between intrinsic and synaptic plasticity mechanisms. Neural Computation, 19:885-909.
[11] Urbanczik, R. and Senn, W. (2009). Reinforcement learning in populations of spiking neurons. Nature Neuroscience, 12(3):250-252.
[12] Williams, R. (1992). Simple statistical gradient-following methods for connectionist reinforcement learning. Machine Learning, 8:229-256.
[13] Xie, X. and Seung, H. (2004). Learning in neural networks by reinforcement of irregular spiking. Physical Review E, 69(4):041909.
Robust Motion Subspace Clustering
Tat-Jun Chin, Hanzi Wang and David Suter
School of Computer Science
The University of Adelaide, South Australia
{tjchin, hwang, dsuter}@cs.adelaide.edu.au
Abstract
We present a novel and highly effective approach for multi-body motion segmentation. Drawing inspiration from robust statistical model fitting, we estimate putative subspace hypotheses from the data. However, instead of ranking them we
encapsulate the hypotheses in a novel Mercer kernel which elicits the potential of
two point trajectories to have emerged from the same subspace. The kernel permits the application of well-established statistical learning methods for effective
outlier rejection, automatic recovery of the number of motions and accurate segmentation of the point trajectories. The method operates well under severe outliers
arising from spurious trajectories or mistracks. Detailed experiments on a recent
benchmark dataset (Hopkins 155) show that our method is superior to other stateof-the-art approaches in terms of recovering the number of motions, segmentation
accuracy, robustness against gross outliers and computational efficiency.
1 Introduction¹
Multi-body motion segmentation concerns the separation of motions arising from multiple moving
objects in a video sequence. The input data is usually a set of points on the surface of the objects
which are tracked throughout the video sequence. Motion segmentation can serve as a useful preprocessing step for many computer vision applications. In recent years the case of rigid (i.e. nonarticulated) objects for which the motions could be semi-dependent on each other has received much
attention [18, 14, 19, 21, 22, 17]. Under this domain the affine projection model is usually adopted.
Such a model implies that the point trajectories from a particular motion lie on a linear subspace
of at most four, and trajectories from different motions lie on distinct subspaces. Thus multi-body
motion segmentation is reduced to the problem of subspace segmentation or clustering.
To realize practical algorithms, motion segmentation approaches should possess four desirable attributes: (1) Accuracy in classifying the point trajectories to the motions they respectively belong
to. This is crucial for success in the subsequent vision applications, e.g. object recognition, 3D
reconstruction. (2) Robustness against inlier noise (e.g. slight localization error) and gross outliers
(e.g. mistracks, spurious trajectories), since getting imperfect data is almost always unavoidable in
practical circumstances. (3) Ability to automatically deduce the number of motions in the data. This
is pivotal to accomplish fully automated vision applications. (4) Computational efficiency. This is
integral for the processing of video sequences which are usually large amounts of data.
Recent work on multi-body motion segmentation can roughly be divided into algebraic or factorization methods [3, 19, 20], statistical methods [17, 7, 14, 6, 10] and clustering methods [22, 21, 5]. Notable approaches include Generalized PCA (GPCA) [19, 20], an algebraic method based on the idea
that one can fit a union of m subspaces with a set of polynomials of degree m. Statistical methods often employ concepts such as random hypothesis generation [4, 17], Expectation-Maximization [14, 6]
¹ This work was supported by the Australian Research Council (ARC) under the project DP0878801.
and geometric model selection [7, 8]. Clustering based methods [22, 21, 5] are also gaining attention due to their effectiveness. They usually include a dimensionality reduction step (e.g. manifold
learning [5]) followed by a clustering of the point trajectories (e.g. via spectral clustering in [21]).
A recent benchmark [18] indicated that Local Subspace Affinity (LSA) [21] gave the best performance in terms of classification accuracy, although their result was subsequently surpassed
by [5, 10]. However, we argue that most of the previous approaches do not simultaneously fulfil
the qualities desirable of motion segmentation algorithms. Most notably, although some of the approaches have the means to estimate the number of motions, they are generally unreliable in this
respect and require manual input of this parameter. In fact this prior knowledge was given to all the
methods compared in [18]². Secondly, most of the methods (e.g. [19, 5]) do not explicitly deal with
outliers. They will almost always breakdown when given corrupted data. These deficiencies reduce
the usefulness of available motion segmentation algorithms in practical circumstances.
In this paper we attempt to bridge the gap between experimental performance and practical usability.
Our previous work [2] indicates that robust multi-structure model fitting can be achieved effectively
with statistical learning. Here we extend this concept to motion subspace clustering. Drawing inspiration from robust statistical model fitting [4], we estimate random hypotheses of motion subspaces
in the data. However, instead of ranking these hypotheses we encapsulate them in a novel Mercer
kernel. The kernel can function reliably despite overwhelming sampling imbalance, and it permits
the application of non-linear dimensionality reduction techniques to effectively identify and reject
outlying trajectories. This is then followed by Kernel PCA [11] to maximize the separation between
groups and spectral clustering [13] to recover the number of motions and clustering. Experiments
on the Hopkins 155 benchmark dataset [18] show that our method is superior to other approaches in
terms of the qualities described above, including computational efficiency.
1.1 Brief review of affine model multi-body motion segmentation
Let $\{t_{fp} \in \mathbb{R}^2\}_{f=1,\ldots,F;\; p=1,\ldots,P}$ be the set of 2D coordinates of $P$ trajectories tracked across $F$ frames. In multi-body motion segmentation the $t_{fp}$'s correspond to points on the surface of rigid objects which
are moving. The goal is to separate the trajectories into groups corresponding to the motion they
belong to. In other words, if we arrange the coordinates in the following data matrix
$$\mathbf{T} = \begin{bmatrix} t_{11} & \cdots & t_{1P} \\ \vdots & \ddots & \vdots \\ t_{F1} & \cdots & t_{FP} \end{bmatrix} \in \mathbb{R}^{2F \times P}, \qquad (1)$$

the goal is to find the permutation matrix $\boldsymbol{\Pi} \in \mathbb{R}^{P \times P}$ such that the columns of $\mathbf{T}\boldsymbol{\Pi}$ are arranged according
to the respective motions they belong to. It turns out that under affine projection [1, 16] trajectories
from the same motion lie on a distinct subspace in R2F , and each of these motion subspaces is of
dimensions 2, 3 or 4. Thus motion segmentation can be accomplished via clustering subspaces in
R2F . See [1, 16] for more details. Realistically actual motion sequences might contain trajectories
which do not correspond to valid objects or motions. These trajectories behave as outliers in the data
and, if not taken into account, can be seriously detrimental to subspace clustering algorithms.
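The rank constraint behind this subspace model is easy to check numerically. The sketch below (synthetic data with assumed random shapes and cameras, not from the paper) builds the data matrix of Eq. (1) for a single rigid motion under an affine camera and verifies that it has rank at most four:

```python
import numpy as np

rng = np.random.default_rng(0)
F, P = 10, 30                                   # frames and tracked points
shape3d = np.vstack([rng.normal(size=(3, P)),   # 3D points on one rigid object
                     np.ones((1, P))])          # homogeneous coordinates, 4 x P
T = np.vstack([rng.normal(size=(2, 4)) @ shape3d  # arbitrary affine camera per frame
               for _ in range(F)])              # stacked 2F x P matrix, Eq. (1)

# trajectories of a single rigid motion lie in a subspace of dimension <= 4
print(np.linalg.matrix_rank(T))   # 4
```

Because $\mathbf{T}$ factors as a $2F \times 4$ stack of cameras times a $4 \times P$ shape matrix, its rank can never exceed four, regardless of $F$ and $P$.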
2 The Ordered Residual Kernel (ORK)
First, we take a statistical model fitting point of view to motion segmentation. Let {xi }i=1,...,N be
the set of N samples on which we want to perform model fitting. We randomly draw p-subsets from
the data and use it to fit a hypothesis of the model, where p is the number of parameters that define
the model. In motion segmentation, the $x_i$'s are the columns of matrix $\mathbf{T}$, and $p = 4$ since the model is a four-dimensional subspace³.
For each data point $x_i$ compute its absolute residual set $r^i = \{r^i_1, \ldots, r^i_M\}$ as measured to the
M hypotheses. For motion segmentation, the residual is the orthogonal distance to a hypothesis
² As confirmed through private contact with the authors of [18].
³ Ideally we should also consider degenerate motions with subspace dimensions 2 or 3, but previous work [18] using RANSAC [4] and our results suggest this is not a pressing issue for the Hopkins 155 dataset.
subspace. We sort the elements in $r^i$ to obtain the sorted residual set $\tilde{r}^i = \{r^i_{\lambda^i_1}, \ldots, r^i_{\lambda^i_M}\}$, where the permutation $\{\lambda^i_1, \ldots, \lambda^i_M\}$ is obtained such that $r^i_{\lambda^i_1} \leq \cdots \leq r^i_{\lambda^i_M}$. Define the following

$$\tilde{\lambda}^i := \{\lambda^i_1, \ldots, \lambda^i_M\} \qquad (2)$$

as the sorted hypothesis set of point $x_i$, i.e. $\tilde{\lambda}^i$ depicts the order in which $x_i$ becomes the inlier of the $M$ hypotheses as a fictitious inlier threshold is increased from $0$ to $\infty$. We define the Ordered Residual Kernel (ORK) between two data points as

$$k_{\tilde{r}}(x_{i_1}, x_{i_2}) := \frac{1}{Z}\sum_{t=1}^{M/h} z_t \cdot k^t_{\cap}(\tilde{\lambda}^{i_1}, \tilde{\lambda}^{i_2}), \qquad (3)$$

where $z_t = \frac{1}{t}$ are the harmonic series and $Z = \sum_{t=1}^{M/h} z_t$ is the $(M/h)$-th harmonic number. Without loss of generality assume that $M$ is wholly divisible by $h$. Step size $h$ is used to obtain the Difference of Intersection Kernel (DOIK)

$$k^t_{\cap}(\tilde{\lambda}^{i_1}, \tilde{\lambda}^{i_2}) := \frac{1}{h}\left(|\tilde{\lambda}^{i_1}_{1:\alpha_t} \cap \tilde{\lambda}^{i_2}_{1:\alpha_t}| - |\tilde{\lambda}^{i_1}_{1:\alpha_{t-1}} \cap \tilde{\lambda}^{i_2}_{1:\alpha_{t-1}}|\right) \qquad (4)$$

where $\alpha_t = t \cdot h$ and $\alpha_{t-1} = (t-1) \cdot h$. Symbol $\tilde{\lambda}^i_{a:b}$ indicates the set formed by the $a$-th to the $b$-th elements of $\tilde{\lambda}^i$. Since the contents of the sorted hypothesis sets are merely permutations of $\{1 \ldots M\}$, i.e. there are no repeating elements,

$$0 \leq k_{\tilde{r}}(x_{i_1}, x_{i_2}) \leq 1. \qquad (5)$$
Note that $k_{\tilde{r}}$ is independent of the type of model to be fitted, thus it is applicable to generic statistical model fitting problems. However, we concentrate on motion subspaces in this paper.

Let $\theta$ be a fictitious inlier threshold. The kernel $k_{\tilde{r}}$ captures the intuition that, if $\theta$ is low, two points arising from the same subspace will have high normalized intersection since they share many common hypotheses which correspond to that subspace. If $\theta$ is high, implausible hypotheses fitted on outliers start to dominate and decrease the normalized intersection. Step size $h$ allows us to quantify the rate of change of intersection if $\theta$ is increased from $0$ to $\infty$, and since $z_t$ is decreasing, $k_{\tilde{r}}$ will evaluate to a high value for two points from the same subspace. In contrast, $k_{\tilde{r}}$ is always low for points not from the same subspace or that are outliers.
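A direct implementation of Eqs. (3)-(4) is straightforward; the sketch below is our illustrative code (not the authors' release) and computes the ORK between two points given their absolute residuals to a common set of $M$ hypotheses:

```python
import numpy as np

def ork_kernel(res1, res2, h):
    """Ordered Residual Kernel between two points, given their absolute
    residuals to the same M hypotheses; h must divide M."""
    M = len(res1)
    lam1, lam2 = np.argsort(res1), np.argsort(res2)  # sorted hypothesis sets
    T = M // h
    z = 1.0 / np.arange(1, T + 1)                    # harmonic weights z_t = 1/t
    s1, s2 = set(), set()
    prev_inter, k = 0, 0.0
    for t in range(T):
        s1.update(lam1[t * h:(t + 1) * h].tolist())
        s2.update(lam2[t * h:(t + 1) * h].tolist())
        inter = len(s1 & s2)
        k += z[t] * (inter - prev_inter) / h         # DOIK term, Eq. (4)
        prev_inter = inter
    return k / z.sum()                               # normalize by Z, Eq. (3)

# identical residual orderings give the maximum value
r = np.random.rand(100)
print(ork_kernel(r, r, h=10))   # 1.0 (up to float rounding)
```

The kernel only consumes residual rankings, so it applies unchanged to any model class, exactly as noted above.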
Proof of satisfying Mercer's condition. Let $D$ be a fixed domain, and $\mathcal{P}(D)$ be the power set of $D$, i.e. the set of all subsets of $D$. Let $S \subseteq \mathcal{P}(D)$, and $p, q \in S$. If $\nu$ is a measure on $D$, then

$$k_{\cap}(p, q) = \nu(p \cap q), \qquad (6)$$

called the intersection kernel, is provably a valid Mercer kernel [12]. The DOIK can be rewritten as
$$k^t_{\cap}(\tilde{\lambda}^{i_1}, \tilde{\lambda}^{i_2}) = \frac{1}{h}\left(|\tilde{\lambda}^{i_1}_{(\alpha_{t-1}+1):\alpha_t} \cap \tilde{\lambda}^{i_2}_{(\alpha_{t-1}+1):\alpha_t}| + |\tilde{\lambda}^{i_1}_{1:\alpha_{t-1}} \cap \tilde{\lambda}^{i_2}_{(\alpha_{t-1}+1):\alpha_t}| + |\tilde{\lambda}^{i_1}_{(\alpha_{t-1}+1):\alpha_t} \cap \tilde{\lambda}^{i_2}_{1:\alpha_{t-1}}|\right). \qquad (7)$$

If we let $D = \{1 \ldots M\}$ be the set of all possible hypothesis indices and $\nu$ be uniform on $D$, each term in Eq. (7) is simply an intersection kernel multiplied by $|D|/h$. Since multiplying a kernel with a positive constant and adding two kernels respectively produce valid Mercer kernels [12], the DOIK and ORK are also valid Mercer kernels.
Parameter $h$ in $k_{\tilde{r}}$ depends on the number of random hypotheses $M$, i.e. step size $h$ can be set as a ratio of $M$. The value of $M$ can be determined based on the size of the $p$-subset and the size of the data $N$ (e.g. [23, 15]), and thus $h$ is not contingent on knowledge of the true inlier noise scale or threshold. Moreover, our experiments in Sec. 4 show that segmentation performance is relatively insensitive to the settings of $h$ and $M$.
2.1 Performance under sampling imbalance
Methods based on random sampling (e.g. RANSAC [4]) are usually affected by unbalanced datasets.
The probability of simultaneously retrieving p inliers from a particular structure is tiny if points
from that structure represent only a small minority in the data. In an unbalanced dataset the "pure"
p-subsets in the M randomly drawn samples will be dominated by points from the majority structure
in the data. This is a pronounced problem in motion sequences, since there is usually a background
?object? whose point trajectories form a large majority in the data. In fact, for motion sequences
from the Hopkins 155 dataset [18] with typically about 300 points per sequence, M has to be raised
to about 20,000 before a pure p-subset from the non-background objects is sampled.
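This back-of-envelope calculation is easy to reproduce. With hypothetical counts (a minority object carrying, say, 25 of 300 trajectories), the expected number of draws before a single pure 4-subset appears is indeed in the tens of thousands:

```python
from math import comb

def expected_draws_for_pure_subset(n_struct, n_total, p=4):
    """Expected number of uniformly drawn p-subsets before one consists
    purely of points from a structure containing n_struct points."""
    prob = comb(n_struct, p) / comb(n_total, p)
    return 1.0 / prob

print(expected_draws_for_pure_subset(25, 300))  # roughly 2.6e4 draws
```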
However, ORK can function reliably despite serious sampling imbalance. This is because points
from the same subspace are roughly equi-distance to the sampled hypotheses in their vicinity, even
though these hypotheses might not pass through that subspace. Moreover, since $z_t$ in Eq. (3) is decreasing, only residuals/hypotheses in the vicinity of a point are heavily weighted in the intersection. Fig. 1(a) illustrates this condition. Results in Sec. 4 show that ORK excelled even with M = 1,000.
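The random hypothesis sampling and residual computation described above can be sketched as follows (illustrative code with assumed dimensions, not the authors' implementation). Points lying on a sampled structure receive near-zero residuals to hypotheses drawn from that structure:

```python
import numpy as np

rng = np.random.default_rng(0)

def subspace_residuals(X, M=1000, p=4):
    """Residuals of every column of X (trajectories in R^{2F}) to M subspace
    hypotheses, each spanned by a random p-subset of the columns."""
    _, N = X.shape
    res = np.empty((N, M))
    for m in range(M):
        S = X[:, rng.choice(N, size=p, replace=False)]  # random p-subset
        Q, _ = np.linalg.qr(S)                          # basis of the hypothesis
        res[:, m] = np.linalg.norm(X - Q @ (Q.T @ X), axis=0)  # orthogonal dist.
    return res

# points drawn from a single 4-dimensional subspace have zero residual to
# every hypothesis sampled from that same structure
X = rng.normal(size=(8, 4)) @ rng.normal(size=(4, 10))
print(subspace_residuals(X, M=50).max())   # ~0 (numerical noise)
```

Feeding these residual sets into the ORK is what lets the kernel exploit hypothesis proximity rather than requiring pure samples.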
Figure 1: (a) ORK under sampling imbalance (data in $\mathbb{R}^{2F}$). (b) Data in the RKHS $\mathcal{F}_{k_{\tilde{r}}}$ induced by ORK.
3 Multi-Body Motion Segmentation using ORK
In this section, we describe how ORK is used for multi-body motion segmentation.
3.1 Outlier rejection via non-linear dimensionality reduction
Denote by $\mathcal{F}_{k_{\tilde{r}}}$ the Reproducing Kernel Hilbert Space (RKHS) induced by $k_{\tilde{r}}$. Let matrix $\mathbf{A} = [\phi(x_1) \ldots \phi(x_N)]$ contain the input data after it is mapped to $\mathcal{F}_{k_{\tilde{r}}}$. The kernel matrix $\mathbf{K} = \mathbf{A}^T\mathbf{A}$ is computed using the kernel function $k_{\tilde{r}}$ as

$$\mathbf{K}_{p,q} = \langle \phi(x_p), \phi(x_q) \rangle = k_{\tilde{r}}(x_p, x_q), \quad p, q \in \{1 \ldots N\}. \qquad (8)$$

Since $k_{\tilde{r}}$ is a valid Mercer kernel, $\mathbf{K}$ is guaranteed to be positive semi-definite [12]. Let $\mathbf{K} = \mathbf{Q}\boldsymbol{\Lambda}\mathbf{Q}^T$ be the eigenvalue decomposition (EVD) of $\mathbf{K}$. Then the rank-$n$ Kernel Singular Value Decomposition (Kernel SVD) [12] of $\mathbf{A}$ is

$$\mathbf{A}^n = [\mathbf{A}\mathbf{Q}^n(\boldsymbol{\Lambda}^n)^{-\frac{1}{2}}]\,[(\boldsymbol{\Lambda}^n)^{\frac{1}{2}}]\,[(\mathbf{Q}^n)^T] \equiv \mathbf{U}^n\boldsymbol{\Sigma}^n(\mathbf{V}^n)^T. \qquad (9)$$

Via the Matlab notation, $\mathbf{Q}^n = \mathbf{Q}_{:,1:n}$ and $\boldsymbol{\Lambda}^n = \boldsymbol{\Lambda}_{1:n,1:n}$. The left singular vectors $\mathbf{U}^n$ form an orthonormal basis for the $n$-dimensional principal subspace of the whole dataset in $\mathcal{F}_{k_{\tilde{r}}}$. Projecting the data onto the principal subspace yields

$$\mathbf{B} = [\mathbf{A}\mathbf{Q}^n(\boldsymbol{\Lambda}^n)^{-\frac{1}{2}}]^T\mathbf{A} = (\boldsymbol{\Lambda}^n)^{\frac{1}{2}}(\mathbf{Q}^n)^T, \qquad (10)$$

where $\mathbf{B} = [b_1 \ldots b_N] \in \mathbb{R}^{n \times N}$ is the reduced dimension version of $\mathbf{A}$. Directions of the principal subspace are dominated by inlier points, since $k_{\tilde{r}}$ evaluates to a high value generally for them, but always to a low value for gross outliers. Moreover the kernel ensures that points from the same subspace are mapped to the same cluster and vice versa. Fig. 1(b) illustrates this condition.
Fig. 2(a)(left) shows the first frame of sequence ?Cars10? from the Hopkins 155 dataset [18] with
100 false trajectories of Brownian motion added to the original data (297 points). The corresponding
RKHS norm histogram for n = 3 is displayed in Fig. 2(b). The existence of two distinct modes,
Figure 2: Demonstration of outlier rejection on sequence 'cars10' from Hopkins 155. (a) (left) Before and (right) after outlier removal; blue dots are inliers while red dots are added outliers. (b) Norm histogram of 'cars10' (bin count vs. vector norm in the principal subspace), showing distinct inlier and outlier modes.
corresponding respectively to inliers and outliers, is evident. We exploit this observation for outlier
rejection by discarding data with low norms in the principal subspace.
The cut-off threshold $\psi$ can be determined by analyzing the shape of the distribution. For instance we can fit a 1D Gaussian Mixture Model (GMM) with two components and set $\psi$ as the point of equal Mahalanobis distance between the two components. However, our experimentation shows that an effective threshold can be obtained by simply setting $\psi$ as the average value of all the norms, i.e.

$$\psi = \frac{1}{N}\sum_{i=1}^{N} \|b_i\|. \qquad (11)$$
This method was applied uniformly on all the sequences in our experiments in Sec. 4. Fig. 2(a)(right)
shows an actual result of the method on Fig. 2(a)(left).
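A compact sketch of this rejection step, under the assumption that the kernel matrix has already been computed (illustrative code, not the authors' implementation): take the top-$n$ eigenvectors, form the projections of Eq. (10), and keep points whose norm clears the mean-norm threshold of Eq. (11). The toy kernel matrix below is also an assumption, mimicking mutually similar inliers and mutually dissimilar outliers:

```python
import numpy as np

def reject_outliers(K, n=3):
    """Project onto the n-dim principal subspace in the RKHS (Eqs. 9-10)
    and keep points whose norm clears the mean-norm threshold (Eq. 11)."""
    w, Q = np.linalg.eigh(K)                  # EVD, ascending eigenvalues
    top = np.argsort(w)[::-1][:n]
    B = np.sqrt(np.clip(w[top], 0.0, None))[:, None] * Q[:, top].T  # Eq. (10)
    norms = np.linalg.norm(B, axis=0)
    return norms >= norms.mean(), B           # keep-mask and projections

# toy kernel: 9 mutually similar inliers, 5 mutually dissimilar outliers
K = np.full((14, 14), 0.05)
K[:9, :9] = 0.8
np.fill_diagonal(K, 1.0)
keep, B = reject_outliers(K, n=2)
print(keep)   # first nine True, last five False
```

Because the outliers share little affinity with anyone, their energy is spread outside the top principal directions, so their projected norms fall below the mean threshold.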
3.2 Recovering the number of motions and subspace clustering
After outlier rejection, we further take advantage of the mapping induced by ORK for recovering the
number of motions and subspace clustering. On the remaining data, we perform Kernel PCA [11]
to seek the principal components which maximize the variance of the data in the RKHS, as Fig. 1(b)
illustrates. Let $\{y_i\}_{i=1,\ldots,N'}$ be the $N'$-point subset of the input data that remains after outlier removal, where $N' < N$. Denote by $\mathbf{C} = [\phi(y_1) \ldots \phi(y_{N'})]$ the data matrix after mapping the data to $\mathcal{F}_{k_{\tilde{r}}}$, and by symbol $\tilde{\mathbf{C}}$ the result of adjusting $\mathbf{C}$ with the empirical mean of $\{\phi(y_1), \ldots, \phi(y_{N'})\}$. The centered kernel matrix $\tilde{\mathbf{K}} = \tilde{\mathbf{C}}^T\tilde{\mathbf{C}}$ [11] can be obtained as

$$\tilde{\mathbf{K}} = \boldsymbol{\Psi}^T\mathbf{K}'\boldsymbol{\Psi}, \quad \boldsymbol{\Psi} = \left[\mathbf{I}_{N'} - \tfrac{1}{N'}\mathbf{1}_{N',N'}\right], \qquad (12)$$

where $\mathbf{K}' = \mathbf{C}^T\mathbf{C}$ is the uncentered kernel matrix, $\mathbf{I}_s$ and $\mathbf{1}_{s,s}$ are respectively the $s \times s$ identity matrix and a matrix of ones. If $\tilde{\mathbf{K}} = \mathbf{R}\boldsymbol{\Delta}\mathbf{R}^T$ is the EVD of $\tilde{\mathbf{K}}$, then we obtain the first-$m$ kernel principal components $\mathbf{P}^m$ of $\tilde{\mathbf{C}}$ as the first-$m$ left singular vectors of $\tilde{\mathbf{C}}$, i.e.

$$\mathbf{P}^m = \tilde{\mathbf{C}}\mathbf{R}^m(\boldsymbol{\Delta}^m)^{-\frac{1}{2}}, \qquad (13)$$

where $\mathbf{R}^m = \mathbf{R}_{:,1:m}$ and $\boldsymbol{\Delta}^m = \boldsymbol{\Delta}_{1:m,1:m}$; see Eq. (9). Projecting the data on the principal components yields

$$\mathbf{D} = [d_1 \ldots d_{N'}] = (\boldsymbol{\Delta}^m)^{\frac{1}{2}}(\mathbf{R}^m)^T, \qquad (14)$$

where $\mathbf{D} \in \mathbb{R}^{m \times N'}$. The affine subspace $\mathrm{span}(\mathbf{P}^m)$ maximizes the spread of the centered data in the RKHS, and the projection $\mathbf{D}$ offers an effective representation for clustering. Fig. 3(a) shows the Kernel PCA projection results for $m = 3$ on the sequence in Fig. 2(a).
The number of clusters in D is recovered via spectral clustering. More specifically we apply the
Normalized Cut (Ncut) [13] algorithm. A fully connected graph is first derived from the data, where its weighted adjacency matrix $\mathbf{W} \in \mathbb{R}^{N' \times N'}$ is obtained as

$$\mathbf{W}_{p,q} = \exp\left(-\|d_p - d_q\|^2 / 2\sigma^2\right), \qquad (15)$$

and $\sigma$ is taken as the average nearest neighbour distance in the Euclidean sense among the vectors
in D. The Laplacian matrix [13] is then derived from W and eigendecomposed. Under Ncut,
Figure 3: Actual results on the motion sequence in Fig. 2(a)(left). (a) Kernel PCA and Ncut results. (b) W matrix. (c) Final result for 'cars10'.
the number of clusters is revealed as the number of eigenvalues of the Laplacian that are zero or
numerically insignificant. With this knowledge, a subsequent k-means step is then performed to
cluster the points. Fig. 3(b) shows W for the input data in Fig. 2(a)(left) after outlier removal. It
can be seen that strong affinity exists between points from the same cluster, thus allowing accurate
clustering. Figs. 3(a) and 3(c) illustrate the final clustering result for the data in Fig. 2(a)(left).
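The cluster-counting procedure above can be sketched as follows. This is a simplified stand-in rather than the authors' implementation: it uses the symmetric normalized Laplacian, toy 2-D points in place of the actual Kernel PCA projections D, and stops at estimating the number of clusters (the paper follows with a k-means step on the spectral embedding).

```python
import numpy as np

def estimate_num_clusters(D, tol=1e-6):
    """Estimate the number of clusters among the columns of D by counting
    the (near-)zero eigenvalues of the normalized graph Laplacian."""
    diff = D[:, :, None] - D[:, None, :]
    dist = np.sqrt((diff ** 2).sum(axis=0))
    nn = np.sort(dist, axis=1)[:, 1]            # nearest non-self neighbour
    sigma = nn.mean()                           # scale used in Eq. (15)
    W = np.exp(-dist ** 2 / (2 * sigma ** 2))   # Gaussian affinity, Eq. (15)
    deg = W.sum(axis=1)
    # Symmetric normalized Laplacian: L = I - D^{-1/2} W D^{-1/2}.
    L = np.eye(len(deg)) - (W / np.sqrt(deg)[:, None]) / np.sqrt(deg)[None, :]
    evals = np.linalg.eigvalsh(L)
    return int((evals < tol).sum())

# Two well-separated toy "clusters" standing in for the projected data.
xs = np.arange(10) * 0.05
A = np.stack([xs, np.zeros(10)])
B = np.stack([xs + 10.0, np.zeros(10)])
D = np.hstack([A, B])
print(estimate_num_clusters(D))  # 2
```

At this scale the between-cluster affinities underflow to zero, so the graph effectively has two connected components and the Laplacian has exactly two near-zero eigenvalues.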
There are several reasons why spectral clustering under our framework is more successful than
previous methods. Firstly, we perform an effective outlier rejection step that removes bad trajectories
that can potentially mislead the clustering. Secondly, the mapping induced by ORK deliberately
separates the trajectories based on their cluster membership. Finally, we perform Kernel PCA to
maximize the variance of the data. Effectively this also improves the separation of clusters, thus
facilitating an accurate recovery of the number of clusters and also the subsequent segmentation.
This distinguishes our work from previous clustering based methods [21, 5] which tend to operate
without maximizing the between-class scatter. Results in Sec. 4 validate our claims.
4 Results
Henceforth we refer to the proposed method as 'ORK'. We leverage a recently published benchmark on affine-model motion segmentation [18] as a basis of comparison. The benchmark was evaluated on the Hopkins 155 dataset⁴, which contains 155 sequences with tracked point trajectories.
A total of 120 sequences have two motions while 35 have three motions. The sequences contain
degenerate and non-degenerate motions, independent and partially dependent motions, articulated
motions, nonrigid motions, etc. In terms of video content, three categories exist: checkerboard sequences, traffic sequences (moving cars, trucks) and articulated motions (moving faces, people).
4.1 Details on benchmarking
Four major algorithms were compared in [18]: Generalized PCA (GPCA) [19], Local Subspace
Affinity (LSA) [21], Multi-Stage Learning (MSL) [14] and RANSAC [17]. Here we extend the
benchmark with newly reported results from Locally Linear Manifold Clustering (LLMC) [5] and
Agglomerative Lossy Compression (ALC) [10, 9]. We also compare our method against Kanatani
and Matsunaga's [8] algorithm (henceforth, the 'KM' method) in estimating the number of independent motions in the video sequences. Note that KM per se does not perform motion segmentation.
For the sake of objective comparisons we use only implementations that are publicly available⁵.
Following [18], motion segmentation performance is evaluated in terms of the labelling error of the
point trajectories, where each point in a sequence has a ground truth label, i.e.
classification error = (number of mislabeled points) / (total number of points).   (16)
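Eq. (16) is straightforward to compute; the sketch below assumes the predicted labels have already been matched to the ground-truth labelling (the benchmark resolves the label permutation before scoring).

```python
# Direct reading of Eq. (16): fraction of mislabeled point trajectories.
def classification_error(predicted, ground_truth):
    mislabeled = sum(p != g for p, g in zip(predicted, ground_truth))
    return mislabeled / len(ground_truth)

print(classification_error([0, 0, 1, 1, 1], [0, 0, 1, 1, 0]))  # 0.2
```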
Unlike [18], we also emphasize the ability of the methods to recover the number of motions.
However, although the methods compared in [18] (except RANSAC) theoretically have the means to
⁴ Available at http://www.vision.jhu.edu/data/hopkins155/.
⁵ For MSL and KM, see http://www.suri.cs.okayama-u.ac.jp/e-program-separate.html/. For GPCA, LSA and RANSAC, refer to the URL for the Hopkins 155 dataset.
do so, their estimation of the number of motions is generally unreliable, and the benchmark results
in [18] were obtained by revealing the actual number of motions to the algorithms. A similar initialization exists in [5, 10], where the results were obtained by giving LLMC and ALC this knowledge
a priori (for LLMC, this was given at least to the variant LLMC 4m during dimensionality reduction [5], where m is the true number of motions). In the following subsections, where variants exist
for the compared algorithms, we use results from the best-performing variant.
In the following, the number of random hypotheses M and the step size h for ORK are fixed at 1000 and
300, respectively, and unlike the others, ORK is not given knowledge of the number of motions.
4.2 Data without gross outliers
We apply ORK on the Hopkins 155 dataset. Since ORK uses random sampling we repeat it 100
times for each sequence and average the results. Table 1 lists the obtained classification errors alongside those of previously proposed methods. ORK (column 9) gives comparable results to the
other methods for sequences with 2 motions (mean = 7.83%, median = 0.41%). For sequences
with 3 motions, ORK (mean = 12.62%, median = 4.75%) outperforms GPCA and RANSAC, but is
slightly less accurate than the others. However, bear in mind that unlike the other methods ORK is
not given prior knowledge of the true number of motions and has to estimate this independently.
Column    1      2      3      4      5       6      8      9      10
Method    REF    GPCA   LSA    MSL    RANSAC  LLMC   ALC    ORK    ORK'

Sequences with 2 motions
Mean      2.03   4.59   3.45   4.14   5.56    3.62   3.03   7.83   1.27
Median    0.00   0.38   0.59   0.00   1.18    0.00   0.00   0.41   0.00

Sequences with 3 motions
Mean      5.08   28.66  9.73   8.23   22.94   8.85   6.26   12.62  2.09
Median    2.40   28.26  2.33   1.76   22.03   3.19   1.02   4.75   0.05

Table 1: Classification error (%) on Hopkins 155 sequences. REF represents the reference/control
method which operates based on knowledge of the ground truth segmentation. Refer to [18] for details.
We also separately investigate the accuracy of ORK in estimating the number of motions, and compare it against KM [8], which was proposed for this purpose. Note that such an experiment was
not attempted in [18], since the approaches compared therein generally do not perform reliably in estimating the number of motions. The results in Table 2 (columns 1-2) show that for sequences with
two motions, KM (80.83%) outperforms ORK (67.37%) by about 13 percentage points. However, for
sequences with three motions, ORK (49.66%) vastly outperforms KM (14.29%), more than tripling its accuracy. The overall accuracy of KM (65.81%) is slightly better than that of
ORK (63.37%), but this is mostly because sequences with two motions form the majority of the
dataset (120 out of 155). This leads us to conclude that ORK is actually the superior method here.
Dataset      Hopkins 155        Hopkins 155 + Outliers
Column       1        2         3         4
Method       KM       ORK       KM        ORK
2 motions    80.83%   67.37%    00.00%    47.58%
3 motions    14.29%   49.66%    100.00%   50.00%
Overall      65.81%   63.37%    22.58%    48.13%

Table 2: Accuracy in determining the number of motions in a sequence. Note that in the experiment
with outliers (columns 3-4), KM returns a constant number of 3 motions for all sequences.
We re-evaluate the performance of ORK by considering only results on sequences where the number
of motions is estimated correctly by ORK (there are about 98 such cases, i.e. ≈ 63.37%). The results
are tabulated under ORK' (column 10) in Table 1. It can be seen that when ORK estimates the
number of motions correctly, it is significantly more accurate than the other methods.
Finally, we compare the speed of the methods in Table 3. ORK was implemented and run in Matlab
on a Dual Core Pentium 3.00GHz machine with 4GB of main memory (this is much less powerful
than the 8 Core Xeon 3.66GHz with 32GB of memory used in [18] for the other methods in Table 3).
The results show that ORK is comparable to LSA, much faster than MSL and ALC, but slower than
GPCA and RANSAC. Timing results for LLMC are not available in the literature.
Method      GPCA     LSA       MSL      RANSAC   ALC       ORK
2 motions   324ms    7.584s    11h 4m   175ms    10m 32s   4.249s
3 motions   738ms    15.956s   1d 23h   258ms    10m 32s   8.479s

Table 3: Average computation time on Hopkins 155 sequences.
4.3 Data with gross outliers
We next examine the ability of the proposed method to deal with gross outliers in motion data.
For each sequence in Hopkins 155, we add 100 gross outliers by creating trajectories corresponding
to mistracks or spuriously occurring points. These are created by randomly initializing 100 locations
in the first frame and allowing them to drift throughout the sequence according to Brownian motion.
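The outlier-injection protocol can be sketched directly; the frame size and Brownian step scale below are illustrative assumptions of ours, as the paper does not state them.

```python
import numpy as np

def make_outlier_trajectories(n_outliers, n_frames, frame_size=(640, 480),
                              step_scale=3.0, seed=0):
    """Create false trajectories: random starting locations in the first
    frame, then Brownian drift (cumulative Gaussian steps) over frames."""
    rng = np.random.default_rng(seed)
    start = rng.uniform([0, 0], frame_size, size=(n_outliers, 2))
    steps = rng.normal(scale=step_scale, size=(n_frames - 1, n_outliers, 2))
    traj = np.concatenate([start[None], start[None] + np.cumsum(steps, axis=0)])
    return traj  # shape: (n_frames, n_outliers, 2)

traj = make_outlier_trajectories(100, 30)
print(traj.shape)  # (30, 100, 2)
```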
The corrupted sequences are then subjected to the algorithms for motion segmentation. Since only
ORK is capable of rejecting outliers, the classification error of Eq. (16) is evaluated on the inlier
points only. The results in Table 4 illustrate that ORK (column 4) is the most accurate method by a
large margin. Despite being given the true number of motions a priori, GPCA, LSA and RANSAC
are unable to provide satisfactory segmentation results.
Column      1       2       3        4       5
Method      GPCA    LSA     RANSAC   ORK     ORK'

Sequences with 2 motions
Mean        28.66   24.25   30.64    16.50   1.62
Median      30.96   26.51   32.36    10.54   0.00

Sequences with 3 motions
Mean        40.61   30.94   42.24    19.99   2.68
Median      41.30   27.68   43.43    8.49    0.09

Table 4: Classification error (%) on Hopkins 155 sequences with 100 gross outliers per sequence.
In terms of estimating the number of motions, as shown in column 4 of Table 2, the overall accuracy of ORK is reduced to 48.13%. This is contributed mainly by the deterioration in accuracy on
sequences with two motions (47.58%), although the accuracy on sequences with three motions is
maintained (50.00%). This is not a surprising result, since sequences with three motions generally
have more (inlying) point trajectories than sequences with two motions, thus the outlier rates for sequences with three motions are lower (recall that a fixed number of 100 false trajectories is added).
On the other hand, the KM method (column 3) is completely overwhelmed by the outliers: for all
the sequences with outliers it returned a constant '3' as the number of motions.
We again re-evaluate ORK by considering results from sequences (now with gross outliers) where
the number of motions is correctly estimated (there are about 75 such cases, i.e. ≈ 48.13%). The
results tabulated under ORK' (column 5) in Table 4 show that the proposed method can accurately
segment the point trajectories without being influenced by the gross outliers.
5 Conclusions
In this paper we propose a novel and highly effective approach for multi-body motion segmentation. Our idea is based on encapsulating random hypotheses in a novel Mercer kernel and statistical
learning. We evaluated our method on the Hopkins 155 dataset, with results showing that the idea is
superior to other state-of-the-art approaches. It is by far the most accurate in terms of estimating the
number of motions, and it excels in segmentation accuracy despite lacking prior knowledge of the
number of motions. The proposed idea is also highly robust towards outliers in the input data.
Acknowledgements. We are grateful to the authors of [18], especially René Vidal, for discussions
and insights which have been immensely helpful.
References
[1] T. Boult and L. Brown. Factorization-based segmentation of motions. In IEEE Workshop on Motion Understanding, 1991.
[2] T.-J. Chin, H. Wang, and D. Suter. Robust fitting of multiple structures: The statistical learning approach. In ICCV, 2009.
[3] J. Costeira and T. Kanade. A multibody factorization method for independently moving objects. IJCV, 29(3):159-179, 1998.
[4] M. A. Fischler and R. C. Bolles. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Comm. of the ACM, 24:381-395, 1981.
[5] A. Goh and R. Vidal. Segmenting motions of different types by unsupervised manifold clustering. In CVPR, 2007.
[6] A. Gruber and Y. Weiss. Multibody factorization with uncertainty and missing data using the EM algorithm. In CVPR, 2004.
[7] K. Kanatani. Motion segmentation by subspace separation and model selection. In ICCV, 2001.
[8] K. Kanatani and C. Matsunaga. Estimating the number of independent motions for multibody segmentation. In ACCV, 2002.
[9] Y. Ma, H. Derksen, W. Hong, and J. Wright. Segmentation of multivariate mixed data via lossy coding and compression. TPAMI, 29(9):1546-1562, 2007.
[10] S. Rao, R. Tron, Y. Ma, and R. Vidal. Motion segmentation via robust subspace separation in the presence of outlying, incomplete, or corrupted trajectories. In CVPR, 2008.
[11] B. Schölkopf, A. Smola, and K. R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299-1319, 1998.
[12] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, 2004.
[13] J. Shi and J. Malik. Normalized cuts and image segmentation. TPAMI, 22(8):888-905, 2000.
[14] Y. Sugaya and K. Kanatani. Geometric structure of degeneracy for multi-body motion segmentation. In Workshop on Statistical Methods in Video Processing, 2004.
[15] R. Toldo and A. Fusiello. Robust multiple structures estimation with J-Linkage. In ECCV, 2008.
[16] C. Tomasi and T. Kanade. Shape and motion from image streams under orthography. IJCV, 9(2):137-154, 1992.
[17] P. Torr. Geometric motion segmentation and model selection. Phil. Trans. Royal Society of London, 356(1740):1321-1340, 1998.
[18] R. Tron and R. Vidal. A benchmark for the comparison of 3-D motion segmentation algorithms. In CVPR, 2007.
[19] R. Vidal and R. Hartley. Motion segmentation with missing data by PowerFactorization and Generalized PCA. In CVPR, 2004.
[20] R. Vidal, Y. Ma, and S. Sastry. Generalized Principal Component Analysis (GPCA). TPAMI, 27(12):1-15, 2005.
[21] J. Yan and M. Pollefeys. A general framework for motion segmentation: independent, articulated, rigid, non-rigid, degenerate and non-degenerate. In ECCV, 2006.
[22] L. Zelnik-Manor and M. Irani. Degeneracies, dependencies and their implications on multibody and multi-sequence factorization. In CVPR, 2003.
[23] W. Zhang and J. Kosecká. Nonparametric estimation of multiple structures with outliers. In Dynamical Vision, ICCV 2005 and ECCV 2006 Workshops, 2006.
Adaptive Design Optimization in Experiments with
People
Daniel R. Cavagnaro
Department of Psychology
Ohio State University
[email protected]
Mark A. Pitt
Department of Psychology
Ohio State University
[email protected]
Jay I. Myung
Department of Psychology
Ohio State University
[email protected]
Abstract
In cognitive science, empirical data collected from participants are the arbiters
in model selection. Model discrimination thus depends on designing maximally
informative experiments. It has been shown that adaptive design optimization
(ADO) allows one to discriminate models as efficiently as possible in simulation
experiments. In this paper we use ADO in a series of experiments with people to
discriminate the Power, Exponential, and Hyperbolic models of memory retention,
which has been a long-standing problem in cognitive science, providing an ideal
setting in which to test the application of ADO for addressing questions about human cognition. Using an optimality criterion based on mutual information, ADO
is able to find designs that are maximally likely to increase our certainty about the
true model upon observation of the experiment outcomes. Results demonstrate
the usefulness of ADO and also reveal some challenges in its implementation.
1 Introduction
For better or worse, human memory is not perfect, causing us to forget. Over a century of research
on memory has consistently shown that a person's ability to remember information just learned
(e.g., from studying a list of words), drops precipitously for a short time immediately after learning,
but then quickly decelerates, leveling off to a very low rate as more and more time elapses. The
simplicity of this data pattern has led to the introduction of a number of models to describe the rate
at which information is retained in memory.
Years of experimentation with humans (and animals) have resulted in a handful of models proving to
be superior to the rest of the field, but also proving to be increasingly difficult to discriminate [1, 2].
Three strong competitors are the power model (POW), the exponential model (EXP), and the hyperbolic model (HYP). Their equations are given in Table 1. Despite the best efforts of researchers to
design studies that were intended to discriminate among them, the results have not yielded decisive
evidence that favors one model, let alone consistency across studies [2, 3].
In these and other studies, well-established methods were used to increase the power of an experiment, and thus improve model discriminability. They included testing large numbers of participants
to reduce measurement error, testing memory at more retention intervals (i.e., the time between the
end of the study phase and when memory is probed) after the study phase (e.g., 8 instead of 5) so as
to obtain a more accurate description of the rate of retention, and replicating the experiment using a
range of different tasks or participant populations.
Model               Equation
Power (POW)         p = a(t + 1)^(−b)
Exponential (EXP)   p = a e^(−bt)
Hyperbolic (HYP)    p = a / (1 + bt)

Table 1: Three quantitative models of memory retention. In each equation, the symbol p (0 < p <
1) denotes the predicted probability of correct recall as a function of time interval t with model
parameters a and b.
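For concreteness, the three retention functions in Table 1 can be evaluated side by side; the parameter values below are illustrative, not fitted values from any experiment. Even with shared a and b, the predicted recall probabilities diverge as the interval t grows, which is exactly the information ADO exploits.

```python
import math

# The three retention models from Table 1.
def pow_model(t, a, b):
    return a * (t + 1) ** (-b)

def exp_model(t, a, b):
    return a * math.exp(-b * t)

def hyp_model(t, a, b):
    return a / (1 + b * t)

a, b = 0.9, 0.4  # illustrative parameter values
for t in [0, 1, 5, 20]:
    print(t,
          round(pow_model(t, a, b), 3),
          round(exp_model(t, a, b), 3),
          round(hyp_model(t, a, b), 3))
```

At t = 0 all three predict the same recall probability a, so short intervals cannot discriminate them; at longer intervals the exponential decays fastest and the power model slowest.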
In the present study, we used Bayesian adaptive design optimization (ADO) [4, 5, 6, 7] on groups
of people to achieve the same goal. Specifically, a retention experiment was repeated four times
on groups of people, and the set of retention intervals at which memory was probed was optimized
for each repetition using data collected in prior repetitions. The models in Table 1 were compared.
Because model predictions can differ significantly across retention intervals, our intent was to exploit
this information to the fullest using ADO, with the aim of providing some clarity on the form of the
retention function in humans.
While previous studies have demonstrated the potential of ADO to discriminate retention functions
in computer simulations [4, 5], this is the first study to utilize the methodology in experiments with
people. Although seemingly trivial, the application of such a methodology comes with challenges
that can severely restrict its usefulness. Success in applying ADO to a relatively simple design is a
necessary first step in assessing its ability to aid in model discrimination and its broader applicability.
We begin by reviewing ADO. This is followed by a series of retention experiments using the algorithm. We conclude with discussions of the implications of the empirical findings and of the benefits
and challenges of using ADO in laboratory experiments.
2 Adaptive optimal design
2.1 Bayesian framework
Before data collection can even begin in an experiment, many choices about its design must be made.
In particular, design parameters such as the sample size, the number of treatments (i.e., conditions
or levels of the independent variable) to study, and the proportion of observations to be allocated
to each treatment group must be chosen. These choices impact not only the statistical value of the
results, but also the cost of the experiment. For example, basic statistics tells us that increasing the
sample size would increase the statistical power of the experiment, but it would also increase its
cost (e.g., number of participants, amount of testing). An optimal experimental design is one that
maximizes the informativeness of the experiment, while being cost effective for the experimenter.
A principled approach to the problem of finding optimal experimental designs can be found in the
framework of Bayesian decision theory [8]. In this framework, each potential design is treated as a
gamble whose payoff is determined by the outcome of an experiment carried out with that design.
The idea is to estimate the utilities of hypothetical experiments carried out with each design, so that
an ?expected utility? of each design can be computed. This is done by considering every possible
observation that could be obtained from an experiment with each design, and then evaluating the
relative likelihoods and statistical values of these observations. The design with the highest expected
utility value is then chosen as the optimal design.
In the case of adaptively designed experiments, in which testing proceeds over the course of several
stages (i.e., periods of data collection), the information gained from all prior stages can be used to
improve the design at the current stage. Thus, the problem to be solved in adaptive design optimization (ADO) is to identify the most informative design at each stage of the experiment, taking
into account the results of all previous stages, so that one can infer the underlying model and its
parameter values in as few steps as possible.
Formally, ADO for model discrimination entails finding an optimal design at each stage that maximizes a utility function U(d):

d* = argmax_d U(d)   (1)

with the utility function defined as

U(d) = Σ_{m=1}^{K} p(m) ∫∫ u(d, θ_m, y) p(y|θ_m, d) p(θ_m) dy dθ_m,   (2)
where m = {1, 2, . . . , K} is one of a set of K models being considered, d is a design, y is the
outcome of an experiment with design d under model m, and θ_m is a parameterization of model m.
We refer to the function u(d, θ_m, y) in Equation (2) as the local utility of the design d. It measures
the utility of a hypothetical experiment carried out with design d when the data-generating model
is m, the parameters of the model take the value θ_m, and the outcome y is observed. Thus, U(d)
represents the expected value of the local utility function, where the expectation is taken over (1)
all models under consideration, (2) the full parameter space of each model, and (3) all possible
observations given a particular model-parameter pair, with respect to the model prior probability
p(m), the parameter prior distribution p(θ_m), and the sampling distribution p(y|θ_m, d), respectively.
2.2 Mutual information utility function
Selection of a utility function that adequately captures the goals of the experiment is an integral,
often crucial, part of design optimization. For the goal of discriminating among competing models,
one reasonable choice would be a utility function based on a statistical model selection criterion,
such as sum-of-squares error (SSE) or minimum description length (MDL) [MDL 9] as shown by
[10]. Another reasonable choice would be a utility function based on the expected Bayes factor
between pairs of competing models [11]. Both of these approaches rely on pairwise model comparisons, which can be problematic when there are more than two models under consideration.
Here, we use an information theoretic utility function based on mutual information [12]. It is an
ideal measure for quantifying the value of an experiment design because it quantifies the reduction
in uncertainty about one variable that is provided by knowledge of the value of another random
variable. Formally, the mutual information of a pair of random variables P and Q, taking values in
X , is given by
I(P; Q) = H(P) − H(P|Q)   (3)

where H(P) = −Σ_{x∈X} p(x) log p(x) is the entropy of P, and H(P|Q) = Σ_{x∈X} p(x) H(P|Q = x) is the conditional entropy of P given Q. A high mutual information indicates a large reduction
in uncertainty about P due to knowledge of Q. For example, if the distributions of P and Q were
perfectly correlated, meaning that knowledge of Q allowed perfect prediction of P , then the conditional distribution would be degenerate, having entropy zero. Thus, the mutual information of P
and Q would be H(P ), meaning that all of the entropy of P was eliminated through knowledge of
Q. Mutual information is symmetric in the sense that I(P ; Q) = I(Q; P ).
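As a numerical check of Eq. (3), the sketch below computes the mutual information of a small discrete joint distribution directly and via H(P) − H(P|Q); the joint table is an arbitrary example of ours.

```python
import math

def entropy(dist):
    """Shannon entropy (in nats) of a discrete distribution."""
    return -sum(p * math.log(p) for p in dist if p > 0)

def mutual_information(joint):
    """I(P; Q) computed directly; joint[i][j] = Prob(P = i, Q = j)."""
    p_marg = [sum(row) for row in joint]
    q_marg = [sum(col) for col in zip(*joint)]
    mi = 0.0
    for i, row in enumerate(joint):
        for j, pij in enumerate(row):
            if pij > 0:
                mi += pij * math.log(pij / (p_marg[i] * q_marg[j]))
    return mi

joint = [[0.4, 0.1],
         [0.1, 0.4]]
p_marg = [0.5, 0.5]
# H(P|Q) = sum over q of Prob(Q = q) * H(P | Q = q); here Prob(Q=q) = 0.5.
h_cond = sum(0.5 * entropy([joint[i][j] / 0.5 for i in range(2)])
             for j in range(2))
print(round(mutual_information(joint), 6))
print(round(entropy(p_marg) - h_cond, 6))  # same value, per Eq. (3)
```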
Mutual information can be implemented as an optimality criterion in ADO for model discrimination of each stage s (= 1, 2, . . .) of experimentation in the following way. (For simplicity, we omit
the subscript s in the equations below.) Let M be a random variable defined over a model set
{1, 2, . . . , K}, representing uncertainty about the true model, and let Y be a random variable denoting an experiment outcome. Hence P rob.(M = m) = p(m) is the prior probability of model m, and
R
PK
P rob.(Y = y|d) = m=1 p(y|d, m) p(m), where p(y|d, m) = p(y|?m , d)p(?m ) d?m , is the associated prior over experimental outcomes given design d. Then I(M ; Y |d) = H(M )?H(M |Y, d)
measures the decrease in uncertainty about which model drives the process under investigation given
the outcome of an experiment with design d. Since H(M ) is independent of the design d, maximizing I(M ; Y |d) on each stage of ADO is equivalent to minimizing H(M |Y, d), which is the expected
posterior entropy of M given d.
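The equivalence between maximizing I(M;Y|d) and minimizing the expected posterior entropy H(M|Y,d) can be sketched for a finite outcome space as follows. The model set, outcome space, and likelihood values are invented for illustration; in the paper, p(y|d,m) arises by integrating over θ_m.

```python
import math

def H(probs):
    """Shannon entropy in nats."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def expected_info_gain(prior_m, lik):
    """I(M;Y|d) = H(M) - H(M|Y,d) for a single design d.

    prior_m: list of prior model probabilities p(m)
    lik[m][y]: marginal likelihood p(y|d, m) over a finite outcome space
    (in the paper this is already integrated over theta_m).
    """
    n_y = len(lik[0])
    # prior predictive p(y|d) = sum_m p(y|d,m) p(m)
    pred = [sum(prior_m[m] * lik[m][y] for m in range(len(prior_m)))
            for y in range(n_y)]
    # expected posterior entropy H(M|Y,d)
    h_post = 0.0
    for y in range(n_y):
        if pred[y] == 0:
            continue
        post = [prior_m[m] * lik[m][y] / pred[y] for m in range(len(prior_m))]
        h_post += pred[y] * H(post)
    return H(prior_m) - h_post

# Two models, binary outcome; design A discriminates, design B does not.
prior = [0.5, 0.5]
design_A = [[0.9, 0.1], [0.1, 0.9]]   # models predict opposite outcomes
design_B = [[0.5, 0.5], [0.5, 0.5]]   # models predict the same thing
print(expected_info_gain(prior, design_A) > expected_info_gain(prior, design_B))  # True
print(expected_info_gain(prior, design_B))  # 0.0
```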
Implementing this ADO criterion requires identification of an appropriate local utility function
u(d, θ_m, y) in Equation (2); specifically, a function whose expectation over models, parameters, and observations is I(M; Y|d). Such a function can be found by writing

I(M; Y|d) = Σ_{m=1}^K p(m) ∫∫ p(y|θ_m, d) p(θ_m) log [p(m|y, d)/p(m)] dy dθ_m        (4)

from whence it follows that setting u(d, θ_m, y) = log [p(m|y, d)/p(m)] yields U(d) = I(M; Y|d). Thus, the
local utility of a design for a given model and experiment outcome is the log ratio of the posterior
probability to the prior probability of that model. Put another way, the above utility function prescribes that a design that increases our certainty about the model upon the observation of an outcome
is more valued than a design that does not.
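A quick numerical check of the claim underlying Equation (4), namely that the expectation of the local utility log p(m|y,d)/p(m) equals H(M) − H(M|Y,d), on a toy discrete problem (all numbers illustrative):

```python
import math

# Toy discrete check that the expectation of the local utility
# log p(m|y,d)/p(m) equals I(M;Y|d) = H(M) - H(M|Y,d). Parameters are
# taken as already marginalized out, so p(y|m) below stands in for the
# integral of p(y|theta_m,d) p(theta_m) over theta_m.
prior = [0.5, 0.5]                      # p(m)
lik = [[0.8, 0.2], [0.3, 0.7]]          # p(y|m): rows m, columns y

pred = [sum(prior[m] * lik[m][y] for m in range(2)) for y in range(2)]
post = [[prior[m] * lik[m][y] / pred[y] for y in range(2)] for m in range(2)]

# U(d): expectation of log(posterior / prior) over models and outcomes
U = sum(prior[m] * lik[m][y] * math.log(post[m][y] / prior[m])
        for m in range(2) for y in range(2))

# I(M;Y|d) computed directly from entropies
H = lambda ps: -sum(p * math.log(p) for p in ps if p > 0)
I = H(prior) - sum(pred[y] * H([post[m][y] for m in range(2)]) for y in range(2))

print(abs(U - I) < 1e-12)  # True: both formulations of U(d) agree
```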
A highly desirable property of this utility function is that it is suitable for comparing more than
two models, because it does not rely on pairwise comparisons of the models under consideration.
Further, as noted by [5], it can be seen as a natural extension of the Bayes factor for comparing more
than two models. To see this, notice that the local utility function can be rewritten, applying Bayes rule, as u(d, θ_m, y) = −log Σ_{k=1}^K p(k) p(y|k)/p(y|m).
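The Bayes-rule identity above can be verified numerically; the prior and likelihood values below are arbitrary illustrations:

```python
import math

# Numerical check of the Bayes-rule identity
#   log p(m|y,d)/p(m) = -log sum_k p(k) p(y|k) / p(y|m),
# where p(y|k) is the marginal likelihood of model k.
prior = [0.2, 0.5, 0.3]                 # p(k)
lik_y = [0.10, 0.40, 0.25]              # p(y|k) for one fixed outcome y

evidence = sum(prior[k] * lik_y[k] for k in range(3))
for m in range(3):
    lhs = math.log((lik_y[m] * prior[m] / evidence) / prior[m])
    rhs = -math.log(sum(prior[k] * lik_y[k] / lik_y[m] for k in range(3)))
    assert abs(lhs - rhs) < 1e-12
print("identity verified for all three models")
```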
2.3 Computational methods
Finding optimal designs for discriminating nonlinear models, such as POW, EXP and HYP, is a
nontrivial task, as the computation requires simultaneous optimization and high-dimensional integration. For a solution, we draw on a recent breakthrough in stochastic optimization [13]. The
basic idea is to recast the problem as a probability density simulation in which the optimal design
corresponds to the mode of the distribution. This allows one to find the optimal design without
having to evaluate the integration and optimization directly. The density is simulated by Markov
Chain Monte-Carlo [14], and the mode is sought by gradually "sharpening" the distribution with a
simulated annealing procedure [15]. Details of the algorithm can be found in [10, 16].
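To convey what the design search computes without reproducing the MCMC-plus-annealing machinery of [13], here is a deliberately naive stand-in: estimate I(M;Y|d) by Monte Carlo for each candidate retention interval on a coarse grid and keep the best. The retention-model forms, parameter priors, and trial counts are assumptions for illustration only, not the paper's specification.

```python
import math
import random

random.seed(0)

def p_correct(model, t, a, b):
    """Predicted recall probability after t seconds (assumed forms)."""
    if model == "POW":
        return a * (t + 1.0) ** (-b)
    if model == "EXP":
        return a * math.exp(-b * t)
    return a / (1.0 + b * t)  # "HYP"

def info_gain(t, n_trials=9, n_mc=500):
    """Monte Carlo estimate of I(M;Y|d) for a single interval d = t."""
    models = ["POW", "EXP", "HYP"]
    prior_m = 1.0 / len(models)
    lik = {}
    for m in models:
        # p(y|d,m): average Binomial(n_trials, p_correct) over theta ~ prior
        acc = [0.0] * (n_trials + 1)
        for _ in range(n_mc):
            a, b = random.uniform(0.5, 1.0), random.uniform(0.05, 0.5)
            p = min(max(p_correct(m, t, a, b), 1e-9), 1.0 - 1e-9)
            for y in range(n_trials + 1):
                acc[y] += math.comb(n_trials, y) * p**y * (1 - p)**(n_trials - y) / n_mc
        lik[m] = acc
    gain = 0.0
    for y in range(n_trials + 1):
        pred = sum(prior_m * lik[m][y] for m in models)
        for m in models:
            post = prior_m * lik[m][y] / pred
            if post > 0:
                gain += prior_m * lik[m][y] * math.log(post / prior_m)
    return gain

best = max(range(1, 41, 5), key=info_gain)
print("most informative single interval (s):", best)
```

A brute-force grid scan like this scales badly with the number of intervals per design, which is exactly why the paper resorts to density simulation instead.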
The model and parameter priors are updated at each stage s = 1, 2, . . . of experimentation. Upon the specific outcome z_s observed at stage s of an actual experiment carried out with design d_s, the model and parameter priors to be used to find an optimal design at the next stage are updated via Bayes rule and Bayes factor calculation [e.g., 17] as

p_{s+1}(θ_m) = p(z_s|θ_m, d_s) p_s(θ_m) / ∫ p(z_s|θ_m, d_s) p_s(θ_m) dθ_m        (5)

p_{s+1}(m) = p_0(m) / Σ_{k=1}^K p_0(k) BF_{(k,m)}(z_s)        (6)

where BF_{(k,m)}(z_s) denotes the Bayes factor, defined as the ratio of the marginal likelihood of model k to that of model m given the realized outcome z_s, with the marginal likelihoods computed using the updated parameter priors p_s(θ) from the preceding stage. The above updating scheme is applied successively at each stage of experimentation, after an initialization with equal model priors p_{(s=0)}(m) = 1/K and a parameter prior p_{(s=0)}(θ_m).
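A minimal grid-based sketch of the stage-to-stage updates in Equations (5) and (6). Equation (6) is stated via cumulative Bayes factors against the initial prior p_0; the code uses the equivalent recursion that multiplies in each stage's marginal likelihood. The two model forms and the single grid-valued parameter are illustrative assumptions.

```python
import math

def binom_lik(y, n, p):
    """Binomial likelihood of y correct out of n trials."""
    p = min(max(p, 1e-9), 1.0 - 1e-9)
    return math.comb(n, y) * p**y * (1 - p)**(n - y)

grid = [i / 100 for i in range(1, 100)]                        # grid over b
models = {"POW": lambda b, t: 0.9 * (t + 1.0) ** (-b),
          "EXP": lambda b, t: 0.9 * math.exp(-b * t)}
p_theta = {m: [1.0 / len(grid)] * len(grid) for m in models}   # p_s(theta_m)
p_model = {m: 1.0 / len(models) for m in models}               # p_s(m)

def update(y, n, t):
    """Fold in outcome y out of n trials at retention interval t."""
    marg = {}
    for m, f in models.items():
        lik = [binom_lik(y, n, f(b, t)) for b in grid]
        marg[m] = sum(l * w for l, w in zip(lik, p_theta[m]))              # p(z_s|m)
        p_theta[m] = [l * w / marg[m] for l, w in zip(lik, p_theta[m])]    # Eq. (5)
    z = sum(p_model[m] * marg[m] for m in models)
    for m in models:
        p_model[m] = p_model[m] * marg[m] / z                              # Eq. (6)

update(y=3, n=9, t=20)   # a low recall count at a long interval
print(p_model)
```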
3 Discriminating retention models using ADO
Retention experiments with people were performed using ADO to discriminate the three retention
models in Table 1. The number of retention intervals was fixed at three, and ADO was used to optimize the experiment with respect to the selection of the specific retention intervals. The methodology paralleled very closely that of Experiment 1 from [3, 18]. Details of the implementation are
described next.
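Table 1 itself is not reproduced in this excerpt. For concreteness, the sketch below uses the standard two-parameter forms of the three competitors from this literature; treat the exact forms and the parameter values as assumptions rather than quotations from the paper.

```python
import math

# Assumed functional forms: predicted probability of recall p after a
# retention interval of t seconds (these are not quoted from the paper).
def pow_model(t, a, b):
    return a * (t + 1.0) ** (-b)       # power: sharp early drop, heavy tail

def exp_model(t, a, b):
    return a * math.exp(-b * t)        # exponential: constant relative decay

def hyp_model(t, a, b):
    return a / (1.0 + b * t)           # hyperbolic

# With similar values at t = 1, the models diverge at long intervals,
# which is why ADO probes both short and long retention intervals.
for f in (pow_model, exp_model, hyp_model):
    print(f.__name__, round(f(1, 0.9, 0.2), 3), round(f(40, 0.9, 0.2), 3))
```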
3.1 Experiment methodology
A variant of the Brown-Peterson task [19, 20] was used. In each trial, a target list of six words was
randomly drawn from a pool of high frequency, monosyllabic nouns. These words were presented
on a computer screen at a rate of two words per second, and served as the material that participants
(undergraduates) had to remember. Five seconds of rehearsal followed, after which the target list
was hidden and distractor words were presented, one at a time at a rate of one word per second,
for the duration of the retention interval. Participants had to say each distractor word out loud as it
appeared on the computer screen. The purpose of the distractor task was to occupy the participant's
verbal memory in order to prevent additional rehearsal of the target list during the retention interval.
The distractor words were drawn from a separate pool of 2000 monosyllabic nouns, verbs, and
4
Proportion correct
Experiment 1
Experiment 2
Experiment 3
Experiment 4
1
1
1
1
0. 8
0. 8
0. 8
0. 8
0. 6
0. 6
0. 6
0. 6
0. 4
0. 4
POW
EXP
HYP
0. 2
0
10
0. 4
POW
EXP
HYP
0. 2
20
30
40
Retention interval
0
10
0. 4
POW
EXP
HYP
0. 2
20
30
40
0
10
20
Retention interval
Retention interval
POW
EXP
HYP
0. 2
30
40
0
10
20
30
40
Retention interval
Figure 1: Best fits of POW, EXP, and HYP at the conclusion of each experiment. Each data point
represents the observed proportion of correct responses out of 54 trials from one participant. The
level of noise is consistent with the assumption of binomial error. The clustering of retention intervals around the regions where the best fitting models are visually discernable hints at the tendency
for ADO to favor points at which the predictions of the models are most distinct.
adjectives. At the conclusion of the retention interval, participants were given up to 60 seconds
for free recall of the words (typed responses) from the target list. A word was counted as being
remembered only if it was typed correctly.
We used a method of moments [e.g. 21] to construct informative prior distributions for the model
parameters. Independent Beta distributions were constructed to match the mean and variance of the
best fitting parameters for the individual participant data from Experiment 1 from [3, 18].
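The method-of-moments construction mentioned above amounts to solving for the Beta parameters that match a given mean and variance; a sketch follows (the target moments used here are made up, not the values fitted from [3, 18]):

```python
# Method-of-moments construction of a Beta prior: choose alpha, beta so
# the Beta distribution matches a target mean and variance (here, the
# mean/variance of best-fitting parameter values from earlier data).
def beta_from_moments(mean, var):
    assert 0 < mean < 1 and 0 < var < mean * (1 - mean)
    common = mean * (1 - mean) / var - 1
    return mean * common, (1 - mean) * common

a, b = beta_from_moments(0.8, 0.01)
print(a, b)                      # Beta(a, b) with mean 0.8, variance 0.01
# sanity check: recover the moments
mean = a / (a + b)
var = a * b / ((a + b) ** 2 * (a + b + 1))
print(round(mean, 6), round(var, 6))   # 0.8 0.01
```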
We conducted four replications of the experiment to assess consistency across participants. Each
experiment was carried out across five ADO stages using a different participant at each stage (20
participants total). At the first stage of an experiment, an optimal set of three retention intervals,
each between 1 and 40 seconds, was computed using the ADO algorithm based on the priors at that
stage. There were nine trials at each time interval per stage, yielding 54 Bernoulli observations at
each of the three retention intervals. At the end of a stage, priors were updated before beginning the
next stage. For example, the prior for stage 2 of experiment 1 was obtained by updating the prior
for stage 1 of experiment 1 based on the results obtained in stage 1 in experiment 1. There was no
sharing of information between experiments.
3.2 Results and analysis
Before presenting the Bayesian analysis, we begin with a brief preliminary analysis in order to
highlight a few points about the quality of the data. Figure 1 depicts the raw data from each of the
four experiments, along with the best fitting parameterization of each model. These graphs reveal
two important points. First, the noise level in the measure of memory (number of correct responses)
is high, but not inconsistent with the assumption of binomial variance. Moreover, the variation does
not exceed that in [3], the data from which our prior distributions were constructed. Second, the
retention intervals chosen by ADO are spread across their full range (1 to 40 seconds), but they are
especially clustered around the regions where the best fitting models are most discernable visually
(e.g., 5-15, 35-40). This hints at the tendency for ADO to favor retention intervals at which the
models are most distinct given current beliefs about their parameterizations.
A simple comparison of the fits of each model does not reveal a clear-cut winner. The fits are bad
and often similar across experiments. This is not surprising since such an analysis does not take into
account the noise in the models, nor does it take into account the complexity of the models. Both
are addressed in the following, Bayesian analysis.
When comparing three or more models, it can be useful to consider the probability of each model m
relative to all other models under consideration, given the data y [22, 23]. Formally, this is given by
p(m) = p(m|y) / Σ_{k=1}^K p(k|y)        (7)
        Experiment 1   Experiment 2   Experiment 3   Experiment 4
POW     0.093          0.886          0.151          0.996
EXP     0.525          0.029          0.343          0.001
HYP     0.382          0.085          0.507          0.003
Table 2: Relative posterior probabilities of each model at the conclusion of each experiment. Experiments 2 and 4 provide strong evidence in favor of POW, while experiments 1 and 3 are inconclusive,
neither favoring, nor ruling out, any model.
which is simply a reformulation of Equation (6). Table 2 lists these relative posterior probabilities
for each of the three models at the conclusion of each of the four experiments. Scanning across
the table, two patterns are visible in the data. In Experiments 2 and 4, the data clearly favor the
power model. The posterior probabilities of the power model (0.886 and 0.996, respectively) greatly
exceed those for the other two models. Using the Bayes factor as a measure of support for a model,
comparisons of POW over EXP and POW over HYP yield values of 30.6 and 10.4. This can be
interpreted as strong evidence for POW as the correct model according to the scale given by Jeffreys
(1961). Conclusions from Experiment 4 are even stronger. With Bayes factors of 336 for POW over
EXP and 992 for POW over HYP, the evidence is decisively in support of the power model.
The results in the other two experiments are equivocal. In contrast to Experiments 2 and 4, POW
has the lowest posterior probability in both Experiments 1 and 3 (0.093 and 0.151, respectively).
EXP has the highest probability in Experiment 1 (0.525), and HYP has the highest in Experiment 3
(0.507). When Bayes Factors are computed between models, not only is there no decisive winner,
but the evidence is not strong enough to rule out any model. For example, in Experiment 1, EXP
over POW, the largest difference in posterior probability, yields a Bayes Factor of only 5.6. The
corresponding comparison in Experiment 3, HYP over POW, yields a value of 3.3.
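Equation (7) is just a renormalization of posterior mass over the model set; for example, the Experiment 2 column of Table 2 reproduces the Bayes factor of 30.6 for POW over EXP quoted above:

```python
# Equation (7): relative posterior probability of each model, i.e. the
# posteriors renormalized over the model set under consideration.
def relative_posteriors(unnormalized):
    z = sum(unnormalized.values())
    return {m: v / z for m, v in unnormalized.items()}

# Experiment 2 column of Table 2 (already normalized, so unchanged here)
rel = relative_posteriors({"POW": 0.886, "EXP": 0.029, "HYP": 0.085})
print({m: round(v, 3) for m, v in rel.items()})
# Bayes factor POW over EXP from the same quantities:
print(round(rel["POW"] / rel["EXP"], 1))   # ≈ 30.6, matching the text
```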
Inspection of the model predictions at consecutive stages of an experiment provides insight into
the workings of the ADO algorithm, and provides visual confirmation that the algorithm chooses
time points that are intended to be maximally discriminating. Figure 2 contains the predictions of
each of the three models for the first two stages of Experiments 2 and 3. The columns of density
plots corresponding to stage 1 show the predictions for each model based on the prior parameter
distributions. Based on these predictions, the ADO algorithm finds an optimal set of retention
intervals to be 1 second, 7 seconds, and 12 seconds. It is easy to see that POW predicts a much
steeper decline in retention for these three retention intervals than do EXP and HYP. Upon observing
the number of correct responses at each of those intervals in stage 1 (depicted by the blue dots in
the graphs), the algorithm computes the posterior likelihood of each model. In experiment 2, for
example, the observed numbers of correct responses for that participant lie in regions that are much
more likely under POW than under EXP or HYP, hence the posterior probability of POW is increased
from 0.333 to 0.584 after stage 1, whereas the posteriors for EXP and HYP are decreased to 0.107
and 0.309, respectively. The data from stage 1 of experiment 3 similarly favor POW.
At the start of stage 2, the parameter priors are updated based on the results from stage 1, hence
the ranges of likely outcomes for each model are much narrower than they were in stage 1, and
concentrated around the results from stage 1. Based on these updated parameter priors, the ADO
algorithm finds 1 second, 11 seconds, and 36 seconds to be an optimal set of retention intervals to test
in stage 2 of Experiment 2, and 1 second, 9 seconds, and 35 seconds to be an optimal set of retention
intervals to test in stage 2 of Experiment 3. The difference between these two designs reflects the
difference between the updated beliefs about the models, which can be seen by comparing the stage-2 density plots for the respective experiments in Figure 2.
As hoped for with ADO, testing in stage 2 produced results that begin to discriminate the models.
What is somewhat surprising, however, is that the results favor different models, with POW having
the highest probability (0.911) in Experiment 2, and HYP (0.566) in Experiment 3. The reason
for this is the very different patterns of data produced by the participants in the two experiments.
The participant in Experiment 2 remembered more words overall than the participant in Experiment
3, especially at the longest retention interval. These two factors together combine to yield very
different posterior probabilities across models.
[Figure 2 appears here: density plots of predicted correct responses (out of 54) versus retention interval for POW, EXP, and HYP at stages 1 and 2 of Experiments 2 and 3. Panel posterior probabilities: Experiment 2, stage 1: p(POW) = .584, p(EXP) = .107, p(HYP) = .309; stage 2: p(POW) = .911, p(EXP) = .010, p(HYP) = .079. Experiment 3, stage 1: p(POW) = .543, p(EXP) = .173, p(HYP) = .284; stage 2: p(POW) = .098, p(EXP) = .336, p(HYP) = .566.]
Figure 2: Predictions of POW, EXP and HYP based on the prior parameter distributions in the
first two stages of Experiments 2 and 3. Darker colors indicate higher probabilities. Light blue dots
mark the observations at the given stage, and dark blue dots mark observations from previous stages.
Relative posterior model probabilities based on all observations up to the current stage are given in
the lower left corner of each plot.
4 Discussion
The results of the current study demonstrate that ADO can work as advertised. Over a series of
testing stages, the algorithm updated the experiment's design (with new retention intervals) on the
basis of participant data to determine the form of the retention function, yielding final posterior
probabilities in Experiments 2 and 4 that unambiguously favor the power model. Like Wixted and
Ebbesen (1991), these results champion the power model, and they do so much more definitively
than any experiment that we know of.
The failure to replicate these results in Experiments 1 and 3 tempers such strong conclusions about
the superiority of the power model and can raise doubts about the usefulness of ADO. The data in
Figure 2 (also Figure 1) hint at a likely reason for the observed inconsistencies: participant variability. In Figure 2, the variability in performance at stage 2 of Experiments 2 and 3 is very large,
near the upper limit of what one would expect from binomial noise. If the variability in the data
were to exceed the variability predicted by the models, then the more extreme data points could be
incorrectly interpreted as evidence in favor of the wrong model, rather than being attributed to the
intrinsic noise in the true model. Moreover, even when the noise is taken into account accurately,
ADO does not guarantee that an experiment will generate data that discriminates the models; it
merely sets up ideal conditions for that to occur. It is up to the participants to provide discriminating
data points.
The inconsistencies across experiments reveal one of the challenges of using ADO. It is designed
to be highly sensitive to participant performance, and this sensitivity can also be a weakness under
certain conditions. If the variability noted above is uninteresting noise, then by testing the same
participant at each stage (a within-subject design), we should be able to reduce the problem. On the
other hand, the inconclusiveness of the data in Experiments 1 and 3 may point to a more interesting
possibility: a minority of participants may retain information at a rate that is best described by an
exponential or hyperbolic function. Such individual differences would be identifiable with the use
of a within-subject design.
As with any search-based methodology, the application of ADO requires a number of decisions to
be made. Although there are too many to cover here, we conclude the paper by touching on the most
important ones.
When running an experiment with ADO, any model that is expected to be a serious competitor
should be included in the analysis from the start of experimentation. In the present study, we considered three retention functions with strong theoretical motivations, which have outperformed others
in previous experiments [2, 3]. The current methodology does not preclude considering a larger set
of models (the only practical limitations are computing time and the patience of the experimenter).
However, once that set of models is decided, the designs chosen by ADO are optimal for discriminating those, and only those, models. Thus, the designs that we found and the data we have
collected in these experiments are not necessarily optimal for discriminating between, say, a power
model and a logarithmic model. Therefore, ADO is best used as a tool for confirmatory rather than
exploratory analyses. That is, it is best suited for situations in which the field of potential models
can be narrowed to a few of the strongest competitors.
Another important choice to be made before using ADO is which prior distributions to use. Using informative priors is very helpful but not necessarily essential to implementing ADO. Since
the parameter distributions are updated sequentially, the data will quickly trump all but the most
pathological prior distributions. Therefore, using a different prior distribution should not affect the
conclusions of the sequential experiment. The ideal approach would be to use an informative prior
that accurately reflects individual performance. In the absence of reliable information from which
to construct such a prior, any vague prior that does not give appreciably different densities to those
regions of the parameter space where there is a reasonable fit would do [22]. However, constructing
such priors can be difficult due to the nonlinearity of the models.
Finally, in the current study, we applied ADO to just one property of the experiment design: the
lengths of the retention intervals. This leaves several other design variables open to subjective
manipulation. Two such variables that are crucial to the timely and successful completion of the
experiment are the number of retention intervals, and the number of trials allotted to each interval.
In theory, one could allot all of the trials in each stage to just one interval.¹ In practice, however, this
approach would require more stages, and consequently more participants, to collect observations
at the same number of intervals as an approach that allotted trials to multiple intervals in each
stage. Such an approach could be disadvantageous if observations at several different intervals
were essential for discriminating the models under consideration. On the other hand, increasing the
number of intervals at which to test in each stage greatly increases the complexity of the design
space, thus increasing the length of the computation needed to find an optimal design. Extending
the ADO algorithm to address these multiple design variables simultaneously would be a useful
contribution.
5 Conclusion
In the current study, ADO was successfully applied in a laboratory experiment with people, the
purpose of which was to discriminate models of memory retention. The knowledge learned from
its application contributes to our understanding of human memory. Although challenges remain in
the implementation of ADO, the present success is an encouraging sign. The goals of future work
include applying ADO to more complex experimental designs and to other research questions in
cognitive science (e.g., numerical representation in children).
¹ Testing at one interval per stage is not possible with a utility function based on statistical model selection
criteria, such as MDL, which require computation of the maximum likelihood estimate [10]. However, it can
be done with a utility function based on mutual information [5].
References
[1] D. J. Navarro, M. A. Pitt, and I. J. Myung. Assessing the distinguishability of models and the informativeness of data. Cognitive Psychology, 49:47–84, 2004.
[2] D. C. Rubin and A. E. Wenzel. One hundred years of forgetting: A quantitative description of retention. Psychological Review, 103(4):734–760, 1996.
[3] J. T. Wixted and E. B. Ebbesen. On the form of forgetting. Psychological Science, 2(6):409–415, 1991.
[4] D. R. Cavagnaro, J. I. Myung, M. A. Pitt, and Y. Tang. Better data with fewer participants and trials: improving experiment efficiency with adaptive design optimization. In N. A. Taatgen and H. van Rijn, editors, Proceedings of the 31st Annual Conference of the Cognitive Science Society, pages 93–98. Cognitive Science Society, 2009.
[5] D. R. Cavagnaro, J. I. Myung, M. A. Pitt, and J. V. Kujala. Adaptive design optimization: A mutual information based approach to model discrimination in cognitive science. Neural Computation, 2009. In press.
[6] J. V. Kujala and T. J. Lukka. Bayesian adaptive estimation: The next dimension. Journal of Mathematical Psychology, 50(4):369–389, 2006.
[7] J. Lewi, R. Butera, and L. Paninski. Sequential optimal design of neurophysiology experiments. Neural Computation, 21:619–687, 2009.
[8] K. Chaloner and I. Verdinelli. Bayesian experimental design: A review. Statistical Science, 10(3):273–304, 1995.
[9] P. Grünwald. A tutorial introduction to the minimum description length principle. In P. Grünwald, I. J. Myung, and M. A. Pitt, editors, Advances in Minimum Description Length: Theory and Applications. The M.I.T. Press, 2005.
[10] J. I. Myung and M. A. Pitt. Optimal experimental design for model discrimination. Psychological Review, in press.
[11] A. Heavens, T. Kitching, and L. Verde. On model selection forecasting, dark energy and modified gravity. Monthly Notices of the Royal Astronomical Society, 380(3):1029–1035, 2007.
[12] T. M. Cover and J. A. Thomas. Elements of Information Theory. John Wiley & Sons, Inc., 1991.
[13] P. Müller, B. Sansó, and M. De Iorio. Optimal Bayesian design by inhomogeneous Markov chain simulation. Journal of the American Statistical Association, 99(467):788–798, 2004.
[14] W. R. Gilks, S. Richardson, and D. Spiegelhalter. Markov Chain Monte Carlo in Practice. Chapman & Hall, 1996.
[15] S. Kirkpatrick, C. D. Gelatt, and M. P. Vecchi. Optimization by simulated annealing. Science, 220:671–680, 1983.
[16] B. Amzal, F. Y. Bois, E. Parent, and C. P. Robert. Bayesian-optimal design via interacting particle systems. Journal of the American Statistical Association, 101(474):773–785, 2006.
[17] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian Data Analysis. Chapman & Hall, 2004.
[18] J. T. Wixted and E. B. Ebbesen. Genuine power curves in forgetting: A quantitative analysis of individual subject forgetting functions. Memory & Cognition, 25(5):731–739, 1997.
[19] J. A. Brown. Some tests of the decay theory of immediate memory. Quarterly Journal of Experimental Psychology, 10:12–21, 1958.
[20] L. R. Peterson and M. J. Peterson. Short-term retention of individual verbal items. Journal of Experimental Psychology, 58:193–198, 1959.
[21] S. D. Guikema. Formulating informative, data-based priors for failure probability estimation in reliability analysis. Reliability Engineering & System Safety, 92:490–502, 2007.
[22] M. D. Lee. A Bayesian analysis of retention functions. Journal of Mathematical Psychology, 48:310–321, 2004.
[23] B. P. Carlin and T. A. Louis. Bayes and Empirical Bayes Methods for Data Analysis, 2nd ed. Chapman & Hall, 2000.
Fast Learning from Non-i.i.d. Observations
Ingo Steinwart
Information Sciences Group CCS-3
Los Alamos National Laboratory
Los Alamos, NM 87545, USA
[email protected]
Andreas Christmann
University of Bayreuth
Department of Mathematics
D-95440 Bayreuth
[email protected]
Abstract
We prove an oracle inequality for generic regularized empirical risk minimization algorithms learning from α-mixing processes. To illustrate this oracle inequality, we use it to derive learning rates for some learning methods including least squares SVMs. Since the proof of the oracle inequality uses recent localization ideas developed for independent and identically distributed (i.i.d.) processes, it turns out that these learning rates are close to the optimal rates known in the i.i.d. case.
1 Introduction
In the past, most articles investigating statistical properties of learning algorithms assumed that the
observed data was generated in an i.i.d. fashion. However, in many applications this assumption
cannot be strictly justified since the sample points are intrinsically temporal and thus often weakly
dependent. Typical examples for this phenomenon are applications where observations come from
(suitably pre-processed) time series, i.e., for example, financial predictions, signal processing, system observation and diagnosis, and speech or text recognition. A set of natural and widely accepted
notions for describing such weak dependencies¹ are mixing concepts such as α-, β-, and φ-mixing, since a) they offer a generalization to i.i.d. processes that is satisfied by various types of stochastic processes including Markov chains and many time series models, and b) they quantify the dependence in a conceptually simple way that is accessible to various types of analysis.
Because of these features, the machine learning community is currently in the process of appreciating and accepting these notions as the increasing number of articles in this direction shows. Probably the first work in this direction goes back to Yu [20], whose techniques for β-mixing processes inspired subsequent work such as [18, 10, 11], while the analysis of specific learning algorithms probably started with [9, 5, 8]. More recently, [7] established consistency of regularized boosting algorithms learning from β-mixing processes, while [15] established consistency of support vector machines (SVMs) learning from α-mixing processes, which constitute the largest class of mixing
processes. For the latter, [21] established generalization bounds for empirical risk minimization
(ERM) and [19, 17] analyzed least squares support vector machines (LS-SVMs).
In this work, we establish a general oracle inequality for generic regularized learning algorithms and α-mixing observations by combining a Bernstein inequality for such processes [9] with localization ideas for i.i.d. processes pioneered by [6] and refined by e.g. [1]. To illustrate this oracle inequality, we then use it to show learning rates for some algorithms including ERM over finite sets and LS-SVMs. In the ERM case our results match those in the i.i.d. case if one replaces the number of observations with the "effective number of observations", while, for LS-SVMs, our rates are at least quite close to the recently obtained optimal rates [16] for i.i.d. observations. However, the latter difference is not surprising, when considering the fact that [16] used heavy machinery from

¹For example, [4] write on page 71: "... it is a common practice to assume a certain mild asymptotic independence (such as α-mixing) as a precondition in the context of ... nonlinear time series."
empirical process theory such as Talagrand's inequality and localized Rademacher averages, while our results only use a light-weight argument based on Bernstein's inequality.
2 Definitions, Results, and Examples
Let X be a measurable space and Y ⊂ R be closed. Furthermore, let (Ω, A, µ) be a probability space and Z := (Z_i)_{i≥1} be a stochastic process such that Z_i : Ω → X × Y for all i ≥ 1. For n ≥ 1, we further write D_n := ((X_1, Y_1), ..., (X_n, Y_n)) := (Z_1, ..., Z_n) for a training set of length n that is distributed according to the first n components of Z. Throughout this work, we assume that Z is stationary, i.e., the (X × Y)^n-valued random variables (Z_{i_1}, ..., Z_{i_n}) and (Z_{i_1+i}, ..., Z_{i_n+i}) have the same distribution for all n, i, i_1, ..., i_n ≥ 1. We further write P for the distribution of one (and thus all) Z_i, i.e., for all measurable A ⊂ X × Y, we have

    P(A) = µ({ω ∈ Ω : Z_i(ω) ∈ A}).    (1)
To learn from stationary processes whose components are not independent, [15] suggests that it is necessary to replace the independence assumption by a notion that still guarantees certain concentration inequalities. We will focus on α-mixing, which is based on the α-mixing coefficients

    α(Z, µ, n) := sup { |µ(A ∩ B) − µ(A)µ(B)| : i ≥ 1, A ∈ A_1^i and B ∈ A_{i+n}^∞ },    n ≥ 1,

where A_1^i and A_{i+n}^∞ are the σ-algebras generated by (Z_1, ..., Z_i) and (Z_{i+n}, Z_{i+n+1}, ...), respectively. Throughout this work, we assume that the process Z is geometrically α-mixing, that is,

    α(Z, µ, n) ≤ c exp(−b n^γ),    n ≥ 1,    (2)

for some constants b > 0, c ≥ 0, and γ > 0. Of course, i.i.d. processes satisfy (2) for c = 0 and all b, γ > 0. Moreover, several time series models such as ARMA and GARCH, which are often used to describe, e.g., financial data, satisfy (2) under natural conditions [4, Chapter 2.6.1], and the same is true for many Markov chains including some dynamical systems perturbed by dynamic noise, see e.g. [18, Chapter 3.5]. An extensive and thorough account on mixing concepts including stronger mixing notions such as β- and φ-mixing is provided by [3].
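The mixing condition (2) can be made concrete with a toy simulation. The sketch below (my own illustration, not from the paper; all names are mine) draws a stationary Gaussian AR(1) process — a textbook example of a geometrically mixing time series — and checks that its empirical autocorrelations decay geometrically in the lag, the kind of weakening dependence that the coefficients α(Z, µ, n) quantify.

```python
import numpy as np

def simulate_ar1(phi, n, seed=0):
    """Stationary Gaussian AR(1): Z_t = phi * Z_{t-1} + e_t with e_t ~ N(0, 1)."""
    rng = np.random.default_rng(seed)
    z = np.empty(n)
    z[0] = rng.normal(scale=1.0 / np.sqrt(1.0 - phi ** 2))  # stationary start
    for t in range(1, n):
        z[t] = phi * z[t - 1] + rng.normal()
    return z

def autocorr(z, lag):
    """Empirical autocorrelation of z at the given positive lag."""
    z = z - z.mean()
    return float(np.dot(z[:-lag], z[lag:]) / np.dot(z, z))

z = simulate_ar1(phi=0.7, n=200_000)
# The population autocorrelation at lag k is phi**k: geometric decay in k,
# consistent in spirit with the geometric mixing condition (2).
for k in (1, 5, 10):
    print(f"lag {k:2d}: empirical {autocorr(z, k):+.3f}, population {0.7 ** k:+.3f}")
```

Autocorrelation is of course a much weaker summary than the α-mixing coefficients themselves, but for this Gaussian linear model both decay at a geometric rate.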
Let us now describe the learning algorithms we are interested in. To this end, we assume that we have a hypothesis set F consisting of bounded measurable functions f : X → R that is pre-compact with respect to the supremum norm ‖·‖_∞, i.e., for all ε > 0, the covering numbers

    N(F, ‖·‖_∞, ε) := inf { n ≥ 1 : ∃ f_1, ..., f_n ∈ F such that F ⊂ ⋃_{i=1}^n B(f_i, ε) }

are finite, where B(f_i, ε) := {f ∈ ℓ_∞(X) : ‖f − f_i‖_∞ ≤ ε} denotes the ε-ball with center f_i in the space ℓ_∞(X) of bounded functions f : X → R. Moreover, we assume that we have a regularizer, that is, a function Υ : F → [0, ∞). Following [13, Definition 2.22], we further say that a function L : X × Y × R → [0, ∞) is a loss that can be clipped at some M > 0, if L is measurable and

    L(x, y, t̂) ≤ L(x, y, t),    (x, y, t) ∈ X × Y × R,    (3)

where t̂ denotes the clipped value of t at ±M, that is, t̂ := t if t ∈ [−M, M], t̂ := −M if t < −M, and t̂ := M if t > M. Various often used loss functions can be clipped. For example, if Y := {−1, 1} and L is a convex, margin-based loss represented by φ : R → [0, ∞), that is, L(y, t) = φ(yt) for all y ∈ Y and t ∈ R, then L can be clipped if and only if φ has a global minimum, see [13, Lemma 2.23]. In particular, the hinge loss, the least squares loss for classification, and the squared hinge loss can be clipped, but the logistic loss for classification and the AdaBoost loss cannot be clipped. On the other hand, [12] established a simple technique, which is similar to inserting a small amount of noise into the labeling process, to construct a clippable modification of an arbitrary convex, margin-based loss. Moreover, if Y := [−M, M] and L is a convex, distance-based loss represented by some ψ : R → [0, ∞), that is, L(y, t) = ψ(y − t) for all y ∈ Y and t ∈ R, then L can be clipped whenever ψ(0) = 0, see again [13, Lemma 2.23]. In particular, the least squares loss and the pinball loss used for quantile regression can be clipped, if the space of labels Y is bounded.
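As a quick sanity check of the clipping property (3), the following sketch (my own, not from the paper) verifies numerically that clipping at M = 1 never increases the hinge loss, which is clippable because φ(t) = max(0, 1 − t) attains its global minimum.

```python
import numpy as np

def clip_at(t, M):
    """Clipped value t_hat of t at +-M, as in (3)."""
    return np.clip(t, -M, M)

def hinge(y, t):
    """Hinge loss L(y, t) = max(0, 1 - y*t); clippable at M = 1."""
    return np.maximum(0.0, 1.0 - y * t)

# Check L(y, t_hat) <= L(y, t) on a grid: clipping never increases the loss.
ts = np.linspace(-5.0, 5.0, 201)
for y in (-1.0, 1.0):
    assert np.all(hinge(y, clip_at(ts, 1.0)) <= hinge(y, ts) + 1e-12)

# By contrast, the logistic loss log(1 + exp(-y*t)) is strictly decreasing in
# y*t and has no global minimum, so no constant M can satisfy (3) for all t.
print("hinge-loss clipping check passed")
```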
Given a loss function L and an f : X → R, we often use the notation L ∘ f for the function (x, y) ↦ L(x, y, f(x)). Moreover, the L-risk is defined by

    R_{L,P}(f) := ∫_{X×Y} L(x, y, f(x)) dP(x, y),

and the minimal L-risk is R*_{L,P} := inf{R_{L,P}(f) | f : X → R}. In addition, a function f*_{L,P} satisfying R_{L,P}(f*_{L,P}) = R*_{L,P} is called a Bayes decision function. Finally, we denote empirical risks based on D_n by R_{L,D_n}(f), that is, for a realization D_n(ω) of the training set we have

    R_{L,D_n(ω)}(f) = (1/n) ∑_{i=1}^n L(X_i(ω), Y_i(ω), f(X_i(ω))).
Given a regularizer Υ : F → [0, ∞), a clippable loss, and an accuracy δ ≥ 0, we consider learning methods that, for all n ≥ 1, produce a decision function f_{D_n,Υ} ∈ F satisfying

    Υ(f_{D_n,Υ}) + R_{L,D_n}(f̂_{D_n,Υ}) ≤ inf_{f∈F} ( Υ(f) + R_{L,D_n}(f) ) + δ.    (4)

Note that methods such as SVMs (see below) that minimize the right-hand side of (4) exactly satisfy (4), because of (3). The following theorem, which is our main result, establishes an oracle inequality for methods (4), when the training data is generated by Z.
Theorem 2.1 Let L : X × Y × R → [0, ∞) be a loss that can be clipped at M > 0 and that satisfies L(x, y, 0) ≤ 1, L(x, y, t̂) ≤ B, and

    |L(x, y, t) − L(x, y, t′)| ≤ |t − t′|    (5)

for all (x, y) ∈ X × Y and t, t′ ∈ [−M, M], where B > 0 is some constant. Moreover, let Z := (Z_i)_{i≥1} be an X × Y-valued process that satisfies (2), and P be defined by (1). Assume that there exist a Bayes decision function f*_{L,P} and constants ϑ ∈ [0, 1] and V ≥ B^{2−ϑ} such that

    E_P (L ∘ f̂ − L ∘ f*_{L,P})² ≤ V · ( E_P (L ∘ f̂ − L ∘ f*_{L,P}) )^ϑ,    f ∈ F,    (6)

where F is a hypothesis set and L ∘ f denotes the function (x, y) ↦ L(x, y, f(x)). Finally, let Υ : F → [0, ∞) be a regularizer, f_0 ∈ F be a fixed function, and B_0 ≥ B be a constant such that ‖L ∘ f_0‖_∞ ≤ B_0. Then, for all fixed ε > 0, δ ≥ 0, τ > 0, and n ≥ max{b/8, 2^{2+5/γ} b^{−1/γ}}, every learning method defined by (4) satisfies with probability µ not less than 1 − 3Ce^{−τ}:

    Υ(f_{D_n,Υ}) + R_{L,P}(f̂_{D_n,Υ}) − R*_{L,P} < 3( Υ(f_0) + R_{L,P}(f_0) − R*_{L,P} ) + 4B_0 c_B τ / n^α + 4ε + 2δ
        + ( 36 c_σ V (τ + ln N(F, ‖·‖_∞, ε)) / n^α )^{1/(2−ϑ)},

where α := γ/(γ+1), C := 1 + 4e^{−2} c, c_σ := (8^{2+γ}/b)^{1/(1+γ)}, and c_B := c_σ/3.
Before we illustrate this theorem by a few examples, let us briefly discuss the variance bound (6). For example, if Y = [−M, M] and L is the least squares loss, then it is well-known that (6) is satisfied for V := 16M² and ϑ = 1, see e.g. [13, Example 7.3]. Moreover, under some assumptions on the distribution P, [14] established a variance bound of the form (6) for the so-called pinball loss used for quantile regression. In addition, for the hinge loss, (6) is satisfied for ϑ := q/(q + 1), if Tsybakov's noise assumption holds for q, see [13, Theorem 8.24]. Finally, based on [2], [12] established a variance bound with ϑ = 1 for the earlier mentioned clippable modifications of strictly convex, twice continuously differentiable margin-based loss functions.

One might wonder why the constant B_0 is necessary in Theorem 2.1, since apparently it only adds further complexity. However, a closer look reveals that the constant B only bounds functions of the form L ∘ f̂, while B_0 bounds the function L ∘ f_0 for an unclipped f_0 ∈ F. Since we do not assume that all f ∈ F satisfy f̂ = f, we conclude that in general B_0 is necessary. We refer to Examples 2.4 and 2.5 for situations where B_0 is significantly larger than B.
Let us now consider a few examples of learning methods to which Theorem 2.1 applies. The first one is empirical risk minimization over a finite set.

Example 2.2 Let the hypothesis set F be finite and Υ(f) = 0 for all f ∈ F. Moreover, assume that ‖f‖_∞ ≤ M for all f ∈ F. Then, for accuracy δ := 0, the learning method described by (4) is ERM, and Theorem 2.1 provides, by some simple estimates, the oracle inequality

    R_{L,P}(f_{D_n}) − R*_{L,P} < 3( inf_{f∈F} R_{L,P}(f) − R*_{L,P} ) + 4B c_B τ / n^α + ( 36 c_σ V (τ + ln |F|) / n^α )^{1/(2−ϑ)}.

Besides constants, this oracle inequality is an exact analogue to the standard oracle inequality for ERM learning from i.i.d. processes, [13, Theorem 7.2].
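A minimal sketch of the method (4) in the setting of Example 2.2 — Υ ≡ 0, δ = 0, finite F — might look as follows. The data-generating model and the hypothesis class are illustrative assumptions of mine, not taken from the paper; the point is only that ERM simply returns the empirical-risk minimizer, whether or not the sample is i.i.d.

```python
import numpy as np

rng = np.random.default_rng(1)

# Finite hypothesis set F: a handful of linear predictors f(x) = a * x.
F = [lambda x, a=a: a * x for a in (-1.0, 0.0, 0.5, 1.0, 2.0)]

# Weakly dependent training data: AR(1) inputs driving a regression model.
n = 2000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = 0.5 * x[t - 1] + rng.normal()          # dependent inputs
y = 1.0 * x + 0.1 * rng.normal(size=n)            # true slope is 1.0

def empirical_risk(f):
    """R_{L,D_n}(f) for the least squares loss."""
    return float(np.mean((y - f(x)) ** 2))

# ERM over the finite set F (here the regularizer is identically zero).
f_erm = min(F, key=empirical_risk)
print("selected slope:", f_erm(1.0))
```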
Before we present another example, let us first reformulate Theorem 2.1 for the case that the involved
covering numbers have a certain polynomial behavior.
Corollary 2.3 Consider the situation of Theorem 2.1 and additionally assume that there exist constants a > 0 and p ∈ (0, 1] such that

    ln N(F, ‖·‖_∞, ε) ≤ a ε^{−2p},    ε > 0.

Then there is a constant c_{p,ϑ} > 0 only depending on p and ϑ such that the inequality of Theorem 2.1 reduces to

    Υ(f_{D_n,Υ}) + R_{L,P}(f̂_{D_n,Υ}) − R*_{L,P} < 3( Υ(f_0) + R_{L,P}(f_0) − R*_{L,P} ) + c_{p,ϑ} ( c_σ V a / n^α )^{1/(2+2p−ϑ)}
        + ( 36 c_σ V τ / n^α )^{1/(2−ϑ)} + 4B_0 c_B τ / n^α + 2δ.

For the learning rates considered in the following examples, the exact value of c_{p,ϑ} is of no importance. However, a careful numerical analysis shows that c_{p,ϑ} ≤ 40 for all p ∈ (0, 1] and ϑ ∈ [0, 1].
Corollary 2.3 can be applied to various methods including e.g. SVMs with the hinge loss or the pinball loss, and regularized boosting algorithms. For the latter, we refer to e.g. [2] for some learning rates in the i.i.d. case and to [7] for a consistency result in the case of geometrically β-mixing observations. Unfortunately, a detailed exposition of the learning rates resulting from Corollary 2.3 for all these algorithms is clearly out of the scope of this paper, and hence we will only discuss learning rates for LS-SVMs. However, the only reason we picked LS-SVMs is that they are one of the few methods for which both rates for learning from α-mixing processes and optimal rates in the i.i.d. case are known. By considering LS-SVMs we can thus assess the sharpness of our results. Let us begin by briefly recalling LS-SVMs. To this end, let X be a compact metric space and k be a continuous kernel on X with reproducing kernel Hilbert space (RKHS) H. Given a regularization parameter λ > 0 and the least squares loss L(y, t) := (y − t)², the LS-SVM finds the unique solution

    f_{D_n,λ} = arg min_{f∈H} ( λ ‖f‖²_H + R_{L,D_n}(f) ).

To describe the approximation properties of H, we further need the approximation error function

    A(λ) := inf_{f∈H} ( λ ‖f‖²_H + R_{L,P}(f) − R*_{L,P} ),    λ > 0.
Example 2.4 (Rates for least squares SVMs) Let X be a compact metric space, Y = [−1, 1], and Z and P as above. Furthermore, let L be the least squares loss and H be the RKHS of a continuous kernel k over X. Assume that the closed unit ball B_H of H satisfies

    ln N(B_H, ‖·‖_∞, ε) ≤ a ε^{−2p},    ε > 0,    (7)

where a > 0 and p ∈ (0, 1] are some constants. In addition, assume that the approximation error function satisfies A(λ) ≤ c λ^β for some c > 0, β ∈ (0, 1], and all λ > 0. We define

    ρ := min{ β, β / (β + 2pβ + p) }.

Then Corollary 2.3 applied to F := λ^{−1/2} B_H shows that the LS-SVM using λ_n := n^{−αρ/β} learns with rate n^{−αρ}. Let us compare this rate with other recent results: [17] establishes the learning rate

    n^{−2αβ/(β+3)},

whenever (2) is satisfied for some γ. At first glance, this rate looks stronger, since it is independent of p. However, a closer look shows that it depends on the confidence level 1 − 3Ce^{−τ} by a factor of e^τ rather than by the factor of τ appearing in our analysis, and hence these rates are not comparable. Moreover, in the case β = 1, our rates are still faster whenever p ∈ (0, 1/3], which is e.g. satisfied for sufficiently smooth kernels, see e.g. [13, Theorem 6.26]. Moreover, [19] has recently established the rate

    n^{−αβ/(2p+1)},    (8)

which is faster than ours, if and only if β > (1+p)/(1+2p). In particular, for highly smooth kernels such as the Gaussian RBF kernels, where p can be chosen arbitrarily close to 0, their rate is never faster. Moreover, [19] requires knowing β, which, as we will briefly discuss in Remark 2.6, is not the case for our rates. In this regard, it is interesting to note that their iterative proof procedure, see [13, Chapter 7.1] for a generic description of this technique, can also be applied to our oracle inequality. The resulting rate is essentially n^{−α min{β, β/(β+pβ+p)}}, which is always faster than (8). Due to space constraints and the fact that these rates require knowing β and γ, we skip a detailed exposition. Finally, both [19] and [17] only consider LS-SVMs, while Theorem 2.1 applies to various learning methods.
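The crossover claim between our exponent αρ and the exponent of (8) can be checked mechanically. The following arithmetic sketch (mine, using exact rational arithmetic) verifies that β/(2p+1) exceeds ρ = min{β, β/(β+2pβ+p)} exactly when β > (1+p)/(1+2p), as stated above.

```python
from fractions import Fraction as Fr

def rho(beta, p):
    """Exponent rho = min(beta, beta / (beta + 2*p*beta + p)) from Example 2.4."""
    return min(beta, beta / (beta + 2 * p * beta + p))

def rate_19(beta, p):
    """Exponent beta / (2*p + 1) of the rate (8) established in [19]."""
    return beta / (2 * p + 1)

# Claimed crossover: (8) beats our rate exactly when beta > (1+p)/(1+2p).
for p in (Fr(1, 10), Fr(1, 3), Fr(1, 2), Fr(1, 1)):
    threshold = (1 + p) / (1 + 2 * p)
    below, above = threshold - Fr(1, 100), threshold + Fr(1, 100)
    assert rate_19(below, p) <= rho(below, p)   # our rate at least as fast
    assert rate_19(above, p) > rho(above, p)    # (8) strictly faster
print("crossover check passed")
```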
Example 2.5 (Almost optimal rates for least squares SVMs) Consider the situation of Example 2.4, and additionally assume that there exists a constant C_p > 0 such that

    ‖f‖_∞ ≤ C_p ‖f‖_H^p ‖f‖_{L_2(P_X)}^{1−p},    f ∈ H.    (9)

As in [16], we can then bound B_0 ≲ λ^{(β−1)p}, and hence the SVM using λ_n := n^{−α/(β+2pβ+p)} learns with rate

    n^{−αβ/(β+2pβ+p)},

compared to the optimal rate n^{−β/(β+p)} in the i.i.d. case, see [16]. In particular, if H = W^m(X) is a Sobolev space over X ⊂ R^d with smoothness m > d/2, and the marginal distribution P_X is absolutely continuous with respect to the uniform distribution, where the corresponding density is bounded away from 0 and ∞, then (7) and (9) are satisfied for p := d/(2m). Moreover, the assumption on the approximation error function is satisfied for β := s/m, whenever f*_{L,P} ∈ W^s(X) and s ∈ (d/2, m]. Consequently, the resulting learning rate is

    n^{−2αs/(2s + d + 2ds/m)},

which in the i.i.d. case, where α = 1, is worse than the optimal rate n^{−2s/(2s+d)} by the term 2ds/m. Note that this difference can be made arbitrarily small by picking a sufficiently large m. Unfortunately, we do not know whether the extra term 2ds/m is an artifact of our proof techniques, which are relatively light-weight compared to the heavy machinery used in the i.i.d. case. Similarly, we do not know whether the Bernstein inequality for α-mixing processes we used, see Theorem 3.1, is optimal, but it is the best inequality we could find in the literature. However, if there is, or will be, a better version of this inequality, our oracle inequalities can be easily improved, since our techniques only require a generic form of Bernstein's inequality.
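The Sobolev-space rate above differs from the i.i.d.-optimal exponent only through the term 2ds/m in the denominator. A few lines of arithmetic (mine; the parameter values are arbitrary but respect s ∈ (d/2, m]) make the shrinking gap explicit.

```python
def rate_exponent(s, d, m, alpha=1.0):
    """Exponent of the learning rate n^{-alpha * 2s/(2s + d + 2ds/m)}."""
    return alpha * 2 * s / (2 * s + d + 2 * d * s / m)

def optimal_iid_exponent(s, d):
    """Optimal i.i.d. exponent 2s/(2s + d)."""
    return 2 * s / (2 * s + d)

# Example values: d = 2, target smoothness s = 2, increasing Sobolev order m.
s, d = 2.0, 2.0
for m in (2.0, 4.0, 16.0, 1024.0):
    gap = optimal_iid_exponent(s, d) - rate_exponent(s, d, m)
    print(f"m={m:7.0f}  exponent={rate_exponent(s, d, m):.4f}  gap={gap:.4f}")
# The gap is driven by the extra denominator term 2ds/m -> 0 as m -> infinity.
```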
Remark 2.6 In the examples above, the rates were achieved by picking particular regularization sequences that depend on both β and γ, which, in turn, are almost never known in practice. Fortunately, there exists an easy way to achieve the above rates without such knowledge. Indeed, let us assume we pick a polynomially growing n^{−1/p}-net Λ_n of (0, 1], split the training sample D_n into two (almost) equally sized and consecutive parts D_n^{(1)} and D_n^{(2)}, compute f_{D_n^{(1)},λ} for all λ ∈ Λ_n, and pick a λ* ∈ Λ_n whose f_{D_n^{(1)},λ*} minimizes the R_{L,D_n^{(2)}}-risk over Λ_n. Then combining Example 2.2 with the oracle inequality of Corollary 2.3 for LS-SVMs shows that the learning rates of Examples 2.4 and 2.5 are also achieved by this training-validation approach. Although the proof is a straightforward modification of [13, Theorem 7.24], it is beyond the page limit of this paper.
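A schematic version of this training-validation selection might look as follows (all modeling choices — kernel, λ-grid, data — are my illustrative assumptions, not the paper's). Note the split into consecutive halves rather than a random shuffle, which matters precisely because the observations are dependent.

```python
import numpy as np

rng = np.random.default_rng(3)

# Dependent sample: AR(1) inputs, smooth target plus noise.
n = 1000
x = np.empty(n)
x[0] = rng.normal()
for t in range(1, n):
    x[t] = 0.6 * x[t - 1] + 0.8 * rng.normal()
y = np.cos(x) + 0.1 * rng.normal(size=n)

def krr(xtr, ytr, lam, gamma=0.5):
    """Least squares kernel method: solve (K + m*lam*I) alpha = ytr."""
    K = np.exp(-gamma * (xtr[:, None] - xtr[None, :]) ** 2)
    alpha = np.linalg.solve(K + len(xtr) * lam * np.eye(len(xtr)), ytr)
    return lambda t: np.exp(-gamma * (t[:, None] - xtr[None, :]) ** 2) @ alpha

# Split into two consecutive halves (not shuffled: the data are dependent).
x1, y1 = x[: n // 2], y[: n // 2]
x2, y2 = x[n // 2 :], y[n // 2 :]

# Finite lambda-grid; select the lambda whose first-half fit minimizes the
# empirical least squares risk on the second half.
grid = [10.0 ** (-k) for k in range(0, 7)]
val_risk = {lam: float(np.mean((y2 - krr(x1, y1, lam)(x2)) ** 2)) for lam in grid}
lam_star = min(grid, key=val_risk.get)
print("selected lambda:", lam_star, " validation risk:", round(val_risk[lam_star], 4))
```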
3 Proofs
In the following, ⌊t⌋ denotes the largest integer n satisfying n ≤ t, and similarly, ⌈t⌉ denotes the smallest integer n satisfying n ≥ t.
The key result we need to prove the oracle inequality of Theorem 2.1 is the following Bernstein-type inequality for geometrically α-mixing processes, which was established in [9, Theorem 4.3]:

Theorem 3.1 Let Z := (Z_i)_{i≥1} be an X × Y-valued stochastic process that satisfies (2) and P be defined by (1). Furthermore, let h : X × Y → R be a bounded measurable function for which there exist constants B > 0 and σ ≥ 0 such that E_P h = 0, E_P h² ≤ σ², and ‖h‖_∞ ≤ B. For n ≥ 1 we define

    n^{(α)} := ⌊ n ⌈ (8n/b)^{1/(γ+1)} ⌉^{−1} ⌋.

Then, for all n ≥ 1 and all ε > 0, we have

    µ( { ω ∈ Ω : (1/n) ∑_{i=1}^n h(Z_i(ω)) ≥ ε } ) ≤ (1 + 4e^{−2} c) · exp( −3ε² n^{(α)} / (6σ² + 2εB) ).    (10)
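The quantity n^{(α)} acts as an "effective number of observations". A direct transcription of its definition (mine, for illustration) shows how it interpolates between strongly dependent (small γ, few effective points) and nearly independent (large γ) regimes.

```python
import math

def effective_n(n, b, gamma):
    """Effective number of observations n^(alpha) from Theorem 3.1:
    floor( n / ceil( (8*n/b)**(1/(gamma+1)) ) )."""
    block = math.ceil((8.0 * n / b) ** (1.0 / (gamma + 1.0)))
    return n // block

# Faster mixing (larger gamma) leaves more effective observations; the
# effective size grows roughly like n**alpha with alpha = gamma/(gamma+1).
for gamma in (0.5, 1.0, 2.0):
    alpha = gamma / (gamma + 1.0)
    n = 10 ** 6
    print(f"gamma={gamma}: n^(alpha)={effective_n(n, b=1.0, gamma=gamma)}, "
          f"n**alpha={n ** alpha:.0f}")
```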
Before we prove Theorem 2.1, we need to slightly modify (10). To this end, we first observe that ⌈t⌉ ≤ 2t for all t ≥ 1 and ⌊t⌋ ≥ t/2 for all t ≥ 2. From this it is easy to conclude that, for all n satisfying n ≥ n_0 := max{b/8, 2^{2+5/γ} b^{−1/γ}}, we have

    n^{(α)} ≥ 2^{−(2γ+5)/(γ+1)} b^{1/(γ+1)} n^α,

where α := γ/(γ+1). For C := 1 + 4e^{−2} c, c_σ := (8^{2+γ}/b)^{1/(1+γ)}, and c_B := c_σ/3, we thus obtain

    µ( ω ∈ Ω : (1/n) ∑_{i=1}^n h(Z_i(ω)) ≥ ε ) ≤ C e^{−τ},    n ≥ n_0,

where τ := ε² n^α / (c_σ σ² + ε c_B B). Simple transformations and estimations then yield

    µ( ω ∈ Ω : (1/n) ∑_{i=1}^n h(Z_i(ω)) ≥ √(c_σ σ² τ / n^α) + c_B B τ / n^α ) ≤ C e^{−τ}    (11)

for all n ≥ max{b/8, 2^{2+5/γ} b^{−1/γ}} and τ > 0. In the following, we will use only this inequality. In addition, we will need the following simple and well-known lemma:

Lemma 3.2 For q ∈ (1, ∞), define q′ ∈ (1, ∞) by 1/q + 1/q′ = 1. Then, for all a, b ≥ 0, we have

    (qa)^{2/q} (q′b)^{2/q′} ≤ (a + b)²    and    ab ≤ a^q/q + b^{q′}/q′.

Proof of Theorem 2.1: For f : X → R we define h_f := L ∘ f − L ∘ f*_{L,P}. By the definition of f_{D_n,Υ}, we then have Υ(f_{D_n,Υ}) + E_{D_n} h_{f̂_{D_n,Υ}} ≤ Υ(f_0) + E_{D_n} h_{f_0} + δ, and consequently we obtain

    Υ(f_{D_n,Υ}) + R_{L,P}(f̂_{D_n,Υ}) − R*_{L,P}
      = Υ(f_{D_n,Υ}) + E_P h_{f̂_{D_n,Υ}}
      ≤ Υ(f_0) + E_{D_n} h_{f_0} − E_{D_n} h_{f̂_{D_n,Υ}} + E_P h_{f̂_{D_n,Υ}} + δ
      = (Υ(f_0) + E_P h_{f_0}) + (E_{D_n} h_{f_0} − E_P h_{f_0}) + (E_P h_{f̂_{D_n,Υ}} − E_{D_n} h_{f̂_{D_n,Υ}}) + δ.    (12)

Let us first bound the term E_{D_n} h_{f_0} − E_P h_{f_0}. To this end, we further split this difference into

    E_{D_n} h_{f_0} − E_P h_{f_0} = E_{D_n}(h_{f_0} − h_{f̂_0}) − E_P(h_{f_0} − h_{f̂_0}) + E_{D_n} h_{f̂_0} − E_P h_{f̂_0}.    (13)

Now L ∘ f_0 − L ∘ f̂_0 ≥ 0 implies h_{f_0} − h_{f̂_0} = L ∘ f_0 − L ∘ f̂_0 ∈ [0, B_0], and hence we obtain

    E_P( (h_{f_0} − h_{f̂_0}) − E_P(h_{f_0} − h_{f̂_0}) )² ≤ E_P(h_{f_0} − h_{f̂_0})² ≤ B_0 E_P(h_{f_0} − h_{f̂_0}).

Inequality (11) applied to h := (h_{f_0} − h_{f̂_0}) − E_P(h_{f_0} − h_{f̂_0}) thus shows that

    E_{D_n}(h_{f_0} − h_{f̂_0}) − E_P(h_{f_0} − h_{f̂_0}) < √( c_σ B_0 E_P(h_{f_0} − h_{f̂_0}) τ / n^α ) + c_B B_0 τ / n^α

holds with probability µ not less than 1 − Ce^{−τ}. Moreover, using √(ab) ≤ a/2 + b/2, we find

    √( n^{−α} τ c_σ B_0 E_P(h_{f_0} − h_{f̂_0}) ) ≤ E_P(h_{f_0} − h_{f̂_0}) + n^{−α} c_σ B_0 τ / 4,

and consequently we have with probability µ not less than 1 − Ce^{−τ} that

    E_{D_n}(h_{f_0} − h_{f̂_0}) − E_P(h_{f_0} − h_{f̂_0}) < E_P(h_{f_0} − h_{f̂_0}) + 7c_B B_0 τ / (4n^α).    (14)
In order to bound the remaining term in (13), that is E_{D_n} h_{f̂_0} − E_P h_{f̂_0}, we first observe that (5) implies ‖h_{f̂_0}‖_∞ ≤ B, and hence we have ‖h_{f̂_0} − E_P h_{f̂_0}‖_∞ ≤ 2B. Moreover, (6) yields

    E_P( h_{f̂_0} − E_P h_{f̂_0} )² ≤ E_P h²_{f̂_0} ≤ V (E_P h_{f̂_0})^ϑ.

In addition, if ϑ ∈ (0, 1], Lemma 3.2 implies for q := 2/(2−ϑ), q′ := 2/ϑ, a := (n^{−α} c_σ 2^{−ϑ} ϑ^ϑ V τ)^{1/2}, and b := (2ϑ^{−1} E_P h_{f̂_0})^{ϑ/2} that

    √( c_σ V τ (E_P h_{f̂_0})^ϑ / n^α ) ≤ ((2−ϑ)/2) ( c_σ 2^{−ϑ} ϑ^ϑ V τ / n^α )^{1/(2−ϑ)} + E_P h_{f̂_0} ≤ ( c_σ V τ / n^α )^{1/(2−ϑ)} + E_P h_{f̂_0}.

Since E_P h_{f̂_0} ≥ 0, this inequality also holds for ϑ = 0, and hence (11) shows that we have

    E_{D_n} h_{f̂_0} − E_P h_{f̂_0} < E_P h_{f̂_0} + ( c_σ V τ / n^α )^{1/(2−ϑ)} + 2c_B B τ / n^α    (15)

with probability µ not less than 1 − Ce^{−τ}. By combining this estimate with (14) and (13), we now obtain that with probability µ not less than 1 − 2Ce^{−τ} we have

    E_{D_n} h_{f_0} − E_P h_{f_0} < E_P h_{f_0} + ( c_σ V τ / n^α )^{1/(2−ϑ)} + 2c_B B τ / n^α + 7c_B B_0 τ / (4n^α),    (16)

i.e., we have established a bound on the second term in (12).
Let us now fix a minimal ε-net C of F, that is, an ε-net of cardinality |C| = N(F, ‖·‖_∞, ε). Let us first consider the case n^α < 3c_B(τ + ln |C|). Combining (16) with (12) and using B ≤ B_0, B^{2−ϑ} ≤ V, 3c_B ≤ c_σ, 2 ≤ 4^{1/(2−ϑ)}, and E_P h_{f̂_{D_n,Υ}} − E_{D_n} h_{f̂_{D_n,Υ}} ≤ 2B, we then find

    Υ(f_{D_n,Υ}) + R_{L,P}(f̂_{D_n,Υ}) − R*_{L,P}
      ≤ Υ(f_0) + 2E_P h_{f_0} + ( c_σ V τ / n^α )^{1/(2−ϑ)} + 2c_B B τ / n^α + 7c_B B_0 τ / (4n^α) + ( E_P h_{f̂_{D_n,Υ}} − E_{D_n} h_{f̂_{D_n,Υ}} ) + δ
      ≤ Υ(f_0) + 2E_P h_{f_0} + ( c_σ V (τ + ln |C|) / n^α )^{1/(2−ϑ)} + 4c_B B_0 τ / n^α + 2B + δ
      ≤ 3Υ(f_0) + 3E_P h_{f_0} + ( 36 c_σ V (τ + ln |C|) / n^α )^{1/(2−ϑ)} + 4c_B B_0 τ / n^α + δ

with probability µ not less than 1 − 2Ce^{−τ}. It thus remains to consider the case n^α ≥ 3c_B(τ + ln |C|). To establish a non-trivial bound on the term E_P h_{f̂_{D_n,Υ}} − E_{D_n} h_{f̂_{D_n,Υ}} in (12), we define the functions

    g_{f,r} := ( E_P h_{f̂} − h_{f̂} ) / ( E_P h_{f̂} + r ),    f ∈ F,

where r > 0 is a real number to be fixed later. For f ∈ F, we then have ‖g_{f,r}‖_∞ ≤ 2Br^{−1}, and for ϑ > 0, q := 2/(2−ϑ), q′ := 2/ϑ, a := r, and b := E_P h_{f̂} ≠ 0, the first inequality of Lemma 3.2 yields

    E_P g²_{f,r} ≤ E_P h²_{f̂} / (E_P h_{f̂} + r)² ≤ (2−ϑ)^{2−ϑ} ϑ^ϑ E_P h²_{f̂} / ( 4 r^{2−ϑ} (E_P h_{f̂})^ϑ ) ≤ V r^{ϑ−2}.    (17)

Moreover, for ϑ ∈ (0, 1] and E_P h_{f̂} = 0, we have E_P h²_{f̂} = 0 by the variance bound (6), which in turn implies E_P g²_{f,r} ≤ V r^{ϑ−2}. Finally, it is not hard to see that E_P g²_{f,r} ≤ V r^{ϑ−2} also holds for ϑ = 0. Now, (11) together with a simple union bound yields

    µ( D_n ∈ (X × Y)^n : sup_{f∈C} E_{D_n} g_{f,r} < √( c_σ V τ / (n^α r^{2−ϑ}) ) + 2c_B B τ / (n^α r) ) ≥ 1 − C |C| e^{−τ},
and consequently we see that, with probability µ not less than 1 − C |C| e^{−τ}, we have

    E_P h_{f̂} − E_{D_n} h_{f̂} < ( E_P h_{f̂} + r ) ( √( c_σ V τ / (n^α r^{2−ϑ}) ) + 2c_B B τ / (n^α r) )    (18)

for all f ∈ C. Since f_{D_n,Υ} ∈ F, there now exists an f_{D_n} ∈ C with ‖f_{D_n,Υ} − f_{D_n}‖_∞ ≤ ε. By the assumed Lipschitz continuity of L the latter implies

    | h_{f̂_{D_n}}(x, y) − h_{f̂_{D_n,Υ}}(x, y) | ≤ | f̂_{D_n}(x) − f̂_{D_n,Υ}(x) | ≤ | f_{D_n}(x) − f_{D_n,Υ}(x) | ≤ ε

for all (x, y) ∈ X × Y. Combining this with (18), we obtain

    E_P h_{f̂_{D_n,Υ}} − E_{D_n} h_{f̂_{D_n,Υ}} < ( E_P h_{f̂_{D_n,Υ}} + ε + r ) ( √( c_σ V (τ + ln |C|) / (n^α r^{2−ϑ}) ) + 2c_B B (τ + ln |C|) / (n^α r) ) + 2ε

with probability µ not less than 1 − C e^{−τ}. By combining this estimate with (12) and (16), we then obtain that

    Υ(f_{D_n,Υ}) + E_P h_{f̂_{D_n,Υ}} < Υ(f_0) + 2E_P h_{f_0} + ( c_σ V τ / n^α )^{1/(2−ϑ)} + 2c_B B τ / n^α + 7c_B B_0 τ / (4n^α) + 2ε + δ
        + ( E_P h_{f̂_{D_n,Υ}} + ε + r ) ( √( c_σ V (τ + ln |C|) / (n^α r^{2−ϑ}) ) + 2c_B B (τ + ln |C|) / (n^α r) )    (19)

holds with probability µ not less than 1 − 3Ce^{−τ}. Consequently, it remains to bound the various terms. To this end, we first observe that for

    r := ( 36 c_σ V (τ + ln |C|) / n^α )^{1/(2−ϑ)},

we obtain, since 6 ≤ 36^{1/(2−ϑ)},

    ( c_σ V τ / n^α )^{1/(2−ϑ)} ≤ r/6    and    √( c_σ V (τ + ln |C|) / (n^α r^{2−ϑ}) ) ≤ 1/6.
In addition, V ≥ B^{2−ϑ}, c_σ ≥ 3c_B, 6 ≤ 36^{1/(2−ϑ)}, and n^α ≥ 3c_B(τ + ln |C|) imply

    2c_B B (τ + ln |C|) / (n^α r) = (2/(3r)) · B · ( 3c_B (τ + ln |C|) / n^α ) ≤ (2/(3r)) ( 3c_B V (τ + ln |C|) / n^α )^{1/(2−ϑ)} ≤ (2/(3r)) · (r/6) = 1/9.

Using these estimates together with 1/6 + 1/9 ≤ 1/3 in (19), we see that

    Υ(f_{D_n,Υ}) + E_P h_{f̂_{D_n,Υ}} < Υ(f_0) + 2E_P h_{f_0} + r/3 + 7c_B B_0 τ / (4n^α) + 2ε + δ + ( E_P h_{f̂_{D_n,Υ}} + ε + r ) / 3

holds with probability µ not less than 1 − 3Ce^{−τ}. Consequently, we have

    Υ(f_{D_n,Υ}) + E_P h_{f̂_{D_n,Υ}} < 3Υ(f_0) + 3E_P h_{f_0} + ( 36 c_σ V (τ + ln |C|) / n^α )^{1/(2−ϑ)} + 4c_B B_0 τ / n^α + 4ε + 2δ,

i.e. we have shown the assertion.
Proof of Corollary 2.3: The result follows from minimizing the right-hand side of the oracle inequality of Theorem 2.1 with respect to ε.
References

[1] P. L. Bartlett, O. Bousquet, and S. Mendelson. Local Rademacher complexities. Ann. Statist., 33:1497–1537, 2005.
[2] G. Blanchard, G. Lugosi, and N. Vayatis. On the rate of convergence of regularized boosting classifiers. J. Mach. Learn. Res., 4:861–894, 2003.
[3] R. C. Bradley. Introduction to Strong Mixing Conditions. Vol. 1–3. Kendrick Press, Heber City, UT, 2007.
[4] J. Fan and Q. Yao. Nonlinear Time Series. Springer, New York, 2003.
[5] A. Irle. On consistency in nonparametric estimation under mixing conditions. J. Multivariate Anal., 60:123–147, 1997.
[6] W. S. Lee, P. L. Bartlett, and R. C. Williamson. The importance of convexity in learning with squared loss. IEEE Trans. Inform. Theory, 44:1974–1980, 1998.
[7] A. Lozano, S. Kulkarni, and R. Schapire. Convergence and consistency of regularized boosting algorithms with stationary β-mixing observations. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 819–826. MIT Press, Cambridge, MA, 2006.
[8] R. Meir. Nonparametric time series prediction through adaptive model selection. Mach. Learn., 39:5–34, 2000.
[9] D. S. Modha and E. Masry. Minimum complexity regression estimation with weakly dependent observations. IEEE Trans. Inform. Theory, 42:2133–2145, 1996.
[10] M. Mohri and A. Rostamizadeh. Stability bounds for non-i.i.d. processes. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1025–1032. MIT Press, Cambridge, MA, 2008.
[11] M. Mohri and A. Rostamizadeh. Rademacher complexity bounds for non-i.i.d. processes. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 1097–1104. 2009.
[12] I. Steinwart. Two oracle inequalities for regularized boosting classifiers. Statistics and Its Interface, 2:271–284, 2009.
[13] I. Steinwart and A. Christmann. Support Vector Machines. Springer, New York, 2008.
[14] I. Steinwart and A. Christmann. Estimating conditional quantiles with the help of the pinball loss. Bernoulli, accepted with minor revision.
[15] I. Steinwart, D. Hush, and C. Scovel. Learning from dependent observations. J. Multivariate Anal., 100:175–194, 2009.
[16] I. Steinwart, D. Hush, and C. Scovel. Optimal rates for regularized least squares regression. In S. Dasgupta and A. Klivans, editors, Proceedings of the 22nd Annual Conference on Learning Theory, pages 79–93. 2009.
[17] H. Sun and Q. Wu. Regularized least square regression with dependent samples. Adv. Comput. Math., to appear.
[18] M. Vidyasagar. A Theory of Learning and Generalization: With Applications to Neural Networks and Control Systems. Springer, London, 2nd edition, 2003.
[19] Y.-L. Xu and D.-R. Chen. Learning rates of regularized regression for exponentially strongly mixing sequence. J. Statist. Plann. Inference, 138:2180–2189, 2008.
[20] B. Yu. Rates of convergence for empirical processes of stationary mixing sequences. Ann. Probab., 22:94–116, 1994.
[21] B. Zou and L. Li. The performance bounds of learning machines based on exponentially strongly mixing sequences. Comput. Math. Appl., 53:1050–1058, 2007.
Construction of Nonparametric Bayesian Models
from Parametric Bayes Equations
Peter Orbanz
University of Cambridge and ETH Zurich
[email protected]
Abstract
We consider the general problem of constructing nonparametric Bayesian models
on infinite-dimensional random objects, such as functions, infinite graphs or infinite permutations. The problem has generated much interest in machine learning,
where it is treated heuristically, but has not been studied in full generality in nonparametric Bayesian statistics, which tends to focus on models over probability
distributions. Our approach applies a standard tool of stochastic process theory,
the construction of stochastic processes from their finite-dimensional marginal
distributions. The main contribution of the paper is a generalization of the classic
Kolmogorov extension theorem to conditional probabilities. This extension allows
a rigorous construction of nonparametric Bayesian models from systems of finite-dimensional, parametric Bayes equations. Using this approach, we show (i) how
existence of a conjugate posterior for the nonparametric model can be guaranteed
by choosing conjugate finite-dimensional models in the construction, (ii) how the
mapping to the posterior parameters of the nonparametric model can be explicitly
determined, and (iii) that the construction of conjugate models in essence requires
the finite-dimensional models to be in the exponential family. As an application
of our constructive framework, we derive a model on infinite permutations, the
nonparametric Bayesian analogue of a model recently proposed for the analysis
of rank data.
1 Introduction
Nonparametric Bayesian models are now widely used in machine learning. Common models, in
particular the Gaussian process (GP) and the Dirichlet process (DP), were originally imported from
statistics, but the nonparametric Bayesian idea has since been adapted to the needs of machine
learning. As a result, the scope of Bayesian nonparametrics has expanded significantly: Whereas
traditional nonparametric Bayesian statistics mostly focuses on models on probability distributions,
machine learning researchers are interested in a variety of infinite-dimensional objects, such as functions, kernels, or infinite graphs. Initially, existing DP and GP approaches were modified and combined to derive new models, including the Infinite Hidden Markov Model [2] or the Hierarchical
Dirichlet Process [15]. More recently, novel stochastic process models have been defined from
scratch, such as the Indian Buffet Process (IBP) [8] and the Mondrian Process [13]. This paper
studies the construction of new nonparametric Bayesian models from finite-dimensional distributions: To construct a model on a given type of infinite-dimensional object (for example, an infinite
graph), we start out from available probability models on the finite-dimensional counterparts (probability models on finite graphs), and translate them into a model on infinite-dimensional objects
using methods of stochastic process theory. We then ask whether interesting statistical properties of
the finite-dimensional models used in the constructions, such as conjugacy of priors and posteriors,
carry over to the stochastic process model.
In general, the term nonparametric Bayesian model refers to a Bayesian model on an infinite-dimensional parameter space. Unlike parametric models, for which the number of parameters is
constantly bounded w.r.t. sample size, nonparametric models allow the number of parameters to
grow with the number of observations. To accommodate a variable and asymptotically unbounded
number of parameters within a single parameter space, the dimension of the space has to be infinite,
and nonparametric models can be defined as statistical models with infinite-dimensional parameter
spaces [17]. For a given sample of finite size, the model will typically select a finite subset of the
available parameters to explain the observations. A Bayesian nonparametric model places a prior
distribution on the infinite-dimensional parameter space.
Many nonparametric Bayesian models are defined in terms of their finite-dimensional marginals:
For example, the Gaussian process and Dirichlet process are characterized by the fact that their
finite-dimensional marginals are, respectively, Gaussian and Dirichlet distributions [11, 5]. The
probability-theoretic construction result underlying such definitions is the Kolmogorov extension
theorem [1], described in Sec. 2 below. In stochastic process theory, the theorem is used to study
the properties of a process in terms of its marginals, and hence by studying the properties of finitedimensional distributions. Can the statistical properties of a nonparametric Bayesian model, i.e. of
a parameterized family of distributions, be treated in a similar manner, by considering the model?s
marginals? For example, can a nonparametric Bayesian model be guaranteed to be conjugate if
the marginals used in its construction are conjugate? Techniques such as the Kolmogorov theorem construct individual distributions, whereas statistical properties are properties of parameterized
families of distributions. In Bayesian estimation, such families take the form of conditional probabilities. The treatment of the statistical properties of nonparametric Bayesian models in terms of
finite-dimensional Bayes equations therefore requires an extension result similar to the Kolmogorov
theorem that is applicable to conditional distributions. The main contribution of this paper is to
provide such a result.
We present an analogue of the Kolmogorov theorem for conditional probabilities, which permits the
direct construction of conditional stochastic process models on countable-dimensional spaces from
finite-dimensional conditional probabilities. Application to conjugate models shows how a conjugate nonparametric Bayesian model can be constructed from conjugate finite-dimensional Bayes
equations, including the mapping to the posterior parameters. The converse is also true: To construct a conjugate nonparametric Bayesian model, the finite-dimensional models used in the construction all have to be conjugate. The construction of stochastic process models from exponential
family marginals is almost generic: The model is completely described by the mapping to the posterior parameters, which has a generic form as a function of the infinite-dimensional counterpart of the
model's sufficient statistic. We discuss how existing models fit into the framework, and derive the
nonparametric Bayesian version of a model on infinite permutations suggested by [9]. By essentially
providing a construction recipe for conjugate models of countable dimension, our theoretical results
have clear practical implications for the derivation of novel nonparametric Bayesian models.
2 Formal Setup and Notation
Infinite-dimensional probability models cannot generally be described with densities and therefore
require some basic notions of measure-theoretic probability. In this paper, required concepts will
be measures on product spaces and abstract conditional probabilities (see e.g. [3] or [1] for general
introductions). Randomness is described by means of an abstract probability space (Ω, A, P). Here,
Ω is a space of points ω, which represent atomic random events, A is a σ-algebra of events on Ω,
and P a probability measure defined on the σ-algebra. A random variable is a measurable mapping
from Ω into some space of observed values, such as X : Ω → Ω_x. The distribution of X is the
image measure P_X := X(P) = P ∘ X^{-1}. Roughly speaking, the events ω ∈ Ω represent abstract
states of nature, i.e. knowing the value of ω completely describes all probabilistic aspects of the
model universe, and all random aspects are described by the probability measure P. However, Ω, A
and P are never known explicitly, but rather constitute the modeling assumption that any explicitly
known distribution P_X is derived from one and the same probability measure P through some random
variable X.
Multiple dimensions of random variables are formalized by product spaces. We will generally deal
with an infinite-dimensional space such as Ω_x^E, where E is an infinite index set and Ω_x^E is the E-fold product of Ω_x with itself. The set of finite subsets of E will be denoted F(E), such that
Ω_x^I with I ∈ F(E) is a finite-dimensional subspace of Ω_x^E. Each product space Ω_x^I is equipped
with the product Borel σ-algebra B_x^I. Random variables with values on these spaces have product
structure, such as X^I = ⊗_{i∈I} X^{i}. Note that this does not imply that the corresponding measure
P_X^I := X^I(P) is a product measure; the individual components of X^I may be dependent. The
elements of the infinite-dimensional product space Ω_x^E can be thought of as functions of the form
E → Ω_x. For example, the space ℝ^ℝ contains all real-valued functions on the line.
Product spaces Ω_x^I and Ω_x^J of different dimensions are linked by a projection operator π^J_I, which
restricts a vector x^J ∈ Ω_x^J to x^I, the subset of entries of x^J that are indexed by I ⊂ J. For a set
A^I ⊂ Ω_x^I, the preimage (π^J_I)^{-1} A^I under projection is called a cylinder set with base A^I. The projection
operator can be applied to measures as [π^J_I P_X^J] := P_X^J ∘ (π^J_I)^{-1}, so for an I-dimensional event A^I ∈ B_x^I,
we have [π^J_I P_X^J](A^I) = P_X^J((π^J_I)^{-1} A^I). In other words, a probability is assigned to the I-dimensional
set A^I by applying the J-dimensional measure P_X^J to the cylinder with base A^I. The projection
of a measure is just its marginal, that is, [π^J_I P_X^J] is the marginal of the measure P_X^J on the lower-dimensional subspace Ω_x^I.
We denote observation variables (data) by X^I, parameters by Θ^I and hyperparameters by Λ^I. The
corresponding measures and spaces are indexed accordingly, as P_X, P_Θ, Ω_θ etc. The likelihoods and
posteriors that occur in Bayesian estimation are conditional probability distributions. Since densities
are not generally applicable in infinite-dimensional spaces, the formulation of Bayesian models on
such spaces draws on the abstract conditional probabilities of measure-theoretic probability, which
are derived from Kolmogorov's implicit formulation of conditional expectations [3]. We will write
e.g. P_X(X|Θ) for the conditional probability of X given Θ. For the reader familiar with the theory,
we note that all spaces considered here are Borel spaces, such that regular versions of conditionals
always exist, and we hence assume all conditionals to be regular conditional probabilities (Markov
kernels). Introducing abstract conditional probabilities here is far beyond the possible scope of this
paper. A reader not familiar with the theory should simply read P_X(X|Θ) as a conditional distribution, but take into account that these abstract objects are only uniquely defined almost everywhere.
That is, the probability P_X(X|Θ = θ) can be changed arbitrarily for those values of θ within some
set of exceptions, provided that this set has measure zero.
most of our results, this fact is the principal reason that limits the results to countable dimensions.
Example: GP. Assume that P_X^E(X^E | Θ^E) is to represent a Gaussian process with fixed covariance
function. Then X^E is function-valued, and if for example E := ℝ₊ and Ω_x := ℝ, the product space
Ω_x^E = ℝ^{ℝ₊} contains all functions x^E of the form x^E : ℝ₊ → ℝ. Each axis label i ∈ E in the product
space is a point on the real line, and a finite index set I ∈ F(E) is a finite collection of points I =
(i_1, ..., i_m). The projection π^E_I x^E of a function in Ω_x^E is then the vector x^I := (x^E(i_1), ..., x^E(i_m))
of function values at the points in I. The parameter variable Θ^E represents the mean function of the
process, and so we would choose Ω_θ^E := Ω_x^E = ℝ^{ℝ₊}.
Example: DP. If P_X^E(X^E | Θ^E) is a Dirichlet process, the variable X^E takes values x^E in the set of
probability measures over a given domain, such as ℝ. A probability measure on ℝ (with its Borel
algebra B(ℝ)) is in particular a set function B(ℝ) → [0, 1], so we could choose E = B(ℝ) and
Ω_x = [0, 1]. The parameters of a Dirichlet process DP(α, G₀) are a scalar concentration parameter
α ∈ ℝ₊, and a probability measure G₀ with the same domain as the randomly drawn measure x^E.
The parameter space would therefore be chosen as ℝ₊ × [0, 1]^{B(ℝ)}.
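As a concrete, non-paper illustration of the GP example, the following Python sketch (all function names are ours) builds the finite-dimensional Gaussian marginals from a covariance function and checks that projecting a J-marginal to a subset I ⊂ J yields exactly the marginal constructed on I directly, which is the projectivity underlying the GP construction:

```python
import math

def sq_exp_kernel(s, t, length=1.0):
    """Squared-exponential covariance function k(s, t)."""
    return math.exp(-((s - t) ** 2) / (2.0 * length ** 2))

def gp_marginal(points, mean_fn, kernel):
    """Finite-dimensional GP marginal at `points`: the mean vector and
    covariance matrix parameterizing P_X^I."""
    mu = [mean_fn(t) for t in points]
    cov = [[kernel(s, t) for t in points] for s in points]
    return mu, cov

# A finite index set J and a subset I of J (points on the real line).
J = [0.0, 0.5, 1.3, 2.0]
I = [0.5, 2.0]

mu_J, cov_J = gp_marginal(J, math.sin, sq_exp_kernel)

# Projecting the J-marginal to I: drop the entries not indexed by I.
idx = [J.index(t) for t in I]
mu_proj = [mu_J[i] for i in idx]
cov_proj = [[cov_J[i][j] for j in idx] for i in idx]

# Projectivity: the projected marginal equals the marginal built on I.
mu_I, cov_I = gp_marginal(I, math.sin, sq_exp_kernel)
assert mu_proj == mu_I and cov_proj == cov_I
```

That marginalizing a multivariate Gaussian amounts to dropping entries of the mean and covariance is exactly why Gaussian marginals form a projective family in the sense of Sec. 2.1.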
2.1 Construction of Stochastic Processes from their Marginals
Suppose that the members of a family P_X^I of probability measures are the finite-dimensional marginals of an infinite-dimensional measure P_X^E (a "stochastic process"). Each measure P_X^I lives on the finite-dimensional subspace Ω_x^I of Ω_x^E. As marginals of one and the same measure, the measures must be marginals of each other as well:

P_X^I = P_X^J ∘ (π^J_I)^{-1}   whenever I ⊂ J .   (1)
Any family of probability measures satisfying (1) is called a projective family. The marginals of
a stochastic process measure are always projective. A famous theorem by Kolmogorov states that
the converse is also true: Any projective family on the finite-dimensional subspaces of an infinite-dimensional product space Ω_x^E uniquely defines a stochastic process on the space Ω_x^E [1]. The only
assumption required is that the "axes" Ω_x of the product space are so-called Polish spaces, i.e.
topological spaces that are complete, separable and metrizable. Examples include Euclidean spaces,
separable Banach or Hilbert spaces, countable discrete spaces, and countable products of spaces that
are themselves Polish.
Theorem 1 (Kolmogorov Extension Theorem). Let E be an arbitrary infinite set. Let Ω_x be a Polish
space, and let {P_X^I | I ∈ F(E)} be a family of probability measures on the spaces (Ω_x^I, B_x^I). If the
family is projective, there exists a uniquely defined probability measure P_X^E on Ω_x^E with the measures
P_X^I as its marginals.
The infinite-dimensional measure P_X^E constructed in Theorem 1 is called the projective limit of the
family P_X^I. Intuitively, the theorem is a regularity result: The marginals determine the values of
P_X^E on a subset of events (namely on those events involving only a finite subset of the random
variables, which are just the cylinder sets with finite-dimensional base). The theorem then states that
a probability measure is such a regular object that knowledge of these values determines the measure
completely, in a similar manner as continuous functions on the line are completely determined by
their values on a countable dense subset. The statement of the Kolmogorov theorem is deceptive in
its generality: It holds for any index set E, but if E is not countable, the constructed measure P_X^E is
essentially useless, even though the theorem still holds and the measure is still uniquely defined.
The problem is that the measure P_X^E, as a set function, is not defined on the space Ω_x^E, but on the
σ-algebra B_x^E (the product σ-algebra on Ω_x^E). If E is uncountable, this σ-algebra is too coarse to
resolve events of interest¹. In particular, it does not contain the singletons (one-point sets), such that
the measure P_X^E is incapable of assigning a probability to an event of the form {X^E = x^E}.
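The projector and the projectivity condition (1) can be made concrete on a toy finite space. The following Python sketch (ours, not part of the paper) computes marginals via cylinder-set masses and checks consistency across nested index sets:

```python
# A toy measure P^J on the finite product space {0,1}^J, J = (0, 1, 2),
# specified by a probability for each configuration (not a product measure).
P_J = {(0, 0, 0): 0.10, (0, 0, 1): 0.05, (0, 1, 0): 0.20, (0, 1, 1): 0.05,
       (1, 0, 0): 0.10, (1, 0, 1): 0.10, (1, 1, 0): 0.15, (1, 1, 1): 0.25}
J = (0, 1, 2)

def project(P, src, dst):
    """Marginal [pi P]: the mass of each dst-configuration is the total
    mass of its cylinder set, i.e. of all src-configurations extending it."""
    pos = [src.index(i) for i in dst]
    out = {}
    for x, p in P.items():
        key = tuple(x[k] for k in pos)
        out[key] = out.get(key, 0.0) + p
    return out

# Consistency in the sense of Eq. (1): projecting J -> I directly agrees
# with going through an intermediate index set K, with I contained in K.
K, I = (0, 2), (0,)
P_I_direct = project(P_J, J, I)
P_I_via_K = project(project(P_J, J, K), K, I)

def close(P, Q, tol=1e-9):
    return set(P) == set(Q) and all(abs(P[k] - Q[k]) < tol for k in P)

assert close(P_I_direct, P_I_via_K)
assert abs(sum(P_I_direct.values()) - 1.0) < 1e-9
```

On finite spaces this consistency is automatic; the content of Theorem 1 is that, for Polish coordinate spaces, it already pins down a unique measure in infinitely many dimensions.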
3 Extension of Conditional and Bayesian Models
According to the Kolmogorov extension theorem, the properties of a stochastic process can be analyzed by studying its marginals. Can we, analogously, use a set of finite-dimensional Bayes equations to represent a nonparametric Bayesian model? The components of a Bayesian model are conditional distributions. Even though these conditionals are probability measures for (almost) each value
of the condition variable, the Kolmogorov theorem cannot simply be applied to extend conditional
models: Conditional probabilities are functions of two arguments, and have to satisfy a measurability requirement in the second argument (the condition). Application of the extension theorem to each
value of the condition need not yield a proper conditional distribution on the infinite-dimensional
space, as it disregards the properties of the second argument. But since the second argument takes
the role of a parameter in statistical estimation, these properties determine the statistical properties of
the model, such as sufficiency, identifiability, or conjugacy. In order to analyze the properties of an
infinite-dimensional Bayesian model in terms of finite-dimensional marginals, we need a theorem
that establishes a correspondence between the finite-dimensional and infinite-dimensional conditional distributions. Though a number of extension theorems based on conditional distributions is
available in the literature, these results focus on the construction of sequential stochastic processes
from a sequence of conditionals (see [10] for an overview). Theorem 2 below provides a result that,
like the Kolmogorov theorem, is applicable on product spaces.
To formulate the result, the projector used to define the marginals has to be generalized from measures to conditionals. The natural way to do so is the following: If P_X^J(X^J | Θ^J) is a conditional
probability on the product space Ω_x^J, and I ⊂ J, define

[π^J_I P_X^J]( · | Θ^J) := P_X^J((π^J_I)^{-1} · | Θ^J) .   (2)
This definition is consistent with that of the projector above, in the sense that it coincides with the
standard projector applied to the measure P_X^J( · | Θ^J = θ^J) for any fixed value θ^J of the parameter. As
with projective families of measures, we then define projective families of conditional probabilities.
Definition 1 (Conditionally Projective Probability Models). Let P_X^I(X^I | Θ^I) be a family of regular conditional probabilities on product spaces Ω_x^I, for all I ∈ F(E). The family will be called
conditionally projective if [π^J_I P_X^J]( · | Θ^J) =_{a.e.} P_X^I( · | Θ^I) whenever I ⊂ J.
As conditional probabilities are unique almost everywhere, the equality is only required to hold almost everywhere as well. In the jargon of abstract conditional probabilities, the definition requires
that P_X^I( · | Θ^I) is a version of the projection of P_X^J( · | Θ^J). Theorem 2 states that a conditional probability on a countably-dimensional product space is uniquely defined (up to a.e.-equivalence) by a
conditionally projective family of marginals. In particular, if we can define a parametric model on
each finite-dimensional space Ω_x^I for I ∈ F(E) such that these models are conditionally projective,
the models determine an infinite-dimensional parametric model (a "nonparametric" model) on the
overall space Ω_x^E.

¹ This problem is unfortunately often neglected in the statistics literature, and measures in uncountable
dimensions are "constructed" by means of the extension theorem (such as in the original paper [5] on the
Dirichlet process). See e.g. [1] for theoretical background, and [7] for a rigorous construction of the DP.
Theorem 2 (Extension of Conditional Probabilities). Let E be a countable index set. Let P_X^I(X^I | Θ^I)
be a family of regular conditional probabilities on the product space Ω_x^I. Then if the family is
conditionally projective, there exists a regular conditional probability P_X^E(X^E | C^E) on the infinite-dimensional space Ω_x^E with the P_X^I(X^I | Θ^I) as its conditional marginals. P_X^E(X^E | C^E) is measurable
with respect to the σ-algebra C^E := σ(∪_{I∈F(E)} σ(Θ^I)). In particular, if the parameter variables
satisfy π^J_I Θ^J = Θ^I, then P_X^E(X^E | C^E) can be interpreted as the conditional probability P_X^E(X^E | Θ^E)
with Θ^E := ⊗_{i∈E} Θ^{i}.
Proof Sketch². We first apply the Kolmogorov theorem separately for each setting of the parameters
that makes the measures P_X^I(X^I | Θ^I = θ^I) projective. For any given ω ∈ Ω (the abstract probability
space), projectiveness holds if θ^I = Θ^I(ω) for all I ∈ F(E). However, for any conditionally
projective family, there is a set N ⊂ Ω of possible exceptions (for which projectiveness need not
hold), due to the fact that conditional probabilities and conditional projections are only unique almost
everywhere. Using the countability of the dimension set E, we can argue that N is always a null set;
the resulting set of constructed infinite-dimensional measures is still a valid candidate for a regular
conditional probability. We then show that if this set of measures is assembled into a function of the
parameter, it satisfies the measurability conditions of a regular conditional probability: We first use
the properties of the marginals to show measurability on the subset of events which are preimages
under projection of finite-dimensional events (the cylinder sets), and then use the π-λ theorem [3]
to extend measurability to all events.
4 Conjugacy
The posterior of a Dirichlet process is again a Dirichlet process, and the posterior parameters can be
computed as a function of the data and the prior parameters. This property is known as conjugacy,
in analogy to conjugacy in parametric Bayesian models, and makes Dirichlet process inference
tractable. Virtually all known nonparametric Bayesian models, including Gaussian processes, Pólya
trees, and neutral-to-the-right processes are conjugate [16]. In the Bayesian and exponential family
literature, conjugacy is often defined as "closure under sampling", i.e. for a given likelihood and a
given class of priors, the posterior is again an element of the prior class [12]. This definition does not
imply tractability of the posterior: In particular, the set of all probability measures (used as priors)
is conjugate for any possible likelihood, but obviously this does not facilitate computation of the
posterior. In the following, we call a prior and a likelihood of a Bayesian model conjugate if the
posterior (i) is parameterized and (ii) there is a measurable mapping T from the data x and the prior
parameter λ to the parameter λ′ = T(x, λ) which specifies the corresponding posterior. In the
definition below, the conditional probability k represents the parametric form of the posterior. The
definition is applicable to "nonparametric" models, in which case the parameter simply becomes
infinite-dimensional.
Definition 2 (Conjugacy and Posterior Index). Let P_X(X|Θ) and P_Θ(Θ|Λ) be regular conditional
probabilities. Let P_Θ(Θ|X, Λ) be the posterior of the model P_X(X|Θ) under prior P_Θ(Θ|Λ). Model
and prior are called conjugate if there exists a regular conditional probability k : B_θ × Ω_t → [0, 1],
parameterized on a measurable Polish space (Ω_t, B_t), and a measurable map T : Ω_x × Ω_λ → Ω_t,
such that

P_Θ(A | X = x, Λ = λ) = k(A, T(x, λ))   for all A ∈ B_θ .   (3)

The mapping T is called the posterior index of the model.
The definition becomes trivial for Ω_t = Ω_x × Ω_λ and T chosen as the identity mapping; it is meaningful if T is reasonably simple to evaluate, and its complexity does not increase with sample size.
Theorem 3 below shows that, under suitable conditions, the structure of the posterior index carries
over to the projective limit model: If the finite-dimensional marginals admit a tractable posterior
index, then so does the projective limit model.

² Complete proofs for both theorems in this paper are provided as supplementary material.
Example. (Posterior Indices in Exponential Families) Suppose that P_X(X|Θ) is an exponential
family model with sufficient statistic S and density p(x|θ) = exp(⟨S(x), θ⟩ − φ(x) − ψ(θ)). Choose
P_Θ(Θ|Λ) as the "natural conjugate prior" with parameters λ = (α, y). Its density, w.r.t. a suitable
measure ν_θ on the parameter space, is of the form q(θ|α, y) = K(α, y)^{-1} exp(⟨θ, y⟩ − α ψ(θ)). The
posterior P_Θ(Θ|X, Λ) is conjugate in the sense of Def. 2, and its density is q(θ|α + 1, y + S(x)).
The probability kernel k is given by k(A, (t₁, t₂)) := ∫_A q(θ|t₁, t₂) dν_θ(θ), and the posterior index
is T(x, (α, y)) := (α + 1, y + S(x)).
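The posterior index of this example can be exercised directly. The sketch below is ours; the Bernoulli sufficient statistic S(x) = x and the hyperparameter values are illustrative. It folds observations into the natural-conjugate parameters one at a time, with per-update cost independent of sample size:

```python
def posterior_index(S):
    """Posterior index T(x, (a, y)) = (a + 1, y + S(x)) of an exponential
    family with sufficient statistic S under its natural conjugate prior."""
    def T(x, lam):
        a, y = lam
        return (a + 1, y + S(x))
    return T

# Bernoulli likelihood in natural form: sufficient statistic S(x) = x.
T = posterior_index(lambda x: x)

# Starting from (hypothetical) prior hyperparameters, fold in observations.
lam = (2.0, 1.0)
data = [1, 0, 1, 1, 0, 1]
for x in data:
    lam = T(x, lam)

# After n observations: a = a0 + n, y = y0 + sum of sufficient statistics.
assert lam == (2.0 + len(data), 1.0 + sum(data))
```

Because the update is additive, the order of the observations is irrelevant, which is the same structural fact that later lets the posterior indices commute with projections.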
The main result of this section is Theorem 3, which explains how conjugacy carries over from
the finite-dimensional to the infinite-dimensional case, and vice versa. Both extension theorems
discussed so far require a projection condition on the measures and models involved. A similar
condition is now required for the mappings T^I: The preimages (T^I)^{-1} of the posterior indices T^I must
commute with the preimage under projection,

(π^E_I ∘ T^E)^{-1} = (T^I ∘ π^E_I)^{-1}   for all I ∈ F(E) .   (4)
The posterior indices of all well-known exponential family models, such as Gaussians and Dirichlets, satisfy this condition. The following theorem states that (i) stochastic process Bayesian models
that are constructed from conjugate marginals are conjugate if the projection equation (4) is satisfied,
and that (ii) such conjugate models can only be constructed from conjugate marginals.
Theorem 3 (Functional Conjugacy of Projective Limit Models). Let E be a countable index set
and Ω_x^E and Ω_θ^E be Polish product spaces. Assume that there is a Bayesian model on each finite-dimensional subspace Ω_x^I, such that the families of all priors, all observation models and all posteriors are conditionally projective. Let P_Θ^E(Θ^E), P_X^E(X^E|Θ^E) and P_Θ^E(Θ^E|X^E) denote the respective
projective limits. Then P_Θ^E(Θ^E|X^E) is a posterior for the infinite-dimensional Bayesian model defined by P_X^E(X^E|Θ^E) with prior P_Θ^E(Θ^E), and the following holds:
(i) Assume that each finite-dimensional posterior P_Θ^I(Θ^I|X^I) is conjugate w.r.t. its respective
Bayesian model, with posterior index T^I and probability kernel k^I. Then if there is a measurable mapping T : Ω_x^E → Ω_t^E satisfying the projection condition (4), the projective limit
posterior P_Θ^E(Θ^E|X^E) is conjugate with posterior index T.
(ii) Conversely, if the infinite-dimensional posterior P_Θ^E(Θ^E|X^E) is conjugate with posterior
index T^E and probability kernel k^E, then each marginal posterior P_Θ^I(Θ^I|X^I) is conjugate,
with posterior index T^I := π^E_I ∘ T^E ∘ (π^E_I)^{-1}. The corresponding probability kernels k^I are
given by

k^I(A^I, t^I) := k^E((π^E_I)^{-1} A^I, t)   for any t ∈ (π^E_I)^{-1} t^I .   (5)
The theorem is not stated here in full generality, but under two simplifying assumptions: We have
omitted the use of hyperparameters, such that the posterior indices depend only on the data, and all
involved spaces (observation space, parameter space etc) are assumed to have the same dimension
for each Bayesian model. Generalizing the theorem beyond both assumptions is technically not difficult, but the additional parameters and notation for book-keeping on dimensions reduce readability.
Proof Sketch². Part (i): We define a candidate for the probability kernel k^E representing the projective limit posterior, and then verify that it makes the model conjugate when combined with the mapping T given by assumption. To do so, we first construct the conditional probabilities P_Θ^I(Θ^I | T^I),
show that they form a conditionally projective family, and take their conditional projective limit
using Theorem 2. This projective limit is used as a candidate for k^E. To show that k^E indeed represents the posterior, we show that the two coincide on the cylinder sets (events which are preimages
under projection of finite-dimensional events). From this, equality for all events follows by the
Caratheodory theorem [1].
Part (ii): We only have to verify that the mappings T^I and probability kernels k^I indeed satisfy the
definition of conjugacy, which is a straightforward computation.
5 Construction of Nonparametric Bayesian Models
Theorem 3(ii) states that conjugate models have conjugate marginals. Since, in the finite-dimensional case, conjugate Bayesian models are essentially limited to exponential families and
their natural conjugate priors³, a consequence of the theorem is that we can only expect a nonparametric Bayesian model to be conjugate if it is constructed from exponential family marginals,
assuming that the construction is based on a product space approach.
When an exponential family model and its conjugate prior are used in the construction, the form
of the resulting model becomes generic: The posterior index T of a conjugate exponential family Bayesian model is always given by the sufficient statistic S in the form T(x, (α, y)) :=
(α + 1, y + S(x)). Addition commutes with projection, and hence the posterior indices T^I of a
family of such models over all dimensions I ∈ F(E) satisfy the projection condition (4) if and
only if the same condition is satisfied by the sufficient statistics S^I of the marginals. Accordingly, the infinite-dimensional posterior index T^E in Theorem 3 exists if and only if there is an
infinite-dimensional "extension" S^E of the sufficient statistics S^I satisfying (4). If that is the case,
T^E(x^E, (α, y^E)) := (α + 1, y^E + S^E(x^E)) is a posterior index for the infinite-dimensional projective
limit model. In the case of countable dimensions, Theorem 3 therefore implies a construction recipe
for nonparametric Bayesian models from exponential family marginals; constructing the model boils
down to checking whether the models selected as finite-dimensional marginals are conditionally
projective, and whether the sufficient statistics satisfy the projection condition. An example construction, for a model on infinite permutations, is given below. The following table summarizes
some stochastic process models from the conjugate extension point of view:
some stochastic process models from the conjugate extension point of view:
Marginals (d-dim)
Bernoulli/Beta
Multin./Dirichlet
Gaussian/Gaussian
Mallows/conjugate
Projective limit model
Beta process; IBP
DP; CRP
GP/GP
Example below
Observations (limit)
Binary arrays
Discrete distributions
(continuous) functions
Bijections N ? N
A Construction Example. The analysis of preference data, in which preferences are represented
as permutations, has motivated the definition of distributions on permutations of an infinite number
of items [9]. A finite permutation on r items always implies a question such as ?rank your favorite
movies out of r movies?. A nonparametric approach can generalize the question to ?rank your
favorite movies?. Meila and Bao [9] derived a model on infinite permutations, that is, on bijections of
the set N. We construct a nonparametric Bayesian model on bijections, with a likelihood component
P_X^E(X^E | Θ^E) equivalent to the model of Meila and Bao.
Choice of marginals. The finite-dimensional marginals are probability models of rankings of a finite number of items, introduced by Fligner and Verducci [6]. For permutations $\pi \in \mathbb{S}_r$ of length $r$, the model is defined by the exponential family density $p(\pi|\sigma,\theta) := Z(\theta)^{-1}\exp(\langle S(\pi\sigma^{-1}), \theta\rangle)$, where the sufficient statistic is the vector $S^r(\pi) := (S_1(\pi), \dots, S_r(\pi))$ with components
$$S_j(\pi) := \sum_{l=j+1}^{r} \mathbb{I}\{\pi^{-1}(j) > \pi^{-1}(l)\}.$$
Roughly speaking, the model is a location-scale model, and the permutation $\sigma$ defines the distribution's mean. If all entries of $\theta$ are chosen identical as some constant, this constant acts as a concentration parameter, and the scalar product is equivalent to the Kendall metric on permutations. This metric measures distance between permutations as the minimum number of adjacent transpositions (i.e. swaps of neighboring entries) required to transform one permutation into the other. If the entries of $\theta$ differ, they can be regarded as weights specifying the relevance of each position in the ranking [6].
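The sufficient statistic and its link to the Kendall metric are easy to check numerically. The sketch below (plain Python, 0-indexed permutations, names chosen for illustration) computes $S_j(\pi)$ and verifies that its entries sum to the inversion count, i.e. the Kendall distance from $\pi$ to the identity:

```python
def suff_stat(perm):
    """S_j(pi) = #{l > j : pi^{-1}(j) > pi^{-1}(l)} for pi on {0, ..., r-1}.

    perm[i] = pi(i); inv[j] = pi^{-1}(j)."""
    r = len(perm)
    inv = [0] * r
    for i, j in enumerate(perm):
        inv[j] = i
    return [sum(inv[j] > inv[l] for l in range(j + 1, r)) for j in range(r)]

def kendall_to_identity(perm):
    """Inversion count: minimum number of adjacent transpositions to sort perm."""
    r = len(perm)
    return sum(perm[i] > perm[l] for i in range(r) for l in range(i + 1, r))

pi = [2, 0, 3, 1]   # pi(0) = 2, pi(1) = 0, pi(2) = 3, pi(3) = 1
S = suff_stat(pi)
print(S, sum(S), kendall_to_identity(pi))  # -> [1, 2, 0, 0] 3 3
```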
Definition of marginals. In the product space context, each finite set $I \in F(E)$ of axis labels is a set of items to be permuted, and the marginal $P_\pi^I(\pi^I|\sigma^I, \theta^I)$ is a model on the corresponding finite permutation group $\mathbb{S}_I$ on the elements of $I$. The sufficient statistic $S^I$ maps each permutation to a vector of integers, and thus embeds the group $\mathbb{S}_I$ into $\mathbb{R}^I$. The mapping is one-to-one [6]. Projections, i.e. restrictions, on the group mean deletion of elements. A permutation $\pi^J$ is restricted to a subset $I \subset J$ of indices by deleting all items indexed by $J \setminus I$, producing the restriction $\pi^J|_I$. We overload notation and write $f_{JI}$ for both the restriction in the group $\mathbb{S}_I$ and the axes-parallel projection in the Euclidean space $\mathbb{R}^I$, into which the sufficient statistic $S^I$ embeds $\mathbb{S}_I$. It follows from the definition of $S^I$ that, whenever $f_{JI}\pi^J = \pi^I$, then $f_{JI}S^J(\pi^J) = S^I(\pi^I)$. In other words, $f_{JI} \circ S^J = S^I \circ f_{JI}$, which is a stronger form of the projection condition $S^{J,-1} \circ f_{JI}^{-1} = f_{JI}^{-1} \circ S^{I,-1}$ given in Eq. 4. We will define a nonparametric Bayesian model that puts a prior on the infinite-dimensional analogue
³ Mixtures of conjugate priors are conjugate in the sense of closure under sampling [4], but the posterior index in Def. 2 has to be evaluated for each mixture component individually. An example of a conjugate model not in the exponential family is the uniform distribution on $[0, \theta]$ with a Pareto prior [12].
of $\theta$, i.e. on the weight function $\theta^E$. For $I \in F(\mathbb{N})$, the marginal of the likelihood component is given by the density $p^I(\pi^I|\sigma^I, \theta^I) := Z^I(\theta^I)^{-1}\exp(\langle S^I(\pi^I(\sigma^I)^{-1}), \theta^I\rangle)$. The corresponding natural conjugate prior on $\theta^I$ has density $q^I(\theta^I|\lambda, y^I) \propto \exp(\langle \theta^I, y^I\rangle - \lambda \log Z^I(\theta^I))$. Since the model is an exponential family model, the posterior index is of the form $T^I((\lambda, y^I), \pi^I) = (\lambda + 1, y^I + S^I(\pi^I))$, and since $S^I$ is projective in the sense of Eq. 4, so is $T^I$. The prior and likelihood densities above define two families $P^I(X^I|\theta^I)$ and $P^I(\theta^I|\lambda)$ of measures over all finite dimensions $I \in F(E)$. It is reasonably straightforward to show that both families are conditionally projective, and so is the family of the corresponding posteriors. Each therefore has a projective limit, and the projective limit of the posteriors is the posterior of the projective limit $P^E(X^E|\theta^E)$ under prior $P^E(\theta^E)$.
Posterior index. The posterior index of the infinite-dimensional model can be derived by means of Theorem 3: To get rid of the hyperparameters, we first fix a value $\tau^E := (\lambda, y^E)$ of the infinite-dimensional hyperparameter, and only consider the corresponding infinite-dimensional prior $P_\theta^E(\theta^E|\Theta^E = \tau^E)$, with its marginals $P_\theta^I(\theta^I|\Theta^I = f_{EI}\tau^E)$. Now define a function $S^E$ on the bijections of $\mathbb{N}$ as follows. For each bijection $\pi : \mathbb{N} \to \mathbb{N}$, and each $j \in \mathbb{N}$, set
$$S_j^E(\pi) := \sum_{l=j+1}^{\infty} \mathbb{I}\{\pi^{-1}(j) > \pi^{-1}(l)\}.$$
Since $\pi^{-1}(j)$ is a finite number for any $j \in \mathbb{N}$, the indicator function is non-zero only for a finite number of indices $l$, such that the entries of $S^E$ are always finite. Then $S^E$ satisfies the projection condition $S^{E,-1} \circ f_{EI}^{-1} = f_{EI}^{-1} \circ S^{I,-1}$ for all $I \in F(E)$. As candidate posterior index, we define the function $T^E((\lambda, y^E), \pi^E) = (\lambda + 1, y^E + S^E(\pi^E))$ for $y^E \in \mathbb{R}_+^{\mathbb{N}}$. Then $T^E$ also satisfies the projection condition (4) for any $I \in F(E)$. By Theorem 3, this makes $T^E$ a posterior index for the projective limit model.
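The finiteness argument for the entries of $S^E$ can be made concrete for bijections of $\mathbb{N}$ that move only finitely many points. In the sketch below (a representation chosen purely for illustration: the bijection is stored via the finitely many non-fixed points of $\pi^{-1}$), the infinite sum defining $S_j^E$ is evaluated exactly by truncating at a cutoff beyond which every indicator vanishes:

```python
def S_E(pi_inv_moved, j):
    """S_j^E(pi) = sum_{l > j} I{pi^{-1}(j) > pi^{-1}(l)} for a bijection of N.

    pi_inv_moved maps l -> pi^{-1}(l) for the finitely many non-fixed points;
    all unlisted points are fixed, i.e. pi^{-1}(l) = l."""
    inv = lambda l: pi_inv_moved.get(l, l)
    # Beyond this cutoff, inv(l) = l > inv(j), so every indicator is zero.
    cutoff = max([inv(j)] + list(pi_inv_moved) + list(pi_inv_moved.values()))
    return sum(inv(j) > inv(l) for l in range(j + 1, cutoff + 1))

# Bijection swapping 0 and 5, identity elsewhere: pi^{-1}(0) = 5, pi^{-1}(5) = 0.
swap05 = {0: 5, 5: 0}
print([S_E(swap05, j) for j in range(7)])  # -> [5, 1, 1, 1, 1, 0, 0]
```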
6 Discussion and Conclusion
We have shown how nonparametric Bayesian models can be constructed from finite-dimensional
Bayes equations, and how conjugacy properties of the finite-dimensional models carry over to
the infinite-dimensional, nonparametric case. We also have argued that conjugate nonparametric
Bayesian models arise from exponential families.
A number of interesting questions could not be addressed within the scope of this paper, including
(1) the extension to model properties other than conjugacy and (2) the generalization to uncountable
dimensions. For example, a model property which is closely related to conjugacy is sufficiency [14].
In this case, we would ask whether the existence of sufficient statistics for the finite-dimensional
marginals implies the existence of a sufficient statistic for the nonparametric Bayesian model, and
whether the infinite-dimensional sufficient statistic can be explicitly constructed. Second, the results
presented here are restricted to the case of countable dimensions. This restriction is inconvenient,
since the natural product space representations of, for example, Gaussian and Dirichlet processes
on the real line have uncountable dimensions. The GP (on continuous functions) and the DP are
within the scope of our results, as both can be derived by means of countable-dimensional surrogate
constructions: Since continuous functions on R are completely determined by their values on Q, a
GP can be constructed on the countable-dimensional product space $\mathbb{R}^{\mathbb{Q}}$. Analogous constructions
have been proposed for the DP [7]. The drawback of this approach is that the actual random draw is
just a partial version of the object of interest, and formally has to be completed e.g. into a continuous
function or a probability measure after it is sampled. On the other hand, uncountable product space
constructions are subject to all the subtleties of stochastic process theory, many of which do not
occur in countable dimensions. The application of construction methods to conditional probabilities
also becomes more complicated (roughly speaking, the point-wise application of the Kolmogorov
theorem in the proof of Theorem 2 is not possible if the dimension is uncountable).
Product space constructions are far from the only way to define nonparametric Bayesian models. A Pólya tree model [7], for example, is much more intuitive to construct by means of a binary partition argument than from marginals in product space. As far as characterization results are concerned, such as which models can be conjugate, our results are still applicable, since the set of Pólya trees can be embedded into a product space. However, the marginals may then not be the marginals in terms of which we "naturally" think about the model. Nonetheless, we have hopefully demonstrated
that the theoretical results are applicable for the construction of an interesting and practical range of
nonparametric Bayesian models.
Acknowledgments. I am grateful to Joachim M. Buhmann, Zoubin Ghahramani, Finale Doshi-Velez and the reviewers for helpful comments. This work was in part supported by EPSRC grant
EP/F028628/1.
References
[1] H. Bauer. Probability Theory. W. de Gruyter, 1996.
[2] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen. The infinite hidden Markov model. In
Advances in Neural Information Processing Systems, 2001.
[3] P. Billingsley. Probability and Measure. Wiley, 1995.
[4] S. R. Dalal and W. J. Hall. Approximating priors by mixtures of natural conjugate priors. Journal of the Royal Statistical Society B, 45(2):278–286, 1983.
[5] T. S. Ferguson. A Bayesian analysis of some nonparametric problems. Annals of Statistics, 1(2):209–230, 1973.
[6] M. A. Fligner and J. S. Verducci. Distance based ranking models. Journal of the Royal Statistical Society B, 48(3):359–369, 1986.
[7] J. K. Ghosh and R. V. Ramamoorthi. Bayesian Nonparametrics. Springer, 2002.
[8] T. L. Griffiths and Z. Ghahramani. Infinite latent feature models and the Indian buffet process.
In Advances in Neural Information Processing Systems, 2005.
[9] M. Meilă and L. Bao. Estimation and clustering with infinite rankings. In Uncertainty in Artificial Intelligence, 2008.
[10] M. M. Rao. Conditional Measures and Applications. Chapman & Hall, second edition, 2005.
[11] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press,
2006.
[12] C. P. Robert. The Bayesian Choice. Springer, 1994.
[13] D. M. Roy and Y. W. Teh. The Mondrian process. In Advances in Neural Information Processing Systems, 2009.
[14] M. J. Schervish. Theory of Statistics. Springer, 1995.
[15] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566–1581, 2006.
[16] S. G. Walker, P. Damien, P. W. Laud, and A. F. M. Smith. Bayesian nonparametric inference
for random distributions and related functions. Journal of the Royal Statistical Society B,
61(3):485?527, 1999.
[17] L. Wasserman. All of Nonparametric Statistics. Springer, 2006.
A Fast, Consistent Kernel Two-Sample Test
Kenji Fukumizu
Inst. of Statistical Mathematics
Tokyo Japan
[email protected]
Arthur Gretton
Carnegie Mellon University
MPI for Biological Cybernetics
[email protected]
Bharath K. Sriperumbudur
Dept. of ECE, UCSD
La Jolla, CA 92037
[email protected]
Zaid Harchaoui
Carnegie Mellon University
Pittsburgh, PA, USA
[email protected]
Abstract
A kernel embedding of probability distributions into reproducing kernel Hilbert
spaces (RKHS) has recently been proposed, which allows the comparison of two
probability measures P and Q based on the distance between their respective embeddings: for a sufficiently rich RKHS, this distance is zero if and only if P and
Q coincide. In using this distance as a statistic for a test of whether two samples
are from different distributions, a major difficulty arises in computing the significance threshold, since the empirical statistic has as its null distribution (where
P = Q) an infinite weighted sum of $\chi^2$ random variables. Prior finite sample
approximations to the null distribution include using bootstrap resampling, which
yields a consistent estimate but is computationally costly; and fitting a parametric
model with the low order moments of the test statistic, which can work well in
practice but has no consistency or accuracy guarantees. The main result of the
present work is a novel estimate of the null distribution, computed from the eigenspectrum of the Gram matrix on the aggregate sample from P and Q, and having
lower computational cost than the bootstrap. A proof of consistency of this estimate is provided. The performance of the null distribution estimate is compared
with the bootstrap and parametric approaches on an artificial example, high dimensional multivariate data, and text.
1 Introduction
Learning algorithms based on kernel methods have enjoyed considerable success in a wide range of
supervised learning tasks, such as regression and classification [25]. One reason for the popularity of
these approaches is that they solve difficult non-parametric problems by representing the data points
in high dimensional spaces of features, specifically reproducing kernel Hilbert spaces (RKHSs), in
which linear algorithms can be brought to bear. While classical kernel methods have addressed the
mapping of individual points to feature space, more recent developments [14, 29, 28] have focused
on the embedding of probability distributions in RKHSs. When the embedding is injective, the
RKHS is said to be characteristic [11, 29, 12], and the distance between feature mappings constitutes
a metric on distributions. This distance is known as the maximum mean discrepancy (MMD).
One well-defined application of the MMD is in testing whether two samples are drawn from two
different distributions (i.e., a two-sample or homogeneity test). For instance, we might wish to find
whether DNA microarrays obtained on the same tissue type by different labs are distributed identically, or whether differences in lab procedure are such that the data have dissimilar distributions
(and cannot be aggregated) [8]. Other applications include schema matching in databases, where
tests of distribution similarity can be used to determine which fields correspond [14], and speaker
verification, where MMD can be used to identify whether a speech sample corresponds to a person
for whom previously recorded speech is available [18].
A major challenge when using the MMD in two-sample testing is in obtaining a significance threshold, which the MMD should exceed with small probability when the null hypothesis (that the samples share the same generating distribution) is satisfied. Following [14, Section 4], we define this
threshold as an upper quantile of the asymptotic distribution of the MMD under the null hypothesis.
Unfortunately this null distribution takes the form of an infinite weighted sum of ?2 random variables. Thus, obtaining a consistent finite sample estimate of this threshold ? that is, an estimate
that converges to the true threshold in the infinite sample limit ? is a significant challenge. Three
approaches have previously been applied: distribution-free large deviation bounds [14, Section 3],
which are generally too loose for practical settings; fitting to the Pearson family of densities [14],
a simple heuristic that performs well in practice, but has no guarantees of accuracy or consistency;
and a bootstrap approach, which is guaranteed to be consistent, but has a high computational cost.
The main contribution of the present study is a consistent finite sample estimate of the null distribution (not based on bootstrap), and a proof that this estimate converges to the true null distribution in
the infinite sample limit. Briefly, the infinite sequence of weights that defines the null distribution is
identical to the sequence of normalized eigenvalues obtained in kernel PCA [26, 27, 7]. Thus, we
show that the null distribution defined using finite sample estimates of these eigenvalues converges
to the population distribution, using only convergence results on certain statistics of the eigenvalues.
In experiments, our new estimate of the test threshold has a smaller computational cost than that
of resampling-based approaches such as the bootstrap, while providing performance as good as the
alternatives for larger sample sizes.
We begin our presentation in Section 2 by describing how probability distributions may be embedded
in an RKHS. We also review the maximum mean discrepancy as our chosen distance measure on
these embeddings, and recall the asymptotic behaviour of its finite sample estimate. In Section 3,
we present both moment-based approximations to the null distribution of the MMD (which have
no consistency guarantees); and our novel, consistent estimate of the null distribution, based on the
spectrum of the kernel matrix over the aggregate sample. Our experiments in Section 4 compare the
different approaches on an artificial dataset, and on high-dimensional microarray and neuroscience
data. We also demonstrate the generality of a kernel-based approach by testing whether two samples
of text are on the same topic, or on different topics.
2 Background
In testing whether two samples are generated from the same distribution, we require both a measure
of distance between probabilities, and a notion of whether this distance is statistically significant. For
the former, we define an embedding of probability distributions in a reproducing kernel Hilbert space
(RKHS), such that the distance between these embeddings is our test statistic. For the latter, we give
an expression for the asymptotic distribution of this distance measure, from which a significance
threshold may be obtained.
Let $F$ be an RKHS on the separable metric space $X$, with a continuous feature mapping $\phi(x) \in F$ for each $x \in X$. The inner product between feature mappings is given by the positive definite kernel function $k(x, x') := \langle \phi(x), \phi(x')\rangle_F$. We assume in the following that the kernel $k$ is bounded. Let $\mathcal{P}$ be the set of Borel probability measures on $X$. Following [4, 10, 14], we define the mapping to $F$ of $P \in \mathcal{P}$ as the expectation of $\phi(x)$ with respect to $P$, or
$$\mu_P : \mathcal{P} \to F, \qquad P \mapsto \int_X \phi(x)\, dP.$$
The maximum mean discrepancy (MMD) [14, Lemma 7] is defined as the distance between two such mappings,
$$\mathrm{MMD}(P, Q) := \|\mu_P - \mu_Q\|_F = \left( \mathbb{E}_{x,x'}[k(x, x')] + \mathbb{E}_{y,y'}[k(y, y')] - 2\,\mathbb{E}_{x,y}[k(x, y)] \right)^{1/2},$$
where $x$ and $x'$ are independent random variables drawn according to $P$, $y$ and $y'$ are independent and drawn according to $Q$, and $x$ is independent of $y$. This quantity is a pseudo-metric on distributions: that is, it satisfies all the qualities of a metric besides $\mathrm{MMD}(P, Q) = 0$ iff $P = Q$. For MMD
to be a metric, we require that the kernel be characteristic [11, 29, 12].¹ This criterion is satisfied for many common kernels, such as the Gaussian kernel (both on compact domains and on $\mathbb{R}^d$) and the $B_{2l+1}$ spline kernel on $\mathbb{R}^d$.
We now consider two possible empirical estimates of the MMD, based on i.i.d. samples $(x_1, \dots, x_m)$ from $P$ and $(y_1, \dots, y_m)$ from $Q$ (we assume an equal number of samples for simplicity). An unbiased estimate of $\mathrm{MMD}^2$ is the one-sample U-statistic
$$\mathrm{MMD}^2_u := \frac{1}{m(m-1)} \sum_{i \neq j} h(z_i, z_j), \qquad (1)$$
where $z_i := (x_i, y_i)$ and $h(z_i, z_j) := k(x_i, x_j) + k(y_i, y_j) - k(x_i, y_j) - k(x_j, y_i)$. We also define the biased estimate $\mathrm{MMD}^2_b$ by replacing the U-statistic in (1) with a V-statistic (the sum then includes terms $i = j$).
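As a concrete illustration of the U-statistic in (1), the following sketch (pure Python, Gaussian kernel with unit bandwidth; a plain O(m²) loop rather than an optimized implementation) computes $\mathrm{MMD}^2_u$ for two samples:

```python
import math
import random

def mmd2_u(xs, ys, k):
    """Unbiased U-statistic estimate of MMD^2, Eq. (1): average of h(z_i, z_j) over i != j."""
    m = len(xs)
    total = 0.0
    for i in range(m):
        for j in range(m):
            if i != j:
                total += (k(xs[i], xs[j]) + k(ys[i], ys[j])
                          - k(xs[i], ys[j]) - k(xs[j], ys[i]))
    return total / (m * (m - 1))

gauss = lambda x, y: math.exp(-0.5 * (x - y) ** 2)
random.seed(0)
p1 = [random.gauss(0, 1) for _ in range(200)]
p2 = [random.gauss(0, 1) for _ in range(200)]
q = [random.gauss(1, 1) for _ in range(200)]

print(mmd2_u(p1, p2, gauss))  # near zero: both samples drawn from P
print(mmd2_u(p1, q, gauss))   # clearly positive: P != Q
```

Unbiasedness means the first value fluctuates around zero (it can be slightly negative), while the second concentrates around the population $\mathrm{MMD}^2$ between the two distributions.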
Our goal is to determine whether $P$ and $Q$ differ, based on $m$ samples from each. To this end, we require a measure of whether $\mathrm{MMD}^2_u$ differs significantly from zero; or, if the biased statistic $\mathrm{MMD}^2_b$ is used, whether this value is significantly greater than its expectation when $P = Q$. In other words, we conduct a hypothesis test with null hypothesis $H_0$ defined as $P = Q$, and alternative hypothesis $H_1$ as $P \neq Q$. We must therefore specify a threshold that the empirical MMD will exceed with small probability, when $P = Q$. For an asymptotic false alarm probability (Type I error) of $\alpha$, an appropriate threshold is the $1 - \alpha$ quantile of the asymptotic distribution of the empirical MMD assuming $P = Q$. According to [14, Theorem 8], this distribution takes the form
$$m\,\mathrm{MMD}^2_u \xrightarrow{D} \sum_{l=1}^{\infty} \lambda_l (z_l^2 - 2), \qquad (2)$$
where $\xrightarrow{D}$ denotes convergence in distribution, $z_l \sim \mathcal{N}(0, 2)$ i.i.d., and the $\lambda_l$ are the solutions to the eigenvalue equation
$$\int_X \tilde{k}(x_i, x_j)\, \psi_l(x_i)\, dP := \lambda_l \psi_l(x_j), \qquad (3)$$
where $\tilde{k}(x_i, x_j) := k(x_i, x_j) - \mathbb{E}_x k(x_i, x) - \mathbb{E}_x k(x, x_j) + \mathbb{E}_{x,x'} k(x, x')$. Consistency in power of the resulting hypothesis test (that is, the convergence of its Type II error to zero for increasing $m$) is shown in [14].
The eigenvalue problem (3) has been studied extensively in the context of kernel PCA [26, 27, 7]: this connection will be used in obtaining a finite sample estimate of the null distribution in (2), and we summarize certain important results. Following [3, 10], we define the covariance operator $C : F \to F$ as
$$\langle f, Cf \rangle_F := \mathrm{var}(f(x)) = \mathbb{E}_x f^2(x) - \left[\mathbb{E}_x f(x)\right]^2. \qquad (4)$$
The eigenvalues $\lambda_l$ of $C$ are the solutions to the eigenvalue problem in (3) [19, Proposition 2]. Following e.g. [27, p. 2511], empirical estimates of these eigenvalues are
$$\hat{\lambda}_l = \frac{1}{m}\, \nu_l, \qquad (5)$$
where $\nu_l$ are the eigenvalues of the centered Gram matrix
$$\tilde{K} := HKH,$$
$K_{i,j} := k(x_i, x_j)$, and $H = I - \frac{1}{m}\mathbf{1}\mathbf{1}^\top$ is a centering matrix. Finally, by subtracting $m\,\mathrm{MMD}^2_u$ from $m\,\mathrm{MMD}^2_b$, we observe that these differ by a quantity with expectation $\mathrm{tr}(C) = \sum_{l=1}^{\infty} \lambda_l$, and thus
$$m\,\mathrm{MMD}^2_b \xrightarrow{D} \sum_{l=1}^{\infty} \lambda_l z_l^2.$$
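The empirical eigenvalues in (5) can be plugged directly into the limit distribution above to simulate a null threshold, which is the idea behind the spectrum-based calibration developed in Section 3.2. The sketch below (NumPy; dividing the Gram eigenvalues of the aggregate sample by the aggregate sample size is our choice for this illustration) returns a test threshold as an empirical quantile of the simulated null:

```python
import numpy as np

def spec_null_threshold(x, y, kernel, alpha=0.05, n_draws=5000, seed=0):
    """Approximate the (1 - alpha) null quantile of m * MMD^2_u by sampling
    sum_l lambda_hat_l * (z_l^2 - 2) with z_l ~ N(0, 2), where lambda_hat_l
    are rescaled eigenvalues of the centered Gram matrix on the aggregate sample."""
    z = np.concatenate([x, y])
    n = len(z)
    K = kernel(z[:, None], z[None, :])
    H = np.eye(n) - np.ones((n, n)) / n
    lam = np.linalg.eigvalsh(H @ K @ H) / n
    lam = np.clip(lam, 0.0, None)  # clip tiny negative eigenvalues from round-off
    rng = np.random.default_rng(seed)
    z_sq = rng.normal(0.0, np.sqrt(2.0), size=(n_draws, n)) ** 2
    draws = z_sq @ lam - 2.0 * lam.sum()
    return float(np.quantile(draws, 1.0 - alpha))

gauss = lambda a, b: np.exp(-0.5 * (a - b) ** 2)
rng = np.random.default_rng(1)
x, y = rng.normal(0, 1, 100), rng.normal(0, 1, 100)
t95 = spec_null_threshold(x, y, gauss, alpha=0.05)
t99 = spec_null_threshold(x, y, gauss, alpha=0.01)
print(t95, t99)  # the threshold grows as alpha shrinks
```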
¹ Other interpretations of the MMD are also possible, for particular kernel choices. The most closely related is the $L^2$ distance between probability density estimates [1], although this requires the kernel bandwidth to decrease with increasing sample size. See [1, 14] for more detail. Yet another interpretation is given in [32].
3 Theory
In the present section, we describe three approaches for approximating the null distribution of MMD.
We first present the Pearson curve and Gamma-based approximations, which consist of parametrized
families of distributions that we fit by matching the low order moments of the empirical MMD. Such
approximations can be accurate in practice, although they remain heuristics with no consistency
guarantees. Second, we describe a null distribution estimate based on substituting the empirical
estimates (5) of the eigenvalues into (2). We prove that this estimate converges to its population
counterpart in the large sample limit.
3.1 Moment-based null distribution estimates
The Pearson curves and the Gamma approximation are both based on the low order moments of the empirical MMD. The second and third moments for MMD are obtained in [14]:
$$\mathbb{E}\!\left[\left(\mathrm{MMD}^2_u\right)^2\right] = \frac{2}{m(m-1)}\, \mathbb{E}_{z,z'} h^2(z, z') \qquad (6)$$
and
$$\mathbb{E}\!\left[\left(\mathrm{MMD}^2_u\right)^3\right] = \frac{8(m-2)}{m^2(m-1)^2}\, \mathbb{E}_{z,z'}\!\left[ h(z, z')\, \mathbb{E}_{z''}\!\left( h(z, z'')\, h(z', z'') \right) \right] + O(m^{-4}). \qquad (7)$$
Pearson curves take as arguments the variance, skewness and kurtosis. As in [14], we replace the kurtosis with a lower bound due to [31], $\mathrm{kurt}\left(\mathrm{MMD}^2_u\right) \geq \left(\mathrm{skew}\left(\mathrm{MMD}^2_u\right)\right)^2 + 1$. An alternative, more computationally efficient approach is to use a two-parameter Gamma approximation [20, p. 343, p. 359],
$$m\,\mathrm{MMD}_b(Z) \sim \frac{x^{\alpha-1} e^{-x/\beta}}{\beta^{\alpha}\, \Gamma(\alpha)} \quad \text{where} \quad \alpha = \frac{\left(\mathbb{E}(\mathrm{MMD}_b(Z))\right)^2}{\mathrm{var}(\mathrm{MMD}_b(Z))}, \quad \beta = \frac{m\, \mathrm{var}(\mathrm{MMD}_b(Z))}{\mathbb{E}(\mathrm{MMD}_b(Z))}, \qquad (8)$$
and we use the biased statistic $\mathrm{MMD}^2_b$. Although the Gamma approximation is necessarily less accurate than the Pearson approach, it has a substantially lower computational cost ($O(m^2)$ for the Gamma approximation, as opposed to $O(m^3)$ for Pearson). Moreover, we will observe in our experiments that it performs remarkably well, at a substantial cost saving over the Pearson curves.
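The moment matching behind (8) can be sketched in a few lines. Given estimates of the null mean and variance of $\mathrm{MMD}_b$ (here hypothetical numbers, not computed from data), the fitted Gamma reproduces the first two moments of $m\,\mathrm{MMD}_b$ by construction, since a Gamma$(\alpha, \beta)$ variable has mean $\alpha\beta$ and variance $\alpha\beta^2$:

```python
def gamma_params(mean_mmd_b, var_mmd_b, m):
    """Two-parameter Gamma fit of Eq. (8) for m * MMD_b(Z)."""
    alpha = mean_mmd_b ** 2 / var_mmd_b
    beta = m * var_mmd_b / mean_mmd_b
    return alpha, beta

m = 500
mean_mmd, var_mmd = 0.02, 1e-4   # illustrative null moments of MMD_b
alpha, beta = gamma_params(mean_mmd, var_mmd, m)

# Moment check: mean and variance of Gamma(alpha, beta) match m * MMD_b.
assert abs(alpha * beta - m * mean_mmd) < 1e-9
assert abs(alpha * beta ** 2 - m ** 2 * var_mmd) < 1e-9
print(alpha, beta)
```

The test threshold is then the $1 - \alpha$ quantile of this Gamma distribution, which costs only the $O(m^2)$ evaluation of the two moments.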
3.2 Null distribution estimates using Gram matrix spectrum
In [14, Theorem 8], it was established that for large sample sizes, the null distribution of MMD approaches an infinite weighted sum of independent $\chi^2_1$ random variables, the weights being the population eigenvalues of the covariance operator $C$. Hence, an efficient and theoretically grounded
way to calibrate the test is to compute the quantiles by replacing the population eigenvalues of C
with their empirical counterparts, as computed from the Gram matrix (see also [18], where a similar
strategy is proposed for the KFDA test with fixed regularization).
The following result shows that this empirical estimate of the null distribution converges in distribution to its population counterpart. In other words, a test using the MMD statistic, with the threshold
computed from quantiles of the null distribution estimate, is asymptotically consistent in level.
Theorem 1 Let $z_1, \dots, z_l, \dots$ be an infinite sequence of i.i.d. random variables, with $z_1 \sim \mathcal{N}(0, 2)$. Assume $\sum_{l=1}^{\infty} \lambda_l^{1/2} < \infty$. Then, as $m \to \infty$,
$$\sum_{l=1}^{\infty} \hat{\lambda}_l (z_l^2 - 2) \xrightarrow{D} \sum_{l=1}^{\infty} \lambda_l (z_l^2 - 2).$$
Furthermore, as $m \to \infty$,
$$\sup_t \left| P\!\left( m\,\mathrm{MMD}^2_u > t \right) - P\!\left( \sum_{l=1}^{\infty} \hat{\lambda}_l (z_l^2 - 2) > t \right) \right| \to 0.$$
Proof (sketch) We begin with a proof of conditions under which the sum $\sum_{l=1}^{\infty} \lambda_l (z_l^2 - 2)$ is finite w.p. 1. According to [16, Exercise 30, p. 358], we may use Kolmogorov's inequality to determine that this sum converges a.s. if
$$\sum_{l=1}^{\infty} \mathbb{E}_z\!\left[ \lambda_l^2 (z_l^2 - 2)^2 \right] < \infty,$$
from which it follows that the covariance operator must be Hilbert-Schmidt: this is guaranteed by the assumption $\sum_{l=1}^{\infty} \lambda_l^{1/2} < \infty$ (see also [7]). We now proceed to the convergence result. Let $C$ and $\hat{C}$ be the covariance operator and its empirical estimator. Let $\lambda_l$ and $\hat{\lambda}_l$ ($l = 1, 2, \dots$) be the eigenvalues of $C$ and $\hat{C}$, respectively, in descending order. We want to prove
$$\sum_{l=1}^{\infty} (\hat{\lambda}_l - \lambda_l)\, Z_l^2 \to 0 \qquad (9)$$
in probability as $m \to \infty$, where $Z_l \sim \mathcal{N}(0, 2)$ are i.i.d. random variables. The constant $-2$ in $Z_l^2 - 2$ can be neglected as $\mathrm{tr}[\hat{C}] \to \mathrm{tr}[C]$, where the proof is given in the online supplement. Thus
$$\left| \sum_l (\hat{\lambda}_l - \lambda_l) Z_l^2 \right| \leq \left| \sum_l \left(\hat{\lambda}_l^{1/2} - \lambda_l^{1/2}\right) \hat{\lambda}_l^{1/2} Z_l^2 \right| + \left| \sum_l \left(\hat{\lambda}_l^{1/2} - \lambda_l^{1/2}\right) \lambda_l^{1/2} Z_l^2 \right|$$
$$\leq \left\{ \sum_l \left(\hat{\lambda}_l^{1/2} - \lambda_l^{1/2}\right)^2 \right\}^{1/2} \left\{ \sum_l \hat{\lambda}_l Z_l^4 \right\}^{1/2} + \left\{ \sum_l \left(\hat{\lambda}_l^{1/2} - \lambda_l^{1/2}\right)^2 \right\}^{1/2} \left\{ \sum_l \lambda_l Z_l^4 \right\}^{1/2} \quad \text{(Cauchy-Schwarz).} \qquad (10)$$
We now establish that $\sum_l \lambda_l Z_l^4$ and $\sum_l \hat{\lambda}_l Z_l^4$ are of $O_p(1)$. The former follows from Chebyshev's inequality. To prove the latter, we use that since $\hat{\lambda}_i$ and $Z_i$ are independent,
$$\mathbb{E}\!\left[ \sum_i \hat{\lambda}_i Z_i^4 \right] = \sum_i \mathbb{E}[\hat{\lambda}_i]\, \mathbb{E}[Z_i^4] = \gamma\, \mathbb{E}[\mathrm{tr}(\hat{C})], \qquad (11)$$
where $\gamma = \mathbb{E}[Z^4]$. Since $\mathbb{E}[\mathrm{tr}(\hat{C})]$ is bounded when the kernel has bounded expectation, we again have the desired result by Chebyshev's inequality. The proof is complete if we show
$$\sum_l \left( \hat{\lambda}_l^{1/2} - \lambda_l^{1/2} \right)^2 = o_p(1). \qquad (12)$$
From
$$\left( \hat{\lambda}_l^{1/2} - \lambda_l^{1/2} \right)^2 \leq \left| \hat{\lambda}_l^{1/2} - \lambda_l^{1/2} \right| \left( \hat{\lambda}_l^{1/2} + \lambda_l^{1/2} \right) = \left| \hat{\lambda}_l - \lambda_l \right|, \qquad (13)$$
we have
$$\sum_l \left( \hat{\lambda}_l^{1/2} - \lambda_l^{1/2} \right)^2 \leq \sum_l \left| \hat{\lambda}_l - \lambda_l \right|.$$
It is known as an extension of the Hoffmann-Wielandt inequality that
$$\sum_l \left| \hat{\lambda}_l - \lambda_l \right| \leq \left\| \hat{C} - C \right\|_1,$$
where $\|\cdot\|_1$ is the trace norm (see [23], also shown in [5, p. 490]). Using [18, Prop. 12], which gives $\| \hat{C} - C \|_1 \to 0$ in probability, the proof of the first statement is completed. The proof of the second statement follows immediately from the Polya theorem [21], as in [18].
3.3 Discussion
We now have several ways to calibrate the MMD test statistic, ranked in order of increasing computational cost: 1) the Gamma approximation, 2) the "empirical null distribution": that is, the null distribution estimate using the empirical Gram matrix spectrum, and 3) the Pearson curves, and
the resampling procedures (subsampling or bootstrap with replacement). We include the final two
approaches in the same cost category since even though the Pearson approach scales worse with
m than the bootstrap (O(m3 ) vs O(m2 )), the bootstrap has a higher cost for sample sizes less than
about 103 due the requirement to repeatedly re-compute the test statistic. We also note that our result
of large-sample consistency in level holds under a restrictive condition on the decay of the spectrum
of the covariance operator, whereas the Gamma approximation calculations are straightforward and
remain possible for any spectrum decay behaviour. The Gamma approximation remains a heuristic,
however, and we give an example of a distribution and kernel for which it performs less accurately
than the spectrum-based estimate in the upper tail, which is of most interest for testing purposes.
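The two-parameter Gamma approximation mentioned above is a moment-matching heuristic. A minimal sketch of the generic fit, assuming only that we match the empirical mean and variance of draws of the statistic (the function name and the synthetic data are illustrative, not the authors' implementation):

```python
import numpy as np

def fit_gamma_by_moments(samples):
    """Two-parameter Gamma fit by moment matching:
    mean = k * theta and variance = k * theta**2."""
    mu = samples.mean()
    var = samples.var()
    return mu**2 / var, var / mu  # shape k, scale theta

# Illustration on synthetic draws standing in for null-statistic values.
rng = np.random.default_rng(1)
null_samples = rng.gamma(shape=2.0, scale=0.5, size=50_000)
k, theta = fit_gamma_by_moments(null_samples)
# k and theta should be close to the generating values (2.0, 0.5).
```

Once k and theta are fitted, the test threshold is the (1 - alpha) quantile of the fitted Gamma distribution, which is what makes this approach so cheap relative to resampling.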
4 Experiments
In this section, we compare the four approaches to obtaining the null distribution, both in terms of
the approximation error computed with respect to simulations from the true null, and when used
in homogeneity testing. Our approaches are denoted Gamma (the two-parameter Gamma approximation), Pears (the Pearson curves based on the first three moments, using a lower bound for the
kurtosis), Spec (our new approximation to the null distribution, using the Gram matrix eigenspectrum), and Boot (the bootstrap approach).
Artificial data: We first provide an example of a distribution P for which the heuristics Gamma and Pears have difficulty in approximating the null distribution, whereas Spec converges. We chose P to be a mixture of normals P = 0.5·N(−1, 0.44) + 0.5·N(+1, 0.44), and k as a Gaussian kernel with bandwidth ranging over σ = 2^{−4}, 2^{−3}, 2^{−2}, 2^{−1}, 2^0, 2^1, 2^2. The sample sizes were set to m = 5000, the total sample size hence being 10,000, and the results were averaged over 50,000 replications. The eigenvalues of the Gram matrix were estimated in this experiment using [13], which is slower but more accurate than standard Matlab routines. The true quantiles of the MMD null distribution, referred to as the oracle quantiles, were estimated by Monte Carlo simulations with 50,000 runs. We report the empirical performance of Spec compared to the oracle in terms of

$$\Delta_q = \max_{t_r : q < r < 1} \bigl| P(m\,\mathrm{MMD}_u^2 > t_r) - \hat{P}_m(m\,\mathrm{MMD}_u^2 > t_r) \bigr|,$$

where $t_q$ is such that $P(m\,\mathrm{MMD}_u^2 > t_q) = q$, for q = 0.6, 0.7, 0.8, 0.9, and $\hat{P}_m$ is the Spec null distribution estimate obtained with m samples from each of P and Q. We also use this performance measure for the Gamma and Pears approximations. This focuses the performance comparison on the quantiles corresponding to the upper tail of the null distribution, while still addressing uniform accuracy over a range of thresholds so as to ensure reliable p-values. The results are shown in Figure 1, and demonstrate that for this combination of distribution and kernel, Spec performs almost uniformly better than both Gamma and Pears. We emphasize that the performance advantage of Spec is greatest when we restrict ourselves to higher quantiles, which are of most interest in testing.
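The performance measure Δq can be sketched as follows; `delta_q` is a hypothetical helper, and the chi-squared draws merely stand in for oracle samples and samples from an estimated null:

```python
import numpy as np

def delta_q(oracle_samples, estimated_samples, q, grid=200):
    """Hypothetical implementation of Delta_q: the largest gap between
    oracle and estimated exceedance probabilities P(stat > t_r), over
    thresholds t_r that are oracle r-quantiles with q < r < 1."""
    rs = np.linspace(q, 1.0, grid, endpoint=False)[1:]
    thresholds = np.quantile(oracle_samples, rs)
    p_oracle = np.array([(oracle_samples > t).mean() for t in thresholds])
    p_est = np.array([(estimated_samples > t).mean() for t in thresholds])
    return np.abs(p_oracle - p_est).max()

rng = np.random.default_rng(2)
oracle = rng.chisquare(2, size=50_000)  # stands in for the true null
approx = rng.chisquare(2, size=50_000)  # stands in for an estimate of it
d = delta_q(oracle, approx, q=0.9)      # small here, since both match
```

A poor null-distribution estimate shows up directly as a large Δq in the tail region used for setting test thresholds.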
Figure 1: Evolution of Δq for, respectively, the Gamma (Gam), Spectrum (Spec), and Pearson (Pears) approximations to the null distribution, as the Gaussian kernel bandwidth parameter varies. From left to right, plots of Δq versus σ = 2^{−4}, 2^{−3}, ..., 2^2 for q = 0.6, 0.7, 0.8, 0.9.
Benchmark data: We next demonstrate the performance of the MMD tests on a number of multivariate datasets, taken from [14, Table 1]. We compared microarray data from normal and tumor tissues (Health status), microarray data from different subtypes of cancer (Subtype), and local field potential (LFP) electrode recordings from the Macaque primary visual cortex (V1) with and without spike events (Neural Data I and II, described in [24]). In all cases, we were provided with two samples having different statistical properties, where the detection of these differences was made difficult by the high data dimensionality (for the microarray data, density estimation is impossible given the small sample size and high data dimensionality, and a successful test cannot rely on accurate density estimates as an intermediate step).
In computing the null distributions for both the Spec and Pears cases, we drew 500 samples from the
associated null distribution estimates, and computed the test thresholds using the resulting empirical
quantiles. For the Spec case, we computed the eigenspectrum on the gram matrix of the aggregate
data from P and Q, retaining in all circumstances the maximum number 2m − 1 of nonzero eigenvalues of the empirical Gram matrix. This is a conservative approach, given that the Gram matrix
spectrum may decay rapidly [2, Appendix C], in which case it might be possible to safely discard the
smallest eigenvalues. For the bootstrap approach Boot, we aggregated points from the two samples,
then assigned these randomly without replacement to P and Q. In our experiments, we performed
500 such iterations, and used the resulting histogram of MMD values as our null distribution. We
used a Gaussian kernel in all cases, with the bandwidth set to the median distance between points in
the aggregation of samples from P and Q.
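The ingredients of this calibration pipeline (the median-heuristic bandwidth, an unbiased MMD statistic, and the Boot-style permutation null) can be sketched as follows. This is a minimal illustration, not the authors' implementation; in particular, the exact unbiased estimator variant used is an assumption:

```python
import numpy as np

def median_heuristic_bandwidth(Z):
    """Bandwidth set to the median pairwise distance in the
    aggregation of the two samples, as described above."""
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.sqrt(np.median(d2[np.triu_indices_from(d2, k=1)]))

def gaussian_gram(Z, sigma):
    d2 = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2_u(K, m):
    """One common unbiased MMD^2 estimate between the first m and last m
    points of the aggregate Gram matrix K (within-sample diagonals excluded)."""
    Kxx, Kyy, Kxy = K[:m, :m], K[m:, m:], K[:m, m:]
    term_xx = (Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
    term_yy = (Kyy.sum() - np.trace(Kyy)) / (m * (m - 1))
    return term_xx + term_yy - 2.0 * Kxy.mean()

def permutation_null(K, m, iters=500, rng=None):
    """Boot-style null: reassign the aggregated points randomly to the
    two samples and recompute the statistic each time."""
    rng = rng if rng is not None else np.random.default_rng(0)
    null = np.empty(iters)
    for i in range(iters):
        idx = rng.permutation(2 * m)
        null[i] = mmd2_u(K[np.ix_(idx, idx)], m)
    return null

rng = np.random.default_rng(3)
m = 50
X = rng.normal(size=(m, 2))
Y = rng.normal(size=(m, 2))  # same distribution, so H0 holds
Z = np.vstack([X, Y])
K = gaussian_gram(Z, median_heuristic_bandwidth(Z))
stat = mmd2_u(K, m)
threshold = np.quantile(permutation_null(K, m, rng=rng), 0.95)
reject = stat > threshold    # should usually be False under H0
```

A Spec-style calibration would replace `permutation_null` with draws simulated from the empirical Gram matrix eigenvalues, which is where its computational savings come from.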
We applied our tests to the benchmark data as follows: Given datasets A and B, we either drew one
sample with replacement from A and the other from B (in which case a Type II error was made
when the null hypothesis H0 was accepted); or we drew both samples with replacement from a
single pool consisting of A and B combined (in which case a Type I error was made when H0
was rejected: this should happen a fraction 1 − α of the time). This procedure was repeated 1000
times to obtain average performance figures. We summarize our results in Table 1. Note that an
extensive benchmark of the MMD Boot and Pears tests against other nonparametric approaches to
two-sample testing is provided in [14]: these include the Friedman-Rafsky generalisation of the
Kolmogorov-Smirnov and Wald-Wolfowitz tests [9], the Biau-Györfi test [6], and the Hall-Tajvidi
test [17]. See [14] for details.
We observe that the kernel tests perform extremely well on these data: the Type I error is in the
great majority of cases close to its design value of 1 − α, and the Type II error is very low (and
often zero). The Spec test is occasionally slightly conservative, and has a lower Type I error than
required: this is most pronounced in the Health Status dataset, for which the sample size m is low.
The computational cost shows the expected trend, with Gamma being least costly, followed by Spec,
Pears, and finally Boot (this trend is only visible for the larger m = 500 datasets). Note that for yet
larger sample sizes, however, we expect the cost of Pears to exceed that of the remaining methods,
due to its O(m^3) cost requirement (vs O(m^2) for the other approaches).
Dataset          Attribute        Gamma        Pears        Spec         Boot
Neural Data I    Type I/Type II   0.95 / 0.00  0.96 / 0.00  0.96 / 0.00  0.96 / 0.00
                 Time (sec)       0.06         3.92         2.79         5.79
Neural Data II   Type I/Type II   0.96 / 0.00  0.96 / 0.00  0.97 / 0.00  0.96 / 0.00
                 Time (sec)       0.08         3.97         2.91         8.08
Health status    Type I/Type II   0.96 / 0.00  0.96 / 0.00  0.98 / 0.00  0.95 / 0.00
                 Time (sec)       0.01         0.01         0.01         0.03
Subtype          Type I/Type II   0.95 / 0.02  0.95 / 0.01  0.96 / 0.01  0.94 / 0.01
                 Time (sec)       0.05         0.05         0.05         0.07

Table 1: Benchmarks for the kernel two-sample tests on high dimensional multivariate data. Type I and Type II errors are provided, as are average run times. Sample size (dimension): Neural I 500 (63); Neural II 500 (100); Health Status 25 (12,600); Subtype 25 (2,118).
Finally, we demonstrate the performance of the test on structured (text) data.
Our data are taken from the Canadian Hansard corpus (http://www.isi.edu/natural-language/download/hansard/). As in the earlier work on dependence testing presented in [15], debate transcripts on the three topics of agriculture, fisheries, and immigration were used. Transcripts were in English and French; however, we confine ourselves to reporting results on the English data (the results on the French data were similar). Our goal was to distinguish samples on different topics, for instance P being drawn from transcripts on agriculture and Q from transcripts on immigration (in the null case, both samples were from the same topic). The data were processed following the same procedures as in [15]. We investigated two different kernels on text: the k-substring kernel of [22, 30] with k = 10, and a bag-of-words kernel. In both cases, we computed kernels between five-line extracts, ignoring lines shorter than five words long. Results are presented in Figure 2, and represent an average over all three combinations of
Figure 2: Canadian Hansard data. Left: Average Type II error over all of agriculture-fisheries, agriculture-immigration, and fisheries-immigration, for the bag-of-words kernel. Center: Average Type II error for the k-substring kernel. Right: Eigenspectrum of a centered Gram matrix obtained by drawing m = 10 points from each of P and Q, where P ≠ Q, for the bag-of-words kernel.
different topic pairs: agriculture-fisheries, agriculture-immigration, and fisheries-immigration. For
each topic pairing, results are averaged over 300 repetitions.
We observe that in general, the MMD is very effective at distinguishing distributions of text fragments on different topics: for sample sizes above 30, all the test procedures are able to detect differences in distribution with zero Type II error, for both kernels. When the k-substring kernel is used,
the Boot, Gamma, and Pears approximations can distinguish the distributions for sample sizes as low
as 10: this indicates that a more sophisticated encoding of the text than provided by bag-of-words
results in tests of greater sensitivity (consistent with the independence testing observations of [15]).
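A bag-of-words kernel of the kind used above can be sketched as a normalized inner product of term-count vectors (the unit-norm normalization and the toy five-word extracts are assumptions for illustration; the experiments' exact preprocessing is not specified here):

```python
from collections import Counter
import math

def bow_kernel(text_a, text_b):
    """Hypothetical bag-of-words kernel: inner product of term-count
    vectors, normalized so that k(x, x) = 1."""
    ca, cb = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(ca[w] * cb[w] for w in ca)
    na = math.sqrt(sum(v * v for v in ca.values()))
    nb = math.sqrt(sum(v * v for v in cb.values()))
    return dot / (na * nb) if na and nb else 0.0

same = bow_kernel("the quota for cod fisheries", "the quota for cod fisheries")
diff = bow_kernel("the quota for cod fisheries", "amendments to the immigration act")
# same == 1.0; diff is small, driven only by the shared word "the".
```

Because such kernels only count shared words, extracts on different topics have small kernel values, which is what the MMD exploits; the k-substring kernel captures finer-grained structure and hence yields the more sensitive tests reported above.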
We now investigate the fact that for sample sizes below m = 30 on the Hansard data, the Spec
test has a much higher Type II error than the alternatives. The k-substring and bag-of-words kernels are
diagonally dominant: thus for small sample sizes, the empirical estimate of the kernel spectrum
is effectively truncated at a point where the eigenvalues remain large, introducing a bias (Figure
2). This effect vanishes on the Hansard benchmark once the number of samples reaches 25-30.
By contrast, for the Neural data using a Gaussian kernel, this small sample bias is not observed,
and the Spec test has equivalent Type II performance to the other three tests (see Figure 1 in the
online supplement). In this case, for sample sizes of interest (i.e., where there are sufficient samples
to obtain a Type II error of less than 50%), the bias in the Spec test due to spectral truncation is
negligible. We emphasize that the speed advantage of the Spec test becomes important only for
larger sample sizes (and the consistency guarantee is only meaningful in this regime).
5 Conclusion
We have presented a novel method for estimating the null distribution of the RKHS distance between probability distribution embeddings, for use in a nonparametric test of homogeneity. Unlike
previous parametric heuristics based on moment matching, our new distribution estimate is consistent; moreover, it is computationally less costly than the bootstrap, which is the only alternative
consistent approach. We have demonstrated in experiments that our method performs well on high
dimensional multivariate data and text, as well as for distributions where the parametric heuristics
show inaccuracies. We anticipate that our approach may also be generalized to kernel independence
tests [15], and to homogeneity tests based on the kernel Fisher discriminant [18].
Acknowledgments: The ordering of the second through fourth authors is alphabetical. We thank Choon-Hui
Teo for generating the Gram matrices for the text data, Malte Rasch for his assistance in the experimental
evaluation, and Karsten Borgwardt for his assistance with the microarray data. A. G. was supported by grants
DARPA IPTO FA8750-09-1-0141, ONR MURI N000140710747, and ARO MURI W911NF0810242. Z. H.
was supported by grants from the Technical Support Working Group through funding from the Investigative
Support and Forensics subgroup and NIMH 51435, and from Agence Nationale de la Recherche under contract
ANR-06-BLAN-0078 KERNSIG. B. K. S. was supported by the MPI for Biological Cybernetics, NSF (grant
DMS-MSPA 0625409), the Fair Isaac Corporation and the University of California MICRO program.
References
[1] N. Anderson, P. Hall, and D. Titterington. Two-sample test statistics for measuring discrepancies between two multivariate probability density functions using kernel-based density estimates. Journal of Multivariate Analysis, 50:41–54, 1994.
[2] F. R. Bach and M. I. Jordan. Kernel independent component analysis. J. Mach. Learn. Res., 3:1–48, 2002.
[3] C. Baker. Joint measures and cross-covariance operators. Transactions of the American Mathematical Society, 186:273–289, 1973.
[4] A. Berlinet and C. Thomas-Agnan. Reproducing Kernel Hilbert Spaces in Probability and Statistics. Springer-Verlag, Berlin, 2003.
[5] Rajendra Bhatia and Ludwig Elsner. The Hoffman-Wielandt inequality in infinite dimensions. Proceedings of the Indian Academy of Sciences (Mathematical Sciences), 104(3):483–494, 1994.
[6] G. Biau and L. Györfi. On the asymptotic properties of a nonparametric l1-test statistic of homogeneity. IEEE Transactions on Information Theory, 51(11):3965–3973, 2005.
[7] G. Blanchard, O. Bousquet, and L. Zwald. Statistical properties of kernel principal component analysis. Machine Learning, 66:259–294, 2007.
[8] K. M. Borgwardt, A. Gretton, M. J. Rasch, H.-P. Kriegel, B. Schölkopf, and A. J. Smola. Integrating structured biological data by kernel maximum mean discrepancy. Bioinformatics (ISMB), 22(14):e49–e57, 2006.
[9] J. Friedman and L. Rafsky. Multivariate generalizations of the Wald-Wolfowitz and Smirnov two-sample tests. The Annals of Statistics, 7(4):697–717, 1979.
[10] K. Fukumizu, F. R. Bach, and M. I. Jordan. Dimensionality reduction for supervised learning with reproducing kernel Hilbert spaces. J. Mach. Learn. Res., 5:73–99, 2004.
[11] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In NIPS 20, pages 489–496, 2008.
[12] K. Fukumizu, B. Sriperumbudur, A. Gretton, and B. Schölkopf. Characteristic kernels on groups and semigroups. In NIPS 21, pages 473–480, 2009.
[13] G. Golub and Q. Ye. An inverse free preconditioned Krylov subspace method for symmetric generalized eigenvalue problems. SIAM Journal on Scientific Computing, 24:312–334, 2002.
[14] A. Gretton, K. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two-sample problem. In NIPS 19, pages 513–520, 2007.
[15] A. Gretton, K. Fukumizu, C.-H. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. In NIPS 20, pages 585–592, 2008.
[16] G. R. Grimmett and D. R. Stirzaker. Probability and Random Processes. Oxford University Press, Oxford, third edition, 2001.
[17] P. Hall and N. Tajvidi. Permutation tests for equality of distributions in high-dimensional settings. Biometrika, 89(2):359–374, 2002.
[18] Z. Harchaoui, F. Bach, and E. Moulines. Testing for homogeneity with kernel Fisher discriminant analysis. In NIPS 20, pages 609–616, 2008. (Long version: arXiv:0804.1026v1.)
[19] M. Hein and O. Bousquet. Kernels, associated structures, and generalizations. Technical Report 127, Max Planck Institute for Biological Cybernetics, 2004.
[20] N. L. Johnson, S. Kotz, and N. Balakrishnan. Continuous Univariate Distributions, Volume 1 (Second Edition). John Wiley and Sons, 1994.
[21] E. Lehmann and J. Romano. Testing Statistical Hypotheses (3rd ed.). Wiley, New York, 2005.
[22] C. Leslie, E. Eskin, and W. S. Noble. The spectrum kernel: A string kernel for SVM protein classification. In Proceedings of the Pacific Symposium on Biocomputing, pages 564–575, 2002.
[23] A. S. Markus. The eigen- and singular values of the sum and product of linear operators. Russian Mathematical Surveys, 19(4):93–123, 1964.
[24] M. Rasch, A. Gretton, Y. Murayama, W. Maass, and N. K. Logothetis. Predicting spiking activity from local field potentials. Journal of Neurophysiology, 99:1461–1476, 2008.
[25] B. Schölkopf and A. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[26] B. Schölkopf, A. J. Smola, and K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10:1299–1319, 1998.
[27] J. Shawe-Taylor, C. Williams, N. Cristianini, and J. Kandola. On the eigenspectrum of the Gram matrix and the generalisation error of kernel PCA. IEEE Trans. Inf. Theory, 51(7):2510–2522, 2005.
[28] A. J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In ALT 18, pages 13–31, 2007.
[29] B. Sriperumbudur, A. Gretton, K. Fukumizu, G. Lanckriet, and B. Schölkopf. Injective Hilbert space embeddings of probability measures. In COLT 21, pages 111–122, 2008.
[30] C. H. Teo and S. V. N. Vishwanathan. Fast and space efficient string kernels using suffix arrays. In ICML, pages 929–936, 2006.
[31] J. E. Wilkins. A note on skewness and kurtosis. Ann. Math. Stat., 15(3):333–335, 1944.
[32] G. Zech and B. Aslan. A multivariate two-sample test based on the concept of minimum energy. In PHYSTAT, pages 97–100, 2003.
Entropic Graph Regularization in Non-Parametric
Semi-Supervised Classification
Amarnag Subramanya & Jeff Bilmes
Department of Electrical Engineering, University of Washington, Seattle.
{asubram,bilmes}@ee.washington.edu
Abstract
We prove certain theoretical properties of a graph-regularized transductive learning objective that is based on minimizing a Kullback-Leibler divergence based
loss. These include showing that the iterative alternating minimization procedure
used to minimize the objective converges to the correct solution and deriving a test
for convergence. We also propose a graph node ordering algorithm that is cache
cognizant and leads to a linear speedup in parallel computations. This ensures that
the algorithm scales to large data sets. By making use of empirical evaluation on
the TIMIT and Switchboard I corpora, we show this approach is able to outperform other state-of-the-art SSL approaches. In one instance, we solve a problem
on a 120 million node graph.
1
Introduction
The process of training classifiers with small amounts of labeled data and relatively large amounts
of unlabeled data is known as semi-supervised learning (SSL). In many applications, such as speech
recognition, annotating training data is time-consuming, tedious and error-prone. SSL lends itself as
a useful technique in such situations as one only needs to annotate small amounts of data for training
models. For a survey of SSL algorithms, see [1, 2]. In this paper we focus on graph-based SSL [1].
Here one assumes that the labeled and unlabeled samples are embedded within a low-dimensional
manifold expressed by a graph: each data sample is represented by a vertex within a weighted
graph with the weights providing a measure of similarity between vertices. Some graph-based SSL
approaches perform random walks on the graph for inference [3, 4] while others optimize a loss
function based on smoothness constraints derived from the graph [5, 6, 7, 8]. Graph-based SSL
algorithms are inherently non-parametric, transductive and discriminative [2]. The results of the
benchmark SSL evaluations in chapter 21 of [1] show that graph-based algorithms are in general
better than other SSL algorithms.
Most of the current graph-based SSL algorithms have a number of shortcomings: (a) in many cases,
such as [6, 9], a two class problem is assumed; this necessitates the use of sub-optimal extensions
like one vs. rest to solve multi-class problems, (b) most graph-based SSL algorithms (exceptions include [7, 8]) attempt to minimize squared error which is not optimal for classification problems [10],
and (c) there is a lack of principled approaches to integrating class prior information into graph-based
SSL algorithms. Approaches such as class mass normalization and label bidding are used as a postprocessing step rather than being tightly integrated with the inference. To address some of the above
issues, we proposed a new graph-based SSL algorithm based on minimizing a Kullback-Leibler
divergence (KLD) based loss in [11]. Some of the advantages of this approach include, straightforward extension to multi-class problems, ability to handle label uncertainty and integrate priors.
We also showed that this objective can be minimized using alternating minimization (AM), and can
outperform other state-of-the-art SSL algorithms for document classification.
Another criticism of previous work in graph-based SSL (and SSL in general) is the lack of algorithms that scale to very large data sets. SSL is based on the premise that unlabeled data is easily
1
obtained, and adding large quantities of unlabeled data leads to improved performance. Thus practical scalability (e.g., parallelization), is very important in SSL algorithms. [12, 13] discuss the
application of TSVMs to large-scale problems. [14] suggests an algorithm for improving the induction speed in the case of graph-based algorithms. [15] solves a graph transduction problem with
650,000 samples. To the best of our knowledge, the largest graph-based problem solved to date had
about 900,000 samples (includes both labeled and unlabeled data) [16]. Clearly, this is a fraction
of the amount of unlabeled data at our disposal. For example, on the Internet alone we create 1.6
billion blog posts, 60 billion emails, 2 million photos and 200,000 videos every day [17].
The goal of this paper is to provide theoretical analysis of our algorithm proposed in [11] and also
show how it can be scaled to very large problems. We first prove that AM on our KLD based
objective converges to the true optimum. We also provide a test for convergence and discuss some
theoretical connections between the two SSL objectives proposed in [11]. In addition, we propose
a graph node ordering algorithm that is cache cognizant and makes obtaining a linear speedup with
a parallel implementation more likely. As a result, the algorithms are able to scale to very large
datasets. The node ordering algorithm is quite general and can be applied to graph-based SSL
algorithm such as [5, 11]. In one instance, we solve a SSL problem over a graph with 120 million
vertices. We use the phone classification problem to demonstrate the scalability of the algorithm. We
believe that speech recognition is an ideal application for SSL and in particular graph-based SSL for
several reasons: (a) human speech is produced by a small number of articulators and thus amenable
to representation in a low-dimensional manifold [18]; (b) annotating speech data is time-consuming
and costly; and (c) the data sets tend to be very large.
2
Graph-based SSL
Let Dl = {(xi, ri)}, i = 1, ..., l, be the set of labeled samples, Du = {xi}, i = l+1, ..., l+u, the set of unlabeled samples, and D ≜ {Dl, Du}. Here ri is an encoding of the labeled data and will be explained shortly. We are
interested in solving the transductive learning problem, i.e., given D, the task is to predict the labels
of the samples in Du . The first step in most graph-based SSL algorithms is the construction of an
undirected weighted graph G = (V, E), where the vertices (nodes) V = {1, . . . , m}, m = l + u,
are the data points in D and the edges E ⊆ V × V. Let V_l and V_u be the sets of labeled and
unlabeled vertices respectively. G may be represented via a symmetric matrix W, which is referred
to as the weight or affinity matrix. There are many ways of constructing the graph (see section 6.2
in [2]). In this paper, we use symmetric k-nearest neighbor (NN) graphs; that is, we first form
w_ij ≜ [W]_ij = sim(x_i, x_j) and then make this graph sparse by setting w_ij = 0 unless i is one of j's
k nearest neighbors or j is one of i's k nearest neighbors. It is assumed that sim(x, y) = sim(y, x).
Let N (i) be the set of neighbors of vertex i. Choosing the correct similarity measure and |N (i)| are
crucial steps in the success of any graph-based SSL algorithm as it determines the graph [2].
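The symmetrization rule above can be sketched in a few lines. This is our own brute-force O(m²) illustration (function name and signature are ours, not from the paper's implementation); for large graphs the approximate nearest-neighbor search described in section 5 is used instead.

```python
import numpy as np

def symmetrized_knn_affinity(X, k, sim):
    """Build the symmetric affinity matrix W of a k-NN graph.

    w_ij = sim(x_i, x_j) is kept iff i is one of j's k nearest
    neighbors or j is one of i's, so W stays symmetric.
    """
    m = len(X)
    S = np.array([[sim(X[i], X[j]) for j in range(m)] for i in range(m)])
    np.fill_diagonal(S, -np.inf)                  # a vertex is not its own neighbor
    # for each i, the indices of its k most similar other vertices
    knn = [set(np.argsort(-S[i])[:k]) for i in range(m)]
    W = np.zeros((m, m))
    for i in range(m):
        for j in range(m):
            if j in knn[i] or i in knn[j]:
                W[i, j] = S[i, j]
    return W
```

Because the rule keeps an edge when either endpoint lists the other, vertices can end up with degree larger than k, which is why the paper later reports a minimum (rather than exact) graph degree.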
For each i ∈ V and j ∈ V_l, define probability measures p_i and r_j respectively over the measurable
space (Y, 𝒴). Here 𝒴 is the σ-field of measurable subsets of Y, and Y ⊂ ℕ (the set of natural
numbers) is the space of classifier outputs. Thus |Y| = 2 yields binary classification while |Y| > 2
implies multi-class. As we only consider classification problems here, p_i and r_i are multinomial
distributions: p_i(y) is the probability that x_i belongs to class y, and the classification result is given
by argmax_y p_i(y). The {r_j}, j ∈ V_l, encode the labels of the supervised portion of the training data. If
the labels are known with certainty, then r_j is a "one-hot" vector (with the single 1 at the appropriate
position in the vector). r_j is also capable of representing cases where the label is uncertain, e.g.,
when the labels are in the form of a distribution (possibly derived from normalizing scores
representing confidence). It is important to distinguish between the classical multi-label problem
and the use of uncertainty in r_j. If r_j(ŷ_1), r_j(ŷ_2) > 0 for ŷ_1 ≠ ŷ_2, this does not
imply that the input x_j possesses the two output labels ŷ_1 and ŷ_2; rather, r_j represents our belief in the
various values of the output. As p_i, r_i are probability measures, they lie within a |Y|-dimensional
probability simplex, which we represent by M_|Y|, so p_i, r_i ∈ M_|Y| (henceforth denoted as M).
Also let p ≜ (p_1, ..., p_m) ∈ M^m ≜ M × ... × M (m times) and r ≜ (r_1, ..., r_l) ∈ M^l.
Consider the optimization problem proposed in [11], where p* = argmin_{p ∈ M^m} C1(p) and

C1(p) = Σ_{i=1}^{l} D_KL(r_i ‖ p_i) + μ Σ_{i=1}^{m} Σ_{j ∈ N(i)} w_ij D_KL(p_i ‖ p_j) − ν Σ_{i=1}^{m} H(p_i).

Here H(p) = −Σ_y p(y) log p(y) is the Shannon entropy of p, and D_KL(p ‖ q) is the KLD between
measures p and q, given by D_KL(p ‖ q) = Σ_y p(y) log(p(y)/q(y)). If μ, ν, w_ij ≥ 0 ∀ i, j, then C1(p)
is convex [19]. (μ, ν) are hyper-parameters whose choice we discuss in Section 5. The first term in
C1 penalizes the solutions p_i, i ∈ {1, ..., l}, when they are far away from the labeled training data D_l,
but it does not insist that p_i = r_i, as allowing for deviations from r_i can help especially with noisy
labels [20] or when the graph is extremely dense in certain regions. The second term of C1 penalizes
a lack of consistency with the geometry of the data, i.e., it is a graph regularizer. If w_ij is large, we
prefer a solution in which p_i and p_j are close in the KLD sense. The last term encourages each p_i
to be close to the uniform distribution if not preferred to the contrary by the first two terms. This
acts as a guard against degenerate solutions commonly encountered in graph-based SSL [6], e.g., in
cases where a sub-graph is not connected to any labeled vertex. We conjecture that by maximizing
the entropy of each p_i, the classifier has a better chance of producing high-entropy results in graph
regions of low confidence (e.g., close to the decision boundary and/or in low-density regions). To
recap, C1 makes use of the manifold assumption, is naturally multi-class, and is able to encode label
uncertainty.
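For concreteness, C1 can be evaluated directly from its three terms. The following is our own illustrative sketch (not the paper's code), treating p as an m × |Y| array of multinomials and r as the l × |Y| array of label distributions:

```python
import numpy as np

def kld(p, q):
    # D_KL(p || q) = sum_y p(y) log(p(y)/q(y)); assumes q > 0 wherever p > 0
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def entropy(p):
    mask = p > 0
    return float(-np.sum(p[mask] * np.log(p[mask])))

def C1(p, r, W, mu, nu):
    """Evaluate C1: label fidelity + mu * graph smoothness - nu * entropy."""
    l, m = len(r), len(p)
    label_term = sum(kld(r[i], p[i]) for i in range(l))
    graph_term = sum(W[i, j] * kld(p[i], p[j])
                     for i in range(m) for j in range(m) if W[i, j] > 0)
    ent_term = sum(entropy(p[i]) for i in range(m))
    return label_term + mu * graph_term - nu * ent_term
```

As a sanity check, when every p_i equals the (one-hot) label and all p_i agree across edges, each term vanishes and C1 = 0.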
As C1 is convex in p with linear constraints, we have a convex programming problem. However,
a closed form solution does not exist and so standard numerical optimization approaches such as
interior point methods (IPM) or method of multipliers (MOM) can be used to solve the problem.
But each of these approaches has its own shortcomings and is rather cumbersome to implement
(e.g. an implementation of MOM to solve this problem would have 7 extraneous parameters). Thus,
in [11], we proposed the use of AM for minimizing C1 . We will address the question of whether AM
is superior to IPMs or MOMs for minimizing C1 shortly.
Consider a problem of minimizing d(p, q) over p ∈ P, q ∈ Q. Sometimes solving this problem directly is hard, and in such cases AM lends itself as a valuable tool for efficient optimization. It is an
iterative process in which p^(n) = argmin_{p ∈ P} d(p, q^(n−1)) and q^(n) = argmin_{q ∈ Q} d(p^(n), q).
The Expectation-Maximization (EM) [21] algorithm is an example of AM. C1 is not amenable
to optimization using AM, and so we have proposed a modified version of the objective where
(p*, q*) = argmin_{p,q ∈ M^m} C2(p, q) and

C2(p, q) = Σ_{i=1}^{l} D_KL(r_i ‖ q_i) + μ Σ_{i=1}^{m} Σ_{j ∈ N′(i)} w′_ij D_KL(p_i ‖ q_j) − ν Σ_{i=1}^{m} H(p_i).

In the above, a third measure q_i, ∀ i ∈ V, is defined over the measurable space (Y, 𝒴), W′ =
W + αI_n, N′(i) = {i} ∪ N(i), and α ≥ 0. Here the q_i's play a similar role as the p_i's and
can potentially be used to obtain a final classification result (argmax_y q_i(y)), but α, which is a
hyper-parameter, plays an important role in ensuring that p_i and q_i are close ∀ i. It should be at
least intuitively clear that as α gets large, the reformulated objective (C2) approaches the
original objective (C1). Our results from [11] suggest that setting α = 2 ensures that p* = q* (more
on this in the next section). It is important to highlight that C2(p, q) is itself still a valid SSL criterion.
While the first term encourages q_i for the labeled vertices to be close to the labels r_i, the last term
encourages higher-entropy p_i's. The second term, in addition to acting as a graph regularizer, also
acts as glue between the p's and q's. The update equations for solving C2(p, q) are given by
acts as glue between the p?s and q?s. The update equations for solving C2 (p, q) are given by
P 0
P 0 (n)
(n?1)
exp{ ??i j wij
log qj
(y)}
ri (y)?(i ? l) + ? j wji pj (y)
(n)
(n)
P 0
pi (y) = P
and
q
(y)
=
i
? P
0 log q (n?1) (y)}
?(i ? l) + ? j wji
exp{
w
j
y
j ij
?i
P 0
where ?i = ? + ? j wij . Intuitively, discrete probability measures are being propagated between
vertices along edges, so we refer to this algorithm as measure propagation (MP).
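The alternating updates can be sketched as follows. This is our own dense NumPy illustration of the p- and q-updates (variable names are ours; the paper's implementation is a parallel native one, and at scale W′ would be sparse):

```python
import numpy as np

def measure_propagation(r, Wp, mu, nu, n_iters=50):
    """AM updates for C2 (a sketch).

    r  : (l, |Y|) label distributions for the first l vertices
    Wp : (m, m) affinity matrix W' = W + alpha * I
    """
    m, K = Wp.shape[0], r.shape[1]
    l = r.shape[0]
    q = np.full((m, K), 1.0 / K)               # q^(0) > 0 everywhere
    gamma = nu + mu * Wp.sum(axis=1)           # gamma_i = nu + mu * sum_j w'_ij
    delta = np.zeros(m)
    delta[:l] = 1.0                            # delta(i <= l)
    for _ in range(n_iters):
        # p-update: p_i(y) proportional to exp{(mu/gamma_i) sum_j w'_ij log q_j(y)}
        logp = (mu / gamma)[:, None] * (Wp @ np.log(q))
        logp -= logp.max(axis=1, keepdims=True)    # numerical stability
        p = np.exp(logp)
        p /= p.sum(axis=1, keepdims=True)
        # q-update: q_i(y) = (delta_i r_i(y) + mu sum_j w'_ji p_j(y)) / (delta_i + mu sum_j w'_ji)
        num = mu * (Wp.T @ p)
        num[:l] += r
        q = num / (delta + mu * Wp.sum(axis=0))[:, None]
    return p, q
```

Note that each q_i automatically sums to one, since its numerator and denominator differ only by the factor Σ_y p_j(y) = 1.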
When AM is used to solve an optimization problem, a closed form solution to each of the steps
of the AM is desired but not always guaranteed [7]. It can be seen that solving C2 using AM has
a single additional hyper-parameter while other approaches such as MOM can have as many as 7.
Further, as we show in section 4, the AM update equations can be easily parallelized.
We briefly comment on the relationship to previous work. As noted in section 1, a majority of
the previous graph-based SSL algorithms are based on minimizing squared error [6, 5]. While
these objectives are convex and in some cases admit closed-form (i.e., non-iterative) solutions, they
require inverting a matrix of size m × m. Thus in the case of very large data sets (e.g., like the
one we consider in section 5), it might not be feasible to use this approach. Therefore, an iterative
update is employed in practice. Also, squared-error is only optimal under a Gaussian loss model
and thus more suitable for regression rather than classification problems. Squared-loss penalizes
absolute error, while KLD, on the other hand, penalizes relative error (pages 226 and 235 of [10]).
Henceforth, we refer to a multi-class extension of algorithm 11.2 in [20] as SQ-Loss.
The Information Regularization (IR) [7] approach and subsequently the algorithm of Tsuda [8] use
KLD based objectives and utilize AM to solve the problem. However these algorithms are motivated
from a different perspective. In fact, as stated above, one of the steps of the AM procedure in the
case of IR does not admit a closed form solution. In addition, neither IR nor the work of Tsuda
use an entropy regularizer, which, as our results will show, leads to improved performance. While
the two steps of the AM procedure in the case of Tsuda's work have closed-form solutions and the
approach is applicable to hyper-graphs, one of the updates (equation 13 in [8]) is a special case of the
update for p_i^(n). For more connections to previous approaches, see Section 4 in [11].
3 Optimal Convergence of AM on C2
We show that AM on C2 converges to the minimum of C2, and that there exists a finite α such that
the optimal solutions of C1 and C2 are identical. Therefore, C2 is a perfect tractable surrogate for C1.
In general, AM is not always guaranteed to converge to the correct solution. For example, consider
minimizing f(x, y) = x² + 3xy + y² over x, y ∈ ℝ, where f(x, y) is unbounded below (consider
y = −x). But AM says that (x*, y*) = (0, 0), which is incorrect (see [22] for more examples). For
AM to converge to the correct solution, certain conditions must be satisfied. These might include
topological properties of the optimization problem [23, 24] or certain geometrical properties [25].
The latter is referred to as the Information Geometry approach, where the 5-points property (5-pp) [25] plays an important role in determining convergence, and is the method of choice here.
Theorem 3.1 (Convergence of AM on C2). If
p^(n) = argmin_{p ∈ M^m} C2(p, q^(n−1)), q^(n) = argmin_{q ∈ M^m} C2(p^(n), q), and q_i^(0)(y) > 0 ∀ y ∈ Y, ∀ i, then
(a) C2(p, q) + C2(p, p^(0)) ≥ C2(p, q^(1)) + C2(p^(1), q^(1)) for all p, q ∈ M^m, and
(b) lim_{n→∞} C2(p^(n), q^(n)) = inf_{p,q ∈ M^m} C2(p, q).
Proof Sketch: (a) is the 5-pp for C2(p, q). The 5-pp holds if the 3-points (3-pp) and 4-points (4-pp)
properties hold. In order to show the 3-pp, let f(t) ≜ C2(p(t), q^(0)), where p(t) = (1 − t)p + t p^(1), 0 <
t ≤ 1. Next we use the fact that the first-order Taylor approximation underestimates a convex
function to upper-bound the gradient of f(t) w.r.t. t. We then pass to the limit as t → 1 and use
the monotone convergence theorem to exchange the limit and the summation. This gives the 3-pp.
The proof of the 4-pp follows in a similar manner. (b) follows as a result of Theorem 3 in [25].
Theorem 3.2 (Test for Convergence). If {(p^(n), q^(n))}_{n=1}^{∞} is generated by AM on C2(p, q) and
C2(p*, q*) ≜ inf_{p,q ∈ M^m} C2(p, q), then

C2(p^(n), q^(n)) − C2(p*, q*) ≤ Σ_{i=1}^{m} ( δ(i ≤ l) + d_i μ ) β_i,  where β_i ≜ log sup_y [ q_i^(n)(y) / q_i^(n−1)(y) ] and d_i = Σ_j w′_ij.
Proof Sketch: As the 5-pp holds for all p, q ∈ M^m, it also holds for p = p* and q = q*. We use the fact
that E[f(Z)] ≤ sup_z f(z), where Z is a random variable and f(·) is an arbitrary function.
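The bound of Theorem 3.2 is cheap to evaluate and can serve as a practical stopping criterion for AM. A sketch (our own code; we take d_i as the weighted degree Σ_j w′_ij, which is an assumption about the garbled original statement):

```python
import numpy as np

def am_gap_bound(q_new, q_old, Wp, mu, l):
    """Upper bound on C2(p^(n), q^(n)) - inf C2 from two successive q iterates.

    beta_i = log sup_y q_i^(n)(y) / q_i^(n-1)(y);  d_i = sum_j w'_ij.
    Assumes all q entries are strictly positive.
    """
    m = Wp.shape[0]
    beta = np.log(np.max(q_new / q_old, axis=1))
    delta = np.zeros(m)
    delta[:l] = 1.0
    d = Wp.sum(axis=1)
    return float(np.sum((delta + mu * d) * beta))
```

Since both iterates are distributions, each row's maximum ratio is at least 1, so β_i ≥ 0, and the bound is zero exactly when q has stopped changing.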
The above means that AM on C2 converges to its optimal value. We also have the following theorems,
which show the existence of a finite α such that the optima of C1 and C2 are the same.
Lemma 3.3. If C2(p, q; w′_ii = 0) denotes C2 when the diagonal elements of the affinity matrix are all zero,
then we have that

min_{p,q ∈ M^m} C2(p, q; w′_ii = 0) ≤ min_{p ∈ M^m} C1(p).
Theorem 3.4. Given any A, B, S ∈ M^m (i.e., A = [a_1, ..., a_m], B = [b_1, ..., b_m], S =
[s_1, ..., s_m]) such that a_i(y), b_i(y), s_i(y) > 0 ∀ i, y, and A ≠ B (i.e., not all a_i(y) = b_i(y)),
there exists a finite α such that

C2(A, B) ≥ C2(S, S) = C1(S).
Theorem 3.5 (Equality of Solutions of C1 and C2). Let p̂ = argmin_{p ∈ M^m} C1(p) and (p*_ᾱ, q*_ᾱ) =
argmin_{p,q ∈ M^m} C2(p, q; ᾱ) for an arbitrary α = ᾱ > 0, where p*_ᾱ = (p*_{1;ᾱ}, ..., p*_{m;ᾱ}) and q*_ᾱ =
(q*_{1;ᾱ}, ..., q*_{m;ᾱ}). Then there exists a finite α̂ such that at convergence of AM, we have that
p̂ = p*_α̂ = q*_α̂. Further, it is the case that if p*_ᾱ ≠ q*_ᾱ, then

α̂ ≤ [ C1(p̂) − C2(p*, q*; α = 0) ] / [ μ Σ_{i=1}^{m} D_KL(p*_{i;ᾱ} ‖ q*_{i;ᾱ}) ],

and if p*_ᾱ = q*_ᾱ, then α̂ ≤ ᾱ.
4 Parallelism and Scalability to Large Datasets
One big advantage of AM on C2 over optimizing C1 directly is that it is naturally amenable to a
parallel implementation, and is also amenable to further optimizations (see below) that yield a near
linear speedup. Consider the update equations of Section 2. We see that one set of measures is held
fixed while the other set is updated without any required communication amongst set members, so
there is no write contention. This immediately yields a T ≥ 1-threaded implementation where the
graph is evenly T-partitioned and each thread operates over only a size m/T = (l + u)/T subset of
the graph nodes.
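The write-contention-free structure can be illustrated with a thread pool that partitions the vertices: each worker writes only its own slice of p while the q's are held fixed. This Python sketch (our own; the paper's implementation is native code on a symmetric multiprocessor) only illustrates the data layout:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def parallel_p_update(q, Wp, mu, gamma, n_threads=4):
    """One p-update with vertices split across T threads.

    Each thread writes only its own contiguous slice of p, so there
    is no write contention: q is read-only during the p-update.
    """
    m, K = q.shape
    p = np.empty((m, K))
    logq = np.log(q)

    def work(lo, hi):
        block = (mu / gamma[lo:hi])[:, None] * (Wp[lo:hi] @ logq)
        block -= block.max(axis=1, keepdims=True)
        e = np.exp(block)
        p[lo:hi] = e / e.sum(axis=1, keepdims=True)

    bounds = np.linspace(0, m, n_threads + 1).astype(int)
    with ThreadPoolExecutor(max_workers=n_threads) as ex:
        list(ex.map(lambda t: work(bounds[t], bounds[t + 1]), range(n_threads)))
    return p
```

The q-update parallelizes the same way, with p held fixed.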
We constructed a 10-NN graph using the standard TIMIT training and development sets (see section 5). The graph had 1.4 million vertices. We ran a timing test on a 16 core symmetric multiprocessor with 128GB of RAM, each core operating at 1.6GHz. We varied the number T of threads
from 1 (single-threaded) up to 16, in each case running 3 iterations of AM (i.e., 3 each of p and q
updates). Each experiment was repeated 10 times, and we measured the minimum CPU time over
these 10 runs (total CPU time only was taken into account). The speedup for T threads is typically
defined as the ratio of time taken for single thread to time taken for T threads. The solid (black)
line in figure 1(a) represents the ideal case (a linear speedup), i.e., when using T threads results in a
speedup of T . The pointed (green) line shows the actual speedup of the above procedure, typically
less than ideal due to inter-process communication and poor shared L1 and/or L2 microprocessor
cache interaction. When T ≤ 4, the speedup (green) is close to ideal, but for increasing T the
performance diminishes away from the ideal case.
Our contention is that the sub-linear speedup is due to the poor cache cognizance of the algorithm.
At a given point in time, suppose thread t ∈ {1, ..., T} is operating on node i_t. The collective set of neighbors being used by these T threads is ∪_{t=1}^{T} N(i_t), and this, along with
the nodes ∪_{t=1}^{T} {i_t} (and all memory for the associated measures), constitutes the current working set.
The working set should be made as small as possible to increase the chance it will fit in the microprocessor caches, but this becomes decreasingly likely as T increases, since the working set is
monotonically increasing with T. Our goal, therefore, is for the nodes that are being simultaneously operated on to have large neighbor overlap, thus minimizing the working-set size.
Viewed as an optimization problem, we must find a partition (V_1, V_2, ..., V_{m/T}) of V that minimizes max_{j ∈ {1,...,m/T}} |∪_{v ∈ V_j} N(v)|. With such a partition, we may also order the subsets so that
the neighbors of V_j would have maximal overlap with the neighbors of V_{j+1}. We then schedule the
T nodes in V_j to run simultaneously, and schedule the V_j sets successively.
Of course, the time to produce such a partition cannot dominate the time to run the algorithm itself.
Therefore, we propose a simple fast node ordering procedure (Algorithm 1) that can be run once
before the parallelization begins. The algorithm orders the nodes such that successive nodes are
likely to have a high amount of neighbor overlap with each other and, by transitivity, with nearby
nodes in the ordering. It does this by, given a node v, choosing another node v′ from amongst
v's neighbors' neighbors (meaning the neighbors of v's neighbors) that has the highest neighbor
overlap. We need not search all of V for this, since anything other than v's neighbors' neighbors
Algorithm 1 Graph Ordering Algorithm
Select an arbitrary node v.
while there are any unselected nodes remaining do
Let N(v) be the set of neighbors, and N²(v) be the set of neighbors' neighbors, of v.
Select a currently unselected v′ ∈ N²(v) such that |N(v) ∩ N(v′)| is maximized. If the
intersection is empty, select an arbitrary unselected v′.
v ← v′.
end while
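A direct transcription of Algorithm 1 (our own Python sketch; tie-breaking and the choice of start node are arbitrary, as in the pseudocode):

```python
def order_graph(neighbors):
    """Greedy cache-aware node ordering (Algorithm 1).

    neighbors: dict mapping each node to the set of its neighbors.
    Returns a list of all nodes, ordered so that consecutive nodes
    tend to share many neighbors.
    """
    unselected = set(neighbors)
    v = next(iter(unselected))          # arbitrary start node
    order = [v]
    unselected.remove(v)
    while unselected:
        # candidates: v's neighbors' neighbors that are still unselected
        cands = set()
        for u in neighbors[v]:
            cands |= neighbors[u]
        cands &= unselected
        if cands:
            v = max(cands, key=lambda w: len(neighbors[v] & neighbors[w]))
        else:
            v = next(iter(unselected))  # empty intersection: arbitrary node
        order.append(v)
        unselected.remove(v)
    return order
```

With sorted adjacency lists, the `max` step's intersections are what give the O(mk³)-time bound discussed below.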
Figure 1: (a) speedup vs. number of threads for the TIMIT graph (see section 5). The process was
run on a 128GB, 16-core machine with each core at 1.6GHz. (b) The actual CPU times in seconds
on a log scale vs. number of threads, with and without node ordering.
has no overlap with the neighbors of v. Given such an ordering, the t-th thread operates on nodes
{t, t + m/T, t + 2m/T, . . . }. If the threads proceed synchronously (which we do not enforce) the
set of nodes being processed at any time instant are {1 + jm/T, 2 + jm/T, . . . , T + jm/T }. This
assignment is beneficial not only for maximizing the set of neighbors being simultaneously used,
but also for successive chunks of T nodes since once a chunk of T nodes have been processed, it is
likely that many of the neighbors of the next chunk of T nodes will already have been pre-fetched
into the caches. With the graph represented as an adjacency list, and sets of neighbor indices sorted,
our algorithm is O(mk³) in time and linear in memory, since the intersection between two sorted
lists may be computed in O(k) time. This is sometimes better than O(m log m), namely for cases where
k³ < log m, true for very large m.
We ordered the TIMIT graph nodes, and ran timing tests as explained above. To be fair, the time
required for node ordering is charged against every run. The results are shown in figure 1(a) (pointed
red line) where the results are much closer to ideal, and there are no obvious diminishing returns
like in the unordered case. Running times are given in figure 1(b). Moreover, the ordered case
showed better performance even for a single thread T = 1 (CPU time of 539s vs. 565s for ordered
vs. unordered respectively, on 3 iterations of AM).
We conclude this section by noting that (a) re-ordering may be considered a pre-processing (offline)
step, (b) the SQ-Loss algorithm may also be implemented in a multi-threaded manner and this is
supported by our implementation, (c) our re-ordering algorithm is general and fast and can be used
for any graph-based algorithm where the iterative updates for a given node are a function of its
neighbors (i.e., the updates are harmonic w.r.t. the graph [5]), and (d) while the focus here was on
parallelization across different processors on a symmetric multiprocessor, this would also apply for
distributed processing across a network with a shared network disk.
5 Results
In this section we present results on two popular phone classification tasks. We use SQ-Loss as
the competing graph-based algorithm and compare its performance against that of MP because (a)
SQ-Loss has been shown to outperform its other variants, such as, label propagation [4] and the
harmonic function algorithm [5], (b) SQ-Loss scales easily to very large data sets unlike approaches
like spectral graph transduction [6], and (c) SQ-Loss gives similar performance as other algorithms
that minimize squared error such as manifold regularization [20].
Figure 2: Phone accuracy on the TIMIT test set (a,left) and phone accuracy vs. amount of SWB
training data (b,right). With all SWB data added, the graph has 120 million nodes.
TIMIT Phone Classification: TIMIT is a corpus of read speech that includes time-aligned phonetic
transcriptions. As a result, it has been popular in the speech community for evaluating supervised
phone classification algorithms [26]. Here, we use it to evaluate SSL algorithms by using fractions
of the standard TIMIT training set, i.e., simulating the case when only small amounts of data are
labeled. We constructed a symmetrized 10-NN graph (G_timit) over the TIMIT training and development sets (minimum graph degree is 10). The graph had about 1.4 million vertices. We used
sim(x_i, x_j) = exp{−(x_i − x_j)^T Σ^{−1} (x_i − x_j)}, where Σ is the covariance matrix computed over
the entire training set. In order to obtain the features x_i, we first extracted mel-frequency cepstral
coefficients (MFCC) along with deltas in the manner described in [27]. As phone classification performance is improved with context information, each xi was constructed using a 7 frame context
window. We follow the standard practice of building models to classify 48 phones (|Y| = 48) and
then mapping down to 39 phones for scoring [26].
We compare the performance of MP against MP with no entropy regularization (ν = 0), SQ-Loss,
and a supervised state-of-the-art L2-regularized multi-layered perceptron (MLP) [10]. The hyper-parameters in each case, i.e., the number of hidden units and regularization weight in the case of the MLP, and μ
and ν in the case of MP and SQ-Loss, were tuned on the development set. For MP and SQ-Loss, the hyper-parameters were tuned over the sets μ ∈ {1e−8, 1e−4, 0.01, 0.1} and
ν ∈ {1e−8, 1e−6, 1e−4, 0.01, 0.1}. We found that setting α = 1 in the case of MP ensured that
p = q at convergence. As both MP and SQ-Loss are transductive, in order to measure performance
on an independent test set, we induce the labels using the Nadaraya-Watson estimator (see section
6.4 in [2]) with 50 NNs, using the similarity measure defined above.
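The induction step is a similarity-weighted average over the most similar training vertices. A minimal illustration (the function name and signature are ours, not from the paper):

```python
import numpy as np

def induce_label(x_new, X_train, p_train, sim, k=50):
    """Nadaraya-Watson induction for an unseen point (a sketch).

    Averages the distributions of the k most similar training
    vertices, weighted by the same similarity used for the graph,
    and returns the arg-max class.
    """
    s = np.array([sim(x_new, x) for x in X_train])
    idx = np.argsort(-s)[:k]             # k most similar vertices
    w = s[idx]
    p = (w[:, None] * p_train[idx]).sum(axis=0) / w.sum()
    return int(np.argmax(p))
```

This keeps the transductive solution untouched while extending predictions to points outside the graph.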
Figure 2(a) shows the phone classification results on the NIST Core test set (independent of the
development set). We varied the number of labeled examples by sampling a fraction f of the TIMIT
training set. We show results for f ∈ {0.005, 0.05, 0.1, 0.25, 0.3}. In all cases, for MP and SQ-Loss, we use the same graph G_timit, but the set of labeled vertices changes based on f. In all
cases the MLP was trained fully-supervised. We only show results on the test set, but the results
on the development set showed similar trends. It can be seen that (i) using an entropy regularizer
leads to much improved results in MP, (ii) as expected, the MLP being fully-supervised, performs
poorly compared to the semi-supervised approaches, and most importantly, (iii) MP significantly
outperforms all other approaches. We believe that MP outperforms SQ-Loss as the loss function
in the case of MP is better suited for classification. We also found that for larger values of f (e.g.,
at f = 1), the performances of MLP and MP did not differ significantly. But those are more
representative of supervised training scenarios, which are not the focus here.
Switchboard-I Phone Classification: Switchboard-I (SWB) is a collection of about 2,400 two-sided
telephone conversations among 543 speakers [28]. SWB is often used for the training of large
vocabulary speech recognizers. The corpus is annotated at the word-level. In addition, less reliable
phone level annotations generated in an automatic manner by a speech recognizer with a non-zero
error rate are also available [29]. The Switchboard Transcription Project (STP) [30] was undertaken
to accurately annotate SWB at the phonetic and syllable levels. As a result of the arduous and costly
nature of this transcription task, only 75 minutes (out of 320 hours) of speech segments selected from
different SWB conversations were annotated at the phone level and about 150 minutes annotated at
the syllable level. Having such annotations for all of SWB could be useful for speech processing in
general, so this is an ideal real-world task for SSL.
We make use of only the phonetic labels ignoring the syllable annotations. Our goal is to phonetically annotate SWB in STP style while treating STP as labeled data, and in the process show that
our aforementioned parallelism efforts scale to extremely large datasets. We extracted features xi
from the conversations by first windowing them using a Hamming window of size 25ms at 100Hz.
We then extracted 13 perceptual linear prediction (PLP) coefficients from these windowed features
and appended both deltas and double-deltas resulting in a 39 dimensional feature vector. As with
TIMIT, we are interested in phone classification and we use a 7 frame context window to generate
xi , stepping successive context windows by 10ms as is standard in speech recognition.
We randomly split the 75 minute phonetically annotated part of STP into three sets, one each for
training, development and testing containing 70%, 10% and 20% of the data respectively (the size
of the development set is considerably smaller than the size of the training set). This procedure was
repeated 10 times (i.e. we generated 10 different training, development and test sets by random sampling). In each case, we trained a phone classifier using the training set, tuned the hyper-parameters
on the development set and evaluated the performance on the test set. In the following, we refer to
SWB that is not a part of STP as SWB-STP. We added the unlabeled SWB-STP data in stages. The
percentage included, s, took the values 0%, 2%, 5%, 10%, 25%, 40%, 60%, and 100% of SWB-STP. We ran both
MP and SQ-Loss in each case. When s = 100%, there were about 120 million nodes in the graph!
Due to the large size m = 120M of the dataset, it was not possible to generate the graph using
the conventional brute-force search, which is O(m²). Nearest neighbor search is a well-researched
problem with many approximate solutions [31]. Here we make use of the Approximate Nearest
Neighbor (ANN) library (see http://www.cs.umd.edu/~mount/ANN/) [32]. It constructs
a modified version of a kd-tree which is then used to query the NNs. The query process requires
that one specify an error term ε, and guarantees that d(x_i, N̂(x_i))/d(x_i, N(x_i)) ≤ 1 + ε, where
N(x_i) is a function that returns the actual NN of x_i while N̂(x_i) returns the approximate NN.
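A minimal stand-in for this kind of query, using SciPy's kd-tree (whose `eps` parameter provides the same (1 + ε) distance guarantee) rather than the ANN library itself:

```python
import numpy as np
from scipy.spatial import cKDTree

def approx_knn(X, k, eps):
    """Approximate k-NN indices for every row of X (a sketch).

    eps = 0 gives exact search; eps > 0 permits the (1 + eps)
    approximation that makes very large graphs tractable.
    """
    tree = cKDTree(X)
    # query k+1 neighbors and drop the first hit: each point is its own 0-NN
    _, idx = tree.query(X, k=k + 1, eps=eps)
    return idx[:, 1:]
```

The returned index array can then be symmetrized as in section 2 to form the sparse affinity matrix.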
We constructed graphs using the STP data and s% of the (unlabeled) SWB-STP data. For all the experiments here we used a symmetrized 10-NN graph and ε = 2.0. The labeled and unlabeled points in
the graph changed based on the training, development and test sets used. In each case, we ran both the
MP and SQ-Loss objectives. For each set, we ran a search over μ ∈ {1e−8, 1e−4, 0.01, 0.1} and
ν ∈ {1e−8, 1e−6, 1e−4, 0.01, 0.1} for both approaches. The best values of the hyper-parameters
were chosen based on the performance on the development set and the same value was used to
measure the accuracy on the test set. The mean phone accuracy over the different test sets (and the
standard deviations) are shown in figure 2(b) for the different values of s. It can be seen that MP
outperforms SQ-Loss in all cases. Equally importantly, we see that the performance on the STP data
improves with the addition of increasing amounts of unlabeled data.
References
[1] O. Chapelle, B. Scholkopf, and A. Zien, Semi-Supervised Learning. MIT Press, 2007.
[2] X. Zhu, "Semi-supervised learning literature survey," tech. rep., Computer Sciences, University of Wisconsin-Madison, 2005.
[3] M. Szummer and T. Jaakkola, "Partially labeled classification with Markov random walks," in Advances in Neural Information Processing Systems, vol. 14, 2001.
[4] X. Zhu and Z. Ghahramani, "Learning from labeled and unlabeled data with label propagation," tech. rep., Carnegie Mellon University, 2002.
[5] X. Zhu, Z. Ghahramani, and J. Lafferty, "Semi-supervised learning using Gaussian fields and harmonic functions," in Proc. of the International Conference on Machine Learning (ICML), 2003.
[6] T. Joachims, "Transductive learning via spectral graph partitioning," in Proc. of the International Conference on Machine Learning (ICML), 2003.
[7] A. Corduneanu and T. Jaakkola, "On information regularization," in Uncertainty in Artificial Intelligence, 2003.
[8] K. Tsuda, "Propagating distributions on a hypergraph by dual information regularization," in Proceedings of the 22nd International Conference on Machine Learning, 2005.
[9] M. Belkin, P. Niyogi, and V. Sindhwani, "On manifold regularization," in Proc. of the Conference on Artificial Intelligence and Statistics (AISTATS), 2005.
[10] C. Bishop, ed., Neural Networks for Pattern Recognition. Oxford University Press, 1995.
[11] A. Subramanya and J. Bilmes, "Soft-supervised text classification," in EMNLP, 2008.
[12] R. Collobert, F. Sinz, J. Weston, L. Bottou, and T. Joachims, "Large scale transductive SVMs," Journal of Machine Learning Research, 2006.
[13] V. Sindhwani and S. S. Keerthi, "Large scale semi-supervised linear SVMs," in SIGIR '06: Proceedings of the 29th Annual International ACM SIGIR, 2006.
[14] O. Delalleau, Y. Bengio, and N. Le Roux, "Efficient non-parametric function induction in semi-supervised learning," in Proc. of the Conference on Artificial Intelligence and Statistics (AISTATS), 2005.
[15] M. Karlen, J. Weston, A. Erkan, and R. Collobert, "Large scale manifold transduction," in International Conference on Machine Learning (ICML), 2008.
[16] I. W. Tsang and J. T. Kwok, "Large-scale sparsified manifold regularization," in Advances in Neural Information Processing Systems (NIPS) 19, 2006.
[17] A. Tomkins, "Keynote speech," CIKM Workshop on Search and Social Media, 2008.
[18] A. Jansen and P. Niyogi, "Semi-supervised learning of speech sounds," in Interspeech, 2007.
[19] T. M. Cover and J. A. Thomas, Elements of Information Theory. Wiley Series in Telecommunications, New York: Wiley, 1991.
[20] Y. Bengio, O. Delalleau, and N. Le Roux, Semi-Supervised Learning, ch. Label Propagation and Quadratic Criterion. MIT Press, 2007.
[21] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," Journal of the Royal Statistical Society, Series B, vol. 39, no. 1, pp. 1-38, 1977.
[22] T. Abatzoglou and B. O'Donnell, "Minimization by coordinate descent," Journal of Optimization Theory and Applications, 1982.
[23] W. Zangwill, Nonlinear Programming: A Unified Approach. Englewood Cliffs, N.J.: Prentice-Hall International Series in Management, 1969.
[24] C. F. J. Wu, "On the convergence properties of the EM algorithm," The Annals of Statistics, vol. 11, no. 1, pp. 95-103, 1983.
[25] I. Csiszar and G. Tusnady, "Information geometry and alternating minimization procedures," Statistics and Decisions, 1984.
[26] A. K. Halberstadt and J. R. Glass, "Heterogeneous acoustic measurements for phonetic classification," in Proc. Eurospeech '97, (Rhodes, Greece), pp. 401-404, 1997.
[27] K. F. Lee and H. Hon, "Speaker-independent phone recognition using hidden Markov models," IEEE Transactions on Acoustics, Speech and Signal Processing, vol. 37, no. 11, 1989.
[28] J. Godfrey, E. Holliman, and J. McDaniel, "Switchboard: Telephone speech corpus for research and development," in Proceedings of ICASSP, vol. 1, (San Francisco, California), pp. 517-520, March 1992.
[29] N. Deshmukh, A. Ganapathiraju, A. Gleeson, J. Hamaker, and J. Picone, "Resegmentation of Switchboard," in Proceedings of ICSLP, (Sydney, Australia), pp. 1543-1546, November 1998.
[30] S. Greenberg, "The Switchboard transcription project," tech. rep., The Johns Hopkins University (CLSP) Summer Research Workshop, 1995.
[31] J. Friedman, J. Bentley, and R. Finkel, "An algorithm for finding best matches in logarithmic expected time," ACM Transactions on Mathematical Software, vol. 3, 1977.
[32] S. Arya and D. M. Mount, "Approximate nearest neighbor queries in fixed dimensions," in ACM-SIAM Symp. on Discrete Algorithms (SODA), 1993.
advantage:2 propose:3 interaction:1 maximal:1 aligned:1 date:1 degenerate:1 poorly:1 scalability:3 seattle:1 convergence:10 billion:2 optimum:2 r1:1 empty:1 produce:1 double:1 perfect:1 converges:4 help:1 propagating:1 measured:1 ij:2 nearest:6 sim:4 solves:1 sydney:1 implemented:1 c:1 implies:1 differ:1 correct:4 annotated:4 subsequently:1 human:1 australia:1 adjacency:1 require:1 premise:1 exchange:1 icslp:1 summation:1 extension:3 mm:11 hold:4 recap:1 considered:1 hall:1 exp:3 mapping:1 predict:1 entropic:1 recognizer:1 diminishes:1 proc:5 applicable:1 rhodes:1 label:17 currently:1 largest:1 create:1 tool:1 weighted:2 minimization:4 mit:2 clearly:1 always:2 gaussian:2 modified:2 rather:4 pn:1 finkel:1 jaakkola:2 encode:1 derived:2 focus:3 joachim:2 articulator:1 likelihood:1 tech:3 criticism:1 am:26 sense:1 glass:1 inference:2 multiprocessor:2 nn:6 vl:3 entire:1 integrated:1 typically:2 diminishing:1 hidden:2 picone:1 wij:9 interested:2 issue:1 classification:20 among:1 stp:11 denoted:1 extraneous:1 dual:1 development:12 jansen:1 art:3 ssl:31 special:1 godfrey:1 field:2 construct:1 once:2 having:1 washington:2 sampling:2 identical:1 represents:2 icml:3 argminp:1 minimized:1 others:1 simplex:1 belkin:1 randomly:1 simultaneously:3 divergence:2 tightly:1 maxj:1 geometry:3 keerthi:1 attempt:1 friedman:1 mlp:6 englewood:1 evaluation:2 argmaxy:2 operated:1 csiszar:1 held:1 amenable:4 edge:2 capable:1 closer:1 xy:1 unless:1 tree:1 incomplete:1 taylor:1 walk:2 penalizes:4 desired:1 tsuda:4 re:5 theoretical:3 uncertain:1 mk:1 instance:2 classify:1 soft:1 cover:1 tp:1 assignment:1 maximization:1 vertex:12 subset:3 deviation:2 uniform:1 eurospeech:1 considerably:1 nns:2 chunk:3 density:1 international:6 propogation:1 siam:1 donnell:1 vm:1 lee:1 hopkins:1 squared:5 satisfied:1 successively:1 containing:1 management:1 possibly:1 emnlp:1 henceforth:2 admit:2 style:1 return:3 li:1 account:1 unordered:2 includes:2 coefficient:2 mp:20 vi:2 collobert:2 closed:5 apparently:1 sup:1 
portion:1 red:1 parallel:3 annotation:3 timit:14 minimize:3 appended:1 ir:3 accuracy:6 phonetically:2 maximized:1 yield:3 accurately:1 produced:1 bilmes:3 mfcc:1 processor:1 minm:3 cumbersome:1 ed:1 email:1 against:4 underestimate:1 pp:14 frequency:1 obvious:1 naturally:2 proof:3 di:1 associated:1 hamming:1 propagated:1 dataset:1 popular:2 knowledge:1 lim:1 conversation:3 improves:1 schedule:2 greece:1 disposal:1 higher:1 supervised:17 day:1 follow:1 specify:1 improved:4 evaluated:1 stage:1 hand:1 sketch:2 working:4 nonlinear:1 lack:3 propagation:3 corduneanu:1 arduous:1 believe:2 bentley:1 building:1 true:2 y2:1 multiplier:1 regularization:9 equality:1 alternating:3 symmetric:4 leibler:2 read:1 transitivity:1 interspeech:1 encourages:3 plp:1 noted:1 anything:1 mel:1 speaker:2 criterion:2 m:2 tt:2 demonstrate:1 performs:1 l1:1 postprocessing:1 geometrical:1 meaning:1 harmonic:3 contention:2 superior:1 multinomial:1 rl:1 stepping:1 million:7 refer:3 mellon:1 measurement:1 tsvms:1 ai:2 smoothness:1 automatic:1 consistency:1 pm:1 pointed:2 had:3 dj:1 chapelle:1 similarity:3 operating:2 v0:1 recognizers:1 own:1 showed:3 perspective:1 optimizing:1 belongs:1 inf:2 phone:19 scenario:1 phonetic:4 certain:4 blog:1 success:1 binary:1 watson:1 rep:3 wji:2 scoring:1 seen:3 minimum:3 additional:1 employed:1 parallelized:1 converge:2 monotonically:1 signal:1 semi:10 ii:1 windowing:1 zien:1 rj:8 sound:1 karlen:1 match:1 post:1 equally:1 dkl:7 a1:1 qi:9 ensuring:1 variant:1 regression:1 prediction:1 heterogeneous:1 expectation:1 annotate:3 normalization:1 represent:1 sometimes:2 iteration:2 c1:20 addition:5 argminq:1 crucial:1 parallelization:3 rest:1 unlike:1 posse:1 umd:1 comment:1 hz:1 tend:1 undirected:1 member:1 contrary:1 lafferty:1 ee:1 near:1 noting:1 ideal:7 iii:1 split:1 bengio:2 xj:5 fit:1 competing:1 qj:2 whether:1 motivated:1 thread:15 gb:2 effort:1 reformulated:1 speech:15 proceed:1 york:1 constitute:1 useful:2 clear:1 amount:9 processed:2 svms:2 tth:1 mcdaniel:1 
generate:2 http:1 outperform:3 exist:1 percentage:3 delta:3 cikm:1 discrete:2 write:1 carnegie:1 vol:6 pj:3 neither:1 swb:14 utilize:1 undertaken:1 v1:1 ram:1 graph:65 monotone:1 fraction:3 run:6 uncertainty:4 telecommunication:1 ganapathiraju:1 soda:1 ipms:1 wu:1 fetched:1 decision:2 prefer:1 bound:2 internet:1 summer:1 distinguish:1 guaranteed:2 syllable:3 topological:1 quadratic:1 encountered:1 annual:1 constraint:2 ri:11 x2:1 encodes:1 software:1 nearby:1 speed:3 extremely:2 min:1 relatively:1 conjecture:1 speedup:10 department:1 march:1 poor:2 kd:1 beneficial:1 across:2 em:3 smaller:1 partitioned:1 making:1 s1:1 explained:2 intuitively:2 sided:1 taken:3 equation:4 discus:3 tractable:1 end:1 photo:1 available:1 wii:2 apply:1 kwok:1 away:2 appropriate:1 v2:1 enforce:1 spectral:2 simulating:1 shortly:2 symmetrized:2 existence:1 original:2 thomas:1 assumes:1 running:2 include:4 remaining:1 tomkins:1 madison:1 instant:1 ghahramani:2 especially:1 classical:1 society:1 objective:11 question:1 quantity:1 already:1 added:2 parametric:3 costly:2 diagonal:1 surrogate:1 affinity:2 lends:2 gradient:1 amongst:2 deshmukh:1 majority:1 evenly:1 manifold:7 threaded:3 reason:1 induction:2 index:1 relationship:1 providing:1 minimizing:8 ratio:1 potentially:1 stated:1 implementation:5 collective:1 perform:1 allowing:1 upper:1 datasets:3 arya:1 benchmark:1 finite:4 nist:1 november:1 markov:2 descent:1 hon:1 sparsified:1 situation:1 communication:2 y1:1 frame:2 varied:2 synchronously:1 arbitrary:4 community:1 inverting:1 required:2 connection:2 acoustic:2 california:1 hour:1 decreasingly:1 nip:1 address:2 able:3 below:2 parallelism:2 pattern:1 royal:1 green:2 video:1 belief:1 memory:2 hot:1 suitable:1 overlap:5 natural:1 reliable:1 regularized:2 force:1 zhu:3 representing:2 imply:1 library:1 clsp:1 unselected:3 sn:1 text:1 prior:2 mom:4 l2:2 literature:1 determining:1 relative:1 wisconsin:1 embedded:1 loss:21 fully:2 highlight:1 integrate:1 degree:1 switchboard:7 rubin:1 pi:21 
prone:1 course:1 changed:1 supported:1 last:2 offline:1 perceptron:1 neighbor:27 cepstral:1 absolute:1 sparse:1 ghz:2 distributed:1 boundary:1 greenberg:1 vocabulary:1 valid:1 evaluating:1 world:1 dimension:1 commonly:1 made:1 collection:1 san:1 far:1 social:1 transaction:2 approximate:4 preferred:1 kullback:2 transcription:4 ml:1 corpus:4 b1:1 assumed:2 conclude:1 consuming:2 discriminative:1 xi:18 francisco:1 search:5 iterative:5 nature:1 inherently:1 ignoring:1 obtaining:1 improving:1 du:3 bottou:1 constructing:1 microprocessor:2 vj:3 did:1 aistats:2 dense:1 big:1 hyperparameters:1 repeated:2 fair:1 referred:2 representative:1 transduction:3 wiley:2 sub:3 position:1 lie:1 perceptual:1 erkan:1 third:1 theorem:7 down:1 minute:3 bishop:1 showing:1 list:2 normalizing:1 dl:3 exists:3 workshop:2 adding:1 suited:1 entropy:7 intersection:2 logarithmic:1 likely:4 expressed:1 ordered:4 partially:1 sindhwani:2 ch:1 determines:1 chance:2 extracted:3 acm:3 weston:2 goal:3 viewed:1 sorted:2 ann:2 jeff:1 shared:2 feasible:1 hard:1 change:1 included:1 telephone:2 operates:2 acting:1 lemma:1 total:1 pas:1 shannon:1 exception:1 select:3 latter:1 szummer:1 evaluate:1 |
Statistical Mechanics of Temporal Association
in Neural Networks with Delayed Interactions
Andreas V.M. Herz
Division of Chemistry
Caltech 139-74
Pasadena, CA 91125
Zhaoping Li
School of Natural Sciences
Institute for Advanced Study
Princeton, NJ 08540
J. Leo van Hemmen
Physik-Department
der TU M iinchen
D-8046 Garching, FRG
Abstract
We study the representation of static patterns and temporal associations in neural networks with a broad distribution of signal delays.
For a certain class of such systems, a simple intuitive understanding
of the spatio-temporal computation becomes possible with the help
of a novel Lyapunov functional. It allows a quantitative study of
the asymptotic network behavior through a statistical mechanical
analysis. We present analytic calculations of both retrieval quality
and storage capacity and compare them with simulation results.
1 INTRODUCTION
Basic computational functions of associative neural structures may be analytically
studied within the framework of attractor neural networks where static patterns are
stored as stable fixed-points for the system's dynamics. If the interactions between
single neurons are instantaneous and mediated by symmetric couplings, there is a
Lyapunov function for the retrieval dynamics (Hopfield 1982). The global computation corresponds in that case to a downhill motion in an energy landscape created
by the stored information. Methods of equilibrium statistical mechanics may be applied and permit a quantitative analysis of the asymptotic network behavior (Amit
et al. 1985, 1987). The existence of a Lyapunov function is thus of great conceptual as well as technical importance. Nevertheless, one should be aware that
environmental inputs to a neural net always provide information in both space and
time. It is therefore desirable to extend the original Hopfield scheme and to explore
possibilities for a joint representation of static patterns and temporal associations.
Signal delays are omnipresent in the brain and play an important role in biological information processing. Their incorporation into theoretical models seems to
be rather convincing, especially if one includes the distribution of the delay times
involved. Kleinfeld (1986) and Sompolinsky and Kanter (1986) proposed models
for temporal associations, but they only used a single delay line between two neurons. Tank and Hopfield (1987) presented a feedforward architecture for sequence
recognition based on multiple delays, but they just considered information relative
to the very end of a given sequence. Besides these deficiencies, both approaches lack
the quality to acquire knowledge through a true learning mechanism: Synaptic efficacies have to be calculated by hand which is certainly not satisfactory both from
a neurobiological point of view and also for applications in artificial intelligence.
This drawback has been overcome by a careful interpretation of the Hebb principle
(1949) for neural networks with a broad distribution of transmission delays (Herz
et al. 1988, 1989). After the system has been taught stationary patterns and
temporal sequences - by the same principle ! - it reproduces them with high
precision when triggered suitably. In the present contribution, we focus on a special
class of such delay networks and introduce a Lyapunov (energy) functional for the
deterministic retrieval dynamics (Li and Herz 1990). We thus generalize Hopfield's
approach to the domain of temporal associations. Through an extension of the usual
formalism of equilibrium statistical mechanics to time-dependent phenomena, we
analyze the network performance under a stochastic (noisy) dynamics. We derive
quantitative results on both the retrieval quality and storage capacity, and close
with some remarks on possible generalizations of this approach.
2 DYNAMICS OF THE NEURONS
Throughout what follows, we describe a neural network as a collection of N two-state neurons with activities S_i = 1 for a firing cell and S_i = -1 for a quiescent
one. The cells are connected by synapses with modifiable efficacies J_ij(τ). Here τ
denotes the delay for the information transport from j to i. We focus on a soliton-like propagation of neural signals, characteristic for the (axonal) transmission of
action potentials, and consider a model where each pair of neurons is linked by
several axons with delays 0 ≤ τ < τ_max. Other architectures with only a single
link have been considered elsewhere (Coolen and Gielen 1988; Herz et al. 1988,
1989; Kerszberg and Zippelius 1990). External stimuli are fed into the system via
receptors U_i = ±1 with input sensitivity γ. The postsynaptic potentials are given by

$$h_i(t) = (1-\gamma) \sum_{j=1}^{N} \sum_{\tau=0}^{\tau_{\max}} J_{ij}(\tau)\, S_j(t-\tau) + \gamma\, U_i(t) . \qquad (1)$$
We concentrate on synchronous dynamics (Little 1974) with basic time step Δt = 1. Consequently, signal delays take nonnegative integer values. Synaptic noise is
described by a stochastic Glauber dynamics with noise level β = T⁻¹ (Peretto 1984),
$$\mathrm{Prob}[S_i(t+1) = \pm 1] = \tfrac{1}{2}\left\{1 \pm \tanh[\beta h_i(t)]\right\} , \qquad (2)$$

where Prob denotes probability. For β → ∞, we arrive at a deterministic dynamics,

$$S_i(t+1) = \mathrm{sgn}[h_i(t)] = \begin{cases} 1, & \text{if } h_i(t) > 0 \\ -1, & \text{if } h_i(t) < 0 \end{cases} \qquad (3)$$
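As a concrete illustration of the update rules (1)-(3), here is a minimal NumPy sketch. Everything below (network size, the random placeholder couplings J, the seed, and the noise level β) is an illustrative assumption, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, tau_max = 50, 4                     # illustrative sizes
J = rng.normal(0.0, 1.0 / N, size=(N, N, tau_max))   # placeholder couplings J_ij(tau)
gamma, beta = 0.0, 4.0                 # input sensitivity and inverse noise level

def postsynaptic_potential(S, t, U=None):
    """h_i(t) of Eq. (1); S[:, t - tau] holds S_j(t - tau)."""
    h = np.zeros(N)
    for tau in range(tau_max):
        h += J[:, :, tau] @ S[:, t - tau]
    h *= 1.0 - gamma
    if U is not None:
        h += gamma * U
    return h

def glauber_step(S, t):
    """One synchronous stochastic update, Eq. (2); beta -> infinity recovers Eq. (3)."""
    h = postsynaptic_potential(S, t)
    p_plus = 0.5 * (1.0 + np.tanh(beta * h))          # Prob[S_i(t+1) = +1]
    return np.where(rng.random(N) < p_plus, 1, -1)

T = 20
S = np.empty((N, T + tau_max), dtype=int)
S[:, :tau_max] = rng.choice([-1, 1], size=(N, tau_max))  # random initial history
for t in range(tau_max - 1, T + tau_max - 1):
    S[:, t + 1] = glauber_step(S, t)
```

Replacing the random J with couplings learned via the Hebb rule of the next section turns this sketch into a working associative memory.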
3 HEBBIAN LEARNING
During a learning session the synaptic strengths may change according to the Hebb
principle (1949). We focus on a connection with delay τ between neurons i and j.
According to Hebb, the corresponding efficacy J_ij(τ) will be increased if cell j takes
part in firing cell i. In its physiological context, this rule was originally formulated
for excitatory synapses only, but for simplicity, we apply it to all synapses.
Due to the delay τ in (1) and the parallel dynamics (2), it takes τ + 1 time steps until
neuron j actually influences the state of neuron i. J_ij(τ) thus changes by an amount
proportional to the product of S_j(t − τ) and S_i(t + 1). Starting with J_ij(τ) = 0, we
obtain after P learning sessions, labeled by μ and each of duration D_μ,

$$J_{ij}(\tau) = \varepsilon(\tau)\, N^{-1} \sum_{\mu=1}^{P} \sum_{t_\mu=1}^{D_\mu} S_i(t_\mu + 1)\, S_j(t_\mu - \tau) = \varepsilon(\tau)\, \tilde{J}_{ij}(\tau) . \qquad (4)$$
The parameters ε(τ), normalized by $\sum_{\tau=0}^{\tau_{\max}} \varepsilon(\tau) = 1$, take morphological characteristics of the delay lines into account; N⁻¹ is a scaling factor useful for the theoretical
analysis. By (4), synapses act as microscopic feature detectors during the learning
sessions and store correlations of the taught sequences in both space (i, j) and time
(τ). In general, they will be asymmetric in the sense that J_ij(τ) ≠ J_ji(τ).
During learning, we set T = 0 and γ = 1 to achieve a "clamped learning scenario"
where the system evolves strictly according to the external stimuli, S_i(t_μ) = σ_i(t_μ − 1).
We study the case where all input sequences σ_i(t_μ) are cyclic with equal periods
D_μ = D, i.e., σ_i(t_μ) = σ_i(t_μ ± D) for all μ. In passing we note that one should offer
the sequences already τ_max time steps before allowing synaptic plasticity à la (4) so
that both S_i and S_j are in well defined states during the actual learning sessions.
We define patterns ξ_i^{μa} by ξ_i^{μa} ≡ σ_i(t_μ = a) for 0 ≤ a < D and get

$$J_{ij}(\tau) = \varepsilon(\tau)\, N^{-1} \sum_{\mu=1}^{P} \sum_{a=0}^{D-1} \xi_i^{\mu,a+1}\, \xi_j^{\mu,a-\tau} . \qquad (5)$$

Our learning scheme is thus a generalization of outer-product rules to spatio-temporal patterns. As in the following, temporal arguments of the sequence pattern
states ξ and the synaptic couplings should always be understood modulo D.
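The delay-resolved Hebb rule (5) and the symmetry condition (6) of the next section are easy to check numerically. The following sketch (sizes, seed, and the particular weight distribution ε are illustrative assumptions) builds J_ij(τ) from random cyclic patterns and verifies the extended synaptic symmetry:

```python
import numpy as np

def hebb_delay_couplings(xi, eps):
    """Delay-resolved Hebb rule, Eq. (5).
    xi:  (P, D, N) array of +/-1 pattern states xi_i^{mu,a}
    eps: length-D weights eps(tau); returns J with J[i, j, tau] = J_ij(tau)."""
    P, D, N = xi.shape
    a = np.arange(D)
    J = np.zeros((N, N, D))
    for tau in range(D):
        post = xi[:, (a + 1) % D, :].reshape(P * D, N)    # xi_i^{mu, a+1}
        pre = xi[:, (a - tau) % D, :].reshape(P * D, N)   # xi_j^{mu, a-tau}
        J[:, :, tau] = (eps[tau] / N) * post.T @ pre
    return J

rng = np.random.default_rng(1)
P, D, N = 2, 4, 30
xi = rng.choice([-1, 1], size=(P, D, N))
eps = np.array([1/3, 1/3, 1/3, 0.0])    # obeys eps(tau) = eps((D - (2+tau)) % D)
J = hebb_delay_couplings(xi, eps)

# extended synaptic symmetry, Eq. (6): J_ij(tau) = J_ji((D - (2 + tau)) % D)
for tau in range(D):
    assert np.allclose(J[:, :, tau], J[:, :, (D - (2 + tau)) % D].T)
```

Because the learning sums run over a full period modulo D, the symmetry holds exactly whenever ε itself satisfies ε(τ) = ε((D − (2+τ))%D), as the assertions confirm.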
4 LYAPUNOV FUNCTIONAL
Using formulae (1)-(5), one may derive equations of motion for macroscopic order
parameters (Herz et al. 1988, 1989) but this kind of analysis only applies to the case
P ≲ log N. However, note that from (4) and (5), we get $\tilde{J}_{ij}(\tau) = \tilde{J}_{ji}(D - (2 + \tau))$.
For all networks whose a priori weights ε(τ) obey ε(τ) = ε(D − (2 + τ)) we have
thus found an "extended synaptic symmetry" (Li and Herz 1990),

$$J_{ij}(\tau) = J_{ji}\big((D - (2 + \tau))\,\%\,D\big) , \qquad (6)$$

generalizing Hopfield's symmetry assumption J_ij = J_ji in a natural way to the
temporal domain. To establish a Lyapunov functional for the noiseless retrieval
Statistical Mechanics of Temporal Association in Neural Networks
dynamics (3), we take γ = 0 in (1) and define

$$H(t) = -\frac{1}{2} \sum_{i,j=1}^{N} \sum_{a,\tau=0}^{D-1} J_{ij}(\tau)\, S_i(t-a)\, S_j\big(t-(a+\tau+1)\%D\big) , \qquad (7)$$
where a%b ≡ a mod b. The functional H depends on all states between t+1−D and
t so that solutions with constant H, like D-periodic cycles, need not be static fixed
points of the dynamics. By (1), (5) and (6), the difference ΔH(t) = H(t) − H(t−1)
is

$$\Delta H(t) = -\sum_{i=1}^{N} \big[S_i(t) - S_i(t-D)\big]\, h_i(t-1) \;-\; \frac{\varepsilon(D-1)}{2N} \sum_{\mu=1}^{P} \sum_{a=0}^{D-1} \Big\{ \sum_{i=1}^{N} \xi_i^{\mu a} \big[S_i(t) - S_i(t-D)\big] \Big\}^2 . \qquad (8)$$
The dynamics (3) implies that the first term is nonpositive. Since ε(D−1) ≥ 0, the same
holds true for the second one. For finite N, H is bounded and ΔH has to vanish
as t → ∞. The system therefore settles into a state with S_i(t) = S_i(t−D) for all i.
We have thus exposed two important facts: (a) the retrieval dynamics is governed
by a Lyapunov functional, and (b) the system relaxes to a static state or a limit
cycle with Si(t)=Si(t - D) - oscillatory solutions with the same period as that of
the taught cycles or a period which is equal to an integer fraction of D.
Stepping back for an overview, we notice that H is a Lyapunov functional for all
networks which exhibit an "extended synaptic symmetry" (6) and for which the
matrix J(D − 1) is positive semi-definite. The Hebbian synapses (4) constitute an
important special case and will be the main subject of our further discussion.
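A quick numerical check of the Lyapunov property is straightforward: build Hebbian couplings for a symmetric weight distribution with ε(D−1) = 0 (so that J(D−1) = 0 is trivially positive semi-definite), run the noiseless dynamics (3), and verify that H of Eq. (7) never increases. Sizes and seed below are again illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)
P, D, N = 2, 4, 40
xi = rng.choice([-1, 1], size=(P, D, N))
eps = np.array([1/3, 1/3, 1/3, 0.0])   # eps(tau) = eps(D-2-tau); eps(D-1) = 0

# Hebbian delay couplings, Eq. (5)
a = np.arange(D)
J = np.zeros((N, N, D))
for tau in range(D):
    post = xi[:, (a + 1) % D, :].reshape(P * D, N)
    pre = xi[:, (a - tau) % D, :].reshape(P * D, N)
    J[:, :, tau] = (eps[tau] / N) * post.T @ pre

def lyapunov_H(S, t):
    """Eq. (7): H(t) = -1/2 sum_{ij} sum_{a,tau} J_ij(tau) S_i(t-a) S_j(t-(a+tau+1)%D)."""
    val = 0.0
    for aa in range(D):
        for tau in range(D):
            val -= 0.5 * S[:, t - aa] @ J[:, :, tau] @ S[:, t - (aa + tau + 1) % D]
    return val

T = 30
S = np.empty((N, T), dtype=int)
S[:, :D] = rng.choice([-1, 1], size=(N, D))     # arbitrary initial history
for t in range(D - 1, T - 1):                   # deterministic dynamics, Eq. (3)
    h = sum(J[:, :, tau] @ S[:, t - tau] for tau in range(D))
    S[:, t + 1] = np.where(h >= 0, 1, -1)

energies = [lyapunov_H(S, t) for t in range(D - 1, T)]
```

Along the trajectory, `energies` is nonincreasing and eventually becomes constant, at which point the network has reached a D-periodic limit cycle (or a cycle whose period divides D), in line with the argument above.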
5 STATISTICAL MECHANICS
We now prove that a limit cycle of the retrieval dynamics indeed resembles a stored
sequence. We proceed in two steps. First, we demonstrate that our task concerning
cyclic temporal associations can be mapped onto a symmetric network without
delays. Second, we apply equilibrium statistical mechanics to study such "equivalent
systems" and derive analytic results for the retrieval quality and storage capacity.
D-periodic oscillatory solutions of the retrieval dynamics can be interpreted as static
states in a "D-plicated" system with D columns and N rows of cells with activities
S_ia. A network state will be written A = (A_0, A_1, …, A_{D−1}) with A_a = {S_ia; 1 ≤
i ≤ N}. To reproduce the parallel dynamics of the original system, neurons S_ia
with a = t%D are updated at time t. The time evolution of the new network
therefore has a pseudo-sequential characteristic: synchronous within single columns
and sequentially ordered with respect to these columns. Accordingly, the neural
activities at time t are given by S_ia(t) = S_i(a + n_t) for a ≤ t%D and S_ia(t) =
S_i(a + n_t − D) for a > t%D, where n_t is defined through t = n_t + t%D. Due to (6),
symmetric efficacies J_ij^{ab} = J_ji^{ba} may be constructed for the new system by

$$J_{ij}^{ab} = J_{ij}\big((b - a - 1)\%D\big) , \qquad (9)$$
allowing a well-defined Hamiltonian, equal to that of a Hopfield net of size ND,

$$H = -\frac{1}{2} \sum_{i,j=1}^{N} \sum_{a,b=0}^{D-1} J_{ij}^{ab}\, S_{ia} S_{jb} . \qquad (10)$$
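The mapping to the equivalent symmetric system can likewise be illustrated in code: build J_ij(τ) by the Hebb rule (5) with a symmetric weight distribution, assemble the ND × ND couplings via Eq. (9), and check that the resulting matrix is symmetric. All sizes and the seed are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(3)
P, D, N = 3, 4, 20
eps = np.array([1/3, 1/3, 1/3, 0.0])   # eps(tau) = eps((D - (2+tau)) % D)
xi = rng.choice([-1, 1], size=(P, D, N))

# Hebbian delay couplings, Eq. (5)
a = np.arange(D)
J = np.zeros((N, N, D))
for tau in range(D):
    post = xi[:, (a + 1) % D, :].reshape(P * D, N)
    pre = xi[:, (a - tau) % D, :].reshape(P * D, N)
    J[:, :, tau] = (eps[tau] / N) * post.T @ pre

# Eq. (9): couplings of the D-plicated network of size N*D;
# block (a, b) couples column a to column b of the equivalent system
Jbig = np.zeros((N * D, N * D))
for aa in range(D):
    for bb in range(D):
        Jbig[aa * N:(aa + 1) * N, bb * N:(bb + 1) * N] = J[:, :, (bb - aa - 1) % D]
```

The symmetry of `Jbig` is exactly what makes the machinery of equilibrium statistical mechanics applicable to the D-plicated system.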
An evaluation of (10) in terms of the former state variables reveals that it is identical
to the Lyapunov functional (7). The interpretation, however, is changed: a limit
cycle of period D in the original network corresponds to a fixed-point of the new
system of size ND. We have thus shown that the time evolution of a delay network
with extended symmetry can be understood in terms of a downhill motion in the
energy landscape of its "equivalent system" .
For Hebbian couplings (5), the new efficacies J_ij^{ab} take a particularly simple form if
we define patterns {ξ_ia^{μb}; 1 ≤ i ≤ N, 0 ≤ a, b < D} by ξ_ia^{μb} = ξ_i^{μ,(b−a)%D}, i.e., if we create
column-shifted copies of the prototype ξ_i^{μb}. Setting ε_ab ≡ ε((b − a − 1)%D) = ε_ba
leads to

$$J_{ij}^{ab} = \varepsilon_{ab}\, N^{-1} \sum_{\mu=1}^{P} \sum_{c=0}^{D-1} \xi_{ia}^{\mu c}\, \xi_{jb}^{\mu c} . \qquad (11)$$
Storing one cycle σ_i(t_μ) = ξ_i^{μa} in the delay network thus corresponds to memorizing
D shifted duplicates ξ_ia^{μb}, 0 ≤ a < D, in the equivalent system, reflecting that a
D-cycle can be retrieved in D different time-shifted versions in the original network.
If, in the second step, we now switch to the stochastic dynamics (2), the important
question arises whether H also determines the equilibrium distribution p of the
system. This need not be true since the column-wise dynamics of the equivalent
network differs from both the Little and Hopfield model. An elaborate proof (Li
and Herz, 1990), however, shows that there is indeed an equilibrium distribution à
la Gibbs,

$$p(A) = Z^{-1} \exp[-\beta H(A)] , \qquad (12)$$
where Z = Tr_A exp[−βH(A)]. In passing we note that for D = 2 there are only
links with zero delay. By (6) we have J_ij(0) = J_ji(0), i.e., we are dealing with a
symmetric Little model. We may introduce a reduced probability distribution p̄
for this special case, p̄(A_1) = Tr_{A_0} p(A_0 A_1), and obtain p̄(A_1) = Z^{-1} exp[−βH̄(A_1)]
with

$$\bar{H}(A_1) = -\beta^{-1} \sum_{i=1}^{N} \ln\Big[2 \cosh\Big(\beta \sum_{j=1}^{N} J_{ij} S_j\Big)\Big] . \qquad (13)$$
We thus have recovered both the effective Hamiltonian of the Little model as derived
by Peretto (1984) and the duplicated-system technique of van Hemmen (1986).
We finish our argument by turning to quantitative results. We focus on the case
where each of the P learning sessions corresponds to teaching a (different) cycle
of D patterns ξ_i^{μa}, each lasting for one time step. We work with unbiased random
patterns where ξ_i^{μa} = ±1 with equal probability, and study our network at a finite
storage level α = lim_{N→∞}(P/N) > 0. A detailed analysis of the case where the
number of cycles remains bounded as N → ∞ can be found in (Li and Herz 1990).
As in the replica-symmetric theory of Amit et al. (1987), we assume that the network
is in a state highly correlated with a finite number of stored cycles. The remaining,
extensively many cycles are described as a noise term. We define "partial" overlaps
by m_a^{μb} = N^{-1} Σ_i ξ_ia^{μb} S_ia. These macroscopic order parameters measure how close
the system is to a stored pattern ξ^{μb} at a specific column a. We consider retrieval
solutions, i.e., m_a^{μb} = m^μ δ_{b,0}, and arrive at the fixed-point equations (Li and Herz
1990)
(14)
where
and
$$q = \Big\langle\Big\langle \tanh^2\Big[\beta\Big(\sum_\mu m^\mu \xi^{\mu 0} + \sqrt{\alpha r}\, z\Big)\Big]\Big\rangle\Big\rangle . \qquad (15)$$
Double angular brackets represent an average with respect to both the "condensed"
cycles and the normalized Gaussian random variable z. The λ_k(ε) are eigenvalues
of the matrix ε. Retrieval is possible when solutions with m^μ > 0 for a single cycle
μ exist, and the storage capacity α_c is reached when such solutions cease to exist. It
should be noted that each cycle consists of D patterns so that the storage capacity
for single patterns is ᾱ_c = D·α_c. During the recognition process, however, each of
them will trigger the cycle it belongs to and cannot be retrieved as a static pattern.
For systems with a "maximally uniform" distribution, ε_ab = (D−1)^{-1}(1 − δ_ab), we
get

D      2      3      4      5      ∞
α_c    0.100  0.120  0.116  0.110  0.138
where the last result is identical to that for the corresponding Hopfield model since
the diagonal terms of ε can be neglected in that case. The above findings agree
well with estimates from a finite-size analysis (N ≤ 3000) of data from numerical
simulations as shown by two examples. For D = 3, we have found α_c = 0.120 ± 0.015,
for D = 4, α_c = 0.125 ± 0.015. Our results demonstrate that the storage capacity for
temporal associations is comparable to that for static memories. As an example,
take D = 2, i.e., the Little model. In the limit of large N, we see that 0.100·N
two-cycles of the form ξ^{μ0} ⇄ ξ^{μ1} may be recalled as compared to 0.138·N static
patterns (Fontanari and Koberle 1987); this leads to a 1.45-fold increase of the
information content per synapse.

The influence of the weight distribution on the network behavior may be demonstrated by some choices of ε(τ) for D = 4:

τ        0     1     2     3      α_c     m_c
ε(τ)    1/3   1/3   1/3    0     0.116   0.96
ε(τ)    1/2    0    1/2    0     0.100   0.93
ε(τ)     0     1     0     0     0.050   0.93
The storage capacity decreases with decreasing number of delay lines, but measured
per synapse, it does increase. However, networks with only a small number of delays
are less fault-tolerant as known from numerical simulations (Herz et al. 1989). For
all studied architectures, retrieved sequences contain less than 3.5% errors.
Our results prove that an extensive number of temporal associations can be stored
as spatio-temporal attractors for the retrieval dynamics. They also indicate that
dynamical systems with delayed interactions can be programmed in a very efficient
manner to perform associative computations in the space-time domain.
6 CONCLUSION
Learning schemes can be successful only if the structure of the learning task is
compatible with both the network architecture and the learning algorithm. In the
present context, the task is to store simple temporal associations. It can be accomplished in neural networks with a broad distribution of signal delays and Hebbian
synapses which, during learning periods, operate as microscopic feature detectors
for spatio-temporal correlations within the external stimuli. The retrieval dynamics
utilizes the very same delays and synapses, and is therefore rather robust as shown
by numerical simulations and a statistical mechanical analysis.
Our approach may be generalized in various directions. For example, one can investigate more sophisticated learning rules or switch to continuous neurons in "iterated-map networks" (Marcus and Westervelt 1990). A generalization of the Lyapunov
functional (7) covers that case as well (Herz, to be published) and allows a direct
comparison of theoretical predictions with results from hardware implementations.
Finally, one could try to develop a Lyapunov functional for a continuous-time dynamics with delays which seems to be rather significant for applications as well as
for the general theory of functional differential equations and dynamical systems.
Acknowledgements
It is a pleasure to thank Bernhard Sulzer, John Hopfield, Reimer Kuhn and Wulfram Gerstner for many helpful discussions. AVMH acknowledges support from the
Studienstiftung des Deutschen Volkes. ZL is partly supported by a grant from the
Seaver Institute.
References
Amit D J, Gutfreund H and Sompolinsky H 1985 Phys. Rev. A 32 1007
- 1987 Ann. Phys. (N. Y.) 173 30
Coolen A C C and Gielen C C A M 1988 Europhys. Lett. 7 281
Fontanari J F and Koberle R 1987 Phys. Rev. A 36 2475
Hebb D O 1949 The Organization of Behavior Wiley, New York
van Hemmen J L 1986 Phys. Rev. A 34 3435
Herz A V M, Sulzer B, Kuhn R and van Hemmen J L 1988 Europhys. Lett. 7 663
- 1989 Biol. Cybern. 60 457
Hopfield J J 1982 Proc. Natl. Acad. Sci. USA 79 2554
Kerszberg M and Zippelius A 1990 Phys. Scr. T33 54
Kleinfeld D 1986 Proc. Natl. Acad. Sci. USA 83 9469
Li Z and Herz A V M 1990 in Lecture Notes in Physics 368 pp 287, Springer, Heidelberg
Little W A 1974 Math. Biosci. 19 101
Marcus C M and Westervelt R M 1990 Phys. Rev. A 42 2410
Peretto P 1984 Biol. Cybern. 50 51
Sompolinsky H and Kanter I 1986 Phys. Rev. Lett. 57 2861
Tank D W and Hopfield J J 1987 Proc. Natl. Acad. Sci. USA 84 1896
Time-rescaling methods for the estimation and
assessment of non-Poisson neural encoding models
Jonathan W. Pillow
Departments of Psychology and Neurobiology
University of Texas at Austin
[email protected]
Abstract
Recent work on the statistical modeling of neural responses has focused on modulated renewal processes in which the spike rate is a function of the stimulus and
recent spiking history. Typically, these models incorporate spike-history dependencies via either: (A) a conditionally-Poisson process with rate dependent on
a linear projection of the spike train history (e.g., generalized linear model); or
(B) a modulated non-Poisson renewal process (e.g., inhomogeneous gamma process). Here we show that the two approaches can be combined, resulting in a
conditional renewal (CR) model for neural spike trains. This model captures both
real-time and rescaled-time history effects, and can be fit by maximum likelihood
using a simple application of the time-rescaling theorem [1]. We show that for
any modulated renewal process model, the log-likelihood is concave in the linear
filter parameters only under certain restrictive conditions on the renewal density
(ruling out many popular choices, e.g. gamma with shape α ≠ 1), suggesting that
real-time history effects are easier to estimate than non-Poisson renewal properties. Moreover, we show that goodness-of-fit tests based on the time-rescaling
theorem [1] quantify relative-time effects, but do not reliably assess accuracy in
spike prediction or stimulus-response modeling. We illustrate the CR model with
applications to both real and simulated neural data.
1 Introduction
A central problem in computational neuroscience is to develop functional models that can accurately
describe the relationship between external variables and neural spike trains. All attempts to measure
information transmission in the nervous system are fundamentally attempts to quantify this relationship, which can be expressed by the conditional probability P({t_i} | X), where {t_i} is a set of spike
times generated in response to an external stimulus X.
Recent work on the neural coding problem has focused on extensions of the Linear-Nonlinear-Poisson (LNP) "cascade" encoding model, which describes the neural encoding process using a
linear receptive field, a point nonlinearity, and an inhomogeneous Poisson spiking process [2, 3].
While this model provides a simple, tractable tool for characterizing neural responses, one obvious
shortcoming is the assumption of Poisson spiking. Neural spike trains exhibit spike-history dependencies (e.g., refractoriness, bursting, adaptation), violating the Poisson assumption that spikes in
disjoint time intervals are independent. Such dependencies, moreover, have been shown to be essential for extracting complete stimulus information from spike trains in a variety of brain areas
[4, 5, 6, 7, 8, 9, 10, 11].
Previous work has considered two basic approaches for incorporating spike-history dependencies
into neural encoding models. One approach is to model spiking as a non-Poisson inhomogeneous
renewal process (e.g., a modulated gamma process [12, 13, 14, 15]). Under this approach, spike
[Figure 1 graphic: panel A shows the model schematic (stimulus filter, nonlinearity, post-spike filter, renewal density p(ISI)); panel B shows rate (Hz) vs. real time (s) and the rescaled-time ISI density.]
Figure 1: The conditional renewal (CR) model and time-rescaling transform. (A) Stimuli are convolved with a filter k then passed through a nonlinearity f, whose output is the rate λ(t) for an inhomogeneous spiking process with renewal density q. The post-spike filter h provides recurrent additive input to f for every spike emitted. (B) Illustration of the time-rescaling transform and its inverse. Top: the intensity λ(t) (here independent of spike history) in response to a one-second stimulus. Bottom left: interspike intervals (intervals between red dots) are drawn i.i.d. in rescaled time from renewal density q, here set to gamma with shape α = 20. Samples are mapped to spikes in real time (bottom) via Λ⁻¹(t), the inverse of the cumulative intensity. Alternatively, Λ(t) maps the true spike times (bottom) to samples from a homogeneous renewal process in rescaled time (left edge).
times are Markovian, depending on the most recent spike time via a (non-exponential) renewal
density, which may be rescaled in proportion to the instantaneous spike rate. A second approach
is to use a conditionally Poisson process in which the intensity (or spike rate) is a function of the
recent spiking history [4, 16, 17, 18, 19, 20]. The output of such a model is a conditionally Poisson
process, but not Poisson, since the spike rate itself depends on the spike history.
The time-rescaling theorem, described elegantly for applications to neuroscience in [1], provides a
powerful tool for connecting these two basic approaches, which is the primary focus of this paper.
We begin by reviewing inhomogeneous renewal models and generalized linear model point process
models for neural spike trains.
2 Point process neural encoding models

2.1 Definitions and Terminology
Let {t_i} be a sequence of spike times on the interval (0, T], with 0 < t_0 < t_1 < . . . < t_n ≤ T, and let λ(t) denote the intensity (or "spike rate") for the point process, where λ(t) ≥ 0, ∀t. Generally, this intensity is a function of some external variable (e.g., a visual stimulus). The cumulative intensity function is given by the integrated intensity,

Λ(t) = ∫_0^t λ(s) ds,  (1)

and is also known as the time-rescaling transform [1]. This function rescales the original spike times into spikes from a (homogeneous) renewal process, that is, a process in which the intervals are i.i.d. samples from a fixed distribution. Let {u_i} denote the inter-spike intervals (ISIs) of the rescaled process, which are given by the integral of the intensity between successive spikes, i.e.,

u_i = Λ_{t_{i−1}}(t_i) = ∫_{t_{i−1}}^{t_i} λ(s) ds.  (2)

Intuitively, this transformation stretches time in proportion to the spike rate λ(t), so that when the rate λ(t) is high, ISIs are lengthened and when λ(t) is low, ISIs are compressed. (See fig. 1B for illustration.)
Let q(u) denote the renewal density, the probability density function from which the rescaled-time intervals {u_i} are drawn. A Poisson process arises if q is exponential, q(u) = e^{−u}; for any other density, the probability of spiking depends on the most recent spike time. For example, if q(u) is zero for u ∈ [0, a], the neuron exhibits a refractory period (whose duration varies with λ(t)).

To sample from this model (illustrated in fig. 1B), we can draw independent intervals u_i from the renewal density q(u), then apply the inverse time-rescaling transform to obtain ISIs in real time:

(t_i − t_{i−1}) = Λ^{−1}_{t_{i−1}}(u_i),  (3)

where Λ^{−1}_{t_{i−1}}(t) is the inverse of the time-rescaling transform (eq. 2).¹
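In discrete time, this sampling scheme amounts to accumulating the intensity until it exceeds the next rescaled interval. The sketch below is our own illustration (not code from the paper); the sinusoidal rate and the unit-mean gamma renewal density with shape 10 are arbitrary choices, and all names are ours.

```python
import numpy as np

def sample_renewal_spikes(rate, dt, shape, rng):
    """Sample spikes from a modulated renewal process via eq. (3).

    Draws i.i.d. rescaled intervals u_i from a unit-mean gamma renewal
    density, then inverts the (discretized) cumulative intensity Lambda(t)
    to map each interval back to a spike time in real time.
    """
    cum = np.cumsum(rate) * dt              # Lambda(t) on the time grid
    spikes = []
    last_cum = 0.0                          # Lambda at the previous spike
    u = rng.gamma(shape, 1.0 / shape)       # first rescaled interval
    for i, c in enumerate(cum):
        if c - last_cum >= u:               # interval consumed -> emit a spike
            spikes.append((i + 1) * dt)
            last_cum = c
            u = rng.gamma(shape, 1.0 / shape)
    return np.array(spikes)

rng = np.random.default_rng(0)
dt = 0.001
t = np.arange(0, 10, dt)
rate = 50.0 * (1.0 + np.sin(2.0 * np.pi * t))   # time-varying rate in Hz
spikes = sample_renewal_spikes(rate, dt, shape=10.0, rng=rng)
```

Since the renewal density has mean 1, the expected spike count equals the integrated rate (here about 500 spikes over 10 s) regardless of the gamma shape; the shape only controls how regular the intervals are.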
We will generally define the intensity function (which we will refer to as the base intensity²) in terms of a linear-nonlinear cascade, with linear dependence on some external covariates of the response (optionally including spike-history), followed by a point nonlinearity. The intensity in this case can be written:

λ(t) = f(x_t · k + y_t · h),  (4)

where x_t is a vector representing the stimulus at time t, k is a stimulus filter, y_t is a vector representing the spike history at t, and h is a spike-history filter. We assume that the nonlinearity f is fixed.
2.2 The conditional renewal model

We refer to the most general version of this model, in which λ(t) is allowed to depend on both the stimulus and spike train history, and q(u) is an arbitrary (finite-mean) density on ℝ⁺, as a conditional renewal (CR) model (see fig. 1A). The output of this model forms an inhomogeneous renewal process conditioned on the process history. Although it is mathematically straightforward to define such a model, to our knowledge, no previous work has sought to incorporate both real-time (via h) and rescaled-time (via q) dependencies in a single model.

Specific (restricted) cases of the CR model include the generalized linear model (GLM) [17], and the modulated renewal model with λ = f(x · k) and q a right-skewed, non-exponential renewal density [13, 15]. (Popular choices for q include gamma, inverse Gaussian, and log-normal distributions.)

The conditional probability distribution over spike times {t_i} given the external variables X can be derived using the time-rescaling transformation. In rescaled time, the CR model specifies a probability over the ISIs,

P({u_i} | X) = ∏_{i=1}^n q(u_i).  (5)

A change-of-variables t_i = Λ^{−1}_{t_{i−1}}(u_i) + t_{i−1} (eq. 3) provides the conditional probability over spike times:

P({t_i} | X) = ∏_{i=1}^n λ(t_i) q(Λ_{t_{i−1}}(t_i)).  (6)
This probability, considered as a function of the parameters defining λ(t) and q(u), is the likelihood function for the CR model, as derived in [13].³ The log-likelihood function can be approximated in discrete time, with bin-size dt taken small enough to ensure ≤ 1 spike per bin:

log P({t_i} | X) = ∑_{i=1}^n log λ(t_i) + ∑_{i=1}^n log q( ∑_{j=t_{i−1}+1}^{t_i} λ(j) dt ),  (7)

where t_i indicates the bin for the ith spike. This approximation becomes exact in the limit as dt → 0.
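Equation (7) translates directly into code. The sketch below is our own (all names are ours), assumes a unit-mean gamma renewal density for q, and treats the interval before the first spike naively rather than discarding it:

```python
import math
import numpy as np

def gamma_logpdf(u, shape, scale):
    """Log of the gamma density with the given shape and scale."""
    return ((shape - 1.0) * math.log(u) - u / scale
            - shape * math.log(scale) - math.lgamma(shape))

def cr_log_likelihood(lam, dt, spike_bins, shape):
    """Discrete-time CR model log-likelihood, eq. (7).

    lam: per-bin intensity lambda(t); spike_bins: sorted indices of spike
    bins; shape: shape of a unit-mean gamma renewal density q.
    """
    ll = float(np.sum(np.log(lam[spike_bins])))
    prev = 0
    for i in spike_bins:
        u = float(np.sum(lam[prev:i + 1]) * dt)   # rescaled ISI, eq. (2)
        ll += gamma_logpdf(u, shape, 1.0 / shape)
        prev = i + 1
    return ll

lam = np.full(1000, 10.0)                  # constant 10 Hz intensity, dt = 1 ms
ll = cr_log_likelihood(lam, 0.001, [100, 300, 600], shape=1.0)
```

For shape = 1 the gamma reduces to q(u) = e^{−u}, and the expression collapses to the familiar conditionally Poisson point-process log-likelihood: the sum of log intensities at the spikes minus the integrated intensity.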
¹ Note that Λ_{t*}(t) is invertible for all spike times t_i, since necessarily t_i ∈ {t : λ(t) > 0}.
² A note on terminology: we follow [13] in defining λ(t) to be the instantaneous rate for an inhomogeneous renewal process, which is not identical to the hazard function H(t) = P(t_i ∈ [t, t + Δ] | t_i > t_{i−1})/Δ, also known as the conditional intensity [1]. We will use "base intensity" for λ(t) to avoid this confusion.
³ For simplicity, we have ignored the intervals (0, t_0], the time to the first spike, and (t_n, T], the time after the last spike, which are simple to compute but contribute only a small fraction to the total likelihood.
[Figure 2 graphic: stimulus filters, renewal densities, spike rasters, KS plots, and cross-validation scores (bits/s) for models (a), (b), (c).]
Figure 2: Time-rescaling and likelihood-based goodness-of-fit tests with simulated data. Left: Stimulus filter and renewal density for three point process models (all with nonlinearity f(x) = e^x and history-independent intensity). "True" spikes were generated from (a), a conditional renewal model with a gamma renewal density (α = 10). These responses were fit by: (b), a Poisson model with the correct stimulus filter; and (c), a modulated renewal process with incorrect stimulus filter (set to the negative of the correct filter), and renewal density estimated nonparametrically from the transformed intervals (eq. 10). Middle: Repeated responses from all three models to a novel 1-s stimulus, showing that spike rate is well predicted by (b) but not by (c). Right: KS plots (above) show time-rescaling based goodness-of-fit. Here, (b) fails badly, while (c) passes easily, with cdf entirely within the 99% confidence region (gray lines). Likelihood-based cross-validation tests (below) show that (b) preserves roughly 1/3 as much information about spike times as (a), while (c) carries slightly less information than a homogeneous Poisson process with the correct spike rate.
3 Convexity condition for inhomogeneous renewal models
We now turn to the tractability of estimating the CR model parameters from data. Here, we present
an extension to the results of [21], which proved a convexity condition for maximum-likelihood
estimation of a conditionally Poisson encoding model (i.e., generalized linear model). Specifically,
[21] showed that the log-likelihood for the filter parameters θ = {k, h} is concave (i.e., has no non-global local maxima) if the nonlinear function f is both convex and log-concave (meaning log f is concave). Under these conditions⁴, minimizing the negative log-likelihood is a convex optimization
problem.
By extension, we can ask whether the estimation problem remains convex when we relax the Poisson
assumption and allow for a non-exponential renewal density q. Let us write the log-likelihood function for the linear filter parameters θ = [kᵀ, hᵀ]ᵀ as

L_{D,q}(θ) = ∑_{i=1}^n log f(X(t_i) · θ) + ∑_{i=1}^n log q( ∫_{t_{i−1}}^{t_i} f(X(t) · θ) dt ),  (8)

where X(t) = [x_tᵀ, y_tᵀ]ᵀ is a vector containing the relevant stimulus and spike history at time t, and D = {{t_i}, {X(t)}} represents the full set of observed data. The condition we obtain is:
Theorem 1. The CR model log-likelihood L_{D,q}(θ) is concave in the filter parameters θ, for any observed data D, if: (1) the nonlinearity f is convex and log-concave; and (2) the renewal density q is log-concave and non-increasing on (0, ∞).
Proof. It suffices to show that both terms in equation (8) are concave in θ, since the sum of two concave functions is concave. The first term is obviously concave, since log f is concave. For the second term, note that ∫ f(X · θ) is a convex function of θ, since it is the integral of a convex function over a convex region. Then log q[∫ f(X · θ)] is a concave, non-increasing function of a convex function, since log q is concave and non-increasing; such a function is necessarily concave.⁵ The second term is therefore also a sum of concave functions, and thus concave.

⁴ Allowed nonlinearities must grow monotonically, at least linearly and at most exponentially: e.g., exp(x); log(1 + exp(x)); ⌊x⌋₊ᵖ, p ≥ 1.
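Theorem 1 can be probed numerically: with f = exp and the exponential renewal density q(u) = e^{−u} (both conditions hold), the log-likelihood of eq. (8) must be concave along any line through parameter space, so its discrete second differences along such a line are all non-positive. The check below is our own illustration on random covariates, with the boundary intervals handled naively:

```python
import numpy as np

rng = np.random.default_rng(2)
T, d, dt = 200, 3, 1.0
X = rng.normal(size=(T, d))                        # covariate vector per time bin
spike_bins = np.sort(rng.choice(T, size=20, replace=False))

def loglik(theta):
    # eq. (8) with f = exp and q(u) = e^{-u}: the second sum reduces to
    # minus the total integrated intensity, -sum_t exp(X(t) . theta) * dt.
    lam = np.exp(X @ theta)
    return float((X[spike_bins] @ theta).sum() - lam.sum() * dt)

theta0 = 0.1 * rng.normal(size=d)
direction = rng.normal(size=d)
grid = np.linspace(-1.0, 1.0, 41)
vals = np.array([loglik(theta0 + s * direction) for s in grid])
curv = np.diff(vals, 2)        # discrete second derivative along the line
# Concavity (Theorem 1) implies every entry of curv is <= 0.
```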
Maximum likelihood filter estimation under the CR model is therefore a convex problem so long as
the renewal density q is both log-concave and non-increasing. This restriction rules out a variety of
renewal densities that are commonly employed to model neural data [13, 14, 15]. Specifically, the
log-normal and inverse-Gaussian densities both have increasing regimes on a subset of [0, ∞), as does the gamma density q(u) ∝ u^{α−1} e^{−βu} when α > 1. For α < 1, gamma fails to be log-concave, meaning that the only gamma density satisfying both conditions is the exponential (α = 1).

There are nevertheless many densities (besides the exponential) for which these conditions are met, including:

• q(u) ∝ e^{−uᵖ/σ²}, for any p ≥ 1
• q(u) = uniform density
• q(u) ∝ ⌊f(u)⌋₊, or q(u) ∝ e^{f(u)}, for any concave, decreasing function f(u)
Unfortunately, no density in this family can exhibit refractory effects, since this would require a q
that is initially zero and then rises. From an estimation standpoint, this suggests that it is easier to
incorporate certain well-known spike-history dependencies using recurrent spike-history filters (i.e.,
using the GLM framework) than via a non-Poisson renewal density.
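The two conditions of Theorem 1 are easy to check numerically for a candidate density. The helper below is our own illustration (names ours); it confirms that the exponential passes, while a unit-mean gamma with shape α = 10 fails the non-increasing condition (it rises on (0, (α−1)/α)):

```python
import numpy as np

def satisfies_theorem_conditions(log_q, u):
    """Grid check of Theorem 1's conditions on a renewal density q.

    log_q: vectorized log-density (normalization constants don't matter);
    u: an increasing, evenly spaced grid. Checks that log q is
    non-increasing and that its second differences are <= 0 (log-concavity).
    """
    v = log_q(u)
    non_increasing = bool(np.all(np.diff(v) <= 1e-9))
    log_concave = bool(np.all(np.diff(v, 2) <= 1e-9))
    return non_increasing and log_concave

u = np.linspace(0.01, 5.0, 500)
log_q_exp = lambda u: -u                            # exponential: q(u) = e^{-u}
a = 10.0                                            # gamma shape, mean fixed to 1
log_q_gamma = lambda u: (a - 1.0) * np.log(u) - a * u
```

Note that the gamma fails only the monotonicity condition here: it is log-concave for α > 1, but its rising segment near zero is exactly what Theorem 1 rules out.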
An important corollary of this convexity result is that the decoding problem of estimating stimuli
{xt } from a set of observed spike times {ti } using the maximum of the posterior (i.e., computing
the MAP estimate) is also a convex problem under the same restrictions on f and q, so long as the
prior over stimuli is log-concave.
4 Nonparametric Estimation of the CR model
In practice, we may wish to optimize both the filter parameters governing the base intensity λ(t) and
the renewal density q, which is not in general a convex problem. We may proceed, however, bearing
in mind that gradient ascent may not achieve the global maximum of the likelihood function.
Here we formulate a slightly different interval-rescaling function that allows us to nonparametrically estimate renewal properties using a density on the unit interval. Let us define the mapping

v_i = 1 − exp(−Λ_{t_{i−1}}(t_i)),  (9)

which is the cumulative density function (cdf) for the intervals from a conditionally Poisson process with cumulative intensity Λ(t). This function maps spikes from a conditionally Poisson process to
i.i.d. samples from U[0, 1]. Any discrepancy between the distribution of {v_i} and the uniform distribution represents failures of a Poisson model to correctly describe the renewal statistics. (This is the central idea underlying the time-rescaling based goodness-of-fit test, which we will discuss shortly.)
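Computed from a discretized intensity, this transform is a few lines of code. The sketch below is our own (names ours); the constant-rate Bernoulli simulation merely stands in for a correctly specified conditionally Poisson model, whose rescaled intervals should look uniform:

```python
import numpy as np

def rescale_to_unit_interval(lam, dt, spike_bins):
    """Map spike bins to v_i = 1 - exp(-Lambda_{t_{i-1}}(t_i)), eq. (9).

    Under a correctly specified conditionally Poisson model, the v_i
    should be i.i.d. uniform on [0, 1].
    """
    cum = np.cumsum(lam) * dt                        # Lambda(t) at bin ends
    at_spikes = cum[np.asarray(spike_bins)]
    u = np.diff(np.concatenate([[0.0], at_spikes]))  # integrated intensity per ISI
    return 1.0 - np.exp(-u)

# Simulate a constant-rate (20 Hz) Bernoulli approximation to a Poisson
# process and compute the rescaled intervals.
rng = np.random.default_rng(1)
dt, n_bins, rate = 0.001, 100_000, 20.0
lam = np.full(n_bins, rate)
spike_bins = np.nonzero(rng.random(n_bins) < rate * dt)[0]
v = rescale_to_unit_interval(lam, dt, spike_bins)
```

A systematic departure of the {v_i} from U[0, 1] (e.g., in a KS test) then signals non-Poisson renewal structure.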
We propose to estimate a density φ(v) for the rescaled intervals {v_i} using cubic splines (piecewise 3rd-order polynomials with continuous 2nd derivatives), with evenly spaced knots on the interval [0, 1].⁶ This allows us to rewrite the likelihood function (6) as the product of two identifiable terms:

P({t_i} | X) = ( ∏_{i=1}^n λ(t_i) e^{−Λ_0(T)} ) ( ∏_{i=1}^n φ(v_i) ),  (10)

where the first term is the likelihood under the conditional Poisson model [17], and the second is the probability of the rescaled intervals {v_i} under the density φ(v). This formulation allows us to separate the (real-time) contributions of the intensity function under the assumption of conditionally Poisson spiking, from the (rescaled-time) contributions of a non-Poisson renewal density. (For a conditionally Poisson process, φ is the uniform density on [0, 1], and makes zero contribution to the total log-likelihood.)

⁵ To see this, note that if g is concave (g″ ≤ 0) and non-increasing (g′ ≤ 0), and f is convex (f″ ≥ 0), then (d²/dx²) g(f(x)) = g″(f(x)) f′(x)² + g′(f(x)) f″(x) ≤ 0, implying g(f(x)) is concave.
⁶ ML estimation of the spline parameters is a convex problem with one linear equality constraint ∫_0^1 φ(v) dv = 1 and a family of inequality constraints φ(v) ≥ 0, ∀v, which can be optimized efficiently.

[Figure 3 graphic: scatter plots of successive rescaled ISIs on [0, 1] × [0, 1].]
Figure 3: Left: pairwise dependencies between successive rescaled ISIs from model ("a", see fig. 2) when fit by a non-Poisson renewal model "c". Center: fitted model of the conditional distribution over rescaled ISIs given the previous ISI, discretized into 7 intervals for the previous ISI. Right: rescaling the intervals using the cdf obtained from the conditional distribution of (z_{i+1} | z_i) produces successive ISIs which are much more independent. This transformation adds roughly 3 bits/s to the likelihood-based cross-validation performance of model (c).
We fit this model to simulated data (fig. 2), and to real neural data using alternating coordinate ascent
of the filter parameters and the renewal density parameters (fig. 4). In fig. 2, we plot the renewal
distribution q(u) (red trace), which can be obtained from the estimated φ(v) via the transformation q(u) = φ(1 − e^{−u}) e^{−u}.
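As a quick sanity check on this change of variables (our own illustration, not from the paper): a uniform φ on [0, 1] maps back to the unit-mean exponential, i.e. conditionally Poisson spiking:

```python
import numpy as np

def q_from_phi(phi, u):
    """Renewal density implied by a density phi on the rescaled variable v,
    via the change of variables q(u) = phi(1 - e^{-u}) * e^{-u}."""
    return phi(1.0 - np.exp(-u)) * np.exp(-u)

u = np.linspace(0.0, 5.0, 101)
phi_uniform = lambda v: np.ones_like(v)     # uniform density on [0, 1]
q = q_from_phi(phi_uniform, u)
# q equals e^{-u} everywhere on the grid.
```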
4.1 Incorporating dependencies between intervals

The cdf defined by the CR model, Φ(v) = ∫_0^v φ(s) ds, maps the transformed ISIs {v_i} so that the marginal distribution over z_i = Φ(v_i) is uniform on [0, 1]. However, there is no guarantee that the resulting random variables are independent, as assumed in the likelihood (eq. 10). We can examine dependencies between successive ISIs by making a scatter plot of pairs (z_i, z_{i+1}) (see fig. 3). Departures from independence can then be modeled by introducing a nonparametric estimator for the conditional distribution ψ(z_i | z_{i−1}). In this case, the likelihood becomes

P({t_i} | X) = ( ∏_{i=1}^n λ(t_i) e^{−Λ_0(T)} ) ( ∏_{i=1}^n φ(v_i) ) ( ∏_{i=2}^n ψ(z_i | z_{i−1}) ),  (11)

which now has three terms, corresponding (respectively) to the effects of the base intensity, non-conditionally-Poisson renewal properties, and dependencies between successive intervals.
5 The time-rescaling goodness-of-fit test
If a particular point-process model provides an accurate description of a neuron's response, then the cumulative intensity function defines a mapping from real time to rescaled time such that the
rescaled interspike intervals have a common distribution. Time-rescaling can therefore be used as a
tool for assessing the goodness-of-fit of a point process model [1, 22]. Specifically, after remapping
a set of observed spike times according to the (model-defined) cumulative intensity, one can perform
a distributional test (e.g., Kolmogorov-Smirnov, or KS test) to assess whether the rescaled intervals
have the expected distribution⁷. For example, for a conditionally Poisson model, the KS test can be applied to the rescaled intervals {v_i} (eq. 9) to assess their fit to a uniform distribution.

⁷ Although we have defined the time-rescaling transform using the base intensity instead of the conditional intensity as in [1], the resulting tests are equivalent provided the K-S test is applied using the appropriate distribution.
This approach to model validation has grown in popularity in recent years [14, 23], and has in some
instances been used as the only metric for comparing models. We wish to point out that time-rescaling based tests are sensitive to one kind of error (i.e., errors in modeling rescaled ISIs), but
may be insensitive to other kinds of model error (i.e., errors in modeling the stimulus-dependent
spike rate). Inspection of the CR model likelihood (eq. 10), makes it clear that time-rescaling based
goodness-of-fit tests are sensitive only to accuracy with which ?(v) (or equivalently, q(u)) models
the rescaled intervals. The test can in fact be independent of the accuracy with which the model
describes the transformation from stimulus to spikes, a point that we illustrate with an (admittedly
contrived) example in fig. 2.
For this example, spikes were generated from a "true" model (denoted "a"), a CR model with a biphasic stimulus filter and a gamma renewal density (α = 10). Responses from this model were fit by two sub-optimal approximate models: "b", a Poisson (LNP) model, which was specified to have the correct stimulus filter; and "c", a CR model in which the stimulus filter was mis-specified (set to the negative of the true filter), and a renewal density φ(v) was estimated non-parametrically from the rescaled intervals {v_i} (rescaled under the intensity defined by this model).
Although the time-varying spike-rate predictions of model (c) were badly mis-matched to those of
model (a) (fig. 2, middle), a KS-plot (upper right) shows that (c) exhibits near perfect goodness-of-fit
on a time-rescaling test, which the Poisson model (b) fails badly. We cross-validated these models
by computing the log-likelihood of novel data, which provides a measure of predictive information
about novel spike trains in units of bits/s [24, 18]. Using this measure, the "true" model (a) provides
approximately 24 bits/s about the spike response to a novel stimulus. The Poisson model (b) captures
only 8 bits/s, but is still much more accurate than the mis-specified renewal model (c), for which
the information is slightly negative (indicating that performance is slightly worse than that of a
homogeneous Poisson process with the correct rate).
Fig. 3 shows that model (c) can be improved by modeling the dependencies between successive
rescaled interspike intervals. We constructed a spline-based non-parametric estimate of the density
ψ(z_{i+1} | z_i), where z_i = Φ(v_i). (We discretized z_i into 7 bins, based on visual inspection of the pairwise dependency structure, and fit a cubic spline with 10 evenly spaced knots on [0, 1] to the density
within each bin). Rescaling these intervals using the cdf of the augmented model yields intervals
that are both uniform on [0, 1] and approximately independent (fig. 3, right; independence for nonsuccessive intervals not shown). The augmented model raises the cross-validation score of model (c)
to 1 bit/s, meaning that by incorporating dependencies between intervals, the model carries slightly
more predictive information than a homogeneous Poisson model, despite the mis-specified stimulus filter. However, even though it passes time-rescaling tests of both marginal distribution and independence, this model still carries less information about spike times than the inhomogeneous Poisson
model (b).
6 Application to neural data
Figure 4 shows several specific cases of the CR model fit to spiking data from an ON parasol cell in
primate retina, which was visually stimulated with binary spatio-temporal white noise (i.e., flickering checkerboard, [18]). We fit parameters for the CR model with and without spike-history filters,
and with and without a non-Poisson renewal density (estimated non-parametrically as described
above).
As expected, a non-parametric renewal density allows for remapping of ISIs to the correct (uniform)
marginal distribution in rescaled time (fig. 4, left), and leads to near-perfect scores on the timerescaling goodness-of-fit test (middle). Even when incorporating spike-history filters, the model
with conditionally Poisson spiking (red) fails the time-rescaling test at the 95% level, though not so
badly as the inhomogeneous Poisson model (blue). However, the conditional Poisson model with
spike-history filter (red) outperforms the non-parametric renewal model without spike-history filter
(dark gray) on likelihood-based cross-validation, carrying 14% more predictive information. For
this neuron, incorporating non-Poisson renewal properties into a model with spike history dependent
intensity (light gray) provides only a modest (<1%) increase in cross-validation performance. Thus,
in addition to being more tractable for estimation, it appears that the generalized linear modeling
framework captures spike-train dependencies more accurately than a non-Poisson renewal process
(at least for this neuron). We are in the process of applying this analysis to more data.
[Figure 4 graphic: rescaled-interval distributions, KS statistics, and cross-validated log-likelihood (bits/s) for models (a)–(d): stimulus-only, conditional Poisson, and conditional renewal variants with and without spike-history filters.]
Figure 4: Evaluation of four specific cases of the conditional renewal model, fit to spike responses from a retinal ganglion cell stimulated with a time-varying white noise stimulus. Left: marginal distribution over the interspike intervals {z_i}, rescaled according to their cdf defined under four different models: (a) Inhomogeneous Poisson (i.e., LNP) model, without spike-history filter. (b) Conditional renewal model without spike-history filter, with non-parametrically estimated renewal density φ. (c) Conditional Poisson model, with spike-history filter (GLM). (d) Conditional renewal model with spike-history filter and non-parametrically estimated renewal density. A uniform distribution indicates good model fit under the time-rescaling test. Middle: The difference between the empirical cdf of the rescaled intervals (under all four models) and their quantiles. As expected, (a) fares poorly, (c) performs better but slightly exceeds the 95% confidence interval (black lines), and (b) and (d) exhibit near-perfect time-rescaling properties. Right: Likelihood-based cross-validation performance. Adding a non-parametric renewal density adds 4% to the Poisson model performance, but <1% to the GLM performance. Overall, a spike-history filter improves cross-validation performance more than the use of a non-Poisson renewal process.
7 Discussion
We have connected two basic approaches for incorporating spike-history effects into neural encoding models: (1) non-Poisson renewal processes; and (2) conditionally Poisson processes with an
intensity that depends on spike train history. We have shown that both kinds of effects can be regarded as special cases of a conditional renewal (CR) process model, and have formulated the model
likelihood in a manner that separates the contributions from these two kinds of mechanisms.
Additionally, we have derived a condition on the CR model renewal density under which the likelihood function over filter parameters is log-concave, guaranteeing that ML estimation of filters (and
MAP stimulus decoding) is a convex optimization problem.
We have shown that incorporating a non-parametric estimate of the CR model renewal density ensures near-perfect performance on the time-rescaling goodness-of-fit test, even when the model itself
has little predictive accuracy (e.g., due to a poor model of the base intensity). Thus, we would argue
that K-S tests based on the time-rescaled interspike intervals should not be used in isolation, but
rather in conjunction with other tools for model comparison (e.g., cross-validated log-likelihood).
Failure under the time-rescaling test indicates that model performance may be improved by incorporating a non-Poisson renewal density, which as we have shown, may be estimated directly from
rescaled intervals.
Finally, we have applied the CR model to neural data, and shown that it can capture spike-history
dependencies in both real and rescaled time. In future work, we will examine larger datasets and
explore whether rescaled-time or real-time models provide more accurate descriptions of the dependencies in spike trains from a wider variety of neural datasets.
Acknowledgments
Thanks to E. J. Chichilnisky, A. M. Litke, A. Sher and J. Shlens for retinal data, and to J. Shlens and
L. Paninski for helpful discussions.
References
[1] E. Brown, R. Barbieri, V. Ventura, R. Kass, and L. Frank. The time-rescaling theorem and its application to neural spike train data analysis. Neural Computation, 14:325–346, 2002.
[2] E. J. Chichilnisky. A simple white noise analysis of neuronal light responses. Network: Computation in Neural Systems, 12:199–213, 2001.
[3] E. P. Simoncelli, L. Paninski, J. W. Pillow, and O. Schwartz. Characterization of neural responses with stochastic stimuli. In M. Gazzaniga, editor, The Cognitive Neurosciences, III, chapter 23, pages 327–338. MIT Press, 2004.
[4] M. Berry and M. Meister. Refractoriness and neural precision. Journal of Neuroscience, 18:2200–2211, 1998.
[5] D. S. Reich, F. Mechler, K. P. Purpura, and J. D. Victor. Interspike intervals, receptive fields, and information encoding in primary visual cortex. J. Neurosci., 20(5):1964–1974, 2000.
[6] N. Brenner, W. Bialek, and R. de Ruyter van Steveninck. Adaptive rescaling optimizes information transmission. Neuron, 26:695–702, 2000.
[7] W. Gerstner. Population dynamics of spiking neurons: Fast transients, asynchronous states, and locking. Neural Computation, 12(1):43–89, 2000.
[8] P. Reinagel and R. C. Reid. Temporal coding of visual information in the thalamus. Journal of Neuroscience, 20:5392–5400, 2000.
[9] J. W. Pillow, L. Paninski, V. J. Uzzell, E. P. Simoncelli, and E. J. Chichilnisky. Prediction and decoding of retinal ganglion cell responses with a probabilistic spiking model. The Journal of Neuroscience, 25:11003–11013, 2005.
[10] M. A. Montemurro, S. Panzeri, M. Maravall, A. Alenda, M. R. Bale, M. Brambilla, and R. S. Petersen. Role of precise spike timing in coding of dynamic vibrissa stimuli in somatosensory thalamus. Journal of Neurophysiology, 98(4):1871, 2007.
[11] A. L. Jacobs, G. Fridman, R. M. Douglas, N. M. Alam, P. Latham, et al. Ruling out and ruling in neural codes. Proceedings of the National Academy of Sciences, 106(14):5936, 2009.
[12] M. Berman. Inhomogeneous and modulated gamma processes. Biometrika, 68(1):143–152, 1981.
[13] R. Barbieri, M. C. Quirk, L. M. Frank, M. A. Wilson, and E. N. Brown. Construction and analysis of non-Poisson stimulus-response models of neural spiking activity. Journal of Neuroscience Methods, 105(1):25–37, 2001.
[14] E. Rossoni and J. Feng. A nonparametric approach to extract information from interspike interval data. Journal of Neuroscience Methods, 150(1):30–40, 2006.
[15] K. Koepsell and F. T. Sommer. Information transmission in oscillatory neural activity. Biological Cybernetics, 99(4):403–416, 2008.
[16] R. E. Kass and V. Ventura. A spike-train probability model. Neural Computation, 13(8):1713–1720, 2001.
[17] W. Truccolo, U. T. Eden, M. R. Fellows, J. P. Donoghue, and E. N. Brown. A point process framework for relating neural spiking activity to spiking history, neural ensemble and extrinsic covariate effects. J. Neurophysiol., 93(2):1074–1089, 2005.
[18] J. W. Pillow, J. Shlens, L. Paninski, A. Sher, A. M. Litke, E. J. Chichilnisky, and E. P. Simoncelli. Spatio-temporal correlations and visual signaling in a complete neuronal population. Nature, 454:995–999, 2008.
[19] S. Gerwinn, J. H. Macke, M. Seeger, and M. Bethge. Bayesian inference for spiking neuron models with a sparsity prior. Advances in Neural Information Processing Systems, 2008.
[20] I. H. Stevenson, J. M. Rebesco, L. E. Miller, and K. P. Körding. Inferring functional connections between neurons. Current Opinion in Neurobiology, 18(6):582–588, 2008.
[21] L. Paninski. Maximum likelihood estimation of cascade point-process neural encoding models. Network: Computation in Neural Systems, 15:243–262, 2004.
[22] J. W. Pillow. Likelihood-based approaches to modeling the neural code. In K. Doya, S. Ishii, A. Pouget, and R. P. Rao, editors, Bayesian Brain: Probabilistic Approaches to Neural Coding, pages 53–70. MIT Press, 2007.
[23] T. P. Coleman and S. Sarma. Using convex optimization for nonparametric statistical analysis of point processes. In IEEE International Symposium on Information Theory (ISIT 2007), pages 1476–1480, 2007.
[24] L. Paninski, M. Fellows, S. Shoham, N. Hatsopoulos, and J. Donoghue. Superlinear population encoding of dynamic hand trajectory in primary motor cortex. J. Neurosci., 24:8551–8561, 2004.
Semi-supervised Regression using Hessian Energy
with an Application to Semi-supervised
Dimensionality Reduction

Kwang In Kim¹, Florian Steinke²,³, and Matthias Hein¹
¹ Department of Computer Science, Saarland University, Saarbrücken, Germany
² Siemens AG Corporate Technology, Munich, Germany
³ MPI for Biological Cybernetics, Germany
{kimki,hein}@cs.uni-sb.de, [email protected]

Abstract
Semi-supervised regression based on the graph Laplacian suffers from the fact
that the solution is biased towards a constant and the lack of extrapolating power.
Based on these observations, we propose to use the second-order Hessian energy
for semi-supervised regression which overcomes both these problems. If the data
lies on or close to a low-dimensional submanifold in feature space, the Hessian
energy prefers functions whose values vary linearly with respect to geodesic distance. We first derive the Hessian energy for smooth manifolds and continue to
give a stable estimation procedure for the common case where only samples of
the underlying manifold are given. The preference of "linear" functions on manifolds renders the Hessian energy particularly suited for the task of semi-supervised
dimensionality reduction, where the goal is to find a user-defined embedding
function given some labeled points which varies smoothly (and ideally linearly)
along the manifold. The experimental results suggest superior performance of our
method compared with semi-supervised regression using Laplacian regularization
or standard supervised regression techniques applied to this task.
1 Introduction
Central to semi-supervised learning is the question how unlabeled data can help in either classification or regression. A large class of methods for semi-supervised learning is based on the manifold
assumption, that is, the data points do not fill the whole feature space but they are concentrated
around a low-dimensional submanifold. Under this assumption unlabeled data points can be used
to build adaptive regularization functionals which penalize variation of the regression function only
along the underlying manifold.
One of the main goals of this paper is to propose an appropriate regularization functional on a manifold, the Hessian energy, and show that it has favourable properties for semi-supervised regression
compared to the well known Laplacian regularization [2, 12]. Opposite to the Laplacian regularizer,
the Hessian energy allows functions that extrapolate, i.e. functions whose values are not limited to
the range of the training outputs. Particularly if only few labeled points are available, we show that
this extrapolation capability leads to significant improvements. The second property of the proposed
Hessian energy is that it favors functions which vary linearly along the manifold, so-called geodesic
functions defined later. By linearity we mean that the output values of the functions change linearly
along geodesics in the input manifold. This property makes it particularly useful as a tool for semi-supervised dimensionality reduction [13], where the task is to construct user-defined embeddings
based on a given subset of labels. These user-guided embeddings are supposed to vary smoothly or
even linearly along the manifold, where the latter case corresponds to a setting where the user tries to
recover a low-distortion parameterization of the manifold. Moreover, due to user defined labels the
interpretability of the resulting parameterization is significantly improved over unsupervised methods like Laplacian [1] or Hessian [3] eigenmaps. The proposed Hessian energy is motivated by the
recently proposed Eells energy for mappings between manifolds [11], which contains as a special
case the regularization of real-valued functions on a manifold. In flavour, it is also quite similar to
the operator constructed in Hessian eigenmaps [3]. However, we will show that their operator due
to problems in the estimation of the Hessian, leads to useless results when used as regularizer for regression. On the contrary, our novel estimation procedure turns out to be more stable for regression and as a side effect it also leads to a better estimation of the eigenvectors used in Hessian eigenmaps.
We present experimental results on several datasets, which show that our method for semi-supervised
regression is often superior to other semi-supervised and supervised regression techniques.
2 Regression on manifolds
Our approach for regression is based on regularized empirical risk minimization. First, we will
discuss the problem and the regularizer in the ideal case where we know the manifold exactly,
corresponding to the case where we have access to an unlimited number of unlabeled data. In the
following we denote by $M$ the $m$-dimensional data-submanifold in $\mathbb{R}^d$. The supervised regression problem for a training set of $l$ points $(X_i, Y_i)_{i=1}^{l}$ can then be formulated as
$$\arg\min_{f \in C^\infty(M)}\; \frac{1}{l} \sum_{i=1}^{l} L(Y_i, f(X_i)) + \lambda\, S(f),$$
where $C^\infty(M)$ is the set of smooth functions on $M$, $L : \mathbb{R} \times \mathbb{R} \to \mathbb{R}$ is the loss function and $S : C^\infty(M) \to \mathbb{R}$ is the regularization functional. For simplicity we use the squared loss $L(y, f(x)) = (y - f(x))^2$, but the framework can be easily extended to other convex loss functions.
Naturally, we do not know the manifold M the data is lying on. However, we have unlabeled data
which can be used to estimate it, or more precisely we can use the unlabeled data to build an estimate $\hat{S}(f)$ of the true regularizer $S(f)$. The proper estimation of $S(f)$ will be the topic of the next section.
For the moment we just want to discuss regularization functionals in the ideal case, where we know
the manifold. However, we would like to stress already here that for our framework to work it does
not matter if the data lies on or close to a low-dimensional manifold. Even the dimension can change
from point to point. The only assumption we make is that the data generating process does not fill
the whole space but is concentrated on a low-dimensional structure.
Regularization on manifolds. Our main goal is to construct a regularization functional on manifolds, which is particularly suited for semi-supervised regression and semi-supervised dimensionality reduction. We follow here the framework of [11] who discuss regularization of mappings between
manifolds, where we are interested in the special case of real-valued output. They propose to use the
so called Eells energy $S_{Eells}(f)$, which can be written for real-valued functions $f : M \to \mathbb{R}$ as
$$S_{Eells}(f) = \int_M \|\nabla_a \nabla_b f\|^2_{T_x^*M \otimes T_x^*M}\; dV(x),$$
where $\nabla_a \nabla_b f$ is the second covariant derivative of $f$ and $dV(x)$ is the natural volume element, see
[7]. Note, that the energy is by definition independent of the coordinate representation and depends
only on the properties of M . For details we refer to [11]. This energy functional looks quite abstract.
However, in a special coordinate system on M , the so called normal coordinates, one can evaluate
it quite easily. In sloppy terms, normal coordinates at a given point p are coordinates on M such
that the manifold looks as Euclidean as possible (up to second order) around p. Thus in normal
coordinates $x^r$ centered at $p$,
$$\nabla_a \nabla_b f = \sum_{r,s=1}^{m} \left.\frac{\partial^2 f}{\partial x^r \partial x^s}\right|_p dx^r_a \otimes dx^s_b \quad\Longrightarrow\quad \|\nabla_a \nabla_b f\|^2_{T_p^*M \otimes T_p^*M} = \sum_{r,s=1}^{m} \left( \left.\frac{\partial^2 f}{\partial x^r \partial x^s}\right|_p \right)^2, \qquad (1)$$
so that at $p$ the norm of the second covariant derivative is just the Frobenius norm of the Hessian of $f$ in normal coordinates. Therefore we call the resulting functional the Hessian regularizer $S_{Hess}(f)$.
[Figure 1: plot of the estimated output for Laplacian and Hessian regularization against the geodesic distance along the spiral; axis ticks omitted.]
Figure 1: Difference between semi-supervised regression using
Laplacian and Hessian regularization for fitting two points on the
one-dimensional spiral. The Laplacian regularization has always
a bias towards the constant function (for a non-zero regularization
parameter it will not fit the data exactly) and the extrapolation beyond data points to the boundary of the domain is always constant.
The non-linearity of the fitted function between the data points arises
due to the non-uniform sampling of the spiral. On the contrary the
Hessian regularization fits the data perfectly and extrapolates nicely
to unseen data, since its null space contains functions which vary
linearly with the geodesic distance.
Before we discuss the discretization, we would like to discuss some properties of this regularizer. In
particular, its difference to the regularizer $S_\Delta(f)$ using the Laplacian,
$$S_\Delta(f) = \int_M \|\nabla f\|^2\; dV(x),$$
proposed by Belkin and Niyogi [2] for semi-supervised classification and in the meantime also
adopted for semi-supervised regression [12]. While this regularizer makes sense for classification, it
is of limited use for regression. The problem is that the null space $N_S = \{ f \in C^\infty(M) \mid S(f) = 0 \}$ of $S_\Delta$, that is the set of functions which are not penalized, contains only the constant functions on $M$. The
following adaptation of a result in [4] shows that the Hessian regularizer has a richer null-space.
Proposition 1 (Eells, Lemaire [4]) A function $f : M \to \mathbb{R}$ with $f \in C^\infty(M)$ has zero second derivative, $\nabla_a \nabla_b f\big|_x = 0$ for all $x \in M$, if and only if for any geodesic $\gamma : (-\epsilon, \epsilon) \to M$ parameterized by arc length $s$, there exists a constant $c_\gamma$ depending only on $\gamma$ such that
$$\frac{\partial}{\partial s} f(\gamma(s)) = c_\gamma, \qquad -\epsilon < s < \epsilon.$$
We call functions $f$ which fulfill $\frac{\partial}{\partial s} f(\gamma(s)) = \mathrm{const.}$ geodesic functions. They correspond to linear
maps in Euclidean space and encode a constant variation with respect to the geodesic distance of
the manifold. It is however possible that apart from the trivial case f = const. no other geodesic
functions exist on M . What is the implication of these results for regression? First, the use of Laplacian regularization leads always to a bias towards the constant function and does not extrapolate
beyond data points. On the contrary, Hessian regularization is not biased towards constant functions
if geodesic functions exist and extrapolates "linearly" (if possible) beyond data points. These crucial
differences are illustrated in Figure 1 where we compare Laplacian regularization using the graph
Laplacian as in [2] to Hessian regularization as introduced in the next section for a densely sampled
spiral. Since the spiral is isometric to a subset of $\mathbb{R}$, it allows "geodesic" functions.
3 Semi-supervised regression using Hessian energy
As discussed in the last section unlabeled data provides us valuable information about the data manifold. We use this information to construct normal coordinates around each unlabeled point, which
requires the estimation of the local structure of the manifold. Subsequently, we employ the normal
coordinates to estimate the Hessian regularizer using the simple form of the second covariant derivative provided in Equation (1). It turns out that these two parts of our construction are similar to the
one done in Hessian eigenmaps [3]. However, their estimate of the regularizer has stability problems when applied to semi-supervised regression as is discussed below. In contrast, the proposed
method does not suffer from this shortcoming and leads to significantly better performance. The
solution of the semi-supervised regression problem is obtained by solving a sparse linear system. In
the following, capital letters $X_i$ correspond to sample points and $x^r$ denote normal coordinates.
Construction of local normal coordinates. The estimation of local normal coordinates can be
done using the set of k nearest neighbors (NN) Nk (Xi ) of point Xi . The cardinality k will be
chosen later on by cross-validation. In order to estimate the local tangent space $T_{X_i}M$ (seen as an $m$-dimensional affine subspace of $\mathbb{R}^d$), we perform PCA on the points in $N_k(X_i)$. The $m$ leading
eigenvectors then correspond to an orthogonal basis of TXi M . In the ideal case, where one has a
densely sampled manifold, the number of dominating eigenvalues should be equal to the dimension
m. However, for real-world datasets like images the sampling is usually not dense enough so that the
dimension of the manifold can not be detected automatically. Therefore the number of dimensions
has to be provided by the user using prior knowledge about the problem or alternatively, and this is
the way we choose in this paper, by cross-validation.
Having the exact tangent space $T_{X_i}M$ one can determine the normal coordinates $x^r$ of a point $X_j \in N_k(X_i)$ as follows. Let $\{u_r\}_{r=1}^{m}$ be the $m$ leading PCA eigenvectors, which have been normalized; then the normal coordinates $\{x^r\}_{r=1}^{m}$ of $X_j$ are given as
$$x^r(X_j) \;=\; \langle u_r,\, X_j - X_i \rangle\; \frac{d_M(X_j, X_i)}{\big( \sum_{s=1}^{m} \langle u_s,\, X_j - X_i \rangle^2 \big)^{1/2}},$$
where the first term is just the projection of the difference vector $X_j - X_i$ on the basis vector $u_r \in T_{X_i}M$ and the second factor is just a rescaling to fulfill the property of normal coordinates that the distance of a point $X_j \in M$ to the origin (corresponding to $X_i$) is equal to the geodesic distance $d_M(X_j, X_i)$ of $X_j$ to $X_i$ on $M$, that is $\|x(X_j)\|^2 = \sum_{r=1}^{m} x^r(X_j)^2 = d_M(X_j, X_i)^2$. The rescaling makes sense only if local geodesic distances can be accurately estimated. In our experiments, this was only the case for the 1D-toy dataset of Figure 1. For all other datasets we therefore use $x^r(X_j) = \langle u_r, X_j - X_i \rangle$ as normal coordinates. In [11] it is shown that this replacement yields an error of order $O(\|\nabla_a f\|^2 \kappa^2)$ in the estimation of $\|\nabla_a \nabla_b f\|^2$, where $\kappa$ is the maximal principal curvature (the curvature of $M$ with respect to the ambient space $\mathbb{R}^d$).
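The construction above can be sketched in a few lines of NumPy. This is our own illustrative code, not from the paper: it estimates the tangent basis $\{u_r\}$ by an SVD-based PCA of the k-NN neighborhood and returns the simplified coordinates $x^r(X_j) = \langle u_r, X_j - X_i \rangle$ used for the real-world datasets; the function name and setup are assumptions for illustration.

```python
import numpy as np

def normal_coordinates(X, i, neighbors, m):
    """Sketch of the local normal-coordinate construction: PCA on the
    k nearest neighbors of X[i] gives an orthonormal basis u_1..u_m of
    the estimated tangent space, and the (simplified) normal coordinates
    are the projections x^r(X_j) = <u_r, X_j - X_i>."""
    D = X[neighbors] - X[i]                 # (k, d) difference vectors
    # Leading m right-singular vectors of the centered neighborhood
    # are the m leading PCA eigenvectors (already normalized).
    _, _, Vt = np.linalg.svd(D - D.mean(axis=0), full_matrices=False)
    U = Vt[:m]                              # (m, d) tangent-space basis
    return D @ U.T                          # (k, m) normal coordinates
```

For data lying exactly on an $m$-dimensional affine subspace this projection preserves distances to $X_i$, so no geodesic rescaling is needed; on curved manifolds it incurs the quoted $O(\|\nabla_a f\|^2 \kappa^2)$ error.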
Estimation of the Hessian energy. The Hessian regularizer, the squared norm of the second covariant derivative $\|\nabla_a \nabla_b f\|^2$, corresponds to the Frobenius norm of the Hessian of $f$ in normal coordinates, see Equation (1). Thus, given normal coordinates $x^r$ at $X_i$, we would like to have an operator $H$ which, given the function values $f(X_j)$ on $N_k(X_i)$, estimates the Hessian of $f$ at $X_i$,
$$\left.\frac{\partial^2 f}{\partial x^r \partial x^s}\right|_{X_i} \approx \sum_{j=1}^{k} H^{(i)}_{rsj}\, f(X_j).$$
This can be done by fitting a second-order polynomial $p(x)$ in normal coordinates to $\{f(X_j)\}_{j=1}^{k}$,
$$p^{(i)}(x) = f(X_i) + \sum_{r=1}^{m} B_r x^r + \sum_{r=1}^{m} \sum_{s=r}^{m} A_{rs}\, x^r x^s, \qquad (2)$$
where the zeroth-order term is fixed at $f(X_i)$. In the limit as the neighborhood size tends to zero, $p^{(i)}(x)$ becomes the second-order Taylor expansion of $f$ around $X_i$, that is,
$$B_r = \left.\frac{\partial f}{\partial x^r}\right|_{X_i}, \qquad A_{rs} = \frac{1}{2} \left.\frac{\partial^2 f}{\partial x^r \partial x^s}\right|_{X_i}, \qquad (3)$$
with $A_{rs} = A_{sr}$. In order to fit the polynomial we use standard linear least squares,
$$\arg\min_{w \in \mathbb{R}^P}\; \sum_{j=1}^{k} \big( f(X_j) - f(X_i) - (\Phi w)_j \big)^2,$$
where $\Phi \in \mathbb{R}^{k \times P}$ is the design matrix with $P = m + \frac{m(m+1)}{2}$. The corresponding basis functions $\phi$ are the monomials, $\phi = [x^1, \ldots, x^m, x^1 x^1, x^1 x^2, \ldots, x^m x^m]$, of the normal coordinates (centered at $X_i$) of $X_j \in N_k(X_i)$ up to second order. The solution $w \in \mathbb{R}^P$ is $w = \Phi^{+} f$, where $f \in \mathbb{R}^k$, $f_j = f(X_j)$ with $X_j \in N_k(X_i)$, and $\Phi^{+}$ denotes the pseudo-inverse of $\Phi$.
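As a concrete illustration of this least-squares step (our own sketch with hypothetical names, not code from the paper), the design matrix can be assembled from the monomials and the coefficients of Eq. (2) recovered with the pseudo-inverse:

```python
import numpy as np

def fit_local_polynomial(coords, fvals, f_i):
    """Fit the polynomial of Eq. (2) with the zeroth-order term fixed
    at f(X_i).  coords: (k, m) normal coordinates of the neighbors,
    fvals: (k,) their function values, f_i: value at X_i.
    Returns the linear coefficients B_r and the symmetric matrix A."""
    k, m = coords.shape
    quad = [(r, s) for r in range(m) for s in range(r, m)]  # r <= s
    cols = [coords[:, r] for r in range(m)]                 # monomials x^r
    cols += [coords[:, r] * coords[:, s] for r, s in quad]  # x^r x^s
    Phi = np.column_stack(cols)              # (k, P), P = m + m(m+1)/2
    w = np.linalg.pinv(Phi) @ (fvals - f_i)  # least squares, w = Phi^+ f
    B = w[:m]
    A = np.zeros((m, m))
    for (r, s), a in zip(quad, w[m:]):
        A[r, s] = A[s, r] = a
    return B, A
```

In one dimension, fitting samples of $f(x) = c + B_1 x + A_{11} x^2$ recovers $B_1$ and $A_{11}$ exactly as long as $k \geq P$ and the sample points are distinct.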
Note that the last $\frac{m(m+1)}{2}$ components of $w$ correspond to the coefficients $A_{rs}$ of the polynomial (up to rescaling for the diagonal components) and thus with Equation (3) we obtain the desired form $H^{(i)}_{rsj}$. An estimate of the Frobenius norm of the Hessian of $f$ at $X_i$ is thus given as
$$\|\nabla_a \nabla_b f\|^2 \;\approx\; \sum_{r,s=1}^{m} \Big( \sum_{\alpha=1}^{k} H^{(i)}_{rs\alpha}\, f_\alpha \Big)^2 \;=\; \sum_{\alpha,\beta=1}^{k} f_\alpha f_\beta\, B^{(i)}_{\alpha\beta},$$
where $B^{(i)}_{\alpha\beta} = \sum_{r,s=1}^{m} H^{(i)}_{rs\alpha} H^{(i)}_{rs\beta}$ and finally the total estimated Hessian energy $\hat{S}_{Hess}(f)$ is the sum over all data points, where $n$ denotes the number of unlabeled and labeled points,
$$\hat{S}_{Hess}(f) = \sum_{i=1}^{n} \sum_{r,s=1}^{m} \Big( \left.\frac{\partial^2 f}{\partial x^r \partial x^s}\right|_{X_i} \Big)^2 = \sum_{i=1}^{n} \sum_{\alpha \in N_k(X_i)} \sum_{\beta \in N_k(X_i)} f_\alpha f_\beta\, B^{(i)}_{\alpha\beta} = \langle f, B f \rangle,$$
where $B$ is the accumulated matrix summing up all the matrices $B^{(i)}$. Note that $B$ is sparse since each point $X_i$ has only contributions from its neighbors.
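The accumulation of $B$ can be sketched as follows (our own helper, not code from the paper; the per-point operator with the $m^2$ Hessian entries as rows is an assumed input): each $B^{(i)} = H^{(i)\top} H^{(i)}$ is scattered into a global sparse matrix over the neighbor indices, and overlapping entries are summed.

```python
import numpy as np
import scipy.sparse as sp

def accumulate_B(H_list, nbr_list, n):
    """Accumulate B = sum_i B^(i) with B^(i) = H^(i)^T H^(i).
    H_list[i]: (m*m, k_i) operator mapping the k_i neighbor values of
    X_i to its estimated Hessian entries; nbr_list[i]: global indices
    of those neighbors.  Returns a sparse (n, n) CSR matrix."""
    rows, cols, vals = [], [], []
    for H, nbrs in zip(H_list, nbr_list):
        Bi = H.T @ H                       # (k_i, k_i) local block
        for a, ga in enumerate(nbrs):
            for b, gb in enumerate(nbrs):
                rows.append(ga); cols.append(gb); vals.append(Bi[a, b])
    # COO format sums duplicate (row, col) entries on conversion,
    # which performs exactly the accumulation over overlapping blocks.
    return sp.coo_matrix((vals, (rows, cols)), shape=(n, n)).tocsr()
```

A linear ("geodesic") function on a 1-D toy problem is annihilated by a second-difference operator, so its accumulated energy $\langle f, Bf \rangle$ is zero, as the theory predicts.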
Moreover, since we sum up the energy over all points, the squared norm of the Hessian is actually weighted with the local density of the points, leading to a stronger penalization of the Hessian in densely sampled regions. The same holds for the estimate $\hat{S}_\Delta(f)$ of Laplacian regularization, $\hat{S}_\Delta(f) = \sum_{i,j=1}^{n} w_{ij} (f_i - f_j)^2$, where one also sums up the contributions of all data points (the rigorous connection between $\hat{S}_\Delta(f)$ and $S_\Delta(f)$ has been established in [2, 5]).
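For a symmetric weight matrix this estimate equals $2 f^\top L f$ with the unnormalized graph Laplacian $L = D - W$; a minimal sketch (our own helper, not from the paper):

```python
import numpy as np

def laplacian_energy(W, f):
    """Graph-based Laplacian regularizer sum_{ij} w_ij (f_i - f_j)^2,
    computed as 2 f^T (D - W) f for a symmetric weight matrix W."""
    L = np.diag(W.sum(axis=1)) - W   # unnormalized graph Laplacian
    return 2.0 * f @ L @ f
```

Constant functions give zero energy, which is exactly the null-space property of the Laplacian regularizer discussed in Section 2.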
The effect of non-uniform sampling can be observed in Figure 1. There the samples of the spiral are
generated by uniform sampling of the angle, leading to a more densely sampled "interior" region,
which leads to the non-linear behavior of the function for the Laplacian regularization. For the
Hessian energy this phenomenon cannot be seen in this example, since the Hessian of a "geodesic"
function is zero everywhere and therefore it does not matter if it is weighted with the density. On
the other hand for non-geodesic functions the weighting matters also for the Hessian energy. We did
not try to enforce a weighting with respect to the uniform density. However, it would be no problem
to compensate the effects of non-uniform sampling by using a weighted form of the Hessian energy.
Final algorithm. Using the ideas of the previous paragraphs the final algorithmic scheme for
semi-supervised regression can now be immediately stated. We have to solve
$$\arg\min_{f \in \mathbb{R}^n}\; \frac{1}{l} \sum_{i=1}^{l} (Y_i - f(X_i))^2 + \lambda \langle f, Bf \rangle, \qquad (4)$$
where for notational simplicity we assume that the data is ordered such that the first $l$ points are labeled. The solution is obtained by solving the following sparse linear system,
$$(I_0 + l\lambda B) f = Y,$$
where $I_0$ is the diagonal matrix with $(I_0)_{ii} = 1$ if $i$ is labeled and zero else, and $Y_i = 0$ if $i$ is not labeled. The sparsity structure of $B$ mainly determines the complexity of solving this linear system.
However, the number of non-zero entries of $B$ is between $O(nk)$ and $O(nk^2)$ depending on how well behaved the neighborhoods are (the latter case corresponds basically to random neighbors) and thus grows linearly with the number of data points.
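Given an assembled sparse $B$, the whole fitting step is one sparse solve; the following sketch (our own function name and setup, using SciPy) solves $(I_0 + l\lambda B) f = Y$:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def semi_supervised_fit(B, Y, labeled, lam):
    """Solve the linear system (I_0 + l*lam*B) f = Y of the final
    algorithm.  B: sparse (n, n) Hessian-energy matrix, Y: length-n
    targets (zero on unlabeled points), labeled: boolean mask of
    labeled points, lam: regularization parameter."""
    l = int(labeled.sum())                 # number of labeled points
    I0 = sp.diags(labeled.astype(float))   # diagonal indicator matrix
    return spsolve((I0 + l * lam * B).tocsc(), Y)
```

On a 1-D toy problem whose B annihilates linear functions, the solution interpolates two labels exactly and extends linearly between them, mirroring the extrapolation behavior shown in Figure 1.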
Stability of estimation procedure of Hessian energy. Since we optimize the objective in Equation (4) for any possible assignment of function values f on the data points, we have to ensure that
the estimation of the Hessian is accurate for any possible function. However, the quality of the estimate of the Hessian energy depends on the quality of the local fit of $p^{(i)}$ for each data point $X_i$.
Clearly, there are function assignments where the estimation goes wrong. If k < P (P is the number
of parameters of the polynomial) p can overfit the function and if k > P then p generally underfits.
In both cases, the Hessian estimation is inaccurate. Most dangerous are the cases where the norm of
the Hessian is underestimated in particular if the function is heavily oscillating. Note that during the
estimation of local Hessian, we do not use the full second-order polynomial but fix its zeroth-order
term at the value of $f$ (i.e. $p^{(i)}(X_i) = f(X_i)$; cf. Eq. (2)). The reason for this is that underfitting is
much more likely if one fits a full second-order polynomial since the additional flexibility in fitting
the constant term always reduces the Hessian estimate. In the worst case a function which is heavily
oscillating can even have zero Hessian energy, if it allows a linear fit at each point, see Figure 3. If
such a function fits the data well we get useless regression results¹, see Fig. 3. While fixing the constant term does not completely rule out such undesired behavior, we did not observe such irregular
solutions in any experiment. In the appendix we discuss a modification of (Eq. (4)) which rules out
¹ For the full second-order polynomial even cross-validation does not rule out these irregular solutions.
[Figure panels: regression results on the spiral using the full polynomial, and the 1st, 2nd and 3rd eigenvectors for the full-polynomial and the fixed zeroth-order Hessian estimates; axis ticks omitted.]
Figure 2: Fitting two points on the spiral revisited (see Fig. 1): Left image shows the regression result f using the Hessian energy estimated
by fitting a full polynomial in normal coordinates. The Hessian energy of this heavily oscillating function is 0, since every local fit is linear (an example shown in the right image; green
curve). However, fixing the zeroth-order term
yields a high Hessian energy as desired (local fit
is shown as the red curve in the right image).
Figure 3: Sinusoid on the spiral: Left two images show the result of semi-supervised regression using the Hessian estimate of [3] and the
corresponding smallest eigenvectors of the Hessian "matrix". One observes heavy oscillations,
due to the bad estimation of the Hessian. The
right two images show the result of our method.
Note, that in particular the third eigenvector corresponding to a non-zero eigenvalue of B is
much better behaved.
for sure irregular solutions, but since it did not lead to significantly better experimental results and
requires an additional parameter to tune we do not recommend to use it.
Our estimation procedure of the Hessian has similar motivation as the one done in Hessian eigenmaps [3]. However, in their approach they do not fix the zeroth-order term. This seems to be suitable
for Hessian eigenmaps as they do not use the full Hessian, but only its m + 1-dimensional null space
(where m is the intrinsic dimension of the manifold). Apparently, this resolves the issues discussed
above so that the null space can still be well estimated also with their procedure. However, using their estimator for semi-supervised regression leads to useless results, see Fig. 3. Moreover,
we would like to note that using our estimator not only the eigenvectors of the null space but also
eigenvectors corresponding to higher eigenvalues can be well estimated, see Fig. 3.
4 Experiments
We test our semi-supervised regression method using Hessian regularization on one synthetic and
two real-world data sets. We compare with the results obtained using Laplacian-based regularization and kernel ridge regression (KRR) trained only with the labeled examples. The free parameters
for our method are the number of neighbors k for k-NN, the dimensionality of the PCA subspace,
and the regularization parameter $\lambda$, while the parameters for the Laplacian regularization-based regression are: k for k-NN, the regularization parameter, and the width of the Gaussian kernel. For
KRR we used also the Gaussian kernel with the width as free parameter. These parameters were
chosen for each method using 5-fold cross-validation on the labeled examples. For the digit and
figure datasets, the experiments were repeated with 5 different assignments of labeled examples.
Digit Dataset. In the first set of experiments, we generated 10000 random samples of artificially
generated images (size 28 × 28) of the digit 1. There are four variations in the data: translation (two
variations), rotation and line thickness. For this dataset we are doing semi-supervised dimensionality
reduction since the task is to estimate the natural parameters which were used to generate the digits.
This is done based on 50 and 100 labeled images. Each of the variation corresponds then to a separate
regression problem which we finally stick together to get an embedding into four dimensions. Note,
that this dataset is quite challenging since translation of the digit leads to huge Euclidean distances
between digits although they look visually very similar. Fig. 2 and Table 1 summarize the results.
As observed in the first row of Fig. 2, KRR (K) and Hessian (H) regularization recover well the
two parameters of line width and rotation (all other embeddings can be found in the supplementary
material). As discussed previously, the Laplacian (L) regularization tends to shrink the estimated
parameters towards a constant as it penalizes the "geodesic" functions. This results in the poor estimation of parameters, especially the line-thickness parameter.² Although KRR estimates well
the thickness parameter, it fails for the rotation parameter (cf. the second row of Fig. 2 where we
² In this figure, each parameter is normalized to lie in the unit interval while the regression was performed in the original scale. The point (0.5, 0.5) corresponds roughly to the origin in the original parameters.
Figure 2: Results on the digit 1 dataset. First row: the 2D embedding of the digits obtained by
regression for the rotation and thickness parameters with 100 labels. Second row: 21 digit images
sampled at regular intervals in the estimated parameter spaces: two reference points (inverted images) are sampled in the ground-truth parameter space and then in the corresponding estimated
embedding. Then, 19 points are sampled in the estimated parameter spaces based on linear inter-/extrapolation of the parameters. The shown image samples are the ones whose parameters are
closest to the interpolated ones. In each parameter space the interpolated points, the corresponding
closest data points and the reference points are marked with red dots, blue circles and cyan circles.
Table 1: Results on digits: mean squared error (standard deviation) (both in units 10^-3).

                     50 labeled points                                       100 labeled points
    h-trans.     v-trans.     rotation      thickness      h-trans.     v-trans.     rotation      thickness
K   0.78(0.13)   0.85(0.14)   45.49(7.20)   0.02(0.01)     0.39(0.10)   0.48(0.08)   26.02(2.98)   0.01(0.00)
L   2.41(0.26)   3.91(0.59)   64.56(3.90)   0.39(0.02)     1.17(0.13)   2.20(0.22)   30.73(6.05)   0.34(0.01)
H   0.34(0.03)   0.88(0.07)   4.03(1.15)    0.15(0.02)     0.16(0.03)   0.39(0.07)   1.48(0.26)    0.06(0.01)
show the images corresponding to equidistant inter-/extrapolation in the estimated parameter space
between two fixed digits (inverted images)). The Hessian regularization provided a moderate level of
accuracy in recovering the thickness parameter and performed best on the remaining ones.
Figure Dataset. The second dataset consists of 2500 views of a toy figure (see Fig. 3), sampled
at regular intervals in zenith and azimuth angle on the upper hemisphere around the centered
object [10]. Fig. 3 shows the results of regression for three parameters: the zenith angle, and
the azimuth angle transformed into Euclidean x, y coordinates.3 Both the Laplacian and Hessian
regularizers provided significantly better estimation of the parameters in comparison to KRR, which
demonstrates the effectiveness of semi-supervised regression. However, the Laplacian again shows
contracting behavior, which is observed in the top view of the hemisphere. Note that for our method this
does not occur and the spacing of the points in the parameter space is much more regular, which
again stresses the effectiveness of our proposed regularizer.
Image Colorization. Image colorization refers to the task of estimating the color components of a
given gray-level image. Often, this problem is approached based on the color information of a subset
of pixels in the image, which is specified by a user (cf. [8] for more details). This is essentially a
semi-supervised regression problem where the user-specified color components correspond to the
labels. To facilitate quantitative evaluation, we adopted 20 color images, sampled a subset of pixels
in each image as labels, and used the corresponding gray levels as inputs. The number of labeled
points was 30 or 100 for each image, which we regard as a moderate level of user intervention. As
error measure, we use the mean square distance between the original image and the corresponding
3
Although the underlying manifold is two-dimensional, the parametrization cannot be directly found based
on regression, as the azimuth angle is periodic. This results in contradicting assignments of ground-truth labels.
[Figure 3 plots: 3D embeddings for ground truth, KRR, Laplacian regularization and Hessian regularization, and error curves (KRR, Laplacian regularization, Hessian regularization) for the x, y and zenith coordinates over 10, 25, 50 and 100 labeled points.]
Figure 3: Results of regression on the figure dataset. First row: embeddings in the three-dimensional
space with 50 labels. Second row: left, some example images of the dataset; right, error plots for
each regression variable for different numbers of labeled points.
[Figure 4 panels, left to right: Original image, KRR, L, H, Levin et al. [8].]
Figure 4: Example of image colorization with 30 labels. KRR failed in reconstructing (the color of)
the red pepper at the lower-right corner, while the Laplacian regularizer produced an overall greenish
image. Levin et al.'s method recovered the lower central part well but failed in reconstructing
the upper central pepper. Despite the slight diffusion of red color at the upper-left corner, overall
the result of Hessian regularization looks best, which is also confirmed by the reconstruction error.
reconstruction in the RGB space. During the colorization, we convert to the YUV color model such
that the Y components, containing the gray-level values, are used as the input, based on which the
U and V components are estimated. The estimated U-V components are then combined with the Y
component and converted into RGB format. For the regression, for each pixel, we use as features
the 3 × 3 image patch centered at the pixel of interest plus the 2-dimensional coordinate value
of that pixel. The coordinate values are weighted by 10 such that the contribution of coordinate
values and gray levels is balanced. For comparison, we performed experiments with the method
of Levin et al. [8] as one of the state-of-the-art methods.4 Figure 4 shows an example and Table 2
summarizes the results. The Hessian regularizer clearly outperformed KRR and the Laplacian-based regression and produced slightly better results than those of Levin et al. [8]. We expect that
the performance can be further improved by exploiting a priori knowledge of the structure of natural
images (e.g., by exploiting segmentation information (cf. [9, 6]) in the NN structure).
4
Code is available at: http://www.cs.huji.ac.il/~yweiss/Colorization/.
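To make the per-pixel feature construction concrete, here is a small sketch of it in Python. This is our own illustration, not code from the paper; in particular, the handling of pixels at the image border (clamping to the nearest valid pixel) is an assumption the text does not specify.

```python
def pixel_features(gray, x, y, coord_weight=10.0):
    """Features for the pixel at (x, y): the 3x3 gray-level patch centered
    there, plus the 2D coordinates weighted by 10, as described in the text.
    `gray` is a 2D list indexed as gray[row][col], i.e. gray[y][x]."""
    h, w = len(gray), len(gray[0])
    patch = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            yy = min(max(y + dy, 0), h - 1)  # clamp at the border (assumption)
            xx = min(max(x + dx, 0), w - 1)
            patch.append(gray[yy][xx])
    # 9 patch values + 2 weighted coordinates = 11 features per pixel
    return patch + [coord_weight * x, coord_weight * y]
```

Stacking these vectors for the labeled pixels (with their user-given U, V values as targets) and for all pixels at prediction time yields the regression problem described above.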
Table 2: Results on colorization: mean squared error (standard deviation) (both in units 10^-3).

# labels   K            L            H            Levin et al. [8]
30         1.18(1.10)   0.83(0.64)   0.64(0.50)   0.74(0.61)
100        0.66(0.65)   0.50(0.33)   0.32(0.25)   0.37(0.26)
References
[1] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15(6):1373–1396, 2003.
[2] M. Belkin and P. Niyogi. Semi-supervised learning on manifolds. Machine Learning, 56:209–239, 2004.
[3] D. Donoho and C. Grimes. Hessian eigenmaps: Locally linear embedding techniques for high-dimensional data. Proc. of the National Academy of Sciences, 100(10):5591–5596, 2003.
[4] J. Eells and L. Lemaire. Selected topics in harmonic maps. AMS, Providence, RI, 1983.
[5] M. Hein. Uniform convergence of adaptive graph-based regularization. In G. Lugosi and H. Simon, editors, Proc. of the 19th Conf. on Learning Theory (COLT), pages 50–64, Berlin, 2006. Springer.
[6] R. Irony, D. Cohen-Or, and D. Lischinski. Colorization by example. In Proc. Eurographics Symposium on Rendering, pages 201–210, 2005.
[7] J. M. Lee. Riemannian Manifolds: An Introduction to Curvature. Springer, New York, 1997.
[8] A. Levin, D. Lischinski, and Y. Weiss. Colorization using optimization. In Proc. SIGGRAPH, pages 689–694, 2004.
[9] Q. Luan, F. Wen, D. Cohen-Or, L. Liang, Y.-Q. Xu, and H.-Y. Shum. Natural image colorization. In Proc. Eurographics Symposium on Rendering, pages 309–320, 2007.
[10] G. Peters. Efficient pose estimation using view-based object representations. Machine Vision and Applications, 16(1):59–63, 2004.
[11] F. Steinke and M. Hein. Non-parametric regression between Riemannian manifolds. In Advances in Neural Information Processing Systems, pages 1561–1568, 2009.
[12] J. J. Verbeek and N. Vlassis. Gaussian fields for semi-supervised regression and correspondence learning. Pattern Recognition, 39:1864–1875, 2006.
[13] X. Yang, H. Fu, H. Zha, and J. Barlow. Semi-supervised nonlinear dimensionality reduction. In Proc. of the 23rd International Conference on Machine Learning, pages 1065–1072, New York, NY, USA, 2006. ACM.
Polynomial Semantic Indexing
Bing Bai(1)
Kunihiko Sadamasa(1)
Jason Weston(1)(2)
Yanjun Qi(1)
David Grangier(1)
Corinna Cortes(2)
Ronan Collobert(1)
Mehryar Mohri(2)(3)
(1)
NEC Labs America, Princeton, NJ
{bbai, dgrangier, collober, kunihiko, yanjun}@nec-labs.com
(2)
Google Research, New York, NY
{jweston, corinna, mohri}@google.com
(3)
NYU Courant Institute, New York, NY
[email protected]
Abstract
We present a class of nonlinear (polynomial) models that are discriminatively
trained to directly map from the word content in a query-document or document-document pair to a ranking score. Dealing with polynomial models on word features is computationally challenging. We propose a low-rank (but diagonal-preserving) representation of our polynomial models to induce feasible memory and
computation requirements. We provide an empirical study on retrieval tasks based
on Wikipedia documents, where we obtain state-of-the-art performance while providing realistically scalable methods.
1 Introduction
Ranking text documents given a text-based query is one of the key tasks in information retrieval.
A typical solution is to: (i) embed the problem in a feature space, e.g. model queries and target
documents using a vector representation; and then (ii) choose (or learn) a similarity metric that
operates in this vector space. Ranking is then performed by sorting the documents based on their
similarity score with the query.
A classical vector space model, see e.g. [24], uses weighted word counts (e.g. via tf-idf) as the
feature space, and the cosine similarity for ranking. In this case, the model is chosen by hand and no
machine learning is involved. This type of model often performs remarkably well, but suffers from
the fact that only exact matches of words between query and target texts contribute to the similarity
score. That is, words are considered to be independent, which is clearly a false assumption.
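To make this baseline concrete, here is a minimal sketch of such a vector space model in Python. The exact tf-idf variant (raw term frequency with log-idf) is our assumption; with unit-length vectors the cosine similarity reduces to a dot product over the shared words, so only exact word matches contribute, as noted above.

```python
import math

def tfidf_vectors(docs):
    """Build tf-idf weighted, unit-length bag-of-words vectors (as dicts)."""
    n = len(docs)
    df = {}  # document frequency of each word
    for doc in docs:
        for w in set(doc):
            df[w] = df.get(w, 0) + 1
    vecs = []
    for doc in docs:
        v = {}
        for w in doc:                       # term frequency
            v[w] = v.get(w, 0.0) + 1.0
        for w in v:                         # idf weighting
            v[w] *= math.log(n / df[w])
        norm = math.sqrt(sum(x * x for x in v.values())) or 1.0
        vecs.append({w: x / norm for w, x in v.items()})
    return vecs

def cosine(u, v):
    # with unit-length vectors the cosine is just the dot product
    return sum(x * v.get(w, 0.0) for w, x in u.items())
```

Documents sharing no words score exactly zero here, which is precisely the independence assumption the following paragraphs set out to remove.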
Latent Semantic Indexing [8], and related methods such as pLSA and LDA [18, 2], are unsupervised
methods that choose a low dimensional feature representation of "latent concepts" where words
are no longer independent. They are trained with reconstruction objectives, either based on mean
squared error (LSI) or likelihood (pLSA, LDA). These models, being unsupervised, are still agnostic
to the particular task of interest.
More recently, supervised models for ranking texts have been proposed that can be trained on a
supervised signal (i.e., labeled data) to provide a ranking of a database of documents given a query.
For example, if one has click-through data yielding query-target relationships, one can use this to
train these models to perform well on this task. Or, if one is interested in finding documents related
to a given query document, one can use known hyperlinks to learn a model that performs well on this
task. Many of these models have typically relied on optimizing over only a few hand-constructed
features, e.g. based on existing vector space models such as tf-idf, the title, URL, PageRank and
other information [20, 5]. In this work, we investigate an orthogonal research direction, as we
analyze supervised methods that are based on words only. Such models are both more flexible, e.g.
can be used for tasks such as cross-language retrieval, and can still be used in conjunction with
other features explored in previous work for further gains. At least one recent work, called Hash
Kernels [25], has been proposed that does construct a word-feature based model in a learning-to-rank
context.
In this article we define a class of nonlinear (polynomial) models that can capture higher order
relationships between words. Our nonlinear representation of the words results in a very high dimensional feature space. To deal with this space we propose low rank (but diagonal preserving) representations of our polynomial models to induce feasible memory and computation requirements,
resulting in a method that both exhibits strong performance and is tractable to train and test.
We show experimentally on retrieval tasks derived from Wikipedia that our method strongly outperforms other word based models, including tf-idf vector space models, LSI, query expansion, margin
rank perceptrons and Hash Kernels.
The rest of this article is as follows. In Section 2, we describe our method, Section 3 discusses prior
work, and Section 4 describes the experimental study of our method.
2 Polynomial Semantic Indexing
Let us denote the set of documents in the corpus as $\{d_t\}_{t=1}^{\ell} \subset \mathbb{R}^D$ and a query text as $q \in \mathbb{R}^D$, where $D$ is the dictionary size, and the $j$-th dimension of a vector indicates the frequency of occurrence of the $j$-th word, e.g. using the tf-idf weighting and then normalizing to unit length.
Given a query $q$ and a document $d$ we wish to learn a (nonlinear) function $f(q, d)$ that returns a score measuring the relevance of $d$ given $q$. Let us first consider the naive approach of concatenating $(q, d)$ into a single vector and using $f(q, d) = w^\top [q, d]$ as a linear ranking model. This clearly does not learn anything useful, as it would result in the same document ordering for any query, given fixed parameters $w$. However, considering a polynomial model
$$f(q, d) = w^\top \Phi^k([q, d]),$$
where $\Phi^k(\cdot)$ is a feature map that considers all possible degree-$k$ terms,
$$\Phi^k(x_1, \ldots, x_D) = \langle x_{i_1} \cdots x_{i_k} : 1 \le i_1, \ldots, i_k \le D \rangle,$$
does render a useful discriminative model. For example, for degree $k = 2$ we obtain:
$$f(q, d) = \sum_{ij} w^1_{ij} q_i q_j + \sum_{ij} w^2_{ij} d_i q_j + \sum_{ij} w^3_{ij} d_i d_j$$
where $w$ has been rewritten as $w^1 \in \mathbb{R}^{D \times D}$, $w^2 \in \mathbb{R}^{D \times D}$ and $w^3 \in \mathbb{R}^{D \times D}$. The ranking order of documents $d$ given a fixed query $q$ is independent of $w^1$, and the value of the term with $w^3$ is independent of the query, so in the following we will consider models containing only terms with both $q$ and $d$. In particular, we will consider the following degree $k = 2$ model:
$$f^2(q, d) = \sum_{i,j=1}^{D} W_{ij}\, q_i d_j = q^\top W d \qquad (1)$$
where $W \in \mathbb{R}^{D \times D}$, and the degree $k = 3$ model:
$$f^3(q, d) = \sum_{i,j,k=1}^{D} W_{ijk}\, q_i d_j d_k + f^2(q, d). \qquad (2)$$
Note that if $W$ is an identity matrix in equation (1), we obtain the cosine similarity with tf-idf weighting. When other weights are nonzero, this model can capture synonymy and polysemy, as it looks at all possible cross terms, which can be tuned directly for the task of interest during training; e.g., the value of $W_{ij}$ corresponding to related words, say the word "jagger" in the query and "stones" in the target, could be given a large value during training. The degree $k = 3$ model goes one stage further and can upweight $W_{ijk}$ for the triple "jagger", "stones" and "rolling", and can downweight the triple "jagger", "gem" and "stones". Note that we do not necessarily require preprocessing methods such as stemming here, since these models can already match words with common stems (if
it is useful for the task). Note also that in equation (2) we could just as easily have considered pairs of words in the query (rather than the document) as well.
Unfortunately, using such polynomial models is clearly infeasible for several reasons. Firstly, it will hardly be possible to fit $W$ in memory for realistic tasks. If the dictionary size is $D = 30000$, then for $k = 2$ this requires 3.4 GB of RAM (assuming floats), and if the dictionary size is 2.5 million (as it will be in our experiments in Section 4) this amounts to 14.5 TB. For $k = 3$ this is even worse. Besides memory requirements, the huge number of parameters can of course also affect the generalization ability of this model.
We thus propose a low-rank (but diagonal preserving) approximation of these models which will
lead to capacity control, faster computation speed and smaller memory footprint.
For $k = 2$ we propose to replace $W$ with $\bar{W}$, where
$$\bar{W}_{ij} = (U^\top V)_{ij} + I_{ij} = \sum_l U_{li} V_{lj} + I_{ij}.$$
Plugging this into equation (1) yields:
$$f^2_{LR}(q, d) = q^\top (U^\top V + I)\, d \qquad (3)$$
$$= \sum_{i=1}^{N} (Uq)_i (Vd)_i + q^\top d. \qquad (4)$$
Here, $U$ and $V$ are $N \times D$ matrices. Before looking at higher degree polynomials, let us first analyze this case. This induces an $N$-dimensional "latent concept" space in a way similar to LSI. However, this is different in several ways:
• First, and most importantly, we advocate training from a supervised signal using preference relations (ranking constraints).
• Further, $U$ and $V$ differ, so it does not assume the query and target document should be embedded in the same way. This can hence model cases where the query text distribution is very different from the document text distribution; e.g., queries are often short and have different word occurrence and co-occurrence statistics. In the extreme case of cross-language retrieval, query and target texts are in different languages yet are naturally modeled in this setup.
• Finally, the addition of the identity term means this model automatically learns the trade-off between using the low-dimensional space and a classical vector space model. This is important because the diagonal of the $W$ matrix gives the specificity of picking out when a word co-occurs in both documents (indeed, setting $W = I$ is equivalent to cosine similarity using tf-idf). The matrix $I$ is full rank and therefore cannot be approximated with the low-rank model $U^\top V$, so our model combines both terms in the approximation.
However, the efficiency and memory footprint are as favorable as LSI. Typically, one caches the
N -dimensional representation for each document to use at query time.
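As a concrete illustration, scoring with equation (4) and a cached document embedding might look as follows. This is our own toy sketch (dense lists of rows for the matrices, sparse dicts for the word vectors); variable names are ours, and $U$, $V$ would be learned as described in Section 2.1.

```python
def embed(M, x):
    # dense N x D matrix (list of rows) times a sparse vector (dict: index -> value)
    return [sum(row[j] * v for j, v in x.items()) for row in M]

def f2_score(U, Vd_cached, q, d):
    """f2_LR(q, d) = sum_i (Uq)_i (Vd)_i + q.d, with Vd precomputed per document."""
    Uq = embed(U, q)
    low_rank = sum(a * b for a, b in zip(Uq, Vd_cached))
    identity = sum(v * d.get(j, 0.0) for j, v in q.items())  # the q^T d term
    return low_rank + identity
```

At query time only `embed(U, q)` is computed once, after which each document costs $N$ multiplications for the low-rank term plus a sparse dot product for the identity term.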
For higher degree polynomials, e.g. $k = 3$, one can perform a similar approximation. Indeed, $W_{ijk}$ is approximated with
$$\bar{W}_{ijk} = \sum_l U_{li} V_{lj} Y_{lk}$$
where $U$, $V$ and $Y$ are $N \times D$. When adding the diagonal-preserving term and the lower order terms from the $k = 2$ polynomial, we obtain
$$f^3_{LR}(q, d) = \sum_{i=1}^{N} (Uq)_i (Vd)_i (Yd)_i + f^2_{LR}(q, d).$$
Clearly, we can approximate any degree-$k$ polynomial using a product of $k$ linear embeddings in such a scheme. Note that at test time one can again cache the $N$-dimensional representation for each document by computing the product between the $V$ and $Y$ terms, and one is then still left with only $N$ multiplications per document for the embedding term at query time.
Interestingly, one can view this model as a "product of experts": the document is projected twice, i.e. by two experts $V$ and $Y$, and the training will force them to focus on different aspects.
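A sketch of this caching scheme (our own illustrative code; the $k = 2$ part of $f^3_{LR}$ is omitted for brevity and would be added on top with its own parameters):

```python
def embed(M, x):
    # dense N x D matrix (list of rows) times a sparse vector (dict: index -> value)
    return [sum(row[j] * v for j, v in x.items()) for row in M]

def cache_document(V, Y, d):
    """Precompute the elementwise product (Vd)_i (Yd)_i once per document,
    so the cubic term costs only N multiplications at query time."""
    Vd, Yd = embed(V, d), embed(Y, d)
    return [a * b for a, b in zip(Vd, Yd)]

def cubic_term(U, cached, q):
    # sum_i (Uq)_i (Vd)_i (Yd)_i -- the degree-3 part of the score
    Uq = embed(U, q)
    return sum(a * b for a, b in zip(Uq, cached))
```

The two projections of the document, by $V$ and by $Y$, are collapsed into one cached $N$-vector, which is what makes the "product of experts" view cheap to evaluate.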
2.1 Training
Training such models could take many forms. In this paper we adopt the typical "learning to rank" setup [17, 20]. Suppose we are given a set of tuples $R$ (labeled data), where each tuple contains a query $q$, a relevant document $d^+$ and a non-relevant (or lower ranked) document $d^-$. We would like to choose $W$ such that $f(q, d^+) > f(q, d^-)$, that is, $d^+$ should be ranked higher than $d^-$. We thus employ the margin ranking loss [17], which has already been used in several IR methods before [20, 5, 14], and minimize:
$$\sum_{(q, d^+, d^-) \in R} \max(0,\, 1 - f(q, d^+) + f(q, d^-)). \qquad (5)$$
We train this using stochastic gradient descent (see, e.g., [5]): iteratively, one picks a random tuple and makes a gradient step for that tuple. We choose the (fixed) learning rate which minimizes the training error. Convergence (or early stopping) is assessed with a validation set. Stochastic training is highly scalable and is easy to implement for our model. For example, for $k = 2$, one makes the following updates:
$$U \leftarrow U + \lambda V(d^+ - d^-)q^\top, \quad \text{if } 1 - f^2_{LR}(q, d^+) + f^2_{LR}(q, d^-) > 0$$
$$V \leftarrow V + \lambda U q (d^+ - d^-)^\top, \quad \text{if } 1 - f^2_{LR}(q, d^+) + f^2_{LR}(q, d^-) > 0.$$
Clearly, it is important to exploit the sparsity of $q$ and $d$ when calculating these updates. In our experiments we initialized the matrices $U$ and $V$ randomly using a normal distribution with mean zero and standard deviation one. The gradients for $k = 3$ are similar.
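A sketch of one such stochastic update for the $k = 2$ model, in our own toy representation (dense $N \times D$ lists, sparse dict vectors). Using the pre-update $Uq$ inside the $V$ update, i.e. taking both gradient steps simultaneously, is one possible design choice; updating sequentially would be another.

```python
def sgd_step(U, V, q, d_pos, d_neg, lr):
    """One stochastic update on a tuple (q, d+, d-) for the margin
    ranking loss (5), following the k = 2 gradients of Section 2.1."""
    def embed(M, x):
        return [sum(row[j] * v for j, v in x.items()) for row in M]
    def score(d):
        low = sum(a * b for a, b in zip(embed(U, q), embed(V, d)))
        return low + sum(v * d.get(j, 0.0) for j, v in q.items())
    if 1 - score(d_pos) + score(d_neg) > 0:  # margin violated: take a step
        diff = {j: d_pos.get(j, 0.0) - d_neg.get(j, 0.0)
                for j in set(d_pos) | set(d_neg)}
        Uq, Vdiff = embed(U, q), embed(V, diff)
        # U <- U + lr * V (d+ - d-) q^T  (rank-one update, sparse in q)
        for i in range(len(U)):
            for j, v in q.items():
                U[i][j] += lr * Vdiff[i] * v
        # V <- V + lr * U q (d+ - d-)^T  (uses Uq computed before U changed)
        for i in range(len(V)):
            for j, v in diff.items():
                V[i][j] += lr * Uq[i] * v
    return U, V
```

Because the updates only touch the columns where $q$ or $d^+ - d^-$ are nonzero, each step is cheap even for dictionaries of millions of words.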
Note that researchers have also explored optimizing various alternative loss functions, including normalized discounted cumulative gain (NDCG) and mean average precision (MAP) [5, 4, 6, 28]. In fact, one could use those optimization strategies to train our models instead of optimizing the ranking loss. One could also just as easily use them in unsupervised learning, such as in LSI, e.g. by stochastic gradient descent on the reconstruction error.
3 Prior Work
Joachims et al. [20] trained a SVM with hand-designed features based on the title, body, search
engines rankings and the URL. Burges et al. [5] proposed a neural network method using a similar
set of features (569 in total). As described before, in contrast, we limited ourselves to body text (not
using title, URL, etc.) and trained on millions of features based on these words.
The authors of [15] used a model similar to the naive full rank model (1), but for the task of image retrieval, and [13] also used a related (regression-based) method for advert placement. These
techniques are implemented in related software to these two publications, PAMIR1 and Vowpal
Wabbit2 . When the memory usage is too large, the latter bins the features randomly into a reduced
space (hence with random collisions), a technique called Hash Kernels [25]. In all cases, the task
of document retrieval, and the use of low-rank approximation or polynomial features is not studied.
The current work generalizes and extends the Supervised Semantic Indexing approach [1] to general
polynomial models.
Another related area of research is in distance metric learning [27, 19, 12]. Methods like LMNN
[27] also learn a model similar to the naive full rank model (1), i.e. with the full matrix W (but
not with our improvements of this model that make it tractable for word features). They impose the
constraint during the optimization that W be a positive semidefinite matrix. Their method has considerable computational cost. For example, even after considerable optimization of the algorithm,
it still takes 3.5 hours to train on 60,000 examples and 169 features (a pre-processed version of
MNIST). This would hence not be scalable for large-scale text ranking experiments. Nevertheless,
[7] compared LMNN [27], LEGO [19] and MCML [12] to a stochastic gradient method with a full
matrix W (identical to the model (1)) on a small image ranking task and reported in fact that the
stochastic method provides both improved results and efficiency. Our method, on the other hand,
both outperforms models like (1) and is feasible for word features, when (1) is not.
1 http://www.idiap.ch/pamir/
2 http://hunch.net/~vw/
A tf-idf vector space model and LSI [8] are two standard baselines we will also compare to. We
already mentioned pLSA [18] and LDA [2]; both have scalability problems and are not reported to
generally outperform LSA and TF-IDF [11]. Query Expansion, often referred to as blind relevance
feedback, is another way to deal with synonyms, but requires manual tuning and does not always
yield a consistent improvement [29].
Several authors [23, 21] have proposed interesting nonlinear versions of unsupervised LSI using
neural networks and showed they outperform LSI or pLSA. However, in the case of [23] we note
their method is rather slow, and a dictionary size of only 2000 was used. A supervised method
for LDA (sLDA) [3] has also been proposed where a set of auxiliary labels are trained on jointly
with the unsupervised task. This provides supervision at the document level (via a class label or
regression value) which is not a task of learning to rank, whereas here we study supervision at
the (query, documents) level. The authors of [10] proposed "Explicit Semantic Analysis", which
represents the meaning of texts in a high-dimensional space of concepts by building a feature space
derived from the human-organized knowledge from an encyclopedia, e.g. Wikipedia. In the new
space, cosine similarity is applied. Our method could be applied to such feature representations so
that they are not agnostic to a particular supervised task as well.
As we will also evaluate our model over cross-language retrieval, we also briefly mention methods
previously applied to this problem. These include first applying machine translation and then a
conventional retrieval method such as LSI [16], a direct method of applying LSI for this task called
CL-LSI [9], or using Kernel Canonical Correlation Analysis, KCCA [26]. While the latter is a
strongly performing method, it also suffers from scalability problems.
4 Experimental Study
Learning a model of term correlations over a large vocabulary is a considerable challenge that requires a large amount of training data. Standard retrieval datasets like TREC3 or LETOR [22]
contain only a few hundred training queries, and are hence too small for that purpose. Moreover,
some datasets only provide pre-processed features like tf, idf or BM25, and not the actual words.
Click-through from web search engines could provide valuable supervision. However, such data is
not publicly available.
We hence conducted experiments on Wikipedia and used links within Wikipedia to build a large-scale ranking task. We considered several tasks: document-document and query-document retrieval, described in Section 4.1, and cross-language document-document retrieval, described in Section 4.2. In these experiments we compared our approach, Polynomial Semantic Indexing (PSI), to the following methods: tf-idf + cosine similarity (TFIDF), Query Expansion (QE), LSI4, αLSI + (1 − α)TFIDF, and the margin ranking perceptron and Hash Kernels with hash size h using model (1).
PE
Query Expansion involves applying TFIDF and then adding mean vector ? i=1 dri of the top
E retrieved documents multiplied by a weighting ? to the query, and applying TFIDF again. For
all methods, hyperparameters such as the embedding dimension N ? {50, 100, 200, 500, 1000},
h ? {1M, 3M, 6M }, ?, ? and E were chosen using a validation set.
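As an illustration of the QE baseline described above, here is a minimal pure-Python sketch of the two-pass procedure: rank by cosine similarity of tf-idf vectors, expand the query with the mean of the top-E results weighted by β, then re-rank. The function names and the dict-of-weights representation of sparse vectors are ours, not from the paper.

```python
def cosine(q, d):
    # Cosine similarity between two sparse tf-idf vectors (dicts: term -> weight).
    dot = sum(w * d.get(t, 0.0) for t, w in q.items())
    nq = sum(w * w for w in q.values()) ** 0.5
    nd = sum(w * w for w in d.values()) ** 0.5
    return dot / (nq * nd) if nq and nd else 0.0

def query_expansion(q, docs, beta, E):
    # Pass 1: rank documents by TFIDF (cosine) similarity to q.
    ranked = sorted(docs, key=lambda d: cosine(q, d), reverse=True)
    # Add beta times the mean vector of the top-E documents to the query.
    expanded = dict(q)
    for d in ranked[:E]:
        for t, w in d.items():
            expanded[t] = expanded.get(t, 0.0) + beta * w / E
    # Pass 2: rank again with the expanded query; return document indices.
    return sorted(range(len(docs)),
                  key=lambda i: cosine(expanded, docs[i]), reverse=True)
```

In a real system q and the documents would be tf-idf weighted term vectors; here any non-negative weighting works the same way.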
For each method, we measured the ranking loss (the percentage of tuples in R that are incorrectly
ordered), precision P (n) at position n = 10 (P@10) and the mean average precision (MAP), as
well as their standard deviations. For computational reasons, MAP and P@10 were measured by
averaging over a fixed set of 1000 test queries, and the true test links and random subsets of 10,000
documents were used as the database, rather than the whole testing set. The ranking loss is measured
using 100,000 testing tuples.
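The three evaluation measures above can be computed as follows; this is a minimal sketch assuming a ranked list of document ids, a set of relevant ids, and scored (query, relevant, irrelevant) tuples (all names are ours).

```python
def precision_at_n(ranked, relevant, n=10):
    # Fraction of the top-n ranked documents that are relevant.
    return sum(1 for d in ranked[:n] if d in relevant) / n

def average_precision(ranked, relevant):
    # Mean of the precision values taken at each rank where a relevant
    # document occurs; MAP is this quantity averaged over queries.
    hits, total = 0, 0.0
    for i, d in enumerate(ranked, start=1):
        if d in relevant:
            hits += 1
            total += hits / i
    return total / len(relevant) if relevant else 0.0

def ranking_loss(scores, tuples):
    # Fraction of (query, relevant d+, irrelevant d-) tuples that are
    # incorrectly ordered, i.e. where d+ does not score above d-.
    bad = sum(1 for (q, dp, dn) in tuples if scores[(q, dp)] <= scores[(q, dn)])
    return bad / len(tuples)
```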
4.1 Document Retrieval
We considered a set of 1,828,645 English Wikipedia documents as a database, and split the
24,667,286 links randomly into two portions, 70% for training (plus validation) and 30% for testing.⁵
³ http://trec.nist.gov/
⁴ We use the SVDLIBC software (http://tedlab.mit.edu/~dr/svdlibc/) and the cosine distance in the latent concept space.
Table 1: Document-document ranking results on Wikipedia (limited dictionary size of 30,000 words). Polynomial Semantic Indexing (PSI) outperforms all baselines, and performs better with higher degree k = 3.

Algorithm                             Rank-Loss   MAP           P@10
TFIDF                                 1.62%       0.329±0.010   0.163±0.006
QE                                    1.62%       0.330±0.010   0.163±0.006
LSI                                   4.79%       0.158±0.006   0.098±0.005
αLSI + (1−α)TFIDF                     1.28%       0.346±0.011   0.170±0.007
Margin Ranking Perceptron using (1)   0.41%       0.477±0.011   0.212±0.007
PSI (k = 2)                           0.30%       0.517±0.011   0.229±0.007
PSI (k = 3)                           0.14%       0.539±0.011   0.236±0.007
Table 2: Empirical results for document-document ranking on Wikipedia (unlimited dictionary size).

Algorithm                Rank-Loss   MAP           P@10
TFIDF                    0.842%      0.432±0.012   0.1933±0.007
QE                       0.842%      0.432±0.012   0.1933±0.007
αLSI + (1−α)TFIDF        0.721%      0.433±0.012   0.193±0.007
Hash Kernels using (1)   0.347%      0.485±0.011   0.215±0.007
PSI (k = 2)              0.158%      0.547±0.012   0.239±0.008
PSI (k = 3)              0.099%      0.590±0.012   0.249±0.008
Table 3: Empirical results for document-document ranking in two train/test setups: partitioning into train+test sets of links, or into train+test sets of documents with no cross-links (limited dictionary size of 30,000 words). The two setups yield similar results.

Algorithm     Testing Setup            Rank-Loss   MAP           P@10
PSI (k = 2)   Partitioned links        0.407%      0.506±0.012   0.225±0.007
PSI (k = 2)   Partitioned docs+links   0.401%      0.503±0.010   0.225±0.006
Table 4: Empirical results for query-document ranking on Wikipedia where the query has n keywords (this experiment uses a limited dictionary size of 30,000 words). For each n we measure the ranking loss, MAP and P@10 metrics.

                        n = 5                     n = 10                    n = 20
Algorithm               Rank    MAP     P@10      Rank    MAP     P@10      Rank    MAP     P@10
TFIDF                   21.6%   0.047   0.023     14.0%   0.083   0.035     9.14%   0.128   0.054
αLSI + (1−α)TFIDF       14.2%   0.049   0.023     9.73%   0.089   0.037     6.36%   0.133   0.059
PSI (k = 2)             4.37%   0.166   0.083     2.91%   0.229   0.100     1.80%   0.302   0.130
We then considered the following task: given a query document q, rank the other documents such that if q links to d then d is highly ranked.
In our first experiment we constrained all methods to use only the top 30,000 most frequent words.
This allowed us to compare to a margin ranking perceptron using model (1) which would otherwise
not fit in memory. For our approach, Polynomial Semantic Indexing (PSI), we report results for
degrees k = 2 and k = 3. Results on the test set are given in Table 1. Both variants of our method
PSI strongly outperform the existing techniques. The margin ranking perceptron using (1) can be seen as a full-rank version of PSI for k = 2 (with W unconstrained) but is outperformed by its low-rank counterpart, probably because it has too much capacity. Degree k = 3 outperforms k = 2, indicating that the higher-order nonlinearities captured provide better ranking scores. For LSI and PSI an embedding dimension of N = 200 worked best, but other values gave similar results. Among the other techniques, LSI is slightly better than TFIDF, but QE in this case does not improve much over TFIDF, perhaps because of the difficulty of this task (there may too often be many irrelevant documents among the top E documents initially retrieved, preventing QE from helping).
⁵ We removed links to calendar years as they provide little information while being very frequent.
Table 5: The closest five words in the document embedding space to some example query words.

kitten    cat            cats         animals      species     dogs
vet       veterinarian   veterinary   medicine     animals     animal
ibm       computer       company      technology   software    data
nyc       york           new          manhattan    city        brooklyn
c++       programming    windows      mac          unix        linux
xbox      console        game         games        microsoft   windows
beatles   mccartney      lennon       song         band        harrison
britney   spears         album        music        pop         her
In our second experiment we no longer constrained methods to a fixed dictionary size, so all 2.5
million words are used. In this setting we compare to Hash Kernels which can deal with these
dictionary sizes. The results, given in Table 2, show the same trends, indicating that the dictionary size restriction in the previous experiment did not bias the results in favor of any one algorithm.
Note also that as a page has on average just over 3 test set links to other pages, the maximum P@10
one can achieve in this case is 0.31.
One might worry that our experimental setup splits training and testing data only by partitioning the links, but not the documents, so the performance of our model on new, unseen documents added to the database might be in question. We therefore also tested an
experimental setup where the test set of documents is completely separate from the training set of
documents, by completely removing all training set links between training and testing documents.
In fact, this does not alter the performance significantly, as shown in Table 3.
Query-Document Ranking So far, our evaluation uses whole Wikipedia articles as queries. One
might wonder if the reported improvements also hold in a setting where queries consist of only a few keywords. We therefore also tested our approach in this setting, using the same setup as before but constructing queries by keeping only n random words from each query document, in an attempt to mimic a "keyword search". Table 4 reports the results for keyword queries of length n = 5, 10 and 20. PSI
yields similar improvements as in the document-document retrieval case over the baselines.
Word Embedding The document embedding V d in equation (3) (and similarly the query embedding U q) can be viewed as V d = Σ_i V_{·i} d_i, in which each column V_{·i} is the embedding of the word d_i. It is natural that semantically similar words are more likely to have similar embeddings. Table 5 shows a few examples. The first column contains query words; on the right are the 5 words with the smallest Euclidean distance in the embedded space. We can see that they are quite relevant.
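The nearest-neighbor lookups shown in Table 5 amount to sorting the embedding columns V_{·i} by Euclidean distance to the query word's column. A minimal sketch, assuming embeddings are given as plain lists of floats in a word-to-vector dict (all names are ours):

```python
def nearest_words(embeddings, vocab, query, k=5):
    # embeddings: dict word -> embedding vector (the columns of V).
    # Return the k words in `vocab` closest to `query` in Euclidean distance.
    q = embeddings[query]

    def dist(w):
        return sum((a - b) ** 2 for a, b in zip(embeddings[w], q)) ** 0.5

    others = [w for w in vocab if w != query]
    return sorted(others, key=dist)[:k]
```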
4.2 Cross Language Document Retrieval
Cross-Language Retrieval [16] is the task of retrieving documents in a target language E given a query in a different source language F. For example, Google provides such a service⁶. This is an interesting case for word-based learning-to-rank models, which can naturally deal with this task without the need for machine translation, as they directly learn the correspondence between the two languages from bilingual labeled data in the form of tuples R. The use of a non-symmetric low-rank model like (3) also naturally suits this task (although in this case adding the identity does not make sense). We therefore also provide a case study in this setting.
We thus considered the same set of 1,828,645 English Wikipedia documents and a set of 846,582
Japanese Wikipedia documents, where 135,737 of the documents are known to be about the same concept as a corresponding English page (this information can be found in the wiki mark-up provided in a Wikipedia dump). For example, the page about "Microsoft" can be found in both English and Japanese, and the two pages are cross-referenced. Such pairs are referred to as "mates" in the literature (see, e.g., [9]).
We then consider a cross-language retrieval task analogous to the task in Section 4.1: given a Japanese query document q_Jap that is the mate of the English document q_Eng, rank the English
⁶ http://translate.google.com/translate_s
Table 6: Cross-lingual Japanese document-English document ranking (limited dictionary size of 30,000 words).

Algorithm                                       Rank-Loss   MAP           P@10
TFIDFEngEng (Google translated queries)         4.78%       0.319±0.009   0.259±0.008
TFIDFEngEng (ATLAS word-based translation)      8.27%       0.115±0.005   0.103±0.005
TFIDFEngEng (ATLAS translated queries)          4.83%       0.290±0.008   0.243±0.008
LSIEngEng (ATLAS translated queries)            7.54%       0.169±0.007   0.150±0.007
αLSIEngEng (ATLAS) + (1−α)TFIDFEngEng (ATLAS)   3.71%       0.300±0.008   0.253±0.008
CL-LSIJapEng                                    9.29%       0.190±0.007   0.161±0.007
αCL-LSIJapEng + (1−α)TFIDFEngEng (ATLAS)        3.31%       0.275±0.009   0.212±0.008
PSIEngEng (ATLAS)                               1.72%       0.399±0.009   0.325±0.009
PSIJapEng                                       0.96%       0.438±0.009   0.351±0.009
αPSIJapEng + (1−α)TFIDFEngEng (ATLAS)           0.75%       0.493±0.009   0.377±0.009
αPSIJapEng + (1−α)PSIEngEng (ATLAS)             0.63%       0.524±0.009   0.386±0.009
documents so that the documents linked to q_Eng appear above the others. The document q_Eng itself is removed and not considered during training or testing. The dataset is split into train/test as before.
The first type of baseline we considered is based on machine translation: we applied a machine translation tool to the Japanese query, and then applied TFIDF or LSI. We considered three methods of machine translation: Google's API⁷ or Fujitsu's ATLAS⁸ was used to translate each query document, or we translated each word in the Japanese dictionary using ATLAS and then applied this word-based translation to a query. We also compared to CL-LSI [9] trained on all 90,000 Jap-Eng pairs from the training set.
For PSI, we considered two cases: (i) apply the ATLAS machine translation tool first, and then use PSI trained on the task in Section 4.1, i.e. the model given in equation (3) (PSIEngEng), which was trained on English queries and English target documents; or (ii) train PSI directly with Japanese queries and English target documents using model (3) without the identity, which we call PSIJapEng. We use degree k = 2 for PSI (trying k = 3 would have been interesting, but we have not performed this experiment). The results are given in Table 6. The dictionary size was again limited to the 30,000 most frequent words in both languages for ease of comparison with CL-LSI.
TFIDF using the three translation methods gave relatively similar results. Using LSI or CL-LSI slightly improved these results, depending on the metric. Machine translation followed by PSIEngEng outperformed all of these methods; however, the direct PSIJapEng, which requires no machine translation tool at all, improved results even further. We conjecture that this is because translation mistakes generate noisy features, which PSIJapEng circumvents.
However, we also considered combining PSIJapEng with TFIDF or PSIEngEng using a mixing parameter α, and this provided further gains, at the expense of requiring a machine translation tool.
Note that many cross-lingual experiments, e.g. [9], measure the performance of finding a "mate", the same document in another language, whereas our experiment tries to model a query-based retrieval task. We also performed an experiment in the mate-finding setting: there, PSI achieves a ranking error of 0.53%, and CL-LSI achieves 0.81%.
5 Conclusion
We described a versatile, powerful set of discriminatively trained models for document ranking based on polynomial features over words, made feasible by a low-rank (but diagonal-preserving) approximation. Many generalizations are possible: adding more features to our model, using other choices of loss function, and exploring the use of the same models for tasks other than document retrieval, for example ranking images rather than text, or classification rather than ranking tasks.
⁷ http://code.google.com/p/google-api-translate-java/
⁸ http://www.fujitsu.com/global/services/software/translation/atlas/
References
[1] B. Bai, J. Weston, R. Collobert, and D. Grangier. Supervised semantic indexing. In European Conference on Information Retrieval, pages 761–765, 2009.
[2] D. Blei, A. Ng, and M. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993–1022, 2003.
[3] D. M. Blei and J. D. McAuliffe. Supervised topic models. In NIPS, pages 121–128, 2007.
[4] C. Burges, R. Ragno, and Q. Le. Learning to rank with nonsmooth cost functions. In NIPS, pages 193–200, 2007.
[5] C. Burges, T. Shaked, E. Renshaw, A. Lazier, M. Deeds, N. Hamilton, and G. Hullender. Learning to rank using gradient descent. In ICML, pages 89–96, 2005.
[6] Z. Cao, T. Qin, T. Liu, M. Tsai, and H. Li. Learning to rank: from pairwise approach to listwise approach. In ICML, pages 129–136, 2007.
[7] G. Chechik, V. Sharma, U. Shalit, and S. Bengio. Large scale online learning of image similarity through ranking. In (Snowbird) Learning Workshop, 2009.
[8] S. Deerwester, S. Dumais, G. Furnas, T. Landauer, and R. Harshman. Indexing by latent semantic analysis. JASIS, 41(6):391–407, 1990.
[9] S. Dumais, T. Letsche, M. Littman, and T. Landauer. Automatic cross-language retrieval using latent semantic indexing. In AAAI Spring Symposium on Cross-Language Text and Speech Retrieval, pages 15–21, 1997.
[10] E. Gabrilovich and S. Markovitch. Computing semantic relatedness using Wikipedia-based explicit semantic analysis. In IJCAI, pages 1606–1611, 2007.
[11] P. Gehler, A. Holub, and M. Welling. The rate adapting Poisson (RAP) model for information retrieval and object recognition. In ICML, pages 337–344, 2006.
[12] A. Globerson and S. Roweis. Visualizing pairwise similarity via semidefinite programming. In AISTATS, 2007.
[13] S. Goel, J. Langford, and A. Strehl. Predictive indexing for fast search. In NIPS, pages 505–512, 2008.
[14] D. Grangier and S. Bengio. Inferring document similarity from hyperlinks. In CIKM, pages 359–360, 2005.
[15] D. Grangier and S. Bengio. A discriminative kernel-based approach to rank images from text queries. IEEE Transactions on Pattern Analysis and Machine Intelligence, 30(8):1371–1384, 2008.
[16] G. Grefenstette. Cross-Language Information Retrieval. Kluwer, Norwell, MA, USA, 1998.
[17] R. Herbrich, T. Graepel, and K. Obermayer. Advances in Large Margin Classifiers, chapter Large margin rank boundaries for ordinal regression. MIT Press, Cambridge, MA, 2000.
[18] T. Hofmann. Probabilistic latent semantic indexing. In SIGIR, pages 50–57, 1999.
[19] P. Jain, B. Kulis, I. S. Dhillon, and K. Grauman. Online metric learning and fast similarity search. In NIPS, pages 761–768, 2008.
[20] T. Joachims. Optimizing search engines using clickthrough data. In SIGKDD, pages 133–142, 2002.
[21] M. Keller and S. Bengio. A neural network for text representation. In International Conference on Artificial Neural Networks, 2005. IDIAP-RR 05-12.
[22] T. Liu, J. Xu, T. Qin, W. Xiong, and H. Li. LETOR: Benchmark dataset for research on learning to rank for information retrieval. In Proceedings of the SIGIR 2007 Workshop on Learning to Rank, 2007.
[23] R. Salakhutdinov and G. Hinton. Semantic hashing. In Proceedings of the SIGIR Workshop on Information Retrieval and Applications of Graphical Models, 2007.
[24] G. Salton and M. McGill. Introduction to Modern Information Retrieval. McGraw-Hill, 1986.
[25] Q. Shi, J. Petterson, G. Dror, J. Langford, A. Smola, A. Strehl, and V. Vishwanathan. Hash kernels. In AISTATS, 2009.
[26] A. Vinokourov, J. Shawe-Taylor, and N. Cristianini. Inferring a semantic representation of text via cross-language correlation analysis. In NIPS, pages 1497–1504, 2003.
[27] K. Weinberger and L. Saul. Fast solvers and efficient implementations for distance metric learning. In ICML, pages 1160–1167, 2008.
[28] Y. Yue, T. Finley, F. Radlinski, and T. Joachims. A support vector method for optimizing average precision. In SIGIR, pages 271–278, 2007.
[29] L. Zighelnic and O. Kurland. Query-drift prevention for robust query expansion. In SIGIR, pages 825–826, 2008.
Jong Kyoung Kim and Seungjin Choi
Department of Computer Science
Pohang University of Science and Technology
San 31 Hyoja-dong, Nam-gu
Pohang 790-784, Korea
{blkimjk,seungjin}@postech.ac.kr
Abstract
Most existing methods for DNA motif discovery consider only a single set of
sequences to find an over-represented motif. In contrast, we consider multiple
sets of sequences where we group sets associated with the same motif into a cluster, assuming that each set involves a single motif. Clustering sets of sequences
yields clusters of coherent motifs, improving signal-to-noise ratio or enabling us
to identify multiple motifs. We present a probabilistic model for DNA motif discovery where we identify multiple motifs through searching for patterns which
are shared across multiple sets of sequences. Our model infers cluster-indicating
latent variables and learns motifs simultaneously, where these two tasks interact
with each other. We show that our model can handle various motif discovery problems, depending on how to construct multiple sets of sequences. Experiments on
three different problems for discovering DNA motifs emphasize the useful behavior and confirm the substantial gains over existing methods where only a single
set of sequences is considered.
1 Introduction
Discovering how DNA-binding proteins called transcription factors (TFs) regulate gene expression
programs in living cells is fundamental to understanding transcriptional regulatory networks controlling development, cancer, and many human diseases. TFs that bind to specific cis-regulatory
elements in DNA sequences are essential for mediating this transcriptional control. The first step
toward deciphering this complex network is to identify functional binding sites of TFs referred to as
motifs.
We address the problem of discovering sequence motifs that are enriched in a given target set of
sequences, compared to a background model (or a set of background sequences). There have been
extensive research works on statistical modeling of this problem (see [1] for review), and recent
works have focused on improving the motif-finding performance by integrating additional information into comparative [2] and discriminative motif discovery [3].
Despite the relatively long history and the critical role of motif discovery in bioinformatics, many issues remain unsolved and controversial. First, the target set of sequences is assumed to have only
one motif, but this assumption is often incorrect. For example, a recent study examining the binding
specificities of 104 mouse TFs observed that nearly half of the TFs recognize multiple sequence
motifs [4]. Second, it is unclear how to select the target set on which over-represented motifs are
returned. The target set of sequences is often constructed from genome-wide binding location data
(ChIP-chip or ChIP-seq) or gene expression microarray data. However, there is no clear way to
partition the data into target and background sets in general. Third, a unified algorithm which is
applicable to diverse motif discovery problems is solely needed to provide a principled framework
for developing more complex models.
1
[Figure 1 (notation illustration): each of the M sequence sets S_1, …, S_M contains sequences s_{m,1}, …, s_{m,L_m}; z_{m,ij} = [0, 1]^T marks position j of sequence s_{m,i} as the start of a binding site, and s^W_{m,ij} = (s_{m,ij}, s_{m,i(j+1)}, …, s_{m,i(j+W−1)}) is the corresponding width-W subsequence. The example sequence ···ACAGCAGAGGGGTGGAG··· is shown with the substring AGAGGGGT as a binding site.]
Figure 1: Notation illustration.
These considerations motivate us to develop a generative probabilistic framework for learning multiple motifs on multiple sets of sequences. One can view our framework as an extension of the classic
sequence models such as the two-component mixture (TCM) [5] and the zero or one occurrence per
sequence (ZOOPS) [6] models in which sequences are partitioned into two clusters, depending on
whether or not they contain a motif. In this paper, we make use of a finite mixture model to partition
the multiple sequence sets into clusters having distinct sequence motifs, which improves the motif-finding performance over the classic models by enhancing the signal-to-noise ratio of input sequences.
We also show how our algorithm can be applied to three different problems by simply changing the way multiple sets are constructed from input sequences, without any algorithmic modifications.
2 Problem formulation
We are given M sets of DNA sequences S = {S_1, …, S_M} to be grouped according to the type of motif involved, in which each set is associated with only a single motif but multiple binding sites may be present in each sequence. A set of DNA sequences S_m = {s_{m,1}, …, s_{m,L_m}} is a collection of strings s_{m,i} of length |s_{m,i}| over the alphabet Σ = {A, C, G, T}. To allow for a variable number of binding sites per sequence, we represent each sequence s_{m,i} as a set of overlapping subsequences s^W_{m,ij} = (s_{m,ij}, s_{m,i(j+1)}, …, s_{m,i(j+W−1)}) of length W starting at position j ∈ I_{m,i}, where s_{m,ij} denotes the letter at position j and I_{m,i} = {1, …, |s_{m,i}| − W + 1}, as shown in Fig. 1. We introduce a latent variable matrix z_{m,i} ∈ R^{2×|I_{m,i}|} in which the j-th column vector z_{m,ij} is a 2-dimensional binary random vector [z_{m,ij1}, z_{m,ij2}]^T such that z_{m,ij} = [0, 1]^T if a binding site starts at position j ∈ I_{m,i}, and otherwise z_{m,ij} = [1, 0]^T. We also introduce K-dimensional binary random vectors t_m ∈ R^K (t_{m,k} ∈ {0, 1} and Σ_k t_{m,k} = 1) for m = 1, …, M, which partition the sequence sets S into K disjoint clusters, where sets in the same cluster are associated with the same common motif.
For a motif model, we use a position-frequency matrix whose entries correspond to probability distributions (over the alphabet Σ) of each position within a binding site. We denote by θ_k ∈ R^{W×4} the k-th motif model of length W over Σ, where θ^T_{k,w} represents row w; each entry is non-negative, θ_{k,wl} ≥ 0 for all w, l, and Σ_{l=1}^{4} θ_{k,wl} = 1 for all w. The background model θ_0, which describes frequencies over the alphabet within non-binding sites, is defined by a P-th order Markov chain (represented by a (P + 1)-dimensional conditional probability table).
Our goal is to construct a probabilistic model for DNA motif discovery in which we identify multiple motifs by searching for patterns that are shared across multiple sets of sequences. Our model infers the cluster-indicating latent variables (to find a good partition of S) and learns motifs (inferring the binding-site-indicating latent variables z_{m,i}) simultaneously, where these two tasks interact with each other.
[Figure 2: plate diagram with latent variables z_{m,ij} and t_m (plates over the |I_{m,i}| positions, the L_m sequences, the M sets, and the K motif models θ_k), observed subsequences s^W_{m,ij}, mixture weights λ and v, background model θ_0, and hyperparameters α and β.]
Figure 2: Graphical representation of our mixture model for M sequence sets.
3 Mixture model for motif discovery
We assume that the distribution of S is modeled as a mixture of K components, where it is not known in advance which mixture component underlies a particular set of sequences. We also assume that the conditional distribution of the subsequence s^W_{m,ij} given t_m is modeled as a mixture of two components, corresponding to the motif and the background models, respectively. Then, the joint distribution of the observed sequence sets S and the (unobserved) latent variables Z and T conditioned on parameters Θ is written as:
p(S, Z, T | Θ) = ∏_{m=1}^{M} p(t_m | Θ) ∏_{i=1}^{L_m} ∏_{j ∈ I_{m,i}} p(s^W_{m,ij} | z_{m,ij}, t_m, Θ) p(z_{m,ij} | Θ),    (1)
where Z = {z_{m,ij}} and T = {t_m}. The graphical model associated with (1) is shown in Fig. 2. The generative process for the subsequences s^W_{m,ij} is described as follows. We first draw mixture weights v = [v_1, …, v_K]^T (involving set clusters) from the Dirichlet distribution:
p(v|?) ?
K
Y
?k
vkK
?1
(2)
,
k=1
where $\alpha = [\alpha_1, \ldots, \alpha_K]^\top$ are the hyperparameters. Given the mixture weights, we choose the cluster indicator $t_m$ for $S_m$ according to the multinomial distribution $p(t_m \mid v) = \prod_{k=1}^{K} v_k^{t_{m,k}}$. The chosen $k$th motif model $\theta_k$ is drawn from the product of Dirichlet distributions:
$$p(\theta_k \mid \beta) = \prod_{w=1}^{W} p(\theta_{k,w} \mid \beta) \propto \prod_{w=1}^{W} \prod_{l=1}^{4} \theta_{k,wl}^{\beta_l - 1}, \tag{3}$$
where $\beta = [\beta_1, \ldots, \beta_4]^\top$ are the hyperparameters. The latent variables $z_{m,ij}$ indicating the starting positions of binding sites are governed by the prior distribution specified by:
$$p(z_{m,ij} \mid \pi) = \prod_{r=1}^{2} \pi_r^{z_{m,ijr}}, \tag{4}$$
where the mixture weights $\pi = [\pi_1, \pi_2]^\top$ satisfy $\pi_1, \pi_2 \geq 0$ and $\pi_1 + \pi_2 = 1$. Finally, the subsequences $s^W_{m,ij}$ are drawn from the following conditional distribution:
$$p(s^W_{m,ij} \mid t_m, z_{m,ij}, \{\theta_k\}_{k=1}^{K}, \theta_0) = p(s^W_{m,ij} \mid \theta_0)^{z_{m,ij1}} \prod_{k=1}^{K} \left( p(s^W_{m,ij} \mid \theta_k)^{z_{m,ij2}} \right)^{t_{m,k}},$$
where
$$p(s^W_{m,ij} \mid \theta_0) = \prod_{w=1}^{W} \prod_{l=1}^{4} \theta_{0l}^{\delta(l,\, s_{m,i(j+w-1)})}, \qquad p(s^W_{m,ij} \mid \theta_k) = \prod_{w=1}^{W} \prod_{l=1}^{4} \theta_{k,wl}^{\delta(l,\, s_{m,i(j+w-1)})}, \tag{5}$$
where $\delta(l, s_{m,i(j+w-1)})$ is an indicator function which equals 1 if $s_{m,i(j+w-1)} = l$, and 0 otherwise. Here, the background model is specified by a 0th-order Markov chain for notational simplicity.
Several assumptions simplify this generative model. First, the width $W$ of the motif model and the number $K$ of set clusters are assumed to be known and fixed. Second, the mixture weights $\pi$ together with the background model $\theta_0$ are treated as parameters to be estimated. We assume the hyperparameters $\alpha$ and $\beta$ are set to fixed and known constants. The full set of parameters and hyperparameters will be denoted by $\Theta = \{\alpha, \beta, \pi, \theta_0\}$. The extension to double-stranded DNA sequences is straightforward and omitted here due to lack of space.
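The generative process of equations (2)-(5) can be sketched as follows. All sizes and hyperparameter values here are toy assumptions for illustration, and for simplicity this sketch does not exclude overlapping binding sites.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (illustrative only): M sets, K clusters, motif width W,
# one sequence of length 20 per set.
M, K, W, seq_len = 6, 2, 4, 20
alpha = np.ones(K)            # Dirichlet hyperparameters for v, eq. (2)
beta = 0.5 * np.ones(4)       # Dirichlet hyperparameters per motif row, eq. (3)
pi = np.array([0.9, 0.1])     # prior over z: background vs. motif start, eq. (4)
theta_0 = np.full(4, 0.25)    # 0th-order background model

v = rng.dirichlet(alpha)                  # set-cluster mixture weights
theta = rng.dirichlet(beta, size=(K, W))  # K motif models, shape (K, W, 4)
t = rng.choice(K, size=M, p=v)            # cluster indicators t_m

def sample_sequence(k):
    """One sequence from cluster k: background letters, with binding sites
    planted at starts where z_2 = 1 (overlaps allowed in this toy sketch)."""
    seq = rng.choice(4, size=seq_len, p=theta_0)
    for j in range(seq_len - W + 1):
        if rng.random() < pi[1]:
            for w in range(W):
                seq[j + w] = rng.choice(4, p=theta[k, w])
    return seq

S = [sample_sequence(t[m]) for m in range(M)]
```

Each set here holds a single sequence for brevity; the model itself allows $L_m$ sequences per set.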
Our model builds upon the existing TCM model proposed by [5], where the EM algorithm is applied to learn a motif on a single target set. This model actually generates subsequences rather than the sequences themselves. An alternative model which explicitly generates sequences has been proposed based on Gibbs sampling [7, 8]. Note that our model reduces to the TCM model if $K$, the number of set clusters, is set to one.
Our model shares some similarities with the recent Bayesian hierarchical model in [9], which also uses a mixture model to cluster discovered motifs. The main difference is that they focus on clustering motifs that have already been discovered, whereas in our formulation we cluster sequence sets and discover motifs simultaneously.
4 Inference by Gibbs sampling
We find the configurations of $Z$ and $T$ by maximizing the posterior distribution over latent variables:
$$Z^{*}, T^{*} = \arg\max_{Z,T}\ p(Z, T \mid S, \Theta). \tag{6}$$
To this end, we use Gibbs sampling to find the posterior modes by drawing samples repeatedly from the posterior distribution over $Z$ and $T$. We will derive a Gibbs sampler for our generative model in which the set mixture weights $v$ and the motif models $\{\theta_k\}_{k=1}^{K}$ are integrated out, to improve the convergence rate and reduce the cost per iteration [8].
The critical quantities needed to implement the Gibbs sampler are the full conditional distributions for $Z$ and $T$. We first derive the relevant full conditional distribution over $t_m$ conditioned on the set cluster assignments of all other sets, $T_{\setminus m}$, the latent positions $Z$, and the observed sets $S$. By applying Bayes' rule, Fig. 2 implies that this distribution factorizes as follows:
$$p(t_{m,k}=1 \mid T_{\setminus m}, S, Z, \Theta) \propto p(t_{m,k}=1 \mid T_{\setminus m}, \alpha)\, p(S_m, Z_m \mid T, S_{\setminus m}, Z_{\setminus m}, \Theta), \tag{7}$$
where $Z_{\setminus m}$ denotes the entries of $Z$ other than $Z_m = \{z_{m,i}\}_{i=1}^{L_m}$, and $S_{\setminus m}$ is similarly defined. The first term represents the predictive distribution of $t_m$ given the other set cluster assignments $T_{\setminus m}$, and is given by marginalizing out the set mixture weights $v$:
$$p(t_{m,k}=1 \mid T_{\setminus m}, \alpha) = \int_{v} p(t_{m,k}=1 \mid v)\, p(v \mid T_{\setminus m}, \alpha)\, dv = \frac{N^{\setminus m}_{k} + \frac{\alpha_k}{K}}{M - 1 + \alpha_k}, \tag{8}$$
where $N^{\setminus m}_{k} = \sum_{n \neq m} \delta(t_{n,k}, 1)$. Note that $N^{\setminus m}_{k}$ counts the number of sets currently assigned to the $k$th set cluster, excluding the $m$th set. The model's Markov structure implies that the second term of (7) depends on the current assignments $T_{\setminus m}$ as follows:
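As a sanity check, the predictive distribution (8) amounts to count-plus-pseudo-count normalization. A minimal sketch:

```python
import numpy as np

def cluster_predictive(t_others, K, alpha_k):
    """p(t_{m,k}=1 | T_{\\m}, alpha) as in eq. (8): cluster counts over the
    other M-1 sets plus a symmetric pseudo-count alpha_k / K, normalized
    by M - 1 + alpha_k."""
    M = len(t_others) + 1
    counts = np.bincount(t_others, minlength=K)   # N_k^{\m}
    return (counts + alpha_k / K) / (M - 1 + alpha_k)

# Three other sets, two assigned to cluster 0 and one to cluster 1:
p = cluster_predictive(np.array([0, 0, 1]), K=2, alpha_k=1.0)
# p = [(2 + 0.5) / 4, (1 + 0.5) / 4] = [0.625, 0.375]
```

Clusters that already hold many sets are therefore favored, with the pseudo-count $\alpha_k / K$ keeping empty clusters reachable.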
$$p(S_m, Z_m \mid t_{m,k}=1, T_{\setminus m}, S_{\setminus m}, Z_{\setminus m}, \Theta)$$
$$= \int_{\theta_k} p(S_m, Z_m \mid t_{m,k}=1, \theta_k, \Theta)\, p(\theta_k \mid \{S_n, Z_n \mid t_{n,k}=1, n \neq m\}, \Theta)\, d\theta_k$$
$$\propto \left[ \prod_{i=1}^{L_m} \prod_{j \in I_{m,i}} p(z_{m,ij} \mid \pi) \right] \left[ \prod_{i=1}^{L_m} \prod_{j \in I_{m,i};\, z_{m,ij2}=0} p(s^W_{m,ij} \mid z_{m,ij2}=0, \theta_0) \right] \prod_{w=1}^{W} \frac{\Gamma\big(\sum_l (N^{\setminus m}_{wl} + \beta_l)\big) \prod_l \Gamma(N_{wl} + \beta_l)}{\Gamma\big(\sum_l (N_{wl} + \beta_l)\big) \prod_l \Gamma(N^{\setminus m}_{wl} + \beta_l)}, \tag{9}$$
where $N_{wl} = N^{\setminus m}_{wl} + N^{m}_{wl}$ and
$$N^{\setminus m}_{wl} = \sum_{t_{n,k}=1,\, n \neq m} \sum_{i=1}^{L_n} \sum_{j \in I_{n,i},\, z_{n,ij2}=1} \delta(s_{n,i(j+w-1)}, l), \qquad N^{m}_{wl} = \sum_{i=1}^{L_m} \sum_{j \in I_{m,i},\, z_{m,ij2}=1} \delta(s_{m,i(j+w-1)}, l).$$
Note that $N^{\setminus m}_{wl}$ counts the number of letter $l$ at position $w$ within currently assigned binding sites, excluding those of the $m$th set. Similarly, $N^{m}_{wl}$ denotes the number of letter $l$ at position $w$ within binding sites of the $m$th set.
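The count matrices above are simple tallies over the currently assigned binding sites. A small illustrative implementation (toy sequences, letters coded 0-3):

```python
import numpy as np

def count_matrix(seqs, sites, W):
    """N_wl: how often letter l occurs at motif position w over the given
    binding sites; `sites` lists (sequence index, start position) pairs
    for sites with z_2 = 1 (letters coded 0..3)."""
    N = np.zeros((W, 4), dtype=int)
    for i, j in sites:
        for w in range(W):
            N[w, seqs[i][j + w]] += 1
    return N

seqs = [np.array([0, 1, 2, 3, 0, 1]),
        np.array([2, 0, 1, 2, 3, 3])]
N = count_matrix(seqs, [(0, 1), (1, 2)], W=3)
# Both sites read the letters (1, 2, 3), so each motif position ends up
# with count 2 for exactly one letter.
```

Excluding one set (or one site) as in $N^{\setminus m}_{wl}$ just means dropping the corresponding pairs from `sites` before counting.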
We next derive the full conditional distribution of $z_{m,ij}$ given the remainder of the variables. Integrating over the motif model $\theta_k$, we then have the following factorization:
$$p(z_{m,ij} \mid Z_{\setminus m,ij}, S, t_{m,k}=1, T_{\setminus m}, \Theta) \propto \int_{\theta_k} \prod_{t_{n,k}=1} p(Z_n, S_n \mid \theta_k)\, p(\theta_k \mid \beta)\, d\theta_k$$
$$\propto \left[ \prod_{t_{n,k}=1} \prod_{i=1}^{L_n} \prod_{j \in I_{n,i}} p(z_{n,ij} \mid \pi) \right] \left[ \prod_{t_{n,k}=1} \prod_{i=1}^{L_n} \prod_{j \in I_{n,i};\, z_{n,ij2}=0} p(s^W_{n,ij} \mid \theta_0) \right] \prod_{w=1}^{W} \frac{\prod_l \Gamma(N_{wl} + \beta_l)}{\Gamma\big(\sum_l (N_{wl} + \beta_l)\big)}, \tag{10}$$
where $Z_{\setminus m,ij}$ denotes the entries of $Z$ other than $z_{m,ij}$. For the purpose of sampling, the ratio of the posterior distribution of $z_{m,ij}$ is given by:
$$\frac{p(z_{m,ij2}=1 \mid Z_{\setminus m,ij}, S, T, \Theta)}{p(z_{m,ij2}=0 \mid Z_{\setminus m,ij}, S, T, \Theta)} = \frac{\pi_2 \prod_{w=1}^{W} \sum_{l=1}^{4} (N^{\setminus m,ij}_{wl} + \beta_l)\, \delta(s_{m,i(j+w-1)}, l)}{\pi_1\, p(s^W_{m,ij} \mid \theta_0) \prod_{w=1}^{W} \sum_{l=1}^{4} (N^{\setminus m,ij}_{wl} + \beta_l)},$$
where $N^{\setminus m,ij}_{wl} = \sum_{t_{n,k}=1} \sum_{i=1}^{L_n} \sum_{j' \in I_{n,i},\, j' \neq j,\, z_{n,ij'2}=1} \delta(s_{n,i(j'+w-1)}, l)$. Note that $N^{\setminus m,ij}_{wl}$ denotes the number of letter $l$ at position $w$ within currently assigned binding sites other than $z_{m,ij}$.
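The sampling ratio above can be sketched as follows. This is a simplified illustration assuming a symmetric Dirichlet prior ($\beta_l = \beta$) and a 0th-order background; it is our reading of the collapsed ratio, not the authors' implementation.

```python
import numpy as np

def z_odds(s_sub, N_excl, pi, theta_0, beta=0.5):
    """Odds p(z_2=1) / p(z_2=0) for one candidate site: a Dirichlet-smoothed
    motif predictive (counts N_excl exclude the site itself) against the
    0th-order background, weighted by the prior pi."""
    W = len(s_sub)
    motif = 1.0
    for w in range(W):
        motif *= (N_excl[w, s_sub[w]] + beta) / (N_excl[w].sum() + 4 * beta)
    background = float(np.prod(theta_0[s_sub]))
    return (pi[1] * motif) / (pi[0] * background)

# With no other assigned sites and a uniform background, the data terms
# cancel and the odds reduce to the prior odds pi_2 / pi_1:
odds = z_odds(np.array([0, 1, 2]), np.zeros((3, 4)),
              np.array([0.9, 0.1]), np.full(4, 0.25))
```

A Gibbs sweep would then set $z_{m,ij2}=1$ with probability `odds / (1 + odds)` at each candidate start.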
Combining (7) with (10) is sufficient to define the Gibbs sampler for our finite mixture model. To provide a convergence measure, we derive the following objective function based on the log of the posterior distribution:
$$\log p(Z, T \mid S, \Theta) \propto \log p(Z, T, S \mid \Theta)$$
$$\propto \sum_{m=1}^{M} \sum_{i=1}^{L_m} \sum_{j \in I_{m,i}} \log p(z_{m,ij} \mid \pi) + \sum_{m=1}^{M} \sum_{i=1}^{L_m} \sum_{j \in I_{m,i};\, z_{m,ij2}=0} \log p(s^W_{m,ij} \mid \theta_0)$$
$$+ \sum_{k=1}^{K} \sum_{w=1}^{W} \left\{ \sum_{l} \log \Gamma(N^{k}_{wl} + \beta_l) - \log \Gamma\Big(\sum_{l} (N^{k}_{wl} + \beta_l)\Big) \right\} + \sum_{k=1}^{K} \log \Gamma\Big(N_k + \frac{\alpha_k}{K}\Big), \tag{11}$$
where $N_k = \sum_m \delta(t_{m,k}, 1)$ and $N^{k}_{wl} = \sum_{t_{m,k}=1} \sum_{i=1}^{L_m} \sum_{j \in I_{m,i},\, z_{m,ij2}=1} \delta(s_{m,i(j+w-1)}, l)$.

5 Results
We evaluated our motif-finding algorithm on three different tasks: (1) filtering out undesirable noisy sequences, (2) incorporating evolutionary conservation information, and (3) clustering DNA sequences based on the learned motifs (Fig. 3). In all experiments, we fixed the hyperparameters so that $\alpha_k = 1$ and $\beta_l = 0.5$.
5.1 Data sets and evaluation criteria
We first examined the yeast ChIP-chip data published by [10] to investigate the effect of filtering out noisy sequences from the input on identifying true binding sites. We compiled 156 sequence-sets by choosing TFs having consensus motifs in the literature [11]. For each sequence-set, we defined its sequences to be probe sequences that are bound with P-value ≤ 0.001.
Figure 3: Three different ways of constructing multiple sequence sets: (a) filtering out noisy sequences, (b) evolutionary conservation, (c) motif-based clustering. Black rectangles: sequence sets; blue bars: sequences; red dashed rectangles: set clusters; red and green rectangles: motifs.
To apply our algorithm to the comparative motif discovery problem, we compiled orthologous sequences for each probe sequence of the yeast ChIP-chip data based on the multiple alignments of seven species of Saccharomyces (S. cerevisiae, S. paradoxus, S. mikatae, S. kudriavzevii, S. bayanus, S. castelli, and S. kluyveri) [12]. In the experiments using the ChIP-chip data, the motif width was set to 8, and a fifth-order Markov chain estimated from the whole yeast intergenic sequences was used to describe the background model. We fixed the mixture weights $\pi$ so that $\pi_2 = 0.001$.
We next constructed the ChIP-seq data for the human neuron-restrictive silencer factor (NRSF) to determine whether our algorithm can partition DNA sequences into biologically meaningful clusters [13]. The data consist of 200 sequence segments of length 100 from all peak sites with the top 10% binding intensity (≥ 500 ChIP-seq reads), where most sequences have canonical NRSF-binding sites. We also added 13 sequence segments extracted from peak sites (≥ 300 reads) known to have noncanonical NRSF-binding sites, resulting in 213 sequences. In the experiment using the ChIP-seq data, the motif width was set to 30, and a zero-order Markov chain estimated from the 213 sequence segments was used to describe the background model. We fixed the mixture weights $\pi$ so that $\pi_2 = 0.005$.
In the experiments using the yeast ChIP-chip data, we used the inter-motif distance to measure the quality of discovered motifs [10]. Specifically, an algorithm is called successful on a sequence set only if at least one of the position-frequency matrices constructed from the identified binding sites is at a distance of less than 0.25 from the literature consensus [14].
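The exact inter-motif distance is defined in [14]; as a stand-in for experimentation, one common choice is the average column-wise Euclidean distance between position-frequency matrices, scaled to lie in [0, 1]. This particular formula is an assumption for illustration, not necessarily the metric of [14].

```python
import numpy as np

def pfm_distance(A, B):
    """Average column-wise Euclidean distance between two W x 4
    position-frequency matrices, scaled by sqrt(2) so it lies in [0, 1]."""
    assert A.shape == B.shape
    per_position = np.linalg.norm(A - B, axis=1) / np.sqrt(2.0)
    return float(per_position.mean())

A = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])
B = np.array([[0.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0]])
# Identical motifs are at distance 0; disjoint deterministic columns at 1.
```

In practice the two matrices must also be aligned over offsets (and reverse complements) before taking the minimum distance.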
5.2 Filtering out noisy sequences
Selecting target sequences from ChIP-chip measurements is largely left to users, and this choice is often unclear. Our strategy of constructing sequence-sets based on a binding P-value cutoff risks including many irrelevant sequences. In practice, the inclusion of noisy sequences in the target set is a serious obstacle to successful motif discovery. One possible solution is to cluster the input sequences into two smaller sets of target and noisy sequences based on sequence similarity, and to predict motifs from the clustered target sequences with the improved signal-to-noise ratio. This two-step approach has been applied only to protein sequences, because DNA sequences do not share enough similarity for effective clustering [15].
One alternative approach is to seek a better sequence representation based on motifs. To this end, we
constructed multiple sets by treating each sequence of a particular yeast ChIP-chip sequence-set as
one set (Fig. 3(a)). We examined the ability of our algorithm to find a correct motif with two different
numbers of clusters: K = 1 (without filtering) and K = 2 (clustering into two subsets of true and
noisy sequences). We ran each experiment five times with different initializations and reported
means with ±1 standard error. Figure 4 shows that the filtering approach (K = 2) generally outperforms the baseline method (K = 1), increasingly so as the P-value cutoff grows. Note that the ZOOPS or TCM models can also handle noisy sequences, by modeling them with only a background model [5, 6]. But we allow noisy sequences to have a decoy motif (randomly occurring sequence patterns or repeating elements), which is modeled with a motif model. Because our model can be reduced to these classic models by setting K = 1, we conclude that noisy sequences were better represented by our clustering approach than by the previous ones using the background model (Fig. 4).

Figure 4: Effect of filtering out noisy sequences on the number of successfully identified motifs on the yeast ChIP-chip data. K = 1: without filtering; K = 2: clustering into two subsets.
Two additional lines of evidence indicate that our filtering approach enhances the signal-to-noise ratio of the target set. First, we compared the results of our filtering approach with those of other baseline methods (AlignAce [16], MEME [6], MDScan [17], and PRIORITY-U [11]) on the same yeast ChIP-chip data. For AlignAce, MEME and MDScan, we used the results reported by [14]; for PRIORITY-U, we used two different results reported by [14, 11], obtained with different sampling strategies. We expected our model to perform better than these four methods because they handle noisy sequences only through the classic models. By comparing the results of Fig. 4 and Table 1, we see that our algorithm still performs better. Second, we also compared our model with DRIM, which is specifically designed to dynamically select the target set from the list of sorted sequences according to the binding P-values of ChIP-chip measurements. For DRIM, we used the result reported by [18].
Because DRIM does not produce any motifs when they are not statistically enriched at the top of
the ranked list, we counted the number of successfully identified motifs on the sequence-sets where
DRIM generated significant motifs. Our method (16 successes) was slightly better than DRIM (15 successes).
5.3 Detecting evolutionary conserved motifs
A comparative approach using evolutionary conservation information has been widely used to improve the performance of motif-finding algorithms, because functional TF binding sites are likely to be conserved in orthologous sequences. To incorporate conservation information into our clustering framework, the orthologous sequences of each sequence of a particular yeast ChIP-chip sequence-set were considered as one set, and the number of clusters was set to 2 (Fig. 3(b)). The constructed sets contain at most 7 sequences because we only used seven species of Saccharomyces. We used the single result with the highest objective function value of (11) among five runs and compared it with the results of five conservation-based motif-finding algorithms on the same data set: MEME_c [10], PhyloCon [19], PhyME [20], PhyloGibbs [21], and PRIORITY-C [11]. For these five methods, we used the results reported by [11]. We did not compare with discriminative methods, which are known to perform better on this data set, because our model does not use negative sequences. Table 1 presents the motif-finding performance in terms of the number of correctly identified motifs for each algorithm. We see that our algorithm greatly outperforms the four alignment-based methods, which rely on multiple or pair-wise alignments of orthologous sequences to search for motifs conserved across the aligned blocks of orthologous sequences. In our opinion, this is because diverged regions other than the short conserved binding sites may prevent a correct alignment. Moreover, our algorithm performs somewhat better than PRIORITY-C, which is a recent alignment-free method. We believe this is because the signal-to-noise ratio of the input target set is enhanced by clustering.
5.4 Clustering DNA sequences based on motifs
To examine the ability of our algorithm to partition DNA sequences into biologically meaningful clusters, we applied our algorithm to the NRSF ChIP-seq data, which are assumed to contain two different NRSF motifs (Fig. 3(c)).

Table 1: Comparison of the number of successfully identified motifs on the yeast ChIP-chip data for different methods. NC: non-conservation; EC: evolutionary conservation; A: alignment-based; AF: alignment-free; C: clustering.

Method        Description    # of successes
AlignAce      NC             16
MEME          NC             35
MDScan        NC             54
PRIORITY-U    NC             46-58
MEME_c        EC + A         49
PhyloCon      EC + A         19
PhyME         EC + A         21
PhyloGibbs    EC + A         54
PRIORITY-C    EC + AF        69
This work     EC + AF + C    75

Figure 5: Sequence logos of the discovered NRSF motifs: (a) canonical NRSF motif; (b) noncanonical NRSF motif.

In this experiment, we already know the number of clusters
(K = 2). We ran our algorithm five times with different initializations and reported the one with
the highest objective function value. Position-frequency matrices of the two clusters are shown in Fig. 5. The two motifs correspond directly to the previously known motifs (the canonical and noncanonical NRSF motifs). However, other motif-finding algorithms such as MEME could not return the noncanonical motif, which is enriched in only a very small set of sequences. These observations suggest that our
motif-driven clustering approach is effective at inferring latent clusters of DNA sequences and can
be used to find unexpected novel motifs.
6 Conclusions
In this paper, we have presented a generative probabilistic framework for DNA motif discovery using multiple sets of sequences, in which we cluster DNA sequences and learn motifs interactively. We have presented a finite mixture model with two different types of latent variables: one is associated with cluster indicators and the other corresponds to motifs (transcription factor binding sites). These two types of latent variables are inferred alternately using multiple sets of sequences. Our empirical results show that the proposed method can be applied to various motif discovery problems, depending on how the multiple sets are constructed. In the future, we will explore several other extensions. For example, it would be interesting to examine the possibility of learning the number of clusters from data based on Dirichlet process mixture models, or to extend our probabilistic framework to discriminative motif discovery.
Acknowledgments: We thank Raluca Gordân for providing the literature consensus motifs and the script to compute the inter-motif distance. This work was supported by the National Core Research Center for Systems Bio-Dynamics funded by Korea NRF (Project No. 2009-0091509) and the WCU Program (Project No. R31-2008-000-10100-0). JKK was supported by a Microsoft Research Asia fellowship.
References
[1] G. D. Stormo. DNA binding sites: representation and discovery. Bioinformatics, 16:16–23, 2000.
[2] W. W. Wasserman and A. Sandelin. Applied bioinformatics for the identification of regulatory elements. Nature Reviews Genetics, 5:276–287, 2004.
[3] E. Segal, Y. Barash, I. Simon, N. Friedman, and D. Koller. From promoter sequence to expression: a probabilistic framework. In Proceedings of the International Conference on Research in Computational Molecular Biology, pages 263–272, 2002.
[4] G. Badis, M. F. Berger, A. A. Philippakis, S. Talukder, A. R. Gehrke, S. A. Jaeger, E. T. Chan, G. Metzler, A. Vedenko, X. Chen, H. Kuznetsov, C. F. Wang, D. Coburn, D. E. Newburger, Q. Morris, T. R. Hughes, and M. L. Bulyk. Diversity and complexity in DNA recognition by transcription factors. Science, 324:1720–1723, 2009.
[5] T. L. Bailey and C. Elkan. Fitting a mixture model by expectation maximization to discover motifs in biopolymers. In Proceedings of the International Conference on Intelligent Systems for Molecular Biology, 1994.
[6] T. L. Bailey and C. Elkan. The value of prior knowledge in discovering motifs with MEME. In Proceedings of the International Conference on Intelligent Systems for Molecular Biology, 1995.
[7] C. E. Lawrence, S. F. Altschul, M. S. Boguski, J. S. Liu, A. F. Neuwald, and J. C. Wootton. Detecting subtle sequence signals: a Gibbs sampling strategy for multiple alignment. Science, 262:208–214, 1993.
[8] J. S. Liu, A. F. Neuwald, and C. E. Lawrence. Bayesian models for multiple local sequence alignment and Gibbs sampling strategies. Journal of the American Statistical Association, 90:1156–1170, 1995.
[9] S. T. Jensen and J. S. Liu. Bayesian clustering of transcription factor binding motifs. Journal of the American Statistical Association, 103:188–200, 2008.
[10] C. T. Harbison, D. B. Gordon, T. I. Lee, N. J. Rinaldi, K. D. Macisaac, T. W. Danford, N. M. Hannett, J. B. Tagne, D. B. Reynolds, J. Yoo, E. G. Jennings, J. Zeitlinger, D. K. Pokholok, M. Kellis, P. A. Rolfe, K. T. Takusagawa, E. S. Lander, D. K. Gifford, E. Fraenkel, and R. A. Young. Transcriptional regulatory code of a eukaryotic genome. Nature, 431:99–104, 2004.
[11] R. Gordan, L. Narlikar, and A. J. Hartemink. A fast, alignment-free, conservation-based method for transcription factor binding site discovery. In Proceedings of the International Conference on Research in Computational Molecular Biology, pages 98–111, 2008.
[12] A. Siepel, G. Bejerano, J. S. Pedersen, A. S. Hinrichs, M. Hou, K. Rosenbloom, H. Clawson, J. Spieth, L. W. Hillier, S. Richards, G. M. Weinstock, R. K. Wilson, R. A. Gibbs, W. J. Kent, W. Miller, and D. Haussler. Evolutionarily conserved elements in vertebrate, insect, worm, and yeast genomes. Genome Research, 15:1034–1050, 2005.
[13] D. S. Johnson, A. Mortazavi, R. M. Myers, and B. Wold. Genome-wide mapping of in vivo protein-DNA interactions. Science, 316:1497–1502, 2007.
[14] L. Narlikar, R. Gordan, and A. J. Hartemink. Nucleosome occupancy information improves de novo motif discovery. In Proceedings of the International Conference on Research in Computational Molecular Biology, pages 107–121, 2007.
[15] S. Kim, Z. Wang, and M. Dalkilic. iGibbs: improving Gibbs motif sampler for proteins by sequence clustering and iterative pattern sampling. Proteins, 66:671–681, 2007.
[16] F. P. Roth, J. D. Hughes, P. W. Estep, and G. M. Church. Finding DNA regulatory motifs within unaligned noncoding sequences clustered by whole-genome mRNA quantitation. Nature Biotechnology, 16:939–945, 1998.
[17] X. S. Liu, D. L. Brutlag, and J. S. Liu. An algorithm for finding protein-DNA binding sites with applications to chromatin-immunoprecipitation microarray experiments. Nature Biotechnology, 20:835–839, 2002.
[18] E. Eden, D. Lipson, S. Yogev, and Z. Yakhini. Discovering motifs in ranked lists of DNA sequences. PLoS Computational Biology, 3:e39, 2007.
[19] T. Wang and G. D. Stormo. Combining phylogenetic data with co-regulated genes to identify regulatory motifs. Bioinformatics, 19:2369–2380, 2003.
[20] S. Sinha, M. Blanchette, and M. Tompa. PhyME: a probabilistic algorithm for finding motifs in sets of orthologous sequences. BMC Bioinformatics, 5:170, 2004.
[21] R. Siddharthan, E. D. Siggia, and E. van Nimwegen. PhyloGibbs: a Gibbs sampling motif finder that incorporates phylogeny. PLoS Computational Biology, 1:e67, 2005.
STDP enables spiking neurons to
detect hidden causes of their inputs
Bernhard Nessler, Michael Pfeiffer, and Wolfgang Maass
Institute for Theoretical Computer Science, Graz University of Technology
A-8010 Graz, Austria
{nessler,pfeiffer,maass}@igi.tugraz.at
Abstract
The principles by which spiking neurons contribute to the astounding computational power of generic cortical microcircuits, and how spike-timing-dependent
plasticity (STDP) of synaptic weights could generate and maintain this computational function, are unknown. We show here that STDP, in conjunction with
a stochastic soft winner-take-all (WTA) circuit, induces spiking neurons to generate through their synaptic weights implicit internal models for subclasses (or "causes") of the high-dimensional spike patterns of hundreds of pre-synaptic neurons. Hence these neurons will fire after learning whenever the current input best
matches their internal model. The resulting computational function of soft WTA
circuits, a common network motif of cortical microcircuits, could therefore be
a drastic dimensionality reduction of information streams, together with the autonomous creation of internal models for the probability distributions of their input patterns. We show that the autonomous generation and maintenance of this
computational function can be explained on the basis of rigorous mathematical
principles. In particular, we show that STDP is able to approximate a stochastic
online Expectation-Maximization (EM) algorithm for modeling the input data. A
corresponding result is shown for Hebbian learning in artificial neural networks.
1 Introduction
It is well-known that synapses change their synaptic efficacy ("weight") $w$ in dependence of the difference $t_{post} - t_{pre}$ of the firing times of the post- and presynaptic neuron, according to variations of a generic STDP rule (see [1] for a recent review). However, the computational benefit of this
learning rule is largely unknown [2, 3]. It has also been observed that local WTA-circuits form a
common network-motif in cortical microcircuits [4]. However, it is not clear how this network-motif
contributes to the computational power and adaptive capabilities of laminar cortical microcircuits,
out of which the cortex is composed. Finally, it has been conjectured for quite some while, on the
basis of theoretical considerations, that the discovery and representation of hidden causes of their
high-dimensional afferent spike inputs is a generic computational operation of cortical networks of
neurons [5]. One reason for this belief is that the underlying mathematical framework, ExpectationMaximization (EM), arguably provides the most powerful approach to unsupervised learning that
we know of. But one has so far not been able to combine these three potential pieces (STDP, WTAcircuits, EM) of the puzzle into a theory that could help us to unravel the organization of computation
and learning in cortical networks of neurons.
We show in this extended abstract that STDP in WTA-circuits approximates EM for discovering
hidden causes of large numbers of input spike trains. We first demonstrate this in section 2 in an
application to a standard benchmark dataset for the discovery of hidden causes. In section 3 we
show that the functioning of this demonstration can be explained on the basis of EM for simpler
non-spiking approximations to the spiking network considered in section 2.
1
2
Discovery of hidden causes for a benchmark dataset
We applied the network architecture shown in Fig. 1A to handwritten digits from the MNIST dataset
[6].1 This dataset consists of 70, 000 28 ? 28-pixel images of handwritten digits2 , from which we
picked the subset of 20, 868 images containing only the digits 0, 3 and 4. Training examples were
randomly sampled from this subset with a uniform distribution of digit classes.
[Figure 1 graphic: panel A shows the network architecture; panel B plots the two STDP learning curves ("Simple STDP curve" and "Complex STDP curve"), i.e. the weight change Δw_ki as a function of t_post − t_pre, with positive part c · e^{−w_ki} − 1 and a constant negative part of −1.]
Figure 1: A) Architecture for learning with STDP in a WTA-network of spiking neurons. B) Learning curve for the two STDP rules that were used (with σ = 10ms). The synaptic weight w_ki is
changed in dependence of the firing times t_pre of the presynaptic neuron y_i and t_post of the postsynaptic neuron z_k. If z_k fires at time t without a firing of y_i in the interval [t − σ, t + 2σ], w_ki is
reduced by 1. The resulting weight change is in any case multiplied with the current learning rate η,
which was chosen in the simulations according to the variance tracking rule7.
Pixel values xj were encoded through population coding by binary variables yi (spikes were produced for each variable yi by a Poisson process with a rate of 40 Hz for yi = 1, and 0 Hz for yi = 0,
at a simulation time step of 1ms, see Fig. 2A). Every training example x was presented for 50ms.
Every neuron yi was connected to all K = 10 output neurons z1 , . . . , z10 . A Poisson process caused
firing of one of the neurons zk on average every 5ms (see [8] for a more realistic firing mechanism).
The WTA-mechanism ensured that only one of the output neurons could fire at any time step. The
winning neuron at time step t was chosen from the soft-max distribution
p(z_k fires at time t | y) = e^{u_k(t)} / Σ_{l=1}^{K} e^{u_l(t)} ,   (1)
where u_k(t) = Σ_{i=1}^{n} w_ki ŷ_i(t) + w_k0 represents the current membrane potential of neuron z_k (with
ŷ_i(t) = 1 if y_i fired within the time interval [t − 10ms, t], else ŷ_i(t) = 0).3
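As a concrete, non-normative illustration, the winner selection from the soft-max distribution (1) can be sketched in a few lines of pure Python; the function name and the representation of the membrane potentials as a plain list are our own choices, not part of the model specification:

```python
import math
import random

def softmax_winner(u, rng=random):
    """Sample the index k of the firing output neuron z_k from the
    soft-max distribution p(k) = exp(u_k) / sum_l exp(u_l) over the
    current membrane potentials u = [u_1, ..., u_K]."""
    m = max(u)                                  # subtract max for numerical stability
    exps = [math.exp(uk - m) for uk in u]
    r = rng.random() * sum(exps)
    acc = 0.0
    for k, e in enumerate(exps):
        acc += e
        if r <= acc:
            return k
    return len(u) - 1                           # guard against rounding
```

In the simulations of section 2 this competition is resolved at every millisecond time step among the K = 10 output neurons.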
STDP with the learning curves shown in Fig. 1B was applied to all synapses wki for an input consisting of a continuous sequence of spike encodings of handwritten digits, each presented for 50ms (see
1 A similar network of spiking neurons had been applied successfully in [7] to learn with STDP the classification of symbolic (i.e., not handwritten) characters. Possibly our theoretical analysis could also be used to explain their simulation result.
2 Pixels were binarized to black/white. All pixels that were black in less than 5% of the training examples
were removed, leaving m = 429 external variables xj , that were encoded by n = 858 spiking neurons yi . Our
approach works just as well for external variables xj that assume any finite number of values, provided that
they are presented to the network through population coding with one variable yi for every possible value of
xj . In fact, the approach appears to work also for the commonly considered population coding of continuous
external variables.
3 This amounts to a representation of the EPSP caused by a firing of neuron y_i by a step function, which facilitates the theoretical analysis in section 3. Learning with the spiking network works just as well for biologically
realistic EPSP forms.
[Figure 2 graphic: spike raster plots. Panel A shows the input spike trains of the 858 input neurons over 150 ms; panels B ("Output before Learning") and C ("Output after Learning") show the responses of the 10 output neurons over the same 150 ms; horizontal axes: Time [ms].]
Figure 2: Unsupervised classification learning and sparsification of firing of output neurons after
training. For testing we presented three examples from an independent test set of handwritten digits
0, 3, 4 from the MNIST dataset, and compared the firing of the output-neurons before and after
learning. A) Representation of the three handwritten digits 0, 3, 4 for 50ms each by 858 spiking
neurons yi . B) Response of the output neurons before training. C) Response of the output neurons
after STDP (according to Fig. 1B) was applied to their weights wki for a continuous sequence of
spike encodings of 4000 randomly drawn examples of handwritten digits 0, 3, 4, each represented
for 50ms (like in panel A). The three output neurons z4 , z9 , z6 that respond have generated internal
models for the three shown handwritten digits according to Fig. 3C.
Fig. 2A).4 The learning rate η was chosen locally according to the variance tracking rule7. Fig. 2C
shows that for subsequent representations of new handwritten samples of the same digits only one
neuron responds during each of the 50ms while a handwritten digit is shown. The implicit internal
models which the output neurons z1 , . . . , z10 had created in their weights after applying STDP are
made explicit in Fig. 3B and C. Since there were more output neurons than digits, several output
neurons created internal models for different ways of writing the same digit. When after applying
STDP to 2000 random examples of handwritten digits 0 and 3 also examples of handwritten digit
4 were included in the next 2000 examples, the internal models of the 10 output neurons reorganized autonomously, to include now also two internal models for different ways of writing the digit
4. The adaptation of the spiking network to the examples shown so far is measured in Fig. 3A by
the normalized conditional entropy H(L|Z)/H(L, Z), where L denotes the correct classification of
each handwritten digit y, and Z is the random variable which denotes the cluster assignment with
p(Z = k|y) = p(zk = 1|y), the firing probabilities at the presentation of digit y, see (1).
Since after training by STDP each of the output neurons fire preferentially for one digit, we can
measure the emergent classification capability of the network. The resulting weight-settings achieve
a classification error of 2.19% on the digits 0 and 3 after 2000 training steps and 3.68% on all three
digits after 4000 training steps on independent test sets of 10,000 new samples each.
3 Underlying theoretical principles
We show in this section that one can analyze the learning dynamics of the spiking network considered in the preceding section (with the simple STDP curve of Fig. 1B) with the help of Hebbian
learning (using rule (12)) in a corresponding non-spiking neural network Nw . Nw is a stochastic
artificial neural network with the architecture shown in Fig. 1A, and with a parameter vector w consisting of thresholds wk0 (k = 1, . . . , K) for the K output units z1 , . . . , zK and weights wki for the
connection from the ith input node yi (i = 1, . . . , n) to the k th output unit zk . We assume that this
network receives at each discrete time step a binary input vector y ∈ {0, 1}^n and outputs a binary
vector z ∈ {0, 1}^K with Σ_{k=1}^{K} z_k = 1, where the k such that z_k = 1 is drawn from the distribution
4 Whereas the weights in the theoretical analysis of section 3 will approximate logs of probabilities (see (6)), one can easily make all weights non-negative by restricting the range of these log-probabilities to [−5, 0], and then adding a constant 5 to all weight values. This transformation gives rise to the factor c = e^5 in Fig. 1B.
[Figure 3, panel A graphic: normalized conditional entropy (y-axis, 0 to 0.5) versus number of training examples (x-axis, 0 to 4000) for four curves: "Spiking Network (simple STDP curve)", "Spiking Network (complex STDP curve)", "Non-spiking Network (no missing attributes)", "Non-spiking Network (35% missing attributes)". Panels B and C show the internal models.]
Figure 3: Analysis of the learning progress of the spiking network for the MNIST dataset. A)
Normalized conditional entropy (see text) for the spiking network with the two variants of STDP
learning rules illustrated in Fig. 1B (red solid and blue dashed lines), as well as two non-spiking
approximations of the network with learning rule (12) that are analyzed in section 3. According to
this analysis the non-spiking network with 35% missing attributes (dash-dotted line) is expected to
have a very similar learning behavior to the spiking network. 2000 random examples of handwritten
digits 0 and 3 were presented (for 50ms each) to the spiking network as the first 2000 examples.
Then for the next 2000 examples also samples of handwritten digit 4 were included. B) The implicit
internal models created by the neurons after 2000 training examples are made explicit by drawing
for each pixel the difference w_ki − w_k(i+1) of the weights for input y_i and y_{i+1} that encode the two
possible values (black/white) of the variable xj that encodes this pixel value. One can clearly see
that neurons created separate internal models for different ways of writing the two digits 0 and 3. C)
Re-organized internal models after 2000 further training examples that included digit 4. Two output
neurons had created internal models for the newly introduced digit 4.
over {1, . . . , K} defined by

p(z_k = 1 | y, w) = e^{u_k} / Σ_{l=1}^{K} e^{u_l} , with u_k = Σ_{i=1}^{n} w_ki y_i + w_k0 .   (2)
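For concreteness, the conditional distribution (2) can be computed directly; this is a sketch under our own naming conventions (w as a K×n list of weight lists, w0 as the list of thresholds), not code from the paper:

```python
import math

def output_distribution(y, w, w0):
    """p(z_k = 1 | y, w) = exp(u_k) / sum_l exp(u_l) as in (2),
    with u_k = sum_i w[k][i] * y[i] + w0[k]."""
    u = [w0[k] + sum(wki * yi for wki, yi in zip(w[k], y))
         for k in range(len(w0))]
    m = max(u)                     # log-sum-exp trick for stability
    exps = [math.exp(uk - m) for uk in u]
    z = sum(exps)
    return [e / z for e in exps]
```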
We consider the case where there are arbitrary discrete external variables x_1, . . . , x_m, each ranging
over {1, . . . , M} (we had M = 2 in section 2), and assume that these are encoded through binary
variables y_1, . . . , y_n for n = m · M with Σ_{i=1}^{n} y_i = m according to the rule

y_{(j−1)·M + r} = 1 ⟺ x_j = r ,   for j = 1, . . . , m and r = 1, . . . , M .   (3)
In other words: the group G_j of variables y_{(j−1)·M + 1}, . . . , y_{(j−1)·M + M} provides a population coding for the discrete variable x_j.
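A sketch of this encoding rule (3) in pure Python (the function name is ours; the indices j and r are 1-based as in the text, while Python lists are 0-based):

```python
def population_code(x, M):
    """Encode x = (x_1, ..., x_m), each x_j in {1, ..., M}, as the
    binary vector y of length n = m * M with
    y_{(j-1)*M + r} = 1  iff  x_j = r   (rule (3))."""
    y = [0] * (len(x) * M)
    for j, xj in enumerate(x, start=1):
        y[(j - 1) * M + (xj - 1)] = 1      # shift to 0-based list indices
    return y
```

Exactly one variable per group G_j is active, so sum(y) = m, as required.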
We now consider a class of probability distributions that is particularly relevant for our analysis:
mixtures of multinomial distributions [9], a generalization of mixtures of Bernoulli distributions
(see section 9.3.3 of [10]). This is a standard model for latent class analysis [11] in the case of
discrete variables. Mixtures of multinomial distributions are arbitrary mixtures of K distributions
p_1(x), . . . , p_K(x) that factorize, i.e.,

p_k(x) = Π_{j=1}^{m} p_kj(x_j)
for arbitrary distributions p_kj(x_j) over the range {1, . . . , M} of possible values for x_j. In other
words: there exists some distribution over hidden binary variables z_k with Σ_{k=1}^{K} z_k = 1, where the
k with z_k = 1 is usually referred to as a hidden "cause" in the generation of x, such that
p(x) = Σ_{k=1}^{K} p(z_k = 1) · p_k(x) .   (4)
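Such a generative model is easy to sample from. The following sketch is our own (with `prior[k]` standing for p(z_k = 1) and `cond[k][j]` for the distribution p_kj): it first draws a hidden cause and then each attribute independently:

```python
import random

def sample_mixture(prior, cond, rng=random):
    """Draw (k, x) from the mixture (4): hidden cause k with
    probability prior[k], then each x_j independently from
    cond[k][j], a distribution over the values {1, ..., M}."""
    k = rng.choices(range(len(prior)), weights=prior)[0]
    x = [rng.choices(range(1, len(pj) + 1), weights=pj)[0]
         for pj in cond[k]]
    return k, x
```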
We first observe that any such distribution p(x) can be represented with some suitable weight vector
w by the neural network N_w, after recoding of the multinomial variables x_j by binary variables y_i
as defined before:

p(y|w) = Σ_{k=1}^{K} e^{u*_k} , with u*_k := Σ_{i=1}^{n} w*_ki y_i + w*_k0 ,   (5)

for

w*_ki := log p(y_i = 1 | z_k = 1) and w*_k0 := log p(z_k = 1) .   (6)
In addition, N_w defines for any weight vector w whose components are normalized, i.e.

Σ_{k=1}^{K} e^{w_k0} = 1 and Σ_{i∈G_j} e^{w_ki} = 1 ,   for j = 1, . . . , m; k = 1, . . . , K,   (7)

a mixture of multinomials of the type (4).
The problem of learning a generative model for some arbitrarily given input distribution p*(x) (or
p*(y) after recoding according to (3)) by the neural network N_w is to find a weight vector w such
that p(y|w) defined by (5) models p*(y) as accurately as possible. As usual, we quantify this goal
by demanding that

E_{p*}[log p(y|w)]   (8)

is maximized.
Note that the architecture N_w is very useful from a functional point of view, because if (7) holds,
then the weighted sum u_k at its unit z_k has according to (2) the value log p(z_k = 1 | y, w), and the
stochastic WTA rule of N_w picks the "winner" k with z_k = 1 from this internally generated model
p(z_k = 1 | y, w) for the actual distribution p*(z_k = 1 | y) of hidden causes. We will not enforce the
normalization (7) explicitly during the subsequently considered learning process, but rather use a
learning rule (12) that turns out to automatically approximate such normalization in the limit.
Expectation Maximization (EM) is the standard method for maximizing E_{p*}[log p(y|w)]. We will
show that the simple STDP-rule of Fig. 1B for the spiking network of section 2 can be viewed as
an approximation to an online version of this EM method. We will first consider in section 3.1 the
standard EM-approach, and show that the Hebbian learning rule (12) provides a stochastic approximation to the maximization step.
3.1 Reduction to EM
The standard method for maximizing the expected log-likelihood E_{p*}[log p(y|w)] with a distribution p of the form p(y|w) = Σ_z p(y, z|w) with hidden variables z, is to observe that
E_{p*}[log p(y|w)] can be written for arbitrary distributions q(z|y) in the form

E_{p*}[log p(y|w)] = L(q, w) + E_{p*}[KL(q(z|y) || p(z|y, w))]   (9)

L(q, w) = E_{p*}[ Σ_z q(z|y) log ( p(y, z|w) / q(z|y) ) ] ,   (10)

where KL(·) denotes the Kullback-Leibler divergence.
In the E-step one sets q(z|y) = p(z|y, w_old) for the current parameter values w = w_old, thereby
achieving E_{p*}[KL(q(z|y) || p(z|y, w_old))] = 0. In the M-step one replaces w_old by new parameters
w that maximize L(q, w) for this distribution q(z|y). One can easily show that this is achieved by
setting

w*_ki = log p*(y_i = 1 | z_k = 1), and w*_k0 = log p*(z_k = 1),   (11)

with values for the variables z_k generated by q(z|y) = p(z|y, w_old), while the values for the variables y are generated by the external distribution p*. Note that this distribution of z is exactly the
distribution (2) of the output of the neural network N_w for inputs y generated by p*.5 In the following section we will show that this M-step can be approximated by applying iteratively a simple
Hebbian learning rule to the weights w of the neural network N_w.
5 Hence one can extend p*(y) for each fixed w to a joint distribution p*(y, z), where the z are generated for each y by N_w.
3.2 A Hebbian learning rule for the M-step
We show here that the target weight values (11) are the only equilibrium points of the following
Hebbian learning rule:

Δw_ki = η (e^{−w_ki} − 1), if y_i = 1 and z_k = 1;   −η, if y_i = 0 and z_k = 1;   0, if z_k = 0
Δw_k0 = η (e^{−w_k0} − 1), if z_k = 1;   −η, if z_k = 0   (12)
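Read off directly, rule (12) is a few lines of code; this sketch uses our own function names and also lets one check the cancellation at the equilibrium numerically:

```python
import math

def delta_w(w_ki, y_i, z_k, eta):
    """Synaptic update Delta w_ki of rule (12)."""
    if z_k == 0:
        return 0.0
    return eta * (math.exp(-w_ki) - 1.0) if y_i == 1 else -eta

def delta_w0(w_k0, z_k, eta):
    """Threshold update Delta w_k0 of rule (12)."""
    return eta * (math.exp(-w_k0) - 1.0) if z_k == 1 else -eta
```

At w_ki = log p*(y_i = 1 | z_k = 1) the two branches cancel in expectation, which is exactly the equilibrium condition (13) derived next.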
It is obvious (using for the second equivalence the fact that y_i is a binary variable) that

E[Δw_ki] = 0 ⟺ p*(y_i=1 | z_k=1) · η (e^{−w_ki} − 1) − p*(y_i=0 | z_k=1) · η = 0
⟺ p*(y_i=1 | z_k=1) (e^{−w_ki} − 1) + p*(y_i=1 | z_k=1) − 1 = 0
⟺ p*(y_i=1 | z_k=1) e^{−w_ki} = 1
⟺ w_ki = log p*(y_i=1 | z_k=1) .   (13)
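The fixed point (13) can also be observed empirically. The following sketch iterates rule (12) for a single synapse under the simplifying assumption (ours) that the postsynaptic neuron fires at every step; the time-averaged weight settles near log p*(y_i = 1 | z_k = 1):

```python
import math
import random

def simulate_synapse(p_active, eta=0.01, steps=60000, burn_in=10000, seed=1):
    """Iterate rule (12) for one synapse; the presynaptic variable is 1
    with probability p_active whenever the postsynaptic neuron fires.
    Returns the average weight after burn-in, which approximates the
    equilibrium log(p_active) of (13)."""
    rng = random.Random(seed)
    w, acc, n = 0.0, 0.0, 0
    for t in range(steps):
        if rng.random() < p_active:            # y_i = 1 and z_k = 1
            w += eta * (math.exp(-w) - 1.0)
        else:                                  # y_i = 0 and z_k = 1
            w -= eta
        if t >= burn_in:
            acc += w
            n += 1
    return acc / n
```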
Analogously one can show that E[Δw_k0] = 0 ⟺ w_k0 = log p*(z_k=1). With similar elementary
calculations one can show that E[Δw_ki] has for any w a value that moves w_ki in the direction of
w*_ki (in fact, exponentially fast).

One can actually show that one single step of (12) is a linear approximation of the ideal incremental
update of w_ki = log(a_ki / N_k), with a_ki and N_k representing the values of the corresponding sufficient
statistics, as log((a_ki + 1)/(N_k + 1)) = w_ki + log(1 + ε e^{−w_ki}) − log(1 + ε) for ε = 1/N_k. This also reveals the
role of the learning rate η as the reciprocal of the equivalent sample size6.
In order to guarantee the stochastic convergence (see [12]) of the learning rule one has to use a
decaying learning rate η^(t) such that Σ_{t=1}^{∞} η^(t) = ∞ and Σ_{t=1}^{∞} (η^(t))² < ∞.7
The learning rule (12) is similar to a rule that had been introduced in [13] in the context of supervised
learning and reinforcement learning. That rule had satisfied an equilibrium condition similar to (13).
But to the best of our knowledge, such type of rule has so far not been considered in the context of
unsupervised learning.
One can easily see the correspondence between the update of w_ki in (12) and in the simple STDP
rule of Fig. 1B. In fact, if each time where neuron z_k fires in the spiking network, each presynaptic
neuron y_i that currently has a high firing rate has fired within the last σ = 10ms before the firing
of z_k, the two learning rules become equivalent. However since the latter condition could only be
achieved with biologically unrealistic high firing rates, we need to consider in section 3.4 the case
for the non-spiking network where some attributes are missing (i.e., y_i = 0 for all i ∈ G_j, for some
group G_j that encodes an external variable x_j via population coding).
We first show that the Hebbian learning rule (12) is also meaningful in the case of online learning of
Nw , which better matches the online learning process for the spiking network.
3.3 Stochastic online EM
The preceding arguments justify an application of learning rule (12) for a number of steps within
each M-step of a batch EM approach for maximizing E_{p*}[log p(y|w)]. We now show that it is also
meaningful to apply the same rule (12) in an online stochastic EM approach (similarly as in [14]),
where at each combined EM-step only one example y is generated by p*, and the learning rule (12)
6 The equilibrium condition (13) only sets a necessary constraint for the quotient of the two directions of the update in (12). The actual formulation of (12) is motivated by the goal of updating sufficient statistics.
7 In our experiments we used an adaptation of the variance tracking heuristic from [13]. If we assume that the consecutive values of the weights represent independent samples of their true stochastic distribution at the current learning rate, then this observed distribution is the log of a beta-distribution of the above mentioned parameters of the sufficient statistics. Analytically this distribution has the first and second moments E[w_ki] ≈ log(a_ki / N_i) and E[w_ki²] ≈ E[w_ki]² + 1/a_ki + 1/N_i, leading to the estimate η_ki^new = 1/N_i = (E[w_ki²] − E[w_ki]²) / (e^{−E[w_ki]} + 1). The empirical estimates of these first two moments can be gathered online by exponentially decaying averages using the same learning rate η_ki.
is applied just once (for zk resulting from p(z|y, w) for the current weights w, or simpler: for the
zk that is output by Nw for the current input y).
Our strategy for showing that a single application of learning rule (12) is expected to provide
progress in an online EM-setting is the following. We consider the Lagrangian F for maximizing E_{p*}[log p(y|w)] under the constraints (7), and show that an application of rule (12) is expected
to increase the value of F. We set

F(w, λ) = E_{p*}[log p(y|w)] + λ_0 (1 − Σ_{k=1}^{K} e^{w_k0}) + Σ_{k=1}^{K} Σ_{j=1}^{m} λ_kj (1 − Σ_{i∈G_j} e^{w_ki}) .   (14)
According to (5) one can write p(y|w) = Σ_{k=1}^{K} e^{u_k} for u_k = Σ_{i=1}^{n} w_ki y_i + w_k0. Hence one
arrives at the following conditions for the Lagrange multipliers λ:

Σ_{k=1}^{K} ∂F/∂w_k0 = Σ_{k=1}^{K} ( E_{p*}[ e^{u_k} / Σ_{l=1}^{K} e^{u_l} ] − λ_0 e^{w_k0} ) = 0   (15)

Σ_{i∈G_j} ∂F/∂w_ki = Σ_{i∈G_j} ( E_{p*}[ y_i e^{u_k} / Σ_{l=1}^{K} e^{u_l} ] − λ_kj e^{w_ki} ) = 0 ,   (16)

which yield λ_0 = 1 and λ_kj = E_{p*}[ e^{u_k} / Σ_{l=1}^{K} e^{u_l} ].
Plugging these values for λ into ∇_w F · E_{p*}[Δw], with Δw defined by (12), shows that this vector
product is always positive. Hence even a single application of learning rule (12) to a single new
example y, drawn according to p*, is expected to increase E_{p*}[log p(y|w)] under the constraints
(7).
3.4 Impact of missing attributes
We had shown at the end of 3.2 that learning in the spiking network corresponds to learning in the
non-spiking network Nw with missing attributes. A profound analysis of the correct handling of
missing attribute values in EM can be found in [15]. Their analysis implies that the correct learning
action is then not to change the weights w_ki for i ∈ G_j. However the STDP rule of Fig. 1B, as
well as (12), reduce also these weights by η if z_k fires. This yields a modification of the equilibrium
analysis (13):

E[Δw_ki] = 0 ⟺ (1 − r) [ p*(y_i=1 | z_k=1) · η (e^{−w_ki} − 1) − p*(y_i=0 | z_k=1) · η ] − r η = 0
⟺ w_ki = log p*(y_i=1 | z_k=1) + log(1 − r) ,   (17)
where r is the probability that i belongs to a group G_j where the value of x_j is missing. Since
this probability r is independent of the neuron z_k and also independent of the current value of the
external variable x_j, this offset of log(1 − r) is expected to be the same for all weights. It can easily
be verified that such an offset does not change the resulting probabilities of the competition in the
E-step according to (2).
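This offset is easy to observe in simulation. The sketch below (our own, for one synapse, with the whole group missing with probability r at each postsynaptic firing) shows the time-averaged weight settling near log p*(y_i=1 | z_k=1) + log(1 − r), as predicted by (17):

```python
import math
import random

def simulate_missing(p_active, r_missing, eta=0.01, steps=60000,
                     burn_in=10000, seed=2):
    """Iterate rule (12) for one synapse when, with probability
    r_missing, the whole group is missing (so y_i = 0 and the weight
    is reduced by eta).  Per (17) the average weight approaches
    log(p_active) + log(1 - r_missing)."""
    rng = random.Random(seed)
    w, acc, n = 0.0, 0.0, 0
    for t in range(steps):
        if rng.random() >= r_missing and rng.random() < p_active:
            w += eta * (math.exp(-w) - 1.0)    # attribute present, y_i = 1
        else:
            w -= eta                           # y_i = 0 (absent or missing)
        if t >= burn_in:
            acc += w
            n += 1
    return acc / n
```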
3.5 Relationship between the spiking and the non-spiking network
As indicated at the end of section 3.2, the learning process for the spiking network from section 2
with the simple STDP curve from Fig. 1B (and external variables xj encoded by input spike trains
from neurons yi ) is equivalent to a somewhat modified learning process of the non-spiking network
Nw with the Hebbian learning rule (12) and external variables xj encoded by binary variables yi .
Each firing of a neuron z_k at some time t corresponds to a discrete time step in N_w with an application of the Hebbian learning rule (12). Each neuron y_i that had fired during the time interval
[t − 10ms, t] contributes a value ŷ_i(t) = 1 to the membrane potential u_k(t) of the neuron z_k at time
t, and a value ŷ_i(t) = 0 if it did not fire during [t − 10ms, t]. Hence the weight updates at time t
according to the simple STDP curve are exactly equal to those of (12) in the non-spiking network.
However (12) will in general be applied to a corresponding input y where it may occur that for some
j ∈ {1, . . . , m} one has y_i = 0 for all i ∈ G_j (since none of the neurons y_i with i ∈ G_j fired in the
spiking network during [t − 10ms, t]). Hence we arrive at an application of (12) to an input y with
missing attributes, as discussed in section 3.4.
Since several neurons zk are likely to fire during the presentation of an external input x (each handwritten digit was presented for 50ms in section 2; but a much shorter presentation time of 10ms also
works quite well), this external input x gives in general rise to several applications of the STDP rule.
This corresponds to several applications of rule (12) to the same input (but with different choices
of missing attributes) in the non-spiking network. In the experiments in section 2, every example
in the non-spiking network with missing attributes was therefore presented for 10 steps, such that
the average number of learning steps is the same as in the spiking case. The learning process of
the spiking network corresponds to a slight variation of the stochastic online EM algorithm that is
implemented through (12) according to the analysis of section 3.3.
4 Discussion
The model for discovering hidden causes of inputs that is proposed in this extended abstract presents
an interesting shortcut for implementing and learning generative models for input data in networks
of neurons. Rather than building and adapting an explicit model for re-generating internally the distribution of input data, our approach creates an implicit model of the input distribution (see Fig. 3B)
that is encoded in the weights of neurons in a simple WTA-circuit. One might call it a Vapnik-style
[16] approach towards generative modeling, since it focuses directly on the task to represent the
most likely hidden causes of the inputs through neuronal firing. As the theoretical analysis via nonspiking networks in section 3 has shown, this approach also offers a new perspective for generating
self-adapting networks on the basis of traditional artificial neural networks. One just needs to add
the stochastic and non-feedforward parts required for implementing stochastic WTA circuits to a
1-layer feedforward network, and apply the Hebbian learning rule (12) to the feedforward weights.
One interesting aspect of the "implicit generative learning" approach that we consider in this extended abstract is that it retains important advantages of the generative learning approach, faster
learning and better generalization [17], while retaining the algorithmic simplicity of the discriminative learning approach.
Our approach also provides a new method for analyzing details of STDP learning rules. The simulation results of section 2 show that a simplified STDP rule that can be understood clearly from
the perspective of stochastic online EM with a suitable Hebbian learning rule, provides good performance in discovering hidden causes for a standard benchmark dataset. A more complex STDP rule,
whose learning curve better matches experimentally recorded average changes of synaptic weights,
provides almost the same performance. For a comparison of the STDP curves in Fig. 1B with experimentally observed STDP curves one should keep in mind, that most experimental data on STDP
curves are for very low firing rates. The STDP curve of Fig. 7C in [18] for a firing rate of 20Hz has,
similarly as the STDP curves in Fig. 1B of this extended abstract, no pronounced negative dip, and
instead an almost constant negative part.
In our upcoming paper [8] we will provide full proofs for the results announced in this extended
abstract, as well as further applications and extensions of the learning result. We will also demonstrate, that the learning rules that we have proposed are robust to noise, and that they are matched
quite well by experimental data.
Acknowledgments
We would like to thank the anonymous reviewer for a hint in the notational formalism. Written under
partial support by the Austrian Science Fund FWF, project # P17229-N04, project # S9102-N04,
and project # FP6-015879 (FACETS) as well as # FP7-216593 (SECO) of the European Union.
References
[1] Y. Dan and M. Poo. Spike timing-dependent plasticity of neural circuits. Neuron, 44:23–30, 2004.
[2] L. F. Abbott and S. B. Nelson. Synaptic plasticity: taming the beast. Nature Neuroscience, 3:1178–1183, 2000.
[3] A. Morrison, A. Aertsen, and M. Diesmann. Spike-timing-dependent plasticity in balanced random networks. Neural Computation, 19:1437–1467, 2007.
[4] R. J. Douglas and K. A. Martin. Neuronal circuits of the neocortex. Annu Rev Neurosci, 27:419–451, 2004.
[5] G. E. Hinton and Z. Ghahramani. Generative models for discovering sparse distributed representations. Philos Trans R Soc Lond B Biol Sci., 352(1358):1177–1190, 1997.
[6] Y. LeCun, L. Bottou, Y. Bengio, and P. Haffner. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11):2278–2324, 1998.
[7] A. Gupta and L. N. Long. Character recognition using spiking neural networks. IJCNN, pages 53–58, 2007.
[8] B. Nessler, M. Pfeiffer, and W. Maass. Spike-timing dependent plasticity performs stochastic expectation maximization to reveal the hidden causes of complex spike inputs. (in preparation).
[9] M. Meilă and D. Heckerman. An experimental comparison of model-based clustering methods. Machine Learning, 42(1):9–29, 2001.
[10] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, New York, 2006.
[11] G. McLachlan and D. Peel. Finite mixture models. Wiley, 2000.
[12] J. H. Kushner and G. G. Yin. Stochastic approximation algorithms and applications. Springer, 1997.
[13] B. Nessler, M. Pfeiffer, and W. Maass. Hebbian learning of Bayes optimal decisions. In Advances in Neural Information Processing Systems 21, pages 1169–1176. MIT Press, 2009.
[14] M. Sato. Fast learning of on-line EM algorithm. Rapport Technique, ATR Human Information Processing Research Laboratories, 1999.
[15] Z. Ghahramani and M. I. Jordan. Mixture models for learning from incomplete data. Computational Learning Theory and Natural Learning Systems, 4:67–85, 1997.
[16] V. Vapnik. Universal learning technology: Support vector machines. NEC Journal of Advanced Technology, 2:137–144, 2005.
[17] A. Y. Ng and M. I. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression and naive Bayes. Advances in Neural Information Processing Systems (NIPS), 14:841–848, 2002.
[18] P. J. Sjöström, G. G. Turrigiano, and S. B. Nelson. Rate, timing, and cooperativity jointly determine cortical synaptic plasticity. Neuron, 32:1149–1164, 2001.
3,029 | 3,745 | An LP View of the M-best MAP problem
Menachem Fromer
Amir Globerson
School of Computer Science and Engineering
The Hebrew University of Jerusalem
{fromer,gamir}@cs.huji.ac.il
Abstract
We consider the problem of finding the M assignments with maximum
probability in a probabilistic graphical model. We show how this problem
can be formulated as a linear program (LP) on a particular polytope. We
prove that, for tree graphs (and junction trees in general), this polytope
has a particularly simple form and differs from the marginal polytope in
a single inequality constraint. We use this characterization to provide an
approximation scheme for non-tree graphs, by using the set of spanning
trees over such graphs. The method we present puts the M -best inference
problem in the context of LP relaxations, which have recently received
considerable attention and have proven useful in solving difficult inference
problems. We show empirically that our method often finds the provably
exact M best configurations for problems of high tree-width.
A common task in probabilistic modeling is finding the assignment with maximum probability given a model. This is often referred to as the MAP (maximum a-posteriori) problem.
Of particular interest is the case of MAP in graphical models, i.e., models where the probability factors into a product over small subsets of variables. For general models, this is an
NP-hard problem [11], and thus approximation algorithms are required. Of those, the class
of LP based relaxations has recently received considerable attention [3, 5, 18]. In fact, it has
been shown that some problems (e.g., fixed backbone protein design) can be solved exactly
via sequences of increasingly tighter LP relaxations [13].
In many applications, one is interested not only in the MAP assignment but also in the
M maximum probability assignments [19]. For example, in a protein design problem, we
might be interested in the M amino acid sequences that are most stable on a given backbone
structure [2]. In cases where the MAP problem is tractable, one can devise tractable algorithms for the M best problem [8, 19]. Specifically, for low tree-width graphs, this can be
done via a variant of max-product [19]. However, when finding MAPs is not tractable, it is
much less clear how to approximate the M best case. One possible approach is to use loopy
max-product to obtain approximate max-marginals and use those to approximate the M
best solutions [19]. However, this is largely a heuristic and does not provide any guarantees
in terms of optimality certificates or bounds on the optimal values.
LP approximations to MAP do enjoy such guarantees. Specifically, they provide upper
bounds on the MAP value and optimality certificates. Furthermore, they often work for
graphs with large tree-width [13]. The goal of the current work is to leverage the power
of LP relaxations to the M best case. We begin by focusing on the problem of finding
the second best solution. We show how it can be formulated as an LP over a polytope
we call the ?assignment-excluding marginal polytope?. In the general case, this polytope
may require an exponential number of inequalities, but we prove that when the graph
is a tree it has a very compact representation. We proceed to use this result to obtain
approximations to the second best problem, and show how these can be tightened in various
ways. Next, we show how M best assignments can be found by relying on algorithms for
second best assignments, and thus our results for the second best case can be used to devise
an approximation algorithm for the M best problem.
We conclude by applying our method to several models, showing that it often finds the exact
M best assignments.
1  The M-best MAP problem and its LP formulation
Consider a function on n variables defined as:

  f(x_1, . . . , x_n; θ) = Σ_{ij∈E} θ_ij(x_i, x_j) + Σ_{i∈V} θ_i(x_i)   (1)
where V and E are the nodes and edges of a graph G with n nodes. We shall be interested
in the M assignments with largest f(x; θ) value (see footnote 1). Denote these by x(1), . . . , x(M), so that
x(1) is the assignment that maximizes f(x; θ), x(2) is the 2nd best assignment, etc.
The MAP problem (i.e., finding x(1)) can be formulated as an LP as follows [15]. Let μ be
a vector of distributions that includes {μ_ij(x_i, x_j)}_{ij∈E} over edge variables and {μ_i(x_i)}_{i∈V}
over nodes. The set of μ that arise from some joint distribution is known as the marginal
polytope [15] and is denoted by M(G). Formally:

  M(G) = {μ | ∃p(x) ∈ Δ s.t. p(x_i, x_j) = μ_ij(x_i, x_j), p(x_i) = μ_i(x_i)}.
where Δ is the set of distributions on x. The MAP problem can then be shown to be
equivalent to the following LP (see footnote 2):

  max_x f(x; θ) = max_{μ ∈ M(G)} μ · θ   (2)
It can be shown that this LP always has a maximizing μ that is a vertex of M(G) and
is integral. Furthermore, this μ corresponds to the MAP assignment x(1). Although the
number of variables in this LP is only O(|E| + |V|), the difficulty comes from an exponential
number of linear inequalities generally required to describe the marginal polytope M(G).
We shall find it useful to define a mapping between assignments x and integral vertices of
the polytope. Given an integral vertex v ∈ M(G), define x(v) to be the assignment that
maximizes v_i(x_i). And, given an assignment z, define v(z) to be the integral vertex in M(G)
corresponding to the assignment z. Thus the LP in Eq. 2 will be maximized by v(x(1)).
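As an illustration of this vertex mapping (our own Python sketch, not part of the paper; the chain model and all parameter values below are invented for the example), the following checks that θ · v(x) = f(x; θ) for every assignment on a small chain, so the LP of Eq. 2 is maximized at the vertex of the MAP assignment:

```python
# Hypothetical example: v(x) and the identity theta . v(x) = f(x; theta) on a 3-node chain.
from itertools import product

n_states = 2
nodes = [0, 1, 2]
edges = [(0, 1), (1, 2)]  # a chain (a tree)

# Arbitrary parameters theta_i(x_i) and theta_ij(x_i, x_j), chosen for illustration.
theta_node = {0: [0.5, -0.2], 1: [0.0, 0.3], 2: [-0.1, 0.4]}
theta_edge = {(0, 1): [[0.2, -0.5], [-0.5, 0.7]],
              (1, 2): [[0.1, 0.0], [0.3, -0.2]]}

def f(x):
    """f(x; theta) as in Eq. 1."""
    return (sum(theta_node[i][x[i]] for i in nodes) +
            sum(theta_edge[e][x[e[0]]][x[e[1]]] for e in edges))

def vertex(x):
    """v(x): the integral vertex of M(G) with mu_i(x_i) = 1 and mu_ij(x_i, x_j) = 1."""
    mu_node = {i: [1.0 if s == x[i] else 0.0 for s in range(n_states)] for i in nodes}
    mu_edge = {e: [[1.0 if (a, b) == (x[e[0]], x[e[1]]) else 0.0
                    for b in range(n_states)] for a in range(n_states)] for e in edges}
    return mu_node, mu_edge

def objective(mu):
    """theta . mu, the LP objective of Eq. 2."""
    mu_node, mu_edge = mu
    val = sum(theta_node[i][s] * mu_node[i][s] for i in nodes for s in range(n_states))
    val += sum(theta_edge[e][a][b] * mu_edge[e][a][b]
               for e in edges for a in range(n_states) for b in range(n_states))
    return val

# theta . v(x) equals f(x; theta) for every assignment:
assert all(abs(objective(vertex(x)) - f(x)) < 1e-12
           for x in product(range(n_states), repeat=3))
# The MAP is therefore the best integral vertex:
x_map = max(product(range(n_states), repeat=3), key=f)
```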
One simple outer bound of the marginal polytope is the local polytope M_L(G), which only
enforces pairwise constraints between variables:

  M_L(G) = { μ ≥ 0 | Σ_{x_i} μ_ij(x_i, x_j) = μ_j(x_j),  Σ_{x_j} μ_ij(x_i, x_j) = μ_i(x_i),  Σ_{x_i} μ_i(x_i) = 1 }   (3)

The LP relaxation is then to maximize μ · θ where μ ∈ M_L(G). For tree structured graphs,
M_L(G) = M(G) [15] and thus the LP relaxation yields the exact MAP x(1).
2  An LP Formulation for the 2nd-best MAP
Assume we found the MAP assignment x(1) and are now interested in finding x(2) . Is there
a simple LP whose solution yields x(2) ? We begin by focusing on the case where G is a tree
so that the local LP relaxation is exact. We first treat the case of a connected tree.
To construct an LP whose solution is x(2) , a natural approach is to use the LP for x(1) (i.e.,
the LP in Eq. 2) but somehow eliminate the solution x(1) using additional constraints. This,
however, is somewhat trickier than it sounds. The key difficulty is that the new constraints
should not generate fractional vertices, so that the resulting LP is still exact.
We begin by defining the polytope over which we need to optimize in order to obtain x(2).

Footnote 1: This is equivalent to finding the maximum probability assignments for a model p(x) ∝ e^{f(x; θ)}.
Footnote 2: We use the notation θ · μ = Σ_{ij∈E} Σ_{x_i,x_j} θ_ij(x_i, x_j) μ_ij(x_i, x_j) + Σ_i Σ_{x_i} θ_i(x_i) μ_i(x_i).

Definition 1. The assignment-excluding marginal polytope is defined as:

  M̂(G, z) = {μ | ∃p(x) ∈ Δ s.t. p(z) = 0, p(x_i, x_j) = μ_ij(x_i, x_j), p(x_i) = μ_i(x_i)}   (4)

M̂(G, z) is simply the convex hull of all (integral) vectors v(x) for x ≠ z.
The following result shows that optimizing over M̂(G, x(1)) will yield the second-best solution x(2), so that we refer to M̂(G, x(1)) as the second-best marginal polytope.

Lemma 1. The 2nd best solution is obtained via the following LP:

  max_{x ≠ x(1)} f(x; θ) = max_{μ ∈ M̂(G, x(1))} μ · θ

Furthermore, the μ that maximizes the LP on the right is integral and corresponds to the
second-best MAP assignment x(2).
The proof is similar to that of Eq. 2: instead of optimizing over x, we optimize over distributions p(x), while enforcing that p(x(1)) = 0 so that x(1) is excluded from the maximization.
The key question which we now address is how to obtain a simple characterization of
M̂(G, z). Intuitively, it would seem that M̂(G, z) should be "similar" to M(G), such
that it can be described as M(G) plus some constraints that "block" the assignment z.
To illustrate the difficulty in finding such "blocking" constraints, consider the following
constraint, originally suggested by Santos [10]: Σ_i μ_i(z_i) ≤ n − 1. This inequality is not
satisfied by μ = v(z), since v(z) attains the value n for the LHS of the above. Furthermore,
for any x ≠ z and μ = v(x), the LHS would be n − 1 or less. Thus, this inequality separates
v(z) from all other integral vertices. One might conclude that we can define M̂(G, z) by
adding this inequality to M(G). The difficulty is that the resulting polytope has fractional
vertices (see footnote 3), and maximizing over it won't generally yield an integral solution.
It turns out that there is a different inequality that does yield an exact characterization of
M̂(G, z) when G is a tree. We now define this inequality and state our main theorem.

Definition 2. Consider the functional I(μ, z) (which is linear in μ):

  I(μ, z) = Σ_i (1 − d_i) μ_i(z_i) + Σ_{ij∈E} μ_ij(z_i, z_j)   (5)

where d_i is the degree of node i in the tree graph G.

Theorem 1. Adding the single inequality I(μ, z) ≤ 0 to M(G) yields M̂(G, z):

  M̂(G, z) = {μ | μ ∈ M(G), I(μ, z) ≤ 0}   (6)

The theorem is proved in the appendix. Taken together with Lemma 1, it implies that
x(2) may be obtained via an LP that is very similar to the MAP-LP, but has an additional
constraint. We note the interesting similarity between I(μ, z) and the Bethe entropy [20].
The only difference is that in Bethe, μ_i, μ_ij are replaced by H(X_i), H(X_i, X_j) respectively (see footnote 4).
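The separating role of I(μ, z) is easy to verify numerically. The following sketch (our own, on an arbitrary small star-shaped tree; the graph and z are invented for the example) checks that I(v(z), z) = 1 while I(v(x), z) ≤ 0 for every x ≠ z, which is exactly why the single inequality I(μ, z) ≤ 0 cuts off v(z) and no other integral vertex:

```python
# Hypothetical check of Definition 2 / Theorem 1 on integral vertices of a star tree.
from itertools import product

n_states = 3
edges = [(0, 1), (0, 2), (0, 3)]      # a star: node 0 has degree 3
degree = {0: 3, 1: 1, 2: 1, 3: 1}
nodes = sorted(degree)

def I(x, z):
    """I(v(x), z): Eq. 5 evaluated at the integral vertex v(x)."""
    node_term = sum((1 - degree[i]) * (1 if x[i] == z[i] else 0) for i in nodes)
    edge_term = sum(1 if (x[i], x[j]) == (z[i], z[j]) else 0 for i, j in edges)
    return node_term + edge_term

z = (0, 1, 2, 0)
assert I(z, z) == 1                   # v(z) violates I <= 0 ...
assert all(I(x, z) <= 0               # ... and no other vertex does
           for x in product(range(n_states), repeat=4) if x != z)
```

For x = z the value is Σ_i(1 − d_i) + |E| = n − |E| = 1 on any tree, matching the assertion above.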
The theorem also generalizes to the case where G is not a tree, but we have a junction tree
for G. In this case, the theorem still holds if we define a generalized I(μ, z) inequality as:

  Σ_{S∈S} (1 − d_S) μ_S(z_S) + Σ_{C∈C} μ_C(z_C) ≤ 0   (7)

where C and S are the junction tree cliques and their separators, respectively, and d_S is
the number of cliques that intersect on separator S. In this case, the marginal polytope
should enforce consistency between marginals μ_C(z_C) and their separators μ_S(z_S). However,
such a characterization requires variables whose cardinality is exponential in the tree-width
and is thus tractable only for graphs of low tree-width. In the next section, we address
approximations for general graphs.
A corresponding result exists for the case when G is a forest. In this case, the inequality
in Eq. 6 is modified to: I(μ, z) ≤ |P| − 1, where |P| denotes the number of connected
components of G. Interestingly, for a graph without edges, this gives the Santos inequality.
Footnote 3: Consider the case of a single edge between 2 nodes where the MAP assignment is (0, 0). Adding
the inequality μ_1(0) + μ_2(0) ≤ 1 produces the fractional vertex (0.5, 0.5).
Footnote 4: The connection to Bethe can be more clearly understood from a duality-based proof of Theorem
1. We will cover this in an extended version of the manuscript.
3  2nd-best LPs for general graphs - Spanning tree inequalities
When the graph G is not a tree, the marginal polytope M(G) generally requires an exponential number of inequalities. However, as mentioned above, it does have an exact description
in terms of marginals over cliques and separators of a junction tree. Given such marginals on
junction tree cliques, we also have an exact characterization of M̂(G, z) via the constraint
in Eq. 7. However, in general, we cannot afford to be exponential in tree-width. Thus a
common strategy [15] is to replace M(G) with an outer bound that enforces consistency between marginals on overlapping sets of variables. The simplest example is M_L(G) in Eq. 3.
In what follows, we describe an outer-bound approximation scheme for M̂(G, z). We use
M_L(G) as the approximation for M(G) (more generally, M_L(G) can enforce consistency
between any set of small regions, e.g., triplets). When G is not a tree, the linear constraint in
Eq. 6 will no longer suffice to derive M̂(G, z). Moreover, direct application of the inequality
will incorrectly remove some integral vertices. An alternative approach is to add inequalities
that separate v(z) from the other integral vertices. This will serve to eliminate more and
more fractional vertices, and if enough constraints are added, this may result in an integral
solution. One obvious family of such constraints are those corresponding to spanning trees
in G and have the form of Eq. 5.
Definition 3. Consider any T that is a spanning tree of G. Define the functional I^T(μ, z):

  I^T(μ, z) = Σ_i (1 − d_i^T) μ_i(z_i) + Σ_{ij∈T} μ_ij(z_i, z_j)   (8)

where d_i^T is the degree of i in T. We refer to I^T(μ, z) ≤ 0 as a spanning tree inequality.

For any sub-tree T of G, the corresponding spanning tree inequality separates the vertex
v(z) from the other vertices. This can be shown via similar arguments as in the proof of
Theorem 1. Note, however, that the resulting polytope may still have fractional vertices.
The above argument shows that any spanning tree provides a separating inequality for
M̂(G, z). In principle, we would like to use as many such inequalities as possible.

Definition 4. The spanning tree assignment-excluding marginal polytope is defined as:

  M̂_L^ST(G, z) = {μ | μ ∈ M_L(G), ∀ tree T ⊆ E : I^T(μ, z) ≤ 0}   (9)

where the ST notation indicates the inclusion of all spanning tree inequalities for G (see footnote 5).
Thus, we would actually like to perform the following optimization problem:

  max_{μ ∈ M̂_L^ST(G, z)} μ · θ

as an approximation to optimization over M̂(G, z); i.e., we seek the optimal μ subject to all
spanning tree inequalities for G, with the ambition that this μ be integral and thus provide
the non-z MAP assignment, with a certificate of optimality.
Although the number of spanning trees is exponential in n, it turns out that all spanning
tree inequalities can be used in practice. One way to achieve this is via a cutting plane algorithm
[12] that finds the most violated spanning tree inequality and adds it to the LP. To implement
this efficiently, we note that for a particular μ and a spanning tree T, the value of I^T(μ, z)
can be decomposed into a sum over the edges in T (and a T-independent constant):

  I^T(μ, z) = Σ_{ij∈T} [μ_ij(z_i, z_j) − μ_i(z_i) − μ_j(z_j)] + Σ_i μ_i(z_i)   (10)

The tree maximizing the above is the maximum-weight spanning tree with edge-weights
w_ij = μ_ij(z_i, z_j) − μ_i(z_i) − μ_j(z_j). It can thus be found efficiently.
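The separation step can be sketched as follows (our own illustration; the function names and data structures are assumptions, not from the paper): compute the edge weights w_ij of Eq. 10 and run a maximum-weight spanning tree algorithm (Kruskal's, here, via a union-find) to identify the most violated spanning tree inequality:

```python
# Hypothetical sketch of the cutting-plane separation oracle for Eq. 10.

def max_weight_spanning_tree(n, weighted_edges):
    """Kruskal's algorithm. weighted_edges: list of (w, i, j).
    Returns (total_weight, tree_edges)."""
    parent = list(range(n))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    tree, total = [], 0.0
    for w, i, j in sorted(weighted_edges, reverse=True):   # heaviest edges first
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            tree.append((i, j))
            total += w
    return total, tree

def most_violated_tree(mu_node, mu_edge, z, n):
    """Return (I^T(mu, z), T) for the spanning tree T maximizing Eq. 10."""
    # w_ij = mu_ij(z_i, z_j) - mu_i(z_i) - mu_j(z_j)
    weights = [(mu_edge[(i, j)][z[i]][z[j]] - mu_node[i][z[i]] - mu_node[j][z[j]], i, j)
               for (i, j) in mu_edge]
    w, tree = max_weight_spanning_tree(n, weights)
    # I^T(mu, z) = sum of tree weights + sum_i mu_i(z_i); violated if > 0
    return w + sum(mu_node[i][z[i]] for i in range(n)), tree
```

For example, at μ = v(z) every edge weight is 1 − 1 − 1 = −1, so any spanning tree gives I^T = −(n − 1) + n = 1 > 0: every spanning tree inequality is violated at v(z), as expected.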
The cutting plane algorithm proceeds as follows. We start by adding an arbitrary spanning
tree. Then, as long as the optimal μ is fractional, we find the spanning tree inequality that
μ most violates (where this is implemented via the maximum-weight spanning tree). This
constraint will necessarily remove μ from the polytope. If there are no violated inequalities
Footnote 5: Note that M̂(G, z) ⊆ M̂_L^ST(G, z) ⊆ M_L(G).
but μ is still fractional, then spanning tree inequalities do not suffice to find an integral
solution (but see below on hypertree constraints to add in this case). In practice, we found
that only a relatively small number of inequalities are needed to successfully yield an integral
solution, or determine that all such inequalities are already satisfied.
An alternative approach for solving the all spanning-tree problem is to work via the dual.
The dual variables roughly correspond to points in the spanning tree polytope [16], optimization over which can be done in polynomial time, e.g., via the ellipsoid algorithm. We do
not pursue this here since the cutting plane algorithm performed well in our experiments.
As mentioned earlier, we can exactly characterize M̂(G, z) using Eq. 7, albeit at a cost
exponential in the tree-width of the graph. A practical compromise would be to use inequalities over clique trees of G, where the cliques are relatively small, e.g., triplets. The
corresponding constraint (Eq. 7 with the small cliques and their separators) will necessarily
separate v(z) from the other integral vertices. Finding the maximally violated such inequality is an NP-hard problem, equivalent to a prize collecting Steiner tree problem, but recent
work has found that such problems are often exactly solvable in practice [7]. It thus might
be practical to include all such trees as constraints using a cutting plane algorithm.
4  From 2nd-best to M-best
Thus far, we only dealt with the 2nd best case. As we show now, it turns out that the
2nd-best formalism can be used to devise an algorithm for M best. We begin by describing
an algorithm for the exact M best and then show how it can be used to approximate those
via the approximations for 2nd best described above. Fig. 1 describes our scheme, which
we call Partitioning for Enumerating Solutions (or PES) for solving the M best problem.
The scheme is general and only assumes that MAP-"like" problems can be solved. It is
inspired by several pre-existing M best solution schemes [4, 6, 8, 19] but differs from them
in highlighting the role of finding a second best solution within a given subspace.
for m = 1 to M do
    if m = 1 then
        Run MAP solver to obtain the best assignment: x(1) ← arg max_x f(x; θ)
        CONSTRAINTS_1 ← ∅
    else
        k ← arg max_{k′ ∈ {1,...,m−1}} f(y(k′); θ)      // sub-space containing mth best assignment
        x(m) ← y(k)                                     // mth best assignment
        // A variable choice that distinguishes x(m) from x(k):
        (v, a) ← any member of the set {(i, x(m)_i) : x(m)_i ≠ x(k)_i}
        CONSTRAINTS_m ← CONSTRAINTS_k ∪ {x_v = a}       // Eliminate x(k) (as MAP) from subspace m
        CONSTRAINTS_k ← CONSTRAINTS_k ∪ {x_v ≠ a}       // Eliminate x(m) (as 2nd-best) from subspace k
        y(k) ← CalcNextBestSolution(CONSTRAINTS_k, x(k))
    end
    y(m) ← CalcNextBestSolution(CONSTRAINTS_m, x(m))
end
return {x(m)} for m = 1, . . . , M

/* Find next best solution in sub-space defined by CONSTRAINTS */
Function CalcNextBestSolution(CONSTRAINTS, x(∗))
    // x(∗) is the MAP in the sub-space defined by CONSTRAINTS:
    Run MAP solver to obtain the second-best solution:
        y ← arg max_{x ≠ x(∗), x ∈ CONSTRAINTS} f(x; θ)
    return y
end

Figure 1: Pseudocode for the PES algorithm.
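As a concrete (and deliberately brute-force) instantiation of the PES scheme in Figure 1, the following Python sketch replaces the MAP and second-best solvers by exhaustive enumeration; all names and data structures here are ours, not the authors'. It illustrates only the partitioning bookkeeping and is exponential in n:

```python
# Hypothetical toy implementation of PES with an exhaustive "MAP solver".
from itertools import product

def pes_m_best(f, n, n_states, M):
    """Return the M highest-scoring assignments of an n-variable function f.

    Assumes M does not exceed the number of assignments and that f has no ties.
    """
    space = list(product(range(n_states), repeat=n))

    def solve(eq, neq, exclude=None):
        """arg max of f over {x : x_v = a for (v, a) in eq, x_v != a for (v, a)
        in neq}, optionally excluding one assignment (the current MAP)."""
        feas = [x for x in space
                if all(x[v] == a for v, a in eq.items())
                and all(x[v] != a for v, a in neq)
                and x != exclude]
        return max(feas, key=f) if feas else None

    best = [solve({}, [])]                      # x(1)
    cons = [({}, [])]                           # (eq, neq) constraints per subspace
    second = [solve({}, [], exclude=best[0])]   # y(1)
    for m in range(1, M):
        # subspace whose second-best is the next global best:
        k = max(range(m), key=lambda j: float('-inf') if second[j] is None
                else f(second[j]))
        x_m = second[k]
        best.append(x_m)
        # a coordinate where x(m) and x(k) differ:
        v = next(i for i in range(n) if x_m[i] != best[k][i])
        eq_k, neq_k = cons[k]
        new_eq = dict(eq_k)
        new_eq[v] = x_m[v]
        cons.append((new_eq, list(neq_k)))       # subspace m: force x_v = a
        cons[k] = (eq_k, neq_k + [(v, x_m[v])])  # shrink subspace k: x_v != a
        second[k] = solve(*cons[k], exclude=best[k])
        second.append(solve(*cons[m], exclude=x_m))
    return best
```

For a pairwise model, `solve` would be replaced by the (approximate) LP of Section 2; the surrounding bookkeeping is unchanged.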
The modus operandi of the PES algorithm is to efficiently partition the search space while
systematically excluding all previously determined assignments. Significantly, any MAP
[Figure 2 here: paired bar plots of solution ranks (normalized to 1) and normalized run-times for the Attractive Grids, Mixed Grids, and Hard Protein SCP experiments.]
Figure 2: Number of best ranks and normalized run-times for the attractive and mixed grids, and
the more difficult protein SCP problems. S, N, and B denote the STRIPES, Nilsson, and BMMF
algorithms. Algorithms marked with +R denote that regions of variables were added for those runs.
solver can be plugged into it, on the condition that it is capable of solving the arg max in
the CalcNextBestSolution subroutine. The correctness of PES can be shown by observing
that at the M th stage, all previous best solutions are excluded from the optimization and
no other assignment is excluded. Of note, this simple partitioning scheme is possible due
to the observation that the first-best and second-best MAP assignments must differ in the
assignment of at least one variable in the graph.
The main computational step of the PES algorithm is to maximize f(x; θ) subject to
x ≠ x(∗) and x ∈ CONSTRAINTS (see the CalcNextBestSolution subroutine). The
CONSTRAINTS set merely enforces that some of the coordinates of x are either equal
to or different from specified values (see footnote 6). Within the LP, these can be enforced by setting
μ_i(x_i = a) = 1 or μ_i(x_i = a) = 0. It can be shown that if one optimizes μ · θ with
these constraints and μ ∈ M̂(G, x(∗)), the solution is integral. Thus, the only element
requiring approximation in the general case is the description of M̂(G, x(∗)). We choose as
this approximation the polytope M̂_L^ST(G, x(∗)) in Eq. 9. We call the resulting approximation algorithm Spanning TRee Inequalities and Partitioning for Enumerating Solutions, or
STRIPES. In the next section, we evaluate this scheme experimentally.
5  Experiments
We compared the performance of STRIPES to the BMMF algorithm [19] and the
Lawler/Nilsson algorithm [6, 8]. Nilsson's algorithm is equivalent to PES where the 2nd
best assignment is obtained from maximizations within O(n) partitions, so that its runtime is O(n) times the cost of finding a single MAP. Here we approximated each MAP
with its LP relaxation (as in STRIPES), so that both STRIPES and Nilsson come with
certificates of optimality when their LP solutions are integral. BMMF relies on loopy BP to
approximate the M best solutions.7 We used M = 50 in all experiments. To compare the
algorithms, we pooled all their solutions, noting the 50 top probabilities, and then counted
the fraction of these that any particular algorithm found (its solution rank). For run-time
comparisons, we normalized the times by the longest-running algorithm for each example.
We begin by considering pairwise MRFs on binary grid graphs of size 10 ? 10. In the first
experiment, we used an Ising model with attractive (submodular) potentials, a setting in
which the pairwise LP relaxation is exact [14]. For each grid edge ij, we randomly chose
J_ij ∈ [0, 0.5], and local potentials were randomized in the range ±0.5. The results for 25
graphs are shown in Fig. 2. Both the STRIPES and Nilsson algorithms obtained the 50
optimal solutions (as learned from their optimality certificates), while BMMF clearly fared
less well for some of the graphs. While the STRIPES algorithm took < 0.5 to 2 minutes
to run, the Nilsson algorithm took around 13 minutes. On the other hand, BMMF was
quicker, taking around 10 seconds per run, while failing to find a significant portion of the
top solutions. Overall, the STRIPES algorithm was required to employ up to 19 spanning
tree inequalities per calculation of second-best solution.
Footnote 6: This is very different from the second best constraint, since setting x_1 = 1 blocks all assignments
with this value, as opposed to setting x = 1 (the all-ones vector), which blocks only the assignment with all ones.
Footnote 7: For BMMF, we used the C implementation at http://www.cs.huji.ac.il/~talyam/inference.html.
The LPs for STRIPES and Nilsson were solved using CPLEX.
Next, we studied Ising models with mixed interaction potentials (with J_ij and the local potentials randomly chosen in [−0.5, 0.5]). For almost all of the 25 models, all three algorithms
were not able to successfully find the top solutions. Thus, we added regions of triplets (two
for every grid face) to tighten the LP relaxation (for STRIPES and Nilsson) and to perform
GBP instead of BP (for BMMF). This resulted in STRIPES and Nilsson always provably
finding the optimal solutions, and BMMF mostly finding these solutions (Fig. 2). For these
more difficult grids, however, STRIPES was the fastest of the algorithms, taking 0.5 - 5
minutes. On the other hand, the Nilsson and BMMF algorithms took 18 minutes and 2.5-7 minutes, respectively. STRIPES added up to 23 spanning tree inequalities per iteration.
The protein side-chain prediction (SCP) problem is to predict the placement of amino
acid side-chains given a protein backbone [2, 18]. Minimization of a protein energy function
corresponds to finding a MAP assignment for a pairwise MRF [19]. We employed the
dataset of [18] (up to 45 states per variable, mean approximate tree-width 50), running all
algorithms to calculate the optimal side-chain configurations. For 315 of 370 problems in
the dataset, the first MAP solution was obtained directly as a result of the LP relaxation
having an integral solution ("easy" problems). STRIPES provably found the subsequent
top 50 solutions within 4.5 hours for all but one of these cases (up to 8 spanning trees per
calculation), and BMMF found the same 50 solutions for each case within 0.5 hours; note
that only STRIPES provides a certificate of optimality for these solutions. On the other
hand, only for 146 of the 315 problems was the Nilsson method able to complete within
five days; thus, we do not compare its performance here. For the remaining 55 ("hard")
problems (Fig. 2), we added problem-specific triplet regions using the MPLP algorithm
[13]. We then ran the STRIPES algorithm to find the optimal solutions. Surprisingly, it
was able to exactly find the 50 top solutions for all cases, using up to 4 standard spanning
tree inequalities per second-best calculation. The STRIPES run-times for these problems
ranged from 6 minutes to 23 hours. On the other hand, whether running BMMF without
these regions (BP) or with the regions (GBP), it did not perform as well as STRIPES
in terms of the number of high-ranking solutions or its speed. To summarize, STRIPES
provably found the top 50 solutions for 369 of the 370 protein SCP problems.
6  Conclusion
In this work, we present a novel combinatorial object M̂(G, z) and show its utility in
obtaining the M best MAP assignments. We provide a simple characterization of it for
tree structured graphs, and show how it can be used for approximations in non-tree graphs.
As with the marginal polytope, many interesting questions arise about the properties of
M̂(G, z). For example, in which non-tree cases can we provide a compact characterization
(e.g., as for the cut-polytope for planar graphs [1]). Another compelling question is in which
problems the spanning tree inequalities are provably optimal.
An interesting generalization of our method is to predict diverse solutions satisfying some
local measure of "distance" from each other, e.g., as in [2].
Here we studied the polytope that results from excluding one assignment. An intriguing
question is to characterize the polytope that excludes M assignments. We have found that
it does not simply correspond to adding M constraints I(μ, z^i) ≤ 0 for i = 1, . . . , M, so its
geometry is apparently more complicated than that of M̂(G, z).
Here we used LP solvers to solve for μ. Such generic solvers could be slow for large-scale
problems. However, in recent years, specialized algorithms have been suggested for solving
MAP-LP relaxations [3, 5, 9, 17]. These use the special form of the constraints to obtain
local-updates and more scalable algorithms. We intend to apply these schemes to our
method. Finally, our empirical results show that our method indeed leverages the power of
LP relaxations and yields exact M best optimal solutions for problems with large tree-width.
Acknowledgements
We thank Nati Linial for his helpful discussions and Chen Yanover and Talya Meltzer for their
insight and help in running BMMF. We also thank the anonymous reviewers for their useful advice.
A  Proof of Theorem 1
Recall that for any μ ∈ M(G), there exists a probability density p(x) s.t. μ = Σ_x p(x) v(x).
Denote by p∗(z) the minimal value of p(z) among all p(x) that give μ. We prove that
p∗(z) = max(0, I(μ, z)), from which the theorem follows (since p∗(z) = 0 iff μ ∈ M̂(G, z)).
The proof is by induction on n. For n = 1, the node has degree 0, so I(μ, z) = μ_1(z_1).
Clearly, p∗(z) = μ_1(z_1), so p∗(z) = I(μ, z). For n > 1, there must exist a leaf in G
(assume that its index is n and its neighbor's is n − 1). Denote by G̃ the tree obtained
by removing node n and its edge with n − 1. For any assignment x, denote by x̃ the
corresponding sub-assignment for the first n − 1 variables. Also, any μ can be derived by
adding appropriate coordinates to a unique μ̃ ∈ M(G̃). For an integral vertex μ = v(x),
denote its projection by μ̃ = ṽ(x̃). Denote by Ĩ(μ̃, z̃) the functional in Eq. 5 applied to G̃. For
any μ and its projection μ̃, it can be seen that:

  I(μ, z) = Ĩ(μ̃, z̃) − β   (11)

where we define β = Σ_{x_n ≠ z_n} μ_{n−1,n}(z_{n−1}, x_n) (so 0 ≤ β ≤ 1). The inductive assumption
gives a p̃(x̃) that has marginals μ̃ and also p̃∗(z̃) = max(0, Ĩ(μ̃, z̃)). We next use p̃(x̃) to
construct a p(x) that has marginals μ and the desired minimal p∗(z). Consider three cases:
I. I(μ, z) ≤ 0 and Ĩ(μ̃, z̃) ≤ 0. From the inductive assumption, p̃∗(z̃) = 0, so we define:

  p(x) = p̃(x̃) μ_{n−1,n}(x_{n−1}, x_n) / μ_{n−1}(x_{n−1})   (12)

which indeed marginalizes to μ, and p(z) = 0 so that p∗(z) = 0 as required. If μ_{n−1}(x_{n−1}) =
0, then p̃(x̃) is necessarily 0, in which case we define p(x) = 0. Note that this construction
is identical to that used in proving that M_L(G) = M(G) for a tree graph G.
II. I(μ, z) > 0. Based on Eq. 11 and β ≥ 0, we have Ĩ(μ̃, z̃) > 0. Applying the inductive
assumption to μ̃, we obtain p̃∗(z̃) = Ĩ(μ̃, z̃) > 0. Now, define p(x) so that p(z) = I(μ, z):

  x_l, l ≤ n−2  | δ(x_{n−1} = z_{n−1}) | δ(x_n = z_n)  | p(x)
  no constraint | 0                    | no constraint | As in Eq. 12
  ∃l, x_l ≠ z_l | 1                    | 0             | 0
  ∃l, x_l ≠ z_l | 1                    | 1             | p̃(x̃)
  ∀l, x_l = z_l | 1                    | 0             | μ_{n−1,n}(z_{n−1}, x_n)
  ∀l, x_l = z_l | 1                    | 1             | I(μ, z)

Simple algebra shows that p(x) is non-negative and has μ as marginals. We now show that
p(z) is minimal. Based on the inductive assumption and Eq. 11, it can easily be shown
that I(v(z), z) = 1 and I(v(x), z) ≤ 0 for x ≠ z. For any p(x) s.t. μ = Σ_x p(x) v(x), by
linearity, I(μ, z) = p(z) + Σ_{x≠z} p(x) I(v(x), z) ≤ p(z) (since I(v(x), z) ≤ 0 for x ≠ z).
Since the p(z) we define achieves this lower bound, it is clearly minimal.
III. I(μ, z) ≤ 0 but Ĩ(μ̃, z̃) > 0. Applying the inductive assumption to μ̃, we see that
p̃∗(z̃) = Ĩ(μ̃, z̃) > 0; Eq. 11 implies β ≥ Ĩ(μ̃, z̃) ≥ 0. Define γ = μ_{n−1}(z_{n−1}) − p̃∗(z̃), which
is non-negative since μ_{n−1}(z_{n−1}) = μ̃_{n−1}(z̃_{n−1}) and p̃ marginalizes to μ̃. Define p(x) as:

  x_l, l ≤ n−2  | δ(x_{n−1} = z_{n−1}) | δ(x_n = z_n)  | p(x)
  no constraint | 0                    | no constraint | As in Eq. 12
  ∃l, x_l ≠ z_l | 1                    | 0             | p̃(x̃) μ_{n−1,n}(z_{n−1}, x_n) (β − Ĩ(μ̃, z̃)) / (β γ)
  ∃l, x_l ≠ z_l | 1                    | 1             | p̃(x̃) μ_{n−1,n}(z_{n−1}, z_n) / γ
  ∀l, x_l = z_l | 1                    | 0             | Ĩ(μ̃, z̃) μ_{n−1,n}(z_{n−1}, x_n) / β
  ∀l, x_l = z_l | 1                    | 1             | 0

which indeed marginalizes to μ, and p(z) = 0 so that p∗(z) = 0, as required.
Marcus Hutter
RSISE @ ANU and SML @ NICTA
Canberra, ACT, 0200, Australia
[email protected] www.hutter1.net
Abstract
The Minimum Description Length (MDL) principle selects the model that has the
shortest code for data plus model. We show that for a countable class of models,
MDL predictions are close to the true distribution in a strong sense. The result
is completely general. No independence, ergodicity, stationarity, identifiability,
or other assumption on the model class need to be made. More formally, we show
that for any countable class of models, the distributions selected by MDL (or MAP)
asymptotically predict (merge with) the true measure in the class in total variation
distance. Implications for non-i.i.d. domains like time-series forecasting, discriminative learning, and reinforcement learning are discussed.
1 Introduction
The minimum description length (MDL) principle recommends to use, among competing models, the one that allows to compress the data+model most [Grü07]. The better the compression, the more regularity has been detected, hence the better will predictions be. The MDL principle can be regarded as a formalization of Ockham's razor, which says to select the simplest model consistent with the data.
Multistep lookahead sequential prediction. We consider sequential prediction problems, i.e. having observed sequence x ≡ (x₁,x₂,...,x_ℓ) ≡ x_{1:ℓ}, predict z ≡ (x_{ℓ+1},...,x_{ℓ+h}) ≡ x_{ℓ+1:ℓ+h}, then observe x_{ℓ+1} ∈ X for ℓ ≡ ℓ(x) = 0,1,2,.... Classical prediction is concerned with h = 1, multi-step lookahead with 1 < h < ∞, and total prediction with h = ∞. In this paper we consider the last, hardest case. An infamous problem in this category is the black raven paradox [Mah04, Hut07]: Having observed ℓ black ravens, what is the likelihood that all ravens are black? A more computer science problem is (infinite horizon) reinforcement learning, where predicting the infinite future is necessary for evaluating a policy. See Section 6 for these and other applications.
Discrete MDL. Let M = {Q₁,Q₂,...} be a countable class of models=theories=hypotheses=probabilities over sequences X^∞, sorted w.r.t. their complexity=codelength K(Q_i) = 2 log₂ i (say), containing the unknown true sampling distribution P. Our main result will be for arbitrary measurable spaces X, but to keep things simple in the introduction, let us illustrate MDL for finite X. In this case, we define Q_i(x) as the Q_i-probability of data sequence x ∈ X^ℓ. It is possible to code x in log P(x)⁻¹ bits, e.g. by using Huffman coding. Since x is sampled from P, this code is optimal (shortest among all prefix codes). Since we do not know P, we could select the Q ∈ M that leads to the shortest code on the observed data x. In order to be able to reconstruct x from the code we need to know which Q has been chosen, so we also need to code Q, which takes K(Q) bits. Hence x can be coded in min_{Q∈M} {−log Q(x) + K(Q)} bits. MDL selects as model the minimizer

MDL^x := arg min_{Q∈M} {− log Q(x) + K(Q)}
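As a concrete illustration, the two-part codelength minimization above can be sketched for a toy class of i.i.d. Bernoulli models. The class, its parameters, and the function name below are hypothetical, chosen only to make the arg min executable; they are not from the paper:

```python
import math

def mdl_select(x, thetas):
    """Pick arg min_Q { -log2 Q(x) + K(Q) } over i.i.d. Bernoulli(theta_i)
    models Q_i with codelength K(Q_i) = 2*log2(i); returns the index i."""
    ones = sum(x)
    zeros = len(x) - ones
    best_i, best_len = None, float("inf")
    for i, theta in enumerate(thetas, start=1):
        # -log2 Q_i(x) for an i.i.d. Bernoulli(theta) model
        neg_log_q = -(ones * math.log2(theta) + zeros * math.log2(1 - theta))
        total = neg_log_q + 2 * math.log2(i)    # two-part codelength
        if total < best_len:
            best_i, best_len = i, total
    return best_i
```

On a run of twenty 1s, the complex-but-accurate Bernoulli(0.9) model wins despite its larger K(Q); on a balanced sequence, the simple fair coin wins.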
Main result. Given x, the true predictive probability of some "future" event A is P[A|x], e.g. A could be x_{ℓ+1:ℓ+h} or any other measurable set of sequences (see Section 3 for proper definitions).
We consider the sequence of predictive measures MDL^x[·|x] for ℓ = 0,1,2,... selected by MDL. Our main result is that

MDL^x[·|x] converges to P[·|x] in total variation distance for ℓ → ∞ with P-probability 1

(see Theorem 1). The analogous result for Bayesian prediction is well-known, and an immediate corollary of Blackwell and Dubins' celebrated merging-of-opinions theorem [BD62]. Our primary contribution is to prove the analogous result for MDL. A priori it is not obvious that it holds at all, and indeed the proof turns out to be much more complex.
Motivation. The results above hold for completely arbitrary countable model classes M. No independence, ergodicity, stationarity, identifiability, or other assumption need to be made.
The bulk of previous results for MDL are for continuous model classes [Grü07]. Much has been shown for classes of independent identically distributed (i.i.d.) random variables [BC91, Grü07]. Many results naturally generalize to stationary-ergodic sequences like (kth-order) Markov. For instance, asymptotic consistency has been shown in [Bar85]. There are many applications violating these assumptions, some of them are presented below and in Section 6. For MDL to work, P needs to be in M or at least close to some Q ∈ M, and there are interesting environments that are not even close to being stationary-ergodic or i.i.d.
Non-i.i.d. data is pervasive [AHRU09]; it includes all time-series prediction problems like weather forecasting and stock market prediction [CBL06]. Indeed, these are also perfect examples of non-ergodic processes. Too many greenhouse gases, a massive volcanic eruption, an asteroid impact, or another world war could change the climate/economy irreversibly. Life is also not ergodic; one inattentive second in a car can have irreversible consequences. Also stationarity is easily violated in multi-agent scenarios: An environment which itself contains a learning agent is non-stationary (during the relevant learning phase). Extensive games and multi-agent reinforcement learning are classical examples [WR04].
Often it is assumed that the true distribution can be uniquely identified asymptotically. For non-ergodic environments, asymptotic distinguishability can depend on the realized observations, which prevents a prior reduction or partitioning of M. Even if principally possible, it can be practically burdensome to do so, e.g. in the presence of approximate symmetries. Indeed this problem is the primary reason for considering predictive MDL. MDL might never identify the true distribution, but our main result shows that the sequentially selected models become predictively indistinguishable.
For arbitrary countable model classes, the following results are known: The MDL one-step lookahead predictor (i.e. h = 1) of three variants of MDL converges to the true predictive distribution. The proof technique used in [PH05] is inherently limited to finite h. Another general consistency result is presented in [Grü07, Thm.5.1]. Consistency is shown (only) in probability and the predictive implications of the result are unclear. A stronger almost sure result is alluded to, but the given reference to [BC91] contains only results for i.i.d. sequences which do not generalize to arbitrary classes. So existing results for discrete MDL are far less satisfactory than the elegant Bayesian merging-of-opinions result.
The countability of M is the severest restriction of our result. Nevertheless the countable case is useful. A semi-parametric problem class ∪_{d=1}^∞ M_d with M_d = {Q_{θ,d} : θ ∈ ℝ^d} (say) can be reduced to a countable class M = {P_d} for which our result holds, where P_d is a Bayes or NML or other estimate of M_d [Grü07]. Alternatively, ∪_d M_d could be reduced to a countable class by considering only computable parameters θ. Essentially all interesting model classes contain such a countable topologically dense subset. Under certain circumstances MDL still works for the non-computable parameters [Grü07]. Alternatively one may simply reject non-computable parameters on philosophical grounds [Hut05]. Finally, the techniques for the countable case might aid proving general results for continuous M, possibly along the lines of [Rya09].
Contents. The paper is organized as follows: In Section 2 we provide some insights how MDL
works in restricted settings, what breaks down for general countable M, and how to circumvent the
problems. The formal development starts with Section 3, which introduces notation and our main
result. The proof for finite M is presented in Section 4 and for denumerable M in Section 5. In
Section 6 we show how the result can be applied to sequence prediction, classification and regression,
discriminative learning, and reinforcement learning. Section 7 discusses some MDL variations.
2
2
Facts, Insights, Problems
Before starting with the formal development, we describe how MDL works in some restricted settings, what breaks down for general countable M, and how to circumvent the problems. For deterministic environments, MDL reduces to learning by elimination, and results can easily be understood.
Consistency of MDL for i.i.d. (and stationary-ergodic) sources is also intelligible. For general M,
MDL may no longer converge to the true model. We have to give up the idea of model identification,
and concentrate on predictive performance.
Deterministic MDL = elimination learning. For a countable class M = {Q₁,Q₂,...} of deterministic theories=models=hypotheses=sequences, sorted w.r.t. their complexity=codelength K(Q_i) = 2 log₂ i (say), it is easy to see why MDL works: Each Q is a model for one infinite sequence x^Q_{1:∞}, i.e. Q(x^Q) = 1. Given the true observations x ≡ x^P_{1:ℓ} so far, MDL selects the simplest Q consistent with x_{1:ℓ} and for h = 1 predicts x^Q_{ℓ+1}. This (and potentially other) Q becomes (forever) inconsistent if and only if the prediction was wrong. Assume the true model is P = Q_m. Since elimination occurs in order of increasing index i, and Q_m never makes any error, MDL makes at most m−1 prediction errors. Indeed, what we have described is just classical Gold style learning by elimination. For 1 < h < ∞, the prediction x^Q_{ℓ+1:ℓ+h} may be wrong only on x^P_{ℓ+h}, which causes h wrong predictions before the error is revealed. (Note that at time ℓ only x^P_ℓ is revealed.) Hence the total number of errors is bounded by h·(m−1). The bound is for instance attained on the class consisting of Q_i = 1^{ih} 0^∞, and the true sequence switches from 1 to 0 after having observed m·h ones. For h = ∞, a wrong prediction gets eventually revealed. Hence each wrong Q_i (i < m) gets eventually eliminated, i.e. P gets eventually selected. So for h = ∞ we can (still/only) show that the number of errors is finite. No bound on the number of errors in terms of m only is possible. For instance, for M = {Q₁ = 1^∞, Q₂ = P = 1^n 0^∞}, it takes n time steps to reveal that prediction 1^∞ is wrong, and n can be chosen arbitrarily large.
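The elimination argument for h = 1 is easy to simulate. The sketch below is illustrative code, not from the paper; it uses the class Q_i = 1^i 0^∞ and checks the error bound:

```python
def run_elimination(true_model, models, steps):
    """h = 1 Gold-style learning by elimination for deterministic models.

    models: sequences given as functions t -> symbol, sorted by complexity;
    true_model must be among them. Returns the number of one-step errors."""
    errors = 0
    consistent = list(range(len(models)))     # indices of surviving models
    for t in range(steps):
        pred = models[consistent[0]](t)       # simplest surviving model predicts
        truth = true_model(t)
        if pred != truth:
            errors += 1
        # eliminate every model inconsistent with the new observation
        consistent = [i for i in consistent if models[i](t) == truth]
    return errors

# Class Q_i = 1^i 0^infinity for i = 1, 2, 3.
models = [lambda t, i=i: 1 if t < i else 0 for i in (1, 2, 3)]
```

Running `run_elimination(models[2], models, 10)` gives 2 errors here, matching the bound m−1 = 2: each error permanently eliminates at least the currently simplest model, and the true model is never eliminated.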
Comparison of deterministic↔probabilistic and MDL↔Bayes. The flavor of results carries over to some extent to the probabilistic case. On a very abstract level even the line of reasoning carries over, although this is deeply buried in the sophisticated mathematical analysis of the latter. So the special deterministic case illustrates the more complex probabilistic case. The differences are as follows: In the probabilistic case, the true P can in general not be identified anymore. Further, while the Bayesian bound trivially follows from the 1/2-century old classical merging of opinions result [BD62], the corresponding MDL bound we prove in this paper is more difficult to obtain.
Consistency of MDL for stationary-ergodic sources. For an i.i.d. class M, the law of large numbers applied to the random variables Z_t := log[P(x_t)/Q(x_t)] implies (1/ℓ) Σ_{t=1}^ℓ Z_t → KL(P||Q) := Σ_{x₁} P(x₁) log[P(x₁)/Q(x₁)] with P-probability 1. Either the Kullback-Leibler (KL) divergence is zero, which is the case if and only if P = Q, or log P(x_{1:ℓ}) − log Q(x_{1:ℓ}) = Σ_{t=1}^ℓ Z_t ≈ KL(P||Q)·ℓ → ∞, i.e. asymptotically MDL does not select Q. For countable M, a refinement of this argument shows that MDL eventually selects P [BC91]. This reasoning can be extended to stationary-ergodic M, but essentially not beyond. To see where the limitation comes from, we present some troubling examples.
Trouble makers. For instance, let P be a Bernoulli(θ₀) process, but let the Q-probability that x_t = 1 be θ_t, i.e. time-dependent (still assuming independence). For a suitably converging but "oscillating" (i.e. infinitely often larger and smaller than its limit) sequence θ_t → θ₀ one can show that log[P(x_{1:t})/Q(x_{1:t})] converges to but oscillates around K(Q) − K(P) w.p.1, i.e. there are non-stationary distributions for which MDL does not converge (not even to a wrong distribution).

One idea to solve this problem is to partition M, where two distributions are in the same partition if and only if they are asymptotically indistinguishable (like P and Q above), and then ask MDL to only identify a partition. This approach cannot succeed generally, whatever particular criterion is used, for the following reason: Let P(x₁) > 0 ∀x₁. For x₁ = 1, let P and Q be asymptotically indistinguishable, e.g. P = Q on the remainder of the sequence. For x₁ = 0, let P and Q be asymptotically distinguishable distributions, e.g. different Bernoullis. This shows that for non-ergodic sources like this one, asymptotic distinguishability depends on the drawn sequence. The first observation can lead to totally different futures.
Predictive MDL avoids trouble. The Bayesian posterior does not need to converge to a single (true or other) distribution, in order for prediction to work. We can do something similar for MDL. At each time we still select a single distribution, but give up the idea of identifying a single distribution asymptotically. We just measure predictive success, and accept infinite oscillations. That's the approach taken in this paper.
3 Notation and Main Result
The formal development starts with this section. We need probability measures and filters for infinite
sequences, conditional probabilities and densities, the total variation distance, and the concept of
merging (of opinions), in order to formally state our main result.
Measures on sequences. Let (Ω := X^∞, F, P) be the space of infinite sequences with natural filtration and product σ-field F and probability measure P. Let ω ∈ Ω be an infinite sequence sampled from the true measure P. Except when mentioned otherwise, all probability statements and expectations refer to P, e.g. almost surely (a.s.) and with probability 1 (w.p.1) are short for with P-probability 1 (w.P.p.1). Let x = x_{1:ℓ} = ω_{1:ℓ} be the first ℓ symbols of ω.

For countable X, the probability that an infinite sequence starts with x is P(x) := P[{x}×X^∞]. The conditional distribution of an event A given x is P[A|x] := P[A ∩ ({x}×X^∞)]/P(x), which exists w.p.1. For other probability measures Q on Ω, we define Q(x) and Q[A|x] analogously. General X are considered at the end of this section.
Convergence in total variation. P is said to be absolutely continuous relative to Q, written

P ≪ Q  :⟺  [Q[A] = 0 implies P[A] = 0 for all A ∈ F]

P and Q are said to be mutually singular, written P ⊥ Q, iff there exists an A ∈ F for which P[A] = 1 and Q[A] = 0. The total variation distance (tvd) between Q and P given x is defined as

d(P, Q|x) := sup_{A∈F} (Q[A|x] − P[A|x])        (1)

Q is said to predict P in tvd (or merge with P) if d(P,Q|x) → 0 for ℓ(x) → ∞ with P-probability 1. Note that this in particular implies, but is stronger than, one-step predictive on- and off-sequence convergence Q(x_{ℓ+1} = a_{ℓ+1}|x_{1:ℓ}) − P(x_{ℓ+1} = a_{ℓ+1}|x_{1:ℓ}) → 0 for any a, not necessarily equal ω [KL94]. The famous Blackwell and Dubins convergence result [BD62] states that if P is absolutely continuous relative to Q, then (and only then [KL94]) Q merges with P:

If P ≪ Q then d(P, Q|x) → 0 w.p.1 for ℓ(x) → ∞
Bayesian prediction. This result can immediately be utilized for Bayesian prediction. Let M := {Q₁,Q₂,Q₃,...} be a countable (finite or infinite) class of probability measures, and Bayes[A] := Σ_{Q∈M} Q[A] w_Q with w_Q > 0 ∀Q and Σ_{Q∈M} w_Q = 1. If the model assumption P ∈ M holds, then obviously P ≪ Bayes, hence Bayes merges with P, i.e. d(P,Bayes|x) → 0 w.p.1 for all P ∈ M.
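For intuition, the merging property is easy to observe numerically. The following sketch uses a hypothetical toy class (three i.i.d. Bernoulli(θ) models, uniform prior; none of these parameters are from the paper) and tracks the one-step predictive probability of the Bayes mixture:

```python
import random

random.seed(0)

# Hypothetical finite class: i.i.d. Bernoulli(theta) measures with prior w_Q.
thetas = [0.2, 0.5, 0.8]
posts = [1 / 3, 1 / 3, 1 / 3]      # posterior weights, initialized to the prior
true_theta = 0.8                   # the sampling distribution P is in the class

for t in range(2000):
    x = 1 if random.random() < true_theta else 0      # observe x_t ~ P
    # Bayes rule: reweight each Q by its one-step likelihood Q(x_t)
    posts = [w * (th if x == 1 else 1 - th) for w, th in zip(posts, thetas)]
    z = sum(posts)
    posts = [w / z for w in posts]

# One-step predictive probability Bayes[x_{t+1} = 1 | x_{1:t}]
bayes_pred = sum(w * th for w, th in zip(posts, thetas))
```

Because the true model has positive prior weight, P ≪ Bayes, and the predictive probability settles near θ = 0.8.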
Unlike many other Bayesian convergence and consistency theorems, no (independence, ergodicity,
stationarity, identifiability, or other) assumption on the model class M need to be made. The analogous result for MDL is as follows:
Theorem 1 (MDL predictions) Let M be a countable class of probability measures on X^∞ containing the unknown true sampling distribution P. No (independence, ergodicity, stationarity, identifiability, or other) assumptions need to be made on M. Let

MDL^x := arg min_{Q∈M} {− log Q(x) + K(Q)}   with   Σ_{Q∈M} 2^{−K(Q)} < ∞

be the measure selected by MDL at time ℓ given x ∈ X^ℓ. Then the predictive distributions MDL^x[·|x] converge to P[·|x] in the sense that

d(P, MDL^x|x) ≡ sup_{A∈F} (MDL^x[A|x] − P[A|x]) → 0 for ℓ(x) → ∞ w.p.1
K(Q) is usually interpreted and defined as the length of some prefix code for Q, in which case Σ_Q 2^{−K(Q)} ≤ 1. If K(Q) := log₂ w_Q⁻¹ is chosen as complexity, by Bayes rule Pr(Q|x) = Q(x)w_Q/Bayes(x), the maximum a posteriori estimate MAP^x := arg max_{Q∈M} {Pr(Q|x)} ≡ MDL^x. Hence the theorem also applies to MAP. The proof of the theorem is surprisingly subtle and complex compared to the analogous Bayesian case. One reason is that MDL^x(x) is not a measure on X^∞.
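With K(Q) = log₂ w_Q⁻¹, the MAP/MDL equivalence is a one-liner to check; the weights and likelihoods below are made up purely for illustration:

```python
import math

# Hypothetical prior weights w_Q and data likelihoods Q(x) for three models.
w = {"Q1": 0.50, "Q2": 0.25, "Q3": 0.25}
lik = {"Q1": 1e-6, "Q2": 4e-5, "Q3": 3e-5}

# MAP: maximize the posterior Pr(Q|x), proportional to Q(x) * w_Q
map_pick = max(w, key=lambda q: lik[q] * w[q])

# MDL: minimize the two-part codelength -log2 Q(x) + K(Q), K(Q) = log2(1/w_Q)
mdl_pick = min(w, key=lambda q: -math.log2(lik[q]) + math.log2(1 / w[q]))
```

Both selectors pick the same model, since taking −log₂ turns the maximized product into the minimized sum.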
Arbitrary X. For arbitrary measurable spaces X, definitions are more subtle, essentially because point probabilities Q(x) have to be replaced by probability densities relative to some base measure M, usually Lebesgue for X = ℝ^d, counting measure for countable X, and e.g. M[·] = Bayes[·] for general X. We have taken care that all results and proofs are valid unchanged for general X, with Q(·) defined as a version of the Radon-Nikodym derivative relative to M. We spare the reader the details, since they are completely standard and do not add any value to this paper, and space is limited. The formal definitions of Q(x) and Q[A|x] can be found e.g. in [Doo53, BD62]. Note that MDL^x is independent of the particular choice of M.
4 Proof for Finite Model Class
We first prove Theorem 1 for finite model classes M. For this we need the following Definition and
Lemma:
Definition 2 (Relations between Q and P) For any probability measures Q and P, let

• Q^r + Q^s = Q be the Lebesgue decomposition of Q relative to P into an absolutely continuous non-negative measure Q^r ≪ P and a singular non-negative measure Q^s ⊥ P.
• g(ω) := dQ^r/dP = lim_{ℓ→∞} [Q(x_{1:ℓ})/P(x_{1:ℓ})] be (a version of) the Radon-Nikodym derivative, i.e. Q^r[A] = ∫_A g dP.
• Ω° := {ω : Q(x_{1:ℓ})/P(x_{1:ℓ}) → 0} ⊆ {ω : g(ω) = 0}.
• Ω̃ := {ω : d(P,Q|x) → 0 for ℓ(x) → ∞}.

It is well-known that the Lebesgue decomposition exists and is unique. The representation of the Radon-Nikodym derivative as a limit of local densities can e.g. be found in [Doo53, VII§8]: Z^{r/s}_ℓ(ω) := Q^{r/s}(x_{1:ℓ})/P(x_{1:ℓ}) for ℓ = 1,2,3,... constitute two martingale sequences, which converge w.p.1. Q^r ≪ P implies that the limit Z^r_∞ is the Radon-Nikodym derivative dQ^r/dP. (Indeed, Doob's martingale convergence theorem can be used to prove the Radon-Nikodym theorem.) Q^s ⊥ P implies Z^s_∞ = 0 w.p.1. So g is uniquely defined and finite w.p.1.
Lemma 3 (Generalized merging of opinions) For any Q and P, the following holds:

(i) P ≪ Q if and only if P[Ω°] = 0
(ii) P[Ω°] = 0 implies P[Ω̃] = 1    [(i)+[BD62]]
(iii) P[Ω° ∪ Ω̃] = 1    [generalizes (ii)]

(i) says that Q(x)/P(x) converges almost surely to a strictly positive value if and only if P is absolutely continuous relative to Q, (ii) says that an almost sure positive limit of Q(x)/P(x) implies that Q merges with P. (iii) says that even if P ≪ Q fails, we still have d(P,Q|x) → 0 on almost every sequence that has a positive limit of Q(x)/P(x).
Proof. Recall Definition 2.

(i⇐) Assume P[Ω°] = 0: P[A] > 0 implies Q[A] ≥ Q^r[A] = ∫_A g dP > 0, since g > 0 a.s. by assumption P[Ω°] = 0. Therefore P ≪ Q.

(i⇒) Assume P ≪ Q: Choose a B for which P[B] = 1 and Q^s[B] = 0. Now Q^r[Ω°] = ∫_{Ω°} g dP = 0 implies 0 ≤ Q[B ∩ Ω°] ≤ Q^s[B] + Q^r[Ω°] = 0 + 0. By P ≪ Q this implies P[B ∩ Ω°] = 0, hence P[Ω°] = 0.

(ii) That P ≪ Q implies P[Ω̃] = 1 is Blackwell-Dubins' celebrated result. The result now follows from (i).

(iii) generalizes [BD62]. For P[Ω°] = 0 it reduces to (ii). The case P[Ω°] = 1 is trivial. Therefore we can assume 0 < P[Ω°] < 1. Consider the measure P′[A] := P[A|B] conditioned on B := Ω\Ω°. Assume Q[A] = 0. Using ∫_{Ω°} g dP = 0, we get 0 = Q^r[A] = ∫_A g dP = ∫_{A\Ω°} g dP. Since g > 0 outside Ω°, this implies P[A\Ω°] = 0. So P′[A] = P[A ∩ B]/P[B] = P[A\Ω°]/P[B] = 0. Hence P′ ≪ Q. Now (ii) implies d(P′,Q|x) → 0 with P′-probability 1. Since P′ ≪ P we also get d(P′,P|x) → 0 w.P′.p.1. Together this implies 0 ≤ d(P,Q|x) ≤ d(P′,P|x) + d(P′,Q|x) → 0 w.P′.p.1, i.e. P′[Ω̃] = 1. The claim now follows from

P[Ω° ∪ Ω̃] = P′[Ω° ∪ Ω̃]·P[Ω\Ω°] + P[Ω° ∪ Ω̃ | Ω°]·P[Ω°]
           = 1·P[Ω\Ω°] + 1·P[Ω°] = P[Ω] = 1
The intuition behind the proof of Theorem 1 is as follows. MDL will asymptotically not select Q for which Q(x)/P(x) → 0. Hence for those Q potentially selected by MDL, we have ω ∉ Ω°_Q, hence ω ∈ Ω̃_Q, for which d(P,Q|x) → 0 (a.s.). The technical difficulties are for finite M that the eligible Q depend on the sequence ω, and for infinite M to deal with non-uniformly converging d, i.e. to infer d(P,MDL^x|x) → 0.
Proof of Theorem 1 for finite M. Recall Definition 2, and let g_Q, Ω°_Q, Ω̃_Q refer to some Q ∈ M ≡ {Q₁,...,Q_m}. The set of sequences ω for which g_Q is undefined for some Q ∈ M has P-measure zero, and hence can be ignored. Fix some sequence ω ∈ Ω for which g_Q(ω) is defined for all Q ∈ M, and let M° := {Q ∈ M : g_Q(ω) = 0}.

MDL^x := arg min_{Q∈M} L_Q(x),   where L_Q(x) := − log Q(x) + K(Q).

Consider the difference

L_Q(x) − L_P(x) = − log [Q(x)/P(x)] + K(Q) − K(P)  →(ℓ→∞)  − log g_Q(ω) + K(Q) − K(P)

For Q ∈ M°, the r.h.s. is +∞, hence

∀Q ∈ M° ∃ℓ_Q ∀ℓ > ℓ_Q : L_Q(x) > L_P(x)

Since M is finite, this implies

∀ℓ > ℓ₀ ∀Q ∈ M° : L_Q(x) > L_P(x),   where ℓ₀ := max{ℓ_Q : Q ∈ M°} < ∞

Therefore, since P ∈ M, we have MDL^x ∉ M° ∀ℓ > ℓ₀, so we can safely ignore all Q ∈ M° and focus on Q ∈ M̄ := M\M°. Let Ω₁ := ∩_{Q∈M̄} (Ω°_Q ∪ Ω̃_Q). Since P[Ω₁] = 1 by Lemma 3(iii), we can also assume ω ∈ Ω₁. Then

Q ∈ M̄ ⟹ g_Q(ω) > 0 ⟹ ω ∉ Ω°_Q ⟹ ω ∈ Ω̃_Q ⟹ d(P, Q|x) → 0

This implies

d(P, MDL^x|x) ≤ sup_{Q∈M̄} d(P, Q|x) → 0

where the inequality holds for ℓ > ℓ₀ and the limit holds since M̄ is finite. Since the set of ω excluded in our considerations has measure zero, d(P,MDL^x|x) → 0 w.p.1, which proves the theorem for finite M.
5 Proof for Countable Model Class
The proof in the previous section crucially exploited finiteness of M. We want to prove that the probability that MDL asymptotically selects "complex" Q is small. The following Lemma establishes that the probability that MDL selects a specific complex Q infinitely often is small.

Lemma 4 (MDL avoids complex probability measures Q) For any Q and P we have P[Q(x)/P(x) ≥ c infinitely often] ≤ 1/c.
Proof.

P[Q(x)/P(x) ≥ c infinitely often] = P[∀ℓ₀ ∃ℓ > ℓ₀ : Q(x)/P(x) ≥ c]
  (a)≤ P[lim sup_{ℓ→∞} Q(x)/P(x) ≥ c]
  (b)≤ (1/c) E[lim sup_ℓ Q(x)/P(x)]
  (c)= (1/c) E[lim_ℓ Q(x)/P(x)]
  (d)≤ (1/c) lim inf_ℓ E[Q(x)/P(x)]
  (e)≤ 1/c

(a) is true by definition of the limit superior, (b) is Markov's inequality, (c) exploits the fact that the limit of Q(x)/P(x) exists w.p.1, (d) uses Fatou's lemma, and (e) is obvious.
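Lemma 4 can be sanity-checked by Monte Carlo. The experiment below is illustrative and not from the paper; the Bernoulli parameters, c, and run counts are made up. We sample from P = Bernoulli(0.5), track the likelihood ratio against Q = Bernoulli(0.7), and estimate how often the ratio ever reaches c:

```python
import random

random.seed(1)

def ratio_ever_reaches(c, steps, p=0.5, q=0.7):
    """One run: does Q(x_{1:l})/P(x_{1:l}) reach c within `steps` samples from P?"""
    ratio = 1.0
    for _ in range(steps):
        x = 1 if random.random() < p else 0            # x_t ~ P = Bernoulli(p)
        ratio *= (q / p) if x == 1 else ((1 - q) / (1 - p))
        if ratio >= c:
            return True
    return False

c, runs = 10.0, 5000
freq = sum(ratio_ever_reaches(c, 500) for _ in range(runs)) / runs
# freq estimates P[sup_l Q/P >= c], which upper-bounds the "infinitely often"
# event of Lemma 4, so it should not exceed 1/c by more than sampling noise.
```

Since {Q(x)/P(x) ≥ c infinitely often} ⊆ {Q(x)/P(x) ≥ c for some ℓ}, the empirical frequency should respect the 1/c bound up to Monte Carlo error.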
For sufficiently complex Q, Lemma 4 implies that L_Q(x) > L_P(x) for most x. Since convergence is non-uniform in Q, we cannot apply the Lemma to all (infinitely many) complex Q directly, but need to lump them into one Q̄.
6
Proof of Theorem 1 for countable M. Let the Q ∈ M = {Q₁, Q₂, ...} be ordered somehow,
e.g. in increasing order of complexity K(Q), and P = Qₙ. Choose some (large) m ≥ n and let
M̃ := {Q_{m+1}, Q_{m+2}, ...} be the set of "complex" Q. We show that the probability that MDL selects
infinitely often complex Q is small:

  P[MDL^x ∈ M̃ infinitely often]  =  P[∀ℓ₀ ∃ℓ > ℓ₀ : MDL^x ∈ M̃]
    ≤  P[∀ℓ₀ ∃ℓ > ℓ₀ ∃Q ∈ M̃ : L_Q(x) ≤ L_P(x)]
    =  P[∀ℓ₀ ∃ℓ > ℓ₀ : sup_{Q∈M̃} (Q(x)/P(x)) 2^{K(P)−K(Q)} ≥ 1]
    ≤(a)  P[∀ℓ₀ ∃ℓ > ℓ₀ : Σ_{i>m} (Qᵢ(x)/P(x)) 2^{K(P)−K(Qᵢ)} ≥ 1]
    =  P[∀ℓ₀ ∃ℓ > ℓ₀ : (Q̄(x)/P(x)) β 2^{K(P)} ≥ 1]  ≤(b)  β 2^{K(P)}  ≤(c)  ε
The first three relations follow immediately from the definition of the various quantities. Bound (a)
is the crucial ?lumping? step. First we bound
  sup_{Q∈M̃} (Q(x)/P(x)) 2^{−K(Q)}  ≤  Σ_{i=m+1}^∞ (Qᵢ(x)/P(x)) 2^{−K(Qᵢ)}  =  β Q̄(x)/P(x),

where

  Q̄(x) := (1/β) Σ_{i>m} Qᵢ(x) 2^{−K(Qᵢ)},    β := Σ_{i>m} 2^{−K(Qᵢ)} < ∞.
While MDL^x[·] is not a (single) measure on the sequence space and hence difficult to deal with, Q̄ is a proper probability measure on it. In a sense, this step reduces MDL to Bayes. Now we apply Lemma 4 in (b)
to the (single) measure Q̄. The bound (c) holds for sufficiently large m = m_ε(P), since β → 0 for
m → ∞. This shows that for the sequence of MDL estimates

  {MDL^{x_{1:ℓ}} : ℓ > ℓ₀} ⊆ {Q₁, ..., Q_m}   with probability at least 1 − ε.
Hence the already proven Theorem 1 for finite M implies that d(P, MDL^x |x) → 0 with probability
at least 1 − ε. Since convergence holds for every ε > 0, it holds w.p.1.
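The "lumping" step (a) is just the pointwise bound sup ≤ sum. A small numeric sketch (Bernoulli measures with assumed complexities K(Qᵢ) = i + 1; all numbers are illustrative, not from the paper) confirms that the weighted supremum over the complex measures never exceeds the corresponding mixture term β·Q̄(x)/P(x):

```python
import math
import random

rng = random.Random(1)
# Truncated "countable" class: 50 Bernoulli measures with assumed complexities K(Q_i) = i + 1.
thetas = [rng.uniform(0.05, 0.95) for _ in range(50)]
K = lambda i: i + 1

def log_prob(theta, x):
    return sum(math.log(theta if b else 1.0 - theta) for b in x)

x = [rng.random() < 0.5 for _ in range(30)]   # data from the true P = Bernoulli(1/2)
log_p = log_prob(0.5, x)

m = 10                                        # Q_i with i >= m play the role of "complex" Q
terms = [math.exp(log_prob(thetas[i], x) - log_p) * 2.0 ** -K(i)
         for i in range(m, len(thetas))]
sup_term = max(terms)                         # sup_Q (Q(x)/P(x)) 2^{-K(Q)}
mixture_term = sum(terms)                     # = beta * Q_bar(x) / P(x)
assert sup_term <= mixture_term
```

The point of the construction is that the mixture Q̄, unlike the supremum, is a single probability measure, so Lemma 4 applies to it.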
6 Implications
Due to its generality, Theorem 1 can be applied to many problem classes. We illustrate some immediate implications of Theorem 1 for time-series forecasting, classification, regression, discriminative
learning, and reinforcement learning.
Time-series forecasting. Classical online sequence prediction is concerned with predicting x_{ℓ+1}
from (non-i.i.d.) sequence x_{1:ℓ} for ℓ = 1,2,3,.... Forecasting farther into the future is possible by
predicting x_{ℓ+1:ℓ+h} for some h > 0. Hence Theorem 1 implies good asymptotic (multi-step) predictions. Offline learning is concerned with training a predictor on x_{1:ℓ} for fixed ℓ in-house, and then
selling and using the predictor on x_{ℓ+1:∞} without further learning. Theorem 1 shows that for enough
training data, predictions "post-learning" will be good.
Classification and Regression. In classification (discrete X) and regression (continuous X), a sample is a set of pairs D = {(y₁,x₁), ..., (y_ℓ,x_ℓ)}, and a functional relationship ẋ = f(ẏ) + noise, i.e. a
conditional probability P(ẋ|ẏ), shall be learned. For reasons apparent below, we have swapped the
usual role of ẋ and ẏ. The dots indicate ẋ ∈ X and ẏ ∈ Y, while x = x_{1:ℓ} ∈ X^ℓ and y = y_{1:ℓ} ∈ Y^ℓ.
If we assume that also ẏ follows some distribution, and start with a countable model class M of
joint distributions Q(ẋ,ẏ) which contains the true joint distribution P(ẋ,ẏ), our main result implies
that MDL^D[(ẋ,ẏ)|D] converges to the true distribution P(ẋ,ẏ). Indeed, since/if samples are assumed
i.i.d., we don't need to invoke our general result.
Discriminative learning. Instead of learning a generative [Jeb03] joint distribution P(ẋ,ẏ), which
requires model assumptions on the input ẏ, we can discriminatively [LSS07] learn P(·|ẏ) directly
without any assumption on y (not even i.i.d.). We can simply treat y_{1:∞} as an oracle to all Q, define
M′ = {Q′} with Q′(x) := Q(x|y_{1:∞}), and apply our main result to M′, leading to MDL′^x[A|x] →
P′[A|x], i.e. MDL^{x|y_{1:∞}}[A|x, y_{1:∞}] → P[A|x, y_{1:∞}]. If y₁, y₂, ... are conditionally independent, or
more generally for any causal process, we have Q(x|y) = Q(x|y_{1:∞}). Since the x given y are not
identically distributed, classical MDL consistency results for i.i.d. or stationary-ergodic sources do
not apply. The following corollary formalizes our findings:
Corollary 5 (Discriminative MDL) Let M ∋ P be a class of discriminative causal distributions
Q[·|y_{1:∞}], i.e. Q(x|y_{1:∞}) = Q(x|y), where x = x_{1:ℓ} and y = y_{1:ℓ}. Regression and classification are
typical examples. Further assume M is countable. Let MDL^{x|y} := argmin_{Q∈M} {−log Q(x|y) + K(Q)}
be the discriminative MDL measure (at time ℓ, given x, y). Then sup_A |MDL^{x|y}[A|x,y] −
P[A|x,y]| → 0 for ℓ(x) → ∞, P[·|y_{1:∞}] almost surely, for every sequence y_{1:∞}.
For finite Y and conditionally independent x, the intuitive reason how this can work is as follows:
If ẏ appears in y_{1:∞} only finitely often, it plays asymptotically no role; if it appears infinitely often,
then P(·|ẏ) can be learned. For infinite Y and deterministic M, the result is also intelligible: Every
ẏ might appear only once, but probing enough function values x_t = f(y_t) allows one to identify the
function.
Reinforcement learning (RL). In the agent framework [RN03], an agent interacts with an environment in cycles. At time t, an agent chooses an action y_t based on past experience x_{<t} ≡
(x₁, ..., x_{t−1}) and past actions y_{<t} with probability π(y_t|x_{<t} y_{<t}) (say). This leads to a new
perception x_t with probability μ(x_t|x_{<t} y_{1:t}) (say). Then cycle t + 1 starts. Let P(xy) =
∏_{t=1}^ℓ μ(x_t|x_{<t} y_{1:t}) π(y_t|x_{<t} y_{<t}) be the joint interaction probability. We make no (Markov, stationarity, ergodicity) assumption on μ and π. They may be POMDPs or beyond.
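The joint interaction probability factorizes cycle by cycle into a policy term and an environment term. A minimal sketch (a memoryless toy policy and an environment that echoes the latest action; all probabilities here are hypothetical illustrative numbers, not from the paper) computes it for a short binary interaction history:

```python
# P(xy) = prod_{t=1}^l  pi(y_t | x_{<t} y_{<t}) * mu(x_t | x_{<t} y_{1:t}),
# with a memoryless toy policy pi and an environment mu that echoes the last action.
def pi(y, xs, ys):
    return 0.8 if y == 1 else 0.2          # policy prefers action 1

def mu(x, xs, ys):
    return 0.9 if x == ys[-1] else 0.1     # percept echoes the latest action w.p. 0.9

def joint_prob(xs, ys):
    p = 1.0
    for t in range(len(xs)):
        p *= pi(ys[t], xs[:t], ys[:t])      # action chosen from x_{<t}, y_{<t}
        p *= mu(xs[t], xs[:t], ys[:t + 1])  # percept generated from x_{<t}, y_{1:t}
    return p

print(joint_prob([1, 1], [1, 0]))          # factors: 0.8 * 0.9 * 0.2 * 0.1
```

Because pi and mu receive the full histories, nothing in this factorization assumes Markovianity, stationarity, or ergodicity, matching the setup above.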
Corollary 6 (Single-agent MDL) For a fixed policy=agent π, and a class of environments
{ν₁, ν₂, ...} ∋ μ, let M = {Qᵢ} with Qᵢ(x|y) = ∏_{t=1}^ℓ νᵢ(x_t|x_{<t} y_{1:t}). Then d(P[·|y], MDL^{x|y}) → 0
with joint P-probability 1.
The corollary follows immediately from the previous corollary and the facts that the Qᵢ are causal
and that convergence with P[·|y_{1:∞}]-probability 1 for every y_{1:∞} implies convergence w.P.p.1 jointly in x and y.
In reinforcement learning [SB98], the perception x_t := (o_t, r_t) consists of some regular observation
o_t and a reward r_t ∈ [0,1]. The goal is to find a policy which maximizes the accrued reward in the long run.
The previous corollary implies
Corollary 7 (Fixed-policy MDL value function convergence) Let V_P[xy] := E_{P[·|xy]}[r_{ℓ+1} +
γ r_{ℓ+2} + γ² r_{ℓ+3} + ...] be the future γ-discounted P-expected reward sum (true value of π), and similarly V_{Qᵢ}[xy] for Qᵢ. Then the MDL value converges to the true value, i.e. V_{MDL^{x|y}}[xy] − V_P[xy] → 0,
w.P.p.1, for any policy π.
Proof. The corollary follows from the general inequality |E_P[f] − E_Q[f]| ≤ sup|f| · sup_A |P[A] −
Q[A]| by inserting f := r_{ℓ+1} + γ r_{ℓ+2} + γ² r_{ℓ+3} + ..., P = P[·|xy] and Q = MDL^{x|y}[·|xy], and
using 0 ≤ f ≤ 1/(1−γ) and Corollary 6.
Since the value function probes the infinite future, we really made use of our convergence result in
total variation. Corollary 7 shows that MDL approximates the true value asymptotically arbitrarily
well. The result is weaker than it may appear. Following the policy that maximizes the estimated
(MDL) value is often not a good idea, since the policy does not explore properly [Hut05]. Nevertheless, it is a reassuring non-trivial result.
7 Variations
MDL is more a general principle for model selection than a uniquely defined procedure. For instance,
there are crude and refined MDL [Grü07], the related MML principle [Wal05], a static, a dynamic,
and a hybrid way of using MDL for prediction [PH05], and other variations. For our setup, we could
have defined multi-step lookahead prediction as a product of single-step predictions: MDLI(x_{1:ℓ}) :=
∏_{t=1}^ℓ MDL^{x_{<t}}(x_t|x_{<t}) and MDLI(z|x) = MDLI(xz)/MDLI(x), which is a more incremental MDL
version. Both MDL^x and MDLI are "static" in the sense of [PH05], and each allows for a dynamic
and a hybrid version. Due to its incremental nature, MDLI likely has better predictive properties than
MDL^x, and conveniently defines a single measure over X^∞, but inconveniently is ∉ M. One reason
for using MDL is that it can be computationally simpler than Bayes. E.g. if M is a class of MDPs,
then MDL^x is still an MDP and hence tractable, but MDLI like Bayes are a nightmare to deal with.
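As a concrete (toy) illustration of the static estimator MDL^x = argmin_Q {−log Q(x) + K(Q)}: the sketch below runs it over a hand-picked finite class of Bernoulli measures with made-up complexities (all parameter and complexity values are illustrative assumptions, not from the paper). With enough data the complexity penalty is swamped by the likelihood term and the true measure is selected:

```python
import math
import random

# (theta, K) pairs: Bernoulli parameters with assumed complexities (in nats, for simplicity).
models = [(0.1, 1.0), (0.3, 2.0), (0.5, 3.0), (0.7, 4.0), (0.9, 5.0)]

def mdl_select(x):
    """Return the model minimizing the two-part code length -log Q(x) + K(Q)."""
    def two_part_code(model):
        theta, k = model
        return -sum(math.log(theta if b else 1.0 - theta) for b in x) + k
    return min(models, key=two_part_code)

rng = random.Random(2)
x = [rng.random() < 0.7 for _ in range(500)]  # data from the true measure Bernoulli(0.7)
selected_theta, _ = mdl_select(x)
print(selected_theta)
```

The incremental variant MDLI would instead multiply the one-step predictive probabilities of the successively selected models, which generally does not coincide with any single member of the class.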
Acknowledgements. My thanks go to Peter Sunehag for useful discussions.
References
[AHRU09] M.-R. Amini, A. Habrard, L. Ralaivola, and N. Usunier, editors. Learning from non-IID data: Theory, Algorithms and Practice (LNIDD'09), Bled, Slovenia, 2009.
[Bar85] A. R. Barron. Logically Smooth Density Estimation. PhD thesis, Stanford University, 1985.
[BC91] A. R. Barron and T. M. Cover. Minimum complexity density estimation. IEEE Transactions on Information Theory, 37:1034–1054, 1991.
[BD62] D. Blackwell and L. Dubins. Merging of opinions with increasing information. Annals of Mathematical Statistics, 33:882–887, 1962.
[CBL06] N. Cesa-Bianchi and G. Lugosi. Prediction, Learning, and Games. Cambridge University Press, 2006.
[Doo53] J. L. Doob. Stochastic Processes. Wiley, New York, 1953.
[Grü07] P. D. Grünwald. The Minimum Description Length Principle. The MIT Press, Cambridge, 2007.
[Hut03] M. Hutter. Convergence and loss bounds for Bayesian sequence prediction. IEEE Transactions on Information Theory, 49(8):2061–2067, 2003.
[Hut05] M. Hutter. Universal Artificial Intelligence: Sequential Decisions based on Algorithmic Probability. Springer, Berlin, 2005. 300 pages, http://www.hutter1.net/ai/uaibook.htm.
[Hut07] M. Hutter. On universal prediction and Bayesian confirmation. Theoretical Computer Science, 384(1):33–48, 2007.
[Jeb03] T. Jebara. Machine Learning: Discriminative and Generative. Springer, 2003.
[KL94] E. Kalai and E. Lehrer. Weak and strong merging of opinions. Journal of Mathematical Economics, 23:73–86, 1994.
[LSS07] P. Long, R. Servedio, and H. U. Simon. Discriminative learning can succeed where generative learning fails. Information Processing Letters, 103(4):131–135, 2007.
[Mah04] P. Maher. Probability captures the logic of scientific confirmation. In C. Hitchcock, editor, Contemporary Debates in Philosophy of Science, chapter 3, pages 69–93. Blackwell Publishing, 2004.
[PH05] J. Poland and M. Hutter. Asymptotics of discrete MDL for online prediction. IEEE Transactions on Information Theory, 51(11):3780–3795, 2005.
[RN03] S. J. Russell and P. Norvig. Artificial Intelligence: A Modern Approach. Prentice-Hall, Englewood Cliffs, NJ, 2nd edition, 2003.
[Rya09] D. Ryabko. Characterizing predictable classes of processes. In Proc. 25th Conference on Uncertainty in Artificial Intelligence (UAI'09), Montreal, 2009.
[SB98] R. S. Sutton and A. G. Barto. Reinforcement Learning: An Introduction. MIT Press, Cambridge, MA, 1998.
[Wal05] C. S. Wallace. Statistical and Inductive Inference by Minimum Message Length. Springer, Berlin, 2005.
[WR04] M. Weinberg and J. S. Rosenschein. Best-response multiagent learning in non-stationary environments. In Proc. 3rd International Joint Conf. on Autonomous Agents & Multi Agent Systems (AAMAS'04), pages 506–513, 2004.
Help or Hinder: Bayesian Models of
Social Goal Inference
Tomer D. Ullman, Chris L. Baker, Owen Macindoe, Owain Evans,
Noah D. Goodman and Joshua B. Tenenbaum
{tomeru, clbaker, owenm, owain, ndg, jbt}@mit.edu
Department of Brain and Cognitive Sciences
Massachusetts Institute of Technology
Abstract
Everyday social interactions are heavily influenced by our snap judgments about
others' goals. Even young infants can infer the goals of intentional agents from
observing how they interact with objects and other agents in their environment:
e.g., that one agent is "helping" or "hindering" another's attempt to get up a hill
or open a box. We propose a model for how people can infer these social goals
from actions, based on inverse planning in multiagent Markov decision problems
(MDPs). The model infers the goal most likely to be driving an agent's behavior by assuming the agent acts approximately rationally given environmental constraints and its model of other agents present. We also present behavioral evidence
in support of this model over a simpler, perceptual cue-based alternative.
1
Introduction
Humans make rapid, consistent intuitive inferences about the goals of agents from the most impoverished of visual stimuli. On viewing a short video of geometric shapes moving in a 2D world, adults
spontaneously attribute to them an array of goals and intentions [7]. Some of these goals are simple,
e.g. reaching an object at a particular location. Yet people also attribute complex social goals, such
as helping, hindering or protecting another agent. Recent studies suggest that infants as young as
six months make the same sort of complex social goal attributions on observing simple displays of
moving shapes, or (at older ages) in displays of puppets interacting [6].
How do humans make these rapid social goal inferences from such impoverished displays? On
one approach, social goals are inferred directly from perceptual cues in a bottom-up fashion. For
example, infants in [6] may judge that a triangle pushing a circle up a hill is helping the circle get
to the top of the hill simply because the circle is moving the triangle in the direction the triangle
was last observed moving on its own. This approach, which has been developed by Blythe et al. [3],
seems suited to explain the rapidity of goal attribution, without the need for mediation from higher
cognition. On an alternative approach, these inferences come from a more cognitive and top-down
system for goal attribution. The inferences are based not just on perceptual evidence, but also on
an intuitive theory of mind on which behavior results from rational plans in pursuit of goals. On
this approach, the triangle is judged to be helping the circle because in some sense he knows what
the circle's goal is, desires for the circle to achieve the goal, constructs a rational plan of action that
he expects will increase the probability of the circle realizing the goal. The virtue of this theoryof-mind approach is its generality, accounting for a much wider range of social goal inferences that
cannot be reduced to simple perceptual cues. Our question here is whether the rapid goal inferences
we make in everyday social situations, and that both infants and adults have been shown to make
from simple perceptual displays, require the sophistication of a theory-based approach or can be
sufficiently explained in terms of perceptual cues.
1
This paper develops the theory-based approach to intuitive social goal inference. There are two main
challenges for this approach. The first is to formalize social goals (e.g. helping or hindering) and
to incorporate this formalization into a general computational framework for goal inference that is
based on theory of mind. This framework should enable the inference that agent A is helping or
hindering agent B from a joint goal inference based on observing A and B interacting. Inference
should be possible even with minimal prior knowledge about the agents and without knowledge of
B's goal. The second challenge is to show that this computational model provides a qualitative and
quantitative fit to rapid human goal inferences from dynamic visual displays. Can inference based
on abstract criteria for goal attribution that draws on unobservable mental states (e.g. beliefs, goals,
planning abilities) explain fast human judgments from impoverished and unfamiliar stimuli?
In addressing the challenge of formalization, we present a formal account of social goal attribution
based on the abstract criterion of A helping (or hindering) B by acting to maximize (minimize)
B's probability of realizing his goals. On this account, agent A rationally maximizes utility by
maximizing (minimizing) the expected utility of B, where this expectation comes from A's model
of B's goals and plans of action. We incorporate this formalization of helping and hindering into an
existing computational framework for theory-based goal inference, on which goals are inferred from
actions by inverting a generative rational planning (MDP) model [1]. The augmented model allows
for the inference that A is helping or hindering B from stimuli in which B's goal is not directly
observable. We test this Inverse Planning model of social goal attribution on a set of simple 2D
displays, comparing its performance to that of an alternative model which makes inferences directly
from visual cues, based on previous work such as that of Blythe et al. [3].
2
Computational Framework
Our framework assumes that people represent the causal role of agents' goals in terms of an intuitive
principle of rationality [4]: the assumption that agents will tend to take efficient actions to achieve
their goals, given their beliefs about the world. For agents with simple goals toward objects or states
of the world, the principle of rationality can be formalized as probabilistic planning in Markov
decision problems (MDPs), and previous work has successfully applied inverse planning in MDPs
to explain human inferences about the object-directed goals of maze-world agents [2]. Inferences of
simple relational goals between agents (such as chasing and fleeing) from maze-world interactions
were considered by Baker, Goodman and Tenenbaum [1], using multiagent MDP-based inverse
planning. In this paper, we present a framework for modeling inferences of more complex social
goals, such as helping and hindering, where an agent's goals depend on the goals of other agents.
We will define two types of agents: simple agents, which have object-directed goals and do not
represent other agents' goals, and complex agents, which have either social or object-directed goals,
and represent other agents' goals and reason about their likely behavior. For each type of agent and
goal, we describe the multiagent MDPs they define. We then describe joint inferences of object-directed and social goals based on the Bayesian inversion of MDP models of behavior.
2.1
Planning in multiagent MDPs
An MDP M = (S, A, T , R, ?) is a tuple that defines a model of an agent?s planning process. S is an
encoding of the world into a finite set of mutually exclusive states, which specifies the set of possible
configurations of all agents and objects. A is the set of actions and T is the transition function, which
encodes the physical laws of the world, i.e. T (St+1 , St , At ) = P (St+1 |St , At ) is the marginal
distribution over the next state, given the current state and the agent?s action (marginalizing over all
other agents? actions). R : S ? A ? R is the reward function, which provides agents with realvalued rewards for each state-action pair, and ? is the discount factor. The following subsections
will describe how R depends on the agent?s goal G (object-directed or social), and how T depends
on the agent?s type (simple or complex). We then describe how agents plan over multiagent MDPs.
2.1.1
Reward functions
Object-directed rewards   The reward function induced by an object-directed goal G is straightforward. We assume that
R is an additive function of state rewards and action costs, such that
R(S, A) = r(S) − c(S, A). We consider a two-parameter family of reward functions, parameterized
Figure 1: (a) Illustration of the state reward functions from the family defined by the parameters β_g and ρ_g.
The agent's goal is at (6,6), where the state reward is equal to β_g. The state reward functions range from a unit
reward in the goal location (row 1) to a field of reward that extends to every location in the grid (row 3). (b)
Bayes net generated by multiagent planning. In this figure, we assume that there are two agents, i and j, with
i simple and j complex. The parameters {β_g^i, ρ_g^i, β_o^i, β_g^j, ρ_g^j} and β are omitted from the graphical model for
readability.
by β_g and ρ_g, which captures the intuition that different kinds of object goals induce different rewards
in space. For instance, on a hot summer day in the park, a drinking fountain is only rewarding when
one is standing directly next to it. In contrast, a flower's beauty is greatest from up close, but can
also be experienced from a range of distances and perspectives. Specifically, β_g and ρ_g determine the
scale and shape of the state reward function, with r_i(S) = max(β_g(1 − distance(S, i, G)/ρ_g), 0),
where distance(S, i, G) is the geodesic distance between agent i and the goal. With ρ_g ≤ 1, the
reward function has a unit value of r(S) = β_g when the agent and object goal occupy the same location, i.e. when distance(S, i, G) = 0, and r(S) = 0 otherwise (see Fig. 1(a), row 1). When ρ_g > 1,
there is a "field" of positive reward around the goal, with a slope of −β_g/ρ_g (see Fig. 1(a), rows
2 and 3). The state reward is maximal at distance(S, i, G) = 0, where r(S) = β_g, and decreases
linearly with the agent's geodesic distance from the goal, reaching a minimum of r(S) = 0 when
distance(S, i, G) ≥ ρ_g.
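The two-parameter state reward is straightforward to compute. A minimal sketch (using Manhattan distance as a stand-in for the geodesic distance, which is only valid in an obstacle-free grid; positions and parameter values are illustrative, with the scale and extent parameters written beta_g and rho_g here):

```python
def object_reward(agent_pos, goal_pos, beta_g, rho_g):
    """r(S) = max(beta_g * (1 - distance/rho_g), 0) for an object-directed goal."""
    d = abs(agent_pos[0] - goal_pos[0]) + abs(agent_pos[1] - goal_pos[1])
    return max(beta_g * (1.0 - d / rho_g), 0.0)

assert object_reward((6, 6), (6, 6), 1.0, 0.5) == 1.0             # scale reward at the goal
assert object_reward((5, 6), (6, 6), 1.0, 0.5) == 0.0             # extent <= 1: zero off the goal
assert abs(object_reward((5, 6), (6, 6), 1.0, 2.5) - 0.6) < 1e-9  # linear "field" of reward
```

Varying the extent parameter moves smoothly between the "drinking fountain" case (reward only at the goal) and the "flower" case (reward over a region).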
Social rewards for helping and hindering   For complex agent j, the state reward function induced by a social goal G_j depends on the cost of j's action A_j, as well as the reward function
R_i of the agent that j wants to help or hinder. Specifically, j's reward function is the difference
of the expectation of i's reward function and j's action cost function, such that R_j(S, A_j) =
β_o E_{A_i}[R_i(S, A_i)] − c(S, A_j). β_o is the social agent's scaling of the expected reward of state S
for agent i, which determines how much j "cares" about i relative to its own costs. For helping
agents, β_o > 0, and for hindering agents, β_o < 0. Computing the expectation E_{A_i}[R_i(S, A_i)] relies
on the social agent's model of i's planning process, which we will describe below.
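A matching sketch for the social reward, with the social scaling written beta_o (the softmax model of the other agent's action choice and all numeric values are hypothetical, for illustration only):

```python
import math

def social_reward(state, a_j, beta_o, action_cost, r_i, i_action_logits):
    """R_j(S, A_j) = beta_o * E_{A_i}[R_i(S, A_i)] - c(S, A_j)."""
    z = sum(math.exp(v) for v in i_action_logits.values())
    expected_r_i = sum(math.exp(v) / z * r_i(state, a_i)
                       for a_i, v in i_action_logits.items())
    return beta_o * expected_r_i - action_cost(state, a_j)

# Toy numbers: j models i as equally likely to go left or right.
r = social_reward(state=None, a_j="push", beta_o=2.0,
                  action_cost=lambda s, a: 0.5,
                  r_i=lambda s, a_i: {"left": 1.0, "right": 0.0}[a_i],
                  i_action_logits={"left": 0.0, "right": 0.0})
assert abs(r - 0.5) < 1e-9   # beta_o * E[R_i] = 2.0 * 0.5 = 1.0, minus action cost 0.5
```

Note that flipping the sign of beta_o turns the same machinery from helping into hindering.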
2.1.2 State-transition functions
In our interactive setting, T^i depends not just on i's action, but on all other agents' actions as well.
Agent i is assumed to compute T^i(S_{t+1}, S_t, A_t^i) by marginalizing over A_t^j for all j ≠ i:

  T^i(S_{t+1}, S_t, A_t^i) = P(S_{t+1}|S_t, A_t^i) = Σ_{A_t^{j≠i}} P(S_{t+1}|S_t, A_t^{1:n}) ∏_{j≠i} P(A_t^j|S_t, G^{1:n}),

where n is the number of agents. This computation requires that an agent have a model of all other
agents, whether simple or complex.
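The marginalization defining T^i can be written out directly. Below, a sketch on a one-dimensional track with deterministic joint physics and a softmax-of-cost model of the other agent's action choice (all of these modeling choices and numbers are toy assumptions, not the paper's maze-world dynamics):

```python
import math

ACTIONS = [-1, 0, 1]

def joint_next(state, a_i, a_j):
    i_pos, j_pos = state
    return (i_pos + a_i, j_pos + a_j)       # deterministic joint physics (toy)

def p_action_j(a_j, state, beta=1.0):
    weights = {a: math.exp(-beta * abs(a)) for a in ACTIONS}  # softmax of action costs
    return weights[a_j] / sum(weights.values())

def T_i(next_state, state, a_i):
    """T^i(S', S, A^i): marginalize agent j's modeled action out of the joint dynamics."""
    return sum(p_action_j(a_j, state)
               for a_j in ACTIONS if joint_next(state, a_i, a_j) == next_state)

probs = [T_i((1, 5 + a_j), (0, 5), 1) for a_j in ACTIONS]
assert abs(sum(probs) - 1.0) < 1e-9         # a proper distribution over next states
```

From agent i's perspective the other agent's action is thus folded into an effectively stochastic transition function, over which standard single-agent planning applies.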
Simple agents   We assume that the simple agents model other agents as randomly selecting actions
in proportion to the softmax of their expected cost, i.e. for agent j, P(A^j|S) ∝ exp(−β c(S, A^j)).
Complex agents   We assume that the social agent j uses its model of other agents' planning process to compute P(A^i|S, G^i), for i ≠ j, allowing for accurate prediction of other agents' actions.
realistic framework in which agents have only partial or false knowledge about the environment.
3
2.1.3 Multiagent planning
Given the variables of MDP M, we can compute the optimal state-action value function Q* :
S × A → ℝ, which determines the expected infinite-horizon reward of taking an action in each state.
We assume that agents have softmax-optimal policies, such that P(A|S, G) ∝ exp(β Q*(S, A)),
allowing occasional deviations from the optimal action depending on the parameter β, which determines agents' level of determinism (higher β implies higher determinism, or less randomness). In
a multiagent setting, joint value functions can be optimized recursively, with one agent representing
the value function of the other, and the other representing the representation of the first, and so on
to an arbitrarily high order [10]. Here, we restrict ourselves to the first level of this reasoning hierarchy. That is, an agent A can at most represent an agent B's reasoning about A's goals and actions,
but not a deeper recursion in which B reasons about A reasoning about B.
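Putting the pieces together, planning with a softmax-optimal policy can be sketched on a tiny chain MDP: value iteration computes Q*, and the policy exponentiates it. All constants below (chain length, goal position, discount, determinism) are illustrative assumptions:

```python
import math

N, GOAL, GAMMA, BETA = 5, 4, 0.9, 2.0
ACTIONS = [-1, 1]

def step(s, a):
    return min(max(s + a, 0), N - 1)

def reward(s, a):
    return 1.0 if step(s, a) == GOAL else -0.1   # goal reward minus a small action cost

V = [0.0] * N
for _ in range(200):                             # value iteration to (near) convergence
    V = [max(reward(s, a) + GAMMA * V[step(s, a)] for a in ACTIONS) for s in range(N)]

def softmax_policy(s):
    """P(A|S, G) proportional to exp(beta * Q*(S, A))."""
    q = {a: reward(s, a) + GAMMA * V[step(s, a)] for a in ACTIONS}
    z = sum(math.exp(BETA * v) for v in q.values())
    return {a: math.exp(BETA * v) / z for a, v in q.items()}

assert softmax_policy(0)[1] > softmax_policy(0)[-1]  # heading to the goal is more likely
```

Larger values of the determinism parameter concentrate the policy on the optimal action; smaller values produce the occasional deviations the model allows for.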
2.2 Inverse planning in multiagent MDPs
Once we have computed P(A^i|S, G^i) for agents 1 through n using multiagent planning, we use
Bayesian inverse planning to infer agents' goals, given observations of their behavior. Fig. 1(b)
shows the structure of the Bayes net generated by multiagent planning, and over which goal inferences are performed. Let θ = {β_g^i, ρ_g^i, β_o^i}_{1:n} be a vector of the parameters of the agents' reward
functions. We compute the joint posterior marginal of agent i's goal G^i and θ, given the observed
state-sequence S_{1:T} and the action-sequences A^{1:n}_{1:T−1} of agents 1:n using Bayes' rule:

  P(G^i, θ | S_{1:T}, A^{1:n}_{1:T−1}, β) ∝ Σ_{G^{j≠i}} P(A^{1:n}_{1:T−1} | S_{1:T}, G^{1:n}, θ, β) P(G^{1:n}) P(θ)    (1)
Gj6=i
To generate goal inferences for our experimental stimuli to compare with people?s judgments, we
integrate Eq. 1 over a range of ? values for each stimulus trial:
X
P (Gi |S1:T , A1:n
P (Gi , ?|S1:T , A1:n
(2)
1:T ?1 , ?) =
1:T ?1 , ?)
?
This allows our models to infer the combination of goals and reward functions that best explains the
agents' behavior for each stimulus.
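The goal posterior can be sketched end to end for a single agent choosing between two object goals. In the sketch below, a softmax-rational action likelihood based on negative distance-to-goal stands in for the full MDP Q-values, the reward parameters are held fixed rather than integrated over, and all positions and constants are toy assumptions:

```python
import math

GOALS = {"flower": 0, "tree": 9}   # two candidate goal positions on a 1-D track
BETA = 1.5

def action_likelihood(a, s, goal_pos):
    # Softmax-rational likelihood; -distance after the move stands in for Q*(S, A).
    q = {act: -abs(min(max(s + act, 0), 9) - goal_pos) for act in (-1, 1)}
    z = sum(math.exp(BETA * v) for v in q.values())
    return math.exp(BETA * q[a]) / z

def goal_posterior(states, actions):
    log_post = {g: math.log(1.0 / len(GOALS)) for g in GOALS}  # uniform prior P(G)
    for s, a in zip(states, actions):
        for g in GOALS:
            log_post[g] += math.log(action_likelihood(a, s, GOALS[g]))
    z = sum(math.exp(v) for v in log_post.values())
    return {g: math.exp(v) / z for g, v in log_post.items()}

post = goal_posterior(states=[5, 4, 3], actions=[-1, -1, -1])
assert post["flower"] > 0.9        # repeated moves toward 0 strongly favor the flower
```

Each observed action multiplies in a likelihood ratio between the goal hypotheses, so a few consistent steps already produce a confident posterior, mirroring the rapid inferences the model is meant to capture.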
3 Experiment
We designed an experiment to test the Inverse Planning model of social goal attributions in a simple
2D maze-world domain, inspired by the stimuli of many previous studies involving children and
adults [7, 5, 8, 6, 9, 12]. We created a set of videos which depicted agents interacting in a maze. Each video contained one "simple agent" and one "complex agent", as described in the Computational Framework section. Subjects were asked to attribute goals to the agents after viewing brief snippets of these videos. Many of the snippets showed agent behavior consistent with more than one hypothesis about the agents' goals. Data from subjects were compared to the predictions of the
Inverse Planning model and a model based on simple visual cues that we describe in the Modeling
subsection below.
3.1 Participants
Participants were 20 adults, 8 female and 12 male. Mean age was 31 years.
3.2 Stimuli
We constructed 24 scenarios in which two agents moved around a 2D maze (shown in Fig. 2).
The maze always contained two potential object goals (a flower and a tree), and on 12 of the 24
scenarios it also contained a movable obstacle (a boulder). The scenarios were designed to satisfy
two criteria. First, scenarios were to have agents acting in ways that were consistent with more
than one hypothesis concerning their goals, with these ambiguities between goals sometimes being
resolved as the scenario developed (see Fig. 2(a)). This criterion was included to test our model's
predictions based on ambiguous action sequences. Second, scenarios were to involve a variety of
perceptually distinct plans of action that might be interpreted as issuing from helping or hindering
goals. For example, one agent pushing another toward an object goal, removing an obstacle from
the other agent's path, and moving aside for the other agent (all of which featured in our scenarios)
could all be interpreted as helping. This criterion was included to test our formalization of social
goals as based on an abstract relation between reward functions. In our model, social agents act to
maximize or minimize the reward of the other agent, and the precise manner in which they do so
will vary depending on the structure of the environment and their initial positions.
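That reward relation can be written down directly. The sign convention, the β_o weighting, and the own-action cost in this sketch are illustrative assumptions consistent with the description above, not the paper's exact utility function.

```python
def social_reward(other_reward, beta_o, helping, action_cost):
    """A helping agent gains from the other agent's reward; a hindering
    agent gains from its negation. Both pay their own action cost."""
    sign = 1.0 if helping else -1.0
    return sign * beta_o * other_reward - action_cost

# Hypothetical numbers: the other agent stands to gain a reward of 2.5.
r_help = social_reward(other_reward=2.5, beta_o=3.0, helping=True, action_cost=1.0)
r_hinder = social_reward(other_reward=2.5, beta_o=3.0, helping=False, action_cost=1.0)
```

Because the social reward is defined only through this abstract relation, the concrete helping (or hindering) behavior that maximizes it depends on the maze layout, as the text notes.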
Figure 2: Example interactions between Small and Large agents. Agents start as in Frame 1 and progress
through the sequence along the corresponding colored paths. Each frame after Frame 1 corresponds to a probe
point at which the video was cut off and subjects were asked to judge the agents' goals. (a) The Large agent
moves over each of the goal objects (Frames 1-7) and so the video is initially ambiguous between his having
an object goal and a social goal. Disambiguation occurs from Frame 8, when the Large agent moves down and
blocks the Small agent from continuing his path up to the object goal. (b) The Large agent moves the boulder,
unblocking the Small agent's shortest path to the flower (Frames 1-6). Once the Small agent moves into
same room (6), the Large agent pushes him onto the flower and allows him to rest there (8-16).
Each scenario featured two different agents, which we call "Small" and "Large". Large agents were
visually bigger and are able to shift both movable obstacles and Small agents by moving directly
into them. Large agents never fail in their actions, e.g. when they try to move left, they indeed move
left. Small agents were visually smaller, and could not shift agents or boulders. In our scenarios,
the actions of Small agents failed with a probability of about 0.4. Large agents correspond to the
"complex agents" introduced in Section 2, in that they could have either object-directed goals or social goals (helping or hindering the Small agent). Small agents correspond to "simple agents" and
could have only object goals.
We produced videos of 16 frames in length, displaying each scenario. We showed three snippets
from each video, which stopped some number of frames before the end. For example, the three
snippets of scenario 6 were cut off at frames 4, 7, and 8 respectively (see Fig. 2(a)). Subjects were
asked to make goal attributions at the end of both the snippets and the full 16-frame videos. Asking
subjects for goal attributions at multiple points in a sequence allowed us to track the change in their
judgments as evidence for particular goals accumulated. These cut-off or probe points were selected
to try to capture key events in the scenarios and so occurred before and after crucial actions that
disambiguated between different goals. Since each scenario was used to create 4 stimuli of varying
length, there was a total of 96 stimuli.
3.3 Procedure
Subjects were initially shown a set of familiarization videos of agents interacting in the maze, illustrating the structural properties of the maze-world (e.g. the actions available to agents and the possibility of moving obstacles) and the differences between Small and Large agents. The experimental
stimuli were then presented in four blocks, each containing 24 videos. Scenarios were randomized
within blocks across subjects. The left-right orientation of agents and goals was counterbalanced
across subjects. Subjects were told that each snippet would contain two new agents (one Small and
one Large) and this was highlighted in the stimuli by randomly varying the color of the agents for
each snippet. Subjects were told that agents had complete knowledge of the physical structure of the
maze, including the position of all goals, agents and obstacles. After each snippet, subjects made
a forced-choice for the goal of each agent. For the Large agent, they could select either of the two
social goals and either of the two object goals. For the Small agent, they could choose only from the
object goals. Subjects also rated their confidence on a 3-point scale.
3.4 Modeling
Model predictions were generated using Eq. 2, assuming uniform priors on goals, and were compared directly to subjects' judgments. In our experiments, the world was given by a 2D maze-world, and the state space included the set of positions that agents and objects can jointly occupy without overlapping. The set of actions included Up, Down, Left, Right and Stay, and we assume that c(S, A ∈ {Up, Down, Left, Right}) = 1 and c(S, Stay) = 0.1, to reflect the greater cost of moving than staying put. We set β to 2 and γ to 0.99, following [2].
For the other parameters (namely β_g, σ_g and β_o) we integrated over a range of values that provided a good statistical fit to our stimuli. For instance, some stimuli were suggestive of "field" goals rather than point goals, and marginalizing over σ_g allowed our model to capture this. Values for β_g ranged from 0.5 to 2.5, going from a weak to a strong reward. For σ_g we integrated over three possible values: 0.5, 2.5 and 10.5. These corresponded to "point" object goals (agent receives reward for being on the goal only), "room" object goals (agent receives the most reward for being on the goal and some reward for being in the same room as the goal) and "full space" object goals (agent receives reward at any point in proportion to distance from goal). Values for β_o ranged from 1 to 9, from caring weakly about the other agent to caring about it to a high degree.
We compared the Inverse Planning model to a model that made inferences about goals based on
simple visual cues, inspired by previous heuristic- or perceptually-based accounts of human action
understanding of similar 2D animated displays [3, 11]. Our aim was to test whether accurate goal
inferences could be made simply by recognizing perceptual cues that correlate with goals, rather than
by inverting a rational model. We constructed our "Cue-based" model by selecting ten visual cues (listed below), including nearly all the applicable cues from the existing cue-based model described in [3], leaving out those that do not apply to our stimuli, such as heading, angle and acceleration. We then formulated an inference model based on these cues by fitting a multinomial logistic regression to subjects' average judgments. The set of cues was as follows: (1) the distance moved on the last timestep, (2) the change in movement distance between successive timesteps, (3+4) the geodesic distance to goals 1 and 2, (5+6) the change in distance to goals 1 and 2, (7) the distance to Small, (8) the change in distance to Small, (9+10) the distance of Small to goals 1 and 2.
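A minimal version of that regression step fits a softmax (multinomial logistic) map from cue vectors to goal distributions by gradient descent. The toy cue matrix and targets below are placeholders, since the actual stimulus features are not reproduced here.

```python
import numpy as np

def softmax(Z):
    Z = Z - Z.max(axis=1, keepdims=True)   # stabilize exponentials
    P = np.exp(Z)
    return P / P.sum(axis=1, keepdims=True)

def fit_multinomial_logit(X, Y, lr=0.1, n_iters=2000):
    """Fit a softmax regression from cue features X (n x d) to target goal
    distributions Y (n x k) by gradient descent on the cross-entropy loss."""
    W = np.zeros((X.shape[1], Y.shape[1]))
    for _ in range(n_iters):
        P = softmax(X @ W)
        W -= lr * X.T @ (P - Y) / X.shape[0]   # cross-entropy gradient
    return W

# Toy stand-in data: 3 cue vectors, each mapped to one of 3 goals.
X = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
Y = np.eye(3)
W = fit_multinomial_logit(X, Y)
P_hat = softmax(X @ W)   # fitted P(goal | cues)
```

In the actual model the inputs would be the ten cues per stimulus and the targets the subjects' average goal judgments.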
3.5 Results
Because our main interest is in judgments about the social goals of representationally complex agents, we analyzed only subjects' judgments about the Large agents. Each subject judged a total of 96 stimuli, corresponding to 4 time points along each of 24 scenarios. For each of these 96 stimuli, we computed an empirical probability distribution representing how likely a subject was to believe that the Large agent had each of the four goals "flower", "tree", "help", or "hinder", by averaging judgments for that stimulus across subjects, weighted by subjects' confidence ratings. All analyses then compared these average human judgments to the predictions of the Inverse Planning and Cue-based models.
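The confidence-weighted averaging step can be sketched as follows; the subject choices and ratings below are invented for illustration.

```python
import numpy as np

GOALS = ["flower", "tree", "help", "hinder"]

def judgment_distribution(choices, confidences):
    """Empirical distribution over goals for one stimulus: each subject's
    forced choice is weighted by their 1-3 confidence rating."""
    weights = np.zeros(len(GOALS))
    for choice, conf in zip(choices, confidences):
        weights[GOALS.index(choice)] += conf
    return weights / weights.sum()

# Three hypothetical subjects judging one stimulus.
dist = judgment_distribution(["help", "help", "hinder"], confidences=[3, 2, 1])
```

The result is one probability vector per stimulus, which is what the model posteriors are compared against.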
Across all goal types, the overall linear correlations between human judgments and predictions
from the two models appear similar: r = 0.83 for the Inverse Planning model, and r = 0.77
for the Cue-based model. Fig. 3 shows these correlations broken down by goal type, and reveals
significant differences between the models on social versus object goals. The Inverse Planning
model correlates well with judgments for all goal types: r = 0.79, 0.77, 0.86, 0.81 for flower, tree,
helping, and hindering respectively. The Cue-based model correlates well with judgments for object
goals (r = 0.85, 0.90 for flower, tree), indeed slightly better than the Inverse Planning model, but much
less well for social goals (r = 0.67, 0.66 for helping, hindering). The most notable differences come
on the left-hand sides of the bottom panels in Fig. 3. There are many stimuli for which people are
very confident that the Large agent is either helping or hindering, and the Inverse Planning model
is similarly confident (bar heights near 1). The Cue-based model, in contrast, is unsure: it assigns
roughly equal probabilities of helping or hindering to these cases (bar heights near 0.5). In other
words, the Cue-based model is effective at inferring simple object goals of maze-world agents, but
is generally unable to distinguish between the more complex goals of helping and hindering. When
constrained to simply differentiating between social and object goals both models succeed equally
(r = 0.84), where in the Cue-based model this is probably because moving away from the object
goals serves as a good cue to separate these categories. However, the Inverse Planning model is more
successful in differentiating the right goal within social goals (r = 0.73 for the Inverse Planning
model vs. r = 0.44 for the Cue-based model).
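Each reported r above is a plain Pearson correlation between the 96 stimulus-level model probabilities and the 96 average human judgments. A self-contained version follows; the two short vectors are invented placeholders, not the study's data.

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc @ yc) / np.sqrt((xc @ xc) * (yc @ yc)))

human = np.array([0.9, 0.1, 0.8, 0.2, 0.7])    # hypothetical judgments
model = np.array([0.85, 0.2, 0.75, 0.1, 0.6])  # hypothetical model posteriors
r = pearson_r(human, model)
```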
Several other general trends in the results are worth noting. The Inverse Planning model fits very
closely with the judgments subjects make after the full 16-frame videos. On 23 of the 24 scenarios,
humans and the Inverse Planning model have the highest posterior / rating in the same goal (r =
0.97, contrasted with r = 0.77 for the Cue-based model). Note that in the one scenario for which
humans and the Inverse Planning model disagreed after observing the full sequence, both humans
and the model were close to being ambivalent about whether the Large agent was hindering or interested
in the flower. There is also evidence that the reasonably good overall correlation for the Cue-based
model is partially due to overfitting; this should not be surprising given how many free parameters
the model has. We divided scenarios into two groups depending on whether a boulder was moved
around in the scenario, as movable boulders increase the range of variability in helping and hindering
action sequences. When trained on the "no boulder" cases, the Cue-based model correlates poorly with subjects' average judgments on the "boulder" cases: r = 0.42. The same failure of transfer occurs when the Cue-based model is trained on the "boulder" cases and tested on the "no boulder" cases: r = 0.36. This is consistent with our general concern that a Cue-based model incorporating
many free parameters may do well when tailored to a particular environment, but is not likely to
generalize well to new environments. In contrast, the Inverse Planning model captures abstract
relations between the agents and their possible goals and so lends itself to a variety of environments.
[Figure 3 plots: per-goal-type bar panels comparing model predictions with human judgments; panel correlations (a) Inverse Planning: r = 0.79 (flower), 0.77 (tree), 0.86 (help), 0.81 (hinder); (b) Cue-based: r = 0.85 (flower), 0.90 (tree), 0.67 (help), 0.66 (hinder).]
Figure 3: Correlations between human goal judgments and predictions of the Inverse Planning model (a) and the Cue-based model (b), broken down by goal type. Bars correspond to bins of stimuli (out of 96 total) on which the average human judgment for the probability of that goal was within a particular range; the midpoint of each bin's range is shown on the x-axis labels. The height of each bar shows the model's average probability judgment for all stimuli in that bin. Linear correlations between the model's goal probabilities and average human judgments for all 96 stimuli are given in the y-axis labels.
The inability of the heuristic model to distinguish between helping and hindering is illustrated by
the plots in Fig. 4. In contrast, both the Inverse Planning model and the human subjects are often
very confident that an agent is helping and not hindering (or vice versa).
Fig. 4 also illustrates a more general finding: the Inverse Planning model captures most of the major qualitative shifts (e.g. shifts resulting from disambiguating sequences) in subjects' goal attributions. Figure 4 displays mean human judgments on four scenarios. Probe points (i.e. points within the sequences at which subjects made judgments) are indicated on the plots, and human data are compared with predictions from the Inverse Planning model and the Cue-based model.
On scenario 6 (depicted in Fig. 2(a) but with goals switched), both the Inverse Planning model and
human subjects recognize the movement of the Large agent one step off the flower (or the tree in
Fig. 2(b)) as strong evidence that Large has a hindering goal. The Cue-based model responds in the
same way but with much less confidence in hindering. Even after 8 subsequent frames of action it
is unable to decide in favor of hindering over helping.
While the Inverse Planning model and subjects almost always agree by the end of a sequence, they
sometimes disagree at early probe points. In scenario 5, both agents start off in the bottom-left room,
but with the Small agent right at the entrance to the top-left room. As the Small agent tries to move
towards the flower (the top-left goal), the Large agent moves up from below and pushes Small one
step towards the flower before moving off to the right to the tree. People interpret the Large agent's
action as strong evidence for helping, in contrast with the Inverse Planning model. For the model,
because Small is so close to his goal, Large could just as well stay put and save his own action costs.
Therefore his movement upwards is not evidence of helping.
[Figure 4 plots: four columns of panels for Scenarios 6, 12, 19 and 24; rows show P(Goal|Trial) over frames 2-16 for average human ratings (People), the Inverse Planning model at probe points, the Inverse Planning model at all points, and the Visual Cue model, with curves for the goals flower, tree, help and hinder.]
Figure 4: Example data and model predictions. Probe points are marked as black circles. (a) Average subject
ratings with standard error bars. (b) Predictions of Inverse Planning model interpolated from cut points. (c)
Predictions of Inverse Planning model for all points in the sequence. (d) Predictions of Cue-based model.
4 Conclusion
Our goal in this paper was to address two challenges. The first was to provide a formalization of
social goal attribution incorporated into a general theory-based model for goal attribution. This
model had to enable the inference that A is helping or hindering B from interactions between A
and B but without prior knowledge of either agent's goal, and to account for the range of behaviors
that humans judge as evidence of helping or hindering. The second challenge was for the model to
perform well on a demanding inference task in which social goals must be inferred from very few
observations without directly observable evidence of agents' goals.
The experimental results presented here go some way to meeting these challenges. The Inverse
Planning model classified a diverse range of agent interactions as helping or hindering in line with
human judgments. This model also distinguished itself against a model based solely on simple
perceptual cues. It produced a closer fit to humans for both social and nonsocial goal attributions,
and was far superior to the visual cue model in discriminating between helping and hindering.
These results suggest various lines of further research. One task is to augment this formal model of
helping and hindering to capture more of the complexity behind human judgments. On the Inverse
Planning model, A will act to advance B's progress only if there is some chance of B actually
receiving a nontrivial amount of reward in a future state. However, people often help others towards
a goal even if they think it very unlikely that the goal will be achieved. This aspect of helping could
be explored by supposing that the utility of a helping agent depends not just on another agent's
reward function but also his value function.
Acknowledgments: This work was supported by the James S. McDonnell Foundation Causal Learning Collaborative Initiative, ARO MURI grant W911NF-08-1-0242, AFOSR MURI grant FA9550-07-1-0075 and the
NSF Graduate Fellowship (CLB).
References
[1] C. L. Baker, N. D. Goodman, and J. B. Tenenbaum. Theory-based social goal inference. In Proceedings of the Thirtieth Annual Conference of the Cognitive Science Society, 2008.
[2] C. L. Baker, J. B. Tenenbaum, and R. R. Saxe. Bayesian models of human action understanding. In Advances in Neural Information Processing Systems, volume 18, pages 99–106, 2006.
[3] P. W. Blythe, P. M. Todd, and G. F. Miller. How motion reveals intention: categorizing social interactions. In G. Gigerenzer, P. M. Todd, and the ABC Research Group, editors, Simple heuristics that make us smart, pages 257–286. Oxford University Press, New York, 1999.
[4] D. C. Dennett. The Intentional Stance. MIT Press, Cambridge, MA, 1987.
[5] G. Gergely, Z. Nádasdy, G. Csibra, and S. Bíró. Taking the intentional stance at 12 months of age. Cognition, 56:165–193, 1995.
[6] J. K. Hamlin, K. Wynn, and P. Bloom. Social evaluation by preverbal infants. Nature, 450:557–560, 2007.
[7] F. Heider and M. A. Simmel. An experimental study of apparent behavior. American Journal of Psychology, 57:243–249, 1944.
[8] V. Kuhlmeier, K. Wynn, and P. Bloom. Attribution of dispositional states by 12-month-olds. Psychological Science, 14(5):402–408, 2003.
[9] J. Schultz, K. Friston, D. M. Wolpert, and C. D. Frith. Activation in posterior superior temporal sulcus parallels parameter inducing the percept of animacy. Neuron, 45:625–635, 2005.
[10] W. Yoshida, R. J. Dolan, and K. J. Friston. Game theory of mind. PLoS Computational Biology, 4(12):1–14, 2008.
[11] J. M. Zacks. Using movement and intentions to understand simple events. Cognitive Science, 28:979–1008, 2004.
[12] P. D. Tremoulet and J. Feldman. The influence of spatial context and the role of intentionality in the interpretation of animacy from motion. Perception and Psychophysics, 29:943–951, 2006.
Semi-supervised Learning and Resting State Activity
Matthew B. Blaschko
Visual Geometry Group
Department of Engineering Science
University of Oxford
[email protected]
Jacquelyn A. Shelton
Max Planck Institute for Biological Cybernetics
Fakultät für Informations- und Kognitionswissenschaften
Universität Tübingen
[email protected]
Andreas Bartels
Max Planck Institute for Biological Cybernetics
Centre for Integrative Neuroscience, Universität Tübingen
[email protected]
Abstract
Resting state activity is brain activation that arises in the absence of any task, and
is usually measured in awake subjects during prolonged fMRI scanning sessions
where the only instruction given is to close the eyes and do nothing. It has been
recognized in recent years that resting state activity is implicated in a wide variety of brain function. While certain networks of brain areas have different levels
of activation at rest and during a task, there is nevertheless significant similarity between activations in the two cases. This suggests that recordings of resting
state activity can be used as a source of unlabeled data to augment discriminative regression techniques in a semi-supervised setting. We evaluate this setting
empirically yielding three main results: (i) regression tends to be improved by
the use of Laplacian regularization even when no additional unlabeled data are
available, (ii) resting state data seem to have a similar marginal distribution to that
recorded during the execution of a visual processing task implying largely similar
types of activation, and (iii) this source of information can be broadly exploited to
improve the robustness of empirical inference in fMRI studies, an inherently data
poor domain.
1 Introduction
In this work we study the use of resting state activity for the semi-supervised analysis of human
fMRI studies. We wish to use resting state activity as an additional source of unlabeled data in
a semi-supervised regression setting. We analyze the weights of a trained regressor to infer brain
regions that are implicated in visual processing tasks. As the recording of human fMRI data is
constrained by limits on the time a subject can safely remain in a scanner, and by the high demand
for high-resolution scanning facilities, it is important to fully utilize available data. One source of
such additional data is resting state activity, the brain activation that arises in the absence of any task.
This data has been the subject of many studies in recent years, and has the important advantage of not
being biased by a specific task. We show in this work that the marginal distribution of resting state
activity is suitable to improve regression performance when employed for semi-supervised learning.
In neuroscience there has been a recent surge of interest in analyzing brain activity in more natural,
complex settings, e.g. with volunteers viewing movies, in order to gain insight in brain processes
and connectivity underlying more natural processing. The problem has been approached from different routes: linear regression was used to identify brain areas correlating with particular labels in
the movie [2], the perceived content was inferred based on brain activity [23], data-driven methods
were used to subdivide the brain into units with distinct response profiles [1], and correlation across
subjects was used to infer stimulus-driven brain processes at different timescales [24]. Several pattern recognition techniques have previously been applied to fMRI data of brains, including support
vector machines and Fisher linear discriminant analysis [26, 27, 29]. In [22], kernel canonical correlation analysis (KCCA) was applied to fMRI data from human subjects. We have recently applied a
semi-supervised extension of KCCA to human fMRI data [32] where the unlabeled data source was
given by the subjects viewing a movie for which the labels were not known. In this work, we explore the more realistic setting in which unlabeled data are available as a side product of other fMRI
studies. This enables the more efficient use of available data, and obviates the necessity to waste
scanner time and human labeling effort in order to produce sufficiently large data sets to achieve
satisfactory results.
In Section 2 we discuss the generation and significance of resting state activity. We then discuss
the statistical assumptions implicit in semi-supervised learning in Section 3. We present the experimental setup for data acquisition in Section 4, and discuss the semi-supervised regression model
in Section 5. In Section 6, we show empirically that resting state activity is an effective source of
unlabeled data for semi-supervised learning.
2 Resting State Activity
Resting state activity has attracted the attention of neuroscientists now for over a decade [8, 20].
It is defined as brain activation that arises in the absence of any task, and it is usually measured
in awake subjects during prolonged fMRI scanning sessions, where no other instructions are given
than to close the eyes and to do nothing. The basic idea is that spontaneous fluctuations of neural
activity in the brain may reveal some fundamental characteristics of brain function. This may include
functional aspects, but also structural ones.
For example, certain networks of areas have been shown to be more active at rest than during the
execution of a task, leading to the hypothesis that these areas may be involved in maintaining the default state of the brain, performing mental house-keeping functions, such as monitoring one's own bodily
states or the self [9, 30, 31], or being involved in intrinsic as opposed to extrinsic (i.e. stimulus-driven) tasks [17]. Additionally, spontaneous fluctuations of brain activity in particular brain regions have been shown to be directly correlated with metabolic activity and also with behavioural
task performance, thus providing evidence that these fluctuations do not merely reflect artefacts of
vascular perfusion, heart rate or breathing [9, 7]. Instead, evidence suggests that spontaneous activity changes reflect to some extent neural activity that may account for trial-to-trial variability of
human behaviour [14, 28].
Resting state activity however also has structural implications, in that temporal correlations between
spatially separate regions (functional connectivity) may be indicative of underlying neural communication between them, which in turn may be mediated by anatomical connections. Several studies
have shown that homologue regions of the hemispheres (e.g. left and right motor cortex, Wernicke's regions, etc.) have high temporal correlations during rest [8, 3]. Also networks known to be anatomically connected, such as those belonging to the language network (Broca's area, Wernicke's area, Geschwind's territory) within a given hemisphere show strong correlations during resting state, indicating that spontaneous activity (or activity driven by mental imagery, etc.) in one region may
affect others that are directly connected with it [21, 3]. Some recent studies also attempt to reveal
that resting state connectivity directly relates to structural activity as revealed using diffusion tensor
imaging [33, 19]. Finally, alterations in resting state activity patterns have recently been shown to
be diagnostic for clinical conditions such as neuropsychiatric disorders [18], and have been shown
to alter with increasing age [34].
However, the analysis of resting state activity poses a challenge, as it is not stimulus-driven, and is
therefore difficult to analyze or to reveal using hypothesis-driven analysis methods. One common
approach has been to reveal functional networks and their connectivity by measuring the temporal
correlations of a seed region with the remaining voxels of the brain [8, 21, 3, 33, 19]. Another
approach has been to apply data-driven spatio-temporal clustering methods such as independent
component analysis (ICA) to reveal distinct functional areas or networks at rest [35, 1]. The overwhelming evidence of these studies shows that groups of voxels, but also widespread networks of
cortical areas that are co-engaged during task performances are also consistently co-activated during
rest [35, 1, 12].
We provide an alternative, computationally driven approach to assess whether and to what extent
externally driven functional networks coincide with spontaneous fluctuations during rest. We stimulated volunteers using natural movies, and measured resting state activity during the same session
in separate runs that each lasted 20 min. Prior work has shown that natural movie viewing leads not
only to wide-spread cortical activation, but also to a higher functional separation of distinct networks
and areas compared to that obtained using traditional stimulation with controlled stimuli [23, 1, 17].
This is most likely so because distinct cortical regions responded each to distinct features occurring
in the movie, thus revealing the functional division of labor in cortex [2, 4, 23].
In subsequent sections we show that semi-supervised learning algorithms improve when resting state
data are added to aid feature regression of movie-viewing data. This improvement indicates that a
similar cortical structure underlies resting state data as that underlying movie-viewing data. These
results thus fall in line with prior work demonstrating consistency of resting state networks across
subjects [35, 12], and reveal that feature-driven activity during natural viewing induces a similar
functional clustering as that occurring during rest. Importantly however, this approach may also be
of other methodological interest, in that data obtained at rest may actually be used to augment the
performance of feature-driven regression of stimulus-driven data.
3 Semi-supervised Learning
Semi-supervised learning makes use of a combination of labeled and unlabeled training points in
order to better learn a mapping from an input space, X (in this case voxels recorded from fMRI),
to an output space, Y (variables recording viewing conditions). Discriminative models typically
attempt to infer a mapping f : X → Y based on properties of the conditional distribution p(y|x).
In order to incorporate training data in X for which no correspondence is known to Y, additional
assumptions must be made about the properties of the joint distribution over X × Y. This often gives
semi-supervised learning more of a generative flavor in that we assume some properties of the joint
distribution in order to better make use of the marginal distribution p(x) [11].
There are several closely related assumptions employed in the development of semi-supervised
learning algorithms, but we focus here on the manifold assumption [6]. We assume that our high
dimensional data lie on a low dimensional manifold, and that changes in p(y|x) vary slowly as measured by distances within the manifold. The additional unlabeled data in X allow us to better model
the manifold on which the data lie.
In the case of fMRI acquired data, we expect that brain activity follow certain common patterns of
activation. Furthermore, transitions between these patterns of activation will not be discontinuous.
We can therefore be fairly confident that the manifold assumption holds in principle. Of crucial importance, however, is that the distribution of the unlabeled samples not result in
a degenerate marginal distribution with respect to the discriminative task at hand, that is to say that
p(y|x) be slowly varying as measured by distances measured within the manifold estimated from
labeled and unlabeled samples from X .
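The smoothness requirement can be made concrete through the graph-Laplacian quadratic form used below: for a symmetric normalized Laplacian L built from a similarity matrix S with degrees d_i, f^T L f = (1/2) Σ_ij S_ij (f_i/√d_i − f_j/√d_j)², a weighted sum of squared differences between degree-normalized predictions at similar points, so penalizing it forces predictions to vary slowly along the estimated manifold. A small numpy sketch verifying this identity on toy data (the data and sizes are illustrative only, not from the study):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))   # six points, three input dimensions (toy data)
f = X @ rng.normal(size=3)    # predictions of some linear function at the points

# Gaussian similarity and symmetric normalized Laplacian L = I - D^{-1/2} S D^{-1/2}
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
S = np.exp(-sq / np.median(sq[sq > 0]))               # median-bandwidth kernel
np.fill_diagonal(S, 0.0)
d = S.sum(axis=1)                                     # degrees (row sums of S)
L = np.eye(len(X)) - S / np.sqrt(np.outer(d, d))

# f^T L f equals the pairwise smoothness penalty
quad = f @ L @ f
g = f / np.sqrt(d)
pairwise = 0.5 * (S * (g[:, None] - g[None, :]) ** 2).sum()
assert np.isclose(quad, pairwise)
```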
Theoretical accounts of semi-supervised learning frequently assume that all samples from X be
drawn i.i.d. In practice, in a data poor domain, we may have to resort to a source of unlabeled data
that is derived by a (slightly) different process than that of the labeled samples. As resting state data
is a representative byproduct of the experimental design of fMRI studies, we explore the empirical
performance of its employment as a source of unlabeled data. This gives us vital insight into whether
the distribution of brain states is sufficiently similar to that of subjects who are performing a visual
processing task, and suggests a general and powerful improvement to the design of fMRI studies by
making use of this ready source of unlabeled information.
4 Data Acquisition
A Siemens 3T TIM scanner was used to acquire the fMRI data of 5 human volunteers; each recording consisted
of 350 time slices of 3-dimensional fMRI brain volumes. Time-slices were separated by 3.2 s (TR),
each with a spatial resolution of 46 slices (2.6 mm width, 0.4 mm gap) with 64x64 pixels of 3x3 mm,
resulting in a spatial resolution of 3x3x3 mm. Each subject watched 2 movies of 18.5 min length,
wherein one movie had labels indicating the continuous content of the movie (i.e. degree of visual
contrast, or the degree to which a face was present, etc.) and the other remained unlabeled. The
subjects additionally were recorded during a resting state of the same length of time. The imaging
data were pre-processed using standard procedures with the Statistical Parametric Mapping (SPM)
toolbox before analysis [15]. Included was a slice-time correction to compensate for acquisition
delays between slices, a spatial realignment to correct for small head-movements, a spatial normalization to the SPM standard brain space (near MNI), and spatial smoothing using a Gaussian filter
of 12 mm full width at half maximum (FWHM). Subsequently, images were skull-and-eye stripped
and the mean of each time-slice was set to the same value (global scaling). A temporal high-pass
filter with a cut-off of 512 s was applied, as well as a low-pass filter with the temporal properties of
the hemodynamic response function (hrf), in order to reduce temporal acquisition noise.
For the movie with corresponding labels, the label time-series were obtained using two separate
methods. First by using computer frame-by-frame analysis of the movie [4], and second using
subjective ratings averaged across an independent set of five human observers [1]. The computer-derived labels indicated luminance change over time (temporal contrast) and visual motion energy (i.e.
the fraction of temporal contrast that can be explained by motion in the movie). The human-derived
labels indicated the intensity of subjectively experienced color, and the degree to which faces and
human bodies were present in the movie. In prior studies, each of these labels had been shown
to correlate with brain activity in particular and distinct sets of areas specialized to process the
particular label in question [1, 4].
5 Regression Model
We have applied a semi-supervised Laplacian regularized ridge regression framework to learn our
discriminant function. We assume multivariate data xi ? X with associated labels yi ? R, for
i = 1, . . . , n, although the setting is directly extensible to arbitrary input and output domains [10].
Ridge regression is classically formulated as

\operatorname{argmin}_w \sum_i (y_i - \langle x_i, w \rangle)^2 + \lambda \|w\|^2 ,  (1)
where x and y are assumed to be zero mean [25]. This is equivalent to maximizing (Tikhonov
regularized) correlation between y and the projection of x onto w [16]. In order to extend this
to the semi-supervised setting [11], we assume the manifold assumption and employ Laplacian
regularization [37, 38, 5, 36, 6]. We assume that we have p_x additional unlabeled training samples and use the variable m_x = n + p_x for notational convenience. We denote the design matrix of labeled data as X and that of labeled and unlabeled data as \hat{X}. We can now write our Laplacian regularized objective function as

\operatorname{argmin}_w (y - Xw)^T (y - Xw) + \lambda \|w\|^2 + \frac{\gamma}{m_x^2} w^T \hat{X}^T L \hat{X} w ,  (2)
where L is an empirical graph Laplacian [6].
The two regularization parameters, λ and γ, are set using a model selection step. We have employed
a variant of the model selection used in [32], which employs a grid search to maximize the difference
in objective functions between a randomized permutation of the correspondences between x and y
and the unpermuted data. We have used a symmetric normalized graph Laplacian where the weights
are given by a Gaussian function with the bandwidth set to the median distance between training
data points,

L = I - D^{-1/2} S D^{-1/2} ,  (3)
where S is a similarity matrix and D is a diagonal matrix whose entries are the row sums of S.
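Since the objective in Equation (2) is quadratic in w, setting its gradient to zero gives the closed-form minimizer w = (X^T X + λI + (γ/m_x²) X̂^T L X̂)^{-1} X^T y. The following numpy sketch implements this estimator as we read the formulation; the function names and toy data are ours, not from the paper:

```python
import numpy as np

def normalized_laplacian(Z):
    """Symmetric normalized Laplacian of a Gaussian-similarity graph,
    with the bandwidth set by the median pairwise squared distance."""
    sq = ((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1)
    S = np.exp(-sq / np.median(sq[sq > 0]))
    np.fill_diagonal(S, 0.0)
    d = S.sum(axis=1)
    return np.eye(len(Z)) - S / np.sqrt(np.outer(d, d))

def laplacian_ridge(X, y, X_unlab, lam, gam):
    """Closed-form minimizer of Eq. (2):
    w = (X^T X + lam I + (gam / m_x^2) Xh^T L Xh)^{-1} X^T y."""
    Xh = np.vstack([X, X_unlab])          # labeled + unlabeled design matrix
    m_x = len(Xh)
    L = normalized_laplacian(Xh)
    A = X.T @ X + lam * np.eye(X.shape[1]) + (gam / m_x**2) * (Xh.T @ L @ Xh)
    return np.linalg.solve(A, X.T @ y)

# toy illustration: few labeled scans, many unlabeled "resting state" scans
rng = np.random.default_rng(1)
w_true = rng.normal(size=8)
X = rng.normal(size=(20, 8))
y = X @ w_true + 0.1 * rng.normal(size=20)
X_rest = rng.normal(size=(100, 8))        # stands in for unlabeled resting data
w = laplacian_ridge(X, y, X_rest, lam=1.0, gam=10.0)
```

Setting gam = 0 recovers ordinary ridge regression (Equation (1)); the choice of X_unlab is what distinguishes the semi-supervised variants evaluated below.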
We have primarily chosen this regression model for its simplicity. Provided the manifold assumption holds for our source of data, and that the conditional distribution, p(y|x), is slowly varying
as measured by the manifold estimated from both labeled and unlabeled data, we can expect that
semi-supervised Laplacian regularization will improve results across a range of loss functions and
output spaces.
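The permutation-based selection of the regularization parameters can be sketched as follows. For brevity this illustrative reading of the procedure (ours, not code from [32]) searches only the Tikhonov parameter of Equation (1) with plain ridge regression; the same scheme runs over a two-dimensional grid of both parameters for the Laplacian regularized objective:

```python
import numpy as np

def ridge_fit(X, y, lam):
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

def ridge_obj(w, X, y, lam):
    """Value of the regularized objective in Eq. (1)."""
    r = y - X @ w
    return r @ r + lam * (w @ w)

def select_lam(X, y, grid, rng):
    """Pick the parameter maximizing the objective gap between a random
    permutation of the label correspondences and the unpermuted data."""
    y_perm = rng.permutation(y)
    gaps = [ridge_obj(ridge_fit(X, y_perm, lam), X, y_perm, lam)
            - ridge_obj(ridge_fit(X, y, lam), X, y, lam)
            for lam in grid]
    return grid[int(np.argmax(gaps))]

rng = np.random.default_rng(4)
X = rng.normal(size=(30, 5))
y = X @ rng.normal(size=5) + 0.1 * rng.normal(size=30)
grid = [0.01, 0.1, 1.0, 10.0]
lam = select_lam(X, y, grid, rng)
```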
Table 1: Mean holdout correlations for motion in the five subjects across all experiments. For
a description of the experiments, see Section 5. In all cases, semi-supervision from resting state
activity (Exp C) improves over regression using only fully labeled data (Exp A).
        Sub 1          Sub 2          Sub 3          Sub 4          Sub 5
Exp A   -0.008 ± 0.12  -0.08 ± 0.07   -0.08 ± 0.04   -0.06 ± 0.07   -0.08 ± 0.08
Exp B   -0.02 ± 0.17   -0.03 ± 0.10    0.01 ± 0.09   -0.02 ± 0.04   -0.03 ± 0.08
Exp C    0.12 ± 0.06    0.10 ± 0.10    0.17 ± 0.14    0.012 ± 0.09   0.06 ± 0.12
Exp D    0.09 ± 0.09    0.10 ± 0.14    0.15 ± 0.15    0.04 ± 0.04    0.02 ± 0.11
Exp E    0.11 ± 0.10    0.11 ± 0.15    0.12 ± 0.09    0.11 ± 0.08    0.16 ± 0.15
Table 2: Mean holdout correlations for human body in the five subjects across all experiments. For
a description of the experiments, see Section 5. In all cases, semi-supervision from resting state
activity (Exp C) improves over regression using only fully labeled data (Exp A).
        Sub 1          Sub 2           Sub 3          Sub 4          Sub 5
Exp A   0.13 ± 0.17    -0.003 ± 0.12   0.09 ± 0.11    0.06 ± 0.14    0.12 ± 0.17
Exp B   0.16 ± 0.16     0.16 ± 0.22    0.28 ± 0.15    0.16 ± 0.20    0.21 ± 0.16
Exp C   0.36 ± 0.17     0.29 ± 0.16    0.42 ± 0.15    0.30 ± 0.12    0.40 ± 0.06
Exp D   0.34 ± 0.09     0.30 ± 0.14    0.38 ± 0.25    0.25 ± 0.11    0.35 ± 0.11
Exp E   0.35 ± 0.22     0.37 ± 0.17    0.45 ± 0.08    0.33 ± 0.14    0.43 ± 0.05
As our data consist of (i) recordings from a completely labeled movie, (ii) recordings from resting
state activity, and (iii) recordings from an unlabeled movie, we are able to employ several variants
of semi-supervision in the above framework:
- A: In this variant, we employ only fully supervised data and use the regression given by Equation (1).
- B: We also use only fully supervised data in this variant, but we employ Laplacian regularization in addition to Tikhonov regularization (Equation (2)).
- C: We introduce semi-supervision from resting state activity.
- D: In this variant, semi-supervision comes from the unlabeled movie. This allows us to evaluate the effects of semi-supervision from data that are designed to be drawn from the same distribution as our labeled data.
- E: Finally, we combine the unlabeled data from both resting state activity and from the unlabeled movie.
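In terms of the model of Section 5, the five variants differ only in which unlabeled rows are stacked into the augmented design matrix (and, for variant A, whether the Laplacian penalty is used at all). A hypothetical helper makes the correspondence explicit; the names are ours:

```python
import numpy as np

def unlabeled_block(variant, X_rest, X_movie_unlab):
    """Unlabeled rows appended to the labeled design matrix per experiment.
    Variants A and B add no unlabeled rows (B still applies the Laplacian
    penalty, estimated from the labeled data alone)."""
    empty = np.empty((0, X_rest.shape[1]))
    return {
        "A": empty,
        "B": empty,
        "C": X_rest,                               # resting state activity
        "D": X_movie_unlab,                        # unlabeled movie
        "E": np.vstack([X_rest, X_movie_unlab]),   # both sources
    }[variant]
```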
6 Experimental Results
In order to evaluate the performance of the regression model with different semi-supervised variants,
we have performed five fold cross validation. For each fold, we measure the correlation between the
projected data and its associated labels. We have performed these experiments across five different
subjects with three different output variables. Table 1 shows the test correlations for all subjects
and experiments for the motion output variable, while Table 2 shows results for the human body
variable, and Table 3 for the language variable. Wilcoxon signed-rank tests have shown significant
Table 3: Mean holdout correlations for language in the five subjects across all experiments. For
a description of the experiments, see Section 5. In all cases, semi-supervision from resting state
activity (Exp C) improves over regression using only fully labeled data (Exp A).
        Sub 1          Sub 2          Sub 3          Sub 4          Sub 5
Exp A   0.10 ± 0.13    0.10 ± 0.10    0.11 ± 0.14    -0.03 ± 0.17   -0.03 ± 0.11
Exp B   0.15 ± 0.17    -0.05 ± 0.09   0.06 ± 0.23     0.14 ± 0.18    0.03 ± 0.14
Exp C   0.35 ± 0.10    0.15 ± 0.11    0.42 ± 0.03     0.07 ± 0.17    0.10 ± 0.13
Exp D   0.27 ± 0.17    0.29 ± 0.14    0.34 ± 0.20     0.08 ± 0.11   -0.03 ± 0.11
Exp E   0.34 ± 0.17    0.22 ± 0.15    0.30 ± 0.18     0.24 ± 0.15    0.07 ± 0.19
(a) Regression without Laplacian regularization. (b) Laplacian regularized solution. (c) Semi-supervised Laplacian regularized solution using resting state data.
Figure 1: Illustration of weight maps obtained for the visual motion feature in experiments A, B,
and D. Transverse slices are shown through a single subject's T1-weighted structural image with
superimposed weight-maps, colored in red for positive weights (left column), and colored in blue
for negative weights (right column). The positive weight maps (left column) reveal the motion
processing area V5/MT+, as well as posterior in the midline a part of peripheral early visual area
V1 (not labelled). The negative weight maps reveal a reduction of BOLD signal in the occipital
poles (the foveal representation of early visual areas V1-V3). Both results are in agreement with the
findings reported in a prior study [4].
(a) Regression without Laplacian regularization. (b) Laplacian regularized solution. (c) Semi-supervised Laplacian regularized solution using resting state data.
Figure 2: Illustration of weight maps for the human body feature. Weight maps (in red) are shown on transverse (left) and sagittal (right) brain sections of a single subject. Activity involves the object-responsive lateral occipital cortex (LOC) extending dorsally into a region responsive to human bodies,
dubbed extrastriate body area (EBA) [13]. The weights in all experiments are very strong for
this feature (see colorbar), and nearly no difference in the extent of activation is visible across
experiments.
improvement between ridge regression and semi-supervised Laplacian regularization with confidence > 95% for all variables. We also provide a qualitative evaluation of the results in the form
of a map of the significant weights onto slices shown through single subjects. Figure 1 shows the
weights for the motion variable, Figure 2 for the human body variable, and Figure 3 for the language
variable.
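The evaluation protocol (five-fold cross validation, reporting the holdout correlation between predicted and true labels, as in Tables 1-3) can be sketched as follows; plain ridge regression stands in for the fitted model here, and any of the variants A-E plugs in the same way:

```python
import numpy as np

def cv_correlations(X, y, fit, k=5, seed=0):
    """k-fold cross validation: fit on k-1 folds, return mean and standard
    deviation of the prediction-label correlation on the held-out folds."""
    idx = np.random.default_rng(seed).permutation(len(y))
    corrs = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        w = fit(X[train], y[train])
        corrs.append(np.corrcoef(X[fold] @ w, y[fold])[0, 1])
    return float(np.mean(corrs)), float(np.std(corrs))

# toy run with plain ridge regression (cf. experiment A)
rng = np.random.default_rng(3)
X = rng.normal(size=(50, 6))
y = X @ rng.normal(size=6) + 0.2 * rng.normal(size=50)
ridge = lambda X, y: np.linalg.solve(X.T @ X + np.eye(X.shape[1]), X.T @ y)
mean_r, std_r = cv_correlations(X, y, ridge)
```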
7 Discussion
One can observe several trends in Tables 1-3. First, we notice that the results for experiment A are
not satisfactory. Correlations appear to be non-existent or low, and show high variation across subjects. We conclude that the labeled training data alone are not sufficient to learn a reliable regressor
for these learning problems. The results in experiment B are mixed. For some subjects and variables
performance improved, but it is not consistent. We expect that this indicates non-linearity in the data,
but that the labeled data alone are not sufficient to accurately estimate the manifold. We see consistent improvement in experiment C over experiment A. This supports the primary hypothesis of this
work: that the marginal distribution of resting state activity in combination with that from the visual
(a) Regression without Laplacian regularization. (b) Laplacian regularized solution. (c) Semi-supervised Laplacian regularized solution using resting state data.
Figure 3: Illustration of weight maps obtained for the language feature across the different experiments. Weight maps (in red) are superimposed on sagittal, coronal and transverse sections of a
single subject's brain. The activation associated with this feature involved the superior temporal sulcus (STS), extending anteriorly to include parts of Wernicke's speech processing area, and posterior
and ventrally (increasing with experiments A, B and D) object-responsive region LOC, involved in
analyzing facial features (in accord with the findings in [2]).
processing task allows us to robustly estimate a manifold structure that improves regression performance. The results for experiment C and D are similar, with neither data source dominating the
other. As the unlabeled data for experiment D were generated specifically to match the distribution
of the labeled data, we conclude that resting state activity gives a similar increase in performance to
semi-supervised learning with i.i.d. data. Finally, the setup in experiment E, in which we use both sources of semi-supervised data, performs similarly on average to that in experiments C and D. We conclude that the two sources of unlabeled data may not hold complementary information, indicating that a wholesale replacement of one source by another is an effective strategy.
The feature-weight maps shown in Figures 1-3 were all in accord with established findings in neuroscience, in that distinct features such as visual motion, the perception of human bodies or of
language correlated with activation of distinct brain regions, such as V5+/MT+, the lateral occipital
complex (LOC) and the extrastriate body area (EBA), as well as regions of the STS and Wernicke's
area, respectively. These findings have now been established in studies using controlled stimuli, as
well as those showing movie-clips to volunteers [13, 2, 4, 23].
Here we asked whether using semi-supervised learning methods can improve a feature-driven analysis when adding data obtained in the resting state. The motivation for this stems from prior studies
that suggest a functionally relevant involvement of cortical regions during rest. Data-driven analyses of resting state activity reveals a similar functional architecture that can also be observed during
stimulus-driven activity, and which can be reproducibly found across subjects [12, 35]. In addition,
also the functional connectivity between distinct regions appears to be physiologically plausible at
rest [21, 8, 20], and in fact is very similar to the functional connectivity observed during viewing of
movies [3]. Taken together, these findings would suggest that resting state activity may in theory be
able to augment in a non-biased way datasets obtained in a functional setting. At the same time,
if resting state data were indeed found to augment results of feature-driven analyses, this would
form an important finding, as it would directly indicate that resting state activity indeed is similar
in its nature to that induced by stimulus-driven settings. Our findings indeed appear to show such
an effect, as is illustrated in Figures 1-3. For example, the activation of visual motion responsive
cortex V5+/MT+ clearly increased in experiments A-C. Note that this was not only reflected in the
positive weights, but also in the negative ones; in complete consistency with the findings reported
in [4] even the negative involvement of foveal visual representations with increase of visual motion
became amplified with the addition of resting state data. Similar findings concerned the cortical regions involved in the perception of language. However, this augmenting effect was not observed in
all subjects for all features. Figure 2, for example, shows a subject in whom the human body feature
obtained very high weights already in the most basic analysis, and no augmentation was apparent
in the weight maps for the more complex analyses, perhaps reflecting a saturation effect. Since the
resting state is not well-defined, it may also be that particular internal states, sleepiness, etc. would
not guarantee augmenting in all datasets.
All in all, however, our results show that adding resting state data can indeed augment findings obtained in stimulus-inducing settings. This method may therefore be useful for the increasing number
of imaging centres acquiring resting state data for completely different purposes, which may then be
used to augment functional data, entirely free of cost in terms of scan time. An even more promising
prospect however is that also the baseline or rest condition within stimulus-driven sessions may be
used to augment the results obtained in the stimulus conditions. This may be especially valuable,
since almost all imaging sessions contain baseline conditions, that are often not used for further
analysis, but take up considerable amount of scan time.
Apart from the above application-oriented considerations, our findings also provide new evidence that brain states during rest, which are difficult to characterize, indeed resemble those during exposure to complex, natural stimulation. Our approach is therefore an extension of prior attempts
to characterize the complex, rich, yet difficult to characterize brain activation during the absence of
externally driven stimulation.
8 Conclusions
In this work, we have proposed the use of resting state data as a source for the unlabeled component
of semi-supervised learning for fMRI studies. Experimental results show that one of the primary
assumptions of semi-supervised learning, the manifold assumption, holds well for this data, and
that the marginal distribution of unlabeled resting state data is observed to augment that of labeled
data to consistently improve regression performance. Semi-supervised Laplacian regularization is
a widely applicable regularization technique that can be added to many kinds of machine learning
algorithms. As we have shown that the basic assumptions of semi-supervised learning hold for
this kind of data, we expect that this approach would work on these other discriminant/regression
methods as well, including kernel logistic regression, support vector machines, and kernel canonical
correlation analysis.
As data acquisition and the manual labeling of stimulus data are expensive components of brain
imaging, the benefits of exploiting additional unlabeled data are clear. Resting state data is a promising source as there are no task specific biases introduced. In future work we intend to further study
the properties of the distribution of resting state activity. We also intend to pursue cross subject
studies. If brain activity is consistent across subjects for the specific task measured by a study, a
large cross subject sample of resting state data may be employed to improve results.
Acknowledgments
The first author is supported by the Royal Academy of Engineering through a Newton International
Fellowship. The second author is supported by an ACM-W scholarship.
References
[1] A. Bartels and S. Zeki. The chronoarchitecture of the human brain – natural viewing conditions reveal a time-based anatomy of the brain. NeuroImage, 22(1):419–433, 2004.
[2] A. Bartels and S. Zeki. Functional brain mapping during free viewing of natural scenes. Human Brain Mapping, 21(2):75–85, 2004.
[3] A. Bartels and S. Zeki. Brain dynamics during natural viewing conditions – a new guide for mapping connectivity in vivo. NeuroImage, 24(2):339–349, 2005.
[4] A. Bartels, S. Zeki, and N. K. Logothetis. Natural vision reveals regional specialization to local motion and to contrast-invariant, global flow in the human brain. Cereb. Cortex, pages bhm107+, July 2007.
[5] M. Belkin and P. Niyogi. Semi-supervised learning on Riemannian manifolds. Machine Learning, 56(1-3):209–239, 2004.
[6] M. Belkin, P. Niyogi, and V. Sindhwani. Manifold regularization: A geometric framework for learning from labeled and unlabeled examples. JMLR, 7:2399–2434, 2006.
[7] M. Bianciardi, M. Fukunaga, P. van Gelderen, S. G. Horovitz, J. A. de Zwart, and J. H. Duyn. Modulation of spontaneous fMRI activity in human visual cortex by behavioral state. NeuroImage, 45(1):160–168, 2009.
[8] B. Biswal, Z. F. Yetkin, V. M. Haughton, and J. S. Hyde. Functional connectivity in the motor cortex of resting human brain using echo-planar MRI. Magnetic Resonance in Medicine, 34(4):537–541, 1995.
[9] B. B. Biswal, J. Van Kylen, and J. S. Hyde. Simultaneous assessment of flow and BOLD signals in resting-state functional connectivity maps. NMR Biomed, 10(4-5):165–170, 1997.
[10] M. B. Blaschko, C. H. Lampert, and A. Gretton. Semi-supervised Laplacian regularization of kernel canonical correlation analysis. In ECML PKDD '08: Proceedings of the 2008 European Conference on Machine Learning and Knowledge Discovery in Databases I, pages 133–145. Springer-Verlag, 2008.
[11] O. Chapelle, B. Schölkopf, and A. Zien, editors. Semi-Supervised Learning. MIT Press, Cambridge, MA, 2006.
[12] J. S. Damoiseaux, S. A. R. B. Rombouts, F. Barkhof, P. Scheltens, C. J. Stam, S. M. Smith, and C. F. Beckmann. Consistent resting-state networks across healthy subjects. Proc Natl Acad Sci U S A, August 2006.
[13] P. E. Downing, Y. Jiang, M. Shuman, and N. Kanwisher. A cortical area selective for visual processing of the human body. Science, 293(5539):2470–2473, 2001.
[14] M. D. Fox, A. Z. Snyder, J. M. Zacks, and M. E. Raichle. Coherent spontaneous activity accounts for trial-to-trial variability in human evoked brain responses. Nature Neuroscience, 9(1):23–25, December 2005.
[15] K. Friston, J. Ashburner, S. Kiebel, T. Nichols, and W. Penny, editors. Statistical Parametric Mapping: The Analysis of Functional Brain Images. Academic Press, 2007.
[16] T. V. Gestel, J. A. K. Suykens, J. D. Brabanter, B. D. Moor, and J. Vandewalle. Kernel canonical correlation analysis and least squares support vector machines. In ICANN '01: Proceedings of the International Conference on Artificial Neural Networks, pages 384–389, London, UK, 2001. Springer-Verlag.
[17] Y. Golland, S. Bentin, H. Gelbard, Y. Benjamini, R. Heller, Y. Nir, U. Hasson, and R. Malach. Extrinsic and intrinsic systems in the posterior cortex of the human brain revealed during natural sensory stimulation. Cereb. Cortex, 17(4):766–777, 2007.
[18] M. Greicius. Resting-state functional connectivity in neuropsychiatric disorders. Current Opinion in Neurology, 24(4):424–430, August 2008.
[19] M. D. Greicius, K. Supekar, V. Menon, and R. F. Dougherty. Resting-state functional connectivity reflects structural connectivity in the default mode network. Cereb. Cortex, 19(1):72–78, 2009.
[20] R. Gur, L. Mozley, P. Mozley, S. Resnick, J. Karp, A. Alavi, S. Arnold, and R. Gur. Sex differences in regional cerebral glucose metabolism during a resting state. Science, 267(5197):528–531, 1995.
[21] M. Hampson, B. S. Peterson, P. Skudlarski, J. C. Gatenby, and J. C. Gore. Detection of functional connectivity using temporal correlations in MR images. Hum Brain Mapp, 15(4):247–262, April 2002.
[22] D. R. Hardoon, J. Mourão-Miranda, M. Brammer, and J. Shawe-Taylor. Unsupervised analysis of fMRI data using kernel canonical correlation. NeuroImage, 37(4):1250–1259, 2007.
[23] U. Hasson, Y. Nir, I. Levy, G. Fuhrmann, and R. Malach. Intersubject synchronization of cortical activity during natural vision. Science, 303(5664):1634–1640, 2004.
[24] U. Hasson, E. Yang, I. Vallines, D. J. Heeger, and N. Rubin. A hierarchy of temporal receptive windows in human cortex. J. Neurosci., 28(10):2539–2550, March 2008.
[25] T. Hastie, R. Tibshirani, and J. Friedman. The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2001.
[26] J. V. Haxby, M. I. Gobbini, M. L. Furey, A. Ishai, J. L. Schouten, and P. Pietrini. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539):2425–2430, 2001.
[27] J.-D. Haynes and G. Rees. Decoding mental states from brain activity in humans. Nature Reviews Neuroscience, 7(7):523–534, July 2006.
[28] C. Kelly, L. Uddin, B. Biswal, F. Castellanos, and M. Milham. Competition between functional brain networks mediates behavioral variability. NeuroImage, 39(1):527–537, January 2008.
[29] S.-P. Ku, A. Gretton, J. Macke, and N. K. Logothetis. Comparison of pattern recognition methods in classifying high-resolution BOLD signals obtained at high magnetic field in monkeys. Magnetic Resonance Imaging, 26(7):1007–1014, 2008.
[30] M. E. Raichle, A. M. MacLeod, A. Z. Snyder, W. J. Powers, D. A. Gusnard, and G. L. Shulman. A default mode of brain function. Proc Natl Acad Sci U S A, 98(2):676–682, January 2001.
[31] M. E. Raichle and A. Z. Snyder. A default mode of brain function: A brief history of an evolving idea. NeuroImage, 37(4):1083–1090, October 2007.
[32] J. Shelton, M. Blaschko, and A. Bartels. Semi-supervised subspace analysis of human functional magnetic resonance imaging data. Technical Report 185, Max Planck Institute for Biological Cybernetics, 2009.
[33] P. Skudlarski, K. Jagannathan, V. D. Calhoun, M. Hampson, B. A. Skudlarska, and G. Pearlson. Measuring brain connectivity: Diffusion tensor imaging validates resting state temporal correlations. NeuroImage, 43(3):554–561, 2008.
[34] M. C. Stevens, G. D. Pearlson, and V. D. Calhoun. Changes in the interaction of resting-state neural networks from adolescence to adulthood. Human Brain Mapping, 30(8):2356–2366, 2009.
[35] V. G. van de Ven, E. Formisano, D. Prvulovic, C. H. Roeder, and D. E. Linden. Functional connectivity as revealed by spatial independent component analysis of fMRI measurements during rest. Hum Brain Mapp, 22(3):165–178, July 2004.
[36] D. Zhou, O. Bousquet, T. N. Lal, J. Weston, and B. Schölkopf. Learning with local and global consistency. In Advances in Neural Information Processing Systems 16, 2004.
[37] X. Zhu and Z. Ghahramani. Learning from labeled and unlabeled data with label propagation. Technical
Report CMU-CALD-02-107, Carnegie Mellon University, 2002.
[38] X. Zhu, Z. Ghahramani, and J. Lafferty. Semi-supervised learning using gaussian fields and harmonic
functions. In International Conference on Machine Learning, pages 912?919, 2003.
from Shift-Invariant Kernels
Maxim Raginsky
Duke University
Durham, NC 27708
[email protected]
Svetlana Lazebnik
UNC Chapel Hill
Chapel Hill, NC 27599
[email protected]
Abstract
This paper addresses the problem of designing binary codes for high-dimensional
data such that vectors that are similar in the original space map to similar binary strings. We introduce a simple distribution-free encoding scheme based on
random projections, such that the expected Hamming distance between the binary codes of two vectors is related to the value of a shift-invariant kernel (e.g., a
Gaussian kernel) between the vectors. We present a full theoretical analysis of the
convergence properties of the proposed scheme, and report favorable experimental
performance as compared to a recent state-of-the-art method, spectral hashing.
1 Introduction
Recently, there has been a lot of interest in the problem of designing compact binary codes for
reducing storage requirements and accelerating search and retrieval in large collections of high-dimensional vector data [11, 13, 15]. A desirable property of such coding schemes is that they
should map similar data points to similar binary strings, i.e., strings with a low Hamming distance.
Hamming distances can be computed very efficiently in hardware, resulting in very fast retrieval of
strings similar to a given query, even for brute-force search in a database consisting of millions of
data points [11, 13]. Moreover, if code strings can be effectively used as hash keys, then similarity
searches can be carried out in sublinear time. In some existing schemes, e.g. [11, 13], the notion of
similarity between data points comes from supervisory information, e.g., two documents are similar
if they focus on the same topic or two images are similar if they contain the same objects. The
binary encoder is then trained to reproduce this "semantic" similarity measure. In this paper, we are
more interested in unsupervised schemes, where the similarity is given by Euclidean distance or by
a kernel defined on the original feature space. Weiss et al. [15] have recently proposed a spectral
hashing approach motivated by the idea that a good encoding scheme should minimize the sum of
Hamming distances between pairs of code strings weighted by the value of a Gaussian kernel between the corresponding feature vectors. With appropriate heuristic simplifications, this objective
can be shown to yield a very efficient encoding rule, where each bit of the code is given by the sign
of a sine function applied to a one-dimensional projection of the feature vector. Spectral hashing
shows promising experimental results, but its behavior is not easy to characterize theoretically. In
particular, it is not clear whether the Hamming distance between spectral hashing code strings converges to any function of the Euclidean distance or the kernel value between the original vectors as
the number of bits in the code increases.
In this paper, we propose a coding method that is similar to spectral hashing computationally, but
is derived from completely different considerations, is amenable to full theoretical analysis, and
shows better practical behavior as a function of code size. We start with a low-dimensional mapping
of the original data that is guaranteed to preserve the value of a shift-invariant kernel (specifically,
the random Fourier features of Rahimi and Recht [8]), and convert this mapping to a binary one
with similar guarantees. In particular, we show that the normalized Hamming distance (i.e., Hamming distance divided by the number of bits in the code) between any two embedded points sharply
concentrates around a well-defined continuous function of the kernel value. This leads to a Johnson-Lindenstrauss type result [4] which says that a set of any N points in a Euclidean feature space can
be embedded in a binary cube of dimension O(log N ) in a similarity-preserving way: with high
probability, the binary encodings of any two points that are similar (as measured by the kernel) are
nearly identical, while those of any two points that are dissimilar differ in a constant fraction of their
bits. Using entropy bounds from the theory of empirical processes, we also prove a stronger result
of this type that holds for any compact domain of $\mathbb{R}^D$, provided the number of bits is proportional
to the intrinsic dimension of the domain. Our scheme is completely distribution-free with respect to
the data: its structure depends only on the underlying kernel. In this, it is similar to locality sensitive
hashing (LSH) [1], which is a family of methods for deriving low-dimensional discrete representations of the data for sublinear near-neighbor search. However, our scheme differs from LSH in
that we obtain both upper and lower bounds on the normalized Hamming distance between any two
embedded points, while in LSH the goal is only to preserve nearest neighbors (see [6] for further discussion of the distinction between LSH and more general similarity-preserving embeddings). To the
best of our knowledge, our scheme is among the first random projection methods for constructing a
similarity-preserving embedding into a binary cube. In addition to presenting a thorough theoretical
analysis, we have evaluated our approach on both synthetic and real data (images from the LabelMe
database [10] represented by high-dimensional GIST descriptors [7]) and compared its performance
to that of spectral hashing. Despite the simplicity and distribution-free nature of our scheme, we
have been able to obtain very encouraging experimental results.
2 Binary codes for shift-invariant kernels
Consider a Mercer kernel $K(\cdot, \cdot)$ on $\mathbb{R}^D$ that satisfies the following for all points $x, y \in \mathbb{R}^D$:
(K1) It is translation-invariant (or shift-invariant), i.e., $K(x, y) = K(x - y)$.
(K2) It is normalized, i.e., $K(x - y) \le 1$ and $K(x - x) \equiv K(0) = 1$.
(K3) For any real number $\alpha \ge 1$, $K(\alpha x - \alpha y) \le K(x - y)$.
The Gaussian kernel $K(x, y) = \exp(-\gamma \|x - y\|^2/2)$ or the Laplacian kernel $K(x, y) = \exp(-\gamma \|x - y\|_1)$ are two well-known examples. We would like to construct an embedding $F^n$ of $\mathbb{R}^D$ into the binary cube $\{0, 1\}^n$ such that for any pair $x, y$ the normalized Hamming distance
$$\frac{1}{n} d_H(F^n(x), F^n(y)) \triangleq \frac{1}{n} \sum_{i=1}^n 1\{F_i(x) \neq F_i(y)\}$$
between $F^n(x) = (F_1(x), \ldots, F_n(x))$ and $F^n(y) = (F_1(y), \ldots, F_n(y))$ behaves like
$$h_1(K(x - y)) \le \frac{1}{n} d_H(F^n(x), F^n(y)) \le h_2(K(x - y)),$$
where $h_1, h_2 : [0, 1] \to \mathbb{R}^+$ are continuous decreasing functions, and $h_1(1) = h_2(1) = 0$ and $h_1(0) = h_2(0) = c > 0$. In other words, we would like to map $D$-dimensional real vectors into $n$-bit binary strings in a locality-sensitive manner, where the notion of locality is induced by the kernel $K$. We will achieve this goal by drawing $F^n$ appropriately at random.
Random Fourier features. Recently, Rahimi and Recht [8] gave a scheme that takes a Mercer kernel satisfying (K1) and (K2) and produces a random mapping $\Phi^n : \mathbb{R}^D \to \mathbb{R}^n$, such that, with high probability, the inner product of any two transformed points approximates the kernel: $\Phi^n(x) \cdot \Phi^n(y) \approx K(x - y)$ for all $x, y$. Their scheme exploits Bochner's theorem [9], a fundamental result in harmonic analysis which says that any such $K$ is a Fourier transform of a uniquely defined probability measure $P_K$ on $\mathbb{R}^D$. They define the random Fourier features (RFF) via
$$\phi_{\omega,b}(x) = \sqrt{2} \cos(\omega \cdot x + b), \qquad (1)$$
where $\omega \sim P_K$ and $b \sim \mathrm{Unif}[0, 2\pi]$. For example, for the Gaussian kernel $K(s) = e^{-\gamma \|s\|^2/2}$, we take $\omega \sim \mathrm{Normal}(0, \gamma I_{D \times D})$. With these features, we have $\mathbb{E}[\phi_{\omega,b}(x) \phi_{\omega,b}(y)] = K(x - y)$.
The scheme of [8] is as follows: draw an i.i.d. sample $((\omega_1, b_1), \ldots, (\omega_n, b_n))$, where each $\omega_i \sim P_K$ and $b_i \sim \mathrm{Unif}[0, 2\pi]$, and define a mapping $\Phi^n : \mathbb{R}^D \to \mathbb{R}^n$ via $\Phi^n(x) = \frac{1}{\sqrt{n}} \left(\phi_{\omega_1,b_1}(x), \ldots, \phi_{\omega_n,b_n}(x)\right)$ for $x \in \mathcal{X}$. Then $\mathbb{E}[\Phi^n(x) \cdot \Phi^n(y)] = K(x - y)$ for all $x, y$.
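As a concrete illustration, the following sketch (our own; the paper contains no code, and the variable names and seed are hypothetical) draws random Fourier features for the Gaussian kernel and checks that the inner product of two transformed points approximates the kernel value:

```python
import numpy as np

def gaussian_rff(X, n, gamma, rng):
    """Map rows of X to n random Fourier features sqrt(2/n) * cos(w.x + b),
    with w ~ Normal(0, gamma I) (the measure P_K for the Gaussian kernel)
    and b ~ Unif[0, 2*pi]."""
    D = X.shape[1]
    W = rng.normal(scale=np.sqrt(gamma), size=(D, n))  # each column is one omega_i
    b = rng.uniform(0.0, 2.0 * np.pi, size=n)
    return np.sqrt(2.0 / n) * np.cos(X @ W + b)

rng = np.random.default_rng(0)
x = np.array([[0.0, 0.0]])
y = np.array([[1.0, 0.5]])
gamma = 1.0
Z = gaussian_rff(np.vstack([x, y]), n=20000, gamma=gamma, rng=rng)
approx = float(Z[0] @ Z[1])                                # Phi^n(x) . Phi^n(y)
exact = float(np.exp(-gamma * np.sum((x - y) ** 2) / 2))   # K(x - y)
print(approx, exact)  # close for large n
```

The Monte Carlo error of the inner-product estimate decays like $O(1/\sqrt{n})$, so a few thousand features already give a close match.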
From random Fourier features to random binary codes. We will compose the RFFs with random binary quantizers. Draw a random threshold $t \sim \mathrm{Unif}[-1, 1]$ and define the quantizer $Q_t : [-1, 1] \to \{-1, +1\}$ via $Q_t(u) \triangleq \mathrm{sgn}(u + t)$, where we let $\mathrm{sgn}(u) = -1$ if $u < 0$ and $\mathrm{sgn}(u) = +1$ if $u \ge 0$. We note the following basic fact (we omit the easy proof):

Lemma 2.1 For any $u, v \in [-1, 1]$, $\Pr_t\{Q_t(u) \neq Q_t(v)\} = |u - v|/2$.

Now, given a kernel $K$, we define a random map $F_{t,\omega,b} : \mathbb{R}^D \to \{0, 1\}$ through
$$F_{t,\omega,b}(x) \triangleq \frac{1}{2}\left[1 + Q_t(\cos(\omega \cdot x + b))\right], \qquad (2)$$
where $t \sim \mathrm{Unif}[-1, 1]$, $\omega \sim P_K$, and $b \sim \mathrm{Unif}[0, 2\pi]$ are independent of one another. From now on, we will often omit the subscripts $t, \omega, b$ and just write $F$ for the sake of brevity. We have:

Lemma 2.2
$$\mathbb{E}\,1\{F(x) \neq F(y)\} = h_K(x - y) \triangleq \frac{8}{\pi^2} \sum_{m=1}^{\infty} \frac{1 - K(mx - my)}{4m^2 - 1}, \qquad \forall x, y. \qquad (3)$$
Proof: Using Lemma 2.1, we can show $\mathbb{E}\,1\{F(x) \neq F(y)\} = \frac{1}{2} \mathbb{E}_{\omega,b}\left|\cos(\omega \cdot x + b) - \cos(\omega \cdot y + b)\right|$. Using trigonometric identities and the independence of $\omega$ and $b$, we can express this expectation as
$$\mathbb{E}_{b,\omega}\left|\cos(\omega \cdot x + b) - \cos(\omega \cdot y + b)\right| = \frac{4}{\pi}\, \mathbb{E}_{\omega}\left|\sin \frac{\omega \cdot (x - y)}{2}\right|.$$
We now make use of the Fourier series representation of the full rectified sine wave $g(\tau) = |\sin(\tau)|$:
$$g(\tau) = \frac{2}{\pi} + \frac{4}{\pi} \sum_{m=1}^{\infty} \frac{\cos(2m\tau)}{1 - 4m^2} = \frac{4}{\pi} \sum_{m=1}^{\infty} \frac{1 - \cos(2m\tau)}{4m^2 - 1}.$$
Using this together with the fact that $\mathbb{E}_{\omega} \cos(\omega \cdot s) = K(s)$ for any $s \in \mathbb{R}^D$ [8], we obtain (3).
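Equation (3) can be checked numerically. The sketch below (ours, with the Gaussian kernel and hypothetical variable names) draws many independent triples $(t, \omega, b)$, estimates $\Pr\{F(x) \neq F(y)\}$ empirically, and compares it with a truncation of the series $h_K$:

```python
import numpy as np

def h_K(dist2, gamma, terms=2000):
    """Truncated series (3) for the Gaussian kernel K(s) = exp(-gamma*||s||^2/2)."""
    m = np.arange(1, terms + 1)
    K_m = np.exp(-gamma * (m ** 2) * dist2 / 2)  # K(m*x - m*y)
    return (8 / np.pi ** 2) * np.sum((1 - K_m) / (4 * m ** 2 - 1))

rng = np.random.default_rng(1)
x = np.array([0.3, -0.2]); y = np.array([-0.4, 0.5]); gamma = 1.0
N = 200000
W = rng.normal(scale=np.sqrt(gamma), size=(N, 2))  # omega ~ P_K
b = rng.uniform(0, 2 * np.pi, size=N)
t = rng.uniform(-1, 1, size=N)
# F(x) = 1 exactly when Q_t(cos(omega.x + b)) = sgn(cos(omega.x + b) + t) = +1
Fx = (np.cos(W @ x + b) + t >= 0)
Fy = (np.cos(W @ y + b) + t >= 0)
empirical = np.mean(Fx != Fy)
series = h_K(np.sum((x - y) ** 2), gamma)
print(empirical, series)  # the two agree to Monte Carlo accuracy
```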
Lemma 2.2 shows that the probability that $F(x) \neq F(y)$ is a well-defined continuous function of $x - y$. The infinite series in (3) can, of course, be computed numerically to any desired precision. In addition, we have the following upper and lower bounds solely in terms of the kernel value $K(x - y)$:

Lemma 2.3 Define the functions
$$h_1(u) \triangleq \frac{4}{\pi^2}(1 - u) \qquad \text{and} \qquad h_2(u) \triangleq \min\left\{\frac{1}{2}\sqrt{1 - u},\ \frac{4}{\pi^2}\left(1 - 2u/3\right)\right\},$$
where $u \in [0, 1]$. Note that $h_1(0) = h_2(0) = 4/\pi^2 \approx 0.405$ and that $h_1(1) = h_2(1) = 0$. Then $h_1(K(x - y)) \le h_K(x - y) \le h_2(K(x - y))$ for all $x, y$.
Proof: Let $\Delta \triangleq \cos(\omega \cdot x + b) - \cos(\omega \cdot y + b)$. Then $\mathbb{E}|\Delta| = \mathbb{E}\sqrt{\Delta^2} \le \sqrt{\mathbb{E}\Delta^2}$ (the last step uses concavity of the square root). Using the properties of the RFF, $\mathbb{E}\Delta^2 = \frac{1}{2}\,\mathbb{E}[(\phi_{\omega,b}(x) - \phi_{\omega,b}(y))^2] = 1 - K(x - y)$. Therefore, $\mathbb{E}\,1\{F(x) \neq F(y)\} = \frac{1}{2}\mathbb{E}|\Delta| \le \frac{1}{2}\sqrt{1 - K(x - y)}$. We also have
$$\mathbb{E}\,1\{F(x) \neq F(y)\} = \frac{4}{\pi^2} - \frac{8}{\pi^2} \sum_{m=1}^{\infty} \frac{K(mx - my)}{4m^2 - 1} \le \frac{4}{\pi^2} - \frac{8}{3\pi^2} K(x - y) = \frac{4}{\pi^2}\left(1 - 2K(x - y)/3\right).$$
This proves the upper bound in the lemma. On the other hand, since $K$ satisfies (K3),
$$h_K(x - y) \ge \left(1 - K(x - y)\right) \cdot \frac{8}{\pi^2} \sum_{m=1}^{\infty} \frac{1}{4m^2 - 1} = \frac{4}{\pi^2}\left(1 - K(x - y)\right),$$
because the $m$th term of the series in (3) is not smaller than $\left(1 - K(x - y)\right)/(4m^2 - 1)$.
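The sandwich of Lemma 2.3 can be verified numerically for the Gaussian kernel, for which $K(m(x - y)) = u^{m^2}$ when $u = K(x - y)$, so $h_K$ is a function of $u$ alone. This sketch (ours; function names are hypothetical) checks $h_1(u) \le h_K \le h_2(u)$ on a grid:

```python
import numpy as np

def h1(u):
    return (4 / np.pi ** 2) * (1 - u)

def h2(u):
    return np.minimum(0.5 * np.sqrt(1 - u), (4 / np.pi ** 2) * (1 - 2 * u / 3))

def h_K_gauss(u, terms=500):
    """Series (3) for the Gaussian kernel, parameterized by u = K(x - y).
    Uses sum_{m>=1} 1/(4m^2-1) = 1/2 so that truncation error only affects
    the fast-decaying u^(m^2) part."""
    m = np.arange(1, terms + 1)
    return (4 / np.pi ** 2) - (8 / np.pi ** 2) * np.sum(u ** (m ** 2) / (4 * m ** 2 - 1))

for u in np.linspace(0.0, 0.99, 34):
    hk = h_K_gauss(u)
    assert h1(u) <= hk + 1e-9 and hk <= h2(u) + 1e-9
print("bounds of Lemma 2.3 hold on the grid")
```

Note how tight the upper bound is: for small $u$ the series differs from $\frac{4}{\pi^2}(1 - 2u/3)$ only by the $m \ge 2$ terms.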
Fig. 1 shows a comparison of the kernel approximation properties of the RFFs [8] with our scheme
for the Gaussian kernel.
Figure 1: (a) Approximating the Gaussian kernel by random features (green) and random signs (red). (b) Relationship of normalized Hamming distance between random signs to functions of the kernel. The scatter plots in
(a) and (b) are obtained from a synthetic set of 500 uniformly distributed 2D points with n = 5000. (c) Bounds
for normalized Hamming distance in Lemmas 2.2 and 2.3 vs. the Euclidean distance.
Now we concatenate several mappings of the form $F_{t,\omega,b}$ to construct an embedding of $\mathcal{X}$ into the binary cube $\{0, 1\}^n$. Specifically, we draw $n$ i.i.d. triples $(t_1, \omega_1, b_1), \ldots, (t_n, \omega_n, b_n)$ and define
$$F^n(x) = \left(F_1(x), \ldots, F_n(x)\right), \qquad \text{where } F_i(x) \triangleq F_{t_i,\omega_i,b_i}(x),\ i = 1, \ldots, n.$$
As we will show next, this construction ensures that, for any two points $x$ and $y$, the fraction of the bits where the binary strings $F^n(x)$ and $F^n(y)$ disagree sharply concentrates around $h_K(x - y)$, provided $n$ is large enough. Using the results proved above, we conclude that, for any two points $x$ and $y$ that are "similar," i.e., $K(x - y) \approx 1$, most of the bits of $F^n(x)$ and $F^n(y)$ will agree, whereas for any two points $x$ and $y$ that are "dissimilar," i.e., $K(x - y) \approx 0$, $F^n(x)$ and $F^n(y)$ will disagree in about 40% or more of their bits.
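Putting the pieces together, the whole encoder is only a few lines. The following is our own sketch (class and parameter names are hypothetical, not the authors' code):

```python
import numpy as np

class RandomBinaryCode:
    """n-bit locality-sensitive code F^n(x) = (F_1(x), ..., F_n(x)) for the
    Gaussian kernel: F_i(x) = [1 + sgn(cos(w_i.x + b_i) + t_i)] / 2."""
    def __init__(self, D, n, gamma, seed=0):
        rng = np.random.default_rng(seed)
        self.W = rng.normal(scale=np.sqrt(gamma), size=(D, n))  # omega_i ~ P_K
        self.b = rng.uniform(0, 2 * np.pi, size=n)
        self.t = rng.uniform(-1, 1, size=n)

    def encode(self, X):
        # One row of n bits per row of X.
        return (np.cos(X @ self.W + self.b) + self.t >= 0).astype(np.uint8)

def normalized_hamming(c1, c2):
    return np.mean(c1 != c2)

coder = RandomBinaryCode(D=2, n=4096, gamma=1.0)
codes = coder.encode(np.array([[0.0, 0.0], [0.1, 0.0], [3.0, 3.0]]))
d_near = normalized_hamming(codes[0], codes[1])  # similar pair: few bits differ
d_far = normalized_hamming(codes[0], codes[2])   # dissimilar pair: ~40% differ
print(d_near, d_far)
```

With $n = 4096$ bits the nearby pair agrees on almost all bits, while the distant pair disagrees on roughly 40% of them, consistent with the limiting value $4/\pi^2 \approx 0.405$.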
Analysis of performance. We first prove a Johnson-Lindenstrauss type result which says that, for any finite subset of $\mathbb{R}^D$, the normalized Hamming distance respects the similarities between points. It should be pointed out that the analogy with Johnson-Lindenstrauss is only qualitative: our embedding is highly nonlinear, in contrast to random linear projections used there [4], and the resulting distortion of the neighborhood structure, although controllable, does not amount to a mere rescaling by constants.

Theorem 2.4 Fix $\epsilon, \delta \in (0, 1)$. For any finite data set $\mathcal{D} = \{x_1, \ldots, x_N\} \subset \mathbb{R}^D$, $F^n$ is such that
$$h_K(x_j - x_k) - \epsilon \le \frac{1}{n} d_H(F^n(x_j), F^n(x_k)) \le h_K(x_j - x_k) + \epsilon \qquad (4)$$
$$h_1(K(x_j - x_k)) - \epsilon \le \frac{1}{n} d_H(F^n(x_j), F^n(x_k)) \le h_2(K(x_j - x_k)) + \epsilon \qquad (5)$$
for all $j, k$ with probability $\ge 1 - N^2 e^{-2n\epsilon^2}$. Moreover, the events (4) and (5) will hold with probability $\ge 1 - \delta$ if $n \ge (1/2\epsilon^2) \log(N^2/\delta)$. Thus, any $N$-point subset of $\mathbb{R}^D$ can be embedded, with high probability, into the binary cube of dimension $O(\log N)$ in a similarity-preserving way.
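For concreteness, the bit budget from the "Moreover" clause of Theorem 2.4 is easy to evaluate; the example values below are ours, chosen only for illustration:

```python
import math

def bits_needed(N, eps, delta):
    """Smallest n satisfying n >= (1 / (2*eps^2)) * log(N^2 / delta)."""
    return math.ceil(math.log(N ** 2 / delta) / (2 * eps ** 2))

# e.g. a million points, distortion eps = 0.1, failure probability delta = 0.01
n = bits_needed(N=10 ** 6, eps=0.1, delta=0.01)
print(n)  # grows only logarithmically in N
```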
The proof (omitted) is by a standard argument using Hoeffding's inequality and the union bound, as well as the bounds of Lemma 2.3. We also prove a much stronger result: any compact subset $\mathcal{X} \subset \mathbb{R}^D$ can be embedded into a binary cube whose dimension depends only on the intrinsic dimension and the diameter of $\mathcal{X}$ and on the second moment of $P_K$, such that the normalized Hamming distance behaves in a similarity-preserving way for all pairs of points in $\mathcal{X}$ simultaneously. We make use of the following [5]:

Definition 2.5 The Assouad dimension of $\mathcal{X} \subset \mathbb{R}^D$, denoted by $d_{\mathcal{X}}$, is the smallest integer $k$, such that, for any ball $B \subset \mathbb{R}^D$, the set $B \cap \mathcal{X}$ can be covered by $2^k$ balls of half the radius of $B$.

The Assouad dimension is a widely used measure of the intrinsic dimension [2, 6, 3]. For example, if $\mathcal{X}$ is an $\ell_p$ ball in $\mathbb{R}^D$, then $d_{\mathcal{X}} = O(D)$; if $\mathcal{X}$ is a $d$-dimensional hyperplane in $\mathbb{R}^D$, then $d_{\mathcal{X}} = O(d)$ [2]. Moreover, if $\mathcal{X}$ is a $d$-dimensional Riemannian submanifold of $\mathbb{R}^D$ with a suitably bounded curvature, then $d_{\mathcal{X}} = O(d)$ [3]. We now have the following result:
Theorem 2.6 Suppose that the kernel $K$ is such that $L_K \triangleq \sqrt{\mathbb{E}_{\omega \sim P_K} \|\omega\|^2} < +\infty$. Then there exists a constant $C > 0$ independent of $D$ and $K$, such that the following holds. Fix any $\epsilon, \delta > 0$. If
$$n \ge \max\left\{\frac{C L_K d_{\mathcal{X}} \operatorname{diam} \mathcal{X}}{\epsilon^2},\ \frac{2}{\epsilon^2} \log \frac{2}{\delta}\right\},$$
then, with probability at least $1 - \delta$, the mapping $F^n$ is such that, for every pair $x, y \in \mathcal{X}$,
$$h_K(x - y) - \epsilon \le \frac{1}{n} d_H(F^n(x), F^n(y)) \le h_K(x - y) + \epsilon. \qquad (6)$$
Proof: For every pair $x, y \in \mathcal{X}$, let $A_{x,y}$ be the set of all $\theta \triangleq (t, \omega, b)$, such that $F_{t,\omega,b}(x) \neq F_{t,\omega,b}(y)$, and let $\mathcal{A} = \{A_{x,y} : x, y \in \mathcal{X}\}$. Then we can write
$$\frac{1}{n} d_H(F^n(x), F^n(y)) = \frac{1}{n} \sum_{i=1}^n 1\{\theta_i \in A_{x,y}\}.$$
For any sequence $\theta^n = (\theta_1, \ldots, \theta_n)$, define the uniform deviation
$$\Lambda(\theta^n) \triangleq \sup_{x,y \in \mathcal{X}} \left|\frac{1}{n} \sum_{i=1}^n 1\{\theta_i \in A_{x,y}\} - \mathbb{E}\,1\{F_{t,\omega,b}(x) \neq F_{t,\omega,b}(y)\}\right|.$$
For every $1 \le i \le n$ and an arbitrary $\theta'_i$, let $\theta^{n(i)}$ denote $\theta^n$ with the $i$th component replaced by $\theta'_i$. Then $|\Lambda(\theta^n) - \Lambda(\theta^{n(i)})| \le 1/n$ for any $i$ and any $\theta'_i$. Hence, by McDiarmid's inequality,
$$\Pr\left\{|\Lambda(\theta^n) - \mathbb{E}_{\theta^n}\Lambda(\theta^n)| > \epsilon\right\} \le 2e^{-2n\epsilon^2}, \qquad \forall \epsilon > 0. \qquad (7)$$
Now we need to bound $\mathbb{E}_{\theta^n}\Lambda(\theta^n)$. Using a standard symmetrization technique [14], we can write
$$\mathbb{E}_{\theta^n}\Lambda(\theta^n) \le 2R(\mathcal{A}) \triangleq 2\,\mathbb{E}_{\theta^n,\sigma^n}\left[\sup_{x,y \in \mathcal{X}} \left|\frac{1}{n} \sum_{i=1}^n \sigma_i 1\{\theta_i \in A_{x,y}\}\right|\right], \qquad (8)$$
where $\sigma^n = (\sigma_1, \ldots, \sigma_n)$ is an i.i.d. Rademacher sequence, $\Pr\{\sigma_i = -1\} = \Pr\{\sigma_i = +1\} = 1/2$. The quantity $R(\mathcal{A})$ can be bounded by the Dudley entropy integral [14]
$$R(\mathcal{A}) \le \frac{C_0}{\sqrt{n}} \int_0^{\infty} \sqrt{\log N(\epsilon, \mathcal{A}, \|\cdot\|_{L^2(P)})}\, d\epsilon, \qquad (9)$$
where $C_0 > 0$ is a universal constant, and $N(\epsilon, \mathcal{A}, \|\cdot\|_{L^2(P)})$ is the $\epsilon$-covering number of the function class $\{\theta \mapsto 1\{\theta \in A\} : A \in \mathcal{A}\}$ with respect to the $L^2(P)$ norm, where $P$ is the distribution of $\theta \triangleq (t, \omega, b)$. We will bound these covering numbers by the covering numbers of $\mathcal{X}$ with respect to the Euclidean norm on $\mathbb{R}^D$. It can be shown that, for any four points $x, x', y, y' \in \mathcal{X}$,
$$\left\|1_{A_{x,y}} - 1_{A_{x',y'}}\right\|^2_{L^2(P)} = \int \left|1\{\theta \in A_{x,y}\} - 1\{\theta \in A_{x',y'}\}\right|^2 dP(\theta) \le P(B_x \,\triangle\, B_{x'}) + P(B_y \,\triangle\, B_{y'}),$$
where $\triangle$ denotes symmetric difference of sets, and $B_x \triangleq \{(t, \omega, b) : Q_t(\cos(\omega \cdot x + b)) = +1\}$ (details omitted for lack of space). Now,
$$2P(B_x \,\triangle\, B_{x'}) = 2\,\mathbb{E}_{\omega,b}\left[\Pr_t\left\{Q_t(\cos(\omega \cdot x + b)) \neq Q_t(\cos(\omega \cdot x' + b))\right\}\right] = \mathbb{E}_{\omega,b}\left|\cos(\omega \cdot x + b) - \cos(\omega \cdot x' + b)\right| \le \mathbb{E}_{\omega}|\omega \cdot (x - x')| \le L_K \|x - x'\|.$$
Then $P(B_x \,\triangle\, B_{x'}) + P(B_y \,\triangle\, B_{y'}) \le \frac{L_K}{2}\left(\|x - x'\| + \|y - y'\|\right)$. This implies that $N(\epsilon, \mathcal{A}, \|\cdot\|_{L^2(P)}) \le N(\epsilon^2/L_K, \mathcal{X}, \|\cdot\|)^2$, where $N(\epsilon, \mathcal{X}, \|\cdot\|)$ are the covering numbers of $\mathcal{X}$ w.r.t. the Euclidean norm $\|\cdot\|$. By definition of the Assouad dimension, $N(\epsilon, \mathcal{X}, \|\cdot\|) \le (2 \operatorname{diam} \mathcal{X}/\epsilon)^{d_{\mathcal{X}}}$, so $N(\epsilon, \mathcal{A}, \|\cdot\|_{L^2(P)}) \le \left(\frac{2 L_K \operatorname{diam} \mathcal{X}}{\epsilon^2}\right)^{2d_{\mathcal{X}}}$. We can now estimate the integral in (9) by
$$R(\mathcal{A}) \le C_1 \sqrt{\frac{L_K d_{\mathcal{X}} \operatorname{diam} \mathcal{X}}{n}}, \qquad (10)$$
for some constant $C_1 > 0$. From (10) and (8), we obtain $\mathbb{E}_{\theta^n}\Lambda(\theta^n) \le C_2 \sqrt{\frac{L_K d_{\mathcal{X}} \operatorname{diam} \mathcal{X}}{n}}$, where $C_2 = 2C_1$. Using this and (7) with $\epsilon$ replaced by $\epsilon/2$, we obtain (6) with $C = 16C_2^2$.
For example, with the Gaussian kernel $K(s) = e^{-\gamma\|s\|^2/2}$ on $\mathbb{R}^D$, we have $L_K = \sqrt{D\gamma}$. The kernel bandwidth $\gamma$ is often chosen as $\gamma \propto 1/[D(\operatorname{diam} \mathcal{X})^2]$ (see, e.g., [12, Sec. 7.8]); with this setting, the number of bits needed to guarantee the bound (6) is $n = \Omega((d_{\mathcal{X}}/\epsilon^2)\log(1/\delta))$. It is possible, in principle, to construct a dimension-reducing embedding of $\mathcal{X}$ into a binary cube, provided the number of bits in the embedding is larger than the intrinsic dimension of $\mathcal{X}$.
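The constant $L_K$ for the Gaussian kernel is easy to verify by Monte Carlo; the dimensions and bandwidth below are our own illustrative choices:

```python
import numpy as np

D, gamma = 32, 0.25
rng = np.random.default_rng(2)
# omega ~ P_K = Normal(0, gamma * I) for the Gaussian kernel
omega = rng.normal(scale=np.sqrt(gamma), size=(100000, D))
L_K_mc = np.sqrt(np.mean(np.sum(omega ** 2, axis=1)))  # sqrt(E ||omega||^2)
L_K_exact = np.sqrt(D * gamma)
print(L_K_mc, L_K_exact)  # the estimate matches sqrt(D * gamma)
```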
Figure 2: Synthetic results. First row: scatter plots of normalized Hamming distance vs. Euclidean distance for our method (a) and spectral hashing (b) with code size 32 bits. Green indicates pairs of data points that are considered true "neighbors" for the purpose of retrieval. Second row: scatter plots for our method (c) and spectral hashing (d) with code size 512 bits. Third row: recall-precision plots for our method (e) and spectral hashing (f) for code sizes from 8 to 512 bits (best viewed in color).
3 Empirical Evaluation
In this section, we present the results of our scheme with a Gaussian kernel, and compare our performance to spectral hashing [15].1 Spectral hashing is a recently introduced, state-of-the-art approach
that has been reported to obtain better results than several other well-known methods, including
LSH [1] and restricted Boltzmann machines [11]. Unlike our method, spectral hashing chooses
code parameters in a deterministic, data-dependent way, motivated by results on convergence of
1 We use the code made available by the authors of [15] at http://www.cs.huji.ac.il/~yweiss/SpectralHashing/.
Figure 3: Recall-precision curves for the LabelMe database for our method (left) and for spectral hashing
(right). Best viewed in color.
eigenvectors of graph Laplacians to Laplacian eigenfunctions on manifolds. Though spectral hashing is derived from completely different considerations than our method, its encoding scheme is
similar to ours in terms of basic computation. Namely, each bit of a spectral hashing code is given
by sgn(cos(k ? ? x)), where ? is a principal direction of the data (instead of a randomly sampled
direction, as in our method) and k is a weight that is deterministically chosen according to the analytical form of certain kinds of Laplacian eigenfunctions. The structural similarity between spectral
hashing and our method makes comparison between them appropriate.
To demonstrate the basic behavior of our method, we first report results for two-dimensional synthetic data using a protocol similar to [15] (we have also conducted tests on higher-dimensional
synthetic data, with very similar results). We sample 10,000 "database" and 1,000 "query" points
from a uniform distribution defined on a 2D rectangle with aspect ratio 0.5. To distinguish true positives from false positives for evaluating retrieval performance, we select a "nominal" neighborhood
radius so that each query point on average has 50 neighbors in the database. Next, we rescale the
data so that this radius is 1, and set the bandwidth of the kernel to $\gamma = 1$. Fig. 2 (a,c) shows scatter
plots of normalized Hamming distance vs. Euclidean distance for each query point paired with each
database point for 32-bit and 512-bit codes. As more bits are added to our code, the variance of the
scatter plots decreases, and the points cluster tighter around the theoretically expected curve (Eq. (3),
Fig. 1). The scatter plots for spectral hashing are shown in Fig. 2 (b,d). As the number of bits in the
spectral hashing code is increased, normalized Hamming distance does not appear to converge to any
clear function of the Euclidean distance. Because the derivation of spectral hashing in [15] includes
several heuristic steps, the behavior of the resulting scheme appears to be difficult to analyze, and
shows some undesirable effects as the code size increases. Figure 2 (e,f) compares recall-precision
curves for both methods using a range of code sizes. Since the normalized Hamming distance for
our method converges to a monotonic function of the Euclidean distance, its performance keeps
improving as a function of code size. On the other hand, spectral hashing starts out with promising
performance for very short codes (up to 32 bits), but then deteriorates for higher numbers of bits.
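The synthetic protocol just described can be sketched in a few lines of NumPy. This is our illustrative sketch, not the authors' code: the encoder uses random directions with random shifts and quantization thresholds, in the spirit of the random shift-invariant scheme described here; the nominal-radius rescaling step is omitted, and all names and constants are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_encoder(dim, n_bits, sigma=1.0):
    # One random direction, phase and threshold per bit; wider sigma means
    # slower-varying (more kernel-like) bits. Constants here are ours.
    W = rng.normal(scale=1.0 / sigma, size=(n_bits, dim))
    b = rng.uniform(0.0, 2.0 * np.pi, size=n_bits)
    t = rng.uniform(-1.0, 1.0, size=n_bits)
    return lambda X: (np.cos(X @ W.T + b) + t >= 0).astype(np.uint8)

# Uniform points on a 2d rectangle with aspect ratio 0.5, as in the text.
db = rng.uniform([0.0, 0.0], [2.0, 1.0], size=(10_000, 2))
queries = rng.uniform([0.0, 0.0], [2.0, 1.0], size=(1_000, 2))

encode = make_encoder(dim=2, n_bits=512)
db_codes, q_codes = encode(db), encode(queries)

# Normalized Hamming and Euclidean distances from one query to the database.
d_ham = np.mean(q_codes[0] != db_codes, axis=1)
d_euc = np.linalg.norm(db - queries[0], axis=1)
```

Plotting d_ham against d_euc reproduces the qualitative shape of the scatter plots in Fig. 2 (a,c): with more bits, normalized Hamming distance concentrates around a monotonic function of Euclidean distance.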
Next, we present retrieval results for 14,871 images taken from the LabelMe database [10]. The
images are represented by 320-dimensional GIST descriptors [7], which have proven to be effective
at capturing perceptual similarity between scenes. For this experiment, we randomly select 1,000
images to serve as queries, and the rest make up the "database." As with the synthetic experiments, a
nominal threshold of the average distance to the 50th nearest neighbor is used to determine whether
a database point returned for a given query is considered a true positive. Figure 3 shows precision-recall curves for code sizes ranging from 16 bits to 1024 bits. As in the synthetic experiments,
spectral hashing appears to have an advantage over our method for extremely small code sizes, up to
about 32 bits. However, this low bit regime may not be very useful in practice, since below 32 bits,
neither method achieves performance levels that would be satisfactory for real-world applications.
For larger code sizes, our method begins to dominate. For example, with a 128-bit code (which is
equivalent to just two double-precision floating point numbers), our scheme achieves 0.8 precision
[Figure 4 panel annotations: "Euclidean neighbors", "32 bit code", "512 bit code"; per-panel precisions 0.81, 1.00, 0.38, 0.96.]
Figure 4: Examples of retrieval for two query images on the LabelMe database. The left column shows top
48 neighbors for each query according to Euclidean distance (the query image is in the top left of the collage).
The middle (resp. right) column shows nearest neighbors according to normalized Hamming distance with a
32-bit (resp. 512-bit) code. The precision of retrieval is evaluated as the proportion of top Hamming neighbors
that are also Euclidean neighbors within the "nominal" radius. Incorrectly retrieved images in the middle and
right columns are shown with a red border. Best viewed in color.
at 0.2 recall, whereas spectral hashing only achieves about 0.5 precision at the same recall. Moreover, the performance of spectral hashing actually begins to decrease for code sizes above 256 bits.
Finally, Figure 4 shows retrieval results for our method on a couple of representative query images.
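The precision measure defined in the Figure 4 caption (the proportion of top Hamming neighbors that are also Euclidean neighbors within the nominal radius) can be computed directly. The helper below is a hypothetical sketch of ours, demonstrated on a tiny hand-built example rather than real data:

```python
import numpy as np

def retrieval_precision(query_code, query_point, db_codes, db_points,
                        radius, top_k=48):
    # Proportion of the top-k Hamming neighbors that are true Euclidean
    # neighbors, i.e. lie within the nominal radius of the query.
    d_ham = np.mean(query_code != db_codes, axis=1)
    top = np.argsort(d_ham)[:top_k]
    d_euc = np.linalg.norm(db_points[top] - query_point, axis=1)
    return np.mean(d_euc <= radius)

# Tiny hand-built example: 1-d points encoded by threshold bits [x > tau_j],
# so Hamming distance grows with separation along the line.
taus = np.linspace(0.0, 6.0, 32)
pts = np.array([[0.0], [0.1], [0.2], [5.0], [6.0]])
codes = (pts > taus).astype(np.uint8)
q_pt = np.array([0.0])
q_code = (q_pt[None, :] > taus).astype(np.uint8)[0]
p = retrieval_precision(q_code, q_pt, codes, pts, radius=1.0, top_k=3)
```

Here the three nearest Hamming neighbors of the query are the three points within distance 1, so the precision is 1.0; shrinking the radius lowers it.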
In addition to being completely distribution-free and exhibiting more desirable behavior as a function of code size, our scheme has one more practical advantage. Unlike spectral hashing, we retain
the kernel bandwidth σ as a "free parameter," which gives us flexibility in terms of adapting to target
neighborhood size, or setting a target Hamming distance for neighbors at a given Euclidean distance. This can be especially useful for making sure that a significant fraction of neighbors for each
query are mapped to strings whose Hamming distance from the query is no greater than 2. This is a
necessary condition for being able to use binary codes for hashing as opposed to brute-force search
(although, as demonstrated in [11, 13], even brute-force search with binary codes can already be
quite fast). To ensure high recall within a low Hamming radius, we can progressively increase the
kernel bandwidth σ as the code size increases, thus counteracting the increase in unnormalized Hamming distance that inevitably accompanies larger code sizes. Preliminary results (omitted for lack of
space) show that this strategy can indeed increase recall for low Hamming radius while sacrificing
some precision. In the future, we will evaluate this tradeoff more extensively, and test our method
on datasets consisting of millions of data points. At present, our promising initial results, combined
with our comprehensive theoretical analysis, convincingly demonstrate the potential usefulness of
our scheme for large-scale indexing and search applications.
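The bandwidth trade-off described above is easy to probe numerically. In this sketch (ours; the parameter choices are illustrative, not from the paper), a fixed pair of points at Euclidean distance 1 is encoded with random shift-invariant bits for several bandwidths σ, and the fraction of disagreeing bits shrinks as σ grows:

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = np.zeros(2), np.array([1.0, 0.0])   # a fixed pair at distance 1

def disagreement(sigma, n_bits=20_000):
    # Fraction of random shift-invariant bits on which x and y differ;
    # w ~ N(0, I / sigma^2), so larger sigma gives slower-varying bits.
    w = rng.normal(scale=1.0 / sigma, size=(n_bits, 2))
    b = rng.uniform(0.0, 2.0 * np.pi, n_bits)
    t = rng.uniform(-1.0, 1.0, n_bits)
    bx = np.cos(w @ x + b) + t >= 0
    by = np.cos(w @ y + b) + t >= 0
    return float(np.mean(bx != by))

rates = [disagreement(s) for s in (0.5, 1.0, 2.0)]
```

This is the mechanism behind the recall strategy in the text: a wider kernel keeps a pair at fixed Euclidean distance within a smaller Hamming radius, at the cost of coarser discrimination.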
Acknowledgments
This work was supported by NSF CAREER Award No. IIS 0845629.
References
[1] A. Andoni and P. Indyk. Near-optimal hashing algorithms for approximate nearest neighbor in high
dimensions. Commun. ACM, 51(1):117?122, 2008.
[2] K. Clarkson. Nearest-neighbor searching and metric space dimensions. In Nearest-Neighbor Methods for
Learning and Vision: Theory and Practice, pages 15?59. MIT Press, 2006.
[3] S. Dasgupta and Y. Freund. Random projection trees and low dimensional manifolds. In STOC, 2008.
[4] S. Dasgupta and A. Gupta. An elementary proof of a theorem of Johnson and Lindenstrauss. Random
Struct. Alg., 22(1):60?65, 2003.
[5] J. Heinonen. Lectures on Analysis on Metric Spaces. Springer, New York, 2001.
[6] P. Indyk and A. Naor. Nearest-neighbor-preserving embeddings. ACM Trans. Algorithms, 3(3):Art. 31,
2007.
[7] A. Oliva and A. Torralba. Modeling the shape of the scene: a holistic representation of the spatial envelope. Int. J. Computer Vision, 42(3):145?175, 2001.
[8] A. Rahimi and B. Recht. Random features for large-scale kernel machines. In NIPS, 2007.
[9] M. Reed and B. Simon. Methods of Modern Mathematical Physics II: Fourier Analysis, Self-Adjointness.
Academic Press, 1975.
[10] B. Russell, A. Torralba, K. Murphy, and W. T. Freeman. LabelMe: a database and web-based tool for
image annotation. Int. J. Computer Vision, 77:157?173, 2008.
[11] R. Salakhutdinov and G. Hinton. Semantic hashing. In SIGIR Workshop on Inf. Retrieval and App. of
Graphical Models, 2007.
[12] B. Sch?olkopf and A. J. Smola. Learning With Kernels. MIT Press, 2002.
[13] A. Torralba, R. Fergus, and Y. Weiss. Small codes and large databases for recognition. In CVPR, 2008.
[14] A. W. van der Vaart and J. A. Wellner. Weak Convergence and Empirical Processes. Springer, 1996.
[15] Y. Weiss, A. Torralba, and R. Fergus. Spectral hashing. In NIPS, 2008.
Optimal Sampling of Natural Images: A Design
Principle for the Visual System?
William Bialek, a,b Daniel L. Ruderman, a and A. Zee C
a Department of Physics, and
Department of Molecular and Cell Biology
University of California at Berkeley
Berkeley, California 94720
bNEC Research Institute
4 Independence Way
Princeton, New Jersey 08540
CInstitute for Theoretical Physics
University of California at Santa Barbara
Santa Barbara, California 93106
Abstract
We formulate the problem of optimizing the sampling of natural images
using an array of linear filters. Optimization of information capacity is
constrained by the noise levels of the individual channels and by a penalty
for the construction of long-range interconnections in the array. At low
signal-to-noise ratios the optimal filter characteristics correspond to bound
states of a Schrodinger equation in which the signal spectrum plays the
role of the potential. The resulting optimal filters are remarkably similar
to those observed in the mammalian visual cortex and the retinal ganglion
cells of lower vertebrates. The observed scale invariance of natural images
plays an essential role in this construction.
1 Introduction
Under certain conditions the visual system is capable of performing extremely efficient signal processing [1]. One of the major theoretical issues in neural computation
is to understand how this efficiency is reached given the constraints imposed by the
biological hardware. Part of the problem [2] is simply to give an informative representation of the visual world using a limited number of neurons, each of which
has a limited information capacity. The information capacity of the visual system
is determined in part by the spatial transfer characteristics, or "receptive fields," of
the individual cells. From a theoretical point of view we can ask if there exists an
optimal choice for these receptive fields, a choice which maximizes the information
transfer through the system given the hardware constraints. We show that this
optimization problem has a simple formulation which allows us to use the intuition
developed through the variational approach to quantum mechanics.
In general our approach leads to receptive fields which are quite unlike those observed for cells in the visual cortex. In particular orientation selectivity is not a
generic prediction. The optimal filters, however, depend on the statistical properties of the images we are trying to sample. Natural images have a symmetry - scale
invariance [4] - which saves the theory: The optimal receptive fields for sampling
of natural images are indeed orientation selective and bear a striking resemblance
to observed receptive field characteristics in the mammalian visual cortex as well as
the retinal ganglion of lower vertebrates.
2 General Theoretical Formulation
We assume that images are defined by a scalar field φ(x) on a two dimensional
surface with coordinates x. This image is sampled by an array of cells whose
outputs y_n are given by

y_n = \int d^2x\, F(x - x_n)\, \phi(x) + \eta_n ,   (1)

where the cell is located at site x_n, its spatial transfer function or receptive field is
defined by F, and η is an independent noise source at each sampling point. We will
assume for simplicity that the noise source is Gaussian, with ⟨η²⟩ = σ². Our task
is to find the receptive field F which maximizes the information provided about φ
by the set of outputs {y_n}.

If the field φ is itself chosen from a stationary Gaussian distribution then the information carried by the {y_n} is given by [3]
I = \frac{1}{2\ln 2}\,\mathrm{Tr}\,\ln\!\left[\delta_{nm} + \frac{1}{\sigma^2}\int \frac{d^2k}{(2\pi)^2}\; e^{ik\cdot(x_n - x_m)}\, |F(k)|^2\, S(k)\right],   (2)
where S(k) is the power spectrum of the signals,
S(k) = \int d^2y\; e^{-ik\cdot y}\, \langle \phi(x+y)\,\phi(x) \rangle,   (3)

and F(k) = \int d^2x\; e^{-ik\cdot x}\, F(x) is the receptive field in momentum (Fourier) space.
At low signal-to-noise ratios (large σ²) we have

I \approx \frac{N}{2\ln 2\;\sigma^2} \int \frac{d^2k}{(2\pi)^2}\; |F(k)|^2\, S(k),   (4)
where N is the total number of cells.
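As a numerical sanity check (ours, not in the paper), one can verify in one dimension that Eq. (4) is the leading term of Eq. (2) when σ² is large; the filter-spectrum product and the sampling sites below are illustrative choices.

```python
import numpy as np

# One-dimensional check that Eq. (4) is the large-sigma^2 limit of Eq. (2).
k = np.linspace(-20.0, 20.0, 4001)
dk = k[1] - k[0]
F2S = np.exp(-k**2) / (1.0 + k**2)        # |F(k)|^2 S(k), an illustrative choice
xs = np.array([0.0, 1.0, 2.5, 4.0])       # sampling sites x_n

phase = np.exp(1j * np.outer(xs, k))
# M_nm = integral dk/(2 pi) e^{ik(x_n - x_m)} |F(k)|^2 S(k)
M = ((phase * F2S) @ phase.conj().T).real * dk / (2.0 * np.pi)

sigma2 = 1e4
I_exact = np.linalg.slogdet(np.eye(len(xs)) + M / sigma2)[1] / (2.0 * np.log(2))
I_approx = np.trace(M) / (2.0 * np.log(2) * sigma2)
```

The relative error between the two is of order Tr M / σ², so it shrinks quickly as the noise level grows.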
To make our definition of the noise level σ meaningful we must constrain the total
"gain" of the filters F. One simple approach is to normalize the functions F in the
usual L2 sense,

\int d^2x\; F^2(x) = \int \frac{d^2k}{(2\pi)^2}\; |F(k)|^2 = 1.   (5)
If we imagine driving the system with spectrally white images, this condition fixes
the total signal power passing through the filter.
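The equality of the two normalizations in Eq. (5) is just Parseval's theorem. Here is a quick one-dimensional numerical check (ours), with F̂(k) = ∫ e^{−ikx} F(x) dx approximated by the FFT:

```python
import numpy as np

N, L = 4096, 40.0
x = np.linspace(-L / 2, L / 2, N, endpoint=False)
dx = x[1] - x[0]
F = np.exp(-x**2) * np.cos(3.0 * x)      # an arbitrary smooth, localized filter

lhs = np.sum(F**2) * dx                  # integral of F(x)^2 dx
F_hat = dx * np.fft.fft(F)               # samples of the continuous transform
dk = 2.0 * np.pi / (N * dx)
rhs = np.sum(np.abs(F_hat)**2) * dk / (2.0 * np.pi)
```

Up to floating-point error the two sides agree exactly, by the discrete Parseval identity of the FFT.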
Even with normalization, optimization of information capacity is still not well-posed. To avoid pathologies we must constrain the scale of variations in k-space.
This makes sense biologically since we know that sharp features in k-space can
be achieved only by introducing long-range interactions in real space, and cells in
the visual system typically have rather local interconnections. We implement this
constraint by introducing a penalty proportional to the mean square spatial extent
of the receptive field,

\langle |x|^2 \rangle = \int d^2x\; |x|^2\, F^2(x).   (6)
With all the constraints we find that, at low signal to noise ratio, our optimization
problem becomes that of minimizing the functional

C[F] = \int \frac{d^2k}{(2\pi)^2} \left[ \frac{a}{2}\, |\nabla_k F(k)|^2 - \frac{S(k)}{2\ln 2\;\sigma^2}\, |F(k)|^2 - \lambda\, |F(k)|^2 \right],   (7)
where λ is a Lagrange multiplier and a measures the strength of the locality constraint. The optimal filters are then solutions of the variational equation,

-\frac{a}{2}\, \nabla_k^2 F(k) - \frac{S(k)}{2\ln 2\;\sigma^2}\, F(k) = \lambda F(k).   (8)
We recognize this as the Schrödinger equation for a particle moving in k-space, in
which the mass M = \hbar^2/a, the potential V(k) = -S(k)/(2\ln 2\;\sigma^2), and λ is the
energy eigenvalue. Since we are interested in normalizable F, we are restricted to
bound states, and the optimal filter is just the bound state wave function.
There are in general several optimal filters, corresponding to the different bound
states. Each of these filters gives the same value for the total cost function C[F]
and hence is equally "good" in this context. Thus each sampling point should be
served by a set of filters rather than just one. Indeed, in the visual cortex one finds
a given region of the visual field being sampled by many cells with different spatial
frequency and orientation selectivities.
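To make the Schrödinger analogy concrete, here is a one-dimensional finite-difference sketch (ours; the paper's problem is two-dimensional, and the spectrum S(k) below is an illustrative smooth choice, not a natural-image spectrum). The ground state of the discretized operator is a nodeless, normalizable filter with λ < 0, i.e. a bound state:

```python
import numpy as np

# Discretize -(a/2) F''(k) - S(k)/(2 ln2 sigma^2) F(k) = lam F(k) on a grid.
a, sigma2 = 0.1, 1.0
k = np.linspace(-8.0, 8.0, 801)
dk = k[1] - k[0]
S = 1.0 / (1.0 + k**2)                    # illustrative signal spectrum
V = -S / (2.0 * np.log(2) * sigma2)       # attractive "potential"

main = a / dk**2 + V                      # tridiagonal kinetic + potential terms
off = -(a / 2.0) / dk**2 * np.ones(len(k) - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

lam, vecs = np.linalg.eigh(H)             # eigenvalues in ascending order
F0 = vecs[:, 0] / np.sqrt(np.sum(vecs[:, 0] ** 2) * dk)   # L2-normalized filter
F0 *= np.sign(F0[len(k) // 2])                            # fix overall sign
```

The higher bound states (with nodes) are the additional equally good filters mentioned above; in two dimensions they carry the angular structure discussed in the next section.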
3 A Near-Fatal Flaw and its Resolution
If the signal spectra S(k) are isotropic, so that features appear at all orientations
across the visual field, all of the bound states of the corresponding Schrödinger
equation are eigenstates of angular momentum. But real visual neurons have receptive fields with a single optimal orientation, not the multiple optima expected if the
filters F correspond to angular momentum eigenstates. One would like to combine
different angular momentum eigenfunctions to generate filters which respond to localized regions of orientation. In general, however, the different angular momenta
are associated with different energy eigenvalues and hence it is impossible to form
linear combinations which are still solutions of the variational problem.
We can construct receptive fields which are localized in orientation if there is some
extra symmetry or accidental degeneracy which allows the existence of equal-energy
states with different angular momenta. If we believe that real receptive fields are the
solutions of our variational problem, it must be the case that the signal spectrum
S(k) for natural images possesses such a symmetry.
Recently Field [4] has measured the power spectra of several natural scenes. As
one might expect from discussions of "fractal" landscapes, these spectra are scale
invariant, with S(k) = A/|k|^2. It is easy to see that the corresponding quantum
mechanics problem is a bit sick - the energy is not bounded from below. In the
present context, however, this sickness is a saving grace. The equivalent Schrödinger
equation is

-\frac{a}{2}\, \nabla_k^2 F(k) - \frac{A}{2\ln 2\;\sigma^2\, |k|^2}\, F(k) = \lambda F(k).   (9)

If we take q = \sqrt{2|\lambda|/a}\; k, then for bound states (λ < 0) we find

\nabla_q^2 F(q) + \frac{B}{|q|^2}\, F(q) = F(q),   (10)

with B = A/(\ln 2\;\sigma^2). Thus we see that the energy λ can be scaled away; there
is no quantization condition. We are free to choose any value of λ, but for each
such value there are several angular momentum states. Since they correspond to
the same energy, superpositions of these states are also solutions of the original
variational problem. The scale invariance of natural images is the symmetry we
need in order to form localized receptive fields.
4 Predicting Receptive Fields
To solve Eq. (9) we find it easier to transform back to real space. The result is

r^2(1 + r^2)\, \frac{\partial^2 F}{\partial r^2} + r(1 + 5r^2)\, \frac{\partial F}{\partial r} + \left[ r^2\!\left(4 + B + \frac{\partial^2}{\partial \phi^2}\right) + \frac{\partial^2}{\partial \phi^2} \right] F = 0,   (11)

where φ is the angular variable and r = \sqrt{a/2|\lambda|}\, |x|. Angular momentum states
F_m ~ e^{imφ} have the asymptotics F_m(r ≪ 1) ~ r^{±m}, F_m(r ≫ 1) ~ r^{λ±(m)}, with
λ±(m) = -2 ± \sqrt{m^2 - B}. We see that for m² < B the solutions are oscillatory functions of r, since λ±(m) has an imaginary part. For m² > B + 4 the solution can diverge
as r becomes large, and in this case we must be careful to choose solutions which
are regular both at the origin and at infinity if we are to maintain the constraint in
Eq. (5). Numerically we find that there are no such solutions; the functions which
behave as r^{+|m|} near the origin diverge at large r if m² > B + 4. We conclude
that for a given value of B, which measures the signal-to-noise ratio, there exists a
finite set of angular momentum states; these states can then be superposed to give
receptive fields with localized angular sensitivity.
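As a consistency check (ours, not in the paper), the quoted exponents follow from the large-r limit of Eq. (11) acting on F = r^λ e^{imφ}:

```latex
r^4 F'' + 5 r^3 F' + r^2\,(4 + B - m^2)\,F = 0
\quad\Longrightarrow\quad
\lambda(\lambda - 1) + 5\lambda + 4 + B - m^2 = 0
\quad\Longrightarrow\quad
(\lambda + 2)^2 = m^2 - B ,
```

so λ±(m) = −2 ± √(m² − B): imaginary (oscillatory solutions) for m² < B, and real with λ₊(m) > 0 (a divergent branch) exactly when m² > B + 4, as stated in the text.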
In fact all linear combinations of m-states are solutions to the variational problem
at low signal to noise ratio, so the precise form of orientation tuning is not determined. If we continue our expansion of the information capacity in powers of the
signal-to-noise ratio we find terms which will select different linear combinations of
the m-states and hence determine the precise orientation selectivity. These higher-order terms, however, involve multi-point correlation functions of the image. At the
lowest SNR, corresponding to the first term in our expansion, we are sensitive only
to the two-point function (power spectrum) of the signal ensemble, which carries
no information about angular correlations. A truly predictive theory of orientation
tuning must thus rest on measurements of angular correlations in natural images;
as far as we know such measurements have not been reported.
Even without knowing the details of the higher-order correlation functions we can
make some progress. To begin, it is clear that at very small B orientation selectivity
is impossible since there are only m = 0 solutions. This is the limit of very low SNR,
or equivalently very strong constraints on the locality of the receptive field (large
Q above). The circularly symmetric receptive fields that one finds in this limit are
center-surround in structure, with the surround becoming more prominent as the
signal-to-noise ratio is increased. These predictions are in qualitative accord with
what one sees in the mammalian retina, which is indeed extremely local- receptive
field centers for foveal ganglion cells may consist of just a single cone photoreceptor.
As one proceeds to the the cortex the constraints of locality are weaker and orientation selectivity becomes possible. Similarly in lower vertebrates there is a greater
range of lateral connectivity in the retina itself, and hence orientation selectivity is
possible at the level of the ganglion cell.
To proceed further we have explored the types of receptive fields which can be
produced by superposing m-states at a given value of B. We consider for the
moment only even-symmetric receptive fields, so we add all terms in phase. One
such receptive field is shown in Fig. 1, together with experimental results for a
simple cell in the primary visual cortex of monkeys [5]. It is clear that we can obtain
reasonable correspondence between theory and experiment. Obviously we have
made no detailed "fit" to the data, and indeed we are just beginning a quantitative
comparison of theory with experiment. Much of the arbitrariness in the construction
of Fig. 1 will be removed once we have control over the higher terms in the SNR
expansion, as described above.
It is interesting that, at low SNR, there is no preferred value for the length scale.
Thus the optimal system may choose to sample images at many different scales and
at different scales in different regions of the image. The experimental variability in
spatial frequency tuning from cell to cell may thus not represent biological sloppiness
but rather the fact that any peak spatial frequency constitutes an optimal filter in
the sense defined here.
Figure 1: Model (left) and monkey (right) receptive fields. Monkey RF is from
reference [5].
5 Discussion
The selectivity of cortical neurons for orientation and spatial frequency are among
the best known facts about the visual system. Not surprisingly there have been
many attempts to derive these features from some theoretical perspective. One approach is to argue that such selectivity provides a natural preprocessing stage for
more complex computations. A very different view is that the observed organization
of the cortex is a consequence of developmental rules, but this approach does not
address the computational function which may be expressed by cortical organization. Finally several authors have considered the possibility that cortical receptive
fields are in some sense optimal, so that they can be predicted from a variational
principle [6, 7, 8]. Clearly we have adopted this hypothesis; the issue is whether
one can make a compelling argument for any particular variational principle.
Optimization of information capacity seems like a very natural principle to apply in
the early stages of visual processing. As we have emphasized, this principle must be
supplemented by a knowledge of hardware constraints and of image statistics. Different authors have made different choices. especially for the constraints. Different
formulations, however, may be related - optimization of information transfer at
some fixed "gain" of the receptive fields is equivalent, through a Legendre transformation, to minimization of the redundancy at fixed information transfer, a problem
discussed by Atick and Redlich [8]. This latter approach has given very successful
predictions for the structure of ganglion cell receptive fields in cat and monkey,
although there are still some arbitrary parameters to be determined. It is our hope
that these ideas of receptive fields as solutions to variational problems can be given
more detailed tests in the lower vertebrate retinas, where it is possible to characterize signals and noise at each of three layers of processing circuitry.
As far as we know our work is unique in that the statistics of natural images are
an essential component of the theory. Indeed the scale invariance of natural images plays a decisive role in our prediction of orientation selectivity; other classes
of signals would result in qualitatively different receptive fields. We find this direct
linkage between the properties of natural images and the architecture of natural
computing systems to be extremely attractive. The semi-quantitative correspondence between predicted and observed receptive fields (Fig. 1) suggests that we
have the kernel of a truly predictive theory for visual processing.
Acknowledgements
We thank K. DeValois, R. DeValois, J. D. Jackson, and N. Socci for helpful discussions. Work at Berkeley was supported in part by the National Science Foundation
through a Presidential Young Investigator Award (to WB), supplemented by funds
from Sun Microsystems and Cray Research, and by the Fannie and John Hertz
Foundation through a graduate fellowship (to DLR). Work in Santa Barbara was
supported in part by the NSF through Grant No. PHY82-17853, supplemented by
funds from NASA.
References
[1] W. Bialek. In E. Jen, editor, 1989 Lectures in Complex Systems, SFI Studies
in the Sciences of Complexity, Lect. Vol. II, pages 513-595. Addison-Wesley,
Menlo Park, CA, 1990.
[2] H. B. Barlow. In W. A. Rosenblith, editor, Sensory Communication, page 217.
MIT Press, Cambridge, MA, 1961.
[3] C. E. Shannon and W. Weaver. The Mathematical Theory of Communication.
University of Illinois Press, Urbana, IL, 1949.
[4] D. Field. J. Opt. Soc. Am., 4:2379, 1987.
[5] M. A. Webster and R. L. DeValois. J. Opt. Soc. Am., 2:1124-1132, 1985.
[6] B. Sakitt and H. B. Barlow. Biol. Cybern., 43:97-108, 1982.
[7] R. Linsker. In D. Touretzky, editor, Advances in Neural Information Processing
1, page 186. Morgan Kaufmann, San Mateo, CA, 1989.
[8] J. J. Atick and A. N. Redlich. Neural Computation, 2:308, 1990.
Kernel Choice and Classifiability for RKHS
Embeddings of Probability Distributions
Bharath K. Sriperumbudur
Department of ECE
UC San Diego, La Jolla, USA
[email protected]
Kenji Fukumizu
The Institute of Statistical Mathematics
Tokyo, Japan
[email protected]
Arthur Gretton
Carnegie Mellon University
MPI for Biological Cybernetics
[email protected]
Gert R. G. Lanckriet
Department of ECE
UC San Diego, La Jolla, USA
[email protected]
Bernhard Schölkopf
MPI for Biological Cybernetics
Tübingen, Germany
[email protected]
Abstract
Embeddings of probability measures into reproducing kernel Hilbert spaces have
been proposed as a straightforward and practical means of representing and comparing probabilities. In particular, the distance between embeddings (the maximum mean discrepancy, or MMD) has several key advantages over many classical
metrics on distributions, namely easy computability, fast convergence and low bias
of finite sample estimates. An important requirement of the embedding RKHS is
that it be characteristic: in this case, the MMD between two distributions is zero
if and only if the distributions coincide. Three new results on the MMD are introduced in the present study. First, it is established that MMD corresponds to the
optimal risk of a kernel classifier, thus forming a natural link between the distance
between distributions and their ease of classification. An important consequence
is that a kernel must be characteristic to guarantee classifiability between distributions in the RKHS. Second, the class of characteristic kernels is broadened to
incorporate all strictly positive definite kernels: these include non-translation invariant kernels and kernels on non-compact domains. Third, a generalization of
the MMD is proposed for families of kernels, as the supremum over MMDs on
a class of kernels (for instance the Gaussian kernels with different bandwidths).
This extension is necessary to obtain a single distance measure if a large selection
or class of characteristic kernels is potentially appropriate. This generalization is
reasonable, given that it corresponds to the problem of learning the kernel by minimizing the risk of the corresponding kernel classifier. The generalized MMD is
shown to have consistent finite sample estimates, and its performance is demonstrated on a homogeneity testing example.
1 Introduction
Kernel methods are broadly established as a useful way of constructing nonlinear algorithms
from linear ones, by embedding points into higher dimensional reproducing kernel Hilbert spaces
(RKHSs) [9]. A generalization of this idea is to embed probability distributions into RKHSs, giving
1
us a linear method for dealing with higher order statistics [6, 12, 14]. More specifically, suppose
we are given the set P of all Borel probability measures defined on the topological space M , and
the RKHS (H, k) of functions on M with k as its reproducing kernel (r.k.). For P ∈ P, denote Pk := ∫_M k(·, x) dP(x). If k is measurable and bounded, then we may define the embedding of P in H as Pk ∈ H. The RKHS distance between two such mappings associated with P, Q ∈ P is called the maximum mean discrepancy (MMD) [6, 14], and is written

γ_k(P, Q) = ‖Pk − Qk‖_H.   (1)

We say that k is characteristic [4, 14] if the mapping P ↦ Pk is injective, in which case (1) is zero if and only if P = Q, i.e., γ_k is a metric on P. An immediate application of the MMD is to
problems of comparing distributions based on finite samples: examples include tests of homogeneity
[6], independence [7], and conditional independence [4]. In this application domain, the question of
whether k is characteristic is key: without this property, the algorithms can fail through inability to
distinguish between particular distributions.
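As a concrete illustration (ours, not part of the paper), the biased empirical estimate of the MMD in (1) for a Gaussian kernel can be computed directly from two samples; the function name `mmd_gaussian` and the bandwidth parameter `gamma` below are our own choices.

```python
import numpy as np

def mmd_gaussian(X, Y, gamma=1.0):
    """Biased empirical MMD estimate for the Gaussian kernel
    k(x, y) = exp(-gamma * ||x - y||^2): the RKHS distance between
    the mean embeddings of the two empirical measures."""
    def gram(A, B):
        # pairwise squared distances, then the kernel matrix
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)

    mmd2 = gram(X, X).mean() + gram(Y, Y).mean() - 2 * gram(X, Y).mean()
    return np.sqrt(max(mmd2, 0.0))

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))
Y = rng.normal(3.0, 1.0, size=(200, 1))
print(mmd_gaussian(X, X))  # identical samples give exactly 0
print(mmd_gaussian(X, Y))  # well-separated samples give a large value
```

The biased (V-statistic) form is used here for brevity; the unbiased U-statistic version simply omits the diagonal terms of the within-sample Gram matrices.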
Characteristic kernels are important in binary classification: the problem of distinguishing distributions is strongly related to binary classification, since one would expect easily distinguishable distributions to be easily classifiable.1 The link between these two problems is especially direct in the case of the MMD: in Section 2, we show that γ_k is the negative of the optimal risk (corresponding to a linear loss function) associated with the Parzen window classifier [9, 11] (also called the kernel classification rule [3, Chapter 10]), where the Parzen window turns out to be k. We also show that γ_k is an upper bound on the margin of a hard-margin support vector machine (SVM). The importance of using characteristic RKHSs is further underlined by this link: if the property does not hold,
then there exist distributions that are unclassifiable in the RKHS H. We further strengthen this by
showing that characteristic kernels are necessary (and sufficient under certain conditions) to achieve
Bayes risk in the kernel-based classification algorithms.
Characterization of characteristic kernels: Given the centrality of the characteristic property to
both RKHS classification and RKHS distribution testing, we should take particular care in establishing which kernels satisfy this requirement. Early results in this direction include [6], where k is
shown to be characteristic on compact M if it is universal in the sense of Steinwart [15, Definition
4]; and [4, 5], which address the case of non-compact M , and show that k is characteristic if and
only if H + R is dense in the Banach space of p-power (p ? 1) integrable functions. The conditions
in both these studies can be difficult to check and interpret, however, and the restriction of the first
to compact M is limiting. In the case of translation invariant kernels, [14] proved the kernel to
be characteristic if and only if the support of the Fourier transform of k is the entire Rd , which is
a much easier condition to verify. Similar sufficient conditions are obtained by [5] for translation
invariant kernels on groups and semi-groups. In Section 3, we expand the class of characteristic
kernels to include kernels that may or may not be translation invariant, with the introduction of a
novel criterion: strictly positive definite kernels (see Definition 3) on M are characteristic.
Choice of characteristic kernels: In expanding the families of allowable characteristic kernels, we
have so far neglected the question of which characteristic kernel to choose. A practitioner asking by
how much two samples differ does not want to receive a blizzard of answers for every conceivable
kernel and bandwidth setting, but a single measure that satisfies some "reasonable" notion of distance across the family of kernels considered. Thus, in Section 4, we propose a generalization of the
MMD, yielding a new distance measure between P and Q defined as
γ(P, Q) = sup{γ_k(P, Q) : k ∈ K} = sup{‖Pk − Qk‖_H : k ∈ K},   (2)
For example, K can be the family of Gaussian kernels on Rd indexed by the bandwidth parameter.
This distance measure is very natural in the light of our results on binary classification (in Section 2):
most directly, this corresponds to the problem of learning the kernel by minimizing the risk of the
associated Parzen-based classifier. As a less direct justification, we also increase the upper bound on
the margin allowed for a hard margin SVM between the samples. To apply the generalized MMD
in practice, we must ensure its empirical estimator is consistent. In our main result of Section 4,
we provide an empirical estimate of γ(P, Q) based on finite samples, and show that many popular kernels like the Gaussian, Laplacian, and the entire Matérn class on ℝ^d yield consistent estimates
1
There is a subtlety here, since unlike the problem of testing for differences in distributions, classification
suffers from slow learning rates. See [3, Chapter 7] for details.
of γ(P, Q). The proof is based on bounding the Rademacher chaos complexity of K, which can be
understood as the U-process equivalent of Rademacher complexity [2].
Finally, in Section 5, we provide a simple experimental demonstration that the generalized MMD
can be applied in practice to the problem of homogeneity testing. Specifically, we show that when
two distributions differ on particular length scales, the kernel selected by the generalized MMD
is appropriate to this difference, and the resulting hypothesis test outperforms the heuristic kernel
choice employed in earlier studies [6]. The proofs of the results in Sections 2-4 are provided in the
supplementary material.
2 Characteristic Kernels and Binary Classification
One of the most important applications of the maximum mean discrepancy is in nonparametric hypothesis testing [6, 7, 4], where the characteristic property of k is required to distinguish between
probability measures. In the following, we show how MMD naturally appears in binary classification, with reference to the Parzen window classifier and hard-margin SVM. This motivates the need
for characteristic k to guarantee that classes arising from different distributions can be classified by
kernel-based algorithms.
To this end, let us consider the binary classification problem with X being an M-valued random variable, Y being a {−1, +1}-valued random variable, and the product space M × {−1, +1} being endowed with an induced Borel probability measure μ. A discriminant function f is a real-valued measurable function on M whose sign is used to make a classification decision. Given a loss function L : {−1, +1} × ℝ → ℝ, the goal is to choose an f that minimizes the risk associated with L, with the optimal L-risk being defined as

R^L_{F⋆} = inf_{f∈F⋆} ∫ L(y, f(x)) dμ(x, y) = inf_{f∈F⋆} { ε ∫_M L₁(f) dP + (1 − ε) ∫_M L₋₁(f) dQ },   (3)

where F⋆ is the set of all measurable functions on M, L₁(α) := L(1, α), L₋₁(α) := L(−1, α), P(X) := μ(X | Y = +1), Q(X) := μ(X | Y = −1), and ε := μ(M, Y = +1). Here, P and Q represent the class-conditional distributions and ε is the prior probability of class +1. Now, we present the result that relates γ_k to the optimal risk associated with the Parzen window classifier.
Theorem 1 (γ_k and Parzen classification). Let L₁(α) = −α/ε and L₋₁(α) = α/(1 − ε). Then, γ_k(P, Q) = −R^L_{F_k}, where F_k = {f : ‖f‖_H ≤ 1} and H is an RKHS with a measurable and bounded k.
Suppose {(X_i, Y_i)}_{i=1}^N, X_i ∈ M, Y_i ∈ {−1, +1} ∀ i, is a training sample drawn i.i.d. from μ, and let m = |{i : Y_i = 1}|. If f̃ ∈ F_k is an empirical minimizer of (3) (with F⋆ replaced by F_k), then

sign(f̃(x)) = 1 if (1/m) Σ_{Y_i=1} k(x, X_i) > (1/(N − m)) Σ_{Y_i=−1} k(x, X_i), and −1 otherwise,   (4)

which is the Parzen window classifier.
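A minimal sketch of the decision rule (4) for a Gaussian kernel on ℝ; the names and toy data below are ours:

```python
import numpy as np

def parzen_sign(x, X, Y, gamma=1.0):
    """Decision rule (4): assign +1 iff the average kernel similarity of x
    to the positive training points exceeds that to the negative ones."""
    k = np.exp(-gamma * (x - X) ** 2)   # k(x, X_i) for every training point
    return 1 if k[Y == 1].mean() > k[Y == -1].mean() else -1

X = np.array([-2.0, -1.5, -1.0, 1.0, 1.5, 2.0])   # toy 1-D training sample
Y = np.array([-1, -1, -1, 1, 1, 1])
print(parzen_sign(1.2, X, Y))    # near the positive cluster -> 1
print(parzen_sign(-1.2, X, Y))   # near the negative cluster -> -1
```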
Theorem 1 shows that γ_k is the negative of the optimal L-risk (where L is the linear loss defined in Theorem 1) associated with the Parzen window classifier. Therefore, if k is not characteristic, which means γ_k(P, Q) = 0 for some P ≠ Q, then R^L_{F_k} = 0, i.e., the risk is maximal (note that since 0 ≤ γ_k(P, Q) = −R^L_{F_k}, the maximal risk is zero). In other words, if k is characteristic, then the maximal risk is attained only when P = Q. This motivates the importance of characteristic kernels in binary classification. In the following, we provide another result offering a similar motivation, wherein we relate γ_k to the margin of a hard-margin SVM.
Theorem 2 (γ_k and hard-margin SVM). Suppose {(X_i, Y_i)}_{i=1}^N, X_i ∈ M, Y_i ∈ {−1, +1} ∀ i, is a training sample drawn i.i.d. from μ. Assuming the training sample is separable, let f_svm be the solution to the program inf{‖f‖_H : Y_i f(X_i) ≥ 1, ∀ i}, where H is an RKHS with a measurable and bounded k. If k is characteristic, then

1/‖f_svm‖_H ≤ γ_k(P_m, Q_n)/2,   (5)

where P_m := (1/m) Σ_{Y_i=1} δ_{X_i}, Q_n := (1/n) Σ_{Y_i=−1} δ_{X_i}, m = |{i : Y_i = 1}|, and n = N − m. δ_x represents the Dirac measure at x.
Theorem 2 provides a bound on the margin of the hard-margin SVM in terms of MMD: (5) shows that a smaller MMD between P_m and Q_n enforces a smaller margin (i.e., a less smooth classifier f_svm, where smoothness is measured as ‖f_svm‖_H). We can observe that the bound in (5) may be loose if the number of support vectors is small. Suppose k is not characteristic; then γ_k(P_m, Q_n) can be zero for P_m ≠ Q_n, and therefore the margin is zero, which means even unlike distributions can become inseparable in this feature representation.
Another justification for using characteristic kernels in kernel-based classification algorithms can be provided by studying the conditions on H for which the Bayes risk is realized for all μ. Steinwart and Christmann [16, Corollary 5.37] have shown that under certain conditions on L, the Bayes risk is achieved for all μ if and only if H is dense in L_p(M, η) for all η, where η = εP + (1 − ε)Q. Here, L_p(M, η) represents the Banach space of p-power integrable functions, where p ∈ [1, ∞) depends on the loss function L. Denseness of H in L_p(M, η) implies H + ℝ is dense in L_p(M, η), which therefore yields that k is characteristic [4, 5]. On the other hand, if constant functions are included in H, then it is easy to show that the characteristic property of k is also sufficient to achieve the Bayes risk. As an example, it can be shown that characteristic kernels are necessary (and sufficient if constant functions are in H) for SVMs to achieve the Bayes risk [16, Example 5.40]. Therefore, the characteristic property of k is fundamental in kernel-based classification algorithms. Having shown how characteristic kernels play a role in kernel-based classification, in the following section we provide a novel characterization for them.
3 Novel Characterization for Characteristic Kernels
A positive definite (pd) kernel k is said to be characteristic to P if and only if γ_k(P, Q) = 0 ⇔ P = Q, ∀ P, Q ∈ P. The following result provides a novel characterization for characteristic kernels, which shows that strictly pd kernels are characteristic to P. An advantage of this characterization is that it holds for an arbitrary topological space M, unlike the earlier characterizations where a group structure on M is assumed [14, 5]. First, we define strictly pd kernels as follows.
Definition 3 (Strictly positive definite kernels). Let M be a topological space. A measurable and bounded kernel k is said to be strictly positive definite if and only if ∫_M ∫_M k(x, y) dμ(x) dμ(y) > 0 for all finite non-zero signed Borel measures μ defined on M.
Note that the above definition is not equivalent to the usual definition of strictly pd kernels, which involves finite sums [16, Definition 4.15]. The above definition is a generalization of integrally strictly positive definite functions [17, Section 6]: ∫∫ k(x, y) f(x) f(y) dx dy > 0 for all f ∈ L₂(ℝ^d), which is the strict positive definiteness of the integral operator given by the kernel. Definition 3 is stronger than the finite-sum definition, as [16, Theorem 4.62] gives a kernel that is strictly pd in the finite-sum sense but not in the integral sense.
Theorem 4 (Strictly pd kernels are characteristic). If k is strictly positive definite on M , then k is
characteristic to P.
The proof idea is to derive necessary and sufficient conditions for a kernel not to be characteristic.
We show that choosing k to be strictly pd violates these conditions, and k is therefore characteristic to P. Examples of strictly pd kernels on ℝ^d include exp(−σ‖x − y‖²₂), σ > 0; exp(−σ‖x − y‖₁), σ > 0; (c² + ‖x − y‖²₂)^{−β}, β > 0, c > 0; B_{2l+1}-splines, etc. Note that k̃(x, y) = f(x)k(x, y)f(y) is a strictly pd kernel if k is strictly pd, where f : M → ℝ is a bounded continuous function. Therefore, translation-variant strictly pd kernels can be obtained by choosing k to be a translation-invariant strictly pd kernel. A simple example of a translation-variant kernel that is strictly pd on compact sets of ℝ^d is k̃(x, y) = exp(σxᵀy), σ > 0, where we have chosen f(·) = exp(σ‖·‖²₂/2) and k(x, y) = exp(−σ‖x − y‖²₂/2), σ > 0. Therefore, k̃ is characteristic on compact sets of ℝ^d, which is the same result that follows from the universality of k̃ [15, Section 3, Example 1].
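As a small numerical sanity check (ours, not from the paper), restricting the measure μ in Definition 3 to signed combinations of Diracs shows that a strictly pd kernel must have nonsingular Gram matrices on distinct points; for the Gaussian kernel this is easy to verify:

```python
import numpy as np

# Finite-sample consequence of strict positive definiteness (Definition 3):
# for mu a signed combination of Diracs at distinct points, the quadratic
# form is alpha^T K alpha > 0, so the Gram matrix K has positive eigenvalues.
x = np.array([-2.5, -1.5, -0.5, 0.5, 1.5, 2.5])      # distinct points in R
K = np.exp(-(x[:, None] - x[None, :]) ** 2)           # k(x, y) = exp(-(x-y)^2)
eigs = np.linalg.eigvalsh(K)
print(eigs.min() > 0)   # -> True: no nonzero signed combination is annihilated
```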
The following result in [10], which is based on the usual definition of strictly pd kernels, can be
obtained as a corollary to Theorem 4.
Corollary 5 ([10]). Let X = {x_i}_{i=1}^m ⊂ M, Y = {y_j}_{j=1}^n ⊂ M, and assume that x_i ≠ x_j, y_i ≠ y_j, ∀ i, j. Suppose k is strictly positive definite. Then Σ_{i=1}^m α_i k(·, x_i) = Σ_{j=1}^n β_j k(·, y_j) for some α_i, β_j ∈ ℝ\{0} ⇒ X = Y.

Suppose we choose α_i = 1/m, ∀ i, and β_j = 1/n, ∀ j, in Corollary 5. Then Σ_{i=1}^m α_i k(·, x_i) and Σ_{j=1}^n β_j k(·, y_j) represent the mean functions in H. Note that the Parzen classifier in (4) is a mean classifier (one that separates the mean functions) in H, i.e., sign(⟨k(·, x), w⟩_H), where w = (1/m) Σ_{i=1}^m k(·, x_i) − (1/n) Σ_{i=1}^n k(·, y_i). Suppose k is strictly pd (more generally, suppose k is characteristic). Then, by Corollary 5, the normal vector w to the hyperplane in H passing through the origin is zero, i.e., the mean functions coincide (and are therefore not classifiable), if and only if X = Y.
4 Generalizing the MMD for Classes of Characteristic Kernels
The discussion so far has been related to the characteristic property of k that makes γ_k a metric on P. We have seen that this characteristic property is of prime importance both in distribution testing, and to ensure classifiability of dissimilar distributions in the RKHS. We have not yet addressed how to choose among a selection/family of characteristic kernels, given a particular pair of distributions we wish to discriminate between. We introduce one approach to this problem in the present section.
Let M = ℝ^d and k_σ(x, y) = exp(−σ‖x − y‖²₂), σ ∈ ℝ₊, where σ represents the bandwidth parameter. {k_σ : σ ∈ ℝ₊} is the family of Gaussian kernels and {γ_{k_σ} : σ ∈ ℝ₊} is the family of MMDs indexed by the kernel parameter σ. Note that k_σ is characteristic for any σ ∈ ℝ₊₊ and therefore γ_{k_σ} is a metric on P for any σ ∈ ℝ₊₊. However, in practice, one would prefer a single number that defines the distance between P and Q. The question therefore to be addressed is how to choose an appropriate σ. The choice of σ has important implications on the statistical aspect of γ_{k_σ}. Note that as σ → 0, k_σ → 1 and as σ → ∞, k_σ → 0 a.e., which means γ_{k_σ}(P, Q) → 0 as σ → 0 or σ → ∞ for all P, Q ∈ P (this behavior is also exhibited by k_σ(x, y) = exp(−σ‖x − y‖₁) and k_σ(x, y) = σ²/(σ² + ‖x − y‖²₂), which are also characteristic). This means choosing a sufficiently small or sufficiently large σ (depending on P and Q) makes γ_{k_σ}(P, Q) arbitrarily small. Therefore, σ has to be chosen appropriately in applications to effectively distinguish between P and Q. Presently, the applications involving MMD set σ heuristically [6, 7].
To generalize the MMD to families of kernels, we propose the following modification to γ_k, which yields a pseudometric on P,

γ(P, Q) = sup{γ_k(P, Q) : k ∈ K} = sup{‖Pk − Qk‖_H : k ∈ K}.   (6)

Note that γ is the maximal RKHS distance between P and Q over a family K of positive definite kernels. It is easy to check that if any k ∈ K is characteristic, then γ is a metric on P. Examples for K include: K_g := {e^{−σ‖x−y‖²₂}, x, y ∈ ℝ^d : σ ∈ ℝ₊}; K_l := {e^{−σ‖x−y‖₁}, x, y ∈ ℝ^d : σ ∈ ℝ₊}; K_ψ := {e^{−σψ(x,y)}, x, y ∈ M : σ ∈ ℝ₊}, where ψ : M × M → ℝ is a negative definite kernel; K_rbf := {∫₀^∞ e^{−σ‖x−y‖²₂} dμ_λ(σ), x, y ∈ ℝ^d, μ_λ ∈ M⁺ : λ ∈ Λ ⊂ ℝ^d}, where M⁺ is the set of all finite nonnegative Borel measures μ_λ on ℝ₊ that are not concentrated at zero, etc.
The proposal of γ(P, Q) in (6) can be motivated by the connection that we have established in Section 2 between γ_k and the Parzen window classifier. Since the Parzen window classifier depends on the kernel k, one can propose to learn the kernel as in support vector machines [8], wherein the kernel is chosen such that R^L_{F_k} in Theorem 1 is minimized over k ∈ K, i.e., inf_{k∈K} R^L_{F_k} = −sup_{k∈K} γ_k(P, Q) = −γ(P, Q). A similar motivation for γ can be provided based on (5), as learning the kernel in a hard-margin SVM by maximizing its margin.
At this point, we briefly discuss the issue of normalized vs. unnormalized kernel families K in (6). We say a translation-invariant kernel k on ℝ^d is normalized if ∫_{ℝ^d} ψ(y) dy = c (some positive constant independent of the kernel parameter), where k(x, y) = ψ(x − y). K is a normalized kernel family if every kernel in K is normalized; if K is not normalized, we say it is unnormalized. For example, it is easy to see that K_g and K_l are unnormalized kernel families. Let us consider the normalized Gaussian family, K_ng = {(σπ)^{−d/2} e^{−‖x−y‖²₂/σ}, x, y ∈ ℝ^d : σ ∈ [σ₀, ∞)}. It can be shown that for any k_σ, k_τ ∈ K_ng, 0 < σ < τ < ∞, we have γ_{k_τ}(P, Q) ≤ γ_{k_σ}(P, Q), which means γ(P, Q) = γ_{k_{σ₀}}(P, Q). Therefore, the generalized MMD reduces to a single kernel MMD. A similar result also holds for the normalized inverse-quadratic kernel family, {√(2σ²/π) (σ² + ‖x − y‖²₂)^{−1}, x, y ∈ ℝ : σ ∈ [σ₀, ∞)}. These examples show that the generalized MMD definition is usually not very useful if K is a normalized kernel family. In addition, σ₀ should be chosen beforehand, which is equivalent to heuristically setting the kernel parameter in γ_k. Note that σ₀ cannot be zero, because in the limiting case of σ → 0 the kernels approach a Dirac distribution, which means the limiting kernel is not bounded and therefore the definition of MMD in (1) does not hold. So, in this work, we consider unnormalized kernel families to render the definition of generalized MMD in (6) useful.
To use γ in statistical applications where P and Q are known only through i.i.d. samples {X_i}_{i=1}^m and {Y_j}_{j=1}^n respectively, we require its estimator γ(P_m, Q_n) to be consistent, where P_m and Q_n represent the empirical measures based on {X_i}_{i=1}^m and {Y_j}_{j=1}^n. For k measurable and bounded, [6, 12] have shown that γ_k(P_m, Q_n) is a √(mn/(m+n))-consistent estimator of γ_k(P, Q). The statistical consistency of γ(P_m, Q_n) is established in the following theorem, which uses tools from U-process theory [2, Chapters 3, 5]. We begin with the following definition.
Definition 6 (Rademacher chaos). Let G be a class of functions on M × M and let {ρ_i}_{i=1}^n be independent Rademacher random variables, i.e., Pr(ρ_i = 1) = Pr(ρ_i = −1) = 1/2. The homogeneous Rademacher chaos process of order two with respect to {ρ_i}_{i=1}^n is defined as {n^{−1} Σ_{i<j} ρ_i ρ_j g(x_i, x_j) : g ∈ G} for some {x_i}_{i=1}^n ⊂ M. The Rademacher chaos complexity over G is defined as

U_n(G; {x_i}_{i=1}^n) := E_ρ sup_{g∈G} | (1/n) Σ_{i<j} ρ_i ρ_j g(x_i, x_j) |.   (7)
We now provide the main result of the present section.
Theorem 7 (Consistency of γ(P_m, Q_n)). Let every k ∈ K be measurable and bounded with ν := sup_{k∈K, x∈M} k(x, x) < ∞. Then, with probability at least 1 − δ, |γ(P_m, Q_n) − γ(P, Q)| ≤ A, where

A = √(16 U_m(K; {X_i})/m) + √(16 U_n(K; {Y_i})/n) + (√(8ν) + 36ν √(log(4/δ))) √((m + n)/(mn)).   (8)

From (8), it is clear that if U_m(K; {X_i}) = O_P(1) and U_n(K; {Y_i}) = O_Q(1), then γ(P_m, Q_n) → γ(P, Q) a.s. The following result provides a bound on U_m(K; {X_i}) in terms of the entropy integral.
Lemma 8 (Entropy bound). For any K as in Theorem 7 with 0 ∈ K, there exists a universal constant C such that

U_m(K; {X_i}_{i=1}^m) ≤ C ∫₀^∞ log N(K, D, ε) dε,   (9)

where D(k₁, k₂) = [ (1/m) Σ_{i<j} (k₁(X_i, X_j) − k₂(X_i, X_j))² ]^{1/2}, and N(K, D, ε) represents the ε-covering number of K with respect to the metric D.
Assuming K to be a VC-subgraph class, the following result, as a corollary to Lemma 8, provides an estimate of U_m(K; {X_i}_{i=1}^m). Before presenting the result, we first give the definition of a VC-subgraph class.
Definition 9 (VC-subgraph class). The subgraph of a function g : M → ℝ is the subset of M × ℝ given by {(x, t) : t < g(x)}. A collection G of measurable functions on a sample space is called a VC-subgraph class if the collection of all subgraphs of the functions in G forms a VC-class of sets (in M × ℝ).
The VC-index (also called the VC-dimension) of a VC-subgraph class G is the same as the pseudodimension of G. See [1, Definition 11.1] for details.
Corollary 10 (U_m(K; {X_i}) for VC-subgraph K). Suppose K is a VC-subgraph class with V(K) being the VC-index. Assume K satisfies the conditions in Theorem 7 and 0 ∈ K. Then

U_m(K; {X_i}) ≤ Cν log(C₁ V(K) (16e⁹)^{V(K)}),   (10)

for some universal constants C and C₁.
Using (10) in (8), we have |γ(P_m, Q_n) − γ(P, Q)| = O_{P,Q}(√((m+n)/mn)), and by the Borel–Cantelli lemma, |γ(P_m, Q_n) − γ(P, Q)| → 0 a.s. Now, the question reduces to which of the kernel classes K have V(K) < ∞. [18, Lemma 12] showed that V(K_g) = 1 (also see [19]) and U_m(K_rbf) ≤ C₂ U_m(K_g), where C₂ < ∞. It can be shown that V(K_ψ) = 1 and V(K_l) = 1. All these classes satisfy the conditions of Theorem 7 and Corollary 10 and therefore provide consistent estimates of γ(P, Q) for any P, Q ∈ P. Examples of kernels on ℝ^d that are covered by these classes include the Gaussian, Laplacian, inverse multiquadratics, Matérn class, etc. Other choices for K that are popular in machine learning are linear combinations of kernels, K_lin := {k_β = Σ_{i=1}^l β_i k_i | k_β is pd, Σ_{i=1}^l β_i = 1} and K_con := {k_β = Σ_{i=1}^l β_i k_i | β_i ≥ 0, Σ_{i=1}^l β_i = 1}. [13, Lemma 7] have shown that V(K_con) ≤ V(K_lin) ≤ l. Therefore, instead of using a class based on a fixed, parameterized kernel, one can also use a finite linear combination of kernels to compute γ.
So far, we have presented the metric property and statistical consistency (of the empirical estimator)
of γ. Now, the question is how to compute γ(P_m, Q_n) in practice. To show this, in the following,
we present two examples.
Example 11. Suppose K = K_g. Then γ(P_m, Q_n) can be written as

γ²(P_m, Q_n) = sup_{σ∈ℝ₊} [ (1/m²) Σ_{i,j=1}^m e^{−σ‖X_i−X_j‖²₂} + (1/n²) Σ_{i,j=1}^n e^{−σ‖Y_i−Y_j‖²₂} − (2/mn) Σ_{i,j=1}^{m,n} e^{−σ‖X_i−Y_j‖²₂} ].   (11)

The optimum σ⋆ can be obtained by solving (11), and γ(P_m, Q_n) = ‖P_m k_{σ⋆} − Q_n k_{σ⋆}‖_{H_{σ⋆}}.
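In practice the supremum in (11) has no closed form; a simple grid-search sketch (ours, not from the paper) evaluates the single-kernel squared MMD on a finite bandwidth grid and takes the maximum:

```python
import numpy as np

def gen_mmd_gaussian(X, Y, lambdas):
    """Grid approximation of (11): the generalized MMD over the Gaussian
    family is the largest single-kernel MMD over the bandwidth grid."""
    def mmd2(lam):
        k = lambda A, B: np.exp(-lam * (A[:, None] - B[None, :]) ** 2)
        return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

    vals = np.array([mmd2(lam) for lam in lambdas])
    best = int(np.argmax(vals))
    return np.sqrt(max(vals[best], 0.0)), lambdas[best]

rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, 500)
Y = rng.normal(0.5, 1.0, 500)        # shifted mean: P differs from Q
grid = np.logspace(-3, 3, 13)        # candidate bandwidth parameters
gamma, lam_star = gen_mmd_gaussian(X, Y, grid)
```

A finite grid only lower-bounds the supremum; a finer grid or a one-dimensional line search over the bandwidth tightens the approximation.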
Example 12. Suppose K = K_con. Then γ(P_m, Q_n) becomes

γ²(P_m, Q_n) = sup_{k∈K_con} ‖P_m k − Q_n k‖²_H = sup_{k∈K_con} ∫∫ k(x, y) d(P_m − Q_n)(x) d(P_m − Q_n)(y)
             = sup{βᵀa : βᵀ1 = 1, β ⪰ 0},   (12)

where we have replaced k by Σ_{i=1}^l β_i k_i. Here β = (β₁, ..., β_l) and (a)_i = ‖P_m k_i − Q_n k_i‖²_{H_i} = (1/m²) Σ_{a,b=1}^m k_i(X_a, X_b) + (1/n²) Σ_{a,b=1}^n k_i(Y_a, Y_b) − (2/mn) Σ_{a,b=1}^{m,n} k_i(X_a, Y_b). It is easy to see that γ²(P_m, Q_n) = max_{1≤i≤l} (a)_i.
Similar examples can be provided for other K, where γ(P_m, Q_n) can be computed by solving a semidefinite program (K = K_lin) or by constrained gradient descent (K = K_l, K_rbf).
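The reduction at the end of Example 12 (a linear objective over the simplex attains its supremum at a vertex) can be checked numerically; the values in `a` below are hypothetical squared single-kernel MMDs, made up for illustration:

```python
import numpy as np

# (a)_i are the squared single-kernel MMDs for k_1, ..., k_l in Example 12.
# sup{beta^T a : beta^T 1 = 1, beta >= 0} is a linear program over the
# simplex, so its optimum sits at a vertex: gamma^2 = max_i (a)_i.
a = np.array([0.12, 0.31, 0.07])                    # hypothetical (a)_i values
rng = np.random.default_rng(0)
betas = rng.dirichlet(np.ones(len(a)), size=1000)   # random simplex points
print(float(a.max()))              # the supremum: 0.31
print(float((betas @ a).max()))    # interior points never exceed it
```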
Finally, while the approach in (6) to generalizing γ_k is our focus in this paper, an alternative Bayesian strategy would be to define a non-negative finite measure λ over K, and to average γ_k over that measure, i.e., β(P, Q) := ∫_K γ_k(P, Q) dλ(k). This also yields a pseudometric on P. That said, β(P, Q) ≤ λ(K) γ(P, Q), ∀ P, Q, which means that if P and Q can be distinguished by β, they can be distinguished by γ, but not vice versa. In this sense, γ is stronger than β. One further complication with the Bayesian approach is in defining a sensible λ over K. Note that γ_{k₀} (the single kernel MMD based on k₀) can be obtained by choosing λ(k) = δ(k − k₀) in β(P, Q).
5 Experiments
In this section, we present a benchmark experiment illustrating that the generalized MMD proposed in Section 4 is preferable to the single kernel MMD with a heuristically set kernel parameter.
The experimental setup is as follows.
Let p = N(0, σ_p²), a normal distribution in ℝ with zero mean and variance σ_p². Let q be the perturbed version of p, given as q(x) = p(x)(1 + sin νx). Here p and q are the densities associated with P and Q respectively. It is easy to see that q differs from p at increasing frequencies with increasing ν. Let k(x, y) = exp(−(x − y)²/σ). Now, the goal is: given random samples drawn i.i.d. from P and Q (with ν fixed), we would like to test H₀ : P = Q vs. H₁ : P ≠ Q. The idea is that as ν increases, it will be harder to distinguish between P and Q for a fixed sample size. Therefore, using this setup, we can verify whether the adaptive bandwidth selection achieved by γ (as the test statistic) helps to distinguish between P and Q at higher ν, compared to γ_k with a heuristic σ. To this end, using γ(P_m, Q_n) and γ_k(P_m, Q_n) (with various σ) as test statistics T_mn, we design a test that returns H₀ if T_mn ≤ c_mn, and H₁ otherwise. The problem therefore reduces to finding c_mn, which is determined as the (1 − α) quantile of the asymptotic distribution of T_mn under H₀; this fixes the type-I error (the probability of rejecting H₀ when it is true) to α. The consistency of this test under γ_k (for any fixed σ) is proved in [6]. A similar result can be shown for γ under some conditions on K. We skip the details here.
In our experiments, we set m = n = 1000 and σ_p² = 10, and draw two sets of independent random samples from Q. The distribution of T_mn is estimated by bootstrapping on these samples (250 bootstrap iterations are performed) and the associated 95th quantile (we choose α = 0.05) is computed. Since the performance of the test is judged by its type-II error (the probability of accepting H₀ when H₁ is true), we draw one random sample each from P and Q and test whether P = Q. This process is repeated 300 times, and estimates of the type-I and type-II errors are obtained for both γ and γ_k. 14 different values for σ are considered: 13 on a logarithmic scale of base 2 with exponents (−3, −2, −1, 0, 1, 3/2, 2, 5/2, 3, 7/2, 4, 5, 6), along with the median distance between samples as one more choice. 5 different choices for ν are considered: (1/2, 3/4, 1, 5/4, 3/2).
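The threshold computation can be sketched as follows; this is a permutation-based variant of the bootstrap described above (the paper resamples from Q instead), and a simple mean-difference statistic stands in for γ. All names here are ours:

```python
import numpy as np

def h0_threshold(X, Y, stat, n_boot=250, alpha=0.05, seed=0):
    """Estimate the (1 - alpha) quantile of the statistic under H0: P = Q
    by recomputing it on random splits of the pooled sample."""
    rng = np.random.default_rng(seed)
    pooled = np.concatenate([X, Y])
    vals = []
    for _ in range(n_boot):
        perm = rng.permutation(pooled)           # a draw consistent with H0
        vals.append(stat(perm[:len(X)], perm[len(X):]))
    return np.quantile(vals, 1.0 - alpha)

stat = lambda A, B: abs(A.mean() - B.mean())     # stand-in test statistic
rng = np.random.default_rng(1)
X = rng.normal(0.0, 1.0, 100)
Y = rng.normal(5.0, 1.0, 100)                    # clearly different from X
c = h0_threshold(X, Y, stat)
print(stat(X, Y) > c)   # -> True: the test rejects H0
```

Replacing `stat` with an MMD estimate (single-kernel or generalized) recovers the structure of the test used in the experiments.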
Figure 1: (a) Type-I and type-II errors (in %) for γ for varying ν. (b, c) Type-I and type-II errors (in %) for γ_k (with different σ) for varying ν. The dotted line in (c) corresponds to the median heuristic, which shows that its associated type-II error is very large at large ν. (d) Box plot of log σ grouped by ν, where σ is selected by γ. (e) Box plot of the median distance between points (which is also a choice for σ), grouped by ν. Refer to Section 5 for details.
Figure 1(a) shows the estimated type-I and type-II errors using γ as the test statistic for varying
ν. Note that the type-I error is close to its design value of 5%, while the type-II error is zero for
all ν, which means γ distinguishes between P and Q for all perturbations. Figures 1(b,c) show the
estimates of type-I and type-II errors using γ_k as the test statistic for different σ and ν. Figure 1(d)
shows the box plot for log σ, grouped by ν, where σ is the bandwidth selected by γ. Figure 1(e)
shows the box plot of the median distance between points (which is also a choice for σ), grouped by
ν. From Figures 1(c) and (e), it is easy to see that the median heuristic exhibits high type-II error for
ν = 3/2, while γ exhibits zero type-II error (from Figure 1(a)). Figure 1(c) also shows that heuristic
choices of σ can result in high type-II errors. It is intuitive that as ν increases (which means
the characteristic function of Q differs from that of P at higher frequencies), a smaller σ is needed
to detect these changes. The advantage of using γ is that it selects σ in a distribution-dependent
fashion, and its behavior in the box plot shown in Figure 1(d) matches the previously mentioned
intuition about the behavior of σ with respect to ν. These results demonstrate the validity of using γ
as a distance measure in applications.
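For reference, the median heuristic that the distribution-dependent selection is compared against above simply sets the bandwidth σ to the median of all pairwise distances between sample points. A minimal sketch for one-dimensional samples (our own illustration, not code from the paper):

```python
import statistics

def median_heuristic(xs):
    """Bandwidth sigma = median of all pairwise distances (1-D samples)."""
    dists = [abs(a - b) for i, a in enumerate(xs) for b in xs[i + 1:]]
    return statistics.median(dists)

sigma = median_heuristic([0, 1, 3])  # pairwise distances {1, 2, 3} -> sigma = 2
```

As Figures 1(c,e) illustrate, this choice is data-dependent but not distribution-dependent: it ignores how P and Q actually differ, which is why it can miss high-frequency perturbations.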
6 Conclusions
In this work, we have shown how MMD appears in binary classification, and thus that characteristic
kernels are important in kernel-based classification algorithms. We have broadened the class of
characteristic RKHSs to include those induced by strictly positive definite kernels (with particular
application to kernels on non-compact domains, and/or kernels that are not translation invariant). We
have further provided a convergent generalization of MMD over families of kernel functions, which
becomes necessary even in considering relatively simple families of kernels (such as the Gaussian
kernels parameterized by their bandwidth). The usefulness of the generalized MMD is illustrated
experimentally with a two-sample testing problem.
Acknowledgments
The authors thank anonymous reviewers for their constructive comments and especially the reviewer who pointed out the connection between characteristic kernels and the achievability of Bayes
risk. B. K. S. was supported by the MPI for Biological Cybernetics, National Science Foundation (grant DMS-MSPA 0625409), the Fair Isaac Corporation and the University of California MICRO program. A. G. was supported by grants DARPA IPTO FA8750-09-1-0141, ONR MURI
N000140710747, and ARO MURI W911NF0810242.
References
[1] M. Anthony and P. L. Bartlett. Neural Network Learning: Theoretical Foundations. Cambridge University Press, UK, 1999.
[2] V. H. de la Peña and E. Giné. Decoupling: From Dependence to Independence. Springer-Verlag, NY, 1999.
[3] L. Devroye, L. Györfi, and G. Lugosi. A Probabilistic Theory of Pattern Recognition. Springer-Verlag, New York, 1996.
[4] K. Fukumizu, A. Gretton, X. Sun, and B. Schölkopf. Kernel measures of conditional dependence. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 489–496, Cambridge, MA, 2008. MIT Press.
[5] K. Fukumizu, B. K. Sriperumbudur, A. Gretton, and B. Schölkopf. Characteristic kernels on groups and semigroups. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, Advances in Neural Information Processing Systems 21, pages 473–480, 2009.
[6] A. Gretton, K. M. Borgwardt, M. Rasch, B. Schölkopf, and A. Smola. A kernel method for the two sample problem. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 513–520. MIT Press, 2007.
[7] A. Gretton, K. Fukumizu, C.-H. Teo, L. Song, B. Schölkopf, and A. Smola. A kernel statistical test of independence. In Advances in Neural Information Processing Systems 20, pages 585–592. MIT Press, 2008.
[8] G. R. G. Lanckriet, N. Cristianini, P. Bartlett, L. El Ghaoui, and M. I. Jordan. Learning the kernel matrix with semidefinite programming. Journal of Machine Learning Research, 5:27–72, 2004.
[9] B. Schölkopf and A. J. Smola. Learning with Kernels. MIT Press, Cambridge, MA, 2002.
[10] B. Schölkopf, B. K. Sriperumbudur, A. Gretton, and K. Fukumizu. RKHS representation of measures. In Learning Theory and Approximation Workshop, Oberwolfach, Germany, 2008.
[11] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, UK, 2004.
[12] A. J. Smola, A. Gretton, L. Song, and B. Schölkopf. A Hilbert space embedding for distributions. In Proc. 18th International Conference on Algorithmic Learning Theory, pages 13–31. Springer-Verlag, Berlin, Germany, 2007.
[13] N. Srebro and S. Ben-David. Learning bounds for support vector machines with learned kernels. In G. Lugosi and H. U. Simon, editors, Proc. of the 19th Annual Conference on Learning Theory, pages 169–183, 2006.
[14] B. K. Sriperumbudur, A. Gretton, K. Fukumizu, G. R. G. Lanckriet, and B. Schölkopf. Injective Hilbert space embeddings of probability measures. In R. Servedio and T. Zhang, editors, Proc. of the 21st Annual Conference on Learning Theory, pages 111–122, 2008.
[15] I. Steinwart. On the influence of the kernel on the consistency of support vector machines. Journal of Machine Learning Research, 2:67–93, 2002.
[16] I. Steinwart and A. Christmann. Support Vector Machines. Springer, 2008.
[17] J. Stewart. Positive definite functions and generalizations, an historical survey. Rocky Mountain Journal of Mathematics, 6(3):409–433, 1976.
[18] Y. Ying and C. Campbell. Generalization bounds for learning the kernel. In Proc. of the 22nd Annual Conference on Learning Theory, 2009.
[19] Y. Ying and D. X. Zhou. Learnability of Gaussians with flexible variances. Journal of Machine Learning Research, 8:249–276, 2007.
Bayesian Source Localization with the
Multivariate Laplace Prior
Marcel van Gerven¹,²  Botond Cseke¹  Robert Oostenveld²  Tom Heskes¹,²
¹ Institute for Computing and Information Sciences
² Donders Institute for Brain, Cognition and Behaviour
Radboud University Nijmegen
Nijmegen, The Netherlands
Abstract
We introduce a novel multivariate Laplace (MVL) distribution as a sparsity promoting prior for Bayesian source localization that allows the specification of constraints between and within sources. We represent the MVL distribution as a scale
mixture that induces a coupling between source variances instead of their means.
Approximation of the posterior marginals using expectation propagation is shown
to be very efficient due to properties of the scale mixture representation. The computational bottleneck amounts to computing the diagonal elements of a sparse matrix inverse. Our approach is illustrated using a mismatch negativity paradigm for
which MEG data and a structural MRI have been acquired. We show that spatial
coupling leads to sources which are active over larger cortical areas as compared
with an uncoupled prior.
1 Introduction
Electroencephalography (EEG) and magnetoencephalography (MEG) provide an instantaneous and
non-invasive measure of brain activity. Let q, p, and t denote the number of sensors, sources and
time points, respectively. Sensor readings Y ∈ R^{q×t} and source currents S ∈ R^{p×t} are related by
Y = XS + E    (1)
where X ∈ R^{q×p} is a lead field matrix that represents how sources project onto the sensors and
E ∈ R^{q×t} represents sensor noise.
Unfortunately, localizing distributed sources is an ill-posed inverse problem that only admits a
unique solution when additional constraints are defined. In a Bayesian setting, these constraints
take the form of a prior on the sources [3, 19]. Popular choices of prior source amplitude distributions are Gaussian or Laplace priors, whose MAP estimates correspond to minimum norm and
minimum current estimates, respectively [18]. Minimum norm estimates produce spatially smooth
solutions but are known to suffer from depth bias and smearing of nearby sources. In contrast, minimum current estimates lead to focal source estimates that may be scattered too much throughout the
brain volume [9].
In this paper, we take the Laplace prior as our point of departure for Bayesian source localization
(instead of using just the MAP estimate). The obvious approach is to assume univariate Laplace
priors on individual sources. Here, in contrast, we assume a multivariate Laplace distribution over
all sources, which allows sources to be coupled. We show that such a distribution can be represented
as a scale mixture [2] that differs substantially from the one presented in [5].
Our representation allows the specification of both spatio-temporal as well as sparsity constraints.
Since the posterior cannot be computed exactly, we formulate an efficient expectation propagation
algorithm [12] which allows us to approximate the posterior of interest for very large models. Efficiency arises from the block diagonal form of the approximate posterior covariance matrix due to
properties of the scale mixture representation. The computational bottleneck then reduces to computation of the diagonal elements of a sparse matrix inverse, which can be solved through Cholesky
decomposition of a sparse matrix and application of the Takahashi equation [17]. Furthermore, moment matching is achieved by one-dimensional numerical integrations. Our approach is evaluated
on MEG data that was recorded during an oddball task.
2 Bayesian source localization
In a Bayesian setting, the goal of source localization is to estimate the posterior
p(S | Y, X, Σ, θ) ∝ p(Y | S, X, Σ) p(S | θ)    (2)
where the likelihood term p(Y | S) = ∏_t N(y_t | X s_t, Σ) factorizes over time and Σ represents
sensor noise. The prior p(S | θ), with θ acting as a proxy for the hyper-parameters, can be used
to incorporate (neuroscientific) constraints. For simplicity, we assume independent Gaussian noise
with a fixed variance σ², i.e., Σ = σ²I. Without loss of generality, we will focus on one time-point
(y_t, s_t) only and drop the subscript when clear from context.¹
The source localization problem can be formulated as a (Bayesian) linear regression problem where
the source currents s play the role of the regression coefficients and rows of the lead field matrix
X can be interpreted as covariates. In the following, we define a multivariate Laplace distribution,
represented in terms of a scale mixture, as a convenient prior that incorporates both spatio-temporal
and sparsity constraints.
The univariate Laplace distribution
L(s | λ) ∝ (λ/2) exp(−λ|s|)    (3)
can be represented as a scale mixture of Gaussians [2], the scaling function being an exponential
distribution with parameter λ²/2. The scale parameter λ controls the width of the distribution and
thus the regularizing behavior towards zero. Since the univariate exponential distribution is a χ²₂
distribution, one can alternatively write
L(s | λ) = ∫∫ du dv N(s | 0, u² + v²) N(u | 0, 1/λ²) N(v | 0, 1/λ²).    (4)
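The representation in Eq. (4) is easy to check by simulation: drawing u, v ~ N(0, 1/λ²) and then s ~ N(0, u² + v²) should reproduce the Laplace moments E|s| = 1/λ and E[s²] = 2/λ². A small pure-Python sketch (our own illustration; the function name is ours):

```python
import math
import random

def sample_laplace_scale_mixture(lam, rng):
    """One draw from the univariate Laplace(lam) via the mixture in Eq. (4):
    u, v ~ N(0, 1/lam^2), then s | u, v ~ N(0, u^2 + v^2)."""
    u = rng.gauss(0.0, 1.0 / lam)
    v = rng.gauss(0.0, 1.0 / lam)
    return rng.gauss(0.0, math.sqrt(u * u + v * v))
```

Here u² + v² is (1/λ²) times a χ²₂ draw, i.e. an exponential random variance, which is exactly the classical Gaussian scale mixture giving a Laplace marginal.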
Eltoft et al. [5] defined the multivariate Laplace distribution as a scale mixture of a multivariate
Gaussian, given by z^{1/2} Γ^{1/2} s, where s is a standard normal multivariate Gaussian, Γ is a positive
definite matrix, and z is drawn from a univariate exponential distribution. The work presented
in [11] is based on similar ideas but replaces the distribution on z with a multivariate log-normal
distribution.
In contrast, we use an alternative formulation of the multivariate Laplace distribution that couples
the variances of the sources rather than the source currents themselves. This is achieved by generalizing the representation in Eq. (4) to the multivariate case. For an uncoupled multivariate Laplace
distribution, this generalization reads
L(s | λ) = ∫∫ du dv ∏_i N(s_i | 0, u_i² + v_i²) N(v_i | 0, 1/λ²) N(u_i | 0, 1/λ²)    (5)
such that each source current s_i gets assigned scale variables u_i and v_i. We can interpret the scale
variables corresponding to source i as indicators of its relevance: the larger (the posterior estimate
of) u_i² + v_i², the more relevant the corresponding source. In order to introduce correlations between
sources, we define our multivariate Laplace (MVL) distribution as the following scale mixture:
L(s | λ, J) ∝ ∫∫ du dv ( ∏_i N(s_i | 0, u_i² + v_i²) ) N(v | 0, J⁻¹/λ²) N(u | 0, J⁻¹/λ²),    (6)
¹ Multiple time-points can be incorporated by vectorizing Y and S, and augmenting X.
[Figure 1: factor graph over the sources s_i, scale variables u_i, v_i, and factors f, g_i, h₁, h₂; see caption below.]
Figure 1: Factor graph representation of Bayesian source localization with a multivariate Laplace
prior. The factor f represents the likelihood term N(y | Xs, σ²I). Factors g_i correspond to the
coupling between sources and scales. Factors h₁ and h₂ represent the (identical) multivariate Gaussians on u and v with prior precision matrix J. The g_i are the only non-Gaussian terms and need to
be approximated.
where J⁻¹ is a normalized covariance matrix. This definition yields a coupling in the magnitudes
of the source currents through their variances. The normalized covariance matrix J⁻¹ specifies the
correlation strengths, while λ acts as a regularization parameter. Note that this approach defines
the multivariate Laplace with the help of a multivariate exponential distribution [10]. As will be
shown in the next section, apart from having a semantics that differs from [5], our scale mixture representation has some desirable characteristics that allow for efficient approximate inference. Based
on the above formulation, we define the sparse linear model as
p(y, s | X, σ², λ, J) = N(y | Xs, σ²I) L(s | λ, J).    (7)
The factor graph in Fig. 1 depicts the interactions between the variables in our model.
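A direct way to see what the coupled prior in Eq. (6) does is to sample from it: draw u and v from N(0, J⁻¹/λ²), then draw each s_i from N(0, u_i² + v_i²). The sketch below is our own illustration (not code from the paper); it is parameterized directly by the scale covariance C = J⁻¹/λ² and uses a hand-rolled Cholesky factor suitable for small matrices:

```python
import math
import random

def chol(C):
    """Lower-triangular Cholesky factor L of a small SPD matrix C (L L^T = C)."""
    n = len(C)
    L = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1):
            s = sum(L[i][k] * L[j][k] for k in range(j))
            if i == j:
                L[i][j] = math.sqrt(C[i][i] - s)
            else:
                L[i][j] = (C[i][j] - s) / L[j][j]
    return L

def sample_mvl(C, rng):
    """One draw from the coupled MVL of Eq. (6), with C = J^{-1}/lam^2:
    u, v ~ N(0, C) independently, then s_i ~ N(0, u_i^2 + v_i^2)."""
    L = chol(C)
    def mvn():
        g = [rng.gauss(0.0, 1.0) for _ in C]
        return [sum(L[i][k] * g[k] for k in range(i + 1)) for i in range(len(C))]
    u, v = mvn(), mvn()
    return [rng.gauss(0.0, math.sqrt(ui * ui + vi * vi)) for ui, vi in zip(u, v)]
```

Because the coupling acts on the scales, draws from this prior show positively correlated magnitudes |s_i| across coupled sources, while the s_i themselves remain uncorrelated.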
3 Approximate inference
Our goal is to compute posterior marginals for sources si as well as scale variables ui and vi in order
to determine source relevance. These marginals are intractable and we need to resort to approximate
inference methods. In this paper we use a deterministic approximate inference method called expectation propagation (EP) [12]. For a detailed analysis of the use of EP in case of the decoupled prior,
which is a special case of our MVL prior, we refer to [16]. EP works by iterative minimizations of
the Kullback–Leibler (KL) divergence between appropriately chosen distributions in the following
way.
We introduce the vector of all latent variables z = (sᵀ, uᵀ, vᵀ)ᵀ. The posterior distribution on z
given the data y (which is considered fixed and given and therefore omitted in our notation) can be
written in the factorized form
p(z) ∝ t₀(z) ∏_i t_i(z),    (8)
where t₀(z) ∝ N(y | Xs, σ²I) N(v | 0, J⁻¹/λ²) N(u | 0, J⁻¹/λ²) and t_i(z) = t_i(s_i, u_i, v_i) =
N(s_i | 0, u_i² + v_i²). The term t₀(z) is a Gaussian function, i.e., it can be written in the form
exp(zᵀh₀ − zᵀK₀z/2). It factorizes into Gaussian functions of s, u, and v such that K₀ has a
block-diagonal structure. Using EP, we will approximate p(z) with q(z) ∝ t₀(z) ∏_i t̃_i(z), where
the t̃_i(z) are Gaussian functions as well.
Our definition of the MVL distribution leads to several computational benefits. Equation (6) introduces 2p auxiliary Gaussian variables (u, v) that are coupled to the s_i's by p non-Gaussian factors;
thus, we have to approximate p terms. The multivariate Laplace distribution defined in [5] introduces
one auxiliary variable and couples all the s_i s_j terms to it; it would therefore lead to p² non-Gaussian
terms to be approximated. Moreover, as we will see below, the a priori independence of u and v and
the form of the terms t_i(z) result in an approximation of the posterior with the same block-diagonal
structure as that of t₀(z).
Q
In each step, EP updates t?i with t??i by defining q \i ? t0 (z) \i t?j , minimizing KL ti q \i k q ? with
respect to q ? and setting t??i ? q ? /q \i . It can be shown that when ti depends only on a subset of
variables zi (in our case on zi = (si , ui , vi )) then so does t?i . The minimization
of the KL diver
gence then boils down to the minimization of KL ti (zi )q \i (zi ) k q ? (zi ) with respect to q ? (zi )
and t?i is updated to t??i (zi ) ? q ? (zi )/q \i (zi ). Minimization of the KL divergence corresponds to
moment matching, i.e., q ? (si , ui , vi ) is a Gaussian with the same mean and covariance matrix as
q i (zi ) ? ti (zi )q \i (zi ). So, to update the i-th term in a standard application of EP, we would have
to compute q \i (zi ) and could then use a three-dimensional (numerical) integration to compute all
first and second moments of q i (zi ). Below we will explain how we can exploit the specific characteristics of the MVL to do this more efficiently. For stability, we use a variant of EP, called power
(1??) Q ?
? \i
EP [13], where q \i ? t?i
k q ? with ? ? (0, 1] is minimized. The above
\i tj and KL ti q
explanation of standard EP corresponds to ? = 1. In the following we will give the formulas for
general ?.
We will now work out the EP update for the i-th term approximation in more detail to show by
induction that t̃_i(s_i, u_i, v_i) factorizes into independent terms for s_i, u_i, and v_i. Since u_i and v_i play
exactly the same role, it is also easy to see that the term approximation is always symmetric in u_i
and v_i. Let us suppose that q(s_i, u_i, v_i), and consequently q^{\i}(s_i, u_i, v_i), factorizes into independent
terms for s_i, u_i, and v_i, e.g., we can write
q^{\i}(s_i, u_i, v_i) = N(s_i | m_i, σ_i²) N(u_i | 0, η_i²) N(v_i | 0, η_i²).    (9)
By initializing t̃_i(s_i, u_i, v_i) = 1, we have q(z) ∝ t₀(z), and the factorization of q^{\i}(s_i, u_i, v_i)
follows directly from the factorization of t₀(z) into independent terms for s, u, and v. That is, for
the first EP step, the factorization can be guaranteed. To obtain the new term approximation, we
have to compute the moments of the distribution q_i(s_i, u_i, v_i) ∝ N(s_i | 0, u_i² + v_i²)^α q^{\i}(s_i, u_i, v_i),
which, by regrouping terms, can be written in the form q_i(s_i, u_i, v_i) = q_i(s_i | u_i, v_i) q_i(u_i, v_i) with
q_i(s_i | u_i, v_i) ∝ N( s_i | m_i (u_i² + v_i²) / (α σ_i² + u_i² + v_i²) , σ_i² (u_i² + v_i²) / (α σ_i² + u_i² + v_i²) ),    (10)
q_i(u_i, v_i) ∝ (u_i² + v_i²)^{(1−α)/2} N( √α m_i | 0, α σ_i² + u_i² + v_i² ) N(u_i | 0, η_i²) N(v_i | 0, η_i²).    (11)
Since q_i(u_i, v_i) only depends on u_i² and v_i² and is thus invariant under sign changes of u_i and v_i,
we must have E[u_i] = E[v_i] = 0, as well as E[u_i v_i] = 0. Because of symmetry, we further have
E[u_i²] = E[v_i²] = (E[u_i²] + E[v_i²])/2. Since q_i(u_i, v_i) can be expressed as a function of u_i² + v_i²
only, this variance can be computed from (11) using one-dimensional Gauss–Laguerre numerical
quadrature [15]. The first and second moments of s_i conditioned upon u_i and v_i follow directly
from (10). Because both (10) and (11) are invariant under sign changes of u_i and v_i, we must have
E[s_i u_i] = E[s_i v_i] = 0. Furthermore, since the conditional moments again depend only on u_i² + v_i²,
E[s_i] and E[s_i²] can also be computed with one-dimensional Gauss–Laguerre integration. Summarizing, we have shown that if the old term approximations factorize into independent terms for s_i, u_i,
and v_i, the new term approximation after an EP update, t̃_i^new(s_i, u_i, v_i) ∝ q*(s_i, u_i, v_i)/q^{\i}(s_i, u_i, v_i),
must do the same. Furthermore, given the cavity distribution q^{\i}(s_i, u_i, v_i), all required moments
can be computed using one-dimensional numerical integration.
The crucial observation here is that the terms t_i(s_i, u_i, v_i) introduce dependencies between s_i and
(u_i, v_i), as expressed in Eqs. (10) and (11), but do not lead to correlations that we have to keep track
of in a Gaussian approximation. This is not specific to EP, but a consequence of the symmetries and
invariances of the exact distribution p(s, u, v). That is, also when the expectations are taken with
respect to the exact p(s, u, v), we have E[u_i] = E[v_i] = E[u_i v_i] = E[s_i u_i] = E[s_i v_i] = 0 and
E[u_i²] = E[v_i²]. The variance of the scales, E[u_i² + v_i²], determines the amount of regularization
on the source parameter s_i, such that large variance implies little regularization.
Last but not least, contrary to conventional sequential updating, we choose to update the terms t̃_i in
parallel. That is, we compute all q^{\i}'s and update all terms simultaneously. Calculating q^{\i}(s_i, u_i, v_i)
requires the computation of the marginal moments q(s_i), q(u_i) and q(v_i). For this, we need the
diagonal elements of the inverse of the precision matrix K of q(z). This precision matrix has the
block-diagonal form
K = blockdiag( XᵀX/σ² + K_s ,  λ²J + K_u ,  λ²J + K_v ),    (12)
where J is a sparse precision matrix which determines the coupling, and K_s, K_u, and K_v = K_u
are diagonal matrices that contain the contributions of the term approximations. We can exploit the
low-rank representation of XᵀX/σ² + K_s to compute its inverse using the Woodbury formula [7].
The diagonal elements of the inverse of λ²J + K_u can be computed efficiently via sparse Cholesky
decomposition and the Takahashi equation [17]. By updating the term approximations in parallel,
we only need to perform these operations once per parallel update.
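The Woodbury step exploits the fact that XᵀX/σ² + K_s is a diagonal matrix plus a low-rank term. The rank-one case (a single sensor row x) already shows the idea; the following Sherman–Morrison sketch is our own illustration, not the paper's code:

```python
def woodbury_diag_inv(d, x, sigma2):
    """Diagonal of (D + x x^T / sigma2)^{-1} for diagonal D (list d) and a
    single column x, via the Sherman-Morrison identity:
    (D + x x^T/s2)^{-1} = D^{-1} - D^{-1} x x^T D^{-1} / (s2 + x^T D^{-1} x)."""
    dinv_x = [xi / di for xi, di in zip(x, d)]
    denom = sigma2 + sum(xi * dxi for xi, dxi in zip(x, dinv_x))
    return [1.0 / di - dxi * dxi / denom for di, dxi in zip(d, dinv_x)]
```

With d = [2, 4], x = [1, 1], sigma2 = 1 this returns [5/14, 3/14], matching the direct inverse of [[3, 1], [1, 5]]. For the full model, X contributes q such columns, and the same identity generalizes to the q-column Woodbury formula, so the cost is governed by the (small) number of sensors rather than the number of sources.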
4 Experiments
Returning to the source localization problem, we will show that the MVL prior can be used to induce
constraints on the source estimates. To this end, we use a dataset obtained for a mismatch negativity
experiment (MMN) [6]. The MMN is the negative component of the difference between responses
to normal and deviant stimuli within an oddball paradigm that peaks around 150 ms after stimulus
onset. In our experiment, the subject had to listen to normal (500 Hz) and deviant (550 Hz) tones,
presented for 70 ms. Normal tones occurred 80% of the time, whereas deviants occurred 20% of the
time. A total of 600 trials was acquired.
Data was acquired with a CTF MEG System (VSM MedTech Ltd., Coquitlam, British Columbia,
Canada), which provides whole-head coverage using 275 DC SQUID axial gradiometers. A realistically shaped volume conduction model was constructed based on the individual's structural
MRI [14]. The brain volume was discretized to a grid with a 0.75 cm resolution and the lead field
matrix was calculated for each of the 3863 grid points according to the head position in the system
and the forward model. The lead field matrix is defined for the three x, y, and z orientations in each
of the source locations and was normalized to correct for depth bias. Consequently, the lead field
matrix X is of size 275 × 11589. The 275 × 1 observation vector y was rescaled to prevent issues
with numerical precision.
In the next section, we compare source estimates for the MMN difference wave that have been
obtained when using either a decoupled or a coupled MVL prior. For ease of exposition, we focus
on a spatial prior induced by the coupling of neighboring sources. In order to demonstrate the effect
of the spatial prior, we assume a fixed regularization parameter λ and fixed noise variance σ², as
estimated by means of the L-curve criterion [8]. Differences in the source estimates will therefore
arise only from the form of the 11589 × 11589 sparse precision matrix J. The first estimate is
obtained by assuming that there is no coupling between elements of the lead field matrix, such that
J = I. This gives a Bayesian formulation of the minimum current estimate [18]. The second
estimate is obtained by assuming a coupling between neighboring sources i and j within the brain
volume with fixed strength c. This coupling is specified through the unnormalized precision matrix
J̃ by setting J̃_{i_x j_x} = J̃_{i_y j_y} = J̃_{i_z j_z} = −c, while diagonal elements J̃_{ii} are set to 1 − Σ_{j≠i} J̃_{ij}.²
This prior dictates that the magnitudes of the variances of the source currents are coupled between
sources.
For the coupling strength c, we use correlation as a guiding principle. Recall that the unnormalized
precision matrix J̃ in the end determines the correlations (of the variances) between sources.
Specifically, the correlation between sources s_i and s_j is given by
r_{ij} = J̃⁻¹_{ij} / ( (J̃⁻¹_{ii})^{1/2} (J̃⁻¹_{jj})^{1/2} ).    (13)
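As a tiny worked example of Eq. (13), consider just two coupled sources with J̃ = [[1 + c, −c], [−c, 1 + c]]. This is our own toy reduction; the r_{i,i+1} = 0.78 figure quoted in the text comes from the full three-dimensional neighborhood structure of the brain grid, not from this two-source case:

```python
def corr_from_precision_2x2(J):
    """Correlation of Eq. (13), r_12 = Jinv_12 / sqrt(Jinv_11 * Jinv_22),
    for a 2x2 unnormalized precision matrix J (nested lists)."""
    (a, b), (c, d) = J
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    return inv[0][1] / (inv[0][0] ** 0.5 * inv[1][1] ** 0.5)

# Two neighboring sources with coupling strength c = 10:
r = corr_from_precision_2x2([[11.0, -10.0], [-10.0, 11.0]])  # r = 10/11, approx. 0.909
```

Note how the negative off-diagonal entries of the precision matrix translate into positive correlations between the scale variables, which is what couples the source magnitudes.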
For example, using c = 10, we would obtain a correlation coefficient of r_{i,i+1} = 0.78. Note that this
also leads to more distant sources having non-zero correlations. The positive correlation between
² The normalized precision matrix is obtained through J = diag(J̃⁻¹)^{1/2} J̃ diag(J̃⁻¹)^{1/2}.
[Figure 2: the matrices J, L, and C; see caption below.]
Figure 2: Spatial coupling leads to the normalized precision matrix J with coupling of neighboring
source orientations in the x, y, and z directions. The (reordered) matrix L is obtained from the
Cholesky decomposition of J. The correlation matrix C shows the correlations between the source
orientations. For the purpose of demonstration, we show matrices using a very coarse discretization
of the brain volume.
neighboring sources is motivated by the notion that we expect neighboring sources to be similarly
though not equivalently involved for a given task. Evidently, the desired correlation coefficient also
depends on the resolution of the discretized brain volume.
Figure 2 demonstrates how a chosen coupling leads to a particular structure of J, where irregularities
in J are caused by the structure of the imaged brain volume. The figure also shows the computational
bottleneck of our algorithm, which is to compute diagonal elements of J⁻¹. This can be solved by
means of the Takahashi equation which operates on the matrix L that results from a sparse Cholesky
decomposition. The block diagonal structure of L arises from a reordering of rows and columns
using, for instance, the amd algorithm [1]. The correlation matrix C shows the correlations between
the sources induced by the structure of J. Zeros in the correlation matrix arise from the independence
between source orientations x, y, and z.
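To make the bottleneck concrete, the hedged sketch below compares two dense routes to diag(J⁻¹) on a small assumed matrix; the Takahashi-equation approach referenced above obtains these same diagonal entries from a sparse Cholesky factor without ever forming the full inverse, which is what makes large volumes tractable.

```python
import numpy as np

# Small SPD "precision" matrix (size 8 is an assumption for illustration).
rng = np.random.default_rng(0)
A = rng.standard_normal((8, 8))
J = A @ A.T + 8 * np.eye(8)

diag_full = np.diag(np.linalg.inv(J))    # forms all of J^{-1}: expensive at scale
# column-by-column: solve J x = e_i and keep only entry i of the solution
diag_cols = np.array([np.linalg.solve(J, np.eye(8)[:, i])[i] for i in range(8)])

print(np.allclose(diag_full, diag_cols))  # True
```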
5 Results
Figure 3 depicts the difference wave that was obtained by subtracting the trial average for standard
tones from the trial average for deviant tones. A negative deflection after 100 ms is clearly visible.
The event-related field indicates patterns of activity at central channels in both hemispheres. These
[Figure 3 plot: traces for the standard, deviant, and difference conditions; amplitude scale ×10⁻¹⁴, time axis 0–0.3 s.]
Figure 3: Evolution of the difference wave at right central sensors and event-related field of the
difference wave 125 ms after cue onset.
Figure 4: Source estimates using a decoupled prior (top) or a coupled prior (bottom). Plots are
centered on the left temporal source.
Figure 5: Relative variance using a decoupled prior (top) or a coupled prior (bottom). Plots are
centered on the right temporal source.
findings are consistent with the mismatch negativity literature [6]. We now proceed to localizing the
sources of the activation induced by mismatch negativity.
Figure 4 depicts the localized sources when using either a decoupled MVL prior or a coupled MVL
prior. The coupled spatial prior leads to stronger source currents that are spread over a larger brain
volume. MVL source localization has correctly identified the source over left temporal cortex but
does not capture the source over right temporal cortex that is also hypothesized to be present (cf.
Fig. 3). Note however that the source estimates in Fig. 4 represent estimated mean power and thus
do not capture the full posterior over the sources.
Differences between the decoupled and the coupled prior become more salient when we look at the
relative variance of the auxiliary variables as shown in Fig. 5. Relative variance is defined here as
posterior variance minus prior variance of the auxiliary variables, normalized to be between zero
and one. This measure indicates the change in magnitude of the variance of the auxiliary variables,
and thus indirectly that of the sources via Eq. (6). Since only sources with non-zero contributions
should have high variance, this measure can be used to indicate the relevance of a source. Figure 5
7
shows that temporal sources in both left and right hemispheres are relevant. The relevance of the
temporal source in the right hemisphere becomes more pronounced when using the coupled prior.
6 Discussion
In this paper, we introduced a multivariate Laplace prior as the basis for Bayesian source localization. By formulating this prior as a scale mixture we were able to approximate posteriors of interest
using expectation propagation in an efficient manner. Computation time is mainly influenced by the
sparsity structure of the precision matrix J which is used to specify interactions between sources by
coupling their variances. We have demonstrated the feasibility of our approach using a mismatch
negativity dataset. It was shown that coupling of neighboring sources leads to source estimates that
are somewhat more spatially smeared as compared with a decoupled prior. Furthermore, visualization of the relative variance of the auxiliary variables gave additional insight into the relevance of
sources.
Contrary to the MAP estimate (i.e., the minimum current estimate), our Bayesian estimate does not
exactly lead to sparse posteriors given a finite amount of data. However, posterior marginals can
still be used to exclude irrelevant sources since these will typically have a mean activation close to
zero with small variance. In principle, we could force our posteriors to become more MAP-like by
replacing the likelihood term with N(y | Xs, σ²I)^{1/T} in the limit T → 0. From the Bayesian point
of view, one may argue whether taking this limit is fair. In any case, given the inherent uncertainty
in our estimates we favor the representation in terms of (non-sparse) posterior marginals.
Note that it is straightforward to impose other constraints since this only requires the specification
of suitable interactions between sources through J. For instance, the spatial prior could be made
more realistic by taking anatomical constraints into account or by the inclusion of coupling between
sources over time. Other constraints that can be implemented with our approach are the coupling of
individual orientations within a source, or even the coupling of source estimates between different
subjects. Coupling of source orientations has been realized before in [9] through an ℓ₁/ℓ₂ norm,
although not using a fully Bayesian approach. In future work, we aim to examine the effect of the
proposed priors and optimize the regularization and coupling parameters via empirical Bayes [4].
Other directions for further research are inclusion of the noise variance in the optimization procedure
and dealing with the depth bias that often arises in distributed source models in a more principled
way.
In [11], fields of Gaussian scale mixtures were used for modeling the statistics of wavelet coefficients
of photographics images. Our approach differs in two important aspects. To obtain a generalization
of the univariate Laplace distribution, we used a multivariate exponential distribution of the scales,
to be compared with the multivariate log-normal distribution in [11]. The Laplace distribution has
the advantage that it is the most sparsifying prior that, in combination with a linear model, still leads
to a unimodal posterior [16]. Furthermore, we described an efficient method for approximating
marginals of interest whereas in [11] an iterative coordinate-ascent method was used to compute the
MAP solution. Since (the efficiency of) our method for approximate inference only depends on the
sparsity of the multivariate scale distribution, and not on its precise form, it should be feasible to
compute approximate marginals for the model presented in [11] as well.
Concluding, we believe the scale mixture representation of the multivariate Laplace distribution
to be a promising approach to Bayesian distributed source localization. It allows a wide range of
constraints to be included and, due to the characteristics of the scale mixture, posteriors can be
approximated efficiently even for very large models.
Acknowledgments
The authors gratefully acknowledge the support of the Dutch technology foundation STW (project
number 07050) and the BrainGain Smart Mix Programme of the Netherlands Ministry of Economic
Affairs and the Netherlands Ministry of Education, Culture and Science. Tom Heskes is supported
by Vici grant 639.023.604.
References
[1] P. R. Amestoy, T. A. Davis, and I. S. Duff. Algorithm 837: AMD, an approximate minimum degree ordering algorithm. ACM Transactions on Mathematical Software, 30(3):381–388, 2004.
[2] D. F. Andrews and C. L. Mallows. Scale mixtures of normal distributions. Journal of the Royal Statistical Society, Series B, 36(1):99–102, 1974.
[3] S. Baillet and L. Garnero. A Bayesian approach to introducing anatomo-functional priors in the EEG/MEG inverse problem. IEEE Transactions on Biomedical Engineering, 44(5):374–385, 1997.
[4] J. M. Bernardo and J. F. M. Smith. Bayesian Theory. Wiley, 1994.
[5] T. Eltoft, T. Kim, and T. Lee. On the multivariate Laplace distribution. IEEE Signal Processing Letters, 13(5):300–303, 2006.
[6] M. I. Garrido, J. M. Kilner, K. E. Stephan, and K. J. Friston. The mismatch negativity: A review of underlying mechanisms. Clinical Neurophysiology, 120:453–463, 2009.
[7] G. Golub and C. van Loan. Matrix Computations. Johns Hopkins University Press, Baltimore, MD, 3rd edition, 1996.
[8] P. C. Hansen. Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion. Monographs on Mathematical Modeling and Computation. Society for Industrial Mathematics, 1987.
[9] S. Haufe, V. V. Nikulin, A. Ziehe, K.-R. Müller, and G. Nolte. Combining sparsity and rotational invariance in EEG/MEG source reconstruction. NeuroImage, 42(2):726–738, 2008.
[10] N. T. Longford. Classes of multivariate exponential and multivariate geometric distributions derived from Markov processes. In H. W. Block, A. R. Sampson, and T. H. Savits, editors, Topics in statistical dependence, volume 16 of IMS Lecture Notes Monograph Series, pages 359–369. IMS Business Office, Hayward, CA, 1990.
[11] S. Lyu and E. P. Simoncelli. Statistical modeling of images with fields of Gaussian scale mixtures. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 945–952. MIT Press, Cambridge, MA, 2007.
[12] T. Minka. Expectation propagation for approximate Bayesian inference. In J. Breese and D. Koller, editors, Proceedings of the Seventeenth Conference on Uncertainty in Artificial Intelligence, pages 362–369. Morgan Kaufmann, 2001.
[13] T. Minka. Power EP. Technical report, Microsoft Research, Cambridge, 2004.
[14] G. Nolte. The magnetic lead field theorem in the quasi-static approximation and its use for magnetoencephalography forward calculation in realistic volume conductors. Physics in Medicine & Biology, 48(22):3637–3652, 2003.
[15] W. H. Press, S. A. Teukolsky, W. T. Vetterling, and B. P. Flannery. Numerical Recipes in C. Cambridge University Press, 3rd edition, 2007.
[16] M. W. Seeger. Bayesian inference and optimal design for the sparse linear model. Journal of Machine Learning Research, 9:759–813, 2008.
[17] K. Takahashi, J. Fagan, and M. S. Chen. Formation of a sparse bus-impedance matrix and its application to short circuit study. In 8th IEEE PICA Conference, pages 63–69, Minneapolis, MN, 1973.
[18] K. Uutela, M. Hämäläinen, and E. Somersalo. Visualization of magnetoencephalographic data using minimum current estimates. NeuroImage, 10:173–180, 1999.
[19] D. Wipf and S. Nagarajan. A unified Bayesian framework for MEG/EEG source imaging. NeuroImage, 44(3):947–966, 2009.
Label Selection on Graphs
Andrew Guillory
Department of Computer Science
University of Washington
[email protected]
Jeff Bilmes
Department of Electrical Engineering
University of Washington
[email protected]
Abstract
We investigate methods for selecting sets of labeled vertices for use in predicting
the labels of vertices on a graph. We specifically study methods which choose
a single batch of labeled vertices (i.e. offline, non sequential methods). In this
setting, we find common graph smoothness assumptions directly motivate simple
label selection methods with interesting theoretical guarantees. These methods
bound prediction error in terms of the smoothness of the true labels with respect
to the graph. Some of these bounds give new motivations for previously proposed
algorithms, and some suggest new algorithms which we evaluate. We show improved performance over baseline methods on several real world data sets.
1 Introduction
In this work we consider learning on a graph. Assume we have an undirected graph of n nodes
given by a symmetric weight matrix W. The ith node in the graph has a label y_i ∈ {0, 1} stored in a vector of labels y ∈ {0, 1}ⁿ. We want to predict all of y from the labels y_L for a labeled subset L ⊆ V = [n]. V is the set of all vertices. We use ŷ ∈ {0, 1}ⁿ to denote our predicted labels. The number of incorrect predictions is ‖y − ŷ‖².
Graph-based learning is an interesting alternative to traditional feature-based learning. In many
problems, graph representations are more natural than feature vector representations. When classifying web pages, for example, edge weights in the graph may incorporate information about hyperlinks. Even when the original data is represented as feature vectors, transforming the data into a
graph (for example using a Gaussian kernel to compute weights between points) can be convenient
for exploiting properties of a data set.
In order to bound prediction error, we assume that the labels are smoothly varying with respect to the underlying graph. The simple smoothness assumption we use is that Σ_{i,j} W_{i,j} |y_i − y_j| is small. Here |·| denotes absolute value, but the labels are binary so we can equivalently use squared
difference. This smoothness assumption has been used by graph-based semi-supervised learning
algorithms which compute ŷ using a labeled set L chosen uniformly at random from V [Blum and
Chawla, 2001, Hanneke, 2006, Pelckmans et al., 2007, Bengio et al., 2006] and by online graph labeling methods that operate on an adversarially ordered stream of vertices [Pelckmans and Suykens,
2008, Brautbar, 2009, Herbster et al., 2008, 2005, Herbster, 2008]
In this work we consider methods that make use of the smoothness assumption and structure of the
graph in order to both select L as well as make predictions. Our hope is to achieve higher prediction
accuracy as compared to random label selection and other methods for choosing L. We are particularly interested in batch offline methods which select L up front, receive y_L, and then predict ŷ. The
single batch, offline label selection problem is important in many real-world applications because it
is often the case that problem constraints make requesting more than one batch of labels very costly.
For example, if requesting a label involves a time consuming, expensive experiment (potentially
1
involving human subjects), it may be significantly less costly to run a single batch of experiments in
parallel as compared to running experiments in series.
We give several methods which, under the assumption that Σ_{i,j} W_{i,j} |y_i − y_j| is small, guarantee that the prediction error ‖y − ŷ‖² will also be small. Some of the bounds provide interesting justifications
for previously used methods, and we show improved performance over random label selection and
baseline submodular maximization methods on several real world data sets.
2 General Worst Case Bound
We first give a simple worst case bound on prediction error in terms of label smoothness using few
assumptions about the method used to select labels or make predictions. In fact, the only assumption
we make is that the predictions are consistent with the set of labeled points (i.e., ŷ_L = y_L). The
bound motivates an interesting method for selecting labeled points and provides a new motivation
for a standard prediction method Blum and Chawla [2001] when used with arbitrarily selected L.
The bound also forms the basis of the other bounds we derive which make additional assumptions.
Define the graph cut function Γ(A, B) ≜ Σ_{i∈A, j∈B} W_{i,j}. Let

    Ψ(L) ≜ min_{T⊆(V\L), T≠∅} Γ(T, V\T) / |T|.

Note this function is different from normalized cut (also called sparsest cut). In this function, the denominator is simply |T|, while for normalized cut the denominator is min(|T|, |V\T|). This difference is important: computing normalized cut is NP-hard, but we will show Ψ(L) can be computed in polynomial time. Ψ(L) measures how easily we can cut a large portion of the graph away from L. If Ψ(L) is small, then we can separate many nodes from L without cutting very many edges. We show that Ψ(L), where L is the set of labeled vertices, measures to what extent prediction error can be high relative to label smoothness. This makes intuitive sense because if Ψ(L) is small then there is a large set of unlabeled nodes which are weakly connected to the remainder of the graph (including L).
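For intuition, the cut function Γ and the quantity Ψ(L) can be evaluated by brute force on tiny graphs; this is exponential in |V \ L| and purely illustrative (the point of the section is that Ψ(L) is computable in polynomial time via flows). The path graph below is an assumed example.

```python
import itertools
import numpy as np

# Brute-force sketch of Gamma(A, B) = sum_{i in A, j in B} W[i, j] and
# Psi(L) = min over nonempty T subset of V \ L of Gamma(T, V\T) / |T|.
def gamma(W, A, B):
    return sum(W[i, j] for i in A for j in B)

def psi(W, L):
    V = set(range(len(W)))
    rest = sorted(V - set(L))
    best = float("inf")
    for r in range(1, len(rest) + 1):
        for T in itertools.combinations(rest, r):
            T = set(T)
            best = min(best, gamma(W, T, V - T) / len(T))
    return best

# 4-node path graph 0-1-2-3: labeling only node 0 leaves the rest easy to cut off.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(psi(W, {0}))   # 1/3, attained by T = {1, 2, 3}
```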
Theorem 1. For any ŷ consistent with a labeled set L,

    ‖y − ŷ‖² ≤ (1/(2Ψ(L))) Σ_{i,j} W_{i,j} (|y_i − y_j| ⊕ |ŷ_i − ŷ_j|) ≤ (1/(2Ψ(L))) ( Σ_{i,j} W_{i,j} |y_i − y_j| + Σ_{i,j} W_{i,j} |ŷ_i − ŷ_j| ),

where ⊕ is the XOR operator.
Proof. Let I be the set of incorrectly classified points. First note that I ∩ L = ∅ (none of the labeled points are incorrectly classified). Then

    |I| = Γ(I, V\I) · |I| / Γ(I, V\I) ≤ Γ(I, V\I) / Ψ(L).

Note that for all of the edges (i, j) counted in Γ(I, V\I), ŷ_i = ŷ_j implies y_i ≠ y_j and ŷ_i ≠ ŷ_j implies y_i = y_j. Then

    |I| ≤ (1/(2Ψ(L))) Σ_{i,j} W_{i,j} (|y_i − y_j| ⊕ |ŷ_i − ŷ_j|).

The 1/2 term is introduced because the sum double counts edges.
This bound is tight when the set of incorrectly classified points I is one of the sets minimizing min_{T⊆(V\L), T≠∅} Γ(T, V\T)/|T|.
This bound provides an interesting justification for the algorithm in Blum and Chawla [2001] and related methods when used with arbitrarily selected labeled sets. The term involving the predicted labels, Σ_{i,j} W_{i,j} |ŷ_i − ŷ_j|, is the objective function minimized under the constraint ŷ_L = y_L by the algorithm of Blum and Chawla [2001]. When this is used to compute ŷ, the bound simplifies.
ComputeCut(L)
    T′ ← V \ L
    repeat
        T ← T′
        λ ← Γ(T, V\T) / |T|
        T′ ← argmin_{A⊆(V\L)} Γ(A, V\A) − λ|A|
    until Γ(T′, V\T′) − λ|T′| = 0
    return T

MaximizeΨ(L, k)
    L ← ∅
    repeat
        T ← ComputeCut(L)
        i ← random vertex in T
        L ← L ∪ {i}
    until |L| = k
    return L

Figure 1: Left: Algorithm for computing Ψ(L). Right: Heuristic for maximizing Ψ(L).
Lemma 1. If

    ŷ = argmin_{ŷ′ ∈ {0,1}ⁿ : ŷ′_L = y_L} Σ_{i,j} W_{i,j} |ŷ′_i − ŷ′_j|

for a labeled set L, then

    ‖y − ŷ‖² ≤ (1/Ψ(L)) Σ_{i,j} W_{i,j} |y_i − y_j|.

Proof. When we choose ŷ in this way, Σ_{i,j} W_{i,j} |ŷ_i − ŷ_j| ≤ Σ_{i,j} W_{i,j} |y_i − y_j|, and the lemma follows from Theorem 1.
Label propagation solves a version of this problem in which ŷ is real valued [Bengio et al., 2006]. The bound also motivates a simple label selection method. In particular, we would like to select a labeled set L that maximizes Ψ(L). We first describe how to compute Ψ(L) for a fixed L. Computing Ψ(L) is related to computing

    min_{T⊆(V\L)} Γ(T, V\T) − λ|T|    (1)

with parameter λ > 0. The following result is paraphrased from Fujishige [2005] (pages 248–249).
Theorem 2. λ₀ = min_T f(T)/g(T) if and only if

    ∀λ ≤ λ₀: min_T f(T) − λg(T) = 0

and

    ∀λ > λ₀: min_T f(T) − λg(T) < 0.
We can compute Equation 1 for all λ via a parametric maxflow/mincut computation (it is known there are no more than n − 1 distinct solutions). This gives a polynomial time algorithm for computing Ψ(L). Note this theorem is for unconstrained minimization over T, but restricting T ∩ L = ∅ does not change the result: this constraint simply removes elements from the ground set. In practice, this constraint can be enforced by contracting the graph used in the flow computations or by giving certain edges infinite capacity.
As an alternative to solving the parametric flow problem, we can find the desired λ value through an iterative method [Cunningham, 1985]. The left of Figure 1 shows this approach. The algorithm takes in a set L and computes argmin_{T⊆(V\L), T≠∅} Γ(T, V\T)/|T|. The correctness proof is simple. When the algorithm terminates, we know λ ≥ λ₀ = min_{T⊆(V\L), T≠∅} Γ(T, V\T)/|T| because we set λ to be Γ(T, V\T)/|T| for a particular T. By Theorem 2 and the termination condition, we also know λ ≤ λ₀, and can conclude λ = λ₀ and that the set T returned achieves this minimum. One can also show the algorithm terminates in at most |V| iterations [Cunningham, 1985].
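The iterative method can be sketched as follows, with the inner argmin done by brute force over subsets for clarity; in practice that inner step is a single max-flow/min-cut computation, as the text describes. The graph and labeled set are assumptions of this illustration.

```python
import itertools

# Sketch of ComputeCut from Figure 1: iterate lam <- Gamma(T, V\T)/|T|,
# then minimize Gamma(A, V\A) - lam*|A| over A subset of V \ L
# (brute force here; a min-cut oracle in practice).
def gamma(W, A, B):
    return sum(W[i][j] for i in A for j in B)

def compute_cut(W, L):
    V = set(range(len(W)))
    T_new = V - set(L)
    while True:
        T = T_new
        lam = gamma(W, T, V - T) / len(T)
        best_val, T_new = 0.0, set()     # A = empty set gives value 0
        for r in range(1, len(V - set(L)) + 1):
            for A in itertools.combinations(sorted(V - set(L)), r):
                A = set(A)
                val = gamma(W, A, V - A) - lam * len(A)
                if val < best_val:
                    best_val, T_new = val, A
        if best_val > -1e-12:            # termination: the minimum is (numerically) 0
            return T, lam                # lam equals Psi(L)

W = [[0, 1, 0, 0],
     [1, 0, 1, 0],
     [0, 1, 0, 1],
     [0, 0, 1, 0]]
T, lam = compute_cut(W, {0})
print(T, lam)   # {1, 2, 3} and 1/3 on this path graph
```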
Having shown how to compute Ψ(L), we now consider methods for maximizing it. Ψ is neither submodular nor supermodular. This seems to rule out straightforward set function optimization. In our experiments, we try a simple heuristic based on the following observation: for any L, if Ψ(L′) > Ψ(L) then it must be the case that L′ intersects one of the cuts minimizing min_{T⊆(V\L), T≠∅} Γ(T, V\T)/|T|. In other words, in order to increase Ψ(L) we must necessarily include a point from the current cut. Our heuristic is then to simply add a random element from this cut to L. The right of Figure 1 shows this method.

Several issues remain. First, although we have proposed a reasonable heuristic for maximizing Ψ(L), we do not have methods for maximizing it exactly or with guaranteed approximation. Aside from knowing the function is not submodular or supermodular, we also do not know the hardness of the problem. In the next section, we describe a lower bound on the Ψ function based on a notion of graph covering. This lower bound can be maximized approximately via a simple algorithm and has a well understood hardness of approximation. Second, we have found in experimenting with our heuristic for maximizing Ψ(L) that the function can be prone to imbalanced cuts; the computed cuts sometimes contain all or most of the unselected points V \ L and other times focus on small sets of outliers. We give a third bound on error which attempts to address some of this sensitivity.
3 Graph Covering Algorithm

The method we consider in this section uses a notion of graph covering. We say a set L α-covers the graph if ∀i ∈ V either i ∈ L or Σ_{j∈L} W_{i,j} ≥ α. In other words, every node in the graph is either in L or connected with total weight at least α to nodes in L (or both). This is a simple real-valued extension of dominating sets. A dominating set is a set L ⊆ V such that ∀i ∈ V either i ∈ L or a neighbor of i is in L (or both). This notion of covering is related to the Ψ function discussed in the previous section. In particular, if a set L α-covers a graph then it is necessarily the case that Ψ(L) ≥ α. The converse does not hold, however. In other words, α is a lower bound on Ψ(L). Then, α can replace Ψ(L) in the bound in the previous section for a looser upper bound on prediction error. Although the bound is looser, compared to maximizing Ψ(L) we better understand the complexity of computing an α-cover.
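The covering condition itself is immediate to check from the definition; a minimal sketch, with an assumed example graph:

```python
import numpy as np

# L alpha-covers the graph if every node is in L or has total edge weight
# at least alpha into L.
def is_alpha_cover(W, L, alpha):
    n = len(W)
    return all(i in L or W[i, sorted(L)].sum() >= alpha for i in range(n))

# 4-node path graph 0-1-2-3 (illustrative).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
print(is_alpha_cover(W, {1, 2}, 1.0))  # True: every node touches {1, 2}
print(is_alpha_cover(W, {0}, 1.0))     # False: nodes 2 and 3 are uncovered
```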
Corollary 1. For any ŷ consistent with a labeled set L that is an α-cover,

    ‖y − ŷ‖² ≤ (1/(2α)) Σ_{i,j} W_{i,j} (|y_i − y_j| ⊕ |ŷ_i − ŷ_j|) ≤ (1/(2α)) ( Σ_{i,j} W_{i,j} |y_i − y_j| + Σ_{i,j} W_{i,j} |ŷ_i − ŷ_j| ),

where ⊕ is the XOR operator.
Similar to Lemma 1, by making additional assumptions concerning the prediction method used we can derive a slightly simpler bound. In particular, for a labeled set L that is an α-cover, we assume unlabeled nodes are labeled with the weighted majority vote of their neighbors in L. In other words, set ŷ_i = y_i for i ∈ L, and set ŷ_i = y′ for i ∉ L with y′ such that Σ_{j∈L: y_j = y′} W_{i,j} ≥ Σ_{j∈L: y_j ≠ y′} W_{i,j}. With this prediction method we get the following bound.
Lemma 2. If L is an α-cover and V \ L is labeled according to majority vote, then

    ‖y − ŷ‖² ≤ (1/α) Σ_{i,j} W_{i,j} |y_i − y_j| (1 − |ŷ_i − ŷ_j|) ≤ (1/α) Σ_{i,j} W_{i,j} |y_i − y_j|.

Proof. The right hand side follows immediately from the middle expression, so we focus on the first inequality. For every incorrectly labeled node i, there is a set of nodes L_i = {j ∈ L : ŷ_i = ŷ_j} which satisfies y_i ≠ y_j ∀j ∈ L_i, and Σ_{j∈L_i} W_{i,j} ≥ α/2. We then have for every incorrectly labeled node a unique set of edges with total weight at least α/2 included inside the summation in the middle expression.
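The weighted majority-vote predictor assumed in Lemma 2 can be sketched directly; the tie-breaking rule (toward label 1) and the example graph are assumptions of this illustration:

```python
import numpy as np

# Labeled nodes keep their labels; each unlabeled node takes the label with
# the larger total edge weight into L (ties broken toward label 1 here).
def majority_vote(W, y, L):
    n = len(y)
    y_hat = np.array(y, dtype=int)
    for i in range(n):
        if i not in L:
            w1 = sum(W[i, j] for j in L if y[j] == 1)
            w0 = sum(W[i, j] for j in L if y[j] == 0)
            y_hat[i] = 1 if w1 >= w0 else 0
    return y_hat

# Path graph 0-1-2-3 with the two endpoints labeled.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
y = [0, 0, 1, 1]
print(majority_vote(W, y, {0, 3}))   # prints [0 0 1 1]
```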
In computing an α-cover, we want to solve

    min_{L⊆V} |L| : F(L) ≥ α

where

    F(L) ≜ min_{i∈V\L} Σ_{j∈L} W_{i,j} = F′(L) ≜ min_{i∈V} Σ_{j∈L} W′_{i,j}

and W′_{i,j} = W_{i,j} for i ≠ j and W′_{i,i} = ∞. F′ is the minimum of a set of modular functions. F′ is neither supermodular nor submodular. However, we can still compute an approximately minimal
α-cover using a trick introduced by Krause et al. [2008]. In particular, Krause et al. [2008] point out that

    min_{i∈V} Σ_{j∈L} W′_{i,j} ≥ α  ⟺  Σ_i (1/n) min( Σ_{j∈L} W′_{i,j}, α ) ≥ α.

Also, min(Σ_{j∈L} W′_{i,j}, α) is submodular, and the sum of submodular functions is submodular. Then, we can replace F′ with

    F̂′(L) = Σ_i (1/n) min( Σ_{j∈L} W′_{i,j}, α )

and solve

    min_{L⊆V} |L| : F̂′(L) ≥ α.

This is a submodular set cover problem. The greedy algorithm has approximation guarantees for this problem for integer valued functions [Krause et al., 2008]. For binary weight graphs the approximation is O(log n). For real valued functions, it is possible to round the function values to get an approximation guarantee. In practice, we apply the greedy algorithm directly.
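A minimal sketch of the greedy algorithm on the truncated objective F̂′, assuming a small dense W; the example graph and tie-breaking are illustrative choices, not the paper's implementation:

```python
import numpy as np

# Greedy submodular set cover for F_hat(L) = (1/n) * sum_i min(cov_i(L), alpha),
# where cov_i(L) = sum_{j in L} W'_{i,j} and W'_{i,i} = infinity is handled by
# treating i in L as fully covered (contributing alpha).
def greedy_alpha_cover(W, alpha):
    n = len(W)
    def coverage(L):
        tot = 0.0
        for i in range(n):
            tot += alpha if i in L else min(W[i, sorted(L)].sum() if L else 0.0, alpha)
        return tot / n
    L = set()
    while coverage(L) < alpha:
        # add the vertex with the largest marginal gain in truncated coverage
        best = max(set(range(n)) - L, key=lambda v: coverage(L | {v}))
        L.add(best)
    return L

# Path graph 0-1-2-3 (illustrative).
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = greedy_alpha_cover(W, 1.0)
print(sorted(L))   # a 2-vertex alpha-cover of the path, e.g. [1, 2]
```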
As previously mentioned, α-covers can be seen as real valued generalizations of dominating sets. In particular, an α-cover is a dominating set for binary weight graphs and α = 1. The hardness of approximation results for finding a minimum size dominating set then carry over to the more general α-cover problem. The next theorem shows that the α-cover problem is NP-hard and in fact the greedy algorithm for computing an α-cover is optimal up to constant factors for α = 1 and binary weight graphs. It is based on the well known connection between finding a minimum dominating set and finding a minimum set cover.
Theorem 3. Finding the smallest dominating set L in a binary weight graph is NP-complete. Furthermore, if there is some ε > 0 such that a polynomial time algorithm approximates the smallest dominating set within (1 − ε) ln(n/2) then NP ⊆ TIME(n^{O(log log n)}).
We have so far discussed computing a small α-cover for a fixed α. If we instead have a fixed label budget and want to maximize α, we can do so by performing binary search over α. This is the approach used by Krause et al. [2008] and gives a bi-approximation.
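The binary search just described can be sketched as follows (a hypothetical wrapper around any routine `cover_size(alpha)` returning the size of the greedy α-cover; the bracketing interval and tolerance are assumptions):

```python
def max_alpha_for_budget(cover_size, budget, alpha_hi, tol=1e-3):
    """Binary search for (approximately) the largest alpha whose greedy
    alpha-cover uses at most `budget` labels. `cover_size(alpha)` is assumed
    to be monotone non-decreasing in alpha."""
    lo, hi = 0.0, alpha_hi
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if cover_size(mid) <= budget:
            lo = mid          # mid is feasible; try a larger alpha
        else:
            hi = mid
    return lo
```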
4 Normalized Cut Algorithm
In this section we consider an algorithm that clusters the data set and replaces the Ψ function with a normalized cut value. The normalized cut value for a set T ⊆ V is

    Φ(T, V \ T) / min(|T|, |V \ T|)

In other words, normalized cut is the ratio between the cut value for T and the minimum of the size of T and its complement. Computing the minimum normalized cut for a graph is NP-hard.
Consider the following method: 1) partition the set of nodes V into clusters S_1, S_2, ..., S_k, 2) for each cluster request sufficient labels to estimate the majority class with probability at least 1 − δ/k, and 3) label all nodes in each cluster with the majority label for that cluster. Here the probability 1 − δ/k is with respect to the choice of the labeled nodes used to estimate the majority class for each cluster.
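A minimal sketch of steps 1-3 (assuming the clustering is given; the helper names and the default of three samples per cluster are illustrative only — the experiments below use a single label per cluster):

```python
import numpy as np

def majority_label_predictions(clusters, label_oracle, samples_per_cluster=3, seed=0):
    """For each cluster, request a few labels uniformly at random, estimate the
    majority class, and assign that label to every node in the cluster."""
    rng = np.random.default_rng(seed)
    y_hat = {}
    for S in clusters:
        m = min(samples_per_cluster, len(S))
        picked = rng.choice(S, size=m, replace=False)
        votes = [label_oracle(int(i)) for i in picked]
        majority = 1 if sum(votes) * 2 > len(votes) else 0   # 0/1 labels
        for i in S:
            y_hat[i] = majority
    return y_hat
```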
Theorem 4. Let S_1, S_2, ..., S_k be a partition of V, and assume we have estimates of the majority class of each S_l, each of which is accurate with probability at least 1 − δ/k. If ŷ labels every i ∈ S_l according to the estimated majority label for S_l, then with probability at least 1 − δ

    ||y − ŷ||² ≤ Σ_l (1/(2Φ_l)) Σ_{i,j∈S_l} W_{i,j}|y_i − y_j| ≤ (1/(2Φ)) Σ_{i,j} W_{i,j}|y_i − y_j|

where

    Φ_l = min_{T⊆S_l} Φ(T, S_l \ T) / min(|T|, |S_l \ T|)   and   Φ = min_l Φ_l.
Proof. By the union bound, the estimated majority labels for all of the clusters are correct with probability at least 1 − δ. Let I be the set of incorrectly labeled nodes (errors). We consider the intersection of I with each of the clusters. Let I_l ≜ I ∩ S_l, so that I = ∪_{l=1}^k I_l. Note that |I_l| ≤ |S_l \ I_l| since we labeled each cluster according to the majority label for the cluster. Then

    |I| = Σ_l |I_l| = Σ_l [ |I_l| / Φ(I_l, S_l \ I_l) ] Φ(I_l, S_l \ I_l)
        = Σ_l [ min(|I_l|, |S_l \ I_l|) / Φ(I_l, S_l \ I_l) ] Φ(I_l, S_l \ I_l)
        ≤ Σ_l Φ(I_l, S_l \ I_l) / Φ_l

For any i, j with i ∈ I_l and j ∈ S_l \ I_l, we must have y_i ≠ y_j. Also, for any i, j with y_i ≠ y_j and i, j ∈ S_l, either i ∈ I_l or j ∈ I_l. In other words, there is a one-to-one correspondence between 1) edges i, j for which i, j ∈ S_l and either i ∈ I_l or j ∈ I_l and 2) edges i, j for which i, j ∈ S_l and y_i ≠ y_j, and hence Φ(I_l, S_l \ I_l) = (1/2) Σ_{i,j∈S_l} W_{i,j}|y_i − y_j|. The desired result then follows.
Note in practice we only label the unlabeled nodes in each cluster using the majority label estimates.
Using the true labels for the labeled nodes only decreases error, so the theorem still holds.
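As a sanity check (our own toy example, not from the paper), both inequalities of Theorem 4 can be verified by brute force on a small graph:

```python
import itertools
import numpy as np

def ncut_constant(W, S):
    # Phi_l = min over nonempty proper T of cut(T, S\T) / min(|T|, |S\T|),
    # by exhaustive enumeration (feasible only for tiny clusters).
    best = np.inf
    for r in range(1, len(S)):
        for T in itertools.combinations(S, r):
            rest = [i for i in S if i not in T]
            cut = sum(W[i, j] for i in T for j in rest)
            best = min(best, cut / min(len(T), len(rest)))
    return best

def smoothness(W, y, S):
    # Sum over ordered pairs i, j in S of W[i, j] * |y_i - y_j|.
    return sum(W[i, j] * abs(y[i] - y[j]) for i in S for j in S)

# Two cliques {0,1,2,3} and {4,5}; node 3 carries the minority label in its cluster.
W = np.zeros((6, 6))
for group in ([0, 1, 2, 3], [4, 5]):
    for i in group:
        for j in group:
            if i != j:
                W[i, j] = 1.0
y = [0, 0, 0, 1, 1, 1]
clusters = [[0, 1, 2, 3], [4, 5]]
y_hat = [0, 0, 0, 0, 1, 1]                 # true majority label per cluster
err = sum((yi - yh) ** 2 for yi, yh in zip(y, y_hat))
middle = sum(smoothness(W, y, S) / (2 * ncut_constant(W, S)) for S in clusters)
phi = min(ncut_constant(W, S) for S in clusters)
right = smoothness(W, y, range(6)) / (2 * phi)
```

Here err = 1, the per-cluster bound evaluates to 1.5, and the global bound to 3, so the chain of inequalities holds.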
In this bound, Φ is a measure of the density of the clusters. Computing Φ_l for a particular cluster is NP-hard, but there are approximation algorithms. However, we are not aware of approximation algorithms for computing a partition such that Φ is maximized. This is different from the standard normalized cut clustering problem; we do not care whether clusters are strongly connected to each other, only that each cluster is internally dense. In our experiments, we try several standard clustering algorithms and achieve good real world performance, but it remains an interesting open question to design a clustering algorithm for directly maximizing Φ. An approach we have not yet tried is to use the error bound to choose between the results of different clustering algorithms.
We now consider the problem of estimating the majority class for a cluster. If we uniformly sample labels from a cluster, standard results give that the probability of incorrectly estimating the majority decreases exponentially with the number of labels if the fraction of nodes in the minority class is bounded away from 1/2 by a constant. We now show that if the labels are sufficiently smooth and the cluster is sufficiently dense then the fraction of nodes in the minority class is small.
Theorem 5. The fraction of nodes in the minority class of S is at most

    Σ_{i,j∈S} W_{i,j}|y_i − y_j| / (Φ |S|)

where

    Φ = min_{T⊆S} Φ(T, S \ T) / min(|T|, |S \ T|)
Proof. Let S⁻ be the set of nodes belonging to the minority class and S⁺ be the set of nodes belonging to the other class. Let f be the fraction of nodes in the minority class. Then

    f = |S⁻| / |S| = min(|S⁺|, |S⁻|) / |S|
      = [ Σ_{i,j∈S} W_{i,j}|y_i − y_j| / |S| ] · [ min(|S⁺|, |S⁻|) / Σ_{i,j∈S} W_{i,j}|y_i − y_j| ]
      ≤ [ Σ_{i,j∈S} W_{i,j}|y_i − y_j| / |S| ] · [ min(|S⁺|, |S⁻|) / Φ(S⁺, S⁻) ]
      ≤ Σ_{i,j∈S} W_{i,j}|y_i − y_j| / (Φ |S|)
If we have an estimate of the smoothness of the labels in a cluster, we can use this bound and an approximation of Φ to determine the number of labels needed to estimate the majority class with high confidence. In our experiments, we simply request a single label per cluster.
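For concreteness, a standard Hoeffding-style calculation (our addition, not from the paper) gives the number of uniform label samples needed per cluster:

```python
import math

def labels_needed(minority_fraction, delta, k):
    """Sample size m so that a majority vote of m uniform label samples from a
    cluster misidentifies the majority class with probability at most delta/k,
    assuming the minority fraction f is bounded away from 1/2.
    Uses Hoeffding's inequality: failure probability <= exp(-2 m (1/2 - f)^2)."""
    f = minority_fraction
    assert f < 0.5
    return math.ceil(math.log(k / delta) / (2.0 * (0.5 - f) ** 2))
```

For example, with minority fraction 1/4, k = 10 clusters, and overall failure probability δ = 0.05, this gives 43 labels per cluster; the requirement grows quickly as f approaches 1/2.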
Data set/labels   Spectral        k-Cut           METIS          Ψ              Baseline
Digit1/10         9.54 (4.42)     50.02 (1.04)    4.93 (4.05)    49.92 (3.18)   20.90 (15.67)
Text/10           37.64 (8.64)    50.03 (0.3)     34.76 (6.05)   50.05 (0.06)   45.91 (7.96)
BCI/10            50.13 (2.16)    50.16 (0.64)    49.68 (2.63)   50.32 (0.55)   50.12 (1.32)
USPS/10           15.22 (6.22)    31.53 (23.65)   8.15 (5.51)    20.07 (2.70)   15.87 (4.82)
g241c/10          39.63 (5.67)    50.03 (0.03)    29.18 (7.28)   50.29 (0.07)   47.26 (5.19)
g241d/10          22.31 (7.06)    50.02 (0.23)    22.57 (7.26)   50.01 (0.09)   48.46 (3.39)
Digit1/100        4.47 (1.35)     50.07 (1.46)    3.24 (0.76)    2.60 (0.83)    2.57 (0.67)
Text/100          31.67 (2.41)    50.26 (2.73)    32.57 (1.88)   48.34 (0.67)   26.82 (3.88)
BCI/100           47.37 (2.80)    50.14 (0.5)     45.35 (1.91)   48.17 (1.87)   47.48 (2.99)
USPS/100          6.23 (1.49)     31.13 (26.31)   9.28 (1.38)    10.17 (0.39)   6.33 (2.46)
g241c/100         44.31 (2.09)    50.02 (0.18)    37.47 (2.13)   52.48 (0.37)   42.86 (4.50)
g241d/100         41.70 (2.44)    50.03 (0.18)    35.96 (1.99)   50.33 (0.21)   41.56 (4.34)

Table 1: Error rate mean (standard deviation) for different data set, label count, method combinations.
Figure 2: Left: Points selected by the Ψ function maximization method. Right: Points selected by the spectral clustering method.
5 Experiments
We experimented with a method based on Lemma 1. We use the randomized method for maximizing Ψ and then predict with min-cuts [Blum and Chawla, 2001]. We also tried a method based on Theorem 4. We cluster the data, then label each cluster according to a single randomly chosen point. We chose the number of clusters to be equal to the number of labeled points, observing that if a cluster is split evenly amongst the two classes then we will have a high error rate regardless of how well we estimate the majority class. We tried three clustering algorithms: a spectral clustering method [Ng et al., 2001], the METIS package for graph partitioning [Karypis and Kumar, 1999], and a k-cut approximation algorithm [Saran and Vazirani, 1995, Gusfield, 1990]. As a baseline we use random label selection and prediction using the label propagation method of Bengio et al. [2006] with both of its smoothing parameters set to 10^-6 and with class mass normalization. We also experimented with a method motivated by the graph covering bound, but for lack of space we omit these results.
We used six benchmark data sets [Chapelle et al., 2006]. We use graphs constructed with a Gaussian kernel with standard deviation chosen to be the average distance to the k1-th nearest neighbor divided by 3 (a similar heuristic is used by Chapelle et al. [2006]). We then make this graph sparse by removing the edge between nodes i and j unless i is one of j's k2 nearest neighbors or j is one of i's k2 nearest neighbors. We use 10 and 100 labels. We set k1 and k2 for each data set and label count to be the parameters which give the lowest average error rate for label propagation averaged over 100 trials, choosing from the set {5, 10, 50, 100}. We tune the graph construction parameters to give low error for the baseline method to ensure any bias is in favor of the baseline as opposed to the new methods we propose. We then report average error over 1000 trials in the 10 label case and 100 trials in the 100 label case for each combination of data set and algorithm.
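One plausible reading of this graph construction (a sketch under our interpretation; the paper does not specify these implementation details, and the exact kernel normalization is an assumption) is:

```python
import numpy as np

def build_graph(X, k1, k2):
    """Gaussian-kernel graph: bandwidth = (mean distance to the k1-th nearest
    neighbor) / 3, then sparsified by keeping edge (i, j) iff i is among j's
    k2 nearest neighbors or j is among i's k2 nearest neighbors."""
    n = X.shape[0]
    D = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    sigma = np.sort(D, axis=1)[:, k1].mean() / 3.0   # column 0 is the node itself
    W = np.exp(-D ** 2 / (2.0 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    order = np.argsort(D, axis=1)
    knn = np.zeros((n, n), dtype=bool)
    knn[np.arange(n)[:, None], order[:, 1:k2 + 1]] = True
    return W * (knn | knn.T)       # symmetric sparsification rule from the text
```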
Table 1 shows these results. We find that the Ψ function method does not perform well. We found on most of the data sets that the cuts found by the method included all or almost all of V \ L. In this case the points selected are essentially random. However, on the USPS data set and on some synthetic data sets we have tried, we have also observed the opposite behavior, where the cuts are very small and seem to focus on small sets of outliers. Figure 2 shows an example of this. The k-cut method also did not perform well. We've found this method has similar problems with outliers. We think these outlier-sensitive methods are impractical for graphs constructed from real world data.
The results for the spectral clustering and METIS clustering methods, however, are quite encouraging. These methods performed well, matching or beating the baseline method on the 10 label trials and in some cases significantly improving performance. The METIS method seems particularly robust. On the 100 label trials, performance was not as good. In general, we expect label selection to help more when learning from very few labels. The choice of clustering method seems to be of great practical importance. The clustering methods which work best seem to be methods which minimize normalized cut like objectives. This is not surprising given the presence of the normalized cut term in Theorem 4, but it is an open problem to give a clustering method for directly minimizing the bound.
We finally note that the numbers we report for our baseline method are in some cases significantly different than the published numbers [Chapelle et al., 2006]. This seems to be because of a variety of factors including differences in implementation as well as significant differences in experiment set up. We have also experimented with several heuristic modifications to our methods and compared our methods to simple greedy methods. One modification we tried is to use label propagation for prediction in conjunction with our label selection methods. We omit these results for lack of space.
6 Related Work
Previous work has also used clustering, covering, and other graph properties to guide label selection on graphs. We are, however, the first to our knowledge to give bounds which relate prediction error to label smoothness for single batch label selection methods. Most previous work on label selection methods for learning on graphs has considered active (i.e. sequential) label selection [Zhu and Lafferty, 2003, Pucci et al., 2007, Zhao et al., 2008, Wang et al., 2007, Afshani et al., 2007]. Afshani et al. [2007] show that in this setting O(c log(n/c)) labels, where c = Σ_{i,j} W_{i,j}|y_i − y_j|, are sufficient and necessary to learn the labeling exactly under some balance assumptions. Without balance assumptions they show O(c log(1/ε) + c log(n/c)) labels are sufficient to achieve an ε error rate. In some cases, our bounds are better despite considering only non-sequential label selection. Consider the case where c grows linearly with n, so c/n = a for some constant a > 0. In this case, with the bound of Afshani et al. [2007] the number of labels required to achieve a fixed error rate ε also grows linearly with n. In comparison, our graph covering bound needs an α-cover with α = a/ε. For some graph topologies, the size of such a cover can grow sublinearly with n (for example if the graph contains large, dense clusters). Afshani et al. [2007] also use a kind of dominating set in their method, and it could be interesting to see if portions of their analysis could be adapted to the offline setting. Zhao et al. [2008] also use a clustering algorithm to select initial labels.
Other work has given generalization error bounds in terms of label smoothness [Pelckmans et al., 2007, Hanneke, 2006, Blum et al., 2004] for transductive learning from a randomly selected L. These bounds are PAC style, and typically show that, roughly, the error rate decreases with O(Σ_{i,j} W_{i,j}|y_i − y_j|/(b|L|)) where b is the minimum 2-cut of the graph. Depending on the graph structure, our bounds can be significantly better. For example, if a binary weight graph contains c cliques of size n/c then we can find an α-cover of size cα log(cα), giving an error rate of O(Σ_{i,j} W_{i,j}|y_i − y_j|/(nα)). This is better if c log(cα) < n/b.
A line of work has examined mistake bounds in terms of label smoothness for online learning on graphs [Pelckmans and Suykens, 2008, Brautbar, 2009, Herbster et al., 2008, 2005, Herbster, 2008]. These mistake bounds hold no matter how the sequence of vertices is chosen. Herbster [2008] also considers how cluster structure can improve mistake bounds in this setting and gives a mistake bound similar to our graph covering bound on prediction error. Herbster et al. [2005] discuss using an active learning method for the first several steps of an online algorithm. Our work differs from this previous work by considering prediction error bounds for offline learning as opposed to mistake bounds for online learning. The mistake bound setting is significantly different as the prediction method receives feedback after every prediction.
Acknowledgments
This material is based upon work supported by the National Science Foundation under grant IIS-0535100.
References
P. Afshani, E. Chiniforooshan, R. Dorrigiv, A. Farzan, M. Mirzazadeh, N. Simjour, and H. Zarrabi-Zadeh. On
the complexity of finding an unknown cut via vertex queries. In COCOON, 2007.
Y. Bengio, O. Delalleau, and N. Le Roux. Label propagation and quadratic criterion. In O. Chapelle, B. Schölkopf, and A. Zien, editors, Semi-Supervised Learning. MIT Press, 2006.
A. Blum and S. Chawla. Learning from labeled and unlabeled data using graph mincuts. In ICML, 2001.
A. Blum, J. Lafferty, M. R. Rwebangira, and R. Reddy. Semi-supervised learning using randomized mincuts.
In ICML, 2004.
M. Brautbar. Online Learning a Labeling of a Graph. Mining and Learning with Graphs, 2009.
O. Chapelle, B. Schölkopf, and A. Zien. Semi-supervised learning. MIT Press, 2006.
W. Cunningham. Optimal attack and reinforcement of a network. Journal of the ACM, 1985.
S. Fujishige. Submodular Functions and Optimization. Elsevier Science, 2005.
D. Gusfield. Very simple methods for all pairs network flow analysis. SIAM Journal on Computing, 1990.
S. Hanneke. An analysis of graph cut size for transductive learning. In ICML, 2006.
M. Herbster. Exploiting Cluster-Structure to Predict the Labeling of a Graph. In ALT, 2008.
M. Herbster, M. Pontil, and L. Wainer. Online learning over graphs. In ICML, 2005.
M. Herbster, G. Lever, and M. Pontil. Online Prediction on Large Diameter Graphs. In NIPS, 2008.
G. Karypis and V. Kumar. A fast and high quality multilevel scheme for partitioning irregular graphs. SIAM Journal on Scientific Computing, 1999.
A. Krause, H. B. McMahan, C. Guestrin, and A. Gupta. Robust submodular observation selection. JMLR,
2008.
A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: Analysis and an algorithm. In NIPS, 2001.
K. Pelckmans and J. Suykens. An online algorithm for learning a labeling of a graph. In Mining and Learning
with Graphs, 2008.
K. Pelckmans, J. Shawe-Taylor, J. Suykens, and B. De Moor. Margin based transductive graph cuts using linear
programming. 2007.
A. Pucci, M. Gori, and M. Maggini. Semi-supervised active learning in graphical domains. In Mining and
Learning With Graphs, 2007.
H. Saran and V. V. Vazirani. Finding k cuts within twice the optimal. SIAM Journal on Computing, 1995.
M. Wang, X. Hua, Y. Song, J. Tang, and L. Dai. Multi-Concept Multi-Modality Active Learning for Interactive
Video Annotation. In International Conference on Semantic Computing, 2007.
W. Zhao, J. Long, E. Zhu, and Y. Liu. A scalable algorithm for graph-based active learning. In Frontiers in
Algorithmics, 2008.
X. Zhu and J. Lafferty. Combining active learning and semi-supervised learning using gaussian fields and
harmonic functions. In ICML, 2003.
Raman Arora
University of Wisconsin-Madison
Department of Electrical and Computer Engineering
1415 Engineering Drive, Madison, WI 53706
[email protected]
Abstract
An algorithm is presented for online learning of rotations. The proposed algorithm
involves matrix exponentiated gradient updates and is motivated by the von Neumann divergence. The multiplicative updates are exponentiated skew-symmetric
matrices which comprise the Lie algebra of the rotation group. The orthonormality and unit determinant of the matrix parameter are preserved using matrix logarithms and exponentials and the algorithm lends itself to intuitive interpretation
in terms of the differential geometry of the manifold associated with the rotation
group. A complexity reduction result is presented that exploits the eigenstructure
of the matrix updates to simplify matrix exponentiation to a quadratic form.
1 Introduction
The problem of learning rotations finds application in many areas of signal processing and machine
learning. It is an important problem since many problems can be reduced to that of learning rotations; for instance, Euclidean motion in R^{n-1} is simply rotation in R^n. A conformal embedding was
presented in [1] that extends rotations to a representation for all Euclidean transformations. Furthermore, the rotation group provides a universal representation for all Lie groups. This was established
in [2] by showing that any Lie algebra can be expressed as a bivector algebra. Since the Lie algebra
describes the structure of the associated Lie group completely, any Lie group can be represented as
rotation group.
The batch version of the problem was originally posed as the problem of estimating the attitude of
satellites by Wahba in 1965 [3]. In psychometrics, it was presented as the orthogonal Procrustes
problem [4]. It has been studied in various forms over the last few decades and finds application in
many areas of computer vision [5, 6, 7], face recognition [8], robotics [9, 10], crystallography[11]
and physics [12].
While the batch version of the problem is well understood, the online learning of rotations from
vector instances is challenging since the manifold associated with the rotation group is a curved
space and it is not possible to form updates that are linear combinations of rotations [13]. The set
of rotations about the origin in n-dimensional Euclidean space forms a compact Lie group, SO(n),
under the operation of composition. The manifold associated with the n-dimensional rotation group
is the unit sphere S^{n-1} in n-dimensional Euclidean space.
1.1 Related Work
The online version of learning rotations was posed as an open problem by Smith and Warmuth
[13]. Online learning algorithms were recently presented for some matrix groups. In [14], an online
algorithm was proposed for learning density matrix parameters and was extended in [15] to the
problem of learning subspaces of low rank. However, the extension of these algorithms to learning
rotations will require repeated projection and approximation [13]. Adaptive algorithms were also
1
studied in [16] for optimization under unitary matrix constraint. The proposed methods are steepest
descent methods on Riemannian manifolds.
1.2 Our Approach
This paper presents an online algorithm for learning rotations that utilizes the Bregman matrix divergence with respect to the quantum relative entropy (also known as von Neumann divergence) as a
distance measure between two rotation matrices. The resulting algorithm has matrix-exponentiated
gradient (MEG) updates [14]. The key ingredients of our approach are (a) von Neumann Divergence
between rotation matrices [17], (b) squared error loss function and (c) matrix exponentiated gradient
(MEG) updates.
Any Lie group is also a smooth manifold and the updates in the proposed algorithm have an intuitive
interpretation in terms of the differential topology of the associated manifold. We also utilize various
elementary Lie algebra concepts to provide intuitive interpretation of the updates. The development
in the paper closely follows that of the matrix exponentiated gradient (MEG) updates in [14] for
density matrix parameters. The form of the updates are similar to steepest descent methods of [16],
but are derived for learning rotations from vector instances using an information-theoretic approach.
The MEG updates are reduced to a quadratic form in the Lie algebra element corresponding to the
gradient of loss function on the rotation group.
The paper is organized as follows. The problem is formulated in Section 2. Section 3 presents
mathematical preliminaries in differential geometry and Bregman matrix divergence. The matrix
exponentiated gradient updates are developed in Section 4. The MEG updates are simplified in
Section 5. Experimental results are discussed in Section 6.
2 Problem Statement
Let x_t be a stream of instances of n-dimensional unit vectors. Let R* be an unknown n × n rotation matrix that acts on x_t to give the rotated vector y_t = R* x_t. The matrix R̂_t denotes the estimate of R* at instance t and ŷ_t = R̂_t x_t represents the prediction for the rotated vector y_t. The loss incurred due to error in prediction is L_t(R̂_t) = d(ŷ_t, y_t), where d(·, ·) is a distance function. The estimate of the rotation needs to be updated based on the loss incurred at every instance and the objective is to develop an algorithm for learning R* that has a bounded regret.
We seek adaptive updates that solve the following optimization problem at each step,

    R̂_{t+1} = arg min_R Δ_F(R, R̂_t) + η L_t(R),    (1)

where R̂_t is the estimated rotation matrix at instance t, η is the learning rate or the step-size and Δ_F is a matrix divergence that measures the discrepancy between matrices. This is a typical problem formulation in online learning where the objective comprises a loss function and a divergence term. The parameter η balances the trade-off between the two conflicting goals at each update: incurring small loss on the new data versus confidence in the estimate from the previously observed data. Minimizing the weighted objective therefore results in smooth updates as well as minimizes the loss function.
In this paper, the updates are smoothed using the von Neumann divergence which is defined for matrices as

    Δ_F(R, R̂_t) = tr(R log R − R log R̂_t − R + R̂_t),    (2)

where tr(A) is the trace of the matrix A. The search is over all R ∈ SO(n), i.e. over all n × n matrices such that R^T R = I, R R^T = I and det(R) = 1.
3 Mathematical Preliminaries
This section reviews some basic definitions and concepts in linear algebra and differential geometry that are utilized for the development of the updates in the next section.
3.1 Matrix Calculus
Given a real-valued matrix function F : R^{n×n} → R, the gradient of the function with respect to the matrix R ∈ R^{n×n} is defined to be the matrix [18],

    ∇_R F(R) = [ ∂F/∂R_{11}  · · ·  ∂F/∂R_{1n}
                     ⋮         ⋱        ⋮
                 ∂F/∂R_{n1}  · · ·  ∂F/∂R_{nn} ]    (3)

Some of the matrix derivatives that are used later in the paper are the following: for a constant matrix Θ ∈ R^{n×n},
1. ∇_R tr(Θ R R^T) = (Θ + Θ^T) R,
2. ∇_R det(R) = det(R) (R^{-1})^T,
3. ∇_R (y − Rx)^T (y − Rx) = −2 (y − Rx) x^T.
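These identities are easy to verify numerically with central finite differences (a standalone check we add for illustration; the random test matrices are arbitrary):

```python
import numpy as np

def num_grad(F, R, eps=1e-6):
    """Entrywise central-difference approximation of the matrix gradient (3)."""
    G = np.zeros_like(R)
    for i in range(R.shape[0]):
        for j in range(R.shape[1]):
            E = np.zeros_like(R)
            E[i, j] = eps
            G[i, j] = (F(R + E) - F(R - E)) / (2 * eps)
    return G

rng = np.random.default_rng(0)
n = 4
R = rng.standard_normal((n, n))
Theta = rng.standard_normal((n, n))
x = rng.standard_normal(n)
y = rng.standard_normal(n)

g1 = num_grad(lambda R: np.trace(Theta @ R @ R.T), R)      # identity 1
g2 = num_grad(np.linalg.det, R)                             # identity 2
g3 = num_grad(lambda R: (y - R @ x) @ (y - R @ x), R)       # identity 3
```

Each of g1, g2, g3 matches the corresponding closed-form expression up to finite-difference error.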
A related concept in differential geometry is that of the space of vectors tangent to a group at the
identity element of the group. This is defined to be the Lie algebra associated with the group. It is
a convenient way of describing the infinitesimal structure of a topological group about the identity
element and completely determines the associated group. The utility of the Lie algebra is due to the
fact that it is a vector space and thus it is much easier to work with it than with the linear group.
A real n × n matrix A is in the Lie algebra of the rotation group SO(n) if and only if it is a skew-symmetric matrix (i.e. A^T = −A). Furthermore, for any matrix A in the Lie algebra of SO(n), exp(θA) is a one-parameter subgroup of the rotation group, parametrized by θ ∈ R [19].
The matrix exponential and logarithm play an important role in relating a matrix Lie group G and the associated Lie algebra g. The exponential of a matrix R ∈ R^{n×n} is given by the following series,

    exp(R) = I + R + (1/2!) R^2 + (1/3!) R^3 + · · ·    (4)
Given an element A ∈ g, the matrix exponential exp(A) is the corresponding element in the group. The matrix logarithm log(R) is defined to be the inverse of the matrix exponential: it maps from the Lie group G into the Lie algebra g. The matrix logarithm is a well-defined map since the exponential map is a local diffeomorphism between a neighborhood of the zero matrix and a neighborhood of the identity matrix [19, 20].
3.2 Riemannian Gradient
Consider a real-valued differentiable function, L_t : SO(n) → R, defined on the rotation group. The Riemannian gradient ∇̃_R L_t of the function L_t on the Lie group SO(n), evaluated at the rotation matrix R and translated to the identity (to get a Lie algebra element), is given as [16]

    ∇̃_R L_t = ∇_R L_t R^T − R (∇_R L_t)^T,    (5)

where ∇_R L_t is the matrix derivative of the cost function in the Euclidean space defined in (3) at matrix R.
3.3 Von Neumann Divergence
In any online learning problem, the choice of divergence between the parameters dictates the resulting updates. This paper utilizes the von Neumann divergence which is a special case of the Bregman divergence and measures discrepancy between two matrices.
Let F be a convex differentiable function defined on a subset of R^{n×n} with the gradient f(R) = ∇_R F(R). The Bregman divergence between two matrices R_1 and R_2 is defined as

    Δ_F(R_1, R_2) := F(R_1) − F(R_2) − tr((R_1 − R_2) f(R_2)^T).    (6)

The gradient of the Bregman divergence with respect to R_1 is given as,

    ∇_{R_1} Δ_F(R_1, R_2) = f(R_1) − f(R_2).    (7)

Choosing the function F in the definition of the Bregman divergence to be the von Neumann entropy, given as F(R) = tr(R log R − R), we obtain the von Neumann divergence [14, 17]:

    Δ_F(R_1, R_2) = tr(R_1 log R_1 − R_1 log R_2 − R_1 + R_2).    (8)

Finally, the gradient of the von Neumann entropy was shown to be f(R) = ∇_R F(R) = log R in [14]. Consequently, the gradient of the von Neumann divergence can be expressed as

    ∇_{R_1} Δ_F(R_1, R_2) = log(R_1) − log(R_2).    (9)
(9)
Online Algorithm
The problem of online learning of rotations can be expressed as the optimization problem

R̂_{t+1} = arg min_R  Δ_F(R, R̂_t) + η L_t(R)   (10)
s.t.  R^T R = I,  R R^T = I
      det(R) = 1

where R̂_t is the estimate of the rotation matrix at time instance t and L_t is the loss incurred in the prediction of y_t. The proposed adaptive updates are matrix exponentiated gradient (MEG) updates given as

R̂_{t+1} = exp( log R̂_t − η skew( R̂_t^T ∇_R L_t(R̂_t) ) ),   (11)

where ∇_R L_t(R̂_t) is the gradient of the cost function in the Euclidean space with respect to the rotation matrix R and skew(·) is the skew-symmetrization operator on matrices, skew(A) = A − A^T. The updates seem intuitive, given the following elementary facts about the Lie algebraic structure of the rotation group: (a) the gradient of the loss function gives the geodesic direction and velocity vector on the unit sphere, (b) a skew-symmetric matrix is an element of the Lie algebra [19, 20], (c) the matrix logarithm maps a rotation matrix to the corresponding Lie algebra element, (d) the composition of two elements of the Lie algebra yields another Lie algebra element, and (e) the matrix exponential maps a Lie algebra element to the corresponding rotation matrix.
The loss function is defined to be the squared error loss function and therefore the gradient of the loss function is given by the matrix ∇_R L_t(R̂_t) = 2(ŷ_t − y_t) x_t^T. This results in the online updates

R̂_{t+1} = exp( log R̂_t − 2η skew( R̂_t^T (ŷ_t − y_t) x_t^T ) )
         = R̂_t exp( −2η skew( R̂_t^T (ŷ_t − y_t) x_t^T ) ).   (12)

4.1
Updates Motivated by von-Neumann Divergence
The optimization problem in (10) is solved using the method of Lagrange multipliers. First observe that the constraints R^T R = I and R R^T = I are redundant since one implies the other. Introducing the Lagrangian multiplier matrix Λ for the orthonormality constraint and the Lagrangian multiplier λ for the unity determinant constraint, the objective function can be written as

J(R, Λ, λ) = Δ_F(R, R̂_t) + η L_t(R) + tr(Λ(R R^T − I)) + λ(det(R) − 1).   (13)

Taking the gradient on both sides of the equation with respect to the matrix R, we get

∇_R J(R, Λ, λ) = ∇_R Δ_F(R, R̂_t) + η ∇̃_R L_t(R) + (Λ + Λ^T) R + λ det(R)(R^{−1})^T,   (14)

using the matrix derivatives from Section 3.1 and the Riemannian gradient for the loss function from eqn. (5). Setting ∇_R J(R, Λ, λ) = 0 and using the fact that ∇_R Δ_F(R, R̂_t) = f(R) − f(R̂_t), we get

0 = f(R) − f(R̂_t) + η skew( R̂_t^T ∇_R L_t(R) ) + (Λ + Λ^T) R + λ det(R)(R^{−1})^T.   (15)

Given that f is a bijective map, we can write

R = f^{−1}( f(R̂_t) − η skew( R̂_t^T ∇_R L_t(R) ) − (Λ + Λ^T) R − λ det(R)(R^{−1})^T ).   (16)

Since the objective is convex, it is sufficient to produce a choice of Lagrange multipliers that enforces the rotation constraint. Choosing λ = det(R)^{−1} and Λ = −(1/2) R^{−1} (R^{−1})^T yields the following implicit update

R̂_{t+1} = exp( log R̂_t − η skew( R̂_t^T ∇_R L_t(R̂_{t+1}) ) ).   (17)
As noted by Tsuda et al. in [14], implicit updates of the form above are usually not solvable in closed form. However, by approximating ∇_R L_t(R̂_{t+1}) with ∇_R L_t(R̂_t) (as in [21, 14]), we obtain an explicit update

R̂_{t+1} = exp( log R̂_t − η skew( R̂_t^T ∇_R L_t(R̂_t) ) ).   (18)

The next result ensures the closure property for the matrix exponentiated gradient updates in the equation above. In other words, the estimates for the rotation matrix do not steer away from the manifold associated with the rotation group. Therefore, if R̂_0 ∈ SO(n) then R̂_{t+1} ∈ SO(n).

Lemma 1. If R̂_t ∈ SO(n) then R̂_{t+1} given by the updates in (18) is a rotation matrix in SO(n).
Proof. Using the properties of the matrix logarithm and matrix exponential, express (18) as

R̂_{t+1} = R̂_t exp(−ηS),   (19)

where S = R̂_t^T ∇_R L_t(R) − ∇_R L_t(R)^T R̂_t is an n × n dimensional skew-symmetric matrix with trace zero. Then

R̂_{t+1}^T R̂_{t+1} = ( R̂_t e^{−ηS} )^T ( R̂_t e^{−ηS} )
                 = e^{−ηS^T} R̂_t^T R̂_t e^{−ηS}
                 = e^{−ηS^T} e^{−ηS}
                 = e^{−η(S^T + S)} = e^0 = I,

where we used the facts that R̂_t ∈ SO(n), (e^S)^T = e^{S^T}, S^T = −S (so that S^T + S = 0), and e^0 = I. Similarly, R̂_{t+1} R̂_{t+1}^T = I. Finally, note that

det(R̂_{t+1}) = det(R̂_t e^{−ηS}) = det(R̂_t) · det(e^{−ηS}) = e^{−η tr(S)},

since the determinant of the exponential of a matrix is equal to the exponential of the trace of the matrix. And since S is a trace-zero matrix, det(R̂_{t+1}) = 1.
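Both ingredients of the proof are easy to check numerically (a sketch; `matrix_exp` is a plain truncated power series, our own helper): for skew-symmetric S, e^{−ηS} is orthogonal and det(e^{−ηS}) = e^{−η tr(S)} = 1.

```python
import numpy as np

def matrix_exp(A, terms=40):
    """Truncated power series for exp(A); adequate for small-norm A."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
S = B - B.T                       # skew-symmetric, hence trace zero
E = matrix_exp(-0.1 * S)          # eta = 0.1
print(np.allclose(E.T @ E, np.eye(4)))     # orthogonal -> True
print(np.isclose(np.linalg.det(E), 1.0))   # det = exp(-eta * tr S) = 1 -> True
```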
4.2
Differential Geometrical Interpretation

The resulting updates in (18) have a nice interpretation in terms of the differential geometry of the rotation group. The gradient of the cost function, ∇_R L_t(R̂_t), in the Euclidean space gives a tangent direction at the current estimate of the rotation matrix. The Riemannian gradient is computed as ∇_R L_t(R̂_t) − R̂_t ∇_R^T L_t(R̂_t) R̂_t. The Riemannian gradient at the identity element of the group is obtained by de-rotation by R̂_t, giving ∇̃_R L_t(R̂_t), as in (5). The gradient corresponds to an element of the Lie algebra, so(n), of the rotation group. The exponential map gives the corresponding rotation matrix, which is the multiplicative update to the estimate of the rotation matrix at the previous instance.

5
Complexity Reduction of MEG Updates
The matrix exponentiated gradient updates ensure that the estimates for the rotation matrix stay on the manifold associated with the rotation group at each iteration. However, with the matrix exponentiation at each step, the updates are computationally intensive, and in fact the computational complexity of the updates is comparable to other approaches that would require repeated approximation and projection onto the manifold. This section discusses a fundamental complexity reduction result to establish a simpler update by exploiting the eigen-structure of the update matrix. First observe that the matrix in the exponential in eqn. (12) (for the case of the squared error loss function) can be written as

S = −2η skew( R̂_t^T (ŷ_t − y_t) x_t^T )
  = −2η skew( R̂_t^T (R̂_t x_t − R_* x_t) x_t^T )
  = −2η skew( x_t x_t^T − R̂_t^T R_* x_t x_t^T )
  = 2η ( R̂_t^T R_* x_t x_t^T − x_t x_t^T R_*^T R̂_t )
  = A^T X − X A,   (20)

where X ≡ x_t x_t^T and A ≡ 2η R_*^T R̂_t. Each term in the matrix S is a rank-one matrix (due to pre- and post-multiplication with x_t x_t^T, respectively). Thus S is at most rank two. Since S is skew-symmetric, it has (at most) two eigenvalues in a complex conjugate pair ±jω (and n − 2 zero eigenvalues) [22], which allows the following simplification.
Lemma 2. The matrix exponentiated gradient updates in eqn. (12) are equivalent to the following updates,

R̂_{t+1} = R̂_t ( I + (sin(ω)/ω) S + ((1 − cos(ω))/ω²) S² ),   (21)

where ω = 2η √(1 − (y_t^T ŷ_t)²) and S is the skew-symmetric matrix given in eqn. (20) with eigenvalues ±jω.

Note that y_t and ŷ_t are unit vectors in R^n and therefore ω is real-valued. The proof of the complexity reduction follows easily from a generalization of the Rodrigues formula for computing matrix exponentials of skew-symmetric matrices. The proof is not presented here due to space constraints, but the interested reader is referred to [23, 24]. Owing to the result above, the matrix exponential reduces to a simple quadratic form involving an element from the Lie algebra of the rotation group.
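Lemma 2 is the rank-two instance of the Rodrigues-type formula: when S³ = −ω²S, the exponential series collapses to I + (sin ω/ω)S + ((1 − cos ω)/ω²)S². A quick check against the full series (a NumPy sketch; the helper names are ours):

```python
import numpy as np

def matrix_exp(A, terms=40):
    """Truncated power series for exp(A)."""
    out, term = np.eye(A.shape[0]), np.eye(A.shape[0])
    for k in range(1, terms):
        term = term @ A / k
        out = out + term
    return out

def quadratic_exp(S, omega):
    """Closed form of exp(S) for skew-symmetric S with eigenvalues +/- j*omega (Eq. 21)."""
    n = S.shape[0]
    return (np.eye(n) + (np.sin(omega) / omega) * S
            + ((1 - np.cos(omega)) / omega**2) * (S @ S))

# Rank-two skew-symmetric matrix built from two orthonormal vectors: S^3 = -omega^2 S.
a = np.array([1.0, 0.0, 0.0, 0.0])
b = np.array([0.0, 1.0, 0.0, 0.0])
omega = 0.7
S = omega * (np.outer(a, b) - np.outer(b, a))
print(np.allclose(quadratic_exp(S, omega), matrix_exp(S)))   # -> True
```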
The pseudocode is given in Algorithm 1.
Choose η
Initialize R̂_1 = I
for t = 1, 2, . . . do
    Obtain an instance of unit vector x_t ∈ R^n;
    Predict the rotated vector ŷ_t = R̂_t x_t;
    Receive the true rotated vector y_t = R_* x_t;
    Incur the loss L_t(R̂_t) = |y_t − ŷ_t|²;
    Compute the matrix S = 2η ( R̂_t^T y_t x_t^T − x_t y_t^T R̂_t );
    Compute the eigenvalues ±jω, with ω = 2η √(1 − (y_t^T ŷ_t)²);
    Update the rotation matrix R̂_{t+1} = R̂_t ( I + (sin(ω)/ω) S + ((1 − cos(ω))/ω²) S² )
end
Algorithm 1: Pseudocode for learning rotations using matrix exponentiated gradient updates
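A direct NumPy transcription of Algorithm 1 (a sketch: the function name, the (T, n) data layout, and the small-ω guard are our additions; the learning rate η is a free parameter, as in the text):

```python
import numpy as np

def learn_rotation(X, Y, eta=0.5):
    """Algorithm 1: online learning of a rotation with MEG updates (Eq. 21 form).

    X, Y: arrays of shape (T, n); row t holds the unit vector x_t and the
    observed rotated vector y_t = R_* x_t.  Returns the estimate R_hat.
    """
    T, n = X.shape
    R = np.eye(n)                               # R_hat_1 = I
    for t in range(T):
        x, y = X[t], Y[t]
        y_hat = R @ x                           # predict the rotated vector
        # S = 2*eta*(R^T y x^T - x y^T R), as in Eq. (20)
        S = 2 * eta * (R.T @ np.outer(y, x) - np.outer(x, y) @ R)
        c = np.clip(y @ y_hat, -1.0, 1.0)
        omega = 2 * eta * np.sqrt(1.0 - c * c)  # eigenvalues of S are +/- j*omega
        if omega > 1e-12:                       # quadratic-form update, Eq. (21)
            R = R @ (np.eye(n) + (np.sin(omega) / omega) * S
                     + ((1.0 - np.cos(omega)) / omega**2) * (S @ S))
    return R
```

Each update multiplies by an exact exponential of a Lie-algebra element, so the iterate never leaves SO(n) (Lemma 1), without any explicit re-projection.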
6
Experimental Results
This section presents experimental results with the proposed algorithm for online learning of rotations. The performance of the algorithm is evaluated in terms of the Frobenius norm of the difference between the true rotation matrix and the estimate. Figure 1 shows the error plotted against time. The unknown rotation is a 12 × 12 dimensional matrix and changes randomly every 200 instances. The trajectories are averaged over 1000 random simulations. It is clear from the plot that the estimation error decays rapidly to zero and the estimates of the rotation matrices are exact.
[Figure 1 omitted: plot of estimation error (Frobenius and spectral norm) versus time instance index, for SO(12).]
Figure 1: Online learning of rotations: the estimate of the unknown rotation is updated every time a new instance of rotation is observed. The true rotation matrix changes randomly at regular intervals (N = 200). The error in Frobenius norm is plotted against the instance index.
The online algorithm is also found to be robust to small amounts of additive white Gaussian noise in the observations of the true rotated vectors, i.e. the observations are now given as y_t = R_* x_t + σ w_t, where σ determines the signal-to-noise ratio. The performance of the algorithm is studied under various noisy conditions. Figure 2 shows error plots with respect to time for various noisy conditions in R^20. The Frobenius norm error decays quickly to a noise floor determined by the SNR as well as the step size η. In the simulations in Fig. 2 the step size was decreased gradually over time. It is not immediately clear how to pick the optimal step size, but a classic step-size adaptation rule or the Armijo rule may be followed [25, 16].
The tracking performance of the online algorithm is compared with the batch version. In Figure 3, the unknown rotation R_* ∈ SO(30) changes slightly after every 30 instances. The smoothly changing rotation is induced by composing the matrix R_* with a fixed perturbation matrix every thirty iterations. The perturbation matrix is composed of 3 × 3 block-diagonal matrices, each corresponding to rotation about the X-axis in 3D space by π/360 radians. The batch version stores the last 30 instances in a 30 × 30 matrix X and the corresponding rotated vectors in a matrix Y. The estimate of the unknown rotation is given as Y X^{−1}. The batch version achieves zero error only at time instances when all the data in X and Y correspond to the same rotation, whereas the online version consistently achieves a low error and tracks the changing rotation.
It is clear from the simulations that the Frobenius norm error decreases at each iteration. Global stability of the updates proposed here is easy to show in the noise-free scenario [24]. The proposed algorithm was also applied to learning and tracking the rotations of 3D objects. Videos showing experimental results with the 3D Stanford bunny [26] are posted online at [27].
7
Conclusion
In this paper, we have presented an online algorithm for learning rotations. The algorithm was
motivated using the von Neumann divergence and squared error loss function and the updates were
[Figure 2 omitted: plot of average estimation error (Frobenius norm) versus time, for SO(20); legend: noise levels 0, 5×10⁻⁴, 1×10⁻³, 1.5×10⁻³, 2×10⁻³.]
Figure 2: Average error plotted against instance index for various noise levels.
[Figure 3 omitted: plot of Frobenius-norm error versus time instance index, batch version versus online algorithm, tracking rotations in SO(30).]
Figure 3: Comparing the performance of tracking rotations for the batch version versus the online algorithm. The rotation matrix changes smoothly every M = 30 instances.
developed in the Lie algebra of the rotation group. The resulting matrix exponentiated gradient updates were reduced to a simple quadratic form. The estimation performance of the proposed algorithm was studied under various scenarios. Future directions include identifying alternative loss functions that exploit the spherical geometry, as well as establishing regret bounds for the proposed updates.
Acknowledgements: The author would like to thank W. A. Sethares, M. R. Gupta and A. B. Frigyik
for helpful discussions and feedback on early drafts of the paper.
References
[1] Rich Wareham, Jonathan Cameron, and Joan Lasenby, "Applications of conformal geometric algebra in computer vision and graphics," in IWMM/GIAE, 2004, pp. 329–349.
[2] C. Doran, D. Hestenes, F. Sommen, and N. Van Acker, "Lie groups as spin groups," Journal of Mathematical Physics, vol. 34, no. 8, pp. 3642–3669, August 1993.
[3] Grace Wahba, "Problem 65-1, a least squares estimate of satellite attitude," SIAM Review, vol. 7, no. 3, July 1965.
[4] P. Schonemann, "A generalized solution of the orthogonal Procrustes problem," Psychometrika, vol. 31, no. 1, pp. 1–10, March 1966.
[5] P. Besl and N. McKay, "A method for registration of 3D shapes," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 14, pp. 239–256, 1992.
[6] Hannes Edvardson and Örjan Smedby, "Compact and efficient 3D shape description through radial function approximation," Computer Methods and Programs in Biomedicine, vol. 72, no. 2, pp. 89–97, 2003.
[7] D.W. Eggert, A. Lorusso, and R.B. Fisher, "Estimating 3D rigid body transformations: a comparison of four major algorithms," Machine Vision and Applications, Springer, vol. 9, no. 5-6, Mar 1997.
[8] R. Sala Llonch, E. Kokiopoulou, I. Tosic, and P. Frossard, "3D face recognition with sparse spherical representations," Preprint, Elsevier, 2009.
[9] Ameesh Makadia and Kostas Daniilidis, "Rotation recovery from spherical images without correspondences," IEEE Trans. Pattern Analysis Machine Intelligence, vol. 28, no. 7, pp. 1170–1175, 2006.
[10] Raman Arora and Harish Parthasarathy, "Navigation using a spherical camera," in International Conference on Pattern Recognition (ICPR), Tampa, Florida, Dec 2008.
[11] Philip R. Evans, "Rotations and rotation matrices," Acta Cryst., vol. D57, pp. 1355–1359, 2001.
[12] Richard L. Liboff, Introductory Quantum Mechanics, Addison-Wesley, 2002.
[13] Adam Smith and Manfred Warmuth, "Learning rotations," in Conference on Learning Theory (COLT), Finland, Jun 2008.
[14] Koji Tsuda, Gunnar Ratsch, and Manfred K. Warmuth, "Matrix exponentiated gradient updates for on-line learning and Bregman projection," Journal of Machine Learning Research, vol. 6, Jun 2005.
[15] Manfred K. Warmuth, "Winnowing subspaces," in Proc. 24th Int. Conf. on Machine Learning, 2007.
[16] T.E. Abrudan, J. Eriksson, and V. Koivunen, "Steepest descent algorithms for optimization under unitary matrix constraint," Signal Processing, IEEE Transactions on, vol. 56, no. 3, pp. 1134–1147, March 2008.
[17] M. A. Nielsen and I. L. Chuang, Quantum Computation and Quantum Information, Cambridge, 2000.
[18] Kaare Brandt Petersen and Michael Syskind Pedersen, "The matrix cookbook," http://matrixcookbook.com, November 14, 2008.
[19] Michael Artin, Algebra, Prentice Hall, 1991.
[20] John A. Thorpe, Elementary Topics in Differential Geometry, Springer-Verlag, 1994.
[21] J. Kivinen and M. K. Warmuth, "Exponentiated gradient versus gradient descent for linear predictors," Information and Computation, vol. 132, no. 1, pp. 1–63, Jan 1997.
[22] L. J. Butler, Applications of Matrix Theory to Approximation Theory, MS Thesis, Texas Tech University, Aug. 1999.
[23] J. Gallier and D. Xu, "Computing exponentials of skew-symmetric matrices and logarithms of orthogonal matrices," International Journal of Robotics and Automation, vol. 17, no. 4, 2002.
[24] Raman Arora, Group Theoretical Methods in Signal Processing: Learning Similarities, Transformation and Invariants, Ph.D. thesis, University of Wisconsin-Madison, Madison, August 2009.
[25] E. Polak, Optimization: Algorithms and Consistent Approximations, Springer-Verlag, 1997.
[26] Stanford University Computer Graphics Laboratory, "The Stanford 3D scanning repository," http://graphics.stanford.edu/data/.
[27] Raman Arora, "Tracking rotations of 3D Stanford bunny," http://www.cae.wisc.edu/~sethares/links/raman/LearnROT/vids.html, 2009.
A Rate Distortion Approach for Semi-Supervised
Conditional Random Fields
Yang Wang†*
Gholamreza Haffari†*
†School of Computing Science
Simon Fraser University
Burnaby, BC V5A 1S6, Canada
{ywang12,ghaffar1,mori}@cs.sfu.ca
Shaojun Wang‡
Greg Mori†
‡Kno.e.sis Center
Wright State University
Dayton, OH 45435, USA
[email protected]
Abstract
We propose a novel information theoretic approach for semi-supervised learning
of conditional random fields that defines a training objective to combine the conditional likelihood on labeled data and the mutual information on unlabeled data.
In contrast to previous minimum conditional entropy semi-supervised discriminative learning methods, our approach is grounded on a more solid foundation,
the rate distortion theory in information theory. We analyze the tractability of the
framework for structured prediction and present a convergent variational training algorithm to defy the combinatorial explosion of terms in the sum over label
configurations. Our experimental results show the rate distortion approach outperforms standard l2 regularization, minimum conditional entropy regularization as
well as maximum conditional entropy regularization on both multi-class classification and sequence labeling problems.
1
Introduction
In most real-world machine learning problems (e.g., for text, image, audio, biological sequence
data), unannotated data is abundant and can be collected at almost no cost. However, supervised
machine learning techniques require large quantities of data be manually labeled so that automatic
learning algorithms can build sophisticated models. Unfortunately, manual annotation of a large
quantity of data is both expensive and time-consuming. The challenge is to find ways to exploit the
large quantity of unlabeled data and turn it into a resource that can improve the performance of supervised machine learning algorithms. Meeting this challenge requires research at the cutting edge
of automatic learning techniques, useful in many fields such as language and speech technology, image processing and computer vision, robot control and bioinformatics. A surge of semi-supervised
learning research activities has occurred in recent years to devise various effective semi-supervised
training schemes. Most of these semi-supervised learning algorithms are applicable only to multiclass classification problems [1, 10, 32], with very few exceptions that develop discriminative models suitable for structured prediction [2, 9, 16, 20, 21, 22].
In this paper, we propose an information theoretic approach for semi-supervised learning of conditional random fields (CRFs) [19], where we use the mutual information between the empirical distribution of unlabeled data and the discriminative model as a data-dependent regularized prior. Grandvalet and Bengio [15] and Jiao et al. [16] have proposed a similar information theoretic approach that
used the conditional entropy of their discriminative models on unlabeled data as a data-dependent
regularization term to obtain very encouraging results. The minimum entropy approach can be explained from the data-smoothness assumption and is motivated from semi-supervised classification, using unlabeled data to enhance classification; however, its degeneracy is even more problematic, since a minimum entropy of 0 can be achieved by putting all probability mass on one label and zero on the rest of the labels. As far as we know, there is no formal principled explanation for the validity of this minimum conditional entropy approach. Instead, our approach can be naturally cast into the rate
minimum conditional entropy approach. Instead, our approach can be naturally cast into the rate
*These authors contributed equally to this work.
distortion theory framework which is well-known in information theory [14]. The closest work to
ours is the one by Corduneanu et al. [11, 12, 13, 28]. Both works are discriminative models and
do indeed use mutual information concepts. There are two major distinctions between our work
and theirs. First, their approach is essentially motivated from semi-supervised classification point
of view and formulated as a communication game, while our approach is based on a completely
different motivation, semi-supervised clustering that uses labeled data to enhance clustering and is
formulated as a data compression scheme, thus leads to a formulation distinctive from Corduneanu
et al. Second, their model is non-parametric, whereas ours is parametric. As a result, their model can
be trained by optimizing a convex objective function through a variant of Blahut-Arimoto alternating
minimization algorithm, whereas our model is more complex and the objective function becomes
non-convex. In particular, training a simple chain structured CRF model [19] in our framework turns
out to be intractable even if using Blahut-Arimoto's type of alternating minimization algorithm. We
develop a convergent variational approach to approximately solve this problem. Another relevant
work is the information bottleneck (IB) method introduced by Tishby et al [30]. IB method is an
information-theoretic framework for extracting relevant components of an input random variable
X, with respect to an output random variable Y . Instead of directly compressing X to its representation Y subject to an expected distortion through a parametric probabilistic mapping like our
proposed approach, IB method is performed by finding a third, compressed, non-parametric and
model-independent representation T of X that is most informative about Y . Formally speaking, the
notion of compression is quantified by the mutual information between T and X while the informativeness is quantified by the mutual information between T and Y . The solutions are characterized
by the bottleneck equations and can be found by a convergent re-estimation method that generalizes the Blahut-Arimoto algorithm. Finally in contrast to our approach which minimizes both the
negative conditional likelihood on labeled data and the mutual information between the hidden variables and the observations on unlabeled data for a discriminative model, Oliver and Garg [24] have
proposed maximum mutual information hidden Markov models (MMIHMM) of semi-supervised
training for chain structured graph. The objective is to maximize both the joint likelihood on labeled
data and the mutual information between the hidden variables and the observations on unlabeled data
for a generative model. It is equivalent to minimizing conditional entropy of a generative HMM for
the part of unlabeled data. The maximum mutual information of a generative HMM was originally
proposed by Bahl et al. [4] and popularized in speech recognition community [23], but it is different from Oliver and Garg?s approach in that an individual HMM is learned for each possible class
(e.g., one HMM for each word string), and the point-wise mutual information between the choice
of HMM and the observation sequence is maximized. It is equivalent to maximizing the conditional
likelihood of a word string given observation sequence to improve the discrimination across different models [18]. Thus in essence, Bahl et al. [4] proposed a discriminative learning algorithm for
generative HMMs of training utterances in speech recognition.
In the following, we first motivate our rate distortion approach for semi-supervised CRFs as a data
compression scheme and formulate the semi-supervised learning paradigm as a classic rate distortion
problem. We then analyze the tractability of the framework for structured prediction and present a
convergent variational learning algorithm to defy the combinatorial explosion of terms in the sum
over label configurations. Finally we demonstrate encouraging results with two real-world problems
to show the effectiveness of the proposed approach: text categorization as a multi-class classification
problem and hand-written character recognition as a sequence labeling problem. Similar ideas have
been successfully applied to semi-supervised boosting [31].
2
Rate distortion formulation
Let X be a random variable over data sequences to be labeled, and Y be a random variable over corresponding label sequences. All components, Y_i, of Y are assumed to range over a finite label alphabet Y. Given a set of labeled examples, D^l = {(x^(1), y^(1)), · · · , (x^(N), y^(N))}, and unlabeled examples, D^u = {x^(N+1), · · · , x^(M)}, we would like to build a CRF model

p_θ(y|x) = (1/Z_θ(x)) exp⟨θ, f(x, y)⟩

over sequential input data x, where θ = (θ_1, · · · , θ_K)^T, f(x, y) = (f_1(x, y), · · · , f_K(x, y))^T, and Z_θ(x) = Σ_y exp⟨θ, f(x, y)⟩. Our goal is to learn such a model from the combined set of labeled and unlabeled examples, D^l ∪ D^u. For notational convenience, we assume that there are no identical examples in D^l and D^u.
The standard supervised training procedure for CRFs is based on minimizing the negative log conditional likelihood of the labeled examples in D^l:

CL(θ) = − Σ_{i=1}^{N} log p_θ(y^(i)|x^(i)) + λ U(θ)   (1)

where U(θ) can be any standard regularizer on θ, e.g. U(θ) = ‖θ‖²/2, and λ is a parameter that controls the influence of U(θ). Regularization can alleviate over-fitting on rare features and avoid degeneracy in the case of correlated features.
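For intuition, Eq. (1) in the degenerate case of a single position (no sequence structure) is just regularized multiclass logistic regression; a hedged NumPy sketch (function name, shapes, and the default λ are our choices, not the paper's):

```python
import numpy as np

def neg_log_conditional_likelihood(theta, feats, labels, lam=0.1):
    """Eq. (1) with trivial structure: one theta row per label, U = ||theta||^2 / 2.

    theta: (K, d) weights; feats: (N, d); labels: (N,) integers in {0..K-1}.
    """
    scores = feats @ theta.T                               # <theta_y, f(x)> per label
    log_Z = np.log(np.exp(scores).sum(axis=1))             # log partition per example
    log_lik = scores[np.arange(len(labels)), labels] - log_Z
    return -log_lik.sum() + lam * 0.5 * np.sum(theta**2)

# With theta = 0 every label is equally likely, so CL = N * log K.
theta0 = np.zeros((3, 4))
feats = np.ones((5, 4))
labels = np.zeros(5, dtype=int)
print(np.isclose(neg_log_conditional_likelihood(theta0, feats, labels), 5 * np.log(3)))  # -> True
```

In a real sequence CRF the sum over y inside Z_θ(x) ranges over exponentially many label sequences and is computed by dynamic programming; this flat version simply enumerates the K labels.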
Obviously, Eq. (1) ignores the unlabeled examples in D^u. To make full use of the available training data, Grandvalet and Bengio [15] and Jiao et al. [16] proposed a semi-supervised learning algorithm that exploits a form of minimum conditional entropy regularization on the unlabeled data. Specifically, they proposed to minimize the following objective

RL_minCE(θ) = − Σ_{i=1}^{N} log p_θ(y^(i)|x^(i)) + λ U(θ) − γ Σ_{j=N+1}^{M} Σ_y p_θ(y|x^(j)) log p_θ(y|x^(j))   (2)

where the first term is the negative log conditional likelihood of the labeled data, and the third term is the conditional entropy of the CRF model on the unlabeled data. The tradeoff parameters λ and γ control the influences of U(θ) and the unlabeled data, respectively.
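The regularizer in Eq. (2) can be made concrete for the same flat (single-position) case; a sketch in NumPy (the softmax parameterization, stabilization, and clipping are our additions):

```python
import numpy as np

def conditional_entropy_term(theta, unlabeled_feats):
    """Sum over unlabeled x of H(p_theta(y|x)) for a flat softmax model.

    theta: (K, d) weights; unlabeled_feats: (M, d) feature vectors.
    """
    scores = unlabeled_feats @ theta.T
    scores = scores - scores.max(axis=1, keepdims=True)   # numerical stability
    p = np.exp(scores)
    p = p / p.sum(axis=1, keepdims=True)
    return -np.sum(p * np.log(np.clip(p, 1e-12, None)))

# Uniform predictions give the maximum value M * log K.
print(np.isclose(conditional_entropy_term(np.zeros((3, 4)), np.ones((5, 4))), 5 * np.log(3)))  # -> True
```

Minimizing Eq. (2) drives this term down, i.e. toward confident predictions on the unlabeled examples.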
This is equivalent to minimizing the following objective (with different values of λ and γ)

RL_minCE(θ) = D( p̃^l(x, y), p̃^l(x) p_θ(y|x) ) + λ U(θ) + γ Σ_{x∈D^u} p̃^u(x) H( p_θ(y|x) )   (3)

where D( p̃^l(x, y), p̃^l(x) p_θ(y|x) ) = Σ_{(x,y)∈D^l} p̃^l(x, y) log [ p̃^l(x, y) / ( p̃^l(x) p_θ(y|x) ) ] and H( p_θ(y|x) ) = − Σ_y p_θ(y|x) log p_θ(y|x). Here we use p̃^l(x, y) to denote the empirical distribution of both X and Y on labeled data D^l, p̃^l(x) to denote the empirical distribution of X on labeled data D^l, and p̃^u(x) to denote the empirical distribution of X on unlabeled data D^u.
In this paper, we propose an alternative approach for semi-supervised CRFs. Rather than using minimum conditional entropy as a regularization term on unlabeled data, we use minimum mutual information on unlabeled data. This approach has a clean and strong information-theoretic interpretation in terms of rate distortion theory.
We define the marginal distribution p_\theta(y) of our discriminative model on unlabeled data D^u to be p_\theta(y) = \sum_{x \in D^u} \tilde{p}_u(x) p_\theta(y|x) over the input data x. Then the mutual information between the empirical distribution \tilde{p}_u(x) and the discriminative model is

I\big(\tilde{p}_u(x), p_\theta(y|x)\big) = \sum_{x \in D^u} \sum_y \tilde{p}_u(x) p_\theta(y|x) \log \frac{\tilde{p}_u(x) p_\theta(y|x)}{\tilde{p}_u(x) p_\theta(y)} = H\big(p_\theta(y)\big) - \sum_{x \in D^u} \tilde{p}_u(x) H\big(p_\theta(y|x)\big)

where H\big(p_\theta(y)\big) = -\sum_y \big(\sum_{x \in D^u} \tilde{p}_u(x) p_\theta(y|x)\big) \log \big(\sum_{x \in D^u} \tilde{p}_u(x) p_\theta(y|x)\big) is the entropy of the label Y on unlabeled data. Thus, in rate distortion terminology, the empirical distribution of unlabeled data \tilde{p}_u(x) corresponds to the input distribution, the model p_\theta(y|x) corresponds to the probabilistic mapping from X to Y, and p_\theta(y) corresponds to the output distribution of Y.
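The mutual-information decomposition above is easy to check numerically; the sketch below uses an arbitrary three-point empirical distribution and hand-picked conditionals (toy values, not quantities from the paper) and confirms that the double-sum definition agrees with H(p(y)) minus the average conditional entropy.

```python
import math

# Numeric check of I = H(p(y)) - sum_x p_u(x) H(p(y|x)) against the
# double-sum definition, on invented toy values.
p_u = [0.25, 0.25, 0.5]                       # empirical dist. of 3 points
cond = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]   # p(y|x) per point

def entropy(p):
    return -sum(v * math.log(v) for v in p if v > 0)

p_y = [sum(p_u[i] * cond[i][y] for i in range(3)) for y in (0, 1)]
mi_double_sum = sum(
    p_u[i] * cond[i][y] * math.log(cond[i][y] / p_y[y])
    for i in range(3) for y in (0, 1))
mi_decomposed = entropy(p_y) - sum(p_u[i] * entropy(cond[i])
                                   for i in range(3))
```

The two expressions coincide up to floating-point error, and the mutual information is nonnegative as expected.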
Our proposed rate distortion approach for semi-supervised CRFs optimizes the following constrained optimization problem:

\min_\theta \; I\big(\tilde{p}_u(x), p_\theta(y|x)\big) \quad \text{s.t.} \quad D\big(\tilde{p}_l(x, y), \tilde{p}_l(x) p_\theta(y|x)\big) + \lambda U(\theta) \le d    (4)
The rationale for this formulation can be seen from an information-theoretic perspective using rate distortion theory [14]. Assume we have a source X with a source distribution p(x) and its compressed representation Y through a probabilistic mapping p_\theta(y|x). If there is a large set of features (infinite in the extreme case), this probabilistic mapping might be too redundant, and we had better look for its minimum description. What determines the quality of the compression is the information rate, i.e. the average number of bits per message needed to specify an element in the representation without confusion. According to the standard asymptotic arguments [14], this quantity is bounded below by the mutual information I\big(p(x), p_\theta(y|x)\big), since the average cardinality of the partitioning of X is given by the ratio of the volume of X to the average volume of the elements of X that are mapped to the same representation Y through p_\theta(y|x), namely 2^{H(X)}/2^{H(X|Y)} = 2^{I(X,Y)}. Thus mutual information is the minimum information rate and is used as a good metric for clustering [26, 27]. The true distribution of X should be used to compute the mutual information. Since it is unknown, we use its empirical distribution on the unlabeled data set D^u and the mutual information I\big(\tilde{p}_u(x), p_\theta(y|x)\big) instead. However, information rate alone is not enough to characterize a good representation, since the rate can always be reduced by throwing away many features in the probabilistic mapping. This makes the mapping likely too simple and leads to distortion. Therefore we need an additional constraint, provided through a distortion function which is presumed to be small for good representations. Apparently there is a tradeoff between minimum representation and maximum distortion. Since the joint distribution gives the distribution for the pair of X and its representation Y, we choose the log likelihood ratio, \log \frac{p(x, y)}{p(x) p_\theta(y|x)}, plus a regularized complexity term of \theta, \lambda U(\theta), as the distortion function. Thus the expected distortion is the non-negative term D\big(p(x, y), p(x) p_\theta(y|x)\big) + \lambda U(\theta). Again, the true distributions p(x, y) and p(x) should be used here, but they are unknown. In the semi-supervised setting, we have labeled data available which provides valuable information to measure the distortion: we use the empirical distributions on the labeled data set D^l and the expected distortion D\big(\tilde{p}_l(x, y), \tilde{p}_l(x) p_\theta(y|x)\big) + \lambda U(\theta) instead to encode the information provided by labeled data, and add a distortion constraint we should respect for data compression to help the clustering. There is a monotonic tradeoff between the rate of the compression and the expected distortion: the larger the rate, the smaller is the achievable distortion. Given a distortion measure between X and Y on the labeled data set D^l, what is the minimum rate description required to achieve a particular distortion on the unlabeled data set D^u? The answer can be obtained by solving (4).
Following standard procedure, we convert the constrained optimization problem (4) into an unconstrained optimization problem which minimizes the following objective:

RL_{MI}(\theta) = I\big(\tilde{p}_u(x), p_\theta(y|x)\big) + \beta \Big[ D\big(\tilde{p}_l(x, y), \tilde{p}_l(x) p_\theta(y|x)\big) + \lambda U(\theta) \Big]    (5)

where \beta > 0, which again is equivalent to minimizing the following objective (with \gamma = 1/\beta)^1:

RL_{MI}(\theta) = D\big(\tilde{p}_l(x, y), \tilde{p}_l(x) p_\theta(y|x)\big) + \lambda U(\theta) + \gamma I\big(\tilde{p}_u(x), p_\theta(y|x)\big)    (6)
If (4) is a convex optimization problem, then for every solution \theta to Eq. (4) found using some particular value of d, there is some corresponding value of \gamma in the optimization problem (6) that will give the same \theta. Thus, these are two equivalent re-parameterizations of the same problem. The equivalence between the two problems can be verified using convex analysis [8] by noting that the Lagrangian for the constrained optimization (4) is exactly the objective in the optimization (5) (plus a constant that does not depend on \theta), where \beta is the Lagrange multiplier. Thus, (4) can be solved by solving either (5) or (6) for an appropriate \beta or \gamma. Unfortunately, (4) is not a convex optimization problem, because its objective I\big(\tilde{p}_u(x), p_\theta(y|x)\big) is not convex. This can be verified using the same argument as in the minimum conditional entropy regularization case [15, 16]. There may be some minima of (4) that do not minimize (5) or (6), whatever the value of \beta or \gamma may be. This is, however, not essential to motivate the optimization criterion. Moreover, there are generally local minima in (5) and (6) due to the non-convexity of the mutual information regularization term.
Another training method for semi-supervised CRFs is the maximum entropy approach, maximizing conditional entropy (minimizing negative conditional entropy) over unlabeled data D^u subject to the constraint on labeled data D^l:

\min_\theta \; -\sum_{x \in D^u} \tilde{p}_u(x) H\big(p_\theta(y|x)\big) \quad \text{s.t.} \quad D\big(\tilde{p}_l(x, y), \tilde{p}_l(x) p_\theta(y|x)\big) + \lambda U(\theta) \le d    (7)

Again following standard procedure, we convert the constrained optimization problem (7) into an unconstrained optimization problem which minimizes the following objective:

RL_{maxCE}(\theta) = D\big(\tilde{p}_l(x, y), \tilde{p}_l(x) p_\theta(y|x)\big) + \lambda U(\theta) - \gamma \sum_{x \in D^u} \tilde{p}_u(x) H\big(p_\theta(y|x)\big)    (8)
^1 For the part of unlabeled data, the MMIHMM algorithm [24] maximizes the mutual information, I\big(\tilde{p}_u(x), p_\theta(x|y)\big), of a generative model p_\theta(x|y) instead, which is equivalent to minimizing the conditional entropy of the generative model p_\theta(x|y), since I\big(\tilde{p}_u(x), p_\theta(x|y)\big) = H\big(\tilde{p}_u(x)\big) - H\big(p_\theta(x|y)\big) and H\big(\tilde{p}_u(x)\big) is a constant.
Again, minimizing (8) is not exactly equivalent to (7); however, this is not essential to motivate the optimization criterion. When comparing the maximum entropy approach with the minimum conditional entropy approach, there is only a sign change on the conditional entropy term.
For non-parametric models, using the analysis developed in [5, 6, 7, 25], it can be shown that the maximum conditional entropy approach is equivalent to the rate distortion approach when we compress code vectors in a mass-constrained scheme [25]. But for parametric models such as CRFs, these three approaches are completely distinct.
The difference between our rate distortion approach for semi-supervised CRFs (6) and the minimum conditional entropy regularized semi-supervised CRFs (2) lies not only in the different sign of the conditional entropy on unlabeled data but also in the additional term, the entropy of p_\theta(y) on unlabeled data. It is this term that makes direct computation of the derivative of the objective for the rate distortion approach for semi-supervised CRFs intractable. To see why, we take the derivative of this term with respect to \theta:

\frac{\partial}{\partial \theta} H\big(p_\theta(y)\big) = -\sum_{x \in D^u} \tilde{p}_u(x) \sum_y p_\theta(y|x) f(x, y) \log \Big( \sum_{x' \in D^u} \tilde{p}_u(x') p_\theta(y|x') \Big)

\qquad + \sum_{x \in D^u} \tilde{p}_u(x) \Big[ \sum_y p_\theta(y|x) \log \Big( \sum_{x' \in D^u} \tilde{p}_u(x') p_\theta(y|x') \Big) \Big] \Big[ \sum_{y'} p_\theta(y'|x) f(x, y') \Big]
In the case of structured prediction, the number of sums over Y is exponential, and there is a sum inside the log. These make the computation of the derivative intractable even for a simple chain-structured CRF.
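The two-term form of this derivative can be verified against finite differences on an unstructured two-class model, where every sum over Y is trivially enumerable (which is exactly what fails in the structured case). The inputs and features below are invented for the check.

```python
import math

# Finite-difference check of the two-term derivative of H(p_theta(y)) on an
# unstructured two-class model; all inputs are invented toy values.
xs = [(1.0, 0.0), (0.5, 1.0), (-1.0, 0.5)]
p_u = [1.0 / 3] * 3

def feat(x, y):
    # f(x, y): the input vector for class 1, the zero vector for class 0.
    return [xi if y == 1 else 0.0 for xi in x]

def cond(theta, x):
    s = [sum(t * f for t, f in zip(theta, feat(x, y))) for y in (0, 1)]
    m = max(s)
    z = sum(math.exp(v - m) for v in s)
    return [math.exp(v - m) / z for v in s]

def h_marginal(theta):
    p_y = [sum(w * cond(theta, x)[y] for w, x in zip(p_u, xs))
           for y in (0, 1)]
    return -sum(p * math.log(p) for p in p_y)

def grad_h(theta):
    conds = [cond(theta, x) for x in xs]
    p_y = [sum(w * c[y] for w, c in zip(p_u, conds)) for y in (0, 1)]
    g = [0.0, 0.0]
    for w, x, c in zip(p_u, xs, conds):
        ef = [sum(c[y] * feat(x, y)[k] for y in (0, 1)) for k in (0, 1)]
        s = sum(c[y] * math.log(p_y[y]) for y in (0, 1))
        for k in (0, 1):
            # first term: -sum_y p(y|x) f_k log p(y); second: product term
            g[k] += w * (-sum(c[y] * feat(x, y)[k] * math.log(p_y[y])
                              for y in (0, 1)) + s * ef[k])
    return g

theta = [0.7, -0.4]
analytic = grad_h(theta)
eps = 1e-6
numeric = [
    (h_marginal([t + (eps if i == k else 0.0) for i, t in enumerate(theta)])
     - h_marginal([t - (eps if i == k else 0.0) for i, t in enumerate(theta)]))
    / (2 * eps)
    for k in (0, 1)
]
```

The analytic and central-difference gradients agree to high precision on this toy instance; with structured Y the sums over label sequences in this computation blow up exponentially.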
An alternative way to solve (6) is to use the well-known algorithm for the computation of the rate distortion function established by Blahut [6] and Arimoto [3]. Corduneanu and Jaakkola [12, 13] proposed a distributed propagation algorithm, a variant of the Blahut–Arimoto algorithm, to solve their problem. However, as illustrated in the following, this approach is still intractable for structured prediction in our case.
By extending a lemma for computing rate distortion in [14] to parametric models, we can rewrite the minimization problem (5) of mutual information regularized semi-supervised CRFs as a double minimization,

\min_\theta \min_{r(y)} g\big(\theta, r(y)\big)

where

g\big(\theta, r(y)\big) = \sum_{x \in D^u} \sum_y \tilde{p}_u(x) p_\theta(y|x) \log \frac{p_\theta(y|x)}{r(y)} + \beta \Big[ D\big(\tilde{p}_l(x, y), \tilde{p}_l(x) p_\theta(y|x)\big) + \lambda U(\theta) \Big]
We can use an alternating minimization algorithm to find a local minimum of RL_{MI}(\theta). First, we assign the initial CRF model to be the optimal solution of the supervised CRF on labeled data and denote it as p_{\theta^{(0)}}(y|x). Then we define r^{(0)}(y), and in general r^{(t)}(y) for t \ge 1, by

r^{(t)}(y) = \sum_{x \in D^u} \tilde{p}_u(x) p_{\theta^{(t)}}(y|x)    (9)
In order to define p_{\theta^{(1)}}(y|x), and in general p_{\theta^{(t)}}(y|x), we need to find the p_\theta(y|x) which minimizes g for a given r(y). The gradient of g\big(\theta, r(y)\big) with respect to \theta is

\frac{\partial}{\partial \theta} g\big(\theta, r(y)\big) = \sum_{i=N+1}^{M} \tilde{p}_u(x^{(i)}) \Big[ \mathrm{cov}_{p_\theta(y|x^{(i)})}\big[f(x^{(i)}, y)\big] - \sum_y p_\theta(y|x^{(i)}) f(x^{(i)}, y) \log r(y)    (10)

\qquad + \Big( \sum_y p_\theta(y|x^{(i)}) \log r(y) \Big) \Big( \sum_{y'} p_\theta(y'|x^{(i)}) f(x^{(i)}, y') \Big) \Big]    (11)

\qquad - \beta \sum_{i=1}^{N} \tilde{p}_l(x^{(i)}) \Big( f(x^{(i)}, y^{(i)}) - \sum_y p_\theta(y|x^{(i)}) f(x^{(i)}, y) \Big) + \beta \lambda \frac{\partial U(\theta)}{\partial \theta}    (12)
Even though the first terms in Eq. (10) and Eq. (12) can be efficiently computed via recursive formulas [16], we run into the same intractability when computing the second term of Eq. (10) and the term in Eq. (11), since the number of sums over Y is exponential and, implicitly, there is a sum inside the log due to r(y). This makes the computation of the derivative in the alternating minimization algorithm intractable.
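As a sanity check on the unlabeled part of this gradient, the sketch below compares the analytic expression, with r(y) held fixed, against central finite differences for a toy two-class model; all quantities are invented for the example, and the per-point gradient is written in the compact form sum_y p(y|x)(f(x,y) - E f)(log p(y|x) - log r(y)), which collapses the covariance and log r(y) terms.

```python
import math

# Finite-difference check of the gradient of the unlabeled part of
# g(theta, r(y)), with r(y) held fixed; toy two-class model.
xs = [(1.0, 0.0), (0.5, 1.0), (-1.0, 0.5)]
p_u = [1.0 / 3] * 3
r = [0.3, 0.7]  # fixed reference distribution r(y)

def feat(x, y):
    return [xi if y == 1 else 0.0 for xi in x]

def cond(theta, x):
    s = [sum(t * f for t, f in zip(theta, feat(x, y))) for y in (0, 1)]
    m = max(s)
    z = sum(math.exp(v - m) for v in s)
    return [math.exp(v - m) / z for v in s]

def g_unlab(theta):
    return sum(w * sum(c * math.log(c / r[y])
                       for y, c in enumerate(cond(theta, x)))
               for w, x in zip(p_u, xs))

def grad_g_unlab(theta):
    # Per point: sum_y p(y|x) (f(x,y) - E f) (log p(y|x) - log r(y)).
    g = [0.0, 0.0]
    for w, x in zip(p_u, xs):
        c = cond(theta, x)
        ef = [sum(c[y] * feat(x, y)[k] for y in (0, 1)) for k in (0, 1)]
        for k in (0, 1):
            g[k] += w * sum(c[y] * (feat(x, y)[k] - ef[k])
                            * math.log(c[y] / r[y]) for y in (0, 1))
    return g

theta = [0.9, -0.6]
analytic = grad_g_unlab(theta)
eps = 1e-6
numeric = [
    (g_unlab([t + (eps if i == k else 0.0) for i, t in enumerate(theta)])
     - g_unlab([t - (eps if i == k else 0.0) for i, t in enumerate(theta)]))
    / (2 * eps)
    for k in (0, 1)
]
```

Here the sums over Y have two terms each; in the structured case they would range over exponentially many label sequences.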
3 A variational training procedure
In this section, we derive a convergent variational algorithm to train rate distortion based semi-supervised CRFs for sequence labeling. The basic idea of convexity-based variational inference is to make use of Jensen's inequality to obtain an adjustable upper bound on the objective function [17]. Essentially, one considers a family of upper bounds indexed by a set of variational parameters. The variational parameters are chosen by an optimization procedure that attempts to find the tightest possible upper bound.
Following Jordan et al. [17], we begin by introducing a variational distribution q(x) to bound H\big(p_\theta(y)\big) using Jensen's inequality, as follows:

H\big(p_\theta(y)\big) = -\sum_y \sum_{x \in D^u} \tilde{p}_u(x) p_\theta(y|x) \log \Big( \sum_{x \in D^u} q(x) \frac{\tilde{p}_u(x) p_\theta(y|x)}{q(x)} \Big)

\qquad \le -\sum_y \sum_{j=N+1}^{M} \tilde{p}_u(x^{(j)}) p_\theta(y|x^{(j)}) \Big[ \sum_{l=N+1}^{M} q(x^{(l)}) \log \frac{\tilde{p}_u(x^{(l)}) p_\theta(y|x^{(l)})}{q(x^{(l)})} \Big]
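The Jensen bound can be exercised numerically: for any strictly positive distribution q over the unlabeled points, the right-hand side dominates H(p_\theta(y)). The empirical distribution and conditionals below are arbitrary toy values.

```python
import math

# Toy check of the Jensen bound:
# H(p(y)) <= -sum_y p(y) sum_l q_l log(pu_l * c_l(y) / q_l) for any q > 0.
pu = [0.25, 0.25, 0.5]                        # empirical p_u(x) on 3 points
conds = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]  # p(y | x^(l)) per point

p_y = [sum(pu[l] * conds[l][y] for l in range(3)) for y in (0, 1)]
h_true = -sum(p * math.log(p) for p in p_y)

def bound(q):
    return -sum(p_y[y] * sum(q[l] * math.log(pu[l] * conds[l][y] / q[l])
                             for l in range(3)) for y in (0, 1))

q_uniform = [1.0 / 3] * 3
q_skewed = [0.7, 0.2, 0.1]
```

Any admissible q yields an upper bound; tightening over q is what the variational procedure does.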
Thus the desideratum of finding a tight upper bound of RL_{MI}(\theta) in Eq. (6) translates directly into the following alternative optimization problem:

(\theta^*, q^*) = \min_{\theta, q} \mathcal{U}(\theta, q)

where

\mathcal{U}(\theta, q) = -\sum_{i=1}^{N} \tilde{p}_l(x^{(i)}) \log p_\theta(y^{(i)}|x^{(i)}) + \lambda U(\theta) - \gamma \sum_{j=N+1}^{M} \sum_{l=N+1}^{M} \tilde{p}_u(x^{(j)}) q(x^{(l)}) \sum_y p_\theta(y|x^{(j)}) \log p_\theta(y|x^{(l)})    (13)

\qquad - \gamma \sum_{j=N+1}^{M} \tilde{p}_u(x^{(j)}) \sum_{l=N+1}^{M} q(x^{(l)}) \log \frac{\tilde{p}_u(x^{(l)})}{q(x^{(l)})} + \gamma \sum_{j=N+1}^{M} \tilde{p}_u(x^{(j)}) \sum_y p_\theta(y|x^{(j)}) \log p_\theta(y|x^{(j)})    (14)
Minimizing \mathcal{U} with respect to q has a closed form solution:

q(x^{(l)}) = \frac{ \tilde{p}_u(x^{(l)}) \exp\Big( \sum_{j=N+1}^{M} \sum_y \tilde{p}_u(x^{(j)}) p_\theta(y|x^{(j)}) \log p_\theta(y|x^{(l)}) \Big) }{ \sum_{x^{(k)} \in D^u} \tilde{p}_u(x^{(k)}) \exp\Big( \sum_{j=N+1}^{M} \sum_y \tilde{p}_u(x^{(j)}) p_\theta(y|x^{(j)}) \log p_\theta(y|x^{(k)}) \Big) } \quad \forall x^{(l)} \in D^u    (15)
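A quick way to validate this closed form is to isolate the q-dependent part of \mathcal{U}(\theta, q) and confirm that it attains a value no larger than other candidate distributions; the conditionals below are toy values, and the q-dependent part reduces (up to the constant \gamma) to B(q) = -sum_l q_l (s_l + log pu_l - log q_l) with s_l the pu-weighted cross log-likelihood.

```python
import math

# Toy check that the closed-form q minimizes the q-dependent part of
# U(theta, q). B(q) = -sum_l q_l (s_l + log pu_l - log q_l), where
# s_l = sum_j pu_j sum_y p(y|x^(j)) log p(y|x^(l)). Values are invented.
pu = [0.25, 0.25, 0.5]
conds = [[0.9, 0.1], [0.2, 0.8], [0.5, 0.5]]

s = [sum(pu[j] * sum(conds[j][y] * math.log(conds[l][y]) for y in (0, 1))
         for j in range(3)) for l in range(3)]

def b(q):
    return -sum(q[l] * (s[l] + math.log(pu[l]) - math.log(q[l]))
                for l in range(3))

w = [pu[l] * math.exp(s[l]) for l in range(3)]
q_star = [v / sum(w) for v in w]  # normalized closed-form solution
others = [[1.0 / 3] * 3, [0.6, 0.3, 0.1], [0.1, 0.1, 0.8]]
```

B(q) is a linear term plus a negative entropy, hence convex in q, so the exponentiated-and-normalized form is the unique minimizer.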
It can be shown that

\mathcal{U}(\theta, q) = RL_{MI}(\theta) + \gamma \sum_y \Big( \sum_{x \in D^u} \tilde{p}_u(x) p_\theta(y|x) \Big) D\big(q(x), q_\theta(x|y)\big) \ge RL_{MI}(\theta)    (16)

where q_\theta(x|y) = \frac{\tilde{p}_u(x) p_\theta(y|x)}{\sum_{x' \in D^u} \tilde{p}_u(x') p_\theta(y|x')} \; \forall x \in D^u and D\big(q(x), q_\theta(x|y)\big) \ge 0. Thus \mathcal{U} is bounded below, and the alternating minimization algorithm monotonically decreases \mathcal{U} and converges.
In order to calculate the derivative of \mathcal{U} with respect to \theta, we just need to notice that the first term in Eq. (13) is the log-likelihood in a CRF, the first term in Eq. (14) is a constant, and the second term in Eq. (14) is the conditional entropy in [16]. They can all be efficiently computed [16, 21]. In the following, we show how to compute the derivative of the last term in Eq. (13) using an idea similar to that proposed in [21]. Without loss of generality, we assume all the unlabeled data are of equal lengths in the sequence labeling case. We will describe how to handle the case of unequal lengths in Sec. 4.

If we define A(y, x^{(j)}, x^{(l)}) = \sum_y p_\theta(y|x^{(j)}) \log p_\theta(y|x^{(l)}) in (13) for a fixed (j, l) pair, where we assume x^{(j)} and x^{(l)} form two linear-chain graphs of equal lengths, we can calculate the derivative of A(y, x^{(j)}, x^{(l)}) with respect to the k-th parameter \theta_k, where all the terms can be computed through standard dynamic programming techniques in CRFs except one term, \sum_y p_\theta(y|x^{(j)}) \log p_\theta(y|x^{(l)}) f_k(x^{(j)}, y). Nevertheless, similar to [21], we compute this term as follows: we first define the pairwise subsequence constrained entropy on (x^{(j)}, x^{(l)}) (as opposed to the subsequence constrained entropy defined in [21]) as:

H_{jl}\big(y_{\neg(a..b)} \,|\, y_{a..b}, x^{(j)}, x^{(l)}\big) = \sum_{y_{\neg(a..b)}} p_\theta\big(y_{\neg(a..b)} \,|\, y_{a..b}, x^{(j)}\big) \log p_\theta\big(y_{\neg(a..b)} \,|\, y_{a..b}, x^{(l)}\big)
where y_{\neg(a..b)} is the label sequence with its subsequence y_{a..b} fixed. If we have H_{jl} for all (a, b), then the term \sum_y p_\theta(y|x^{(j)}) \log p_\theta(y|x^{(l)}) f_k(x^{(j)}, y) can be easily computed. Using the independence property of the linear-chain CRF, we have the following:

\sum_{y_{\neg(a..b)}} p_\theta\big(y_{\neg(a..b)}, y_{a..b} \,|\, x^{(j)}\big) \log p_\theta\big(y_{\neg(a..b)}, y_{a..b} \,|\, x^{(l)}\big)

\qquad = p_\theta(y_{a..b}|x^{(j)}) \log p_\theta(y_{a..b}|x^{(l)}) + p_\theta(y_{a..b}|x^{(j)}) H^{\alpha}_{jl}\big(y_{1..(a-1)} \,|\, y_a, x^{(j)}, x^{(l)}\big) + p_\theta(y_{a..b}|x^{(j)}) H^{\beta}_{jl}\big(y_{(b+1)..n} \,|\, y_b, x^{(j)}, x^{(l)}\big)

Given H^{\alpha}_{jl}(\cdot) and H^{\beta}_{jl}(\cdot), any sequence entropy can be computed in constant time [21]. Computing H^{\alpha}_{jl}(\cdot) can be done using the following dynamic program [21]:

H^{\alpha}_{jl}\big(y_{1..i} \,|\, y_{i+1}, x^{(j)}, x^{(l)}\big) = \sum_{y_i} p_\theta(y_i|y_{i+1}, x^{(j)}) \log p_\theta(y_i|y_{i+1}, x^{(l)}) + \sum_{y_i} p_\theta(y_i|y_{i+1}, x^{(j)}) H^{\alpha}_{jl}\big(y_{1..(i-1)} \,|\, y_i, x^{(j)}, x^{(l)}\big)

The base case for the dynamic program is H^{\alpha}_{jl}\big(\emptyset \,|\, y_1, x^{(j)}, x^{(l)}\big) = 0. All the probabilities (i.e., p_\theta(y_i|y_{i+1}, x^{(j)})) needed in the above formula can be obtained using belief propagation. H^{\beta}_{jl}(\cdot) can be similarly computed using dynamic programming.
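The dynamic program above can be checked against brute-force enumeration on a short chain. The sketch below uses randomly drawn backward conditionals p(y_t | y_{t+1}, x) as stand-ins for the quantities belief propagation would supply for x^{(j)} and x^{(l)} in a real chain CRF; everything else is invented toy structure.

```python
import itertools
import math
import random

# Toy pair of backward-factorized chains over labels {0, 1}:
# p(y_1..i | y_{i+1}, x) = prod_{t <= i} p(y_t | y_{t+1}, x).
random.seed(0)
L, n = (0, 1), 4

def rand_cond():
    # cond[t-1][b][a] = p(y_t = a | y_{t+1} = b), for t = 1 .. n-1
    tables = []
    for _ in range(n - 1):
        table = []
        for _ in L:
            p = random.uniform(0.1, 0.9)
            table.append([p, 1.0 - p])
        tables.append(table)
    return tables

cj, cl = rand_cond(), rand_cond()  # conditionals for x^(j) and x^(l)

def h_alpha(i, b):
    """H^alpha_jl(y_1..i | y_{i+1} = b) via the recursion; base case i = 0."""
    if i == 0:
        return 0.0
    return sum(cj[i - 1][b][a] * (math.log(cl[i - 1][b][a]) + h_alpha(i - 1, a))
               for a in L)

def h_alpha_brute(i, b):
    """Same quantity by summing over all label prefixes y_1..i explicitly."""
    total = 0.0
    for ys in itertools.product(L, repeat=i):
        pj = pl = 1.0
        nxt = b
        for t in range(i, 0, -1):
            pj *= cj[t - 1][nxt][ys[t - 1]]
            pl *= cl[t - 1][nxt][ys[t - 1]]
            nxt = ys[t - 1]
        total += pj * math.log(pl)
    return total
```

The recursion visits O(n |Y|^2) terms, while the brute-force sum visits |Y|^i prefixes; both yield the same value on this toy chain.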
4 Experiments
We compare our rate distortion approach for semi-supervised learning with one of the state-of-the-art semi-supervised learning algorithms, the minimum conditional entropy approach, and with the maximum conditional entropy approach, on two real-world problems: text categorization and hand-written character recognition. The purpose of the first task is to show the effectiveness of the rate distortion approach over the minimum and maximum conditional entropy approaches when no approximation is needed in training. In the second task, a variational method has to be used to train semi-supervised chain-structured CRFs. We demonstrate the effectiveness of the rate distortion approach over the minimum and maximum conditional entropy approaches even when an approximation is used during training.
4.1 Text categorization
We select different class pairs from the 20 newsgroups dataset^2 to construct our binary classification problems. The chosen classes are similar to each other and thus hard for classification algorithms. We use the Porter stemmer to reduce the morphological word forms. For each label, we rank words based on their mutual information with that label (whether it predicts label 1 or 0). Then we choose the top 100 words as our features. For each problem, we select 15% of the training data, almost 150 instances, as the labeled training data and select the unlabeled data from the remaining data. The validation set (for setting the free parameters, e.g. \lambda and \gamma) contains 100 instances. The test set contains about 700 instances. We vary the ratio between the amount of unlabeled and labeled data, repeat the experiments ten times with different randomly selected labeled and unlabeled training data, and report the mean and standard deviation over the different trials. For each run, we initialize the model parameters for mutual information (MI) regularization and maximum/minimum conditional entropy (CE) regularization using the parameters learned from an l_2-regularized logistic regression classifier. Figure 1 shows the classification accuracies of these four regularization methods versus the ratio between the amount of unlabeled and labeled data on the different classification problems. We can see that mutual information regularization outperforms the other three regularization schemes. In most cases, maximum CE regularization outperforms minimum CE regularization and the baseline (logistic regression with l_2 regularization), which uses only the labeled data. Although the randomly selected labeled instances are different for different experiments, we should not see a significant difference in the performance of the learned models based on the baseline, since for each particular ratio of labeled and unlabeled data the performance is averaged over ten runs. We suspect the reason for the performance differences of the baseline models in Figure 1 is our feature selection phase.
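The mutual-information ranking step can be sketched as follows. The four-document corpus is invented (it is not the 20-newsgroups data), and MI is estimated from co-occurrence counts of a binary word-occurrence variable and the label.

```python
import math

# Rank words by the mutual information between a binary word-occurrence
# variable and the document label, estimated from counts on a toy corpus.
docs = [({"windows", "driver"}, 1), ({"windows", "file"}, 1),
        ({"mac", "file"}, 0), ({"mac", "driver"}, 0)]

def word_label_mi(word):
    n = len(docs)
    joint = {(w, y): 0 for w in (0, 1) for y in (0, 1)}
    for words, y in docs:
        joint[(int(word in words), y)] += 1
    mi = 0.0
    for (w, y), c in joint.items():
        if c == 0:
            continue
        pw = sum(joint[(w, yy)] for yy in (0, 1)) / n
        py = sum(joint[(ww, y)] for ww in (0, 1)) / n
        mi += (c / n) * math.log((c / n) / (pw * py))
    return mi

vocab = {w for ws, _ in docs for w in ws}
ranked = sorted(vocab, key=word_label_mi, reverse=True)
```

In this toy corpus "windows" and "mac" perfectly predict the label and rank first, while "driver" and "file" are independent of it and get zero MI.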
^2 http://people.csail.mit.edu/jrennie/20Newsgroups.
[Figure 1 panels: classification accuracy plotted against the ratio of unlabeled to labeled data, with curves for MI, minCE, maxCE, and L2.]
Figure 1: Results on five different binary classification problems in text categorization (left to right):
comp.os.ms-windows.misc vs comp.sys.mac.hardware; rec.autos vs rec.motorcycles; rec.sport.baseball vs
rec.sport.hockey; talk.politics.guns vs talk.politics.misc; sci.electronics vs sci.med.
[Figure 2 panels: classification accuracy plotted against the ratio of unlabeled to labeled data, with curves for MI, minCE, maxCE, and L2.]
Figure 2: Results on hand-written character recognition: (left) sequence labeling; (right) multi-class classification.
4.2 Hand-written character recognition
Our dataset for hand-written character recognition contains ~6000 handwritten words with an average length of ~8 characters. Each word was divided into characters, and each character was resized to a 16 × 8 binary image. We choose ~600 words as labeled data, ~600 words as validation data, and ~2000 words as test data. Similar to text categorization, we vary the ratio between the amount of unlabeled and labeled data, and report the mean and standard deviation of classification accuracies over several trials.

We use a chain-structured graph to model hand-written character recognition as a sequence labeling problem, similar to [29]. Since the unlabeled data may have different lengths, we modify the mutual information as I = \sum_{\rho} I_{\rho}, where I_{\rho} is the mutual information computed on all the unlabeled data with length \rho. We compare our approach (MI) with the other regularizations (maximum/minimum conditional entropy, l_2). The results are shown in Fig. 2 (left). As a sanity check, we have also tried solving hand-written character recognition as a multi-class classification problem, i.e. without considering the correlation between adjacent characters in a word. The results are shown in Fig. 2 (right). We can see that MI regularization outperforms the maxCE, minCE, and l_2 regularizations in both the multi-class and sequence labeling cases. There are significant gains in the structured learning setting compared with the standard multi-class classification setting.
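The length-stratified sum I = \sum_\rho I_\rho can be sketched as below. The per-sequence conditionals use an invented position-wise model so that the sum over label sequences can be brute-forced for short toy sequences; the grouping-by-length logic is the point of the example.

```python
import itertools
import math

# Group unlabeled toy sequences by length rho and sum the per-group
# mutual informations, with Y ranging over whole label sequences.
def cond_seq(x):
    # Invented model: independent positions, p(y_t = 1) = sigmoid(x_t).
    probs = []
    for ys in itertools.product((0, 1), repeat=len(x)):
        p = 1.0
        for xt, yt in zip(x, ys):
            p1 = 1.0 / (1.0 + math.exp(-xt))
            p *= p1 if yt == 1 else 1.0 - p1
        probs.append(p)
    return probs

def mi_group(group):
    m = len(group)
    pu = [1.0 / m] * m
    conds = [cond_seq(x) for x in group]
    k = len(conds[0])
    p_y = [sum(pu[i] * conds[i][y] for i in range(m)) for y in range(k)]
    return sum(pu[i] * conds[i][y] * math.log(conds[i][y] / p_y[y])
               for i in range(m) for y in range(k))

seqs = [(0.5,), (-1.0,), (1.0, 0.3), (0.2, -0.8), (-0.5, 0.5)]
groups = {}
for x in seqs:
    groups.setdefault(len(x), []).append(x)
total_mi = sum(mi_group(g) for g in groups.values())
```

Each group contributes a nonnegative I_\rho, so the stratified total is a well-defined regularizer even with mixed sequence lengths.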
5 Conclusion and future work
We have presented a new semi-supervised discriminative learning algorithm to train CRFs. The proposed approach is motivated by the rate distortion framework in information theory and utilizes the mutual information on the unlabeled data as a regularization term, or, more precisely, a data-dependent prior. Even though a variational approximation has to be used during training even for a simple chain-structured graph, our experimental results show that our proposed rate distortion approach outperforms supervised CRFs with l_2 regularization, a state-of-the-art semi-supervised minimum conditional entropy approach, and a semi-supervised maximum conditional entropy approach in both multi-class classification and sequence labeling problems. As future work, we would like to apply this approach to other graph structures, develop more efficient learning algorithms, and illuminate how reducing the information rate helps generalization.
References
[1] S. Abney. Semi-Supervised Learning for Computational Linguistics. Chapman & Hall/CRC, 2007.
[2] Y. Altun, D. McAllester and M. Belkin. Maximum margin semi-supervised learning for structured variables. NIPS 18:33-40, 2005.
[3] S. Arimoto. An algorithm for computing the capacity of arbitrary discrete memoryless channels. IEEE Transactions on Information Theory, 18:14-20, 1972.
[4] L. Bahl, P. Brown, P. de Souza and R. Mercer. Maximum mutual information estimation of hidden Markov
model parameters for speech recognition. ICASSP, 11:49-52, 1986.
[5] T. Berger and J. Gibson. Lossy source coding. IEEE Transactions on Information Theory, 44(6):2693-2723, 1998.
[6] R. Blahut. Computation of channel capacity and rate-distortion functions. IEEE Transactions on Information Theory, 18:460-473, 1972.
[7] R. Blahut. Principles and Practice of Information Theory, Addison-Wesley, 1987.
[8] S. Boyd and L. Vandenberghe. Convex Optimization, Cambridge University Press, 2004.
[9] U. Brefeld and T. Scheffer. Semi-supervised learning for structured output variables. ICML, 145-152,
2006.
[10] O. Chapelle, B. Schölkopf and A. Zien. Semi-Supervised Learning, MIT Press, 2006.
[11] A. Corduneanu and T. Jaakkola. On information regularization. UAI, 151-158, 2003.
[12] A. Corduneanu and T. Jaakkola. Distributed information regularization on graphs. NIPS, 17:297-304,
2004.
[13] A. Corduneanu and T. Jaakkola. Data dependent regularization. In Semi-Supervised Learning, O.
Chapelle, B. Schölkopf and A. Zien (Editors), 163-182, MIT Press, 2006.
[14] T. Cover and J. Thomas. Elements of Information Theory, Wiley, 1991.
[15] Y. Grandvalet and Y. Bengio. Semi-supervised learning by entropy minimization. NIPS, 17:529-536,
2004.
[16] F. Jiao, S. Wang, C. Lee, R. Greiner and D. Schuurmans. Semi-supervised conditional random fields for
improved sequence segmentation and labeling. COLING/ACL, 209-216, 2006.
[17] M. Jordan, Z. Ghahramani, T. Jaakkola and L. Saul. Introduction to variational methods for graphical
models. Machine Learning, 37:183-233, 1999.
[18] D. Jurafsky and J. Martin. Speech and Language Processing, 2nd Edition, Prentice Hall, 2008.
[19] J. Lafferty, A. McCallum and F. Pereira. Conditional random fields: Probabilistic models for segmenting
and labeling sequence data. ICML, 282-289, 2001.
[20] C. Lee, S. Wang, F. Jiao, D. Schuurmans and R. Greiner. Learning to model spatial dependency: Semisupervised discriminative random fields. NIPS, 19:793-800, 2006.
[21] G. Mann and A. McCallum. Efficient computation of entropy gradient for semi-supervised conditional
random fields. NAACL/HLT, 109-112, 2007.
[22] G. Mann and A. McCallum. Generalized expectation criteria for semi-supervised learning of conditional
random fields. ACL, 870-878, 2008.
[23] Y. Normandin. Maximum mutual information estimation of hidden Markov models. In Automatic Speech
and Speaker Recognition: Advanced Topics, C. Lee, F. Soong and K. Paliwal (Editors), 57-81, Springer,
1996.
[24] N. Oliver and A. Garg. MMIHMM: maximum mutual information hidden Markov models. ICML, 466-473, 2002.
[25] K. Rose. Deterministic annealing for clustering, compression, classification, regression, and related optimization problems. Proceedings of the IEEE, 86:2210-2239, 1998.
[26] N. Slonim, G. Atwal, G. Tkacik and W. Bialek. Information based clustering. Proceedings of National
Academy of Science (PNAS), 102:18297-18302, 2005.
[27] S. Still and W. Bialek. How many clusters? An information theoretic perspective. Neural Computation,
16:2483-2506, 2004.
[28] M. Szummer and T. Jaakkola. Information regularization with partially labeled data. NIPS, 1025-1032,
2002.
[29] B. Taskar, C. Guestrin and D. Koller. Max-margin Markov networks. NIPS, 16:25-32, 2003.
[30] N. Tishby, F. Pereira, and W. Bialek. The information bottleneck method. The 37th Annual Allerton
Conference on Communication, Control, and Computing, 368-377, 1999.
[31] L. Zheng, S. Wang, Y. Liu and C. Lee. Information theoretic regularization for semi-supervised boosting.
KDD, 1017-1026, 2009.
[32] X. Zhu. Semi-supervised learning literature survey. Computer Sciences TR 1530, University of Wisconsin
Madison, 2007.
Nash Equilibria of Static Prediction Games
Michael Brückner
Department of Computer Science
University of Potsdam, Germany
[email protected]
Tobias Scheffer
Department of Computer Science
University of Potsdam, Germany
[email protected]
Abstract
The standard assumption of identically distributed training and test data is violated
when an adversary can exercise some control over the generation of the test data.
In a prediction game, a learner produces a predictive model while an adversary
may alter the distribution of input data. We study single-shot prediction games in
which the cost functions of learner and adversary are not necessarily antagonistic.
We identify conditions under which the prediction game has a unique Nash equilibrium, and derive algorithms that will find the equilibrial prediction models. In a
case study, we explore properties of Nash-equilibrial prediction models for email
spam filtering empirically.
1 Introduction
The assumption that training and test data are governed by identical distributions underlies many
popular learning mechanisms. In a variety of applications, however, data at application time are
generated by an adversary whose interests are in conflict with those of the learner. In computer
and network security, fraud detection, and drug design, the distribution of data is changed - by a malevolent individual or under selective pressure - in response to the predictive model.
An adversarial interaction between learner and data generator can be modeled as a single-shot game
in which one player controls the predictive model whereas the other player exercises some control
over the distribution of the input data. The optimal action for either player generally depends on
both players' moves.
The minimax strategy minimizes the costs under the worst possible move of the opponent. This
strategy is motivated for an opponent whose goal is to inflict the highest possible costs on the learner;
it can also be applied when no information about the interests of the adversary is available. Lanckriet
et al. [1] study the so called Minimax Probability Machine. This classifier minimizes the maximal
probability of misclassifying new instances for a given mean and covariance matrix of each class.
El Ghaoui et al. [2] study a minimax model for input data that are known to lie within some hyperrectangle. Their solution minimizes the worst-case loss over all possible choices of the data in these
intervals. Similarly, minimax solutions to classification games in which the adversary deletes input
features or performs a feature transformation have been studied [3, 4, 5]. These studies show that
the minimax solution outperforms a learner that naively minimizes the costs on the training data
without taking the adversary into account.
When rational opponents aim at minimizing their personal costs, then the minimax solution is overly
pessimistic. A Nash equilibrium is a pair of actions chosen such that no player gains a benefit by
unilaterally selecting a different action. If a game has a unique Nash equilibrium, it is the strongest
available concept of an optimal strategy in a game against a rational opponent. If, however, multiple
equilibria exist and the players choose their action according to distinct ones, then the resulting
combination may be arbitrarily disadvantageous for either player. It is therefore interesting to study
whether adversarial prediction games have a unique Nash equilibrium.
We study games in which both players - learner and adversary - have cost functions that consist of data-dependent loss and regularizer. Contrasting prior results, we do not assume that the players' cost functions are antagonistic. As an example, consider that a spam filter may minimize the error rate whereas a spam sender may aim at maximizing revenue solicited by spam emails. These criteria are conflicting, but not the exact negatives of each other. We study under which conditions unique Nash equilibria exist and derive algorithms for identifying them.
The rest of this paper is organized as follows. Section 2 introduces the problem setting and defines
action spaces and cost functions. We study the existence of a unique Nash equilibrium and derive an
algorithm that finds it under defined conditions in Section 3. Section 4 discusses antagonistic loss
functions. For this case, we derive an algorithm that finds a unique Nash equilibrium whenever it
exists. Section 5 reports on experiments on email spam filtering; Section 6 concludes.
2 Modeling the Game
We study prediction games between a learner (v = +1) and an adversary (v = -1). We consider static infinite games. Static or single-shot game means that players make decisions simultaneously; neither player has information about the opponent's decisions. Infinite refers to continuous cost functions that leave players with infinitely many strategies to choose from. We constrain the players to select pure (i.e., deterministic) strategies. Mixed strategies and extensive-form games such as Stackelberg, Cournot, Bertrand, and repeated games are not within the scope of this work.

Both players can access an input matrix of training instances X with outputs y, drawn according to a probability distribution q(X, y) = ∏_{i=1}^n q(x_i, y_i). The learner's action a_{+1} ∈ A_{+1} now is to choose parameters of a linear model h_{a_{+1}}(x) = a_{+1}^T x. Simultaneously, the adversary chooses a transformation function φ_{a_{-1}} that maps any input matrix X to an altered matrix φ_{a_{-1}}(X). This transformation induces a transition from input distribution q to test distribution q_test with q(X, y) = q_test(φ_{a_{-1}}(X), y) = ∏_{i=1}^n q_test(φ_{a_{-1}}(X)_i, y_i). Our main result uses a model that implements transformations as matrices a_{-1} ∈ A_{-1} ⊆ R^{m×n}. Transformation φ_{a_{-1}}(X) = X + a_{-1} adds perturbation matrix a_{-1} to input matrix X, i.e., input pattern x_i is subjected to a perturbation vector a_{-1,i}. If, for instance, inputs are word vectors, the perturbation matrix adds and deletes words.

The possible moves a = [a_{+1}, a_{-1}] constitute the joint action space A = A_{+1} × A_{-1} which is assumed to be nonempty, compact, and convex. Action spaces A_v are parameters of the game. For instance, in spam filtering it is appropriate to constrain A_{-1} such that perturbation matrices contain zero vectors for non-spam messages; this reflects that spammers can only alter spam messages.
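To make the transformation model concrete, here is a small sketch (not from the paper; it uses numpy, stores instances as rows rather than columns, and all names are illustrative) of the additive perturbation with the spam-only constraint on the adversary's action space:

```python
import numpy as np

def transform(X, A_adv, is_spam):
    """Additive transformation phi_{a_-1}(X) = X + a_-1.

    X:       (n, m) matrix of input instances (rows are instances).
    A_adv:   (n, m) perturbation matrix chosen by the adversary.
    is_spam: boolean mask; the adversary may only alter spam rows,
             so perturbations of non-spam rows are forced to zero.
    """
    A = A_adv.copy()
    A[~np.asarray(is_spam)] = 0.0  # non-spam instances stay unperturbed
    return X + A

X = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
A = np.array([[ 0.5, 0.5, -0.5],
              [-1.0, 1.0,  1.0]])
Xt = transform(X, A, is_spam=[True, False])
```

Only the first (spam) row is perturbed; the second row passes through unchanged, matching the constraint on A_{-1}.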
Each pair of actions a incurs costs of θ_{+1}(a) and θ_{-1}(a), respectively, for the players. Each player has an individual loss function ℓ_v(y′, y) where y′ is the value of decision function h_{a_{+1}} and y is the true label. Section 4 will discuss antagonistic loss functions ℓ_{+1} = -ℓ_{-1}. However, our main contribution in Section 3 regards non-antagonistic loss functions. For instance, a learner may minimize the zero-one loss whereas the adversary may focus on the lost revenue.

Both players aim at minimizing their loss over the test distribution q_test. But, since q and consequently q_test are unknown, the cost functions are regularized empirical loss functions over the sample φ_{a_{-1}}(X) which reflects test distribution q_test. Equation 1 defines either player's cost function as player-specific loss plus regularizer. The learner's regularizer Ω_{a_{+1}} will typically regularize the capacity of h_{a_{+1}}. Regularizer Ω_{a_{-1}} controls the amount of distortion that the adversary may inflict on the data and thereby the extent to which an information payload has to be preserved.

    θ_v(a_v, a_{-v}) = Σ_{i=1}^n ℓ_v(h_{a_{+1}}(φ_{a_{-1}}(X)_i), y_i) + Ω_{a_v}    (1)
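As an illustration of Equation 1, the learner's cost for a logistic loss and an l2 regularizer might be computed as follows (a hypothetical sketch, not the authors' code; `lam` plays the role of the regularization weight, and instances are stored as rows):

```python
import numpy as np

def logistic_loss(margins):
    # elementwise log(1 + exp(-margin))
    return np.log1p(np.exp(-margins))

def learner_cost(w, X_pert, y, lam):
    """Sketch of Equation 1 for the learner: sum of per-instance losses
    on the perturbed sample phi_{a_-1}(X) plus a strictly convex
    regularizer Omega = (lam/2) * ||w||^2 (an assumed choice)."""
    margins = y * (X_pert @ w)
    return logistic_loss(margins).sum() + 0.5 * lam * (w @ w)

w = np.array([1.0, -1.0])
X_pert = np.array([[1.0, 0.0],
                   [0.0, 1.0]])
y = np.array([1.0, -1.0])
cost = learner_cost(w, X_pert, y, lam=1.0)
```

Both instances have margin 1 here, so the cost is 2·log(1 + e⁻¹) plus the regularizer value 1.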
Each player's cost function depends on the opponent's parameter. In general, there is no value a_v that minimizes θ_v(a_v, a_{-v}) independently of the opponent's choice of a_{-v}. The minimax solution arg min_{a_v} max_{a_{-v}} θ_v(a_v, a_{-v}) minimizes the costs under the worst possible move of the opponent. This solution is optimal for a malicious opponent whose goal is to inflict maximally high costs on the learner. In absence of any information on the opponent's goals, the minimax solution still gives the lowest upper bound on the learner's costs over all possible strategies of the opponent.

If both players - learner and adversary - behave rationally in the sense of minimizing their personal costs, then the Nash equilibrium is the strongest available concept of an optimal choice of a_v. A
Nash equilibrium is defined as a pair of actions a* = [a*_{+1}, a*_{-1}] such that no player can benefit from changing the strategy unilaterally. That is, for both players v ∈ {-1, +1},

    θ_v(a*_v, a*_{-v}) = min_{a_v ∈ A_v} θ_v(a_v, a*_{-v}).    (2)

The Nash equilibrium has several catches. Firstly, if the adversary behaves irrationally in the sense of inflicting high costs on the other player at the expense of incurring higher personal costs, then choosing an action according to the Nash equilibrium may result in higher costs than the minimax solution. Secondly, a game may not have an equilibrium point. If an equilibrium point exists, the game may thirdly possess multiple equilibria. If a* = [a*_{+1}, a*_{-1}] and a′ = [a′_{+1}, a′_{-1}] are distinct equilibria, and each player decides to act according to one of them, then a combination [a*_v, a′_{-v}] may be a poor joint strategy and may give rise to higher costs than a worst-case solution. However, if a unique Nash equilibrium exists and both players seek to minimize their individual costs, then the Nash equilibrium is guaranteed to be the optimal move.
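The defining property in Equation 2 can be checked numerically. The following sketch (a made-up two-player quadratic game, not from the paper) verifies that neither player can lower its cost by a unilateral deviation from the equilibrium (0, 0):

```python
import numpy as np

# Hypothetical two-player game with quadratic costs; its unique Nash
# equilibrium is (0, 0): best responses are x = y/2 and y = -x/2,
# whose only fixed point is the origin.
theta1 = lambda x, y: x**2 - x*y   # cost of player +1, who controls x
theta2 = lambda x, y: y**2 + x*y   # cost of player -1, who controls y

x_star, y_star = 0.0, 0.0
grid = np.linspace(-1.0, 1.0, 201)
# Equation 2: no unilateral deviation may reduce a player's cost.
no_gain_1 = all(theta1(x_star, y_star) <= theta1(x, y_star) + 1e-12 for x in grid)
no_gain_2 = all(theta2(x_star, y_star) <= theta2(x_star, y) + 1e-12 for y in grid)
```

Fixing the opponent at the equilibrium, each player's cost reduces to a nonnegative quadratic minimized at its own equilibrium action, so both checks succeed.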
3 Solution for Convex Loss Functions
In this section, we study the existence of a unique Nash equilibrium of prediction games with cost functions as in Equation 1. We derive an algorithm that identifies the unique equilibrium if sufficient conditions are met. We consider regularized player-specific loss functions ℓ_v(y′, y) which are not assumed to satisfy the antagonicity criterion ℓ_{+1} = -ℓ_{-1}. Both loss functions are, however, required to be convex and twice differentiable, and we assume strictly convex regularizers Ω_{a_v} such as the l2-norm regularizer. Player- and instance-specific costs may be attached to the loss functions; however, we omit such cost factors for greater notational harmony. This section's main result is that if both loss functions are monotonic in y′ with different monotonicities - that is, one is monotonically increasing, and one is decreasing for any fixed y - then the game has a unique Nash equilibrium that can be found efficiently.

Theorem 1. Let the cost functions be defined as in Equation 1 with strictly convex regularizers Ω_{a_v}, let action spaces A_v be nonempty, compact, and convex subsets of finite-dimensional Euclidean spaces. If for any fixed y, both loss functions ℓ_v(y′, y) are monotonic in y′ ∈ R with distinct monotonicity, convex in y′, and twice differentiable in y′, then a unique Nash equilibrium exists.
Proof. The players' regularizers Ω_{a_v} are strictly convex, and both loss functions ℓ_v(h_{a_{+1}}(φ_{a_{-1}}(X)_i), y_i) are convex and twice differentiable in a_v ∈ A_v for any fixed a_{-v} ∈ A_{-v}. Hence, both cost functions θ_v are continuously differentiable and strictly convex, and according to Theorem 4.3 in [6], at least one Nash equilibrium exists. As each player has an own nonempty, compact, and convex action space A_v, Theorem 2 of [7] applies as well; that is, if function

    θ_r(a) = r θ_{+1}(a_{+1}, a_{-1}) + (1 - r) θ_{-1}(a_{+1}, a_{-1})    (3)

is diagonally strictly convex in a for some fixed 0 < r < 1, then a unique Nash equilibrium exists. A sufficient condition for θ_r(a) to be diagonally strictly convex is that matrix J_r(a) in Equation 4 is positive definite for any a ∈ A (see Theorem 6 in [7]). This matrix

    J_r(a) = [ r ∇²_{a_{+1} a_{+1}} θ_{+1}(a)        r ∇²_{a_{+1} a_{-1}} θ_{+1}(a)     ]    (4)
             [ (1-r) ∇²_{a_{-1} a_{+1}} θ_{-1}(a)    (1-r) ∇²_{a_{-1} a_{-1}} θ_{-1}(a) ]

is the Jacobian of the pseudo-gradient of θ_r(a), that is,

    g_r(a) = [ r ∇_{a_{+1}} θ_{+1}(a)      ]    (5)
             [ (1-r) ∇_{a_{-1}} θ_{-1}(a)  ]

We want to show that J_r(a) is positive definite for some fixed r if both loss functions ℓ_v(y′, y) have distinct monotonicity and are convex in y′. Let ℓ′_v(y′, y) be the first and ℓ″_v(y′, y) be the second derivative of ℓ_v(y′, y) with respect to y′. Let A_i denote the matrix where the i-th column is a_{+1} and all other elements are zero, let Λ_v be the diagonal matrix with diagonal elements Λ_{v,i} = ℓ″_v(h_{a_{+1}}(φ_{a_{-1}}(X)_i), y_i), and we define γ_{v,i} = ℓ′_v(h_{a_{+1}}(φ_{a_{-1}}(X)_i), y_i). Using these definitions, the Jacobian of Equation 4 can be rewritten,
    J_r(a) = [ φ_{a_{-1}}(X)   0   ] [ r Λ_{+1}      r Λ_{+1}     ] [ φ_{a_{-1}}(X)   0   ]^T
             [ 0              A_1  ] [ (1-r) Λ_{-1}  (1-r) Λ_{-1} ] [ 0              A_1  ]
             [ ...            ...  ]                               [ ...            ...  ]
             [ 0              A_n  ]                               [ 0              A_n  ]

           + [ r ∇²Ω_{a_{+1}}      r γ_{+1,1} I         ...   r γ_{+1,n} I        ]
             [ (1-r) γ_{-1,1} I    (1-r) ∇²Ω_{a_{-1}}   ...   0                   ]    (6)
             [ ...                 ...                  ...   ...                 ]
             [ (1-r) γ_{-1,n} I    0                    ...   (1-r) ∇²Ω_{a_{-1}}  ]
The eigenvalues of the inner matrix of the first summand in Equation 6 are r Λ_{+1,i} + (1-r) Λ_{-1,i} and zero. Loss functions ℓ_v are convex in y′, that is, both second derivatives ℓ″_v(y′, y) are non-negative for any y′ and consequently r Λ_{+1,i} + (1-r) Λ_{-1,i} ≥ 0. Hence, the first summand of Jacobian J_r(a) is positive semi-definite for any choice of 0 < r < 1. Additionally, we can decompose the regularizers' Hessians as follows:

    ∇²Ω_{a_v} = λ_v I + (∇²Ω_{a_v} - λ_v I),    (7)

where λ_v is the smallest eigenvalue of ∇²Ω_{a_v}. As the regularizers are strictly convex, λ_v > 0 and the second summand in Equation 7 is positive semi-definite. Hence, it suffices to show that matrix
    [ r λ_{+1} I          r γ_{+1,1} I     ...   r γ_{+1,n} I   ]
    [ (1-r) γ_{-1,1} I    (1-r) λ_{-1} I   ...   0              ]    (8)
    [ ...                 ...              ...   ...            ]
    [ (1-r) γ_{-1,n} I    0                ...   (1-r) λ_{-1} I ]
is positive definite. We derive the eigenvalues of this matrix which assume only three different values; these are (1-r) λ_{-1} and

    (1/2) ( r λ_{+1} + (1-r) λ_{-1} ± sqrt( (r λ_{+1} - (1-r) λ_{-1})² + 4 r (1-r) γ_{+1}^T γ_{-1} ) ).    (9)

Eigenvalue (1-r) λ_{-1} is positive by definition. The others are positive if the value under the square root is non-negative and less than (r λ_{+1} + (1-r) λ_{-1})². The scalar product b = γ_{+1}^T γ_{-1} is non-positive as both loss functions ℓ_v(y′, y) are monotonic in y′ with distinct monotonicity, i.e., both derivatives have a different sign for any y′ ∈ R and consequently b ≤ 0. This implies that the value under the square root is less or equal to (r λ_{+1} - (1-r) λ_{-1})² < (r λ_{+1} + (1-r) λ_{-1})². In addition, b is bounded from below as action spaces A_v, and therefore the value of h_{a_{+1}}(φ_{a_{-1}}(X)_i), is bounded. Let b = inf_{a ∈ A} γ_{+1}^T γ_{-1} be such a lower bound with -∞ < b ≤ 0. We solve for r such that the value under the square root in Equation 9 attains a non-negative value, that is,

    0 < r ≤ ( (λ_{+1} + λ_{-1}) λ_{-1} - 2b - 2 sqrt(b² - λ_{+1} λ_{-1} b) ) / ( (λ_{+1} + λ_{-1})² - 4b )    (10)

or alternatively

    ( (λ_{+1} + λ_{-1}) λ_{-1} - 2b + 2 sqrt(b² - λ_{+1} λ_{-1} b) ) / ( (λ_{+1} + λ_{-1})² - 4b ) ≤ r < 1.    (11)

For any λ_{+1}, λ_{-1} > 0 there are values r that satisfy Inequality 10 or 11 because, for any fixed b ≤ 0,

    0 < (λ_{+1} + λ_{-1}) λ_{-1} - 2b ± 2 sqrt(b² - λ_{+1} λ_{-1} b) < (λ_{+1} + λ_{-1})² - 4b.    (12)

For such r all eigenvalues in Equation 9 are strictly positive which completes the proof.
According to Theorem 1, a unique Nash equilibrium exists for suitable loss functions such as the squared hinge loss, logistic loss, etc. To find this equilibrium, we make use of the weighted Nikaido-Isoda function (Equation 13). Intuitively, ψ_r(a, b) quantifies the weighted sum of the relative cost savings that the players can enjoy by changing from strategy a_v to strategy b_v while their opponent continues to play a_{-v}. Equation 14 defines the value function V_r(a) as the weighted sum of greatest possible cost savings attainable by changing from a to any strategy unilaterally. By these definitions, a* is a Nash equilibrium if, and only if, V_r(a*) is a global minimum of the value function with V_r(a*) = 0 for any fixed weights r_{+1} = r and r_{-1} = 1 - r, where 0 < r < 1.

    ψ_r(a, b) = Σ_{v ∈ {+1,-1}} r_v ( θ_v(a_v, a_{-v}) - θ_v(b_v, a_{-v}) )    (13)

    V_r(a) = max_{b ∈ A} ψ_r(a, b)    (14)

To find this global minimum of V_r(a) we make use of Corollary 3.4 of [8]. The weights r_v are fixed scaling factors of the players' objectives which do not affect the Nash equilibrium in Equation 2; however, these weights ensure the main condition of Corollary 3.4, that is, the positive definiteness of the Jacobian J_r(a) in Equation 4. According to this corollary, vector d = b̂ - a is a descent direction for the value function at any position a, where b̂ is the maximizing argument b̂ = arg max_{b ∈ A} ψ_r(a, b). In addition, the convexity of A ensures that any point a + td with t ∈ [0, 1] (i.e., a point between a and b̂) is a valid pair of actions.
Algorithm 1 Nash Equilibrium of Games with Convex Loss Functions
Require: Cost functions θ_v as defined in Equation 1 and action spaces A_v.
1: Select initial a⁰ ∈ A_{+1} × A_{-1}, set k := 0, and choose r that satisfies Inequality 10 or 11.
2: repeat
3:   Set b^k := arg max_{b ∈ A_{+1} × A_{-1}} ψ_r(a^k, b) where ψ_r is defined in Equation 13.
4:   Set d^k := b^k - a^k.
5:   Find maximal step size t^k ∈ {2^{-l} : l ∈ N} with V_r(a^k + t^k d^k) ≤ V_r(a^k) - ε ‖t^k d^k‖².
6:   Set a^{k+1} := a^k + t^k d^k and k := k + 1.
7: until ‖a^k - a^{k-1}‖ ≤ δ.

Algorithm 1 exploits these properties and finds the global minimum of V_r and thereby the unique Nash equilibrium, under the preconditions of Theorem 1. Convergence follows from the fact that if in the k-th iteration d^k = 0, then a^k is a Nash equilibrium which is unique according to Theorem 1. If d^k ≠ 0, then d^k is a descent direction of V_r at position a^k. Together with term ε ‖t^k d^k‖², this ensures V_r(a^{k+1}) < V_r(a^k), and as value function V_r is bounded from below, Algorithm 1 converges to the global minimum of V_r. Note that r only controls the convergence rate, but has no influence on the solution. Any value of r that satisfies Inequality 10 or 11 ensures convergence.
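The following toy sketch mimics Algorithm 1 on a hypothetical two-player quadratic game where the maximizer in step 3 decomposes into closed-form best responses, so the value function of Equation 14 is also available in closed form (all functions here are illustrative, not the paper's spam-filtering instantiation):

```python
# Toy stand-in for the two cost functions of Equation 1.
theta1 = lambda x, y: x**2 - x*y   # player +1 controls x
theta2 = lambda x, y: y**2 + x*y   # player -1 controls y

def best_responses(x, y):
    # Step 3: argmax_b psi_r(a, b) decomposes into per-player best
    # responses argmin_{b_v} theta_v(b_v, a_{-v}); here x = y/2, y = -x/2.
    return y / 2.0, -x / 2.0

def value_fn(x, y, r=0.5):
    # V_r(a) of Equation 14, evaluated via the closed-form best responses.
    bx, by = best_responses(x, y)
    return (r * (theta1(x, y) - theta1(bx, y))
            + (1.0 - r) * (theta2(x, y) - theta2(x, by)))

x, y = 1.0, -1.0
for _ in range(100):
    bx, by = best_responses(x, y)
    dx, dy = bx - x, by - y                  # step 4: d = b - a
    if abs(dx) + abs(dy) < 1e-12:            # step 7: stop when d vanishes
        break
    t = 1.0                                  # step 5: backtracking line search
    while value_fn(x + t*dx, y + t*dy) > value_fn(x, y) - 1e-4 * t * (dx*dx + dy*dy):
        t *= 0.5
    x, y = x + t*dx, y + t*dy                # step 6
```

For this game V_r with r = 0.5 equals 0.625·(x² + y²), so the iterates contract toward the unique equilibrium (0, 0), where the value function is zero.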
4 Solution for Antagonistic Loss Functions
Algorithm 1 is guaranteed to identify the unique equilibrium if the loss functions are convex, twice differentiable, and of distinct monotonicities. We will now study the case in which the learner's cost function is continuous and convex, and the adversary's loss function is antagonistic to the learner's loss, that is, ℓ_{+1} = -ℓ_{-1}. We abstain from making assumptions about the adversary's regularizers. Because of the regularizers, the game is still not a zero-sum game. In this setting, a unique Nash equilibrium cannot be guaranteed to exist because the adversary's cost function is not necessarily strictly convex. However, an individual game may still possess a unique Nash equilibrium, and we can derive an algorithm that identifies it whenever it exists.

The symmetry of the loss functions simplifies the players' cost functions in Equation 1 to

    θ_{+1}(a_{+1}, a_{-1}) = Σ_{i=1}^n ℓ_{+1}(h_{a_{+1}}(φ_{a_{-1}}(X)_i), y_i) + Ω_{a_{+1}},    (15)

    θ_{-1}(a_{-1}, a_{+1}) = - Σ_{i=1}^n ℓ_{+1}(h_{a_{+1}}(φ_{a_{-1}}(X)_i), y_i) + Ω_{a_{-1}}.    (16)

Even though the loss functions are antagonistic, the cost functions in Equations 15 and 16 are not, unless the player's regularizers are antagonistic as well. Hence, the game is not a zero-sum game. However, according to Theorem 2, if the game has a unique Nash equilibrium, then this equilibrium is a minimax solution of the zero-sum game defined by the joint cost function of Equation 17.
Theorem 2. If the game with cost functions θ_{+1} and θ_{-1} defined in Equations 15 and 16 has a unique Nash equilibrium a*, then this equilibrium also satisfies a* = arg min_{a_{+1}} max_{a_{-1}} θ_0(a_{+1}, a_{-1}) where

    θ_0(a_{+1}, a_{-1}) = Σ_{i=1}^n ℓ_{+1}(h_{a_{+1}}(φ_{a_{-1}}(X)_i), y_i) + Ω_{a_{+1}} - Ω_{a_{-1}}.    (17)
The proof can be found in the appendix. As a consequence of Theorem 2, we can identify the unique Nash equilibrium of the game with cost functions θ_{+1} and θ_{-1}, if it exists, by finding the minimax solution of the game with joint cost function θ_0. The minimax solution is given by

    a*_{+1} = arg min_{a_{+1} ∈ A_{+1}} max_{a_{-1} ∈ A_{-1}} θ_0(a_{+1}, a_{-1}).    (18)

To solve this optimization problem, we define θ̂_0(a_{+1}) = θ_0(a_{+1}, â_{-1}) to be the function of a_{+1} where â_{-1} is set to the value â_{-1} = arg max_{a_{-1}} θ_0(a_{+1}, a_{-1}). Since cost function θ_0 is continuous in its arguments, convex in a_{+1}, and A_{-1} is a compact set, Danskin's Theorem [9] implies that θ̂_0 is convex in a_{+1} with gradient

    ∇θ̂_0(a_{+1}) = ∇_{a_{+1}} θ_0(a_{+1}, â_{-1}).    (19)

The significance of Danskin's Theorem is that when calculating the gradient ∇_{a_{+1}} θ_0(a_{+1}, â_{-1}) at position a_{+1}, argument â_{-1} acts as a constant in the derivative instead of as a function of a_{+1}.

The convexity of θ̂_0(a_{+1}) suggests the gradient descent method implemented in Algorithm 2. It identifies the unique Nash equilibrium of a game with antagonistic loss functions, if it exists, by finding the minimax solution of the game with joint cost function θ_0.
Algorithm 2 Nash Equilibrium of Games with Antagonistic Loss Functions
Require: Joint cost function θ_0 as defined in Equation 17 and action spaces A_v.
1: Select initial a⁰_{+1} ∈ A_{+1} and set k := 0.
2: repeat
3:   Set a^k_{-1} := arg max_{a_{-1} ∈ A_{-1}} θ_0(a^k_{+1}, a_{-1}).
4:   Set d^k := -∇_{a_{+1}} θ_0(a^k_{+1}, a^k_{-1}).
5:   Find maximal step size t^k ∈ {2^{-l} : l ∈ N} with θ_0(a^k_{+1} + t^k d^k, a^k_{-1}) ≤ θ_0(a^k_{+1}, a^k_{-1}) - ε ‖t^k d^k‖².
6:   Set a^{k+1}_{+1} := a^k_{+1} + t^k d^k and k := k + 1.
7:   Project a^k_{+1} to the admissible set A_{+1}, if necessary.
8: until ‖a^k_{+1} - a^{k-1}_{+1}‖ ≤ δ.
A minimax solution arg min_{a_{+1}} max_{a_{-1}} θ_{+1}(a_{+1}, a_{-1}) of the learner's cost function minimizes the learner's costs when playing against the most malicious opponent; for instance, Invar-SVM [4] finds such a solution. By contrast, the minimax solution arg min_{a_{+1}} max_{a_{-1}} θ_0(a_{+1}, a_{-1}) of the joint cost function as defined in Equation 17 constitutes a Nash equilibrium of the game with cost functions θ_{+1} and θ_{-1}, defined in Equations 15 and 16. It minimizes the costs for each of two players that seek their personal advantage. Algorithmically, Invar-SVM and Algorithm 2 are very similar; the main difference lies in the optimization criteria and the resulting properties of the solution.
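A minimal sketch of the inner-max/outer-descent pattern of Algorithm 2, on a made-up joint cost that is convex in the learner's parameter and concave in the adversary's (a fixed step size stands in for the line search of step 5; none of this is the paper's instantiation):

```python
import numpy as np

def theta0(w, u):
    # Hypothetical joint cost: convex in the learner's w, concave in the
    # adversary's u, with saddle point (0, 0).
    return w**2 + 2.0*w*u - u**2

def inner_max(w):
    # Step 3: argmax_u theta0(w, u) over the compact set [-1, 1];
    # the unconstrained maximizer is u = w, then clipped.
    return float(np.clip(w, -1.0, 1.0))

w = 0.8
for _ in range(200):
    u = inner_max(w)
    # Danskin's Theorem (Equation 19): differentiate theta0 w.r.t. w
    # with the inner maximizer u held fixed as a constant.
    grad = 2.0*w + 2.0*u
    w -= 0.1 * grad
```

Each outer step contracts w by the factor 0.6 here, so the iterates converge to the saddle point w = 0, u = 0.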
5 Experiments
We study the problem of email spam filtering where the learner tries to identify spam emails while
the adversary conceals spam messages in order to penetrate the filter. Our goal is to explore the
relative strengths and weaknesses of the proposed Nash models for antagonistic and non-antagonistic
loss functions and existing baseline methods. We compare a regular SVM, logistic regression, SVM
with Invariances (Invar-SVM, [4]), the Nash equilibrium for antagonistic loss functions found by
identifying the minimax solution of the joint cost function (Minimax, Algorithm 2), and the Nash
equilibrium for convex loss functions (Nash, Algorithm 1).
[Figure 1 shows three panels titled "Amount of Transformation vs. Accuracy", plotting AUC against K (for Invar-SVM) and against ρ_{-1} (for Minimax and Nash), each compared with SVM and LogReg.]
Figure 1: Adversary's regularization parameter and AUC on test data (private emails).
We use the logistic loss as the learner's loss function ℓ_{+1}(h(x), y) = log(1 + e^{-y h(x)}) for the Minimax and the Nash model. Consequently, the adversary's loss for the Minimax solution is the negative loss of the learner. In the Nash model, we choose ℓ_{-1}(h(x), y) = log(1 + e^{y h(x)}) which is a convex approximation of the adversary's zero-one loss, that is, correct predictions by the learner incur high costs for the adversary. We use the additive transformation model φ_{a_{-1}}(X)_i = x_i + a_{-1,i} as defined in Section 2. For spam emails x_i, we impose box constraints -½ x_i ≤ a_{-1,i} ≤ ½ x_i on the adversary's parameters; for non-spam we set a_{-1,i} = 0. That is, the spam sender can only transform spam emails. This model is equivalent to the component-wise scaling model [4] with scaling factors between 0.5 and 1.5, and ensures that the adversary's action space is nonempty, compact, and convex. We use l2-norm regularizers for both players, that is, Ω_{a_v} = (ρ_v/2) ‖a_v‖₂² where ρ_v is the regularization parameter of player v. For the Nash model we set r to the mean of the interval defined by Inequality 11, where b = -n/4 is a lower bound for the chosen logistic loss and regularization parameters ρ_v are identical to the smallest eigenvalues λ_v of ∇²Ω_{a_v}.
We use two email corpora: the first contains 65,000 publicly available emails received between 2000 and 2002 from the Enron corpus, the SpamAssassin corpus, Bruce Guenter's spam trap, and several mailing lists. The second contains 40,000 private emails received between 2000 and 2007. All emails are binary word vectors of dimensionality 329,518 and 160,981, respectively. The emails are sorted chronologically and tagged with label, date, and size. The preprocessed corpora are available from the authors. We cannot use a standard TREC corpus because there the delivery dates of the spam messages have been fabricated, and our experiments require the correct chronological order.
Our evaluation protocol is as follows. We use the 6,000 oldest instances as training portion and set the remaining emails aside as test instances. We use the area under the ROC curve as a fair evaluation metric that is adequate for the application; error bars indicate the standard error. We train all methods 20 times for the first experiment and 50 times for the following experiments on a subset of 200 messages drawn at random from the training portion and average the AUC values on the test set. In order to tune both players' regularization parameters, we conduct a grid search maximizing the AUC for 5-fold cross validation on the training portion.

In the first experiment, we explore the impact of the regularization parameter of the transformation model, i.e., ρ_{-1} for our models and K - the maximal number of alterable attributes - for Invar-SVM. Figure 1 shows the averaged AUC value on the private corpus' test portion. The crosses indicate the parameter values found by the grid search with cross validation on the training data.
In the next experiment, we evaluate all methods into the future by processing the test set in chronological order. Figure 2 shows that Invar-SVM, Minimax, and the Nash solution outperform the regular SVM and logistic regression significantly. For the public data set, Minimax performs slightly better than Nash; for the private corpus, there is no significant difference between the solutions of Minimax and Nash. For both data sets, the l2-regularization gives Minimax and Nash an advantage over Invar-SVM. Recall that Minimax refers to the Nash equilibrium for antagonistic loss functions found by solving the minimax problem for the joint cost function (Algorithm 2). In this setting, loss functions - but not cost functions - are antagonistic; hence, Nash cannot gain an advantage over Minimax. Figure 2 (right hand side) shows the execution time of all methods. Regular SVM and logistic regression are faster than the game models; the game models behave comparably.

Finally, we explore a setting with non-antagonistic loss. We weight the loss functions with player- and instance-specific factors c_{v,i}, that is, ℓ^c_v(h_{a_{+1}}(φ_{a_{-1}}(X)_i), y_i) = c_{v,i} ℓ_v(h_{a_{+1}}(φ_{a_{-1}}(X)_i), y_i).
[Figure 2 shows three panels: "Accuracy over Time (65,000 Public Emails)" and "Accuracy over Time (40,000 Private Emails)", plotting AUC over t emails received after training for SVM, LogReg, Invar-SVM, Minimax, and Nash, and "Execution Time", plotting time in seconds over the number of training emails.]
Figure 2: Left, center: AUC evaluated into the future after training on past. Right: execution time.
[Figure 3 shows two panels, "Storage Costs vs. Accuracy" for the 65,000 public and the 40,000 private emails, plotting required storage in MB against non-spam recall for SVM, SVM with costs, LogReg, LogReg with costs, Invar-SVM, Minimax, Nash, and Nash with costs.]
Figure 3: Average storage costs versus non-spam recall.
Our model reflects that an email service provider may delete detected spam emails after a latency period whereas other emails incur storage costs c_{+1,i} proportional to their file size. The spam sender's costs are c_{-1,i} = 1 for all spam instances and c_{-1,i} = 0 for all non-spam instances. The classifier threshold balances a trade-off between non-spam recall (fraction of legitimate emails delivered) and storage costs. For a threshold of -∞, storage costs and non-spam recall are zero for all decision functions. Likewise, a threshold of ∞ gives a recall of 1, but all emails have to be stored. Figure 3 shows this trade-off for all methods. The Nash prediction model behaves most favorably: it outperforms all reference methods for almost all threshold values, often by several standard errors. Invar-SVM and Minimax cannot reflect differing costs for learner and adversary in their optimization criteria and therefore perform worse. Logistic regression and the SVM with costs perform better than their counterparts without costs, but worse than the Nash model.
6 Conclusion
We studied games in which each player's cost function consists of a data-dependent loss and a regularizer. A learner produces a linear model while an adversary chooses a transformation matrix to be added to the data matrix. Our main result regards regularized non-antagonistic loss functions that are convex, twice differentiable, and have distinct monotonicity. In this case, a unique Nash equilibrium exists. It minimizes the costs of each of two players that aim for their highest personal benefit. We derive an algorithm that identifies the equilibrium under these conditions. For the case of antagonistic loss functions with arbitrary regularizers a unique Nash equilibrium may or may not exist. We derive an algorithm that finds the unique Nash equilibrium, if it exists, by solving a minimax problem on a newly derived joint cost function.

We evaluate spam filters derived from the different optimization problems on chronologically ordered future emails. We observe that game models outperform the reference methods. In a setting with player- and instance-specific costs, the Nash model for non-antagonistic loss functions excels because this setting is poorly modeled with antagonistic loss functions.
Acknowledgments
We gratefully acknowledge support from STRATO AG.
References
[1] Gert R. G. Lanckriet, Laurent El Ghaoui, Chiranjib Bhattacharyya, and Michael I. Jordan. A robust minimax approach to classification. Journal of Machine Learning Research, 3:555-582, 2002.
[2] Laurent El Ghaoui, Gert R. G. Lanckriet, and Georges Natsoulis. Robust classification with interval data. Technical Report UCB/CSD-03-1279, EECS Department, University of California, Berkeley, 2003.
[3] Amir Globerson and Sam T. Roweis. Nightmare at test time: robust learning by feature deletion. In Proceedings of the International Conference on Machine Learning, 2006.
[4] Choon Hui Teo, Amir Globerson, Sam T. Roweis, and Alex J. Smola. Convex learning with invariances. In Advances in Neural Information Processing Systems, 2008.
[5] Amir Globerson, Choon Hui Teo, Alex J. Smola, and Sam T. Roweis. Dataset Shift in Machine Learning, chapter An adversarial view of covariate shift and a minimax approach, pages 179-198. MIT Press, 2009.
[6] Tamer Basar and Geert J. Olsder. Dynamic Noncooperative Game Theory. Society for Industrial and Applied Mathematics, 1999.
[7] J. B. Rosen. Existence and uniqueness of equilibrium points for concave n-person games. Econometrica, 33(3):520-534, 1965.
[8] Anna von Heusinger and Christian Kanzow. Relaxation methods for generalized Nash equilibrium problems with inexact line search. Journal of Optimization Theory and Applications, 143(1):159-183, 2009.
[9] John M. Danskin. The theory of max-min, with applications. SIAM Journal on Applied Mathematics, 14(4):641-664, 1966.
3,041 | 3,756 | An Integer Projected Fixed Point Method for Graph
Matching and MAP Inference
Marius Leordeanu
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Martial Hebert
Robotics Institute
Carnegie Mellon University
Pittsburgh, PA 15213
[email protected]
Rahul Sukthankar
Intel Labs Pittsburgh
Pittsburgh, PA 15213
[email protected]
Abstract
Graph matching and MAP inference are essential problems in computer vision
and machine learning. We introduce a novel algorithm that can accommodate
both problems and solve them efficiently. Recent graph matching algorithms are
based on a general quadratic programming formulation, which takes in consideration both unary and second-order terms reflecting the similarities in local appearance as well as in the pairwise geometric relationships between the matched
features. This problem is NP-hard, therefore most algorithms find approximate
solutions by relaxing the original problem. They find the optimal continuous solution of the modified problem, ignoring during optimization the original discrete
constraints. Then the continuous solution is quickly binarized at the end, but very
little attention is put into this final discretization step. In this paper we argue that
the stage in which a discrete solution is found is crucial for good performance.
We propose an efficient algorithm, with climbing and convergence properties, that
optimizes in the discrete domain the quadratic score, and it gives excellent results
either by itself or by starting from the solution returned by any graph matching
algorithm. In practice it outperforms state-of-the-art graph matching algorithms
and it also significantly improves their performance if used in combination. When
applied to MAP inference, the algorithm is a parallel extension of Iterated Conditional Modes (ICM) with climbing and convergence properties that make it a
compelling alternative to the sequential ICM. In our experiments on MAP inference our algorithm proved its effectiveness by significantly outperforming [13],
ICM and Max-Product Belief Propagation.
1 Introduction
Graph matching and MAP inference are essential problems in computer vision and machine learning
that are frequently formulated as integer quadratic programs, where obtaining an exact solution is
computationally intractable. We present a novel algorithm, Integer Projected Fixed Point (IPFP), that
efficiently finds approximate solutions to such problems. In this paper we focus on graph matching, because it is in this area that we have extensively compared our algorithm to state-of-the-art
methods. Feature matching using pairwise constraints is gaining a widespread use in computer vision, especially in shape and object matching and recognition. It is a generalization of the classical
graph matching problem, formulated as an integer quadratic program [1,3,4,5,7,8,16,17] that takes
into consideration both unary and second-order terms reflecting the similarities in local appearance
as well as in the pairwise geometric relationships between the matched features. The problem is
NP-hard, and a lot of effort has been spent in finding good approximate solutions by relaxing the
integer one-to-one constraints, such that the continuous global optimum of the new problem can
be found efficiently. In the end, little computational time is spent in order to binarize the solution,
based on the assumption that the continuous optimum is close to the discrete global optimum of
the original combinatorial problem. In this paper we show experimentally that this is not the case
and that, in fact, carefully searching for a discrete solution is essential for maximizing the quadratic
score. Therefore we propose an iterative algorithm that takes as input any continuous or discrete
solution, possibly given by some other graph matching method, and quickly improves it by aiming
to maximize the original problem with its integer constraints. Each iteration consists of two stages,
being loosely related to the Frank-Wolfe method (FW) [14, 15], a classical optimization algorithm
from operation research. The first stage maximizes in the discrete domain a linear approximation of
the quadratic function around the current solution, which gives a direction along which the second
stage maximizes the original quadratic score in the continuous domain. Even though this second
stage might find a non-discrete solution, the optimization direction given by the first stage is always
towards an integer solution, which is often the same one found in the second stage. The algorithm
always improves the quadratic score in the continuous domain finally converging to a maximum. If
the quadratic function is convex the solution at every iteration is always discrete and the algorithm
converges in a finite number of steps. In the case of non-convex quadratic functions, the method
tends to pass through/near discrete solutions and the best discrete solution encountered along the
path is returned, which, in practice is either identical or very close to the point of convergence. We
have performed extensive experiments with our algorithm with excellent results, the most representative of which being shown in this paper. Our method clearly outperforms four state-of-the-art
algorithms, and, when used in combination, the final solution is dramatically improved. Some recent MAP inference algorithms [11,12,13] for Markov Random Fields formulate the problem as
an integer quadratic program, for which our algorithm is also well suited, as we later explain and
demonstrate in more detail.
Matching Using Pairwise Constraints The graph matching problem, in its most recent and general form, consists of finding the indicator vector x? that maximizes a certain quadratic score function:
Problem 1:
x* = argmax(x^T M x)  s.t.  Ax = 1, x ∈ {0, 1}^n    (1)
given the one-to-one constraints Ax = 1, x ∈ {0, 1}^n, which require that x is an indicator vector
such that x_{ia} = 1 if feature i from one image is matched to feature a from the other image and
zero otherwise. Usually one-to-one constraints are imposed on x such that one feature from one
image can be matched to at most one other feature from the other image. In MAP inference problems, only many-to-one constraints are usually required, which can be accommodated by the same
formulation, by appropriately setting the constraints matrix A. In graph matching, M is usually
a symmetric matrix with positive elements containing the compatibility score functions, such that
M_{ia;jb} measures how similar the pair of features (i, j) from one image is in both local appearance
and pair-wise geometry with the pair of their candidate matches (a, b) from the other image. The difficulty of Problem 1 depends on the structure of this matrix M, but in the general case it is NP-hard
and no efficient algorithm exists that can guarantee optimality bounds. Previous algorithms modify
Problem 1, usually by relaxing the constraints on the solution, in order to be able to find efficiently
optimal solutions to the new problem. For example, spectral matching [5] (SM) drops the constraints
entirely and assumes that the leading eigenvector of M is close to the optimal discrete solution. It
then finds the discrete solution x by maximizing the dot-product with the leading eigenvector of
M. The assumption is that M is a slightly perturbed version of an ideal matrix, with rank-1, for
which maximizing this dot product gives the global optimum. Later, spectral graph matching with
affine constraints was developed [3] (SMAC), which finds the optimal solution of a modified score
function, with a tighter relaxation that imposes the affine constraints Ax = 1 during optimization.
A different, probabilistic interpretation, not based on the quadratic formulation, is given in [2] (PM),
also based on the assumption that M is close to a rank-1 matrix, which is the outer product of the
vector of probabilities for each candidate assignment. An important observation is that none of the
previous methods are concerned with the original integer constraints during optimization, and the
final post processing step, when the continuous solution is binarized, is usually just a very simple
procedure. They assume that the continuous solution is close to the discrete one. The algorithm
we propose here optimizes the original quadratic score in the continuous domain obtained by only
dropping the binary constraints, but it always targets discrete solutions through which it passes most
of the time. Note that even in this continuous domain the quadratic optimization problem is NP-hard, so we cannot hope to get any global optimality guarantees. But we do not lose much, since
guaranteed global optimality for a relaxed problem does not require closeness to the global optimum
of the original problem, a fact that is evident in most of our experiments. Our experimental results
from Section 4 strongly suggest an important point: algorithms with global optimality properties in
a loosely relaxed domain can often give relatively poor results in the original domain, and a welldesigned procedure with local optimality properties in the original domain, such as IPFP, can have
a greater impact on the final solution than the global optimality in the relaxed domain.
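To make the role of the constraints matrix A concrete, the following sketch builds A so that Ax = 1 encodes either one-to-one (graph matching) or many-to-one (MAP inference) constraints. The helper name and the flattening convention x[i * n_right + a] are our own illustrative choices, not from the paper, and the one-to-one case assumes equal numbers of features on both sides (with outliers, inequality constraints would be needed):

```python
import numpy as np

def constraint_matrix(n_left, n_right, one_to_one=True):
    """Build A so that A x = 1 encodes the assignment constraints,
    with x indexed as x[i * n_right + a] for candidate pair (i, a).

    Many-to-one: one row per left node (each gets exactly one match).
    One-to-one additionally adds one row per right node.
    """
    rows = []
    for i in range(n_left):                 # sum_a x_{ia} = 1
        r = np.zeros(n_left * n_right)
        r[i * n_right:(i + 1) * n_right] = 1
        rows.append(r)
    if one_to_one:
        for a in range(n_right):            # sum_i x_{ia} = 1
            r = np.zeros(n_left * n_right)
            r[a::n_right] = 1
            rows.append(r)
    return np.vstack(rows)
```

For example, with two features per image, the indicator vector for the matching {0 -> 0, 1 -> 1} satisfies Ax = 1, while a vector matching both features to the same target does not.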
Our algorithm aims to optimize the following continuous problem, in which we only drop the integer
constraints from Problem 1:
Problem 2:
x* = argmax(x^T M x)  s.t.  Ax = 1, x ≥ 0    (2)
Note that Problem 2 is also NP-hard, and it becomes a concave minimization problem, equivalent to
Problem 1, when M is positive definite.
2 Algorithm
We introduce our novel algorithm, Integer Projected Fixed Point (IPFP), that takes as input any initial
solution, continuous or discrete, and quickly finds a solution obeying the initial discrete constraints
of Problem 1 with a better score, most often significantly better than the initial one (P_d from Step 2 is a projection on the discrete domain, discussed shortly afterwards):
1. Initialize x* = x_0, S* = x_0^T M x_0, k = 0, where x_i ≥ 0 and x ≠ 0
2. Let b_{k+1} = P_d(M x_k), C = x_k^T M (b_{k+1} − x_k), D = (b_{k+1} − x_k)^T M (b_{k+1} − x_k)
3. If D ≥ 0 set x_{k+1} = b_{k+1}. Else let r = min{−C/D, 1} and set x_{k+1} = x_k + r(b_{k+1} − x_k)
4. If b_{k+1}^T M b_{k+1} ≥ S* then set S* = b_{k+1}^T M b_{k+1} and x* = b_{k+1}
5. If x_{k+1} = x_k stop and return the solution x*
6. Set k = k + 1 and go back to Step 2.
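The steps above can be sketched in a few lines of numpy. This is a minimal illustration for one-to-one constraints under our own conventions (function names and the flattening x[i * n2 + a] are assumptions, not from the paper); the discrete projection is done by brute force over permutations for clarity, where the Hungarian method (e.g. scipy.optimize.linear_sum_assignment) would be used in practice:

```python
import itertools
import numpy as np

def project_one_to_one(grad, n1, n2):
    """P_d for one-to-one constraints: the binary b maximizing b^T grad.
    Brute force over permutations (assumes n1 <= n2); the Hungarian
    method solves the same problem efficiently for larger instances."""
    g = grad.reshape(n1, n2)
    best_b, best_val = None, -np.inf
    for perm in itertools.permutations(range(n2), n1):
        val = g[np.arange(n1), perm].sum()
        if val > best_val:
            b = np.zeros((n1, n2))
            b[np.arange(n1), perm] = 1.0
            best_b, best_val = b.ravel(), val
    return best_b

def ipfp(M, n1, n2, x0=None, max_iter=50):
    """Minimal sketch of IPFP (Steps 1-6 above) for one-to-one matching;
    x is the flattened assignment vector with x[i * n2 + a] for pair (i, a)."""
    n = n1 * n2
    x = np.full(n, 1.0 / n) if x0 is None else np.asarray(x0, float)
    best_x, best_score = None, -np.inf
    for _ in range(max_iter):
        b = project_one_to_one(M @ x, n1, n2)        # Step 2
        d = b - x
        C, D = x @ M @ d, d @ M @ d
        r = 1.0 if D >= 0 else min(-C / D, 1.0)      # Step 3: line search
        x_next = x + r * d
        score_b = b @ M @ b
        if score_b >= best_score:                    # Step 4: best discrete
            best_score, best_x = score_b, b
        if np.allclose(x_next, x):                   # Step 5: convergence
            break
        x = x_next                                   # Step 6
    return best_x, best_score
```

On a toy 2x2 problem whose compatibility matrix rewards the matching {0 -> 0, 1 -> 1}, the sketch recovers that matching from the uniform starting point in a couple of iterations.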
This algorithm is loosely related to the power method for eigenvectors, also used by spectral matching [9]: at Step 2 it replaces the fixed point iteration of the power method v_{k+1} = P(M v_k), where P is the projection on the unit sphere, with a similar iteration b_{k+1} = P_d(M x_k), in which P_d is the projection on the one-to-one (for graph matching) or many-to-one (for MAP inference) discrete constraints. P_d boils down to finding the discrete vector b_{k+1} = argmax_b b^T M x_k, which can be easily found in linear time for many-to-one constraints. For one-to-one constraints the efficient Hungarian method can be used. This is true since all binary vectors in the given discrete domain have the same norm. Note that (see Proposition 1), in both cases (one-to-one or many-to-one constraints), the discrete b_{k+1} is also the one maximizing the dot-product with M x_k in the continuous domain Ab = 1, b ≥ 0. IPFP is also related to Iterated Conditional Modes (ICM) [10] used for inference in graphical models. In the domain of many-to-one constraints IPFP becomes an extension of ICM for which the updates are performed in parallel without losing the climbing property and the convergence to a discrete solution. Note that the fully parallel version of ICM is IPFP without Step 3: x_{k+1} = P_d(M x_k). The theoretical results that we will present shortly are valid for both one-to-one and many-to-one constraints, with a few differences that we will point out when deemed necessary.
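For many-to-one (MAP) constraints, P_d is just an independent argmax per site, which is what makes each IPFP iteration as cheap as a parallel ICM sweep. A minimal sketch (the helper name and the site-major flattening are our own assumptions):

```python
import numpy as np

def project_many_to_one(grad, n_sites, n_labels):
    """P_d for many-to-one (MAP inference) constraints: independently
    pick, for each site, the label maximizing the linear score.
    Runs in linear time in the length of grad."""
    g = np.asarray(grad, float).reshape(n_sites, n_labels)
    b = np.zeros_like(g)
    b[np.arange(n_sites), g.argmax(axis=1)] = 1.0
    return b.ravel()
```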
The algorithm is basically a sequence of linear assignment (or independent labeling) problems, in which the next solution is found by using the previous one. In practice the algorithm converges in about 5-10 steps, which makes it very efficient, with basically the same complexity as that of Step 2. Step 3 ensures that the quadratic score increases with each iteration. Step 4 guarantees that the binary solution returned is never worse than the initial solution. In practice, the algorithm significantly improves the initial binary solution, and the final continuous solution is most often discrete, and always close to the best discrete one found. In fact, in the case of MAP inference, it is guaranteed that the point of convergence is discrete, as a fixed point of P_d.
Intuition The intuition behind this algorithm is the following: at every iteration the quadratic score x^T M x is first approximated by the first-order Taylor expansion around the current solution x_k: x^T M x ≈ x_k^T M x_k + 2 x_k^T M (x − x_k). This approximation is maximized within the discrete domain of Problem 1, at Step 2, where b_{k+1} is found. From Proposition 1 (see next) we know that the same discrete b_{k+1} also maximizes the linear approximation in the continuous domain of Problem 2. The role of b_{k+1} is to provide a direction of largest possible increase (or ascent) in the first-order approximation, within both the continuous domain and the discrete domain simultaneously. Along this direction the original quadratic score can be further maximized in the continuous domain of Problem 2 (as long as b_{k+1} ≠ x_k). At Step 3 we find the optimal point along this direction, also inside the continuous domain of Problem 2. The hope, also confirmed in practice, is that the algorithm will tend to converge towards discrete solutions that are, or are close to, maxima of Problem 2.
3 Theoretical Analysis
Proposition 1: For any vector x ∈ R^n there exists a global optimum y* of x^T M y in the domain of Problem 2 that has binary elements (thus it is also in the domain of Problem 1).
Proof: Maximizing x^T M y with respect to y, subject to Ay = 1 and y ≥ 0, is a linear program for which an integer optimal solution exists because the constraints matrix A is totally unimodular [9]. This is true for both one-to-one and many-to-one constraints.
It follows that the maximization from Step 2, b_{k+1} = argmax_b b^T M x_k in the original discrete domain, also maximizes the same dot-product in the continuous domain of Problem 2, of relaxed constraints Ax = 1 and x ≥ 0. This ensures that the algorithm will always move towards some discrete solution that also maximizes the linear approximation of the quadratic function in the domain of Problem 2. Most often in practice, that discrete solution also maximizes the quadratic score, along the same direction and within the continuous domain. Therefore x_k is likely to be discrete at every step.
Property 1: The quadratic score x_k^T M x_k increases at every step k and the sequence of x_k converges.
Proof:
For a given step k, if b_{k+1} = x_k we have convergence. If b_{k+1} ≠ x_k, let x be a point on the line between x_k and b_{k+1}, x = x_k + t(b_{k+1} − x_k). For any 0 ≤ t ≤ 1, x is in the feasible domain of Problem 2. Let S_k = x_k^T M x_k. Let us define the quadratic function f_q(t) = x^T M x = S_k + 2tC + t^2 D, which is the original function in the domain of Problem 2 on the line between x_k and b_{k+1}. Since b_{k+1} maximizes the dot product with x_k^T M in the discrete (and the continuous) domain, it follows that C ≥ 0. We have two cases: D ≥ 0, when x_{k+1} = b_{k+1} (Step 3) and S_{k+1} = x_{k+1}^T M x_{k+1} = f_q(1) ≥ S_k = x_k^T M x_k; and D < 0, when the quadratic function f_q(t) is concave with its maximum in the domain of Problem 2 attained at point x_{k+1} = x_k + r(b_{k+1} − x_k). Again, it also follows that S_{k+1} = x_{k+1}^T M x_{k+1} = f_q(r) ≥ S_k = x_k^T M x_k. Therefore, the algorithm is guaranteed to increase the score at every step. Since the score function is bounded above on the feasible domain, it has to converge, which happens when C = 0.
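The optimal step r used in Step 3 follows from a one-line calculation on the quantities defined above:

```latex
f_q(t) = S_k + 2tC + t^2 D, \qquad
f_q'(t) = 2C + 2tD = 0 \;\Longrightarrow\; t^\ast = -\frac{C}{D}.
```

Since C ≥ 0 and D < 0 in this branch, t* ≥ 0, and clipping t* to the feasible segment [0, 1] gives exactly r = min{−C/D, 1}.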
By always improving the quadratic score in the continuous domain, at each step the next solution
moves towards discrete solutions that are better suited for solving the original Problem 1.
Property 2: The algorithm converges to a maximum of Problem 2.
Proof:
Let x* be a point of convergence. At that point the gradient 2Mx* is non-zero since both M and x* have positive elements and (x*)^T M x* > 0 (it is higher than the score at the first iteration, also greater than zero). Since x* is a point of convergence it follows that C = 0, that is, for any other x in the continuous domain of Problem 2, (x*)^T M x* ≥ (x*)^T M x. This implies that for any direction vector v such that x* + tv is in the domain of Problem 2 for a small enough t > 0, the dot-product between v and the gradient of the quadratic score is less than or equal to zero, (x*)^T M v ≤ 0, which further implies that x* is a maximum (local or global) of the quadratic score within the continuous domain of equality constraints Ax* = 1, x* ≥ 0.
For many-to-one constraints (MAP inference) it basically follows that the algorithm will converge
to a discrete solution, since the strict (local and global) maxima of Problem 2 are in the discrete
domain [12]. If the maximum is not strict, IPFP still converges to a discrete solution (which is also a
local maximum): the one found at Step 2. This is another similarity with ICM, which also converges
to a maximum. Therefore, combining ours with ICM cannot improve the performance of ICM, and
vice-versa.
Property 3: If M is positive semidefinite with positive elements, then the algorithm converges in a
finite number of iterations to a discrete solution, which is a maximum of Problem 2.
Proof: Since M is positive semidefinite we always have D ≥ 0, thus x_k is always discrete for any k. Since the number of discrete solutions is finite, the algorithm must converge in a finite number of steps to a local (or global) maximum, which must be discrete. This result is obviously true for both one-to-one and many-to-one constraints.
When M is positive semidefinite, Problem 2 is a concave minimization problem for which it is well
known that the global optimum has integer elements, so it is also a global optimum of the original
Problem 1. In this case our algorithm is only guaranteed to find a local optimum in a finite number of
iterations. Global optimality of concave minimization problems is a notoriously difficult task since
the problem can have an exponential number of local optima. In fact, if a large enough constant
is added to the diagonal elements of M, every point in the original domain of possible solutions
becomes a local optimum for one-to-one problems. Therefore adding a large constant to make the
problem concave is not a good idea, even if the global optimum does not change. In practice M is
rarely positive semidefinite, but it can be close to being one if the first eigenvalue is much larger than
the rest, which is the assumption made by the spectral matching algorithm, for example.
Property 4: If M has non-negative elements and is rank-1, then the algorithm will converge and
return the global optimum of the original problem after the first iteration.
Proof:
Let (v, λ) be the leading eigenpair of M. Then, since M has non-negative elements, both v and λ are positive. Since M is also rank one, we have M x_0 = λ (v^T x_0) v. Since both x_0 and v have positive elements it immediately follows that x_1 after the first iteration is the indicator solution vector that maximizes the dot-product with the leading eigenvector (v^T x_0 = 0 is a very unlikely case that never happens in practice). It is clear that this vector is the global optimum, since in the rank-1 case we have x^T M x = λ (v^T x)^2 for any x.
The assumption that M is close to being rank-1 is used by two recent algorithms, [2] and [5].
Spectral matching [5] also returns the optimal solution in this case and it assumes that the rank-1
assumption is the ideal matrix to which a small amount of noise is added. Probabilistic graph matching [2] makes the rank-1 approximation by assuming that each second-order element of Mia;jb is
the product of the probability of feature i being matched to a and feature j being matched to b, independently. However, instead of maximizing the quadratic score function, they use this probabilistic
interpretation of the pair-wise terms and find the solution by looking for the closest rank-1 matrix
to M in terms of the KL-divergence. If the assumptions in [2] were perfectly met, then spectral
matching, probabilistic graph matching and our algorithm would all return the same solution. For a
comparison of all these algorithms on real world experiments please see the experiments section.
4 Experiments
We first present some representative experiments on graph matching problems. We tested IPFP by
itself, as well as in conjunction with other algorithms as a post-processing step. When used by itself
IPFP is always initialized with a flat, uniform continuous solution. We followed the experiments
of [6] in the case of outliers: we used the same cars and motorbikes image pairs, extracted from
the Pascal 2007 database, the same features (oriented points extracted from contours) and the same second-order potentials M_{ia;jb} = exp(−w^T g_{ia;jb}); g_{ia;jb} is a five-dimensional vector of deformations in pairwise distances and angles when matching features (i, j) from one image to features (a, b) from the other, and w is the set of parameters that control the weighting of the elements of g_{ia;jb}. We
followed the setup from [6] exactly, in order to have a fair comparison of our algorithm against the
results they obtained. Due to space limitations, we refer the interested reader to [6] for the details.
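For intuition, a toy version of such a pairwise potential might look as follows. The two deformation features below (change in pairwise distance and in angle between the matched point pairs) are illustrative stand-ins of our own, not the actual five-dimensional g_{ia;jb} of [6]:

```python
import numpy as np

def potential(pi, pj, pa, pb, w):
    """Toy second-order potential of the form exp(-w^T g): pi, pj are
    2D points in one image, pa, pb their candidate matches in the other,
    and w weights the deformation features (hypothetical example)."""
    d1 = np.asarray(pj, float) - np.asarray(pi, float)
    d2 = np.asarray(pb, float) - np.asarray(pa, float)
    dist_def = abs(np.linalg.norm(d1) - np.linalg.norm(d2))   # length change
    ang = abs(np.arctan2(d1[1], d1[0]) - np.arctan2(d2[1], d2[0]))
    angle_def = min(ang, 2 * np.pi - ang)                     # angle change
    g = np.array([dist_def, angle_def])
    return float(np.exp(-w @ g))
```

A geometrically identical pair of matches gets the maximal potential 1, and the potential decays as the pairwise geometry is deformed.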
These experiments are difficult due to the large number of outliers (on average 5 times more outliers
than inliers), and, in the case of cars and motorbikes, also due to the large intra-category variations
Figure 1: Results on motorbikes and cars averaged over 30 experiments: at each iteration the average score x_k^T M x_k normalized by the ground truth score is displayed. The comparisons are not affected by this normalization, since all scores are normalized by the same value. Notice how quickly IPFP converges (fewer than 10 iterations).
Table 1: Average matching rates for the experiments with outliers on cars and motorbikes from Pascal 07. Note that our algorithm by itself outperforms on average all the others by themselves. When
the solution of other algorithms is the starting point of IPFP the performance is greatly improved.
Dataset                            IPFP     SM       SMAC     GA       PM
Cars and Motorbikes: alone         64.4%    58.2%    58.6%    46.7%    36.6%
Cars and Motorbikes: + IPFP        64.4%    67.0%    66.2%    66.3%    67.2%
Cars and Motorbikes: Improvement   NA       +8.8%    +7.6%    +19.6%   +30.6%
in shape present in the Pascal 2007 database. By outliers we mean the features that have no ground
truth correspondences in the other image, and by inliers those that have such correspondences. As in
[6] we allow outliers only in one of the images in which they are present in large number, the ratio
of outliers to inliers varying from 1.5 to over 10. The ground truth correspondences were manually
selected by the authors of [6].
The difficulty of the matching problems is reflected by the relatively low matching scores of all
algorithms (Table 1). In order to ensure an optimal performance of all algorithms, we used the
supervised version of the graph matching learning method from [6]. Learning w was effective,
improving the performance by more than 15% on average, for all algorithms. The algorithms we
chose for comparison and also for combining with ours are among the current state-of-the-art in
the literature: spectral matching with affine constraints (SMAC) [3], spectral matching (SM) [5],
probabilistic graph matching (PM) [2], and graduated assignment (GA) [4]. In Tables 1 and 2 we
show that in our experiments IPFP significantly outperforms other state-of-the-art algorithms.
In our experiments we focused on two aspects. Firstly, we tested the matching rate of our algorithm
against the others, and observed that it consistently outperforms them, both in the matching rate and
in the final quadratic score achieved by the resulting discrete solution (see Tables 1, 2). Secondly,
we combined our algorithm, as a post-processing step, with the others and obtained a significant improvement over the output matching rate and quadratic score of the other algorithms by themselves
(see Figures 1, 2). In Figure 2 we show the quadratic score of our algorithm, per iteration, for several individual experiments, when it takes as initial solution the output of several other algorithms.
The score at the first iteration is the score of the final discrete solution returned by those algorithms
and the improvement in just a few iterations is substantial, sometimes more than doubling the final
quadratic score reached by the other algorithms. In Figure 1 we show the average scores of our
algorithm, over 30 different experiments on cars and motorbikes, per iteration, normalized by the
score of the solutions given by the human ground truth labeling. We notice that regardless of the
starting condition, the final scores are very similar, slightly above the value of 1 (Table 2), which
means that the solutions reached are, on average, at least as good, in terms of the matching score
function, as the manually picked solutions. None of the algorithms by themselves, except only for
IPFP, reach this level of quality. We also notice that a quadratic score of 1 does not correspond
to a perfect matching rate, which indicates the fact that besides the ground truth solution there are
Table 2: Quadratic scores on the Cars and Motorbikes image sets (the higher, the better). S* is the score of the manually picked ground truth. Note that the ground truth score S* does not affect the comparison since it is the same normalization value for all algorithms. The "Convergence to a binary solution" row shows the average rate at which our algorithm converges to a discrete solution.
Experiments on Cars and Motorbikes   IPFP     SM       SMAC     GA       PM
Alone, avg Smax/S*                   1.081    0.781    0.927    0.623    0.4785
+ IPFP, avg Smax/S*                  1.081    1.070    1.082    1.086    1.080
Convergence to a binary solution     86.7%    93.3%    86.7%    93.3%    86.7%
other solutions with high score. This is expected, given that the large number of outliers can easily
introduce wrong solutions of high score. However, increasing the quadratic score, does increase the
matching rate as can be seen by comparing the results between the Tables 2 and 1.
Figure 2: Experiments on cars and motorbikes: at each iteration the score x_k^T M x_k normalized by the ground truth score is displayed for 30 individual matching experiments for our algorithm starting from different solutions (uniform, or given by some other algorithm).
Experiments on MAP inference problems We believe that IPFP can have a greater impact in
graph matching problems than in MAP inference ones, due to the lack of efficient, high-quality
discretization procedures in the graph matching literature. In the domain of MAP inference for
MRFs, it is important to note that IPFP is strongly related to the parallel version of Iterated Conditional Modes, but, unlike parallel ICM, it has climbing, strong convergence and local optimality
properties. To see the applicability of our method to MAP inference, we tested it against sequential ICM, Max-Product BP with damping oscillations (Table 3), the algorithm L2QP of [12], and
the algorithm of [13], which is based on a convex approximation. In the case of [12] and [13],
which give continuous optimal solutions to a relaxed problem, a post-processing step is required
for discretization. Note that the authors of [13] use ICM to obtain a binary solution. However, we
wanted to emphasize the quality of the methods by themselves, without a powerful discretization
step, and used ICM for comparisons separately. Thus, for discretization we used one iteration of
ICM for both L2QP [12] and CQP [13]. Both ICM and IPFP used a uniform flat solution as the
initial condition, as in the case of graph matching. We used the same experimental setup as in [11] and
[12], on graphs with different degrees of edge density (by generating random edges with a given
probability, varying from 0.1 to 1). The values of the potentials were randomly generated as in [11]
and [12], favoring the correct labels vs. the wrong ones. In Figure 3 we show the average scores
normalized by the score of IPFP over 30 different experiments, for different probabilities of edge
generation pEdge on graphs with 50 nodes and different numbers of possible labels per node. The
most important observation is that both ICM and IPFP outperform L2QP and CQP by a wide margin
on all problems, without a single exception. In our experiments, IPFP outperformed ICM on every
single problem, while both IPFP and ICM outperformed L2QP and CQP by a wide margin, which is
reflected in the averages shown in Figure 3.

Table 3: Average objective score over 30 different experiments on 4-connected and 8-connected
planar graphs with 50 sites and 10 possible labels per site

Graph type            IPFP     ICM      BP
4-connected planar    79.5     78.2     54.2
8-connected planar    126.0    123.2    75.4
Figure 3: Average quadratic scores normalized by the score of IPFP, over 30 different experiments,
for each probability of edge generation pEdge in {0.1, 0.2, ..., 1} and different numbers of labels, for
graphs with 50 nodes. Note that IPFP consistently outperforms L2QP [12] and CQP [13] (by a wide
margin) and ICM. Note that L2QP and CQP perform similarly for a small number of labels.
5 Conclusion
This paper presents a novel and computationally efficient algorithm, Integer Projected Fixed Point
(IPFP), that outperforms state-of-the-art methods for solving quadratic assignment problems in
graph matching, and well-established methods in MAP inference such as BP and ICM. We analyze
the theoretical properties of IPFP and show that it has strong convergence and climbing guarantees.
Also, IPFP can be employed in conjunction with existing techniques, such as SMAC or SM for graph
matching or BP for inference to achieve solutions that are dramatically better than the ones produced
independently by those methods alone. Furthermore, IPFP is very straightforward to implement and
converges in only 5-10 iterations in practice. Thus, IPFP is very well suited for addressing a broad
range of real-world problems in computer vision and machine learning.
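The properties discussed above (discrete responses, climbing, and a Frank-Wolfe-style line search [14]) suggest the following minimal sketch of an IPFP-style iteration. This is a schematic reading under stated assumptions, not the authors' implementation: it assumes a symmetric score matrix M, substitutes a simple greedy one-to-one assignment for the exact discretization step (the Hungarian algorithm would be exact), and uses toy problem sizes.

```python
import numpy as np

def greedy_discrete(v, n):
    """Greedy one-to-one assignment maximizing the sum of selected entries.

    A stand-in for the exact discretization step; v is a flattened
    n x n score vector, and the result is a binary indicator vector.
    """
    v = v.reshape(n, n).astype(float)
    b = np.zeros((n, n))
    for _ in range(n):
        i, j = np.unravel_index(np.argmax(v), v.shape)
        b[i, j] = 1.0
        v[i, :] = -np.inf            # forbid reusing this row
        v[:, j] = -np.inf            # forbid reusing this column
    return b.reshape(-1)

def ipfp(M, n, iters=20):
    """IPFP-style climbing iteration for max x' M x (M assumed symmetric)."""
    x = np.full(n * n, 1.0 / n)      # uniform flat initialization
    for _ in range(iters):
        b = greedy_discrete(M @ x, n)    # best discrete response to M x
        d = b - x
        num = d @ M @ x              # slope of the quadratic along d
        den = d @ M @ d              # curvature along d
        if num <= 0:
            break                    # no ascent direction: fixed point
        t = 1.0 if den >= 0 else min(1.0, -num / den)
        x = x + t * d                # climbing step along the segment
    return greedy_discrete(M @ x, n)     # final binary solution
```

Each step first finds the best discrete response to the current continuous solution and then moves along the segment toward it exactly as far as the quadratic score keeps increasing, which is what gives the method its climbing and fixed-point behavior.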
6 Acknowledgments
This work was supported in part by NSF Grant IIS0713406 and by the Intel Graduate Fellowship
program.
References
[1] A. Berg, T. Berg and J. Malik. Shape matching and object recognition using low distortion correspondences. Computer Vision and Pattern Recognition, 2005.
[2] R. Zass and A. Shashua. Probabilistic graph and hypergraph matching. Computer Vision and Pattern Recognition, 2008.
[3] T. Cour, P. Srinivasan and J. Shi. Balanced graph matching. Neural Information Processing Systems, 2006.
[4] S. Gold and A. Rangarajan. A graduated assignment algorithm for graph matching. Pattern Analysis and Machine Intelligence, 1996.
[5] M. Leordeanu and M. Hebert. A spectral technique for correspondence problems using pairwise constraints. International Conference on Computer Vision, 2005.
[6] M. Leordeanu and M. Hebert. Unsupervised learning for graph matching. Computer Vision and Pattern Recognition, 2009.
[7] C. Schellewald and C. Schnorr. Probabilistic subgraph matching based on convex relaxation. EMMCVPR, 2005.
[8] P. H. S. Torr. Solving Markov random fields using semidefinite programming. Artificial Intelligence and Statistics, 2003.
[9] R. Burkard, M. Dell'Amico and S. Martello. Assignment Problems. SIAM Publications, 2009.
[10] J. Besag. On the statistical analysis of dirty pictures. JRSS, 1986.
[11] T. Cour and J. Shi. Solving Markov random fields with spectral relaxation. International Conference on Artificial Intelligence and Statistics, 2007.
[12] M. Leordeanu and M. Hebert. Efficient MAP approximation for dense energy functions. International Conference on Machine Learning, 2006.
[13] P. Ravikumar and J. Lafferty. Quadratic programming relaxations for metric labeling and Markov random field MAP estimation. International Conference on Machine Learning, 2006.
[14] M. Frank and P. Wolfe. An algorithm for quadratic programming. Naval Research Logistics Quarterly, 1956.
[15] N. W. Brixius and K. M. Anstreicher. Solving quadratic assignment problems using convex quadratic programming relaxations. Optimization Methods and Software, 2001.
[16] J. Maciel and J. P. Costeira. A global solution to sparse correspondence problems. Pattern Analysis and Machine Intelligence, 2003.
[17] L. Torresani, V. Kolmogorov and C. Rother. Feature correspondence via graph matching: models and global optimization. European Conference on Computer Vision, 2008.
Estimating image bases for visual image
reconstruction from human brain activity
Yusuke Fujiwara[1]  Yoichi Miyawaki[2,1]  Yukiyasu Kamitani[1]
[1] ATR Computational Neuroscience Laboratories
[2] National Institute of Information and Communications Technology
2-2-2 Hikaridai, Seika-cho, Kyoto, Japan
[email protected]  yoichi [email protected]  [email protected]
Abstract
Image representation based on image bases provides a framework for understanding neural representation of visual perception. A recent fMRI study has shown
that arbitrary contrast-defined visual images can be reconstructed from fMRI activity patterns using a combination of multi-scale local image bases. In the reconstruction model, the mapping from an fMRI activity pattern to the contrasts of the
image bases was learned from measured fMRI responses to visual images. But the
shapes of the images bases were fixed, and thus may not be optimal for reconstruction. Here, we propose a method to build a reconstruction model in which image
bases are automatically extracted from the measured data. We constructed a probabilistic model that relates the fMRI activity space to the visual image space via a
set of latent variables. The mapping from the latent variables to the visual image
space can be regarded as a set of image bases. We found that spatially localized,
multi-scale image bases were estimated near the fovea, and that the model using
the estimated image bases was able to accurately reconstruct novel visual images.
The proposed method provides a means to discover a novel functional mapping
between stimuli and brain activity patterns.
1 Introduction
The image basis is a key concept for understanding neural representation of visual images. Using
image bases, we can consider natural scenes as a combination of simple elements corresponding
to neural units. Previous works have shown that image bases similar to receptive fields of simple
cells are learned from natural scenes by the sparse coding algorithm [4, 9]. A recent fMRI study
has shown that visual images can be reconstructed using a linear combination of multi-scale image
bases (1x1, 1x2, 2x1, and 2x2 pixels covering an entire image), whose contrasts were predicted
from the fMRI activity pattern [6]. The multi-scale bases produced more accurate reconstruction
than the pixel-by-pixel prediction, and each scale contributed to reconstruction in a way consistent
with known visual cortical representation. However, the predefined shapes of image bases may not
be optimal for image reconstruction.
Here, we developed a method to automatically extract image bases from measured fMRI responses
to visual stimuli, and used them for image reconstruction. We employed the framework of canonical
correlation analysis (CCA), in which two multi-dimensional observations are related via a common
coordinate system. CCA finds multiple correspondences between a weighted sum of voxels and
a weighted sum of pixels. These correspondences provide an efficient mapping between the two
observations. The pixel weights for each correspondence can be thought to define an image basis.
As the early visual cortex is known to be organized in a retinotopic manner, one can assume that
a small set of pixels corresponds to a small set of voxels. To facilitate the mapping between small
Figure 1: Model for estimating image bases. (a) Illustration of the model framework. The visual
image I (pixels) and an fMRI activity pattern r (voxels) are linked by latent variables z. The links
from each latent variable to image pixels define an image basis W_I, and the links from each latent
variable to fMRI voxels are called a weight vector W_r. (b) Graphical representation of the model.
Circles indicate model parameters to be estimated and squares indicate observations. The matrices
W_I and W_r, the common latent variables z, and the inverse variances \alpha_I and \alpha_r are simultaneously
estimated using the variational Bayesian method. Using the estimated parameters, the predictive
distribution for a visual image given a new brain activity pattern is constructed (dashed line).
sets of pixels and voxels, we extended CCA to Bayesian CCA [10] with sparseness priors. Bayesian
CCA treats the multiple correspondences as latent variables with two transformation matrices to two
sets of observations. The transformation matrix to the visual image can be regarded as a set of image
bases. The matrices are assumed to be random variables with hyper-parameters. We introduced a
sparseness prior into each element of the matrices, such that only small subsets of voxels and pixels
are related with non-zero matrix elements.
The Bayesian CCA model was applied to the data set of Miyawaki et al. [6]. We show that spatially localized image bases were extracted, especially around the foveal region, whose shapes were
similar to those used in the previous work. We also demonstrate that the model using the estimated
image bases produced accurate visual image reconstruction.
2 Method
We constructed a model in which a visual image is related with an fMRI activity pattern via latent
variables (Figure 1). Each latent variable has links to a set of pixels, which can be regarded as
an image basis because links from a single latent variable construct an element of a visual image.
The latent variable also has multiple links to a set of fMRI voxels, which we call a weight vector.
This model is equivalent to CCA: each latent variable corresponds to a canonical coefficient [3] that
bundles a subset of fMRI voxels responding to a specific visual stimulus. We then extended the CCA
model to the Bayesian CCA model that can conduct a sparse selection of these links automatically.
2.1 Canonical Correlation Analysis
We first consider the standard CCA for estimating image bases given visual images I and fMRI
activity patterns r. Let I be an N x 1 vector and r be a K x 1 vector, where N is the number
of image pixels, K is the number of fMRI voxels and t is a sample index. Both data sets are
independent and identically distributed (i.i.d.) samples. CCA finds linear combinations u1(t) = a1' I(t)
and v1(t) = b1' r(t) such that the correlation between u1 and v1 is maximized. The variables u1
and v1 are called the first canonical variables, and the vectors a1 and b1 are called the canonical
coefficients. Then, the second canonical variables u2(t) = a2' I(t) and v2(t) = b2' r(t) are sought
by maximizing the correlation of u2 and v2 while the second canonical variables are orthogonalized
to the first canonical variables. This procedure is continued up to a pre-defined number of times M.
The number M is conventionally set to the smaller dimension of the two sets of observations: in
our case, M = N because the number of visual-image pixels is much smaller than that of the fMRI
2
voxels (N < K). The M sets of canonical variables are summarized as
u(t) = A I(t),  (1)

v(t) = B r(t),  (2)

where u(t) and v(t) are M x 1 vectors, A is an M x N matrix, and B is an M x K matrix. The
matrices A and B are obtained by solving the eigenproblem of the covariance matrix between I
and r [1]. The visual image can be reconstructed by

I(t) = A^{-1} B r(t),  (3)

where each column vector of the inverse matrix A^{-1} is an image basis.
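Eqs. (1)-(3) can be sketched in a few lines of numpy via whitening and an SVD of the cross-covariance. This is a standard textbook CCA recipe, not the authors' code; the small ridge term `eps` is an added assumption for numerical stability, and all sizes in the usage note are hypothetical.

```python
import numpy as np

def fit_cca(I, R, eps=1e-6):
    """Fit CCA (Eqs. 1-2): I is (N, T) pixels x samples, R is (K, T) voxels x samples.

    Returns A (M x N) and B (M x K) with M = N, so that the rows of u = A I
    and v = B R are maximally correlated canonical variables.
    """
    I = I - I.mean(axis=1, keepdims=True)
    R = R - R.mean(axis=1, keepdims=True)
    T = I.shape[1]
    Cii = I @ I.T / T + eps * np.eye(I.shape[0])
    Crr = R @ R.T / T + eps * np.eye(R.shape[0])
    Cir = I @ R.T / T
    # Whiten both blocks, then take the SVD of the whitened cross-covariance.
    Wi = np.linalg.inv(np.linalg.cholesky(Cii))
    Wr = np.linalg.inv(np.linalg.cholesky(Crr))
    U, s, Vt = np.linalg.svd(Wi @ Cir @ Wr.T)
    M = I.shape[0]
    A = U.T @ Wi          # canonical coefficients for pixels (Eq. 1)
    B = Vt[:M] @ Wr       # canonical coefficients for voxels (Eq. 2)
    return A, B

# Reconstruction (Eq. 3): each column of A^{-1} acts as an image basis:
# I_hat = np.linalg.inv(A) @ (B @ r_new)
```

The singular values of the whitened cross-covariance are the canonical correlations, so the first rows of u and v should correlate strongly whenever the two views share a latent signal.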
2.2 Bayesian CCA
Bayesian CCA introduces common latent variables that relate a visual image I and the fMRI activity pattern r with image basis set WI and weight vector set Wr (Figure 1 (b)). These variables
are treated as random variables and prior distributions are assumed for each variable. Hyper-prior
distributions are also assumed for an inverse variance of each element of the image bases and the
weight vectors. The image bases and the weight vectors are estimated as a posterior distribution by
the variational Bayesian method [2]. After the parameters are determined, a predictive distribution
for the visual image can be calculated.
We assume two likelihood functions. One is for visual images that are generated from latent variables. The other is for fMRI activity patterns that are generated from the same latent variables.
When observation noises for visual images and fMRI voxels are assumed to follow a Gaussian distribution with zero mean and spherical covariance, the likelihood functions of the visual image I and
the fMRI activity pattern r are
P(I \mid W_I, z) \propto \exp\Big[ -\tfrac{1}{2} \beta_I \sum_{t=1}^{T} \| I(t) - W_I z(t) \|^2 \Big],  (4)

P(r \mid W_r, z) \propto \exp\Big[ -\tfrac{1}{2} \beta_r \sum_{t=1}^{T} \| r(t) - W_r z(t) \|^2 \Big],  (5)
where W_I is an N x M matrix representing M image bases, each of which consists of N pixels,
W_r is a K x M matrix representing M weight vectors, each of which consists of K voxels, z(t) is an
M x 1 vector representing the latent variables, \beta_I^{-1} and \beta_r^{-1} are scalar variables representing unknown
noise variances of the visual image and fMRI activity pattern, and T is the number of observations.
The latent variables are treated as the following Gaussian prior distribution,

P_0(z) \propto \exp\Big[ -\tfrac{1}{2} \sum_{t=1}^{T} \| z(t) \|^2 \Big].  (6)
The image bases and weight vectors are regarded as random variables, and the prior distributions of
them are assumed as

P_0(W_I \mid \alpha_I) \propto \exp\Big[ -\tfrac{1}{2} \sum_{n=1}^{N} \sum_{m=1}^{M} \alpha_{I(n,m)} W_{I(n,m)}^2 \Big],  (7)

P_0(W_r \mid \alpha_r) \propto \exp\Big[ -\tfrac{1}{2} \sum_{k=1}^{K} \sum_{m=1}^{M} \alpha_{r(k,m)} W_{r(k,m)}^2 \Big],  (8)
where \alpha_{I(n,m)} and \alpha_{r(k,m)} are the inverse variances of the elements in W_I and W_r, respectively,
which are assumed to be mutually independent.
We also assume hyper-prior distributions for the inverse variances \alpha_{I(n,m)} and \alpha_{r(k,m)},

P_0(\alpha_I) = \prod_{n=1}^{N} \prod_{m=1}^{M} G(\alpha_{I(n,m)} \mid \bar{\alpha}_{I0(n,m)}, \gamma_{I0(n,m)}),  (9)

P_0(\alpha_r) = \prod_{k=1}^{K} \prod_{m=1}^{M} G(\alpha_{r(k,m)} \mid \bar{\alpha}_{r0(k,m)}, \gamma_{r0(k,m)}),  (10)

where G(\alpha \mid \bar{\alpha}, \gamma) represents the Gamma distribution with mean \bar{\alpha} and confidence parameter \gamma. For
our analysis, all the means \bar{\alpha}_{I0(n,m)} and \bar{\alpha}_{r0(k,m)} were set to 1, and all the confidence parameters
\gamma_{I0(n,m)} and \gamma_{r0(k,m)} were set to 0.
This configuration of the prior and hyper-prior settings is known as the automatic relevance determination (ARD), where non-effective parameters are automatically driven to zero [7]. In the current
case, these priors and hyper-priors lead to a sparse selection of links from each latent variable to
pixels and voxels.
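Eqs. (4)-(6) define a simple generative process that can be simulated directly. In the sketch below the sparse random W_I and W_r are hypothetical stand-ins for the sparse matrices the ARD priors are meant to recover, and all dimensions and precisions are toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
T, N, K, M = 100, 25, 60, 8        # samples, pixels, voxels, latent dims (toy sizes)
beta_I, beta_r = 10.0, 10.0        # noise precisions (hypothetical values)

# Hypothetical sparse image bases W_I and voxel weights W_r: roughly 20%
# of the entries are nonzero, mimicking the sparse links the ARD priors induce.
W_I = rng.standard_normal((N, M)) * (rng.random((N, M)) < 0.2)
W_r = rng.standard_normal((K, M)) * (rng.random((K, M)) < 0.2)

z = rng.standard_normal((M, T))                                # Eq. (6)
I = W_I @ z + rng.standard_normal((N, T)) / np.sqrt(beta_I)    # Eq. (4)
r = W_r @ z + rng.standard_normal((K, T)) / np.sqrt(beta_r)    # Eq. (5)
```

Data generated this way is useful for sanity-checking an implementation of the estimation procedure in Section 2.3, since the true bases and weights are known.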
Prior distributions of the observation noise precisions are assumed to be non-informative,

P_0(\beta_I) = 1/\beta_I,  (11)

P_0(\beta_r) = 1/\beta_r.  (12)

2.3 Parameter estimation by the variational Bayesian method
The image bases and weight vectors are estimated as a posterior distribution P(W_I, W_r \mid I, r), given
the likelihood functions (Eqs. (4) and (5)), the prior distributions (Eqs. (6)-(8), (11) and (12)), and
the hyper-prior distributions (Eqs. (9) and (10)). This posterior distribution is obtained by marginalizing
the joint posterior distribution P(W_I, W_r, z, \beta_I, \beta_r, \alpha_I, \alpha_r \mid I, r) with respect to the latent
variables and variance parameters,

P(W_I, W_r \mid I, r) = \int dz\, d\beta_I\, d\beta_r\, d\alpha_I\, d\alpha_r\, P(W_I, W_r, z, \beta_I, \beta_r, \alpha_I, \alpha_r \mid I, r).  (13)

Since the joint posterior distribution cannot be calculated analytically, we approximate it using a
trial distribution based on the variational Bayesian (VB) method [2]. In the VB method, a trial
distribution Q(W_I, W_r, z, \beta_I, \beta_r, \alpha_I, \alpha_r) with the following factorization is assumed,

Q(W_I, W_r, z, \beta_I, \beta_r, \alpha_I, \alpha_r) = Q_w(W_I) Q_w(W_r) Q_z(z) Q_{\beta\alpha}(\beta_I, \beta_r, \alpha_I, \alpha_r).  (14)
The joint posterior distribution P(W_I, W_r, z, \beta_I, \beta_r, \alpha_I, \alpha_r \mid I, r) is approximated by the factorized
distribution (Eq. (14)). According to the standard calculation of the VB method, the trial distribution
of the image bases Q_w(W_I) is derived as

Q_w(W_I) = \prod_{n=1}^{N} \prod_{m=1}^{M} N(W_{I(n,m)} \mid \bar{W}_{I(n,m)}, \bar{\gamma}_{I(n,m)}^{-1}),  (15)

where

\bar{W}_{I(n,m)} = \bar{\beta}_I \bar{\gamma}_{I(n,m)}^{-1} \sum_{t=1}^{T} I_n(t) \bar{z}_m(t),  (16)

\bar{\gamma}_{I(n,m)} = \bar{\beta}_I \Big( \sum_{t=1}^{T} \bar{z}_m^2(t) + T \Sigma_{z(m,m)}^{-1} \Big) + \bar{\alpha}_{I(n,m)},  (17)
and N(x \mid \bar{x}, \gamma^{-1}) represents a Gaussian distribution with mean \bar{x} and variance \gamma^{-1}. The trial
distribution of the weight vectors Q_w(W_r) is obtained in a similar way, by replacing I with r, n with k,
and N with K in Eqs. (15)-(17). The trial distribution of the latent variables Q_z(z) is obtained by

Q_z(z) = \prod_{t=1}^{T} N(z(t) \mid \bar{z}(t), \Sigma_z^{-1}),  (18)

where

\bar{z}(t) = \Sigma_z^{-1} \big( \bar{\beta}_I \bar{W}_I' I(t) + \bar{\beta}_r \bar{W}_r' r(t) \big),  (19)

\Sigma_z = \bar{\beta}_I \big( \bar{W}_I' \bar{W}_I + \Sigma_{w_I}^{-1} \big) + \bar{\beta}_r \big( \bar{W}_r' \bar{W}_r + \Sigma_{w_r}^{-1} \big) + E.  (20)
In Eq. (20), E is an identity matrix, and \Sigma_{w_I} and \Sigma_{w_r} are defined as

\Sigma_{w_I} = \mathrm{diag}\Big( \Big[ \sum_{n=1}^{N} \bar{\gamma}_{I(n,1)}, \cdots, \sum_{n=1}^{N} \bar{\gamma}_{I(n,M)} \Big] \Big),  (21)

\Sigma_{w_r} = \mathrm{diag}\Big( \Big[ \sum_{k=1}^{K} \bar{\gamma}_{r(k,1)}, \cdots, \sum_{k=1}^{K} \bar{\gamma}_{r(k,M)} \Big] \Big).  (22)
Finally, the distribution of the inverse variances Q_{\beta\alpha}(\beta_I, \beta_r, \alpha_I, \alpha_r) is further factorized into
Q(\beta_I) Q(\beta_r) Q(\alpha_I) Q(\alpha_r), each factor having a functional form equivalent to a Gamma distribution.
The expectation of \alpha_{I(n,m)} is given by

\bar{\alpha}_{I(n,m)} = \Big( \tfrac{1}{2} + \gamma_{I0(n,m)} \Big) \Big( \tfrac{1}{2} \bar{W}_{I(n,m)}^2 + \tfrac{1}{2} \bar{\gamma}_{I(n,m)}^{-1} + \gamma_{I0(n,m)} \bar{\alpha}_{I0(n,m)}^{-1} \Big)^{-1},  (23)

and that of \beta_I is given by

\bar{\beta}_I = N T \Big\{ \sum_{t=1}^{T} \| I(t) - \bar{W}_I \bar{z}(t) \|^2 + \mathrm{Tr}\Big[ \Big( \sum_{t=1}^{T} \bar{z}(t) \bar{z}'(t) + T \Sigma_z^{-1} \Big) \Sigma_{w_I}^{-1} + T \Sigma_z^{-1} \bar{W}_I' \bar{W}_I \Big] \Big\}^{-1}.  (24)
The expectations of Q(\alpha_r) and Q(\beta_r) are obtained in a similar way, by replacing I with r, n
with k, and N with K in Eq. (23) and Eq. (24), respectively. The expectations of these distributions
are used in the calculation of Q_w(W_I), Q_w(W_r) and Q_z(z) (Eqs. (15)-(20)). The algorithm
estimates the joint posterior by successive calculations of 1) Q_w(W_I) and Q_w(W_r), 2) Q_z(z), and
3) Q_{\beta\alpha}(\beta_I, \beta_r, \alpha_I, \alpha_r). After the algorithm converges, the image bases W_I are calculated by
taking the expectation of Q_w(W_I).
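The three-step iteration above can be sketched in a simplified, fully vectorized form using the update equations (15)-(24) with the flat hyper-priors stated earlier (alpha0 = 1, gamma0 = 0). This is a schematic reimplementation under those assumptions, written for clarity rather than numerical robustness, and is not the authors' code.

```python
import numpy as np

def vb_bcca(I, R, M, iters=30):
    """Simplified VB iteration for the Bayesian CCA model (Eqs. 15-24).

    I: (N, T) visual images; R: (K, T) fMRI patterns; M: number of latent
    variables. Returns the posterior means of the image bases and weights
    and the estimated noise precisions.
    """
    N, T = I.shape
    K, _ = R.shape
    rng = np.random.default_rng(0)
    WI = 0.01 * rng.standard_normal((N, M))
    WR = 0.01 * rng.standard_normal((K, M))
    gI = np.ones((N, M)); gR = np.ones((K, M))   # posterior precisions gamma-bar
    aI = np.ones((N, M)); aR = np.ones((K, M))   # ARD expectations alpha-bar
    bI = bR = 1.0                                # noise precisions beta-bar
    for _ in range(iters):
        # 2) Q_z(z): Eqs. (18)-(20)
        SwI = np.diag(gI.sum(axis=0)); SwR = np.diag(gR.sum(axis=0))
        Sz = (bI * (WI.T @ WI + np.linalg.inv(SwI))
              + bR * (WR.T @ WR + np.linalg.inv(SwR)) + np.eye(M))
        Szi = np.linalg.inv(Sz)
        Z = Szi @ (bI * WI.T @ I + bR * WR.T @ R)
        Szz = Z @ Z.T + T * Szi                  # sum_t z z' + T Sigma_z^{-1}
        # 1) Q_w: Eqs. (15)-(17)
        gI = bI * np.diag(Szz)[None, :] + aI
        WI = bI * (I @ Z.T) / gI
        gR = bR * np.diag(Szz)[None, :] + aR
        WR = bR * (R @ Z.T) / gR
        # 3) ARD and noise updates: Eqs. (23)-(24) with alpha0 = 1, gamma0 = 0
        aI = 1.0 / (WI ** 2 + 1.0 / gI)
        aR = 1.0 / (WR ** 2 + 1.0 / gR)
        SwI = np.diag(gI.sum(axis=0)); SwR = np.diag(gR.sum(axis=0))
        bI = N * T / (((I - WI @ Z) ** 2).sum()
                      + np.trace(Szz @ np.linalg.inv(SwI))
                      + T * np.trace(Szi @ WI.T @ WI))
        bR = K * T / (((R - WR @ Z) ** 2).sum()
                      + np.trace(Szz @ np.linalg.inv(SwR))
                      + T * np.trace(Szi @ WR.T @ WR))
    return WI, WR, bI, bR
```

Because the ARD expectations feed back into the posterior precisions of the matrix elements, entries that are not supported by the data are gradually driven toward zero, which is what produces the sparse pixel and voxel links.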
2.4 Predictive distribution for visual image reconstruction
Using the estimated parameters, we can derive the predictive distribution for a visual image I_new
given a new brain activity pattern r_new (Figure 1 (b), dashed line). Note that I_new and r_new were taken
from the data set reserved for testing the model, independent of the data set used to estimate the model
parameters. The predictive distribution P(I_new \mid r_new) is constructed from the likelihood of the visual
image (Eq. (4)), the estimated distribution of image bases Q_w(W_I) (Eqs. (15)-(17)), and a posterior
distribution of the latent variables P(z_new \mid r_new) as follows,

P(I_new \mid r_new) = \int dW_I\, dz_new\, P(I_new \mid W_I, z_new) Q_w(W_I) P(z_new \mid r_new).  (25)

Because the multiple integral over the random variables W_I and z_new is intractable, we replace the
random variable W_I with the estimated image bases \bar{W}_I to remove the integral over W_I. The
predictive distribution then becomes

P(I_new \mid r_new) \simeq \int dz_new\, P(I_new \mid z_new) P(z_new \mid r_new),  (26)

where

P(I_new \mid z_new) \propto \exp\Big[ -\tfrac{1}{2} \bar{\beta}_I \| I_new - \bar{W}_I z_new \|^2 \Big].  (27)

Since P(z_new \mid r_new) is an unknown distribution, we approximate it based on the trial
distribution Q_z(z) (Eqs. (18)-(20)). We construct an approximate distribution \tilde{Q}_z(z_new) by omitting
the terms related to the visual image in Eqs. (18)-(20),

\tilde{Q}_z(z_new) = N(z_new \mid \bar{z}_new, \Sigma_{z_new}^{-1}),  (28)

where

\bar{z}_new = \bar{\beta}_r \Sigma_{z_new}^{-1} \bar{W}_r' r_new,  (29)

\Sigma_{z_new} = \bar{\beta}_r \big( \bar{W}_r' \bar{W}_r + \Sigma_{w_r}^{-1} \big) + E.  (30)

Finally, the predictive distribution is obtained by

P(I_new \mid r_new) \simeq \int dz_new\, P(I_new \mid z_new) \tilde{Q}_z(z_new) = N(I_new \mid \bar{I}_new, \Sigma_{I_new}^{-1}),  (31)

where

\bar{I}_new = \bar{\beta}_r \bar{W}_I \Sigma_{z_new}^{-1} \bar{W}_r' r_new,  (32)

\Sigma_{I_new}^{-1} = \bar{W}_I \Sigma_{z_new}^{-1} \bar{W}_I' + \bar{\beta}_I^{-1} E.  (33)
The reconstructed visual image is calculated by taking the expectation of the predictive distribution.
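Given estimates of the image bases, voxel weights, weight-variance matrix, and noise precision, taking this expectation reduces to a single linear solve. The sketch below is a direct transcription of Eqs. (29), (30) and (32) under those estimates; the variable names are ours, not the authors'.

```python
import numpy as np

def reconstruct(W_I, W_r, Sw_r_inv, beta_r, r_new):
    """Predictive mean of the visual image (Eq. 32).

    W_I: (N, M) estimated image bases; W_r: (K, M) estimated voxel weights;
    Sw_r_inv: (M, M) diagonal matrix Sigma_wr^{-1}; beta_r: estimated fMRI
    noise precision; r_new: (K,) new fMRI activity pattern.
    """
    M = W_I.shape[1]
    S_znew = beta_r * (W_r.T @ W_r + Sw_r_inv) + np.eye(M)   # Eq. (30)
    z_new = beta_r * np.linalg.solve(S_znew, W_r.T @ r_new)  # Eq. (29)
    return W_I @ z_new                                       # Eq. (32)
```

In the noiseless, high-precision limit the posterior mean of the latents approaches the least-squares solution of r_new = W_r z, so the reconstruction approaches W_I z for data generated from a shared latent z.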
2.5 fMRI data
We used the data set from Miyawaki et al. [6], in which fMRI signals were measured while subjects
viewed visual images consisting of contrast-defined 10 x 10 patches. The data set contained two
independent sessions. One is a 'random image session', in which spatially random patterns were
sequentially presented for 6 s followed by a 6 s rest period. A total of 440 different random patterns
were presented for each subject. The other is a 'figure image session', in which alphabetical letters
and simple geometric shapes were sequentially presented for 12 s followed by a 12 s rest period.
Five alphabetical letters and five geometric shapes were presented six or eight times per subject. We
used fMRI data from V1 for the analyses. See Miyawaki et al. [6] for details.
3 Results

We estimated image bases and weight vectors using the data from the 'random image session'.
Then, reconstruction performance was evaluated with the data from the 'figure image session'.
3.1 Estimated image bases
Figure 2 (a) shows representative image bases estimated by Bayesian CCA (weight values are indicated by a gray scale). The estimation algorithm extracted spatially localized image bases whose
shapes were consistent with those used in the previous study [6] (1 x 1, 1 x 2, and 2 x 1, shown
in the 1st and 2nd rows of Figure 2 (a)). We also found image bases with other shapes (e.g., L-shape,
3 x 1 and 1 x 3, 3rd row of Figure 2 (a)) that were not assumed in the previous study. We repeated
the estimation using data resampled from the random image session, and calculated the distribution
of the image bases (defined by a pixel cluster with magnitudes over 3 SD of all pixel values) over
eccentricity for different sizes (Figure 2 (a), right). The image bases of the smallest size (1 x 1)
were distributed over the visual field, and most of them were within three degrees of eccentricity.
The size of the image basis tended to increase with eccentricity. For comparison, we also performed
the image basis estimation using CCA, but it did not produce spatially localized image bases (Figure 2 (b)). Estimated weight vectors for fMRI voxels had high values around the retinotopic region
corresponding the location of the estimated basis (data not shown).
3.2 Visual image reconstruction using estimated image bases
The reconstruction model with the estimated image bases was tested on five alphabet letters and five
geometric shapes (Figure 3 (a), 1st row). The images reconstructed by Bayesian CCA captured the
essential features of the presented images (Figure 3 (a), 2nd row). In particular, they showed fine
reconstruction for figures consisting of thin lines such as small frames and alphabet letters. However,
the peripheral reconstruction was poor and often lacked shapes of the presented images. This may
be due to the lack of estimated image bases in the peripheral regions (Figure 2 (a), right). The
standard CCA produced poorer reconstruction with noise scattered over the entire image (Figure
3 (a), 3rd row), as expected from the non-local image bases estimated by CCA (Figure 2 (b)).
Reconstruction using fixed image bases [6] showed moderate accuracy for all image types (Figure
3 (a), 4th row). To evaluate the reconstruction performance quantitatively, we calculated the spatial
correlation between the presented and reconstructed images (Figure 3 (b)). The correlation values
Figure 2: Image basis estimation: (a) Representative bases estimated by Bayesian CCA (left,
sorted by the number of pixels), and their frequency as a function of eccentricity (right). 3-pixel
bases (L-shape, 3x1 and 1x3) were not assumed in Miyawaki et al. [6]. Negative (dark) bases were
often associated with negative voxel weights, thus equivalent to positive bases with positive voxel
weights. (b) Examples of image bases estimated by the standard CCA.
were not significantly different between Bayesian CCA and the fixed basis method when the alphabet
letters and the geometric shapes were analyzed together. However, Bayesian CCA outperformed the
fixed basis method for the alphabet letters, while the fixed basis method outperformed Bayesian
CCA for the geometric shapes (p < .05). This is presumably because the alphabet letters consist
of more foveal pixels, which overlap the region covered by the image bases estimated by Bayesian
CCA. The reconstruction performance of CCA was lowest in all cases.
4 Discussion
We have proposed a new method to estimate image bases from fMRI data and presented visual
stimuli. Our model consists of the latent variables and two matrices relating the two sets of observations. The previous work used fixed image bases and estimated the weights between the image
bases and fMRI voxels. This estimation was conducted by sparse logistic regression, which assumes sparseness in the weight values and effectively removes irrelevant voxels [8]. The proposed
method introduced sparseness priors not only for fMRI voxels but also for image pixels. These priors lead to automatic extraction of image bases, and to mappings between a small number of fMRI
voxels and a small number of image pixels. Using this model, we successfully extracted spatially
localized image bases including those not used in the previous work [6]. Using the set of image
bases, we were able to accurately reconstruct arbitrary contrast-defined visual images from fMRI
activity patterns. The sparseness priors played an important role to estimate spatially localized image bases, and to improve reconstruction performance, as demonstrated by the comparison with the
results from standard CCA (Figure 2 and 3).
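For reference, the spatial-correlation score reported in Figure 3(b) can be sketched as a plain Pearson correlation between the presented and reconstructed images, flattened to pixel vectors; the exact evaluation details (pixel masking, averaging across trials) are assumptions here, and the function name is illustrative:

```python
import math

def spatial_correlation(presented, reconstructed):
    """Pearson correlation between a presented image and its reconstruction,
    both flattened to pixel vectors (the score plotted in Figure 3b).
    The exact evaluation protocol is an assumption of this sketch."""
    n = len(presented)
    mp = sum(presented) / n
    mr = sum(reconstructed) / n
    cov = sum((p - mp) * (q - mr) for p, q in zip(presented, reconstructed))
    sp = math.sqrt(sum((p - mp) ** 2 for p in presented))
    sr = math.sqrt(sum((q - mr) ** 2 for q in reconstructed))
    return cov / (sp * sr)
```

Because the score is mean- and scale-invariant, a reconstruction that recovers the contrast pattern up to brightness and gain still scores 1.0.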
Our method has several limitations. First, as the latent variables were assumed to have an orthogonal Gaussian distribution, it may be difficult to obtain non-orthogonal image bases, which have been
[Figure 3 graphic omitted: (a) presented vs. reconstructed images for geometric shapes and alphabet letters under Bayesian CCA, CCA, and fixed bases (Miyawaki et al.); (b) spatial-correlation bars (scale 0-0.8) for All, Alphabet letters, and Geometric shapes.]
Figure 3: Visual image reconstruction: (a) Presented images (1st row, alphabet letters and geometric shapes) and the reconstructed images obtained from Bayesian CCA, the standard CCA, and
the fixed basis model (2nd - 4th rows). (b) Spatial correlation between presented and reconstructed
images.
shown to provide an effective image representation in the framework of sparse coding [4, 9]. Different types of image bases could be generated by introducing non-orthogonality and/or non-linearity
in the model. The shape of estimated image bases may also depend on the visual stimuli used for
the training of the reconstruction model. Although we used random images as visual stimuli, other
types of images including natural scenes may lead to more effective image bases that allow for accurate reconstruction. Finally, our method failed to estimate peripheral image bases, and as a result,
only poor reconstruction was achieved for peripheral pixels. The cortical magnification factor of the
visual cortex [5] suggests that a small number of voxels represent a large number of image pixels in
the periphery. Elaborate assumptions about the degree of sparseness depending on eccentricity may
help to improve basis estimation and image reconstruction in the periphery.
Acknowledgments
This study was supported by the Nissan Science Foundation, SCOPE (SOUMU) and SRPBS
(MEXT).
References
[1] Anderson, T.W. (2003). An Introduction to Multivariate Statistical Analysis. 3rd ed. Wiley
Interscience.
[2] Attias, H. (1999). Inferring parameters and structure of latent variable models by variational
Bayes. Proc. 15th Conference on Uncertainty in Artificial Intelligence, 21-30.
[3] Bach, F.R. and Jordan, M.I. (2005). A probabilistic interpretation of canonical correlation analysis. Dept. Statist., Univ. California, Berkeley, CA, Tech. Rep. 688.
[4] Bell, A.J. and Sejnowski, T.J. (1997). The "independent components" of natural scenes are edge filters. Vision Res. 37(23), 3327-3338.
[5] Engel, S.A., Glover, G.H. and Wandell, B.A. (1997) Retinotopic organization in human visual
cortex and the spatial precision of functional MRI. Cereb. Cortex 7, 181-192.
[6] Miyawaki, Y., Uchida, H., Yamashita, O., Sato, MA., Morito, Y., Tanabe, HC., Sadato, N. and
Kamitani, Y. (2008). Visual image reconstruction from human brain activity using a combination of multiscale local image decoders. Neuron 60(5), 915-929.
[7] Neal, R.M. (1996). Bayesian Learning for Neural Networks. Springer-Verlag.
[8] Yamashita, O., Sato, M., Yoshioka, T., Tong, F. and Kamitani, Y. (2008). Sparse estimation automatically selects voxels relevant for the decoding of fMRI activity patterns. NeuroImage 42(4), 1414-1429.
[9] Olshausen ,B.A. and Field, D.J. (1996). Emergence of simple-cell receptive field properties by
learning a sparse code for natural images. Nature 381, 607-609.
[10] Wang, C. (2007). Variational Bayesian Approach to Canonical Correlation Analysis. IEEE Trans. Neural Netw. 18(3), 905-910.
Learning to Rank by Optimizing NDCG Measure
Hamed Valizadegan
Rong Jin
Computer Science and Engineering
Michigan State University
East Lansing, MI 48824
{valizade,rongjin}@cse.msu.edu
Ruofei Zhang
Jianchang Mao
Advertising Sciences, Yahoo! Labs
4401 Great America Parkway,
Santa Clara, CA 95054
{rzhang,jmao}@yahoo-inc.com
Abstract
Learning to rank is a relatively new field of study, aiming to learn a ranking function from a set of training data with relevancy labels. The ranking algorithms
are often evaluated using information retrieval measures, such as Normalized Discounted Cumulative Gain (NDCG) [1] and Mean Average Precision (MAP) [2].
Until recently, most learning to rank algorithms were not using a loss function
related to the above mentioned evaluation measures. The main difficulty in direct
optimization of these measures is that they depend on the ranks of documents, not
the numerical values output by the ranking function. We propose a probabilistic
framework that addresses this challenge by optimizing the expectation of NDCG
over all the possible permutations of documents. A relaxation strategy is used to
approximate the average of NDCG over the space of permutation, and a bound
optimization approach is proposed to make the computation efficient. Extensive
experiments show that the proposed algorithm outperforms state-of-the-art ranking algorithms on several benchmark data sets.
1 Introduction
Learning to rank has attracted the attention of many machine learning researchers in the last decade because of its growing application in areas like information retrieval (IR) and recommender
systems. In the simplest form, the so-called pointwise approaches, ranking is treated as classification or regression, learning the numeric rank value of each document as an absolute quantity [3, 4]. The second group of algorithms, the pairwise approaches, considers pairs of documents as independent variables and learns a classification (regression) model to correctly order the training
pairs [5, 6, 7, 8, 9, 10, 11]. The main problem with these approaches is that their loss functions are
related to individual documents while most evaluation metrics of information retrieval measure the
ranking quality for individual queries, not documents.
This mismatch has motivated the so called listwise approaches for information ranking, which treats
each ranking list of documents for a query as a training instance [2, 12, 13, 14, 15, 16, 17]. Unlike
the pointwise or pairwise approaches, the listwise approaches aim to optimize the evaluation metrics
such as NDCG and MAP. The main difficulty in optimizing these evaluation metrics is that they are
dependent on the rank position of documents induced by the ranking function, not the numerical
values output by the ranking function. In the past studies, this problem was addressed either by the
convex surrogate of the IR metrics or by heuristic optimization methods such as genetic algorithm.
In this work, we address this challenge by a probabilistic framework that optimizes the expectation
of NDCG over all the possible permutation of documents. To handle the computational difficulty, we
present a relaxation strategy that approximates the expectation of NDCG in the space of permutation,
and a bound optimization algorithm [18] for efficient optimization. Our experiment with several
benchmark data sets shows that our method performs better than several state-of-the-art ranking
techniques.
1
The rest of this paper is organized as follows. The related work is presented in Section 2. The
proposed framework and optimization strategy is presented in Section 3. We report our experimental
study in Section 4 and conclude this work in Section 5.
2 Related Work
We focus on reviewing the listwise approaches that are closely related to the theme of this work.
The listwise approaches can be classified into two categories. The first group of approaches directly
optimizes the IR evaluation metrics. Most IR evaluation metrics, however, depend on the sorted
order of documents, and are non-convex in the target ranking function. To avoid the computational
difficulty, these approaches either approximate the metrics with some convex functions or deploy
methods (e.g., genetic algorithm [19]) for non-convex optimization. In [13], the authors introduced
LambdaRank that addresses the difficulty in optimizing IR metrics by defining a virtual gradient
on each document after the sorting. While [13] provided a simple test to determine if there exists
an implicit cost function for the virtual gradient, theoretical justification for the relation between
the implicit cost function and the IR evaluation metric is incomplete. This may partially explain
why LambdaRank performs very poorly compared to MCRank [3], a simple adjustment of classification for ranking (a pointwise approach). The authors of the MCRank paper even claimed that
a boosting model for regression produces better results than LambdaRank. Volkovs and Zemel [17]
proposed optimizing the expectation of IR measures to overcome the sorting problem, similar to
the approach taken in this paper. However they use monte carlo sampling to address the intractable
task of computing the expectation in the permutation space which could be a bad approximation
for the queries with large number of documents. AdaRank [20] uses boosting to optimize NDCG,
similar to our optimization strategy. However they deploy heuristics to embed the IR evaluation
metrics in computing the weights of queries and the importance of weak rankers; i.e. it uses NDCG
value of each query in the current iteration as the weight for that query in constructing the weak
ranker (the documents of each query have similar weight). This is unlike our approach that the
contribution of each single document to the final NDCG score is considered. Moreover, unlike our
method, the convergence of AdaRank is conditional and not guaranteed. Sun et al. [21] reduced
the ranking, as measured by NDCG, to pairwise classification and applied alternating optimization
strategy to address the sorting problem by fixing the rank position in getting the derivative. SVMMAP [2] relaxes the MAP metric by incorporating it into the constrains of SVM. Since SVM-MAP
is designed to optimize MAP, it only considers the binary relevancy and cannot be applied to the
data sets that have more than two levels of relevance judgements.
The second group of listwise algorithms defines a listwise loss function as an indirect way to optimize the IR evaluation metrics. RankCosine [12] uses cosine similarity between the ranking list
and the ground truth as a query level loss function. ListNet [14] adopts the KL divergence for loss
function by defining a probabilistic distribution in the space of permutation for learning to rank.
FRank [9] uses a new loss function called fidelity loss on the probability framework introduced in
ListNet. ListMLE [15] employs the likelihood loss as the surrogate for the IR evaluation metrics.
The main problem with this group of approaches is that the connection between the listwise loss
function and the targeted IR evaluation metric is unclear, and therefore optimizing the listwise loss
function may not necessarily result in the optimization of the IR metrics.
3 Optimizing NDCG Measure
3.1 Notation
Assume that we have a collection of n queries for training, denoted by Q = \{q^1, \ldots, q^n\}. For each query q^k, we have a collection of m_k documents D^k = \{d_i^k, i = 1, \ldots, m_k\}, whose relevance to q^k is given by a vector \mathbf{r}^k = (r_1^k, \ldots, r_{m_k}^k) \in \mathbb{Z}^{m_k}. We denote by F(d, q) the ranking function that takes a document-query pair (d, q) and outputs a real-valued score, and by j_i^k the rank of document d_i^k within the collection D^k for query q^k. The NDCG value for ranking function F(d, q) is then computed as follows:

L(Q, F) = \frac{1}{n} \sum_{k=1}^{n} \frac{1}{Z_k} \sum_{i=1}^{m_k} \frac{2^{r_i^k} - 1}{\log(1 + j_i^k)}   (1)

where Z_k is the normalization factor [1]. NDCG is usually truncated at a particular rank level (e.g., the first 10 retrieved documents) to emphasize the importance of the first retrieved documents.
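As a concrete reference, Eq. (1) truncated at the first k positions can be sketched as follows; `relevances` lists the relevance grades in ranked order, the natural logarithm is used for the discount, and Z_k is taken to be the DCG of the ideal ordering (the function names are illustrative):

```python
import math

def dcg_at_k(relevances, k):
    """DCG of a ranked relevance list truncated at position k, using the
    gain 2^r - 1 and the discount 1/log(1 + rank) from Eq. (1)."""
    return sum((2 ** r - 1) / math.log(1 + pos)
               for pos, r in enumerate(relevances[:k], start=1))

def ndcg_at_k(relevances, k):
    """NDCG@k: DCG divided by the normalization factor Z_k (the DCG of the
    ideal ordering), so a perfect ranking scores 1.0."""
    z = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / z if z > 0 else 0.0
```

The base of the logarithm cancels in the ratio, so natural log and log base 2 give the same NDCG value.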
3.2 A Probabilistic Framework
One of the main challenges faced in optimizing the NDCG metric defined in Equation (1) is that the dependence of the document ranks (i.e., j_i^k) on the ranking function F(d, q) is not explicitly expressed, which makes it computationally challenging. To address this problem, we consider the expectation of L(Q, F) over all the possible rankings induced by the ranking function F(d, q), i.e.,

\bar{L}(Q, F) = \frac{1}{n} \sum_{k=1}^{n} \frac{1}{Z_k} \left\langle \sum_{i=1}^{m_k} \frac{2^{r_i^k} - 1}{\log(1 + j_i^k)} \right\rangle_F = \frac{1}{n} \sum_{k=1}^{n} \frac{1}{Z_k} \sum_{\pi^k \in S_{m_k}} \Pr(\pi^k \mid F, q^k) \sum_{i=1}^{m_k} \frac{2^{r_i^k} - 1}{\log(1 + \pi^k(i))}   (2)

where S_{m_k} stands for the group of permutations of m_k documents, and \pi^k is an instance of a permutation (or ranking). The notation \pi^k(i) stands for the rank position of the i-th document under \pi^k. To this end, we first utilize the result in the following lemma to approximate the expectation of 1/\log(1 + \pi^k(i)) by the expectation of \pi^k(i).
Lemma 1. For any distribution \Pr(\pi \mid F, q), the inequality \bar{L}(Q, F) \geq \bar{H}(Q, F) holds, where

\bar{H}(Q, F) = \frac{1}{n} \sum_{k=1}^{n} \frac{1}{Z_k} \sum_{i=1}^{m_k} \frac{2^{r_i^k} - 1}{\log(1 + \langle \pi^k(i) \rangle_F)}   (3)

Proof. The proof follows from the facts that (a) 1/x is a convex function when x > 0, and therefore \langle 1/\log(1+x) \rangle \geq 1/\langle \log(1+x) \rangle; and (b) \log(1+x) is a concave function, and therefore \langle \log(1+x) \rangle \leq \log(1 + \langle x \rangle). Combining these two facts, we have the result stated in the lemma.
Given that \bar{H}(Q, F) provides a lower bound for \bar{L}(Q, F), in order to maximize \bar{L}(Q, F) we can alternatively maximize \bar{H}(Q, F), which is substantially simpler than \bar{L}(Q, F). In the next step of simplification, we rewrite \pi^k(i) as

\pi^k(i) = 1 + \sum_{j=1}^{m_k} I(\pi^k(i) > \pi^k(j))   (4)

where I(x) outputs 1 when x is true and zero otherwise. Hence, \langle \pi^k(i) \rangle is written as

\langle \pi^k(i) \rangle = 1 + \sum_{j=1}^{m_k} \langle I(\pi^k(i) > \pi^k(j)) \rangle = 1 + \sum_{j=1}^{m_k} \Pr(\pi^k(i) > \pi^k(j))   (5)

As a result, to optimize \bar{H}(Q, F), we only need to define \Pr(\pi^k(i) > \pi^k(j)), i.e., the marginal probability for document d_j^k to be ranked before document d_i^k. In the next section, we will discuss how to define a probability model for \Pr(\pi^k \mid F, q^k), and derive the pairwise ranking probability \Pr(\pi^k(i) > \pi^k(j)) from the distribution \Pr(\pi^k \mid F, q^k).
3.3 Objective Function
We model \Pr(\pi^k \mid F, q^k) as follows:

\Pr(\pi^k \mid F, q^k) = \frac{1}{Z(F, q^k)} \exp\left( \sum_{i=1}^{m_k} \sum_{j: \pi^k(j) > \pi^k(i)} \left( F(d_i^k, q^k) - F(d_j^k, q^k) \right) \right) = \frac{1}{Z(F, q^k)} \exp\left( \sum_{i=1}^{m_k} (m_k - 2\pi^k(i) + 1) F(d_i^k, q^k) \right)   (6)

where Z(F, q^k) is the partition function that ensures the probabilities sum to one. Equation (6) models each pair (d_i^k, d_j^k) of the ranking list \pi^k by the factor \exp(F(d_i^k, q^k) - F(d_j^k, q^k)) if d_i^k is ranked before d_j^k (i.e., \pi^k(d_i^k) < \pi^k(d_j^k)), and vice versa. This modeling choice is consistent with the idea of ranking the documents with the largest scores first; intuitively, the more documents in a permutation are in decreasing order of score, the larger the probability of the permutation. Using Equation (6) for \Pr(\pi^k \mid F, q^k), we have \bar{H}(Q, F) expressed in terms of the ranking function F. By maximizing \bar{H}(Q, F) over F, we can find the optimal ranking function F.
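For intuition, the model of Eq. (6) can be checked by brute force on a tiny document set: because the coefficient m_k - 2\pi(i) + 1 decreases with rank position, the most probable permutation places documents in decreasing order of score. A minimal sketch (function names are illustrative):

```python
import itertools

def ranking_log_weight(scores, ranking):
    """Unnormalized log-probability of a ranking under Eq. (6), where
    ranking[i] is the (1-based) rank position of document i and scores[i]
    is the score F(d_i, q)."""
    m = len(scores)
    return sum((m - 2 * ranking[i] + 1) * scores[i] for i in range(m))

def most_probable_ranking(scores):
    """Enumerate all rankings of a small document set and return the one
    with the highest probability under Eq. (6)."""
    m = len(scores)
    return max(itertools.permutations(range(1, m + 1)),
               key=lambda p: ranking_log_weight(scores, p))
```

The partition function Z(F, q) is a common factor across permutations, so it can be ignored when comparing rankings.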
As indicated by Equation (5), we only need to compute the marginal distribution \Pr(\pi^k(i) > \pi^k(j)). To approximate \Pr(\pi^k(i) > \pi^k(j)), we divide the group of permutations S_{m_k} into two sets: G_a^k(i,j) = \{\pi^k \mid \pi^k(i) > \pi^k(j)\} and G_b^k(i,j) = \{\pi^k \mid \pi^k(i) < \pi^k(j)\}. Notice that there is a one-to-one mapping between these two sets; namely, for any ranking \pi^k \in G_a^k(i,j), we can create a corresponding ranking in G_b^k(i,j) by switching the ranks of documents d_i^k and d_j^k, and vice versa. The following lemma allows us to bound the marginal distribution \Pr(\pi^k(i) > \pi^k(j)).
Lemma 2. If F(d_i^k, q^k) > F(d_j^k, q^k), we have

\Pr(\pi^k(i) > \pi^k(j)) \leq \frac{1}{1 + \exp\left( 2(F(d_i^k, q^k) - F(d_j^k, q^k)) \right)}   (7)

Proof.

1 = \sum_{\pi^k \in G_a^k(i,j)} \Pr(\pi^k \mid F, q^k) + \sum_{\pi^k \in G_b^k(i,j)} \Pr(\pi^k \mid F, q^k)
  = \sum_{\pi^k \in G_a^k(i,j)} \Pr(\pi^k \mid F, q^k) \left[ 1 + \exp\left( 2(\pi^k(i) - \pi^k(j))(F(d_i^k, q^k) - F(d_j^k, q^k)) \right) \right]
  \geq \sum_{\pi^k \in G_a^k(i,j)} \Pr(\pi^k \mid F, q^k) \left[ 1 + \exp\left( 2(F(d_i^k, q^k) - F(d_j^k, q^k)) \right) \right]
  = \left[ 1 + \exp\left( 2(F(d_i^k, q^k) - F(d_j^k, q^k)) \right) \right] \Pr\left( \pi^k(i) > \pi^k(j) \right)

We used the definition of \Pr(\pi^k \mid F, q^k) in Equation (6) to express G_b^k(i,j) as the dual of G_a^k(i,j) in the first step of the proof. The inequality holds because \pi^k(i) - \pi^k(j) \geq 1 for any \pi^k \in G_a^k(i,j), and the last step holds because \Pr(\pi^k \mid F, q^k) is the only term dependent on \pi^k.
This lemma indicates that we can approximate \Pr(\pi^k(i) > \pi^k(j)) by a simple logistic model. The idea of using a logistic model for \Pr(\pi^k(i) > \pi^k(j)) is not new in learning to rank [7, 9]; however, it has been taken for granted, and no justification had been provided for its use in learning to rank. Using the logistic model approximation introduced in Lemma 2, we now have \langle \pi^k(i) \rangle written as

\langle \pi^k(i) \rangle \approx 1 + \sum_{j=1}^{m_k} \frac{1}{1 + \exp\left( 2(F(d_i^k, q^k) - F(d_j^k, q^k)) \right)}   (8)

To simplify our notation, we define F_i^k = 2F(d_i^k, q^k) and rewrite the above expression as

\langle \pi^k(i) \rangle = 1 + \sum_{j=1}^{m_k} \Pr(\pi^k(i) > \pi^k(j)) \approx 1 + \sum_{j=1}^{m_k} \frac{1}{1 + \exp(F_i^k - F_j^k)}

Using the above approximation for \langle \pi^k(i) \rangle, we have \bar{H} in Equation (3) written as

\bar{H}(Q, F) \approx \frac{1}{n} \sum_{k=1}^{n} \frac{1}{Z_k} \sum_{i=1}^{m_k} \frac{2^{r_i^k} - 1}{\log(2 + A_i^k)}   (9)

where

A_i^k = \sum_{j=1}^{m_k} \frac{I(j \neq i)}{1 + \exp(F_i^k - F_j^k)}   (10)
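The expected-rank approximation behind Eqs. (8)-(10) is a sum of logistic terms; a minimal sketch, taking the doubled scores F_i = 2F(d_i, q) as input (rank 1 is the top position; the function name is illustrative):

```python
import math

def expected_ranks(F):
    """Approximate expected rank <pi(i)> of each document via Eq. (8),
    where F[i] is the doubled score F_i = 2 F(d_i, q). Each pairwise
    term is the logistic upper bound of Lemma 2."""
    m = len(F)
    return [1.0 + sum(1.0 / (1.0 + math.exp(F[i] - F[j]))
                      for j in range(m) if j != i)
            for i in range(m)]
```

Since the pairwise terms for (i, j) and (j, i) sum to one, the expected ranks always sum to m(m+1)/2, just as true ranks do.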
We use the following proposition to further simplify the objective function:

Proposition 1.

\frac{1}{\log(2 + A_i^k)} \geq \frac{1}{\log(2)} - \frac{A_i^k}{2[\log(2)]^2}

The proof follows from the Taylor expansion of the convex function 1/\log(2 + x), x > -1, around x = 0, noting that A_i^k > 0 (the convexity of 1/\log(1 + x) is established in Lemma 1). By plugging the result of this proposition into the objective function in Equation (9), the new objective is to minimize the following quantity:

\bar{M}(Q, F) = \frac{1}{n} \sum_{k=1}^{n} \frac{1}{Z_k} \sum_{i=1}^{m_k} (2^{r_i^k} - 1) A_i^k   (11)

The objective function in Equation (11) is explicitly related to F via the term A_i^k. In the next section, we aim to derive an algorithm that learns an effective ranking function by efficiently minimizing \bar{M}. It is also important to note that although \bar{M} is no longer a rigorous lower bound for the original objective function \bar{L}, our empirical study shows that this approximation is very effective in identifying the appropriate ranking function from the training data.
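As a sanity check, the single-query version of the surrogate \bar{M} in Eq. (11) can be sketched as below; ranking relevant documents higher yields a smaller value, and Z_k enters only as a constant scale (names are illustrative):

```python
import math

def m_bar(F, rel, Z=1.0):
    """Single-query surrogate objective of Eq. (11):
    sum_i (2^{r_i} - 1) * A_i / Z, with A_i from Eq. (10).
    Smaller is better: minimizing it pushes high-relevance documents
    toward low expected ranks."""
    m = len(F)
    total = 0.0
    for i in range(m):
        A_i = sum(1.0 / (1.0 + math.exp(F[i] - F[j]))
                  for j in range(m) if j != i)
        total += (2 ** rel[i] - 1) * A_i / Z
    return total
```

With all scores equal, every A_i equals (m - 1)/2, which gives the maximally uninformed value of the objective.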
3.4 Algorithm
To minimize \bar{M}(Q, F) in Equation (11), we employ the bound optimization strategy [18], which iteratively updates the solution for F. Let F_i^k denote the value obtained so far for document d_i^k. To improve NDCG, following the idea of AdaBoost, the new ranking value for document d_i^k, denoted \tilde{F}_i^k, is updated as

\tilde{F}_i^k = F_i^k + \alpha f_i^k   (12)

where \alpha > 0 is the combination weight and f_i^k = f(d_i^k, q^k) \in \{0, 1\} is a binary value. Note that in the above, we assume the ranking function F(d, q) is updated iteratively by adding a binary classification function f(d, q), which leads to efficient computation as well as effective exploitation of existing algorithms for data classification. To construct a bound for \bar{M}(Q, F), we first handle the expression [1 + \exp(F_i^k - F_j^k)]^{-1}, as summarized by the following proposition.
Proposition 2.

\frac{1}{1 + \exp(\tilde{F}_i^k - \tilde{F}_j^k)} \leq \frac{1}{1 + \exp(F_i^k - F_j^k)} + \gamma_{i,j}^k \left[ \exp\left( \alpha (f_j^k - f_i^k) \right) - 1 \right]   (13)

where

\gamma_{i,j}^k = \frac{\exp(F_i^k - F_j^k)}{\left( 1 + \exp(F_i^k - F_j^k) \right)^2}   (14)

The proof of this proposition can be found in Appendix A. The proposition separates the term related to F_i^k from the term related to \alpha f_i^k in Equation (11), and shows how the new weak ranker (i.e., the binary classification function f(d, q)) affects the current ranking function F(d, q). Using this proposition, we can derive a closed-form solution for \alpha given the weak ranker f (Theorem 1), as well as an upper bound on \bar{M} (Theorem 2).
Theorem 1. Given the solution for the binary classifier f_i^k, the optimal \alpha that minimizes the objective function in Equation (11) is

\alpha = \frac{1}{2} \log \left[ \frac{ \sum_{k=1}^{n} \sum_{i,j=1}^{m_k} \frac{2^{r_i^k} - 1}{Z_k} \, \tilde{\gamma}_{i,j}^k \, I(f_j^k < f_i^k) }{ \sum_{k=1}^{n} \sum_{i,j=1}^{m_k} \frac{2^{r_i^k} - 1}{Z_k} \, \tilde{\gamma}_{i,j}^k \, I(f_j^k > f_i^k) } \right]   (15)

where \tilde{\gamma}_{i,j}^k = \gamma_{i,j}^k I(j \neq i).
Theorem 2.

\bar{M}(Q, \tilde{F}) \leq \bar{M}(Q, F) + \phi(\alpha) + \frac{\exp(3\alpha) - 1}{3} \sum_{k=1}^{n} \frac{1}{Z_k} \sum_{i=1}^{m_k} f_i^k \sum_{j=1}^{m_k} \left( 2^{r_j^k} - 2^{r_i^k} \right) \tilde{\gamma}_{i,j}^k

where \phi(\alpha) is a function of \alpha alone, with \phi(0) = 0.
The proofs of these theorems are provided in Appendix B and Appendix C, respectively. Note that the bound provided by Theorem 2 is tight: setting \alpha = 0 reduces the inequality to the equality \bar{M}(Q, \tilde{F}) = \bar{M}(Q, F). The importance of this theorem is that the optimal solution for f_i^k can be found without knowing the solution for \alpha.

Algorithm 1 summarizes the procedure for minimizing the objective function in Equation (11).¹ First, it computes \gamma_{i,j}^k for every pair of documents of each query. Then it computes w_i^k, a weight for each document, which can be positive or negative. A positive weight w_i^k indicates that the ranking position of d_i^k induced by the current ranking function F is less than its true rank position, while a negative weight w_i^k indicates that the ranking position of d_i^k induced by the current F is greater than its true rank position. The sign of the weight w_i^k therefore provides clear guidance for constructing the next weak ranker, the binary classifier in our case: documents with a positive w_i^k should be labeled +1 by the binary classifier, and those with a negative w_i^k should be labeled -1. The magnitude of w_i^k shows how much the corresponding document is misplaced in the ranking; in other words, it shows the importance of correcting the ranking position of document d_i^k in terms of improving the value of NDCG. This leads to maximizing \eta as given in Equation (17), which can be considered a form of weighted classification accuracy. We use a sampling strategy to maximize \eta because most binary classifiers do not support weighted training sets; that is, we first sample the documents according to |w_i^k| and then construct a binary classifier with the sampled documents. It can be shown that the proposed algorithm reduces the objective function \bar{M} exponentially (the proof is omitted due to lack of space).

¹ Notice that we use F(d_i^k) instead of F(d_i^k, q^k) to simplify the notation in the algorithm.
Algorithm 1 NDCG Boost: A Boosting Algorithm for Maximizing NDCG
1: Initialize F(d_i^k) = 0 for all documents
2: repeat
3:    Compute \tilde{\gamma}_{i,j}^k = \gamma_{i,j}^k I(j \neq i) for all document pairs of each query, where \gamma_{i,j}^k is given in Eq. (14)
4:    Compute the weight for each document as
         w_i^k = \sum_{j=1}^{m_k} \frac{2^{r_i^k} - 2^{r_j^k}}{Z_k} \tilde{\gamma}_{i,j}^k   (16)
5:    Assign each document the class label y_i^k = \mathrm{sign}(w_i^k)
6:    Train a classifier f(x): \mathbb{R}^d \to \{0, 1\} that maximizes the quantity
         \eta = \sum_{k=1}^{n} \sum_{i=1}^{m_k} |w_i^k| \, f(d_i^k) \, y_i^k   (17)
7:    Predict f_i^k for all documents in D^k, k = 1, \ldots, n
8:    Compute the combination weight \alpha as given in Equation (15)
9:    Update the ranking function as F_i^k \leftarrow F_i^k + \alpha f_i^k
10: until reaching the maximum number of iterations
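The weight-computation steps of Algorithm 1 can be sketched for a single query as follows, under one consistent reading of the signs in Eqs. (14) and (16) (so that a document more relevant than its competitors receives a positive weight, and hence the label +1); this is an illustrative sketch, not the authors' implementation:

```python
import math

def document_weights(F, rel, Z=1.0):
    """Per-document weights w_i of Eq. (16) for a single query, with
    gamma_{ij} from Eq. (14). In Algorithm 1, sign(w_i) is the class label
    handed to the weak ranker and |w_i| is its sampling weight."""
    m = len(F)
    w = []
    for i in range(m):
        w_i = 0.0
        for j in range(m):
            if j == i:
                continue
            s = math.exp(F[i] - F[j])
            gamma = s / (1.0 + s) ** 2  # Eq. (14), symmetric in i and j
            w_i += (2.0 ** rel[i] - 2.0 ** rel[j]) / Z * gamma
        w.append(w_i)
    return w
```

Because gamma_{ij} is symmetric while the gain difference is antisymmetric, the weights always sum to zero: the boosting step redistributes score mass rather than inflating it.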
4 Experiments
To study the performance of NDCG Boost, we use the latest version (version 3.0) of the LETOR package provided by Microsoft Research Asia [22]. The LETOR package includes several benchmark data sets, baselines, and evaluation tools for research on learning to rank.
4.1 LETOR Data Sets
There are seven data sets provided in the LETOR package: OHSUMED, Top Distillation 2003 (TD2003), Top Distillation 2004 (TD2004), Homepage Finding 2003 (HP2003), Homepage Finding 2004 (HP2004), Named Page Finding 2003 (NP2003) and Named Page Finding 2004 (NP2004).² There are 106 queries in the OHSUMED data set, each with a number of associated documents. The relevancy of each document in the OHSUMED data set is scored 0 (irrelevant), 1 (possibly relevant) or 2 (definitely relevant). The total number of query-document relevancy judgments provided in OHSUMED
data set is 16140 and there are 45 features for each query-document pair. For TD2003, TD2004,
HP2003, HP2004 and NP2003, there are 50, 75, 75, 75 and 150 queries, respectively, with about
1000 retrieved documents for each query. This amounts to a total number of 49171, 74170, 74409,
73834 and 147606 query-document pairs for TD2003, TD2004, HP2003, HP2004 and NP2003
respectively. For these data sets, there are 63 features extracted for each query-document pair and a
binary relevancy judgment for each pair is provided.
For every data sets in LETOR, five partitions are provided to conduct the five-fold cross validation,
each includes training, test and validation sets. The results of a number of state-of-the-art learning
to rank algorithms are also provided in the LETOR package. Since these baselines include some
of the most well-known learning to rank algorithms from each category (pointwise, pairwise and
listwise), we use them to study the performance of NDCG Boost. Here is the list of these baselines
(the details can be found in the LETOR web page):
Regression: This is a simple linear regression which is a basic pointwise approach and can be
considered as a reference point.
RankSVM: RankSVM is a pairwise approach using Support Vector Machine [5].
FRank: FRank is a pairwise approach. It uses similar probability model to RankNet [7] for the
relative rank position of two documents, with a novel loss function called Fidelity loss
function [9]. Tsai et al. [9] showed that FRank performs much better than RankNet.
ListNet: ListNet is a listwise learning to rank algorithm [14]. It uses cross-entropy loss as its
listwise loss function.
AdaRank NDCG: This is a listwise boosting algorithm that incorporates NDCG in computing the
samples and combination weights [20].
² The experimental results for the last data set are not reported due to lack of space.
[Figure 1 graphic omitted: NDCG@n curves (n = 1, ..., 10) for Regression, FRank, ListNet, RankSVM, AdaRank, SVM-MAP, and NDCG Boost on panels (a) OHSUMED, (b) TD2003, (c) TD2004, (d) HP2003, (e) HP2004, (f) NP2003.]
Figure 1: The experimental results in terms of NDCG for the LETOR 3.0 data sets
SVM MAP: SVM MAP is a support vector machine with the MAP measure incorporated into its constraints. It is a listwise approach [2].
While the validation set is used to find the best parameters for the baselines in LETOR, it is not used for NDCG Boost in our experiments. For NDCG Boost, we set the maximum number of iterations to 100 and use decision stumps as the weak ranker.
Figure 1 provides the average results over five folds for the different learning to rank algorithms in terms of NDCG at each of the first 10 truncation levels on the LETOR data sets.³ Notice that the
performance of algorithms in comparison varies from one data set to another; however NDCG Boost
almost always performs the best. We would like to point out a few statistics. On the OHSUMED data set, NDCG Boost achieves 0.50 at NDCG@3, a 4% increase in performance over FRank, the second best algorithm. On the TD2003 data set, NDCG Boost achieves 0.375, a 10% increase over RankSVM (0.34), the second best method. On the HP2004 data set, NDCG Boost achieves 0.80 at NDCG@3, compared to 0.75 for SVM MAP, the second
best method, which indicates a 6% increase. Moreover, among all the methods in comparison,
NDCG Boost appears to be the most stable method across all the data sets. For example, FRank,
which performs well in OHSUMED and TD2004 data sets, yields a poor performance on TD2003,
HP2003 and HP 2004. Similarly, AdaRank NDCG achieves a decent performance on OHSUMED
data set, but fails to deliver accurate ranking results on TD2003, HP2003 and NP2003. In fact, both
AdaRank NDCG and FRank perform even worse than the simple Regression approach on TD2003,
which further indicates their instability. As another example, ListNet and RankSVM, which perform
well on TD2003 are not competitive to NDCG boost on OHSUMED and TD2004 data sets.
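As a concrete reference for how the numbers above are computed, the NDCG@n values plotted in Figure 1 follow the standard definition with gain 2^r - 1 and a logarithmic position discount. A minimal sketch in Python (not the authors' evaluation code):

```python
import math

def dcg_at_n(rels, n):
    """Discounted cumulative gain over the top-n results, with the
    (2^r - 1) gain and log2(position + 1) discount (position is 1-based)."""
    return sum((2 ** r - 1) / math.log2(i + 2) for i, r in enumerate(rels[:n]))

def ndcg_at_n(rels, n):
    """NDCG@n: DCG of the predicted ranking divided by the DCG of the
    ideal (relevance-sorted) ranking; 0 if the query has no relevant docs."""
    ideal = dcg_at_n(sorted(rels, reverse=True), n)
    return dcg_at_n(rels, n) / ideal if ideal > 0 else 0.0
```

For example, a perfectly sorted relevance list yields NDCG@n = 1, and any swap of a relevant document below a less relevant one strictly lowers the score.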
5
Conclusion
The listwise approach is a relatively new approach to learning to rank. It aims to use a query-level loss function to optimize a given IR measure. The difficulty in optimizing an IR measure lies in the sort function inherent in the measure. We address this challenge with a probabilistic framework that optimizes the expectation of NDCG over all possible permutations of documents. We present a relaxation strategy to effectively approximate the expectation of NDCG, and a bound optimization strategy for efficient optimization. Our experiments on benchmark data sets show that our method is superior to the state-of-the-art learning to rank algorithms in terms of performance and stability.
3 NDCG is commonly measured at the first few retrieved documents to emphasize their importance.
6
Acknowledgements
The work was supported in part by Yahoo! Labs 4 and the National Institute of Health (1R01GM079688-01). Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of Yahoo! or NIH.
A Proof of Proposition 2

$$\frac{1}{1+\exp(\hat{F}_i^k - \hat{F}_j^k)} = \frac{1}{1+\exp\!\left(F_i^k - F_j^k + \alpha(f_i^k - f_j^k)\right)}$$
$$\leq \frac{1}{1+\exp(F_i^k - F_j^k)}\left[\frac{1}{1+\exp(F_i^k - F_j^k)} + \frac{\exp(F_i^k - F_j^k)}{1+\exp(F_i^k - F_j^k)}\exp\!\left(\alpha(f_j^k - f_i^k)\right)\right]$$
$$= \frac{1}{1+\exp(F_i^k - F_j^k)}\left[1 - \frac{\exp(F_i^k - F_j^k)}{1+\exp(F_i^k - F_j^k)} + \frac{\exp(F_i^k - F_j^k)}{1+\exp(F_i^k - F_j^k)}\exp\!\left(\alpha(f_j^k - f_i^k)\right)\right]$$
$$= \frac{1}{1+\exp(F_i^k - F_j^k)} + \gamma_{i,j}^k\left(\exp\!\left(\alpha(f_j^k - f_i^k)\right) - 1\right)$$

where $\gamma_{i,j}^k = \exp(F_i^k - F_j^k)/\left(1+\exp(F_i^k - F_j^k)\right)^2$. The first step is a simple manipulation of the terms, and the second step is due to the convexity of the inverse function on $\mathbb{R}^+$.
B Proof of Theorem 1

In order to obtain the result of Theorem 1, we first plug Equation (13) into Equation (11). This leads to minimizing $\sum_{k=1}^{n}\sum_{i,j=1}^{m_k} \frac{2^{r_i^k}-1}{Z_k}\,\gamma_{i,j}^k \exp\!\left(\alpha(f_j^k - f_i^k)\right)$, the term related to $\alpha$. Since $f_i^k$ takes binary values 0 and 1, we have the following:
$$\sum_{k=1}^{n}\sum_{i,j=1}^{m_k} \frac{2^{r_i^k}-1}{Z_k}\,\gamma_{i,j}^k \exp\!\left(\alpha(f_j^k - f_i^k)\right) = \sum_{k=1}^{n}\sum_{i,j=1}^{m_k} \frac{2^{r_i^k}-1}{Z_k}\,\gamma_{i,j}^k \left[\exp(\alpha)\,I(f_j^k > f_i^k) + \exp(-\alpha)\,I(f_j^k < f_i^k)\right]$$
(terms with $f_j^k = f_i^k$ contribute a constant independent of $\alpha$). Setting the partial derivative of this term with respect to $\alpha$ to zero yields the theorem.
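The resulting stationarity condition has a closed form: the sum above splits into A*exp(alpha) + B*exp(-alpha), whose minimizer is alpha = (1/2) ln(B/A). A minimal numerical sketch (the weight matrix here is hypothetical, not taken from the paper's data):

```python
import math

def optimal_alpha(weights, f):
    """Minimize sum_{i,j} w[i][j] * exp(alpha * (f[j] - f[i])) over alpha,
    for binary f. With A = sum of weights where f[j] > f[i] and
    B = sum where f[j] < f[i], the minimizer is alpha = 0.5 * ln(B / A).
    Assumes both A and B are strictly positive."""
    n = len(f)
    A = sum(weights[i][j] for i in range(n) for j in range(n) if f[j] > f[i])
    B = sum(weights[i][j] for i in range(n) for j in range(n) if f[j] < f[i])
    return 0.5 * math.log(B / A)
```

For symmetric weights and one discordant pair in each direction the optimum is alpha = 0; skewing the weights toward pairs with f_j < f_i pushes alpha positive, as expected.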
C Proof of Theorem 2

First, we provide the following proposition to handle $\exp(\alpha(f_j^k - f_i^k))$.

Proposition 3. If $x, y \in [0, 1]$, we have
$$\exp(\alpha(x - y)) \leq \frac{\exp(3\alpha) - 1}{3}(x - y) + \frac{\exp(3\alpha) + \exp(-3\alpha) + 1}{3} \qquad (18)$$

Proof. Due to the convexity of the exp function, we have:
$$\exp(\alpha(x - y)) = \exp\!\left(3\alpha\,\frac{x - y + 1}{3} + 0\cdot\frac{1 - x + y}{3} + (-3\alpha)\,\frac{1}{3}\right)$$
$$\leq \frac{x - y + 1}{3}\exp(3\alpha) + \frac{1 - x + y}{3} + \frac{1}{3}\exp(-3\alpha)$$
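Since inequality (18) rests only on the convexity of exp (the three coefficients are nonnegative and sum to one whenever x, y are in [0, 1]), it can be verified numerically. A small sanity-check sketch:

```python
import math
import random

def prop3_bound(x, y, alpha):
    """Right-hand side of inequality (18)."""
    return ((math.exp(3 * alpha) - 1) / 3) * (x - y) \
         + (math.exp(3 * alpha) + math.exp(-3 * alpha) + 1) / 3

# Spot-check the inequality exp(alpha*(x - y)) <= prop3_bound(x, y, alpha)
# on random x, y in [0, 1] and a range of alpha values.
random.seed(0)
for _ in range(200):
    x, y = random.random(), random.random()
    alpha = random.uniform(0.01, 2.0)
    assert math.exp(alpha * (x - y)) <= prop3_bound(x, y, alpha) + 1e-9
```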
Using the result in the above proposition, we can bound the last term in Equation (13) as follows:
$$\gamma_{i,j}^k\left(\exp\!\left(\alpha(f_j^k - f_i^k)\right) - 1\right) \leq \gamma_{i,j}^k\left[\frac{\exp(3\alpha) - 1}{3}(f_j^k - f_i^k) + \frac{\exp(3\alpha) + \exp(-3\alpha) - 2}{3}\right] \qquad (19)$$
Using the results in Equations (19) and (13), we can bound $\bar{M}(Q, \hat{F})$ in Equation (11) as
$$\bar{M}(Q, \hat{F}) \leq \bar{M}(Q, F) + \eta(\alpha) + \frac{\exp(3\alpha) - 1}{3}\sum_{k=1}^{n}\sum_{i=1}^{m_k}\frac{2^{r_i^k} - 1}{Z_k}\sum_{j=1}^{m_k}\gamma_{i,j}^k\,(f_i^k - f_j^k)$$
$$= \bar{M}(Q, F) + \eta(\alpha) + \frac{\exp(3\alpha) - 1}{3}\sum_{k=1}^{n}\sum_{i=1}^{m_k} f_i^k \sum_{j=1}^{m_k}\frac{2^{r_i^k} - 2^{r_j^k}}{Z_k}\,\gamma_{i,j}^k$$
where $\eta(\alpha)$ collects the terms that depend only on $\alpha$, and the second step uses the symmetry $\gamma_{i,j}^k = \gamma_{j,i}^k$.

4 The first author has been supported as a part-time intern at Yahoo!.
References
[1] Kalervo Järvelin and Jaana Kekäläinen. IR evaluation methods for retrieving highly relevant documents. In SIGIR 2000: Proceedings of the 23rd annual international ACM SIGIR conference on Research and development in information retrieval, pages 41–48, 2000.
[2] Yisong Yue, Thomas Finley, Filip Radlinski, and Thorsten Joachims. A support vector method for optimizing average precision. In SIGIR 2007: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, pages 271–278, 2007.
[3] Ping Li, Christopher Burges, and Qiang Wu. McRank: Learning to rank using multiple classification and gradient boosting. In Neural Information Processing Systems 2007.
[4] Ramesh Nallapati. Discriminative models for information retrieval. In SIGIR '04: Proceedings of the 27th annual international ACM SIGIR conference on Research and development in information retrieval, pages 64–71, New York, NY, USA, 2004. ACM.
[5] Ralf Herbrich, Thore Graepel, and Klaus Obermayer. Support vector learning for ordinal regression. In Int. Conf. on Artificial Neural Networks 1999, pages 97–102, 1999.
[6] Yoav Freund, Raj Iyer, Robert E. Schapire, and Yoram Singer. An efficient boosting algorithm for combining preferences. Journal of Machine Learning Research, 4:933–969, 2003.
[7] Chris Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and Greg Hullender. Learning to rank using gradient descent. In International Conference on Machine Learning 2005, 2005.
[8] Yunbo Cao, Jun Xu, Tie-Yan Liu, Hang Li, Yalou Huang, and Hsiao-Wuen Hon. Adapting ranking SVM to document retrieval. In SIGIR 2006: Proceedings of the 29th annual international ACM SIGIR conference on Research and development in information retrieval, pages 186–193, 2006.
[9] Ming-Feng Tsai, Tie-Yan Liu, Tao Qin, Hsin-Hsi Chen, and Wei-Ying Ma. FRank: A ranking method with fidelity loss. In SIGIR 2007: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, 2007.
[10] Rong Jin, Hamed Valizadegan, and Hang Li. Ranking refinement and its application to information retrieval. In WWW '08: Proceedings of the 17th international conference on World Wide Web.
[11] Steven C. H. Hoi and Rong Jin. Semi-supervised ensemble ranking. In Proceedings of the Association for the Advancement of Artificial Intelligence (AAAI 2008).
[12] Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, Xu-Dong Zhang, and Hang Li. Learning to search web pages with query-level loss functions. Technical report, 2006.
[13] Christopher J. C. Burges, Robert Ragno, and Quoc V. Le. Learning to rank with nonsmooth cost functions. In Neural Information Processing Systems 2006, 2006.
[14] Zhe Cao and Tie-Yan Liu. Learning to rank: From pairwise approach to listwise approach. In International Conference on Machine Learning 2007, pages 129–136, 2007.
[15] Fen Xia, Tie-Yan Liu, Jue Wang, Wensheng Zhang, and Hang Li. Listwise approach to learning to rank: theory and algorithm. In Int. Conf. on Machine Learning 2008, pages 1192–1199, 2008.
[16] Michael Taylor, John Guiver, Stephen Robertson, and Tom Minka. SoftRank: optimizing non-smooth rank metrics.
[17] Maksims N. Volkovs and Richard S. Zemel. BoltzRank: learning to maximize expected ranking gain. In ICML '09: Proceedings of the 26th Annual International Conference on Machine Learning, pages 1089–1096, New York, NY, USA, 2009. ACM.
[18] Ruslan Salakhutdinov, Sam Roweis, and Zoubin Ghahramani. On the convergence of bound optimization algorithms. In Proc. 19th Conf. on Uncertainty in Artificial Intelligence (UAI 03).
[19] Jen-Yuan Yeh, Yung-Yi Lin, Hao-Ren Ke, and Wei-Pang Yang. Learning to rank for information retrieval using genetic programming. In SIGIR 2007 workshop: Learning to Rank for Information Retrieval.
[20] Jun Xu and Hang Li. AdaRank: a boosting algorithm for information retrieval. In SIGIR '07: Proceedings of the 30th annual international ACM SIGIR conference on Research and development in information retrieval, pages 391–398, 2007.
[21] Zhengya Sun, Tao Qin, Qing Tao, and Jue Wang. Robust sparse rank learning for non-smooth ranking measures. In SIGIR '09: Proceedings of the 32nd international ACM SIGIR conference on Research and development in information retrieval, pages 259–266, New York, NY, USA, 2009. ACM.
[22] Tie-Yan Liu, Tao Qin, Jun Xu, Wenying Xiong, and Hang Li. LETOR: Benchmark dataset for research on learning to rank for information retrieval.
Who's Doing What: Joint Modeling of Names and Verbs for Simultaneous Face and Pose Annotation
Luo Jie
Idiap and EPF Lausanne
[email protected]
Barbara Caputo
Idiap Research Institute
[email protected]
Vittorio Ferrari
ETH Zurich
[email protected]
Abstract
Given a corpus of news items consisting of images accompanied by text captions, we want to find out "who's doing what", i.e. associate names and action verbs in the captions to the face and body pose of the persons in the images. We present a joint model for simultaneously solving the image-caption correspondences and learning visual appearance models for the face and pose classes occurring in the corpus. These models can then be used to recognize people and actions in novel images without captions. We demonstrate experimentally that our joint "face and pose" model solves the correspondence problem better than earlier models covering only the face, and that it can perform recognition of new uncaptioned images.
1
Introduction
A huge amount of images with accompanying text captions are available on the Internet. Websites selling various items such as houses and clothing provide photographs of their products along with concise descriptions. Online newspapers 1 have pictures illustrating events and comment on them in the caption. These news websites are very popular because people are interested in other people, especially if they are famous (figure 1). Exploiting the associations between images and text hidden in this wealth of data can lead to a virtually infinite source of annotations from which to learn visual models without explicit manual intervention.

The learned models could then be used in a variety of Computer Vision applications, including face recognition, image search engines, and to annotate new images for which no caption is available. Moreover, recovering image-text associations is useful for auto-annotating a closed corpus of data, e.g. for users of news websites to see "who's in the picture" [6], or to search for images where a certain person does a certain thing.

Previous work on news items has focused on associating names in the captions to faces in the images [5, 6, 16, 21]. This is difficult due to the correspondence ambiguity problem: multiple persons appear in the image and the caption. Moreover, persons in the image are not always mentioned in the caption, and not all names in the caption appear in the image. These techniques tackle the correspondence problem by exploiting the fact that different images show different combinations of persons. As a result, these methods work well for frequently occurring persons (typical for famous people) appearing in datasets with thousands of news items.

In this paper we propose to go beyond the above works, by modeling both names and action verbs jointly. These correspond to faces and body poses in the images (figure 3). The connections between the subject (name) and verb in a caption can be found by well established language analysis techniques [1, 8]. Essentially, by considering the subject-verb language construct, we generalize the "who's in the picture" line of work to "who's doing what". We present a new generative model where the observed variables are names and verbs in the caption as well as detected persons in the image. The image-caption correspondences are carried by latent variables, while the visual appearances of the face and pose classes corresponding to different names and verbs are model parameters. During learning, we simultaneously solve for the correspondence and learn the appearance models.

1 www.daylife.com, news.yahoo.com, news.google.com
(a) Four sets ... Roger Federer prepares to hit a backhand in a quarter-final match with Andy Roddick at the US Open.
(b) US Democratic presidential candidate Senator Barack Obama waves to supporters together with his wife Michelle Obama standing beside him at his North Carolina and Indiana primary election night rally in Raleigh.

Figure 1: Examples of image-caption pairs in our dataset. The face and upper body of the persons in the image are marked by bounding-boxes. We stress that a caption might contain names and/or verbs not visible in the image, and vice versa.
In our joint model, the correspondence ambiguity is reduced because the face and pose information help each other. For example, in figure 1b, knowing what "waves" means would reveal which of the two imaged persons is Obama. The other way around, knowing who is Obama would deliver a visual example for the "waving" pose.

We show experimentally that: (i) our joint "face and pose" model solves the correspondence problem better than simpler models covering either face or pose alone; (ii) the learned model can be used to effectively annotate new images with or without captions; (iii) our model with face alone performs better than existing face-only methods based on Gaussian mixture appearance models.

Related works. This paper is most closely related to works on associating names and faces, which we discussed above. There also exist works on associating nouns to image regions [2, 3, 10], starting from images annotated with a list of nouns indicating the objects they contain (typical datasets contain natural scenes and objects such as "water" and "tiger"). A recent work in this line is that of Gupta and Davis [17], who model prepositions in addition to nouns (e.g. "bear in water", "car on street"). To the best of our knowledge, ours is the first work on jointly modeling names and verbs.
2
Generative model for faces and body poses
The news item corpus used to train our face and pose model consists of still images of person(s) performing some action(s). Each image is annotated with a caption describing "who's doing what" in the image (figure 1). Some names from the caption might not appear in the image, and vice versa, some imaged persons might not be mentioned in the caption. The basic units in our model are persons in the image, consisting of their face and upper body. Our system automatically detects them by bounding-boxes in the image using a face detector [23] and an upper body detector [14]. In the rest of the paper, we say "person" to indicate a detected face and the upper body associated with it (including false positive detections). A face and an upper-body are considered to belong to the same person if the face lies near the center of the upper body bounding-box. For each person, we obtain a pose estimate using [11] (figure 3(right)). In addition to these image features, we use a language parser [1, 8] to extract a set of name-verb pairs from each caption. Our goals are to: (i) associate the persons in the images to the name-verb pairs in the captions, and (ii) learn visual appearance models corresponding to names and verbs. These can then be used for recognition on new images with or without captions. Learning in our model can be seen as a constrained clustering problem [4, 24, 25].
2.1 Generative model
We start by describing how our generative model explains the image-caption data (figure 2). The notation is summarized in Table I. Suppose we have a collection of documents D = {D_1, ..., D_M}, with each document D_i consisting of an image I^i and its caption C^i. These captions implicitly provide the labels of the person(s)' name(s) and pose(s) in the corresponding images. For each caption C^i, we consider only the name-verb pairs n^i returned by a language parser [1, 8] and ignore other words. We make the same assumption as for the name-face problem [5, 6, 16, 21]: the labels can only come from the name-verb pairs in the captions or null (for persons not mentioned in the caption). Based on this, we generate the set of all possible assignments A^i from the n^i in
Table I: The mathematical notation used in the paper

- $D = \{D^i\}_{i=1}^{M} = \{I^i, C^i\}_{i=1}^{M}$: the document collection (image-caption pairs); $M$: number of documents in $D$
- $I^{i,p}$: $p$th person in image $I^i$; $I^{i,p} = (I^{i,p}_{face}, I^{i,p}_{pose})$
- $P^i$: number of detected persons in image $I^i$; $W^i$: number of name-verb pairs in caption $C^i$
- $Y$: latent variables encoding the true assignments; $Y^i = (y^{i,1}, \ldots, y^{i,P^i})$, where $y^{i,p}$ is the assignment of the $p$th person in the $i$th image
- $A^i = \{a^i_1, \ldots, a^i_{L^i}\}$: set of possible assignments for document $i$; $L^i$: number of possible assignments for document $D^i$
- $a^i_l = \{a^{i,1}_l, \ldots, a^{i,P^i}_l\}$, where $a^{i,p}_l$ is the label for the $p$th person in the $l$th assignment
- $\theta = (\theta_{name}, \theta_{verb})$: appearance models for the face and pose classes
- $U$ / $V$: number of different names / verbs
- $\theta_{name} = (\theta^1_{name}, \ldots, \theta^U_{name}, \lambda_{name})$; $\theta_{verb} = (\theta^1_{verb}, \ldots, \theta^V_{verb}, \lambda_{verb})$
- $\theta^k$: set of class representative vectors for class $k$; $\theta^k_r$: a representative vector for class $k$
- $\theta^u_{name} = \{\theta^{u,1}_{face}, \ldots, \theta^{u,R_u}_{face}\}$; $\theta^v_{verb} = \{\theta^{v,1}_{pose}, \ldots, \theta^{v,R_v}_{pose}\}$
[Figure 2: Graphical plate representation of the generative model (nodes I, C, W, Y, P, A; plates of sizes L and M).]
C^i (see section 2.4 for details). Hence, we replace the captions by the sets of possible assignments A = {A^1, ..., A^M}. Let Y = {Y^1, ..., Y^M} be latent variables encoding the true assignments (i.e. name/verb labels for the faces/poses), and $Y^i = (y^{i,1}, \ldots, y^{i,P^i})$ be the assignment for the $P^i$ persons in the $i$th image. Each $y^{i,p} = (y^{i,p}_{face}, y^{i,p}_{pose})$ is a pair of indices defining the assignment of a person's face to a name and pose to a verb. These take on values from the set of name indices $\{1, \ldots, U, \text{null}\}$ and verb indices $\{1, \ldots, V, \text{null}\}$. $U$/$V$ is the number of different names/verbs over all the captions, and null represents unknown names/verbs and false positive person detections.

Document collection likelihood. Assuming independence between documents, the likelihood of the whole document collection is
$$P(I, Y, A|\theta) = \prod_{i=1}^{M} P(I^i, Y^i, A^i|\theta) = \prod_{i=1}^{M} P(I^i|Y^i, A^i, \theta)\,P(Y^i|A^i, \theta)\,P(A^i|\theta) \qquad (1)$$
where $\theta$ are the model parameters explaining the visual appearance of the persons' faces and poses in the images. Therefore, equation (1) can be written as $\prod_i P(I^i|Y^i, \theta)\,P(Y^i|A^i)\,P(A^i)$. The goal of learning is to find the parameters $\theta$ and the labels $Y$ that maximize the likelihood. Below we focus on $P(I^i|Y^i, \theta)$, and then define $P(Y^i|A^i)$ and $P(A^i)$ in section 2.4.

Image likelihood. The basic image units in our model are persons. Assuming independence between multiple persons in an image, the likelihood of an image can be expressed as the product over the likelihoods of each person:
$$P(I^i|Y^i, \theta) = \prod_{I^{i,p} \in I^i} P(I^{i,p}|y^{i,p}, \theta) \qquad (2)$$
where $y^{i,p}$ defines the name-verb indices of the $p$th person in the image. A person $I^{i,p} = (I^{i,p}_{face}, I^{i,p}_{pose})$ is represented by the appearance of her face $I^{i,p}_{face}$ and pose $I^{i,p}_{pose}$. Assuming independence between the face and pose appearance of a person, the conditional probability for the appearance of the $p$th person in image $I^i$ given the latent variable $y^{i,p}$ is:
$$P(I^{i,p}|y^{i,p}, \theta) = P(I^{i,p}_{face}|y^{i,p}_{face}, \theta_{name})\,P(I^{i,p}_{pose}|y^{i,p}_{pose}, \theta_{verb}) \qquad (3)$$
where $\theta = (\theta_{name}, \theta_{verb})$ are the appearance models associated with the various names and verbs. Each $\theta^v_{verb}$ in $\theta_{verb} = (\theta^1_{verb}, \ldots, \theta^V_{verb}, \lambda_{verb})$ is a set of representative vectors modeling the variability within the pose class corresponding to a verb $v$. For example, the verb "serve" in tennis could correspond to different poses such as holding the ball on the racket, tossing the ball and hitting it. Analogously, $\theta^u_{name}$ models the variability within the face class corresponding to a name $u$.
2.2 Face and pose descriptors and similarity measures
After detecting faces in the images with the multi-view algorithm [23], we use [12] to detect nine distinctive feature points within the face bounding box (figure 3(left)). Each feature is represented by a SIFT descriptor [18], and their concatenation gives the overall descriptor vector for the face. We use the cosine as a naturally normalized similarity measure between two face descriptors: $\mathrm{sim}_{face}(a, b) = \frac{a^T b}{\|a\|\,\|b\|}$. The distance between two faces is $\mathrm{dist}_{face}(a, b) = 1 - \mathrm{sim}_{face}(a, b)$.
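The face similarity and distance above are plain cosine operations on descriptor vectors; a minimal sketch (assuming descriptors are given as plain numeric vectors, not tied to the SIFT implementation used in the paper):

```python
import math

def cosine_similarity(a, b):
    """sim_face(a, b) = (a . b) / (||a|| ||b||)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def face_distance(a, b):
    """dist_face(a, b) = 1 - sim_face(a, b); 0 for parallel descriptors,
    1 for orthogonal ones."""
    return 1.0 - cosine_similarity(a, b)
```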
We use [14] to detect upper-bodies and [11] to estimate their pose. A pose E consists of a distribution over the position (x, y and orientation) for each of 6 body parts (head, torso, upper/lower left/right arms). The pose estimator factors out variations due to clothing and background, so E conveys purely the spatial arrangement of body parts. We derive three relatively low-dimensional pose descriptors from E, as proposed in [13]. These descriptors represent pose in different ways, such as the relative position between pairs of body parts, and part-specific soft-segmentations of the image (i.e. the probability of pixels belonging to a part). We refer to [13, 11] for more details and the similarity measure associated with each descriptor. We normalize the range of each similarity to [0, 1], and denote their average as sim_pose(a, b). The final distance between two poses a, b used in the rest of this paper is dist_pose(a, b) = 1 - sim_pose(a, b).

Figure 3: Example images with facial features and pose estimates superimposed. Left: Facial features (left and right corners of each eye, two nostrils, tip of the nose, and the left and right corners of the mouth) located using [12] in the detected face bounding-box. Right: Example estimated poses corresponding to the verbs "hit backhand", "shake hands" and "hold". Red indicates torso, blue upper arms, green lower arms and head. Brighter pixels are more likely to belong to a part. Color planes are added up, so that yellow indicates overlap between lower-arm and torso, purple between upper-arm and torso, and so on (best viewed in color).
2.3 Appearance model
The appearance model for a pose class (corresponding to a verb) is defined as:
$$P(I^{i,p}_{pose}|y^{i,p}_{pose}, \theta_{verb}) = \sum_{k \in \{1,\ldots,V,\text{null}\}} \delta(y^{i,p}_{pose}, k) \cdot P(I^{i,p}_{pose}|\theta^k_{verb}) \qquad (4)$$
where $\theta^k_{verb}$ are the parameters of the $k$th pose class (or $\lambda_{verb}$ if $k = \text{null}$). The indicator function $\delta(y^{i,p}_{pose}, k) = 1$ if $y^{i,p}_{pose} = k$ and $\delta(y^{i,p}_{pose}, k) = 0$ otherwise. We only explain here the model for a pose class, as the face model is derived analogously.

How to model the conditional probability $P(I^{i,p}_{pose}|\theta^k_{verb})$ is a key ingredient for the success of our approach. Some previous works on names-faces used a Gaussian mixture model [6, 21]: each name is associated with a Gaussian density, plus an additional Gaussian to model the null class. Using functions of the exponential family like a Gaussian simplifies computations. However, a Gaussian may restrict the representative power of the appearance model. Problems such as face and pose recognition are particularly challenging because they involve complex non-Gaussian multimodal distributions. Figure 3(right) shows a few examples of the variance within the pose class for a verb. Moreover, we cannot easily employ existing pose similarity measures [13]. Therefore, we represent the conditional probability using an exemplar-based likelihood function:
$$P(I^{i,p}_{pose}|\theta^k_{verb}) = \begin{cases} \frac{1}{Z_{\theta_{verb}}}\, e^{-d_{pose}(I^{i,p}_{pose},\, \theta^k_{verb})} & \text{if } k \in \{\text{known verbs}\} \\[4pt] \frac{1}{Z_{\theta_{verb}}}\, e^{-\lambda_{verb}} & \text{if } k = \text{null} \end{cases} \qquad (5)$$
where $Z_{\theta_{verb}}$ is the normalizer and $d_{pose}$ is the distance between the pose descriptor $I^{i,p}_{pose}$ and its closest class representative vector $\theta^k_r \in \theta^k_{verb} = \{\theta^{k,1}_{pose}, \ldots, \theta^{k,R^k}_{pose}\}$, where $R^k$ is the number of representative poses for verb $k$. The likelihood depends on the model parameters $\theta^k_{verb}$ and the distance function $d_{pose}$. The scalar $\lambda_{verb}$ represents the null model, thus poses assigned to null have likelihood $\frac{1}{Z_{\theta_{verb}}} e^{-\lambda_{verb}}$. It is important to have this null model, as some detected persons might not correspond to any verb in the caption or they might be false detections. By generalizing the similarity measure $\mathrm{sim}_{pose}(a, b)$ as a kernel product $K(a, b) = \phi(a) \cdot \phi(b)$, the distance from a vector $a$ to the sample center vector $\theta^k_r$ can be written similarly as in the weighted kernel k-means method [9]:
$$\left\|\phi(a) - \frac{\sum_{b \in \pi^k_r} w(b)\phi(b)}{\sum_{b \in \pi^k_r} w(b)}\right\|^2 = K(a,a) - \frac{2\sum_{b \in \pi^k_r} w(b)K(a,b)}{\sum_{b \in \pi^k_r} w(b)} + \frac{\sum_{b,d \in \pi^k_r} w(b)w(d)K(b,d)}{\left(\sum_{b \in \pi^k_r} w(b)\right)^2} \qquad (6)$$
The center vector $\theta^k_r$ is defined as $\sum_{b \in \pi^k_r} w(b)\phi(b) \,/\, \sum_{b \in \pi^k_r} w(b)$, where $\pi^k_r$ is the cluster of vectors assigned to $\theta^k_r$, and $w(b)$ is the weight for each point $b$, representing the likelihood that $b$ belongs to the class of $\theta^k_r$ (as in equation (11)). This formulation can be considered as a modified version of the k-means [19] clustering algorithm. The number of centers $R^k$ can vary for different verbs, depending on the distribution of the data and the number of samples. As we are interested only in computing the distance between $\theta^k_r$ and each data point, and not in the explicit value of $\theta^k_r$, the only term that needs to be computed in equation (6) is the second (the third term is constant for each assigned $\theta^k_r$).
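Equation (6) lets one compute the distance to a weighted cluster center purely through kernel evaluations, without ever forming the center explicitly. A minimal sketch (the linear kernel here is an illustrative choice, not the paper's pose kernel):

```python
def linear_kernel(u, v):
    """Illustrative kernel: plain dot product, so phi is the identity map."""
    return sum(x * y for x, y in zip(u, v))

def kernel_distance_sq(a, cluster, weights, kernel):
    """Squared distance ||phi(a) - mu||^2 to the weighted cluster center
    mu = sum_b w(b) phi(b) / sum_b w(b), via the three terms of equation (6):
    K(a,a) - 2 sum_b w(b) K(a,b) / W + sum_{b,d} w(b) w(d) K(b,d) / W^2."""
    W = float(sum(weights))
    first = kernel(a, a)
    second = 2.0 * sum(w * kernel(a, b) for w, b in zip(weights, cluster)) / W
    third = sum(wb * wd * kernel(b, d)
                for wb, b in zip(weights, cluster)
                for wd, d in zip(weights, cluster)) / (W * W)
    return first - second + third
```

With the linear kernel this reduces to the ordinary squared Euclidean distance to the weighted mean, which makes the identity easy to check on small examples.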
2.4 Name-verb assignments
The name-verb pairs n^i for a document are observed in its caption C^i. From them we derive the set of all possible assignments A^i = {a^i_1, . . . , a^i_{L_i}} of name-verb pairs to persons in the image. The number of possible assignments L_i depends both on the number of persons and on the number of name-verb pairs. As opposed to the standard matching problem, here the assignments have to take null into account. Moreover, we have the same constraints as in the name-face problem [6]: a person can be assigned to at most one name-verb pair, and vice versa. Therefore, given a document with P^i persons and W^i name-verb pairs, the number of possible assignments is

L_i = Σ_{j=0}^{min(P^i, W^i)} C(P^i, j) · C(W^i, j) · j!,

where j is the number of persons assigned to a name-verb pair instead of null. Even with the above constraints, this number grows rapidly with P^i and W^i. However, since different assignments share many common sub-assignments, the number of unique likelihood computations is much lower, namely P^i · (W^i + 1). Thus, we can evaluate all possible assignments for an image efficiently.
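The count L_i above can be sketched directly (the function name is ours):

```python
from math import comb, factorial

def num_assignments(P, W):
    """Number of valid person <-> name-verb assignments for a document with
    P detected persons and W name-verb pairs, where any person or pair may
    also be matched to null, and matches are otherwise one-to-one."""
    return sum(comb(P, j) * comb(W, j) * factorial(j)
               for j in range(min(P, W) + 1))
```

For P = W = 2 this gives 7: the all-null assignment, four single matches, and two full matchings.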
Although certain assignments are unlikely to happen (e.g. all persons assigned to null), here we use a uniform prior over all assignments, i.e. P(a^i_l) = 1/L_i. Since the true assignment Y^i can only come from A^i, we define the conditional probability over the latent variables Y^i as:

P(Y^i | A^i) = 1/L_i  if Y^i ∈ A^i,  and 0 otherwise.    (7)
The latent assignments Y^i play the role of the annotations necessary for learning appearance models.
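For small documents the assignment set A^i can be enumerated explicitly. A minimal sketch, with our own naming and with name-verb pairs indexed 0..W−1 and persons 0..P−1:

```python
from itertools import combinations, permutations

def enumerate_assignments(P, W):
    """Yield every valid assignment as a tuple of length P whose p-th entry
    is the index of the name-verb pair assigned to person p, or None for
    null.  Each pair is used at most once (one-to-one constraint)."""
    for j in range(min(P, W) + 1):            # j persons matched to a pair
        for persons in combinations(range(P), j):
            for pairs in permutations(range(W), j):
                a = [None] * P
                for p, q in zip(persons, pairs):
                    a[p] = q
                yield tuple(a)
```

Under the uniform prior of equation (7), each enumerated tuple receives probability 1/L_i.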
3 Learning the model
The task of learning is to find the model parameters θ and the assignments Y which maximize the likelihood of the complete dataset {I, Y, A}. The joint probability of {I, Y, A} given θ from equation (1) can be written as

P(I, Y, A | θ) = Π_{i=1}^M [ P(Y^i | A^i) P(A^i) Π_{p=1}^{P^i} P(I_face^{i,p} | y_face^{i,p}, θ_name) P(I_pose^{i,p} | y_pose^{i,p}, θ_verb) ]    (8)
Maximizing the log of this joint likelihood is equivalent to minimizing the following clustering objective function over the latent variables Y and the parameters θ:

J = Σ_{i,p : y_face^{i,p} ≠ null} d_face(I_face^{i,p}, µ_{y_face^{i,p}}) + Σ_{i,p : y_face^{i,p} = null} λ_name
  + Σ_{i,p : y_pose^{i,p} ≠ null} d_pose(I_pose^{i,p}, µ_{y_pose^{i,p}}) + Σ_{i,p : y_pose^{i,p} = null} λ_verb
  − Σ_i (log P(Y^i | A^i) + log P(A^i)) + Σ_{i,p} (log Z_{λ_name} + log Z_{λ_verb})    (9)
Thus, to minimize J, each latent variable Y^i must belong to the set of possible assignments A^i. If Y were known, the cluster centers µ ∈ θ_name, µ ∈ θ_verb which minimize J could be determined uniquely (given also the number of class centers R). However, it is difficult to set R before seeing the data. In our implementation, we determine the centers approximately using the data points and their K nearest neighbors. Since estimating the normalization constants Z_{λ_name} and Z_{λ_verb} is computationally expensive, we make an approximation by treating them as constant in the clustering process (i.e. drop their terms from J). In our experiments, this did not significantly affect the results, as also noted in several other works (e.g. [4]).
Since the assignments Y are unknown, we use a generalized EM procedure [7, 22] to simultaneously learn the parameters θ and solve the correspondence problem (i.e. find Y):
Figure 4: Left. Comparison of different models under different setups: using the manually annotated name-verb pairs (ground-truth); using the Named Entity detector and language parser (automated); and using the more difficult subset (multiple). The accuracies for name (Name Ass.) and verb (Verb Ass.) assignments are reported separately. GMM Face refers to the face-only model using GMM appearance models, as in [6]. Right. Comparison of precision and recall for 10 individuals using the stripped-down face-only model and our face+pose model. The reported results are based on automatically parsed captions for learning.
Input. Data D; hyper-parameters λ_name, λ_verb, K

1. Initialization. We start by computing the distance matrix between faces/poses from images sharing some name/verb in the caption. Next we initialize θ using all documents in D. For each different name/verb, we select all captions containing only this name/verb. If the corresponding images contain only one person, their faces/poses are used to initialize the center vectors µ^k_name / µ^k_verb. The center vectors are found approximately using each data point and their K nearest neighbors of the same name/verb class. If a name/verb only appears in captions with multiple names/verbs, or if the corresponding images always contain multiple persons (e.g. verbs like "shake hands"), we randomly assign the name/verb to any face/pose in each image. The center vectors are then initialized using these data points. The initial weights w for all data points are set to one (equation 6). This step yields an initial estimate of the model parameters θ. We refine the parameters and assignments by repeating the following EM steps until convergence.
2. E-step. Compute the labels Y using the parameters θ_old from the previous iteration:

Y = arg max_Y P(Y | I, A, θ_old) ∝ arg max_Y P(I | Y, θ_old) P(Y | A)    (10)
3. M-step. Given the labels Y, update θ so as to minimize J (i.e. update the cluster centers µ). Our algorithm assigns each point to exactly one cluster. Each point I^{i,p} in a cluster is given a weight

w^{i,p}_{Y^i} = P(Y^i | I^{i,p}, A^i, θ) / Σ_{Y^j ∈ A^i} P(Y^j | I^{i,p}, A^i, θ)    (11)

which represents the likelihood that I_face^{i,p} and I_pose^{i,p} belong to the name and verb defined by Y^i. Therefore, faces and poses from images with many detections have lower weights and contribute less to the cluster centers, reflecting the larger uncertainty in their assignments.
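A minimal hard-EM sketch of these two steps, simplified to faces only, 1-D descriptors, squared-distance likelihoods, a fixed null cost standing in for λ_name, and without the soft weights of equation (11) (all weights one, as in the initialization). All names and simplifications are ours:

```python
def em_assign(docs, centers, lam=1.0, n_iter=10):
    """Hard-EM sketch for the correspondence problem.

    docs    -- list of (features, candidates); features[p] is the descriptor
               of person p, candidates is the set A^i of allowed assignments,
               each a tuple mapping person index -> class label or None (null)
    centers -- dict label -> initial 1-D center (initialization assumed given)
    lam     -- fixed cost of assigning a person to null
    """
    best = []
    for _ in range(n_iter):
        # E-step: pick the assignment minimizing the clustering objective J
        best = []
        for feats, cands in docs:
            def cost(a):
                return sum(lam if a[p] is None
                           else (feats[p] - centers[a[p]]) ** 2
                           for p in range(len(feats)))
            best.append(min(cands, key=cost))
        # M-step: recompute each center as the mean of its assigned points
        sums, cnts = {}, {}
        for (feats, _), a in zip(docs, best):
            for p, lab in enumerate(a):
                if lab is not None:
                    sums[lab] = sums.get(lab, 0.0) + feats[p]
                    cnts[lab] = cnts.get(lab, 0) + 1
        for lab in sums:
            centers[lab] = sums[lab] / cnts[lab]
    return best, centers
```

With two one-person documents whose candidate sets each contain a null option and a match to class "A", the loop assigns both to "A" and moves the center to the mean of their descriptors.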
4 Experiments and conclusions
Datasets. There are datasets of news image-caption pairs such as those in [6, 16]. Unfortunately, these datasets are not suitable in our scenario for two reasons. First, faces often occupy most of the image, so the body pose is not visible. Second, the captions frequently describe the event at an abstract level, rather than using a verb to describe the actions of the persons in the image (compare figure 1 to the figures in [6, 16]). Therefore, we collected a new dataset² by querying Google-images using a combination of names and verbs (from sports and social interactions), corresponding to distinct upper body poses. An example query is "Barack Obama" + "shake hands". Our dataset contains 1610 images, each with at least one person whose face occupies less than 5% of the image, and with the accompanying snippet of text returned by Google-images. External annotators were asked to

² We released this dataset online at http://www.vision.ee.ethz.ch/~ferrari
Figure 5: Examples of when modeling pose improves the results at learning time. Below the images we report
the name-verb pairs (C) from the caption as returned by the automatic parser and compare the association
recovered by a model using only faces (F) and using both faces and poses (FP). The assigned names (left to
right) correspond to the detected face bounding-boxes (left to right).
Figure 6: Recognition results on images without text captions (using models learned from automatically parsed
captions). Left compares face annotation using different models and scenarios (see main text); Right shows a
few examples of the labels predicted by the joint face and pose model (without using captions).
extend these snippets into realistic captions when necessary, with varied long sentences, mentioning the action of the persons in the image as well as names/verbs not appearing in the image (as "noise", figure 1). Moreover, they also annotated the ground-truth name-verb pairs mentioned in the captions as well as the location of the target persons in the images, enabling quantitative evaluation of the results. In total the ground-truth consists of 2627 name-verb pairs. In our experiments we only consider
names and verbs occurring in at least 3 captions for a name, and 20 captions for a verb. This leaves
69 names corresponding to 69 face classes and 20 verbs corresponding to 20 pose classes.
We used an open-source Named Entity recognizer [1] to detect names in the captions and a language parser [8] to find name-verb pairs (or name-null pairs if the language parser could not find a verb associated with a name). Using simple stemming rules, the same verb under different tenses and possessive adjectives is merged; for instance "shake their hands", "is shaking hands" and "shakes hands" all correspond to the action verb "shake hands". In total, the algorithm achieves precision 85.5% and recall 68.8% on our dataset over the ground-truth name-verb pairs. By discarding infrequent names and verbs as explained above, we retain 85 names and 20 verbs to be learned by our model (recall that some of these are false positives rather than actual person names and verbs).
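A rough illustration of such normalization; the paper does not give its exact rules, so the auxiliary-word and lemma tables below are illustrative guesses, not the authors' implementation:

```python
AUX = {"is", "are", "was", "were", "be", "been", "being"}
POSS = {"his", "her", "their", "its", "our", "my"}
# tiny illustrative lemma table (a real system would use a proper stemmer)
LEMMA = {"shaking": "shake", "shakes": "shake",
         "holding": "hold", "holds": "hold",
         "waving": "wave", "waves": "wave"}

def normalize_verb_phrase(phrase):
    """Merge tense/possessive variants of a verb phrase into one key,
    e.g. 'is shaking hands' -> 'shake hands'."""
    words = [w for w in phrase.lower().split()
             if w not in AUX and w not in POSS]
    words[0] = LEMMA.get(words[0], words[0])   # lemmatize the head verb
    return " ".join(words)
```

All three surface forms from the text then map to the same action verb.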
Results for learning The learning algorithm takes about five iterations to converge. We compare
experimentally our face and pose model to stripped-down versions using only face or pose information. For comparison, we also implement the constrained mixture model [6] described in section 2.3.
Although [6] originally also incorporates a language model of the caption, we discard it here
so that both methods use the same amount of information. We run the experiments in three setups:
(a) using the ground-truth name-verb annotations from the captions; (b) using the name-verb pairs
automatically extracted by the language parser; (c) similar as (b) but only on documents with multiple persons in the image or multiple name-verb pairs in the caption. These setups are progressively
more difficult, as (b) has more noisy name-verb pairs, and (c) has no documents with a single name
and person, where our initialization is very reliable.
Figure 4(left) compares the accuracy achieved by different models on these setups. The accuracy is
defined as the percentage of correct assignments over all detected persons, including assignments to
null, as in [5, 16]. As the figure shows, our joint "face and pose" model outperforms both models using face or pose alone in all setups. Both the annotation of faces and that of poses improve, demonstrating that they help each other when successfully integrated by our model. This is the main point of the paper. Figure 4(right) shows improvements in precision and recall over models using faces or poses alone. As a second point, our model with face alone also outperforms the baseline approach using Gaussian mixture appearance models (e.g. used in [6]). Figure 5 shows a few examples of how including pose improves the learning results and solves some of the correspondence ambiguities.
Improvements happen mainly in three situations: (a) when there are multiple names in a caption, as not all names in the captions are associated with action verbs (figure 1(a) and figure 5(top)); (b) when there are multiple persons in an image, because the pose disambiguates the assignment (figure 1(b) and figure 5(bottom)); and (c) when there are false detections, rare faces, or faces at viewpoints other than frontal (i.e. where face recognition works less well, e.g. figure 5(middle)).
Results for recognition. Once the model is learned, we can use it to recognize "who's doing what"
in novel images with or without captions. We collected a new set of 100 images and captions from
Google-images using five keywords based on names and verbs from the training dataset. We evaluate
the learned model in two scenarios: (a) the test data consists of images and captions. Here we run
inference on the model, recovering the best assignment Y from the set of possible assignments
generated from the captions; (b) the same test images are used but the captions are not given, so
the problem degenerates to a standard face and pose recognition task. Figure 6(left) reports face
annotation accuracy for three methods using captions (scenario (a)): (?) a baseline which randomly
assigns a name (or null) from the caption to each face in the image; (x) our face and pose model; ()
our model using only faces. The figure also shows results for scenario (b), where our full model tries
to recognize faces (+) and poses (?) in the test images without captions. On scenario (a) all models
outperform the baseline, and our joint face and pose model improves significantly on the face-only
model for all keywords, especially when there are multiple persons in the image.
Conclusions. We present an approach for the joint modeling of faces and poses in images and
their association to names and action verbs in accompanying text captions. Experimental results
show that our joint model performs better than face-only models both in solving the image-caption
correspondence problem on the training data, and in annotating new images. Future work aims at
incorporating an effective web crawler and html/language parsing tools to harvest image-caption
pairs from the internet fully automatically. Other techniques such as learning distance functions [4,
15, 20] may also be incorporated during learning to improve recognition results.
Acknowledgments. We thank K. Deschacht and M.-F. Moens for providing the language parser. L. J. and B. Caputo were supported by EU project DIRAC IST-027787, and V. Ferrari by the Swiss National Science Foundation.
References
[1] http://opennlp.sourceforge.net/.
[2] K. Barnard, P. Duygulu, D. Forsyth, N. de Freitas, D. Blei, and M. Jordan. Matching words and pictures. JMLR, 3:1107–1135, 2003.
[3] K. Barnard and Q. Fan. Reducing correspondence ambiguity in loosely labeled training data. In Proc. CVPR'07.
[4] S. Basu, M. Bilenko, A. Banerjee, and R. J. Mooney. Probabilistic semi-supervised clustering with constraints. In O. Chapelle, B. Schölkopf, and A. Zien, editors, Semi-Supervised Learning, pages 71–98. MIT Press, 2006.
[5] T. Berg, A. Berg, J. Edwards, and D. Forsyth. Names and faces in the news. In Proc. CVPR'04.
[6] T. Berg, A. Berg, J. Edwards, and D. Forsyth. Who's in the picture? In Proc. NIPS'04.
[7] A. P. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39:1–38, 1977.
[8] K. Deschacht and M.-F. Moens. Semi-supervised semantic role labeling using the latent words language model. In Proc. EMNLP'09.
[9] I. Dhillon, Y. Guan, and B. Kulis. Kernel k-means: spectral clustering and normalized cuts. In Proc. KDD'04.
[10] P. Duygulu, K. Barnard, N. de Freitas, and D. Forsyth. Object recognition as machine translation: Learning a lexicon for a fixed image vocabulary. In Proc. ECCV'02.
[11] M. Eichner and V. Ferrari. Better appearance models for pictorial structures. In Proc. BMVC'09.
[12] M. Everingham, J. Sivic, and A. Zisserman. Hello! my name is... Buffy - automatic naming of characters in TV video. In Proc. BMVC'06.
[13] V. Ferrari, M. Marin, and A. Zisserman. Pose search: retrieving people using their pose. In Proc. CVPR'09.
[14] V. Ferrari, M. Marin, and A. Zisserman. Progressive search space reduction for human pose estimation. In Proc. CVPR'08.
[15] A. Frome, Y. Singer, and J. Malik. Image retrieval and classification using local distance functions. In Proc. NIPS'06.
[16] M. Guillaumin, T. Mensink, J. Verbeek, and C. Schmid. Automatic face naming with caption-based supervision. In Proc. CVPR'08.
[17] A. Gupta and L. Davis. Beyond nouns: Exploiting prepositions and comparative adjectives for learning visual classifiers. In Proc. ECCV'08.
[18] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004.
[19] J. B. MacQueen. Some methods for classification and analysis of multivariate observations. In Proc. of 5th Berkeley Symposium on Mathematical Statistics and Probability, 1967.
[20] T. Malisiewicz and A. Efros. Recognition by association via learning per-exemplar distances. In Proc. CVPR'08.
[21] T. Mensink and J. Verbeek. Improving people search using query expansions: How friends help to find people. In Proc. ECCV'08.
[22] R. Neal and G. E. Hinton. A view of the EM algorithm that justifies incremental, sparse, and other variants. In M. I. Jordan, editor, Learning in Graphical Models, pages 355–368. Kluwer Academic Publishers, 1998.
[23] Y. Rodriguez. Face Detection and Verification using Local Binary Patterns. PhD thesis, École Polytechnique Fédérale de Lausanne, 2006.
[24] N. Shental, A. Bar-Hillel, T. Hertz, and D. Weinshall. Computing Gaussian mixture models with EM using equivalence constraints. In Proc. NIPS'03.
[25] K. Wagstaff, C. Cardie, S. Rogers, and S. Schroedl. Constrained k-means clustering with background knowledge. In Proc. ICML'01.
A Delay-Line Based
Motion Detection Chip
Tim Horiuchi†
John Lazzaro‡
Andrew Moore†
Christof Koch†
†Computation and Neural Systems Program
‡Department of Computer Science
California Institute of Technology MS 216-76
Pasadena, CA 91125
Abstract
Inspired by a visual motion detection model for the rabbit retina
and by a computational architecture used for early audition in the
barn owl, we have designed a chip that employs a correlation model
to report the one-dimensional field motion of a scene in real time.
Using subthreshold analog VLSI techniques, we have fabricated and
successfully tested a 8000 transistor chip using a standard MOSIS
process.
1. INTRODUCTION
Most proposed short-range intensity-based motion detection schemes fall into two
major categories: gradient models and correlation models. In gradient models,
computation begins from local image qualities such as spatial gradients and temporal derivatives that can be vulnerable to noise or limited resolution. Correlation
models, on the other hand, use a filtered version of the input intensity multiplied
with the temporally delayed and filtered version of the intensity at a neighboring
* Present address: John Lazzaro, University of Colorado at Boulder, Campus Box
425, Boulder, Colorado, 80309-0425
receptor. Many biological motion detection systems have been shown to use a correlation model (Grzywacz and Poggio, 1990). To make use of this model, previous
artificial systems, which typically look at sampled images of a scene changing in time,
have had to cope with the correspondence problem, i.e. the problem of matching
features between two images and measuring their shift in position. Whereas traditional digital approaches lend themselves to the measurement of image shift over a
fixed time, an analog approach lends itself to the measurement of time over fixed
distance. The latter is a local computation that scales to different velocity ranges
gracefully without suffering from the problems of extended interconnection.
Inspired by visual motion detection models (Barlow and Levick, 1965) and by a
computational architecture found in early audition (Konishi, 1986), we have designed a chip that contains a large array of velocity-tuned "cells" that correlate two
events in time, using a delay-line structure. We have fabricated and successfully
tested an analog integrated circuit that can report, in real time, the field motion of a one-dimensional image projected onto the chip. The chip contains 8000
transistors and a linear photoreceptor array with 28 elements.
2. SYSTEM ARCHITECTURE
Figure 1 shows the block diagram of the chip. The input to the chip is a real-world
image, focused directly onto the silicon via a lens mounted over the chip. The one-dimensional array of on-chip hysteretic photoreceptors (Delbrück and Mead, 1989)
receives the light and reports rapid changes in the signal for both large and small
changes. Each photoreceptor is connected to a half-wave rectifying neuron circuit
(Lazzaro and Mead, 1989) that fires a single pulse of constant voltage amplitude
and duration when it receives a quickly rising (but not falling) light-intensity signal.
This rising light intensity signal is interpreted to be a moving edge in the image
passing over the photoreceptor. It is this signal that is the "feature" to be correlated. Note that the choice of the rising or falling intensity as the feature is, from an algorithmic point of view, arbitrary. Each neuron circuit is in turn connected to an axon circuit (Mead, 1989) that propagates the pulse down its length. By orienting the axons in alternating directions, as shown in Figure 1, any two adjacent receptors generate pulses that "race" toward each other and meet at some point along the axon. Correlators between the axons detect when pulses pass each other, indicating the detection of a specific time difference. The width of the pulses in the axon circuits is adjustable and determines the detectable velocity range. From the summing of "votes" for different velocities by correlators across the entire chip, a winner-take-all circuit (Lazzaro et al., 1989) determines the velocity.
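The vote-counting step can be sketched as follows; this is an illustrative software analogue of the selection, not the analog winner-take-all circuit itself:

```python
from collections import Counter

def winner_take_all(votes):
    """Return the velocity-tuned line receiving the most correlator votes.

    votes -- iterable of line indices, one entry per correlation event
             observed during the integration window."""
    counts = Counter(votes)
    line, _ = counts.most_common(1)[0]
    return line
```

Denser scenes produce more correlation events per unit time, so the winning line is selected with higher confidence, matching the behavior described in the text.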
Figure 1. Block diagram of the chip, showing information flow from the photoreceptors (P) to the time-multiplexed winner-take-all output. Rising light signals are converted to pulses that propagate down the axons. Correlators are drawn as circles and axon sections are denoted by Δt boxes. See the text for explanation.
3. SYSTEM OPERATION AND RESULTS
3.1 READING BETWEEN THE LINES
The basic signal quantity that we are measuring is the time a "feature" takes to
travel from one photoreceptor to one of its neighbors. By placing two delay lines
in parallel that propagate signals in opposing directions, a temporal difference in
signal start times from opposite ends will manifest itself as a difference in the
location where the two signals will meet. Between the axons, correlation units
perform a logical AND with the axon signals on both sides. If pulses start down
adjacent axons with zero difference in start times (i.e. infinite velocity), they will
meet in the center and activate a correlator in the center of the axon. If the time
difference is small (i.e. the velocity is large), correlations occur near the center. As
the time difference increases, correlations occur further out toward the edges. The
two halves of the axon with respect to the center represent different directions of
motion. When a single stimulus (e.g. a step edge) is passed over the length of the
photoreceptor array with a constant velocity, a specific subset of correlators will be
activated that all represent the same velocity. A current summing line is connected
to each of these correlators and is passed to a winner-take-all circuit. The winner of
the winner-take-all computation corresponds to the line that is receiving the largest
number of correlation inputs. The output of the winner-take-all is scanned off the
chip using an external input clock. Because the frequency of correlation affects the
confidence of the data, scenes that are denser in edges provide more confident data
as well as a quicker response.
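The race-and-meet scheme described above can be sketched in a few lines of code. This is an illustrative discrete-time simulation under simplifying assumptions (one axon section of travel per time step, idealized AND correlators, pulse A always entering first); the function name and parameters are ours, not the chip's.

```python
# Discrete-time sketch of the delay-line motion scheme: two pulses enter
# opposite ends of a pair of anti-parallel delay lines, and the correlator
# index where they pass each other encodes the arrival-time difference.

def meeting_correlator(n_units, dt, tau=1):
    """Return the correlator index where two counter-propagating pulses meet.

    n_units : number of axon sections (and correlators) per delay line
    dt      : start-time difference between the two pulses, in units of tau
    tau     : propagation delay per axon section
    """
    # Pulse A starts at position 0 at time 0; pulse B starts at position
    # n_units - 1 at time dt. Both advance one section per tau.
    for t in range(10 * n_units):
        pos_a = t // tau
        pos_b = (n_units - 1) - max(0, t - dt) // tau
        if pos_a >= pos_b:          # pulses pass each other: the AND fires here
            return pos_a
    return None

# Zero time difference (infinite velocity): pulses meet at the center.
center = meeting_correlator(n_units=17, dt=0)
# Larger time difference (slower motion): the meeting point shifts outward.
slow = meeting_correlator(n_units=17, dt=6)
print(center, slow)
```

With 17 units, simultaneous pulses meet at the center correlator, and increasing the start-time difference moves the winning correlator toward the edge, exactly as the text describes.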
3.2 SINGLE VS. BURSTING MODE
Until now, the circuit described uses a single pulse to indicate a passing edge. Due to
the statistical nature of this system, a large number of samples are needed to make
a confident statement of the detected time difference, or velocity. By externally
increasing the amplitude of the signal passed to the neuron during each event, the
neuron can fire multiple pulses in quick succession. With an increased number
of pulses travelling down the axon, the number of correlations increase, but with
a decrease in accuracy, due to the multiple incorrect correlations. The incorrect
correlations are not random, however, but occur closely around the correct velocity.
The end result is a net decrease in resolution in order to achieve increased confidence
in the final data.
3.3 VELOCITY RANGE
The chip output is the measured time difference of two events in multiples of τ, the
time-constant of a single axon section. The time difference Δt (measured in seconds/pixel) is translated into velocity by the equation V = 1/Δt, where V is velocity
in pixels/sec and Δt can be positive or negative. Thus the linear measurement of
time difference gives a non-linear velocity interpretation with the highest resolution
at the slower speeds. At the slower speeds, however, we tend to have decreased
confidence in the data due to the relatively smaller correlation frequency. This
is expected to be less troublesome as larger photoreceptor arrays are used. The
variable resolution in the computation is often an acceptable feature for control of
robotic motion systems since high velocity motions are often ballistic or at least
coarse, whereas fine control is needed at lower velocities.
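The V = 1/Δt mapping and its variable resolution can be made concrete numerically. The τ value below is an arbitrary illustrative number, not the chip's actual section delay:

```python
# The chip reports the time difference as an integer multiple of tau, the
# per-section axon delay; velocity is V = 1/dt. This sketch shows why the
# resolution is highest at slow speeds (large dt): equal steps in dt map to
# ever-smaller steps in velocity.

TAU = 0.05  # assumed per-section delay in sec/pixel (illustrative only)

def velocity(n_tau):
    """Velocity (pixels/sec) for a time difference of n_tau axon sections."""
    dt = n_tau * TAU          # signed time difference, sec/pixel
    return 1.0 / dt           # sign encodes direction of motion

steps = [velocity(n) - velocity(n + 1) for n in (1, 2, 3)]
print(steps)  # velocity spacing between adjacent channels shrinks with |dt|
```

Adjacent channels near Δt = ±τ span large velocity intervals, while channels at large |Δt| are packed closely together at low speeds.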
3.4 PERFORMANCE
We have fabricated the circuit shown in Figure 1 using a double polysilicon 2 μm
process in the MOSIS Tiny Chip die. The chip has 17 velocity channels, and an
input array of 28 photoreceptors. The voltages from the winner-take-all circuit are
scanned out sequentially by on-chip scanners, the only clocked circuitry on the chip.
In testing the chip, gratings of varying spatial frequencies and natural images from
newspaper photos and advertisements were mounted on a rotating drum in front
of the lens. Although the most stable data was collected using the gratings, both
image sources provided satisfactory data. Figure 2 shows oscilloscope traces of
scanned winner-take-all channels for twelve different negative and positive velocities within a specific velocity range setting. The values to the right indicate the
approximate center of the velocity range. Figure 3(a) shows the winning time interval channel vs. actual time delay. The response is linear as expected. Figure
3(b) shows the data from Figure 3(a) converted to the interpreted velocity channel
vs. velocity. The horizontal bars indicate the range of velocity inside of which each
channel responds. As described above, at the lower velocities, correlations occur at
a lower rate, thus some of the lowest velocity channels do not respond. By increasing the number of parallel photoreceptor channels, it is expected that this situation
will improve. The circuit, currently with only eight velocity channels per direction,
is able to reliably measure, over different settings, velocities from 2.9 pixels/sec up
to 50 pixels/sec.
[Figure 2: oscilloscope traces of twelve scanned winner-take-all channels, panels (a) and (b); trace labels give the approximate velocity-range centers]
Figure 2. Winner-take-all oscilloscope traces for twelve positive (a) and negative
(b) velocities. Trace labels represent the approximate center of the velocity range.
[Figure 3 plots: (a) detected time-interval channel vs. actual time delay (s); (b) interpreted velocity channel vs. V (pixels/s)]
Figure 3. (a) Plot of winning time interval channel vs. actual time delay. (b) Plot
of interpreted velocity channel vs. velocity (same data as in (a)).
An interesting feature of our model that also manifests itself in the visual system
of the fly (Buchner 1984) is spatial aliasing, leading in the worst case to motion
reversal. Spatial aliasing is due to the discrete sampling provided by photoreceptor
spacing. At spatial frequencies higher than the Nyquist limit, a second stimulus can
enter the neighboring axon before the first stimulus has exited, causing a sudden
change in the sign of the velocity.
4 CONCLUSION
A correlation-based model for motion detection has been successfully demonstrated
in subthreshold analog VLSI. The chip has shown the ability to successfully detect
relatively low velocities (the slowest speed detected was 2.9 pixels/sec) and shows
promise for use in different settings where other motion detection strategies have
difficulty. The chip responds very well to low-light stimulus and its output is robust
against changes in contrast. This is due to the high temporal derivative sensitivity
of the hysteretic photoreceptor to both large and small changes. Interestingly, the
statistical nature of the computation allows the system to perform successfully in
noise as well as to produce a level of confidence measure. In addition, the nature of
the velocity computation provides the highest resolution at the slower speeds and
may be considered as an effective way to expand the detectable velocity range.
Acknowledgements
We thank Carver Mead for providing laboratory resources for the design, fabrication, and initial testing of this chip. We thank Rockwell International and the
Hughes Aircraft Corporation for financial support of VLSI research in Christof Koch's
laboratory, and we thank the System Development Foundation and the Office of Naval
Research for financial support of VLSI research in Carver Mead's laboratory. We
thank Hewlett-Packard for computing support and the Defense Advanced Research
Projects Agency and the MOS Implementation Service (MOSIS) for chip fabrication.
References
Barlow, H.B. and Levick, W.R. (1965). The mechanism of directionally sensitive
units in rabbit's retina. J. Physiol. 178: 477-504.
Buchner, E. (1984). Behavioural Analysis of Spatial Vision in Insects. In Ali, M.
A. (ed) Photoreception and Vision in Invertebrates. New York: Plenum Press, pp.
561-621.
Delbrück, T. and Mead, C. (1989) An Electronic Photoreceptor Sensitive to Small
Changes in Intensity. In Touretzky (ed), Neural Information Processing Systems 1.
San Mateo, CA: Morgan Kaufmann Publishers, pp. 720-727.
Grzywacz, N. and Poggio, T. (1990). Computation of Motion by Real Neurons. In
Zornetzer (ed), An Introduction to Neural and Electronic Networks. New York:
Academic Press, pp. 379-401.
Konishi, M. (1986). Centrally synthesized maps of sensory space. Trends in Neuroscience 4: 163-168.
Lazzaro, J. and Mead, C. (1989). Circuit models of sensory transduction in the
cochlea. In Mead, C. and Ismail, M. (eds), Analog VLSI Implementations of Neural
Networks. Norwell, MA: Kluwer Academic Publishers, pp. 85-101.
Lazzaro, J., Ryckebusch, S., Mahowald, M. A., and Mead, C. (1988). Winner-take-all networks of O(n) complexity. In Touretzky, D. (ed), Advances in Neural
Information Processing Systems 1. San Mateo, CA: Morgan Kaufmann Publishers,
pp. 703-711.
Mead, C. (1989) Analog VLSI and Neural Systems. Reading, MA: Addison-Wesley,
pp. 193-203.
Part VIII
Control and Navigation
Differential Use of Implicit Negative Evidence in
Generative and Discriminative Language Learning
Anne S. Hsu
Thomas L. Griffiths
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
{showen,tom_griffiths}@berkeley.edu
Abstract
A classic debate in cognitive science revolves around understanding how children
learn complex linguistic rules, such as those governing restrictions on verb alternations, without negative evidence. Traditionally, formal learnability arguments
have been used to claim that such learning is impossible without the aid of innate
language-specific knowledge. However, recently, researchers have shown that statistical models are capable of learning complex rules from only positive evidence.
These two kinds of learnability analyses differ in their assumptions about the distribution from which linguistic input is generated. The former analyses assume
that learners seek to identify grammatical sentences in a way that is robust to the
distribution from which the sentences are generated, analogous to discriminative
approaches in machine learning. The latter assume that learners are trying to estimate a generative model, with sentences being sampled from that model. We show
that these two learning approaches differ in their use of implicit negative evidence
(the absence of a sentence) when learning verb alternations, and demonstrate
that human learners can produce results consistent with the predictions of both
approaches, depending on how the learning problem is presented.
1 Introduction
Languages have a complex structure, full of general rules with idiosyncratic exceptions. For example, the causative alternation in English allows a class of verbs to take both the transitive form,
"I opened the door", and the intransitive form, "The door opened". With other verbs, alternations
are restricted, and they are grammatical in only one form. For example, "The rabbit disappeared"
is grammatical whereas "I disappeared the rabbit" is ungrammatical. There is a great debate over
how children learn language, related to the infamous "poverty of the stimulus" argument [1, 2, 3, 4].
A central part of the debate arises from the fact that a child mostly learns language only by hearing adults speak grammatical sentences, known as positive evidence. Children are believed to learn
language mostly from positive evidence because research has found that children rarely receive indications from parents that a sentence is not grammatical, and they ignore these indications when
they do receive them. An explicit indication that a sentence is not grammatical is known as negative evidence [5, 6, 7]. Yet, speaking a language involves the generalization of linguistic
patterns into novel combinations of phrases that have never been heard before. This presents the
following puzzle: How do children eventually learn that certain novel linguistic generalizations are
not allowed if they are not explicitly told? There have been two main lines of analyses addressing
this question. These analyses have taken two different perspectives on the basic task involved in
language learning, and have yielded quite different results.
One perspective is that language is acquired by learning rules for identifying grammatically acceptable and unacceptable sentences in a way that is robust to the actual distribution of observed
sentences. From this perspective, Gold's theorem [8] asserts that languages with infinite recursion,
such as most human languages, are impossible to learn from positive evidence alone. In particular, linguistic exceptions, such as the restrictions on verb alternations mentioned above, are cited
as being impossible to learn empirically. More recent analyses yield similar results, while making
weaker assumptions about the desired outcome of learning (for a review, see [9]). In light of this,
it has been argued that child language learning abilities can only be explained by the presence of
innate knowledge specific to language [3, 4, 10].
On the other side of the debate, results indicating that relatively sophisticated linguistic representations such as probabilistic context-free grammars can be learned from positive evidence have been
obtained by viewing language acquisition as a process of forming a probabilistic model of the linguistic input, under the assumption that the observed data are sampled from this model [11, 12, 13].
In addition to these general theoretical results, statistical learning models have been shown to be
capable of learning exceptions in language from positive examples only in a variety of domains,
including verb alternations [14, 15, 16, 17, 18, 19]. Furthermore, previous experimental work has
shown that humans are capable of learning linguistic exceptions in an artificial language without
negative evidence [20], bearing out the predictions of some of these models.
One key difference between these two perspectives on learning is in the assumptions that they make
about how observed sentences are generated. In the former approach, the goal is to learn to identify grammatical sentences without making assumptions about the distribution from which they are
drawn. In the latter approach, the goal is to learn a probability distribution over sentences, and the
observed sentences are assumed to be drawn from that distribution. This difference is analogous to
the distinction between discriminative and generative models in machine learning (e.g., [21]). The
stronger distributional assumptions made in the generative approach result in a less robust learner,
but make it possible to learn linguistic exceptions without negative evidence. In particular, generative models can exploit the ?implicit negative evidence? provided by the absence of a sentence:
the assumption that sentences are generated from the target probability distribution means that not
observing a sentence provides weak evidence that it does not belong to the language. In contrast,
discriminative models that seek to learn a function for labelling sentences as grammatical or ungrammatical are more robust to the distribution from which the sentences are drawn, but their weaker
assumptions about this distribution mean that they are unable to exploit implicit negative evidence.
In this paper, we explore how these two different views of learning are related to human language
acquisition. Here we focus on the task of learning an artificial language containing both alternating
and non-alternating verbs. Our goal is to use modeling and human experiments to demonstrate that
the opposing conclusions from the two sides of the language acquisition debate can be explained by
a difference in learning approach. We compare the learning performance of a hierarchical Bayesian
model [15], which takes a generative approach, with a logistic regression model, which takes a discriminative approach. We show that without negative evidence, the generative model will judge a
verb structure that is absent in the input to be ungrammatical, while the discriminative model will
judge it to be grammatical. We then conduct an experiment designed to encourage human participants to adopt either a generative or discriminative language learning perspective. The experimental
results indicate that human learners behave in accordance with model predictions: absent verb structures are rejected as ungrammatical under a generative learning perspective and accepted as grammatical under a discriminative one. Our modeling comparisons and experimental results contribute
to the language acquisition debate in the following ways: First, our results lend credence to conclusions from both sides of the debate by showing that linguistic exceptions appear either unlearnable
or learnable, depending on the learning perspective. Second, our results indicate that the opposing
conclusions about learnability can indeed be attributed to whether one assumes a discriminative or
a generative learning perspective. Finally, because our generative learning condition is much more
similar to actual child language learning, our results lend weight to the argument that children can
learn language empirically from positive input.
2 Models of language learning: Generative and discriminative
Generative approaches seek to infer the probability distribution over sentences that characterizes the
language, while discriminative models seek to identify a function that indicates whether a sentence
is grammatical. General results exist that characterize the learnability of languages from these two
[Figure 1 diagram: hyperparameters α, β generate verb-specific parameters θ_1, ..., θ_4, which generate the observed sentence-structure data y_1, ..., y_4]
Figure 1: A hierarchical Bayesian model for learning verb alternations. Figure adapted from [15].
perspectives, but there are few direct comparisons of generative and discriminative approaches to the
same specific language learning situation. Here, we compare a simple generative and discriminative
model's predictions of how implicit negative evidence is used to learn verb alternations.
2.1 Generative model: Hierarchical Bayes
In the generative model, the problem of learning verb alternations is formulated as follows. Assume
we have a set of m verbs, which can occur in up to k different sentence structures. Restricting
ourselves to positive examples for the moment, we observe a total of n sentences x_1, ..., x_n. The n^i
sentences containing verb i can be summarized in a k-dimensional vector y^i containing the verb
occurrence frequency in each of the k sentence structures. For example, if we had three possible
sentence structure types and verb i occurred in the first type two times, the second type four times
and the third type zero times, y^i would be [2, 4, 0] and n^i would be 6.
We model these data using a hierarchical Bayesian model (HBM) originally introduced in [15], also
known to statisticians as a Dirichlet-Multinomial model [22]. In statistical notation the HBM is
θ^i ~ Dirichlet(αβ)
y^i | n^i ~ Multinomial(θ^i)
α ~ Exponential(λ)
β ~ Dirichlet(μ)
where y^i is the data (i.e. the observed frequency of different grammatical sentence structures for
verb i) given n^i occurrences of that verb, as summarized above. θ^i captures the distribution over
sentence structures associated with verb i, assuming that sentences are generated independently and
structure k is generated with probability θ^i_k. The hyperparameters α and β represent generalizations
about the kinds of sentence structures that typically occur. More precisely, β represents the distribution of sentence structures across all verbs, with β_k being the mean probability of sentence structure
k, while α represents the extent to which verbs tend to appear in only one sentence structure type.
In this model, the number of verbs and the number of possible sentence structures are both fixed.
The hyperparameters α and β are learned, and the prior on these hyperparameters is fixed by setting
λ = 1 and μ_i = 1 for all i. This prior asserts a weak expectation that the ranges of α and β do
not contain extreme values. The model is fit to the data by computing the posterior distribution
p(θ^i | y^i) = ∫ p(θ^i | α, β, y) p(α, β | y) dα dβ. The posterior can be estimated using a Markov
Chain Monte Carlo (MCMC) algorithm. Following [15], we use Gaussian proposals on log(α), and
draw proposals for β from a Dirichlet distribution with the current β as its mean.
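To see how this generative setup treats implicit negative evidence, it helps to look at the inner Dirichlet-multinomial layer with the hyperparameters held fixed, where the posterior mean of θ^i is available in closed form. The full model instead integrates over α and β by MCMC; the counts and hyperparameter values below are illustrative only.

```python
# Posterior-mean sketch of the inner Dirichlet-multinomial layer with the
# hyperparameters held fixed (the paper instead samples alpha and beta by
# MCMC; this just illustrates the implicit-negative-evidence effect).

def posterior_mean_theta(y, alpha, beta):
    """E[theta | y] for theta ~ Dirichlet(alpha * beta), y ~ Multinomial(theta)."""
    n = sum(y)
    return [(y_k + alpha * b_k) / (n + alpha) for y_k, b_k in zip(y, beta)]

# A verb observed 18 times in structure 1 and never in structures 2 and 3,
# with a weak symmetric prior (alpha = 1, beta uniform over 3 structures).
theta = posterior_mean_theta(y=[18, 0, 0], alpha=1.0, beta=[1/3, 1/3, 1/3])
print(theta)
# The unseen structures get probability far below any reasonable
# grammaticality threshold: absence of a sentence acts as weak negative
# evidence under the generative assumption.
```

The more often the verb is seen in structure 1 alone, the smaller the posterior probability assigned to the unseen structures, which is exactly the implicit negative evidence a discriminative learner cannot use.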
2.2 Discriminative model: Logistic regression
For our discriminative model we use logistic regression. A logistic regression model can be used
to learn a function that classifies observations into two classes. In the context of language learning,
the observations are sentences and the classification problem is deciding whether each sentence is
grammatical. As above, we observe n sentences, x_1, ..., x_n, but now each sentence x_j is associated
with a variable c_j indicating whether the sentence is grammatical (c_j = +1) or ungrammatical
(c_j = -1). Each sentence is associated with a feature vector f(x_j) that uses dummy variables to
encode the verb, the sentence structure, and the interaction of the two (i.e. each sentence's particular
verb and sentence structure combination). With m verbs and k sentence structures, this results in
m verb features, k sentence structure features, and mk interaction features, each of which take the
value 1 when they match the sentence and 0 when they do not. For example, a sentence containing
the second of four verbs in the first of three sentence structures would be encoded with the binary
feature vector 0100100000100000000.
The logistic regression model learns which features of sentences are predictive of grammaticality.
This is done by defining the probability of grammaticality to be
p(c_j = +1 | x_j, w, b) = 1/(1 + exp{-w^T f(x_j) - b})        (1)
where w and b are the parameters of the model. w and b are estimated by maximizing the log
likelihood Σ_{j=1}^n log p(c_j | x_j, w, b). Features for which the likelihood is uninformative (e.g. features
that are not observed) have weights that are set to zero.
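The discriminative setup can be sketched end-to-end in plain Python: dummy-code the features, fit w and b by batch gradient ascent on the log likelihood, and query a never-observed verb-structure combination. The fitting details (learning rate, iteration count) are our own choices for illustration, with training counts taken from Table 1 (verb V4 never appears in structure S2).

```python
import math

M, K = 4, 3  # number of verbs and sentence structures

def features(v, s):
    """Dummy coding: verb, structure, and verb-x-structure interaction."""
    f = [0.0] * (M + K + M * K)
    f[v] = 1.0                      # verb indicator
    f[M + s] = 1.0                  # structure indicator
    f[M + K + v * K + s] = 1.0      # interaction indicator
    return f

# (verb, structure, label, count): label 1 = grammatical. Counts follow
# Table 1; verb V4 (index 3) never appears in structure S2 (index 1).
counts = [(0, 0, 1, 9), (0, 1, 1, 9), (0, 2, 0, 6),
          (1, 0, 0, 3), (1, 1, 1, 18), (1, 2, 0, 3),
          (2, 0, 1, 18), (2, 1, 0, 3), (2, 2, 0, 3),
          (3, 0, 1, 18), (3, 2, 0, 6)]
data = [(features(v, s), y) for v, s, y, c in counts for _ in range(c)]

def predict(w, b, f):
    return 1.0 / (1.0 + math.exp(-(sum(wi * fi for wi, fi in zip(w, f)) + b)))

w, b = [0.0] * (M + K + M * K), 0.0
for _ in range(2000):               # batch gradient ascent on the likelihood
    gw, gb = [0.0] * len(w), 0.0
    for f, y in data:
        err = y - predict(w, b, f)
        gw = [gi + fi * err for gi, fi in zip(gw, f)]
        gb += err
    w = [wi + gi / len(data) for wi, gi in zip(w, gw)]
    b += gb / len(data)

# The V4-x-S2 interaction feature occurs in no training sentence, so its
# gradient is always zero and its weight never moves from 0; the prediction
# for the unseen combination rests on the positive V4 and S2 main effects.
p_unseen = predict(w, b, features(3, 1))
print(round(p_unseen, 2))
```

Because the verb and structure features of the unseen V4-in-S2 sentence are each strongly associated with grammatical examples, the model judges the combination grammatical, mirroring the discriminative prediction discussed in Section 3.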
3 Testing the models on an artificial language
To examine the predictions that these two models make about the use of implicit negative evidence
in learning verb alternations, we applied them to a simple artificial language based on that used in
[20]. This language has four transitive verbs and three possible sentence structures. Three of the
verbs only appear in one sentence structure (non-alternating), while one verb appears in two possible
sentence structures (alternating). The language consisted of three-word sentences, each containing a
subject (N1), object (N2) and verb (V), with the order depending on the particular sentence structure.
3.1 Vocabulary
The vocabulary was a subset of that used in [20]. There were three two-syllable nouns, each beginning with a different consonant, referring to three cartoon animals: blergen (lion), nagid (elephant),
tombat (giraffe). Noun referents are fixed across participants. The four one-syllable verbs were:
gund, flern, semz, and norg, corresponding to the four transitive actions: eclipse, push-to-side, explode and jump on. While the identity of the nouns and verbs is irrelevant to the models, we developed this language with the intent of also examining human learning, as described below. With
human learners, the mapping of verbs to actions was randomly selected for each participant.
3.2 Syntax and grammar
In our language of three-word sentences, a verb could appear in 3 different positions (as the 1st,
2nd or 3rd word). We constrained the possible sentences such that the subject, N1, always appeared
before the object, N2. This leaves us with three possible sentence structures, S1,S2, and S3, each of
which corresponded to one of the following word orders: N1-N2-V, N1-V-N2 and V-N1-N2. In our
experiment, the mapping from sentence structure to word order was randomized among participants.
For example, S1 might correspond to N1-N2-V for one participant or it might correspond to V-N1-N2 for another participant. There was always one sentence structure, which we denote S3, that was
never grammatical for any of the verbs. For S1 and S2, grammaticality varied depending on the verb.
We designed our language to have 1 alternating verb and 3 non-alternating verbs. One of the three
non-alternating verbs was only grammatical in S1. The other two non-alternating verbs were only
grammatical in S2. For example, let's consider the situation where S1 is N1-V-N2, S2 is N1-N2-V
and S3 is V-N1-N2. If flern was an alternating verb, both nagid flern tombat and nagid tombat flern
would be allowed. If semz was non-alternating, and only allowed in S2, nagid tombat semz would be
grammatical and nagid semz tombat would be ungrammatical. In this example, flern nagid tombat
and semz nagid tombat are both ungrammatical. The language is summarized in Table 1.
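For concreteness, the word orders and grammaticality pattern just described can be written down directly. The structure-to-order mapping and the verb-to-pattern assignment below are one hypothetical randomization (in the experiment both were randomized per participant):

```python
# Toy generator for the artificial language: three nouns, four verbs, three
# sentence structures. S1 = N1-V-N2, S2 = N1-N2-V, S3 = V-N1-N2, matching
# the worked example in the text; the verb assignments are hypothetical.

NOUNS = ["blergen", "nagid", "tombat"]
ORDERS = {"S1": ("N1", "V", "N2"),
          "S2": ("N1", "N2", "V"),
          "S3": ("V", "N1", "N2")}
# One alternating verb, one S1-only verb, two S2-only verbs (S3 never legal).
GRAMMAR = {"gund": {"S1", "S2"}, "flern": {"S1"},
           "semz": {"S2"}, "norg": {"S2"}}

def sentence(n1, verb, n2, structure):
    """Realize a subject-verb-object triple in the given word order."""
    words = {"N1": n1, "V": verb, "N2": n2}
    return " ".join(words[slot] for slot in ORDERS[structure])

def grammatical(verb, structure):
    return structure in GRAMMAR[verb]

print(sentence("nagid", "semz", "tombat", "S2"), grammatical("semz", "S2"))
print(sentence("nagid", "semz", "tombat", "S1"), grammatical("semz", "S1"))
```

Under this mapping, nagid tombat semz comes out grammatical and nagid semz tombat ungrammatical, reproducing the worked example above.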
3.3 Modeling results
The generative hierarchical Bayesian model and the discriminative logistic regression model outlined in the previous section were applied to a corpus of sentences generated from this language.
Verb    S1      S2      S3
V1      +(9)    +(9)    -(6)
V2      -(3)    +(18)   -(3)
V3      +(18)   -(3)    -(3)
V4      +(18)   ?(0)    -(6)
Table 1: Grammaticality of verbs. + and - indicate grammatical and ungrammatical respectively,
while ? indicates that grammaticality is underdetermined by the data. The number in parentheses is
the frequency with which each sentence was presented to model and human learners in our experiment. Verb V4 was never shown in sentence structure S2. Grammaticality predictions for sentences
containing this verb were used to explore the interpretation of implicit negative evidence.
[Figure 2 panels: (a) V1 (S1,S2), (b) V2 (S2), (c) V3 (S1), (d) V4 (S1); each plots predicted grammaticality (0 to 1) for sentence structures S1-S3, with separate bars for the generative and discriminative models]
Figure 2: Predicted grammaticality judgments from generative and discriminative models. In parentheses next to the verb index in the title of each plot is the sentence structure(s) that were shown to
be grammatical for that verb in the training corpus.
The frequencies of each verb and sentence structure combination are also shown in Table 1. We
were particularly interested in the predictions that the two models made about the grammaticality of
verb V4 in sentence structure S2, since this combination of verb and sentence structure never occurs
in the data. As a consequence, a generative learner receives implicit negative evidence that S2 is not
grammatical for V4, while a discriminative learner receives no information.
We trained the HBM on the grammatical instances of the sentences, using 10,000 iterations of
MCMC. The results indicate that V1 is expected to occur in both S1 and S2 50% of the time, while
all other verbs are expected to occur 100% of the time in the one sentence structure for which they
are grammatical, accurately reflecting the distribution in our language input. Predictions for grammaticality are extracted from the HBM model as follows: The ith verb is grammatical in sentence
structure k if the probability of sentence structure k, θ^i_k, is greater than or equal to ε, and ungrammatical otherwise, where ε is a small number. Theoretically, ε should be set so that any sentence
observed once will be considered grammatical. Here, posterior values of θ^i_k were highly peaked
about 0.5 for V1 in S1 and S2, and either 0 or 1 for other verb and sentence structure combinations,
resulting in clear grammaticality predictions. These are shown in Figure 2. Critically, the model
predicts that V4 in S2 is not grammatical.
Logistic regression was performed using all sentences in our corpus, both grammatical and ungrammatical. Predictions for grammaticality from the logistic regression model were read out directly
from p(c_j = +1 | x_j, w, b). The results are shown in Figure 2. While the model has not seen V4
in S2, and has consequently not estimated a weight for the feature that uniquely identifies this sentence, it has seen 27 grammatical and 3 ungrammatical instances of S2, and 18 grammatical and
6 ungrammatical instances of V4, so it has learned positive weights for both of these features of
sentences. As a consequence, it predicts that V4 in S2 is grammatical.
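The qualitative prediction can be reproduced with a plain-numpy logistic regression sketch. Only the training counts come from the text; the exact feature encoding (verb indicators, structure indicators, and verb-structure conjunctions) and the gradient-descent fit are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Training counts per (verb, structure, grammatical) from Table 1 / Section 4.2.
counts = {
    (0, 0, 1): 9, (0, 1, 1): 9, (0, 2, 0): 6,    # V1: S1/S2 grammatical, S3 not
    (1, 1, 1): 18, (1, 0, 0): 3, (1, 2, 0): 3,   # V2: only S2 grammatical
    (2, 0, 1): 18, (2, 1, 0): 3, (2, 2, 0): 3,   # V3: only S1 grammatical
    (3, 0, 1): 18, (3, 2, 0): 6,                 # V4: S1 grammatical, never seen in S2
}

def features(v, s):
    x = np.zeros(4 + 3 + 12)
    x[v] = 1.0              # verb indicator
    x[4 + s] = 1.0          # structure indicator
    x[7 + 3 * v + s] = 1.0  # verb-structure conjunction
    return x

X = np.array([features(v, s) for (v, s, g), n in counts.items() for _ in range(n)])
y = np.array([g for (v, s, g), n in counts.items() for _ in range(n)], dtype=float)

w, b = np.zeros(X.shape[1]), 0.0
for _ in range(2000):       # full-batch gradient ascent on the log-likelihood
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w += 0.1 * X.T @ (y - p) / len(y)
    b += 0.1 * np.mean(y - p)

p_v4_s2 = 1.0 / (1.0 + np.exp(-(features(3, 1) @ w + b)))
print(round(p_v4_s2, 2))    # > 0.5: V4 in S2 is predicted grammatical
```

Because the V4-S2 conjunction feature never fires in training, its weight stays at zero, while the positive weights learned for the V4 and S2 indicators push the prediction toward grammatical.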
4 Generative and discriminative learning in humans
The simulations above illustrate how generative and discriminative approaches to language learning
differ in their treatment of implicit negative evidence. This raises the question of whether a similar
difference can be produced in human learners by changing the nature of the language learning task.
We conducted an experiment to explore whether this is the case.
In our experiment, participants learned the artificial language used to generate the model predictions
in the previous section by watching computer animated scenes accompanied by spoken and written
sentences describing each scene. Participants were also provided with information about whether the
sentence was grammatical or ungrammatical. Participants were assigned to one of two conditions,
which prompted either generative or discriminative learning. Participants in both conditions were
exposed to exactly the same sentences and grammaticality information. The two conditions differed
only in how the grammaticality information was presented.
4.1 Participants
A total of 22 participants were recruited from the community at the University of California, Berkeley.
4.2 Stimuli
As summarized in Table 1, participants viewed each of the 4 verbs 24 times: 18 grammatical sentences and 6 ungrammatical sentences. The alternating verb was shown 9 times each in S1 and
S2 and 6 times in S3. The non-alternating verbs were shown 18 times each in their respective
grammatical sentence structures and 3 times each in the 2 ungrammatical structures. Presentation
of sentences was ordered as follows: Two chains of sentences were constructed, one grammatical
and one ungrammatical. The grammatical chain consisted of 72 sentences (18 for each verb) and
the ungrammatical chain consisted of 24 sentences (6 for each verb). For each sentence chain, verbs
were presented cyclically and randomized within cycles. For the grammatical chain, V1 occurrences
of S1 and S2 were cycled through in semi-random order (verbs V2-V4 appeared grammatically in
only one sentence construction). Similarly, for the ungrammatical chain, V2 and V3 cycled semi-randomly through occurrences of S1 and S3, and S2 and S3, respectively (verbs V1 and V4 only
appeared ungrammatically in S3). While participants were being trained on the language, presentation of one sentence from the ungrammatical chain was randomly interleaved within every three
presentations of sentences from the grammatical chain. Subject-object noun pairs were randomized
for each verb across presentations. There were a total of 96 training sentences.
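A hedged sketch of this ordering procedure (the seed and helper names are arbitrary; the per-verb schedules and the one-in-four interleaving follow the description above):

```python
import random

random.seed(0)
VERBS = ["V1", "V2", "V3", "V4"]

# Per-verb structure schedules from Section 4.2 (grammatical / ungrammatical).
gram = {"V1": ["S1"] * 9 + ["S2"] * 9, "V2": ["S2"] * 18,
        "V3": ["S1"] * 18, "V4": ["S1"] * 18}
ungram = {"V1": ["S3"] * 6, "V2": ["S1"] * 3 + ["S3"] * 3,
          "V3": ["S2"] * 3 + ["S3"] * 3, "V4": ["S3"] * 6}

def chain(schedules, n_cycles):
    """Present verbs cyclically, randomizing verb order within each cycle."""
    for s in schedules.values():
        random.shuffle(s)
    items = []
    for _ in range(n_cycles):
        cycle = VERBS[:]
        random.shuffle(cycle)
        for v in cycle:
            items.append((v, schedules[v].pop()))
    return items

g = chain(gram, 18)    # 72 grammatical sentences
u = chain(ungram, 6)   # 24 ungrammatical sentences

# Interleave one ungrammatical sentence at a random slot within every block of
# three grammatical sentences.
sequence = []
for i in range(24):
    block = [(v, s, True) for v, s in g[3 * i:3 * i + 3]]
    v, s = u[i]
    block.insert(random.randrange(4), (v, s, False))
    sequence.extend(block)

print(len(sequence))   # 96 training sentences
```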
4.3 Procedure
Participants in both conditions underwent pre-training trials to acquaint them with the vocabulary.
During pre-training they heard and saw each word along with pictures of each noun and scenes
corresponding to each verb along with spoken audio of each noun/verb. All words were cycled
through three times during pre-training. During the main experiment, all participants were told they
were to learn an artificial language. They all saw a series of sentences describing animated scenes
where a subject noun performed an action on an object noun. All sentences were presented in both
spoken and written form.
4.3.1 Generative learning condition
In the generative learning condition, participants were told that they would listen to an adult speaker
who always spoke grammatical sentences and a child speaker who always spoke ungrammatically. Cartoon pictures of either the adult or child speaker accompanied each scene. The child
speaker's voice was low-pass filtered to create a believably child-like sound. We hypothesized that
participants in this condition would behave similarly to a generative model: they would build a
probabilistic representation of the language from the grammatical sentences produced by the adult
speaker.
4.3.2 Discriminative learning condition
In the discriminative learning condition, participants were presented with spoken and written sentences describing each scene and asked to choose whether each of the presented sentences was
grammatical or not. They were assured that only relevant words were used and they only had to figure out if the verb occurred in a grammatical location. Participants then received feedback on their
choice. For example, if a participant answered that the sentence was grammatical, they would see
either "Yes, you were correct. This sentence is grammatical!" or "Sorry, you were incorrect. This
sentence is ungrammatical!"
[Figure 3 appears here: four panels, a) V1 (S1,S2), b) V2 (S2), c) V3 (S1), d) V4 (S1), plotting the proportion of sentences judged grammatical in S1, S2, and S3 for the generative and discriminative conditions.]
Figure 3: Human grammar judgments, showing proportion grammatical for each sentence structure.
The main difference from the generative condition is that in the discriminative condition, the
presented sentences are assumed to be chosen at random, whereas in the
generative learning condition, sentences from the adult speaker are assumed to have been sampled
from the language distribution. We hypothesized that participants in the discriminative condition
would behave similarly to a discriminative model: they would use feedback about both grammatical
and ungrammatical sentences to formulate rules about what made sentences grammatical.
4.3.3 Testing
After the language learning phase, participants in both conditions were subjected to a grammar test.
In this testing phase, participants were shown a series of written sentences and asked to rate the
sentence as either grammatical or ungrammatical. Here, all sentences had blergen as the subject
and nagid as the object. All verb-sentence structure combinations were shown twice. Additionally
the verb V4 was shown an extra two times in S2 as this was the crucial generalization that we were
testing.
Participants also underwent a production test in which they were shown a scene and asked to type
in a sentence describing that scene. Because we did not want this to be a memory test, we displayed
the relevant verb on the top of the screen. Pictures of all the nouns, with their respective names
below, were also available on the bottom of the screen for reference. Four scenes were presented for
each verb, using subject-object noun pairs that were cycled through at random. Verbs were also cycled
through at random.
4.4 Results
Our results show that participants in both conditions were largely able to learn much of the grammar
structure. However, there were significant differences between the generative and discriminative
conditions (see Figure 3). Most notably, the generative learners overwhelmingly judged verb V4 to
be ungrammatical in S2, while the majority of discriminative learners deemed V4 to be grammatical in S2 (see Figure 3d). This difference between conditions was highly statistically significant
by a Pearson's χ2 test (χ2(1) = 7.28, p = 0.007). This difference aligned with the difference in
the predictions of the HBM (generative) model and the logistic regression (discriminative) model
discussed earlier. Our results strongly suggest participants in the generative condition were learning
language with a probabilistic perspective that allowed them to learn restrictions on verb alternations by using implicit negative evidence whereas participants in the discriminative condition made
sampling assumptions that did not allow them to learn the alternation restriction.
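For concreteness, this is how such a 2×2 condition comparison is computed. The cell counts below are hypothetical (the paper reports χ2(1) = 7.28, p = 0.007 but not the raw counts), so the statistic here will not match the reported value:

```python
import math

# Hypothetical 2x2 table of participants judging V4-in-S2 grammatical vs. not.
table = [[2, 9],   # generative condition:     2 "grammatical", 9 "ungrammatical"
         [8, 3]]   # discriminative condition: 8 "grammatical", 3 "ungrammatical"

row = [sum(r) for r in table]
col = [sum(c) for c in zip(*table)]
total = sum(row)

# Pearson's chi-squared: sum over cells of (observed - expected)^2 / expected.
chi2 = sum((table[i][j] - row[i] * col[j] / total) ** 2 / (row[i] * col[j] / total)
           for i in range(2) for j in range(2))

# With 1 degree of freedom, chi2 = Z^2, so the two-sided p-value is erfc(sqrt(chi2/2)).
p = math.erfc(math.sqrt(chi2 / 2))
print(round(chi2, 2), round(p, 3))
```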
Another difference we found between the two conditions was that discriminative learners were more
willing to consider verbs to be alternating (i.e. allow those verbs to be grammatical in two sentence
structures.) This is evidenced by the fact that participants in the generative condition rated occurrences of V1 (the alternating verb) in S1 and S2 as grammatical only 68% and 72% of the time. This
is because many participants judged V1 to be grammatical in either S1 or S2 and not both. On the
other hand, participants in the discriminative condition rated occurrences of V1 in S1 and S2 grammatical 100% of the time (see Figure 3a). Pearson's χ2 tests for the difference between conditions
for grammaticality of V1 in S1 and S2 were marginally significant, with χ2(1) = 4.16, p = .04
and χ2(1) = 3.47, p = 0.06 respectively. From post-experiment questioning, we learned that many
participants in the generative condition did not think verbs would occur in two possible sentence
structures.
[Figure 4 appears here: four panels, one per verb, plotting the proportion of productions in S1, S2, S3, and other structures for the generative and discriminative conditions.]
Figure 4: Human production data, showing proportion of productions in each sentence structure.
None of the participants in the discriminative condition were constrained by this assumption. Why the two conditions prompted significantly different prior assumptions about the
prevalence of verb alternations will be a question for future research, but is particularly interesting
in the context of the HBM, which can learn a prior expressing similar constraints.
Production test results showed that participants tended to use verbs in the sentences structure that
they heard them in (see Figure 4). Notably, even though the majority of the learners in the discriminative condition rated verb V4 in S2 as grammatical, only 20% of the productions of V4 were in
S2. This is in line with previous results that show that how often a sentence structure is produced
is proportional to how often that structure is heard, and rarely heard structures are rarely produced,
even if they are believed to be grammatical [20].
5 Discussion
We have shown that artificial language learners may or may not learn restrictions on verb alternations, depending on the learning context. Our simulations of generative and discriminative learners
made predictions about how these approaches deal with implicit negative evidence, and these predictions were borne out in an experiment with human learners. Participants in both experimental
conditions viewed exactly the same sentences and were told whether each sentence was grammatical
or ungrammatical. What varied between conditions was the way the grammaticality information
was presented. In the discriminative condition, participants were given yes/no grammaticality feedback on sentences presumed to be sampled at random. Because of the random sampling assumption,
the absence of a verb in a given sentence structure did not provide implicit negative evidence against
the grammaticality of that construction. In contrast, participants in the generative condition judged
the unseen verb-sentence structure to be ungrammatical. This is in line with the idea that they had
sought to estimate a probability distribution over sentences, under the assumption that the sentences
they observed were drawn from that distribution.
Our simulations and behavioral results begin to clarify the connection between theoretical analyses
of language learnability and human behavior. In showing that people learn differently under different construals of the learning problem, we are able to examine how well normal language learning
corresponds to the learning behavior we see in these two cases. Participants in our generative condition heard sentences spoken by a grammatical speaker, similar to the way children learn by listening
to adult speech. In post-experiment questioning, generative learners also stated that they ignored all
negative evidence from the ungrammatical child speaker, similar to the way children ignore negative
evidence in real language acquisition. These observations support the idea that human language
learning is better characterized by the generative approach. Establishing this connection to the generative approach helps to identify the strengths and limitations of human language learning, leading
to the expectation that human learners can use implicit negative evidence to identify their language,
but will not be as robust to variation in the distribution of observed sentences as a discriminative
learner might be.
Acknowledgments. This work was supported by grant SES-0631518 from the National Science Foundation.
References
[1] C. L. Baker. Syntactic theory and the projection problem. Linguistic Inquiry, 10:533-538, 1979.
[2] C. L. Baker and J. J. McCarthy. The logical problem of language acquisition. MIT Press, 1981.
[3] N. Chomsky. Aspects of the theory of syntax. MIT Press, 1965.
[4] S. Pinker. Learnability and Cognition: The acquisition of argument structure. MIT Press, 1989.
[5] M. Bowerman. The "no negative evidence" problem: How do children avoid constructing an overly
general grammar? In J. Hawkins, editor, Explaining Language Universals, pages 73-101. Blackwell,
New York, 1988.
[6] R. Brown and C. Hanlon. Derivational complexity and order of acquisition in child speech. Wiley, 1970.
[7] G. F. Marcus. Negative evidence in language acquisition. Cognition, 46:53-85, 1993.
[8] E. M. Gold. Language identification in the limit. Information and Control, 16:447-474, 1967.
[9] M. A. Nowak, N. L. Komarova, and P. Niyogi. Computational and evolutionary aspects of language.
Nature, 417:611-617, 2002.
[10] S. Crain and L. D. Martin. An introduction to linguistic theory and language acquisition. Blackwell,
1999.
[11] D. Angluin. Identifying languages from stochastic examples. Technical Report YALEU/DCS/RR-614,
Yale University, Department of Computer Science, 1988.
[12] J. J. Horning. A study of grammatical inference. PhD thesis, Stanford University, 1969.
[13] N. Chater and P. Vitanyi. "Ideal learning" of natural language: Positive results about learning from
positive evidence. Journal of Mathematical Psychology, 51:135-163, 2007.
[14] M. Dowman. Addressing the learnability of verb subcategorizations with Bayesian inference. In Proceedings of the 22nd Annual Conference of the Cognitive Science Society, 2005.
[15] C. Kemp, A. Perfors, and J. Tenenbaum. Learning overhypotheses with hierarchical Bayesian models.
Developmental Science, 10:307-321, 2007.
[16] P. Langley and S. Stromsten. Learning context-free grammars with a simplicity bias. In Proceedings of
the 11th European Conference on Machine Learning, 2000.
[17] L. Onnis, M. Roberts, and N. Chater. Simplicity: A cure for overgeneralizations in language acquisition?
In Proceedings of the 24th Annual Conference of the Cognitive Science Society, pages 720-725, 2002.
[18] A. Perfors, J. Tenenbaum, and T. Regier. Poverty of the stimulus: A rational approach? In Proceedings
of the 28th Annual Conference of the Cognitive Science Society, pages 664-668, 2006.
[19] A. Stolcke. Bayesian learning of probabilistic language models. PhD thesis, UC Berkeley, 1994.
[20] E. Wonnacott, E. Newport, and M. Tanenhaus. Acquiring and processing verb argument structure: Distributional learning in a miniature language. Cognitive Psychology, 56:165-209, 2008.
[21] A. Y. Ng and M. Jordan. On discriminative vs. generative classifiers: A comparison of logistic regression
and naive Bayes. In Advances in Neural Information Processing Systems 14, 2001.
[22] A. Gelman, J. B. Carlin, H. S. Stern, and D. B. Rubin. Bayesian data analysis. Chapman & Hall, 2003.
Robust Value Function Approximation Using
Bilinear Programming
Shlomo Zilberstein
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
[email protected]
Marek Petrik
Department of Computer Science
University of Massachusetts
Amherst, MA 01003
[email protected]
Abstract
Existing value function approximation methods have been successfully used in
many applications, but they often lack useful a priori error bounds. We propose
approximate bilinear programming, a new formulation of value function approximation that provides strong a priori guarantees. In particular, this approach provably finds an approximate value function that minimizes the Bellman residual.
Solving a bilinear program optimally is NP-hard, but this is unavoidable because
the Bellman-residual minimization itself is NP-hard. We therefore employ and
analyze a common approximate algorithm for bilinear programs. The analysis
shows that this algorithm offers a convergent generalization of approximate policy iteration. Finally, we demonstrate that the proposed approach can consistently
minimize the Bellman residual on a simple benchmark problem.
1 Motivation
Solving large Markov Decision Problems (MDPs) is a very useful, but computationally challenging
problem addressed widely in the AI literature, particularly in the area of reinforcement learning.
It is widely accepted that large MDPs can only be solved approximately. The commonly used approximation methods can be divided into three broad categories: 1) policy search, which explores
a restricted space of all policies, 2) approximate dynamic programming, which searches a restricted
space of value functions, and 3) approximate linear programming, which approximates the solution using a linear program. While all of these methods have achieved impressive results in many
domains, they have significant limitations.
Policy search methods rely on local search in a restricted policy space. The policy may be represented, for example, as a finite-state controller [22] or as a greedy policy with respect to an approximate value function [24]. Policy search methods have achieved impressive results in such domains
as Tetris [24] and helicopter control [1]. However, they are notoriously hard to analyze. We are not
aware of any theoretical guarantees regarding the quality of the solution.
Approximate dynamic programming (ADP) methods iteratively approximate the value function [4, 20, 23]. They have been extensively analyzed and are the most commonly used methods.
However, ADP methods typically do not converge and they only provide weak guarantees of approximation quality. The approximation error bounds are usually expressed in terms of the worst-case
approximation of the value function over all policies [4]. In addition, most available bounds are with
respect to the L∞ norm, while the algorithms often minimize the L2 norm. While there exist some
L2-based bounds [14], they require values that are difficult to obtain.
Approximate linear programming (ALP) uses a linear program to compute the approximate value
function in a particular vector space [7]. ALP has been previously used in a wide variety of settings [2, 9, 10]. Although ALP often does not perform as well as ADP, there have been some recent
efforts to close the gap [18]. ALP has better theoretical properties than ADP and policy search. It is
guaranteed to converge and return the closest L1-norm approximation ṽ of the optimal value function v* up to a multiplicative factor. However, the L1 norm must be properly weighted to guarantee
a small policy loss, and there is no reliable method for selecting appropriate weights [7].
To summarize, the existing reinforcement learning techniques often provide good solutions, but typically require significant domain knowledge [20]. The domain knowledge is needed partly because
useful a priori error bounds are not available, as mentioned above. Our goal is to develop a more
robust method that is guaranteed to minimize an actual bound on the policy loss.
We present a new formulation of value function approximation that provably minimizes a bound
on the policy loss. Unlike in some other algorithms, the bound in this case does not rely on values
that are hard to obtain. The new method unifies policy search and value-function search methods
to minimize the L∞ norm of the Bellman residual, which bounds the policy loss. We start with a
description of the framework and notation in Section 2. Then, in Section 3, we describe the proposed Approximate Bilinear Programming (ABP) formulation. A drawback of this formulation is
its computational complexity, which may be exponential. We show in Section 4 that this is unavoidable, because minimizing the approximation error bound is in fact NP-hard. Although our focus is
on the formulation and its properties, we also discuss some simple algorithms for solving bilinear
programs. Section 5 shows that ABP can be seen as an improvement of ALP and Approximate Policy Iteration (API). Section 6 demonstrates the applicability of ABP using a common reinforcement
learning benchmark problem. A complete discussion of sampling strategies (an essential component for achieving robustness) is beyond the scope of this paper, but the issue is briefly discussed in
Section 6. Complete proofs of the theorems can be found in [19].
2 Solving MDPs using ALP
In this section, we formally define MDPs, their ALP formulation, and the approximation errors
involved. These notions serve as a basis for developing the ABP formulation.
A Markov Decision Process is a tuple (S, A, P, r, α), where S is the finite set of states and A is the
finite set of actions. P : S × S × A → [0, 1] is the transition function, where P(s′, s, a) represents
the probability of transiting to state s′ from state s, given action a. The function r : S × A → R is
the reward function, and α : S → [0, 1] is the initial state distribution. The objective is to maximize
the infinite-horizon discounted cumulative reward with discount factor γ ∈ (0, 1). To shorten the
notation, we assume an arbitrary ordering of the states: s1, s2, . . . , sn. Then, Pa and ra are used to
denote the probabilistic transition matrix and reward for action a.
The solution of an MDP is a policy π : S × A → [0, 1] from a set of possible policies Π, such that
for all s ∈ S, Σ_{a∈A} π(s, a) = 1. We assume that the policies may be stochastic, but stationary [21].
A policy is deterministic when π(s, a) ∈ {0, 1} for all s ∈ S and a ∈ A. The transition and reward
functions for a given policy π are denoted by P_π and r_π. The value function update for a policy π is
denoted by L_π, and the Bellman operator is denoted by L. That is:

    L_π v = γ P_π v + r_π        Lv = max_{π∈Π} L_π v.
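A minimal numpy rendering of these operators, on a hypothetical two-state, two-action MDP (all transition and reward values are made up for illustration):

```python
import numpy as np

gamma = 0.9
# P[a][s', s] is the probability of moving to s' from s under action a, matching
# the P(s', s, a) convention used in the text (columns sum to one).
P = [np.array([[0.8, 0.3], [0.2, 0.7]]), np.array([[0.1, 0.6], [0.9, 0.4]])]
r = [np.array([1.0, 0.0]), np.array([0.0, 2.0])]

def L_pi(v, a):
    """Policy update L_pi v = gamma * P_pi v + r_pi for the policy 'always take a'."""
    return gamma * P[a].T @ v + r[a]

def L(v):
    """Bellman operator Lv = max_pi L_pi v (state-wise maximum over actions)."""
    return np.max([L_pi(v, a) for a in range(len(P))], axis=0)

# The optimal value function is the fixed point v* = Lv*; value iteration finds it.
v = np.zeros(2)
for _ in range(1000):
    v = L(v)
print(np.allclose(v, L(v)))  # True: v is (numerically) the fixed point
```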
The optimal value function, denoted v*, satisfies v* = Lv*. We focus on linear value function
approximation for discounted infinite-horizon problems. In linear value function approximation, the
value function is represented as a linear combination of nonlinear basis functions (vectors). For
each state s, we define a row-vector φ(s) of features. The rows of the basis matrix M correspond
to φ(s), and the approximation space is generated by the columns of the matrix. That is, the basis
matrix M and the value function v are represented as:

    M = [ φ(s1) ; φ(s2) ; · · · ]  (one row per state),        v = M x.

Definition 1. A value function, v, is representable if v ∈ M ⊆ R^|S|, where M = colspan(M),
and is transitive-feasible when v ≥ Lv. We denote the set of transitive-feasible value functions as:

    K = { v ∈ R^|S| : v ≥ Lv }.
Notice that the optimal value function v* is transitive-feasible, and M is a linear space. Also, all the
inequalities are element-wise.
Because the new formulation is related to ALP, we introduce it first. It is well known that an infinite-horizon discounted MDP problem may be formulated in terms of solving the following linear
program:

    minimize_v   Σ_{s∈S} c(s) v(s)
    s.t.   v(s) − γ Σ_{s′∈S} P(s′, s, a) v(s′) ≥ r(s, a)    ∀(s, a) ∈ (S, A)        (1)
We use A as a shorthand notation for the constraint matrix and b for the right-hand side. The value
c represents a distribution over the states, usually a uniform one. That is, Σ_{s∈S} c(s) = 1. The
linear program in Eq. (1) is often too large to be solved precisely, so it is approximated to get an
approximate linear program by assuming that v ∈ M [8], as follows:
    minimize_x   cᵀv
    s.t.   Av ≥ b
           v ∈ M        (2)
The constraint v ∈ M denotes the approximation. To actually solve this linear program, the
value function is represented as v = Mx. In the remainder of the paper, we assume that 1 ∈ M to
guarantee the feasibility of the ALP, where 1 is a vector of all ones. The optimal solution of the ALP,
ṽ, satisfies ṽ ≥ v*. Then, the objective of Eq. (2) represents the minimization of ‖ṽ − v*‖_{1,c},
where ‖·‖_{1,c} is a c-weighted L1 norm [7].
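The facts behind Eq. (1) and (2) can be checked directly, without an LP solver, on a small random MDP (purely illustrative): v* satisfies the constraints v ≥ Lv, shifting it upward preserves feasibility but increases the objective cᵀv, and shifting it downward breaks feasibility, so v* is the LP optimum:

```python
import numpy as np

np.random.seed(1)
gamma, nS, nA = 0.9, 4, 3
# Random hypothetical MDP with P[a][s', s] = P(s', s, a); columns sum to one.
P = [np.random.dirichlet(np.ones(nS), size=nS).T for _ in range(nA)]
r = [np.random.rand(nS) for _ in range(nA)]

def L(v):
    """Bellman operator for the tiny MDP above."""
    return np.max([gamma * P[a].T @ v + r[a] for a in range(nA)], axis=0)

v_star = np.zeros(nS)
for _ in range(2000):                     # value iteration: v* = Lv*
    v_star = L(v_star)

c = np.ones(nS) / nS                      # uniform state-relevance weights

# v* satisfies the LP constraints v >= Lv (it is transitive-feasible) ...
assert np.all(v_star >= L(v_star) - 1e-8)
# ... shifting up stays feasible but only increases c^T v, while shifting down
# violates feasibility, so v* solves the exact LP (1).
for t in [0.1, 1.0, 5.0]:
    assert np.all(v_star + t >= L(v_star + t)) and c @ (v_star + t) > c @ v_star
assert not np.all(v_star - 0.5 >= L(v_star - 0.5))
print("feasible, objective c^T v* =", round(float(c @ v_star), 3))
```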
The ultimate goal of the optimization is not to obtain a good value function ṽ, but a good policy.
The quality of the policy, typically chosen to be greedy with respect to ṽ, depends non-trivially on
the approximate value function. The ABP formulation will minimize policy loss by minimizing
‖Lṽ − ṽ‖∞, which bounds the policy loss as follows.
Theorem 2 (e.g. [25]). Let ṽ be an arbitrary value function, and let v̂ be the value of the greedy
policy with respect to ṽ. Then:

    ‖v* − v̂‖∞ ≤ (2 / (1 − γ)) ‖Lṽ − ṽ‖∞.

In addition, if ṽ ≥ Lṽ, the policy loss is smallest for the greedy policy.
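Theorem 2 can be verified numerically. The sketch below builds a small random MDP (again illustrative), perturbs v* to obtain an arbitrary ṽ, evaluates its greedy policy exactly, and checks the 2/(1 − γ) Bellman-residual bound:

```python
import numpy as np

np.random.seed(2)
gamma, nS, nA = 0.9, 5, 3
P = [np.random.dirichlet(np.ones(nS), size=nS).T for _ in range(nA)]  # P(s', s, a)
r = [np.random.rand(nS) for _ in range(nA)]

def L(v):
    return np.max([gamma * P[a].T @ v + r[a] for a in range(nA)], axis=0)

def greedy_value(v):
    """Exact value of the policy that is greedy with respect to v."""
    a = np.argmax([gamma * P[b].T @ v + r[b] for b in range(nA)], axis=0)
    P_pi = np.array([P[a[s]][:, s] for s in range(nS)]).T  # column s = P(.|s, a_s)
    r_pi = np.array([r[a[s]][s] for s in range(nS)])
    return np.linalg.solve(np.eye(nS) - gamma * P_pi.T, r_pi)

v_star = np.zeros(nS)
for _ in range(2000):
    v_star = L(v_star)

v_tilde = v_star + 0.3 * np.random.randn(nS)   # an arbitrary approximate value function
v_hat = greedy_value(v_tilde)                  # value of its greedy policy

loss = np.max(np.abs(v_star - v_hat))
bound = 2.0 / (1.0 - gamma) * np.max(np.abs(L(v_tilde) - v_tilde))
print(loss <= bound + 1e-9)  # True: the policy loss respects the Bellman-residual bound
```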
Policies, like value functions, can be represented as vectors. Assume an arbitrary ordering of the
state-action pairs, such that o : S × A → N maps a state and an action to its position. The policies are
represented as π ∈ R^{|S|·|A|}, and we use the shorthand notation π(s, a) = π(o(s, a)).
Remark 3. A policy π ∈ Π and its vector representation π ∈ R^{|S|·|A|} are denoted by the same
symbol and satisfy π(s, a) = π(o(s, a)).
We will also consider approximations of the policies in the policy-space, generated by the columns of a
matrix N. A policy is representable when π ∈ N, where N = colspan(N).
3  Approximate Bilinear Programs
This section shows how to formulate min_{v∈M} ‖Lv − v‖∞ as a separable bilinear program. Bilinear programs are a generalization of linear programs with an additional bilinear term in the objective function. A separable bilinear program consists of two linear programs with independent constraints and is fairly easy to solve and analyze.
Definition 4 (Separable Bilinear Program). A separable bilinear program in the normal form is defined as follows:

    minimize_{w,x, y,z}   f(w, x, y, z) = s1^T w + r1^T x + x^T C y + r2^T y + s2^T z
    s.t.   A1 x + B1 w = b1        A2 y + B2 z = b2                                 (3)
           w, x ≥ 0                y, z ≥ 0
We separate the variables using a vertical line and the constraints using different columns to emphasize the separable nature of the bilinear program.
In this paper, we only use separable bilinear programs and refer to them simply as bilinear programs.
An approximate bilinear program can now be formulated as follows.

    minimize_{π, λ, λ′, v}   π^T λ + λ′
    s.t.   Bπ = 1                z = Av − b
           π ≥ 0                 z ≥ 0
           π ∈ N                 λ + λ′ 1 ≥ z                                       (4)
                                 λ ≥ 0
                                 v ∈ M
All variables are vectors except λ′, which is a scalar. The symbol z is only used to simplify the notation and does not need to represent an optimization variable. The variable v is defined for each state and represents the value function. Matrix A represents constraints that are identical to the constraints in Eq. (2). The variables λ correspond to all state-action pairs. These variables represent the Bellman residuals that are being minimized. The variables π are defined for all state-action pairs and represent policies as in Remark 3. The matrix B represents the following constraints:
    ∑_{a∈A} π(s, a) = 1    ∀s ∈ S.
As with approximate linear programs, we initially assume that all the constraints on z are used. In
realistic settings, however, the constraints would be sampled or somehow reduced. We defer the
discussion of this issue until Section 6. Note that the constraints in our formulation correspond to elements of z and λ. Thus, when constraints are omitted, the corresponding elements of z and λ are omitted as well.
To simplify the notation, the value function approximation in this problem is denoted only implicitly by v ∈ M, and the policy approximation is denoted by π ∈ N. In an actual implementation, the optimization variables would be x, y, using the relationships v = M x and π = N y. We do not assume any approximation of the policy space, unless mentioned otherwise. We also use v or π to refer to partial solutions of Eq. (4), with the other variables chosen appropriately to achieve feasibility.
The ABP formulation is closely related to approximate linear programs, and we discuss the connection in Section 5. We first analyze the properties of the optimal solutions of the bilinear program
and then show and discuss the solution methods in Section 4. The following theorem states the main
property of the bilinear formulation.
Theorem 5. Let (π̃, ṽ, λ̃, λ̃′) be an optimal solution of Eq. (4) and assume that 1 ∈ M. Then:

    π̃^T λ̃ + λ̃′ = ‖Lṽ − ṽ‖∞ = min_{v∈K∩M} ‖Lv − v‖∞ ≤ 2 min_{v∈M} ‖Lv − v‖∞ ≤ 2(1 + γ) min_{v∈M} ‖v − v*‖∞.

In addition, π̃ minimizes the Bellman residual with regard to ṽ, and its value function ṽ satisfies:

    ‖ṽ − v*‖∞ ≤ (2 / (1 − γ)) min_{v∈M} ‖Lv − v‖∞.
The proof of the theorem can be found in [19]. It is important to note that, as Theorem 5 states, the ABP approach is equivalent to a minimization over all representable value functions, not only the transitive-feasible ones. Notice also the missing coefficient 2 (2 instead of 4) in the last equation of Theorem 5. This follows by subtracting a constant vector 1 from ṽ to balance the lower bounds on the Bellman residual error with the upper ones. This modified approximate value function will have 1/2 of the original Bellman residual but an identical greedy policy. Finally, note that whenever v* ∈ M, both ABP and ALP will return the optimal value function.
The ABP solution minimizes the L∞ norm of the Bellman residual due to: 1) the correspondence between π and the policies, and 2) the dual representation with respect to the variables λ and λ′. The theorem then follows using techniques similar to those used for approximate linear programs [7].
Algorithm 1: Iterative algorithm for solving Eq. (3)
    (x0, w0) ← random
    (y0, z0) ← arg min_{y,z} f(w0, x0, y, z)
    i ← 1
    while y_{i−1} ≠ y_i or x_{i−1} ≠ x_i do
        (y_i, z_i) ← arg min_{ {y,z : A2 y + B2 z = b2, y,z ≥ 0} } f(w_{i−1}, x_{i−1}, y, z)
        (x_i, w_i) ← arg min_{ {x,w : A1 x + B1 w = b1, x,w ≥ 0} } f(w, x, y_i, z_i)
        i ← i + 1
    return f(w_i, x_i, y_i, z_i)
4  Solving Bilinear Programs
In this section we describe simple methods for solving ABPs. We first describe optimal methods,
which have exponential complexity, and then discuss some approximation strategies.
Solving a bilinear program is an NP-complete problem [3]. The membership in NP follows from
the finite number of basic feasible solutions of the individual linear programs, each of which can be
checked in polynomial time. The NP-hardness is shown by a reduction from the SAT problem [3].
The NP-completeness of ABP compares unfavorably with the polynomial complexity of ALP. However, most other ADP algorithms are not guaranteed to converge to a solution in finite time.
The following theorem shows that the computational complexity of the ABP formulation is asymptotically the same as the complexity of the problem it solves.
Theorem 6. Determining whether min_{v∈K∩M} ‖Lv − v‖∞ < ε is NP-complete for the full constraint representation, 0 < γ < 1, and a given ε > 0. In addition, the problem remains NP-complete when 1 ∈ M, and therefore determining whether min_{v∈M} ‖Lv − v‖∞ < ε is also NP-complete.
As the theorem states, the value function approximation does not become computationally simpler even when 1 ∈ M, a universal assumption in this paper. Notice that ALP can determine whether min_{v∈K∩M} ‖Lv − v‖∞ = 0 in polynomial time.
The proof of Theorem 6 is based on a reduction from SAT and can be found in [19]. The policy in the
reduction determines the true literal in each clause, and the approximate value function corresponds
to the truth value of the literals. The approximation basis forces literals that share the same variable
to have consistent values.
Bilinear programs are non-convex and are typically solved using global optimization techniques.
The common solution methods are based on concave cuts [11] or branch-and-bound [6]. In ABP
settings with a small number of features, the successive approximation algorithm [17] may be applied efficiently. We are, however, not aware of commercial solvers available for solving bilinear
programs. Bilinear programs can be formulated as concave quadratic minimization problems [11], or
mixed integer linear programs [11, 16], for which there are numerous commercial solvers available.
Because we are interested in solving very large bilinear programs, we describe simple approximate
algorithms next. Optimal scalable methods are beyond the scope of this paper.
The most common approximate method for solving bilinear programs is shown in Algorithm 1.
It is designed for the general formulation shown in Eq. (3), where f (w, x, y, z) represents the
objective function. The minimizations in the algorithm are linear programs which can be easily
solved. Interestingly, as we will show in Section 5, Algorithm 1 applied to ABP generalizes a
version of API.
While Algorithm 1 is not guaranteed to find an optimal solution, its empirical performance is often
remarkably good [13]. Its basic properties are summarized by the following proposition.
Proposition 7 (e.g. [3]). Algorithm 1 is guaranteed to converge, assuming that the linear program
solutions are in a vertex of the optimality simplex. In addition, the global optimum is a fixed point
of the algorithm, and the objective value monotonically improves during execution.
The proof is based on the finite count of the basic feasible solutions of the individual linear programs.
Because the objective function does not increase in any iteration, the algorithm will eventually converge.
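Both the monotone convergence of Proposition 7 and the possibility of stopping at a non-global fixed point can be seen on a tiny hand-made instance. The sketch below (not from the paper; the matrix C is invented) uses a separable bilinear program over two probability simplices, where each linear subproblem is solved exactly by picking the best simplex vertex, so no LP solver is needed.

```python
# Algorithm 1 on a tiny separable bilinear program (invented instance):
#     minimize  x^T C y   s.t.  1^T x = 1, x >= 0,  1^T y = 1, y >= 0.
# With y fixed, the problem in x is a linear program over a simplex, whose
# optimum is a vertex: the coordinate minimizing (C y)_i.  Alternating the
# two LPs gives a monotonically non-increasing objective (Proposition 7),
# but the iteration may stop in a local optimum, as it does here.

C = [[3.0, -1.0,  2.0],
     [0.0,  4.0, -2.0],
     [1.0,  1.0,  0.5]]
n = len(C)

def vertex(i):
    return [1.0 if j == i else 0.0 for j in range(n)]

def obj(x, y):
    return sum(x[i] * C[i][j] * y[j] for i in range(n) for j in range(n))

x = vertex(0)                                      # "random" starting point
y = vertex(min(range(n), key=lambda j: C[0][j]))   # best response to x
values = [obj(x, y)]
for _ in range(20):
    cy = [sum(C[i][j] * y[j] for j in range(n)) for i in range(n)]
    x = vertex(min(range(n), key=lambda i: cy[i]))   # LP in (x, w)
    cx = [sum(C[i][j] * x[i] for i in range(n)) for j in range(n)]
    y = vertex(min(range(n), key=lambda j: cx[j]))   # LP in (y, z)
    values.append(obj(x, y))

assert all(b <= a + 1e-12 for a, b in zip(values, values[1:]))  # monotone
# The iteration stalls at -1.0, while the global optimum is the smallest
# entry of C, namely -2.0: a local, not global, optimum.
assert abs(values[-1] + 1.0) < 1e-12
assert min(min(row) for row in C) < values[-1]
```

Restarting from several random initial vertices and keeping the best result is the standard mitigation, and it is the same idea that Proposition 8 below exploits for value functions.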
In the context of MDPs, Algorithm 1 can be further refined. For example, the constraint v ∈ M in Eq. (4) serves mostly to simplify the bilinear program, and a value function that violates it may still be acceptable. The following proposition motivates the construction of a new value function from two transitive-feasible value functions.
Proposition 8. Let ṽ1 and ṽ2 be feasible value functions in Eq. (4). Then the value function ṽ(s) = min{ṽ1(s), ṽ2(s)} is also feasible in Eq. (4). Therefore ṽ ≥ v* and ‖v* − ṽ‖∞ ≤ min{‖v* − ṽ1‖∞, ‖v* − ṽ2‖∞}.
The proof of the proposition is based on Jensen's inequality and can be found in [19].
Proposition 8 can be used to extend Algorithm 1 when solving ABPs. One option is to take the
state-wise minimum of values from multiple random executions of Algorithm 1, which preserves
the transitive feasibility of the value function. However, the increasing number of value functions used to obtain ṽ also increases the potential sampling error.
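Proposition 8 is easy to see numerically. In the sketch below (not from the paper; the 2-state MDP is invented), two transitive-feasible value functions are built as v* plus perturbations chosen so that v ≥ Lv still holds; their state-wise minimum remains transitive-feasible and is at least as accurate as either one.

```python
# Proposition 8 on a toy 2-state MDP (numbers invented): the state-wise
# minimum of two transitive-feasible value functions (v >= L v, hence
# v >= v*) is transitive-feasible, with error at most the smaller of the two.

gamma = 0.9
P = [[[0.8, 0.2], [0.1, 0.9]],
     [[0.5, 0.5], [0.6, 0.4]]]
R = [[1.0, 0.0], [0.5, 2.0]]

def bellman(v):
    return [max(R[s][a] + gamma * sum(P[a][s][t] * v[t] for t in range(2))
                for a in range(2)) for s in range(2)]

v_star = [0.0, 0.0]
for _ in range(2000):
    v_star = bellman(v_star)

# v* + d is transitive-feasible whenever d(s) >= gamma * max(d) for all s
v1 = [v_star[0] + 2.00, v_star[1] + 1.90]
v2 = [v_star[0] + 1.85, v_star[1] + 2.00]
v_min = [min(a, b) for a, b in zip(v1, v2)]

def transitive_feasible(v):
    return all(vs >= ls - 1e-9 for vs, ls in zip(v, bellman(v)))

def err(v):
    return max(abs(a - b) for a, b in zip(v, v_star))

assert transitive_feasible(v1) and transitive_feasible(v2)
assert transitive_feasible(v_min)                 # Proposition 8
assert err(v_min) <= min(err(v1), err(v2)) + 1e-9
```

In the multi-restart use suggested above, each run of Algorithm 1 would contribute one such function, and the running state-wise minimum can only improve the L∞ error bound.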
5  Relationship to ALP and API
In this section, we describe the important connections between ABP and the two closely related ADP methods: ALP, and API with L∞ minimization. Both of these methods are commonly used, for example to solve factored MDPs [10]. Our analysis sheds light on some of their observed properties and leads to a new convergent form of API.
ABP addresses some important issues with ALP: 1) ALP provides value function bounds with respect to the L1 norm, which does not guarantee small policy loss; 2) ALP's solution quality depends significantly on the heuristically chosen objective function c in Eq. (2) [7]; and 3) incomplete constraint samples in ALP easily lead to unbounded linear programs. The drawback of using ABP, however, is the higher computational complexity.
Both the first and the second issues in ALP can be addressed by choosing the right objective function [7]. Because this objective function depends on the optimal ALP solution, it cannot be practically computed. Instead, various heuristics are usually used. The heuristic objective functions may
lead to significant improvements in specific domains, but they do not provide any guarantees. ABP,
on the other hand, has no such parameters that require adjustments.
The third issue arises when the constraints of an ALP need to be sampled in some large domains. The ALP may become unbounded with incomplete samples because its objective value is defined using the L1 norm on the states, and the constraints are defined using the L∞ norm of the Bellman residual. In ABP, the Bellman residual is used in both the constraints and the objective function. The objective function of ABP is then bounded below by 0 for an arbitrarily small number of samples.
ABP can also improve on API with L∞ minimization (L∞-API for short), which is a leading method for solving factored MDPs [10]. Minimizing the L∞ approximation error is theoretically preferable, since it is compatible with the existing bounds on policy loss [10]. In contrast, few practical bounds exist for API with L2 norm minimization [14], such as LSPI [12].
L∞-API is shown in Algorithm 2, where f(π) is calculated using the following program:

    minimize_{φ,v}   φ
    s.t.   (I − γP_π)v + 1φ ≥ r_π
           −(I − γP_π)v + 1φ ≥ −r_π                                                 (5)
           v ∈ M
Here I denotes the identity matrix. We are not aware of a convergence or divergence proof of L∞-API, and this analysis is beyond the scope of this paper.
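For a fixed policy, program (5) is just a Chebyshev (L∞) fit of the Bellman residual. A hand-checkable special case (invented numbers, not from the paper): restrict v to constant functions, v = x·1. Because the rows of P_π sum to one, (I − γP_π)(x·1) = x(1 − γ)·1, and the program collapses to a one-dimensional minimax fit with a closed-form solution.

```python
# Program (5) with pi fixed and the one-feature basis M = {x * 1 : x in R}.
# Since P_pi is row-stochastic, (I - gamma P_pi)(x 1) = x (1 - gamma) 1, so
#     min_x  max_s | x (1 - gamma) - r_pi(s) |
# is solved by x (1 - gamma) = (max r + min r) / 2, with optimal residual
# (max r - min r) / 2.  A grid search over x confirms the closed form.
# (Invented rewards; a real implementation would solve the LP directly.)

gamma = 0.9
r_pi = [1.0, 0.5, 2.0]                 # reward of policy pi in each state

def residual(x):
    return max(abs(x * (1 - gamma) - r) for r in r_pi)

best_grid = min(residual(x / 100.0) for x in range(3001))
closed_form = (max(r_pi) - min(r_pi)) / 2.0
assert abs(best_grid - closed_form) < 1e-2
```

With a richer basis M the same program is a small LP in (x, φ); the constant-function case just makes the Chebyshev structure visible without a solver.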
Algorithm 2: Approximate policy iteration, where f(π) denotes a custom value function approximation for the policy π.
    (π0, k) ← (rand, 1)
    while π_k ≠ π_{k−1} do
        ṽ_k ← f(π_{k−1})
        π_k(s) ← arg max_{a∈A} r(s, a) + γ ∑_{s′∈S} P(s′, s, a) ṽ_k(s′)    ∀s ∈ S
        k ← k + 1
We propose Optimistic Approximate Policy Iteration (OAPI), a modification of API. OAPI is shown in Algorithm 2, where f(π) is calculated using the following program:

    minimize_{φ,v}   φ
    s.t.   Av ≥ b        (⇔ (I − γP_π′)v ≥ r_π′  ∀π′ ∈ Π)
           −(I − γP_π)v + 1φ ≥ −r_π                                                 (6)
           v ∈ M
In fact, OAPI corresponds to Algorithm 1 applied to ABP, because Eq. (6) corresponds to Eq. (4) with fixed π. Then, using Proposition 7, we get the following corollary.
Corollary 9. Optimistic approximate policy iteration converges in finite time. In addition, the Bellman residual of the generated value functions monotonically decreases.
OAPI differs from L∞-API in two ways: 1) OAPI constrains the Bellman residuals by 0 from below and by φ from above, and then it minimizes φ. L∞-API constrains the Bellman residuals by φ from both above and below. 2) OAPI, like API, uses only the current policy for the upper bound on the Bellman residual, but uses all the policies for the lower bound on the Bellman residual.
L∞-API cannot return an approximate value function that has a lower Bellman residual than ABP, given the optimality of ABP described in Theorem 5. However, even OAPI, an approximate ABP algorithm, performs comparably to L∞-API, as the following theorem states.
Theorem 10. Assume that L∞-API converges to a policy π and a value function v that both satisfy φ = ‖v − L_π v‖∞ = ‖v − Lv‖∞. Then ṽ = v + (φ / (1 − γ)) 1 is feasible in Eq. (4), and it is a fixed point of OAPI. In addition, the greedy policies with respect to ṽ and v are identical.
The proof is based on two facts. First, ṽ is feasible with respect to the constraints in Eq. (4): the Bellman residual changes identically for all the policies, since a constant vector is added. Second, because π is greedy with respect to ṽ, we have that ṽ ≥ L_π ṽ ≥ Lṽ. The value function ṽ is therefore transitive-feasible. The full proof can be found in [19].
To summarize, OAPI guarantees convergence while matching the performance of L∞-API. The convergence of OAPI is achieved because, given a non-negative Bellman residual, the greedy policy also minimizes the Bellman residual. Because OAPI ensures that the Bellman residual is always non-negative, it can progressively reduce it. In comparison, the greedy policy in L∞-API does not minimize the Bellman residual, and therefore L∞-API does not always reduce it. Theorem 10 also explains why API provides better solutions than ALP, as observed in [10]. From the discussion above, ALP can be seen as an L1-norm approximation of a single iteration of OAPI. L∞-API, on the other hand, performs many such ALP-like iterations.
6  Empirical Evaluation
As we showed in Theorem 10, even OAPI, the very simple approximate algorithm for ABP, can perform as well as existing state-of-the-art methods on factored MDPs. However, a deeper understanding of the formulation and potential solution methods will be necessary in order to determine the full practical impact of the proposed methods. In this section, we validate the approach by applying it to the mountain car problem, a simple reinforcement learning benchmark problem.
We have so far considered that all the constraints involving z are present in the ABP in Eq. (4).
Because the constraints correspond to all state-action pairs, it is often impractical to even enumerate
them.

(a) L∞ error of the Bellman residual

    Features   100           144
    OAPI       0.21 (0.23)   0.13 (0.1)
    ALP        13. (13.)     3.6 (4.3)
    LSPI       9. (14.)      3.9 (7.7)
    API        0.46 (0.08)   0.86 (1.18)

(b) L2 error of the Bellman residual

    Features   100           144
    OAPI       0.2 (0.3)     0.1 (1.9)
    ALP        9.5 (18.)     0.3 (0.4)
    LSPI       1.2 (1.5)     0.9 (0.1)
    API        0.04 (0.01)   0.08 (0.08)

Table 1: Bellman residual of the final value function. The values are averages over 5 executions, with the standard deviations shown in parentheses.

This issue can be addressed in at least two ways. First, a small randomly-selected subset of the
constraints can be used in the ABP, a common approach in ALP [9, 5]. The ALP sampling bounds
can be easily extended to ABP. Second, the structure of the MDP can be used to reduce the number
of constraints. Such a reduction is possible, for example, in factored MDPs with L∞-API and ALP [10], and can be easily extended to OAPI and ABP.
In the mountain-car benchmark, an underpowered car needs to climb a hill [23]. To do so, it first
needs to back up to an opposite hill to gain sufficient momentum. The car receives a reward of 1
when it climbs the hill. In the experiments we used a discount factor γ = 0.99.
The experiments are designed to determine whether OAPI reliably minimizes the Bellman residual in comparison with API and ALP. We use a uniformly-spaced linear spline to approximate the
value function. The constraints were based on 200 uniformly sampled states with all 3 actions per
state. We evaluated the methods with 100 and 144 approximation features, which correspond to the number of linear segments.
The results of ABP (in particular OAPI), ALP, API with L2 minimization, and LSPI are depicted in Table 1. The results are shown for both the L∞ norm and the uniformly-weighted L2 norm. The runtimes of all these methods are comparable, with ALP being the fastest. Since API (LSPI) is not guaranteed to converge, we ran it for at most 20 iterations, which was an upper bound on the number of iterations of OAPI. The results demonstrate that ABP minimizes the L∞ Bellman residual much more consistently than the other methods. Note, however, that all the considered algorithms would perform significantly better given a finer approximation.
7  Conclusion and Future Work
We proposed and analyzed approximate bilinear programming, a new value-function approximation method, which provably minimizes the L∞ Bellman residual. ABP returns the optimal approximate value function with respect to the Bellman residual bounds, despite the formulation with regard to transitive-feasible value functions. We also showed that there is no asymptotically simpler formulation, since finding the closest value function and solving a bilinear program are both NP-complete problems. Finally, the formulation leads to the development of OAPI, a new convergent form of API which monotonically improves the objective value function.
While we only discussed approximate solutions of the ABP, a deeper study of bilinear solvers may render optimal solution methods feasible. ABPs have a small number of essential variables (those that determine the value function) and a large number of constraints, which can be leveraged by the solvers [15]. The L∞ error bound provides good theoretical guarantees, but it may be too conservative in practice. A similar formulation based on L2 norm minimization may be more practical.
We believe that the proposed formulation will help to deepen the understanding of value function
approximation and the characteristics of existing solution methods, and potentially lead to the development of more robust and widely-applicable reinforcement learning algorithms.
Acknowledgements
This work was supported by the Air Force Office of Scientific Research under Grant No. FA9550-08-1-0171. We also thank the anonymous reviewers for their useful comments.
References
[1] Pieter Abbeel, Varun Ganapathi, and Andrew Y. Ng. Learning vehicular dynamics, with application to modeling helicopters. In Advances in Neural Information Processing Systems, pages 1–8, 2006.
[2] Daniel Adelman. A price-directed approach to stochastic inventory/routing. Operations Research, 52:499–514, 2004.
[3] Kristin P. Bennett and O. L. Mangasarian. Bilinear separation of two sets in n-space. Technical report, Computer Science Department, University of Wisconsin, 1992.
[4] Dimitri P. Bertsekas and Sergey Ioffe. Temporal differences-based policy iteration and applications in neuro-dynamic programming. Technical Report LIDS-P-2349, LIDS, 1997.
[5] Giuseppe Calafiore and M. C. Campi. Uncertain convex programs: Randomized solutions and confidence levels. Mathematical Programming, Series A, 102:25–46, 2005.
[6] Alberto Caprara and Michele Monaci. Bidimensional packing by bilinear programming. Mathematical Programming, Series A, 118:75–108, 2009.
[7] Daniela P. de Farias. The Linear Programming Approach to Approximate Dynamic Programming: Theory and Application. PhD thesis, Stanford University, 2002.
[8] Daniela P. de Farias and Ben Van Roy. The linear programming approach to approximate dynamic programming. Operations Research, 51:850–856, 2003.
[9] Daniela Pucci de Farias and Benjamin Van Roy. On constraint sampling in the linear programming approach to approximate dynamic programming. Mathematics of Operations Research, 29(3):462–478, 2004.
[10] Carlos Guestrin, Daphne Koller, Ronald Parr, and Shobha Venkataraman. Efficient solution algorithms for factored MDPs. Journal of Artificial Intelligence Research, 19:399–468, 2003.
[11] Reiner Horst and Hoang Tuy. Global Optimization: Deterministic Approaches. Springer, 1996.
[12] Michail G. Lagoudakis and Ronald Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107–1149, 2003.
[13] O. L. Mangasarian. The linear complementarity problem as a separable bilinear program. Journal of Global Optimization, 12:1–7, 1995.
[14] Remi Munos. Error bounds for approximate policy iteration. In International Conference on Machine Learning, pages 560–567, 2003.
[15] Marek Petrik and Shlomo Zilberstein. Anytime coordination using separable bilinear programs. In Conference on Artificial Intelligence, pages 750–755, 2007.
[16] Marek Petrik and Shlomo Zilberstein. Average reward decentralized Markov decision processes. In International Joint Conference on Artificial Intelligence, pages 1997–2002, 2007.
[17] Marek Petrik and Shlomo Zilberstein. A bilinear programming approach for multiagent planning. Journal of Artificial Intelligence Research, 35:235–274, 2009.
[18] Marek Petrik and Shlomo Zilberstein. Constraint relaxation in approximate linear programs. In International Conference on Machine Learning, pages 809–816, 2009.
[19] Marek Petrik and Shlomo Zilberstein. Robust value function approximation using bilinear programming. Technical Report UM-CS-2009-052, Department of Computer Science, University of Massachusetts Amherst, 2009.
[20] Warren B. Powell. Approximate Dynamic Programming. Wiley-Interscience, 2007.
[21] Martin L. Puterman. Markov Decision Processes: Discrete Stochastic Dynamic Programming. John Wiley & Sons, Inc., 2005.
[22] Kenneth O. Stanley and Risto Miikkulainen. Competitive coevolution through evolutionary complexification. Journal of Artificial Intelligence Research, 21:63–100, 2004.
[23] Richard S. Sutton and Andrew Barto. Reinforcement Learning. MIT Press, 1998.
[24] Istvan Szita and Andras Lorincz. Learning Tetris using the noisy cross-entropy method. Neural Computation, 18(12):2936–2941, 2006.
[25] Ronald J. Williams and Leemon C. Baird. Tight performance bounds on greedy policies based on imperfect value functions. In Yale Workshop on Adaptive and Learning Systems, 1994.
Submodularity Cuts and Applications
Yoshinobu Kawahara*
The Inst. of Scientific and Industrial Res. (ISIR),
Osaka Univ., Japan
Kiyohito Nagano
Dept. of Math. and Comp. Sci.,
Tokyo Inst. of Technology, Japan
[email protected]
[email protected]
Koji Tsuda
Comp. Bio. Research Center,
AIST, Japan
Jeff A. Bilmes
Dept. of Electrical Engineering,
Univ. of Washington, USA
[email protected]
[email protected]
Abstract
Several key problems in machine learning, such as feature selection and active learning, can be formulated as submodular set function maximization. We present herein a novel algorithm for maximizing a submodular set function under a cardinality constraint; the algorithm is based on a cutting-plane method and is implemented as an iterative small-scale binary-integer linear programming procedure. It is well known that this problem is NP-hard, and the approximation factor achieved by the greedy algorithm is the theoretical limit for polynomial time. As for (non-polynomial time) exact algorithms that perform reasonably in practice, there has been very little in the literature although the problem is quite important for many applications. Our algorithm is guaranteed to find the exact solution in finitely many iterations, and it converges fast in practice due to the efficiency of
the cutting-plane mechanism. Moreover, we also provide a method that produces
successively decreasing upper-bounds of the optimal solution, while our algorithm
provides successively increasing lower-bounds. Thus, the accuracy of the current
solution can be estimated at any point, and the algorithm can be stopped early
once a desired degree of tolerance is met. We evaluate our algorithm on sensor
placement and feature selection applications showing good performance.
1  Introduction
In many fundamental problems in machine learning, such as feature selection and active learning,
we try to select a subset of a finite set so that some utility of the subset is maximized. A number of
such utility functions are known to be submodular, i.e., the set function f satisfies f(S) + f(T) ≥ f(S ∪ T) + f(S ∩ T) for all S, T ⊆ V, where V is a finite set [2, 5]. This type of function can
be regarded as a discrete counterpart of convex functions, and includes entropy, symmetric mutual
information, information gain, graph cut functions, and so on. In recent years, treating machine
learning problems as submodular set function maximization (usually under some constraint, such as
limited cardinality) has been addressed in the community [10, 13, 22].
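As a concrete illustration of the definition above, submodularity (here of a graph cut function) can be verified by brute force on a tiny ground set. This is a didactic sketch only; it enumerates all pairs of subsets, so it is usable only for very small V:

```python
from itertools import chain, combinations

def powerset(V):
    """All subsets of V as frozensets."""
    return [frozenset(c) for c in
            chain.from_iterable(combinations(V, r) for r in range(len(V) + 1))]

def is_submodular(f, V):
    """Check f(S) + f(T) >= f(S ∪ T) + f(S ∩ T) for all S, T ⊆ V (brute force)."""
    subsets = powerset(V)
    return all(f(S) + f(T) >= f(S | T) + f(S & T) - 1e-9
               for S in subsets for T in subsets)

# A cut function on the complete graph over {0, 1, 2} is submodular.
V = {0, 1, 2}
edges = [(0, 1), (0, 2), (1, 2)]
cut = lambda S: sum(1 for (u, w) in edges if (u in S) != (w in S))
```

By contrast, f(S) = |S|² fails the check, since it is strictly supermodular.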
In this paper, we address submodular function maximization under a cardinality constraint:

    max_{S ⊆ V} f(S)   s.t.   |S| ≤ k,                                        (1)

where V = {1, 2, . . . , n} and k is a positive integer with k ≤ n. Note that this formulation is
quite general and covers a broad range of problems. The main difficulty of this problem
comes from a potentially exponentially large number of locally optimal solutions. In the field of
∗ URL: http://www.ar.sanken.osaka-u.ac.jp/~kawahara/
combinatorial optimization, it is well-known that submodular maximization is NP-hard and the
approximation factor of (1 − 1/e) (≈ 0.63) achieved by the greedy algorithm [19] is the theoretical
limit of a polynomial-time algorithm for positive and nondecreasing submodular functions [3]. That
is, in the worst case, no polynomial-time algorithm can guarantee a solution whose function value is
more than (1 − 1/e) times the optimal value unless P=NP. In recent years, it has been
reported that greedy-based algorithms work well in several machine-learning problems [10, 1, 13,
22]. However, in some applications of machine learning, one seeks a solution closer to the optimum
than what is guaranteed by this bound. In feature selection or sensor placement, for example, one
may be willing to spend much more time in the selecting phase, since once selected, items are used
many times or for a long duration. Unfortunately, there has been very little in the literature on
finding exact but still practical solutions to submodular maximization [17, 14, 8]. To the best of our
knowledge, the algorithm by Nemhauser and Wolsey [17] is the only way for exactly maximizing
a general form of nondecreasing submodular functions (other than naive brute force). However, as
stated below, this approach is inefficient even for moderate problem sizes.
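For reference, the greedy baseline discussed above, which attains the (1 − 1/e) guarantee for nondecreasing submodular functions [19] and which the experiments later use to produce the initial subset S0, can be sketched as follows (a minimal sketch; tie-breaking is arbitrary, and f takes frozensets of items):

```python
def greedy_max(f, V, k):
    """Greedy maximization of a nondecreasing submodular f under |S| <= k."""
    S = frozenset()
    for _ in range(k):
        # Marginal gain of each remaining item; pick the largest.
        gains = {j: f(S | {j}) - f(S) for j in V if j not in S}
        S = S | {max(gains, key=gains.get)}
    return S

# Toy coverage instance: item j covers the ground elements in cover[j].
cover = {0: {'a', 'b'}, 1: {'b', 'c'}, 2: {'c'}}
def coverage(S):
    covered = set()
    for j in S:
        covered |= cover[j]
    return len(covered)
```

On this instance, greedy_max(coverage, [0, 1, 2], 2) recovers an optimal pair covering all three elements.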
In this paper, we present a novel algorithm for maximizing a submodular set function under a cardinality constraint based on a cutting-plane method, which is implemented as an iterative small-scale
binary-integer linear programming (BILP) procedure. To this end, we derive the submodularity cut,
a cutting plane that cuts off the feasible sets on which the objective function values are guaranteed
to be no better than the current best one, and this is based on the submodularity of a function and its
Lovász extension [15, 16]. This cut assures convergence to the optimum in finitely many iterations and
allows searching for better subsets in an efficient manner so that the algorithm can be applied
to suitably-sized problems. The existing algorithm [17] is infeasible for such problems since, as
originally presented, it has no criterion for improving the solution efficiently at each iteration (we
compare these algorithms empirically in Sect. 5.1). Moreover, we present a new way to evaluate an
upper bound of the optimal value with the help of the idea of Nemhauser and Wolsey [17]. This
enables us to judge the accuracy of the current best solution and to calculate an ε-optimal solution
for a predetermined ε > 0 (cf. Sect. 4). In our algorithm, one needs to iteratively solve small-scale BILP (and mixed integer programming (MIP) for the upper bound) problems, which are also
NP-hard. However, due to their small size, these can be solved using efficient modern software
packages such as CPLEX. Note that BILP is a special case of MIP and more efficient to solve in
general, and the presented algorithm can be applied to any submodular function while the existing
one needs the nondecreasing property.1 We evaluate the proposed algorithm on the applications of
sensor placement and feature selection in text classification.
The remainder of the paper is organized as follows: In Sect. 2, we present submodularity cuts and
give a general description of the algorithm using this cutting plane. Then, we describe a specific
procedure for performing the submodularity cut algorithm in Sect. 3 and the way of updating an
upper bound for calculating an ε-optimal solution in Sect. 4. Finally, we give several empirical
examples in Sect. 5, and conclude the paper in Sect. 6.
2 Submodularity Cuts and Cutting-Plane Algorithm
We start with a subset S0 ⊆ V of some ground set V with a reasonably good lower bound γ =
f(S0) ≤ max{f(S) : S ⊆ V }. Using this information, we cut off the feasible sets on which the
objective function values are guaranteed to be not better than f(S0). In this section, we address
a method for solving the submodular maximization problem (1) based on this idea along the line
of cutting-plane methods, as described by Tuy [23] (see also [6, 7]) and often successfully used in
algorithms for solving mathematical programming problems [18, 11, 20].
2.1 Lovász extension
For dealing with the submodular maximization problem (1) in a way analogous to the continuous
counterpart, i.e., convex maximization, we briefly describe a useful extension of submodular functions, called the Lovász extension [15, 16]. The relationship between the discrete and the continuous,
described in this subsection, is summarized in Table 1.
¹ A submodular function is called nondecreasing if f(A) ≤ f(B) for A ⊆ B. For example, an entropy
function is nondecreasing but a cut function on nodes is not.
Table 1: Correspondence between continuous and discrete.

    (discrete)                       (continuous)
    f : 2^V → R           ⇐⇒        f̂ : R^n → R          (Eq. (2), Eq. (3))
    S ⊆ V                 ⇐⇒        I_S ∈ R^n
    f is submodular       ⇐⇒        f̂ is convex           (Thm. 1)

Figure 1: Illustration of cutting plane H. For H∗ and c∗, see Section 3.2.
Given any real vector p ∈ R^n, we denote the m distinct elements of p by p̂1 > p̂2 > · · · > p̂m.
Then, the Lovász extension f̂ : R^n → R corresponding to a general set function f : 2^V → R, which
is not necessarily submodular, is defined as

    f̂(p) = Σ_{k=1}^{m−1} (p̂_k − p̂_{k+1}) f(U_k) + p̂_m f(U_m),                    (2)

where U_k = {i ∈ V : p_i ≥ p̂_k}. From the definition, f̂ is a piecewise linear (i.e., polyhedral) function.² In general, f̂ is not convex. However, the following relationship between the submodularity
of f and the convexity of f̂ is given [15, 16]:

Theorem 1 For a set function f : 2^V → R and its Lovász extension f̂ : R^n → R, f is submodular
if and only if f̂ is convex.

Now, we define I_S ∈ {0, 1}^n as I_S = Σ_{i∈S} e_i, where e_i is the i-th unit vector. Obviously, there is
a one-to-one correspondence between I_S and S. I_S is called the characteristic vector of S.³ Then,
the Lovász extension f̂ is a natural extension of f in the sense that it satisfies the following [15, 16]:

    f̂(I_S) = f(S)   (S ⊆ V).                                                      (3)
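A minimal sketch of Eq. (2), assuming f takes frozensets of indices in {0, …, n−1}; by Eq. (3) it reproduces f at characteristic vectors, and it is linear (hence equal to Σ p_i for a modular f):

```python
def lovasz_extension(f, p):
    """Lovász extension f̂(p) per Eq. (2), via the distinct sorted values of p."""
    n = len(p)
    levels = sorted(set(p), reverse=True)           # p̂1 > p̂2 > ... > p̂m
    val = 0.0
    for k, pk in enumerate(levels):
        U = frozenset(i for i in range(n) if p[i] >= pk)   # U_k = {i : p_i >= p̂_k}
        if k < len(levels) - 1:
            val += (pk - levels[k + 1]) * f(U)      # (p̂_k - p̂_{k+1}) f(U_k)
        else:
            val += pk * f(U)                        # p̂_m f(U_m)
    return val
```

For instance, with the matroid rank function f(S) = min(|S|, 2), evaluating at the characteristic vector of S = {0, 2} returns f(S) = 2, as Eq. (3) requires.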
In what follows, we assume that f is submodular. Now we introduce a continuous relaxation of the
problem (1) using the Lovász extension f̂. A polytope P ⊆ R^n is a bounded intersection of a finite
set of half-spaces; that is, P is of the form P = {x ∈ R^n : A_j^T x ≤ b_j, j = 1, · · · , m}, where
A_j is a real vector and b_j is a real scalar. According to the correspondence between discrete and
continuous functions described above, it is natural to replace the objective function f : 2^V → R and
the feasible region {S ⊆ V : |S| ≤ k} of the problem (1) by the Lovász extension f̂ : R^n → R and
a polytope D0 ⊆ R^n defined by

    D0 = {x ∈ R^n : 0 ≤ x_i ≤ 1 (i = 1, · · · , n), Σ_{i=1}^n x_i ≤ k},

respectively. The resulting problem is a convex maximization problem. For problem (1), we will use
the analogy with the way of solving the continuous problem: max{f̂(x) : x ∈ D0}. The question
is, can we solve it and how good is the solution?
2.2 Submodularity cuts
Here, we derive what we call the submodularity cut, a cutting plane that cuts off the feasible sets
with optimality guarantees using the submodularity of f, and with the help of the relationship between submodularity and convexity described in Thm. 1. Note that the algorithm using this cutting
plane, described later, converges to an optimal solution in a finite number of iterations (cf. Thm. 5).
The presented technique is essentially a discrete analog of concavity cut techniques for continuous
concave minimization, which rests on the following property (see, e.g., [11]).

Theorem 2 A convex function g : R^n → R attains its global maximum over a polytope P ⊆ R^n at
a vertex of P.
² For a submodular function, the Lovász extension (2) is known to be equal to
    f̂(p) = sup{p^T x : x ∈ B(f)}   (p ∈ R^n),
where B(f) = {x ∈ R^n : x(S) ≤ f(S) (∀S ⊆ V), x(V) = f(V)} is the base polyhedron associated with
f [15] and x(S) = Σ_{i∈S} x_i.
³ For example, in the case |V| = 6, the characteristic vector of S = {1, 3, 4} becomes I_S = (1, 0, 1, 1, 0, 0).
First, we clarify the relation between discrete and continuous problems. Let P be a polytope with
P ⊆ D0. Denote by S(P) the subsets of V whose characteristic vectors are inside of P, i.e.,
I_{S′} ∈ P for any S′ ∈ S(P), and denote by V(P) the set consisting of all vertices of P. Note
that any characteristic vector I_S ∈ P is a vertex of P. Also, there is a one-to-one correspondence
between S(D0) and V(D0). Now clearly, we have

    max{f(S′) : S′ ∈ S(P)} ≤ max{f̂(x) : x ∈ P}.                                   (4)

If we can find a subset P̃ ⊆ P on which the value of f̂ is always smaller than the currently-known
largest value, then f(S̃) for any S̃ ∈ S(P̃) is also smaller than that value. Thus, a cutting plane for the
problem max{f̂(x) : x ∈ D0} can be applied to our problem (1) through the relationship (4).
To derive the submodularity cut, we use the following definition:

Definition 3 (γ-extension) Let g : R^n → R be a convex function, x ∈ R^n, γ be a real number
satisfying γ ≥ g(x) and t > 0. Then, a point y ∈ R^n defined by the following formula is called the
γ-extension of x in direction d ∈ R^n \ {0} (with respect to g), where θ ∈ R ∪ {∞}:

    y = x + θd   with   θ = sup{t : g(x + td) ≤ γ}.                                (5)

We may have θ = ∞ depending on g and d, but this is unproblematic in practice. The γ-extension
of x ∈ R^n can be defined with respect to the Lovász extension because it is a convex function.
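Since the sup in Eq. (5) is one-dimensional and g is convex, the feasible t's form an interval containing 0, so a generic numeric γ-extension can be sketched by bisection under a cap (mirroring the large constant substituted for θ = ∞ in the experiments). This is an illustrative sketch only; Sect. 3.1 notes that the θ's actually used are available in closed form:

```python
def gamma_extension(g, x, d, gamma, t_cap=1e6, iters=60):
    """θ = sup{t >= 0 : g(x + t·d) <= γ} by bisection; assumes g(x) <= γ.
    Returns t_cap when the sup is unbounded (the θ = ∞ case)."""
    move = lambda t: [xi + t * di for xi, di in zip(x, d)]
    if g(move(t_cap)) <= gamma:
        return t_cap                     # never exceeds γ within the cap
    lo, hi = 0.0, t_cap                  # invariant: g(x + lo·d) <= γ < g(x + hi·d)
    for _ in range(iters):
        mid = (lo + hi) / 2.0
        if g(move(mid)) <= gamma:
            lo = mid
        else:
            hi = mid
    return lo
```

For g(z) = z₀² from x = 0 in direction d = e₁ with γ = 4, the bisection converges to θ = 2.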
The submodular cut algorithm is an iterative procedure. At each iteration, the algorithm keeps a
polytope P ⊆ D0, the current best function value γ, and a set S∗ ⊆ V satisfying f(S∗) = γ. We
construct a submodular cut as follows. Let v ∈ V(P) be a vertex of P such that v = I_S for some
S ∈ S(P), and let K = K(v; d_1, . . . , d_n) be a convex polyhedral cone with vertex v generated by
linearly independent vectors d_1, . . . , d_n, i.e., K = {v + t_1 d_1 + · · · + t_n d_n : t_l ≥ 0}. For each
l = 1, · · · , n, let y_l = v + θ_l d_l be the γ-extension of v in direction d_l with respect to f̂. We choose
the vectors d_1, . . . , d_n so that P ⊆ K and θ_l > 0 (cf. Sect. 3.1). These directions are not necessarily
chosen tightly on P (in fact, the directions described in Sect. 3.1 enclose not only P but also a larger set).
Since the vectors d_l are linearly independent, there exists a unique hyperplane H = H(y_1, · · · , y_n)
that contains y_l (l = 1, · · · , n), which we call a submodular cut. It is defined by (cf. Fig. 1)

    H = {x : e^T Y^{−1} x = 1 + e^T Y^{−1} v},                                      (6)

where e = (1, · · · , 1)^T ∈ R^n and Y = ((y_1 − v), · · · , (y_n − v)). The hyperplane H generates
two halfspaces H− = {x : e^T Y^{−1} x ≤ 1 + e^T Y^{−1} v} and H+ = {x : e^T Y^{−1} x ≥ 1 + e^T Y^{−1} v}.
Obviously the point v is in the halfspace H−, and moreover, we have:

Lemma 4 Let P ⊆ D0 be a polytope, γ be the current best function value, v be a vertex of P such
that v = I_S for some S ∈ S(P) and H− be the halfspace determined by the cutting plane, i.e.,
H− = {x : e^T Y^{−1} x ≤ 1 + e^T Y^{−1} v}, where Y = ((y_1 − v), · · · , (y_n − v)) and y_1, . . . , y_n are the
γ-extensions of v in linearly independent directions d_1, . . . , d_n. Then, it holds that

    f(S′) ≤ γ   for all S′ ∈ S(P ∩ H−).
Proof Since P ⊆ K = K(I_S; d_1, · · · , d_n), it follows that P ∩ H− is contained in the simplex
R = [I_S, y_1, · · · , y_n]. Since the Lovász extension f̂ is convex and the maximum of a convex
function over a compact convex set is attained at a vertex of the convex set (Thm. 2), the maximum
of f̂ over R is attained at a vertex of R. Therefore, we have

    max{f̂(x) : x ∈ P ∩ H−} ≤ max{f̂(x) : x ∈ R} = max{f̂(v), f̂(y_1), · · · , f̂(y_n)} ≤ γ.

From Eq. (4), max{f(S′) : S′ ∈ S(P ∩ H−)} ≤ max{f̂(x) : x ∈ P ∩ H−} ≤ γ.
The above lemma shows that we can cut off the feasible subsets S(P ∩ H−) from S(P) without
loss of any feasible set whose objective function value is better than γ. If S(P) = S(P ∩ H−), then
γ = max{f(S) : |S| ≤ k} is achieved. A specific way to check whether S(P) = S(P ∩ H−) will
be given in Sect. 3.2. As v ∈ S(P ∩ H−) and v ∉ S(P ∩ H+), we have

    |S(P)| > |S(P ∩ H+)|.                                                          (7)

The submodular cut algorithm updates P ← P ∩ H+ until the global optimality of γ is guaranteed.
The general description is shown in Alg. 1 (also see Fig. 2). Furthermore, the finiteness of the
algorithm is assured by the following theorem.
Figure 2: Outline of the submodularity cuts algorithm.
Algorithm 1 General description of the submodularity cuts algorithm.
 1. Compute a subset S0 s.t. |S0| ≤ k, and set a lower bound γ0 = f(S0).
 2. Set P0 ← D0, stop ← false, i ← 1 and S∗ = S0.
 3. while stop = false do
 4.    Construct, with respect to S_{i−1}, P_{i−1} and γ_{i−1}, a submodularity cut H^i.
 5.    if S(P_{i−1}) = S(P_{i−1} ∩ H−^i) then
 6.        stop ← true (S∗ is an optimal solution and γ_{i−1} the optimal value).
 7.    else
 8.        Update γ_i (using S_i and other available information) and set S∗ s.t. f(S∗) = γ_i.
 9.        Compute S_i ∈ S(P_i), and set P_i ← P_{i−1} ∩ H+^i and i ← i + 1.
10.    end if
11. end while
Theorem 5 Alg. 1 gives an optimal solution to the problem (1) in a finite number of iterations.
Proof In the beginning, |S(D0 )| is finite. In view of (7), each iteration decreases |S(P )| by at least
1. So, the number of iterations is finite.
3 Implementation
In this section, we describe a specific way to perform Alg. 1 using a binary-integer linear programming (BILP) solver. The pseudo-code of the resulting algorithm is shown in Alg. 2.
3.1 Construction of submodularity cuts
Given a vertex of a polytope P ⊆ D0, which is of the form I_S, we describe how to compute linearly
independent directions d_1, · · · , d_n for the construction of the submodularity cut at each iteration of
the algorithm (Line 4 in Alg. 1). Note that the way described here is just one option and any other
choice satisfying P ⊆ K can be substituted.

If |S| < k, then the directions d_1, . . . , d_n can be chosen as −e_l (l ∈ S) and e_l (l ∈ V \ S). Now we
focus on the case where |S| = k. Define a neighbor S_(i,j) of S as

    S_(i,j) := (S \ {i}) ∪ {j}   (i ∈ S, j ∈ V \ S).

That is, the neighbor S_(i,j) is given by replacing one of the elements of S with one of V \ S. Note
that I_{S_(i,j)} − I_S = e_j − e_i for any neighbor S_(i,j) of S. Let S_(i∗,j∗) be a neighbor that maximizes
f(S_(i,j)) among all neighbors of S. Since a subset S of size k has k · (n − k) neighbors S_(i,j)
(i ∈ S, j ∈ V \ S), this computation is O(nk). Suppose that S = {i_1, . . . , i_k} with i_1 = i∗
and V \ S = {j_{k+1}, . . . , j_n} with j_n = j∗. If f(S_(i∗,j∗)) > γ, we update γ ← f(S_(i∗,j∗)) and
S∗ ← S_(i∗,j∗). Thus, in either case it holds that γ ≥ f(S_(i∗,j∗)). As an example of the set of
directions {d_1, . . . , d_n}, we choose

    d_l = e_{j∗} − e_{i_l}   if l ∈ {1, . . . , k},
    d_l = e_{j_l} − e_{j∗}   if l ∈ {k + 1, . . . , n − 1},                         (8)
    d_l = −e_{j∗}            if l = n.
It is easy to see that d_1, . . . , d_n are linearly independent. Moreover, we obtain the following lemma:

Lemma 6 For the directions d_1, . . . , d_n defined in (8), the cone

    K(I_S; d_1, . . . , d_n) = {I_S + t_1 d_1 + · · · + t_n d_n : t_l ≥ 0}

contains the polytope D0 = {x ∈ R^n : 0 ≤ x_l ≤ 1 (l = 1, · · · , n), Σ_{l=1}^n x_l ≤ k}.

The proof of this lemma is included in the supplementary material (Sect. A). The γ-extensions, i.e.,
the θ's, in these directions can be obtained in closed form. The details of this are also included in the
supplementary material (Sect. A).
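The O(nk) neighbor scan of this subsection, which identifies the best swap (i∗, j∗) and may improve the incumbent γ, can be sketched as follows (f takes frozensets; ties are broken arbitrarily):

```python
def best_neighbor(f, S, V):
    """Scan all k·(n−k) neighbors S_(i,j) = (S \\ {i}) ∪ {j} of S and return
    the best one with its value; O(nk) evaluations of f."""
    best_S, best_val = None, float('-inf')
    for i in S:                          # element to drop
        for j in V:                      # element to add
            if j in S:
                continue
            T = (S - {i}) | {j}
            val = f(T)
            if val > best_val:
                best_S, best_val = T, val
    return best_S, best_val
```

In the algorithm, γ and S∗ would then be updated whenever the returned value exceeds the current γ.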
Algorithm 2 Pseudo-code of the submodularity cuts algorithm using BILP.
 1. Compute a subset S0 s.t. |S0| ≤ k, and set a lower bound γ0 = f(S0).
 2. Set P0 ← D0, stop ← false, i ← 1 and S∗ = S0.
 3. while stop = false do
 4.    Construct, with respect to S_{i−1}, P_{i−1} and γ_{i−1}, a submodularity cut H.
 5.    Solve the BILP problem (9) with respect to A_j and b_j (j = 1, · · · , m_k), and let the optimal
       solution and value be S_i and c∗, respectively.
 6.    if c∗ ≤ 1 + e^T Y^{−1} v_{i−1} then
 7.        stop ← true (S∗ is an optimal solution and γ_{i−1} the optimal value).
 8.    else
 9.        Update γ_i (using S_i and other available information) and set S∗ s.t. f(S∗) = γ_i.
10.        Set P_i ← P_{i−1} ∩ H+ and i ← i + 1.
11.    end if
12. end while
3.2 Stopping criterion and next starting point
Next, we address the checking of optimality, i.e., whether S(P) = S(P ∩ H−), and also finding
the next starting subset S_i (respectively, Lines 5 and 9 in Alg. 1). Let P̃ ⊆ R^n be the minimum
polytope containing S(P). Geometrically, checking S(P) = S(P ∩ H−) can be done by considering
a parallel hyperplane H∗ of H which is tangent to P̃. If H = H∗ or H∗ is given by translating
H towards v, then S(P) = S(P ∩ H−). Numerically, such a translation corresponds to linear
programming. Using Eq. (6), we obtain:

Proposition 7 Let c∗ be the optimal value of the binary integer program

    max_{x ∈ {0,1}^n} {e^T Y^{−1} x : A_j x ≤ b_j, j = 1, · · · , m_k}.             (9)

Then S(P) ⊆ H− if c∗ ≤ 1 + e^T Y^{−1} v.

Note that, if c∗ > 1 + e^T Y^{−1} v, then the optimal solution x∗ of Eq. (9) yields a subset of S(P \ H−)
which can be used as the starting subset of the next iteration (see Fig. 1).
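For small n, the test of Prop. 7 can be prototyped without a BILP solver by brute-force enumeration (the paper uses CPLEX; for brevity this sketch enforces only the cardinality constraint of D0, not the accumulated cuts A_j x ≤ b_j). Since e^T Y^{−1} x = w·x where Y^T w = e, the objective reduces to a linear form over binary points:

```python
from itertools import combinations

def solve_linear(A, b):
    """Solve A w = b by Gauss-Jordan elimination with partial pivoting (small systems)."""
    n = len(b)
    M = [list(row) + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                t = M[r][c] / M[c][c]
                M[r] = [a - t * z for a, z in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def cut_threshold(v, Ycols, k):
    """Prop. 7 by brute force: Ycols[l] is the column y_l - v, so the rows of Y^T
    are exactly Ycols, and Y^T w = e gives e^T Y^{-1} x = w·x."""
    n = len(v)
    w = solve_linear(Ycols, [1.0] * n)
    rhs = 1.0 + sum(wi * vi for wi, vi in zip(w, v))
    c_star = max(sum(w[i] for i in idx)
                 for r in range(k + 1)
                 for idx in combinations(range(n), r))
    return c_star, rhs          # the cut removes nothing more iff c_star <= rhs
```

With v = 0, Y = I and k = 1, every feasible binary point lies on or below the cut, so c∗ equals the right-hand side and the stopping condition fires.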
4 Upper bound and ε-optimal solution
Although our algorithm can find an exact solution in a finite number of iterations, the computational
cost could be expensive for a high-dimensional case. Therefore, we present here an iterative update
of an upper bound on the current solution, and thus a way to obtain an ε-optimal solution.
To this end, we combine the idea of the algorithm by Nemhauser and Wolsey [17] with our cutting
plane algorithm. Note that this hybrid approach is effective only when f is nondecreasing.

If the submodular function f : 2^V → R is nondecreasing, the submodular maximization problem
(1) can be reformulated [17] as

    max η   s.t.   η ≤ f(S) + Σ_{j ∈ V \ S} ρ_j(S) y_j   (S ⊆ V),
                   Σ_{j ∈ V} y_j = k,   y_j ∈ {0, 1}   (j ∈ V),                     (10)

where ρ_j(S) := f(S ∪ {j}) − f(S). This formulation is a MIP with regard to one continuous and
n binary variables, and has approximately 2^n constraints. The first type of constraint corresponds
to all feasible subsets S, and the number of inequalities is as large as 2^n. This approach is therefore
infeasible for certain problem sizes. Nemhauser and Wolsey [17] address this problem by adding the
constraints one by one and solving a reduced MIP problem iteratively. In the worst case, however,
the number of iterations becomes equal to the case when all constraints are added. The solution
of a maximization problem with a subset of the constraints is larger than the one with all constraints, so
this solution is guaranteed to improve (by monotonically decreasing down to
the true solution) at each iteration. In our algorithm, by contrast, the best current solution increases
monotonically to the true solution. Therefore, by adding the constraint corresponding to S_i at each
iteration of our algorithm and solving the reduced MIP above, we can evaluate an upper bound on
the current solution. Thus, we can assure the optimality of a current solution, or obtain a desired
ε-optimal solution using both the lower and upper bounds.
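The bound of Eq. (10), restricted to a finite family of already-generated constraint sets, can be sketched by brute force over y instead of a MIP (valid only for nondecreasing f, and practical only for small n; the paper solves the reduced MIP with CPLEX instead):

```python
from itertools import combinations

def nw_upper_bound(f, V, k, constraint_sets):
    """Upper bound from Eq. (10) restricted to the given constraint sets:
    max over y with sum(y) = k of min over S of f(S) + Σ_{j∉S} ρ_j(S) y_j."""
    V = list(V)
    rho = lambda j, S: f(S | {j}) - f(S)           # ρ_j(S)
    best = float('-inf')
    for chosen in combinations(V, k):              # enumerate feasible y
        y = set(chosen)
        eta = min(f(S) + sum(rho(j, S) for j in y if j not in S)
                  for S in constraint_sets)
        best = max(best, eta)
    return best
```

Adding more constraint sets can only tighten the bound, which is the monotone decrease described above; with the single constraint S = ∅, the bound is just the sum of the k largest singleton gains.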
Figure 3: Averaged computational time (log-scale) for computing exact and ε-optimal solutions by the submodularity cuts algorithm and the existing algorithm by Nemhauser and Wolsey.

Figure 4: An example of computational time (log-scale) versus the calculated upper and lower bounds.
5 Experimental Evaluation
We first empirically compare the proposed algorithm with the existing algorithm by Nemhauser and
Wolsey [17] in Sect. 5.1, and then apply the algorithm to the real-world applications of sensor placement and feature selection in text classification (Sect. 5.2 and 5.3, respectively). In the experiments,
we used the solution of a greedy algorithm as the initial subset S0. The experiments below were run
on a 2.5 GHz 64-bit workstation using Matlab and Parallel CPLEX ver. 11.2 (8 threads) through a
mex function. If θ = ∞ in Eq. (5), we set θ = θ_1, where θ_1 is large (e.g., θ_1 = 10^6).
5.1 Artificial example
Here, we empirically evaluate and illustrate the submodularity cuts algorithm (Alg. 2) with respect
to (1) computational time for exact solutions compared with the existing algorithm and (2) how
fast the algorithm can sandwich the true solution between the upper and lower bounds, using artificial datasets. The problem considered here is the K-location problem [17], i.e., the submodular
maximization problem (1) with respect to the nondecreasing submodular function

    f(S) = Σ_{i=1}^m max_{j ∈ S} c_ij,

where C = (c_ij) is an m × n nonnegative matrix and V = {1, · · · , n}. We generated several matrices
C of different size n (we fixed m = n + 1), and solved the above problem with respect to k = 5, 8 for
exact and ε-optimal solutions, using the two algorithms. The graphs in Fig. 3 show the computational
time (log-scale) for several n and k = 5, 8, where the results were averaged over 3 randomly generated
matrices C. Note that, for example, the number of combinations exceeds two hundred
million for n = 45 and k = 8. As the figure shows, the costs required by Alg. 2 were lower than those of the
existing algorithm, especially for large search spaces. This could be because the cutting-plane algorithm searches feasible subsets in an efficient manner, eliminating worse ones with the
submodularity cuts. Fig. 4 shows an example of the calculated upper and lower bounds vs.
time (k = 5 and n = 45). The lower bound is updated rarely and converges to the optimal solution
quickly, while the upper bound decreases gradually.
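The K-location objective above is straightforward to implement; a minimal sketch (with the usual convention f(∅) = 0):

```python
def k_location_value(C, S):
    """f(S) = Σ_i max_{j ∈ S} c_ij for an m×n nonnegative matrix C (rows of C),
    with f(∅) = 0."""
    if not S:
        return 0.0
    return sum(max(row[j] for j in S) for row in C)
```

For instance, with C = [[1, 2], [3, 0]], the full set {0, 1} scores max(1, 2) + max(3, 0) = 5; the function is nondecreasing and submodular since each row contributes a max over the selected columns.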
5.2 Sensor placements
Our first example with real data is the sensor placement problem, where we try to select sensor
locations to minimize the variance of observations. The dataset we used here consists of temperature measurements at discretized finite locations V obtained using the NIMS sensor node deployed at a lake
near the University of California, Merced [9, 12] (|V| = 86).⁴ As in [12], we evaluated the set of
locations S ⊆ V using the averaged variance reduction

    f(S) = Var(∅) − Var(S) = (1/n) Σ_s F_s(S),

where F_s(S) = σ_s² − σ_{s|S}² is the variance reduction and σ_{s|S}² denotes the predictive variance at location s ∈ V after observing locations S ⊆ V. This function is monotone and submodular. The
graphs in Fig. 5 show the computation time of our algorithm, and the accuracy improvement of our
calculated solution over that of the greedy algorithm (%), respectively, for ε = 0.05, 0.1, 0.2. Both
the computation time and the improvement are large at around k = 5 compared with other choices of k.
This is because the greedy solutions are good when k is either very small or large.
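The averaged variance reduction can be sketched directly from a Gaussian-process covariance matrix Sigma, using σ_{s|S}² = Σ_ss − Σ_sS Σ_SS^{−1} Σ_Ss (a hand-rolled solver stands in for a linear-algebra library here; the Merced covariance matrix itself ships with the SFO toolbox noted in the footnote):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(b)
    M = [list(row) + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c] != 0.0:
                t = M[r][c] / M[c][c]
                M[r] = [a - t * z for a, z in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def variance_reduction(Sigma, S):
    """f(S) = (1/n) Σ_s (σ_s² − σ_{s|S}²) with σ_{s|S}² = Σ_ss − Σ_sS Σ_SS^{-1} Σ_Ss."""
    n = len(Sigma)
    idx = sorted(S)
    if not idx:
        return 0.0
    K = [[Sigma[a][b] for b in idx] for a in idx]          # Σ_SS
    total = 0.0
    for s in range(n):
        ks = [Sigma[a][s] for a in idx]                    # Σ_Ss
        alpha = solve([row[:] for row in K], ks)           # Σ_SS^{-1} Σ_Ss
        total += sum(ai * ki for ai, ki in zip(alpha, ks)) # reduction at s
    return total / n
```

For a 2-location process with unit variances and correlation 0.5, observing location 0 removes all of its own variance and a quarter of the other's, giving f({0}) = (1 + 0.25)/2 = 0.625.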
⁴ The covariance matrix of the Gaussian process that models the measurements is available in the Matlab Toolbox for Submodular Function Optimization (http://www.cs.caltech.edu/~krausea/sfo/).
Figure 5: Computational time (left) and accuracy improvement over the greedy algorithm (right).
Table 1: Selected words with [the values of information gain, classification precision].

    k = 5    greedy:             (tonn, agricultur, trade, pct, market)      [2.59, 0.53]
             submodularity cuts: (week, tonn, trade, pct, washington)       [2.66, 0.58]
    k = 10   greedy:             (. . ., week, oil, price, dollar, offici)  [3.55, 0.57]
             submodularity cuts: (. . ., price, oil, bank, produc, blah)    [3.88, 0.62]
5.3 Feature selection in text classification
Our second real test case is feature selection in document classification using the Reuters-21578
dataset. We applied the greedy and submodularity cuts algorithms to the training set, which includes
7,770 documents with 5,180 words (features) and 90 categories, where we used the information
gain as a criterion [4]. Table 1 shows the words selected by the algorithms in the cases of k =
5, 10 (for the proposed algorithm, ε = 0.003 in both cases) with the values of information gain and
classification precision (tp/(tp + fp); tp: true positives, fp: false positives). For classification on the
test set (3,019 documents with 5,180 words and 90 categories), we applied a Naive Bayes classifier
with the selected features. The submodularity cuts algorithm selected several words different from
those of the greedy algorithm. We can see that the words selected by our algorithm would have high
predictive power even though the number of chosen words is very small.
6 Conclusions
In this paper, we presented a cutting-plane algorithm for submodular maximization problems, which
can be implemented as an iterative binary-integer linear programming procedure. We derived a cutting-plane procedure, called the submodularity cut, based on the submodularity of a set function
through the Lovász extension, and showed that this cut assures convergence of the algorithm to the optimum in finitely many iterations. Moreover, we presented a way to evaluate an upper bound on the optimal
value with the help of the method of Nemhauser and Wolsey [17], which enables us to ensure the accuracy of the
current best solution and to calculate an intended ε-optimal solution for a predetermined ε > 0.
Our new algorithm compared favorably against the existing algorithm on artificial
datasets, and also showed improved performance on the real-world applications of sensor placement and feature selection in text classification.

The submodular maximization problem treated in this paper covers a broad range of applications in
machine learning. In future work, we will develop frameworks with ε-optimality guarantees for
more general problem settings such as knapsack constraints [21] and submodular functions that are not
nondecreasing. This will make the submodularity cuts framework applicable to a still wider variety of
machine learning problems.
Acknowledgments
This research was supported in part by the JSPS Global COE program "Computationism as a Foundation
for the Sciences", KAKENHI (20800019 and 21680025), the JFE 21st Century Foundation, and
the Functional RNA Project of the New Energy and Industrial Technology Development Organization
(NEDO). Further support was received from a PASCAL2 grant, and by NSF grant IIS-0535100.
Also, we are very grateful to the reviewers for helpful comments.
References
[1] A. Das and D. Kempe. Algorithms for subset selection in linear regression. In R. E. Ladner and C. Dwork,
editors, Proc. of the 40th Annual ACM Symp. on Theory of Computing (STOC 2008), pages 45–54, 2008.
[2] J. Edmonds. Submodular functions, matroids, and certain polyhedra. In R. Guy, H. Hanani, N. Sauer,
and J. Schönheim, editors, Combinatorial Structures and Their Applications, pages 69–87. Gordon and
Breach, 1970.
[3] U. Feige. A threshold of ln n for approximating set cover. Journal of the ACM, 45:634–652, 1998.
[4] G. Forman. An extensive empirical study of feature selection metrics for text classification. Journal of
Machine Learning Research, 3:1289–1305, 2003.
[5] S. Fujishige. Submodular Functions and Optimization. Elsevier, second edition, 2005.
[6] F. Glover. Convexity cuts and cut search. Operations Research, 21:123–134, 1973.
[7] F. Glover. Polyhedral convexity cuts and negative edge extension. Zeitschrift für Operations Research,
18:181–186, 1974.
[8] B. Goldengorin. Maximization of submodular functions: Theory and enumeration algorithms. European
Journal of Operational Research, 198(1):102–112, 2009.
[9] T. C. Harmon, R. F. Ambrose, R. M. Gilbert, J. C. Fisher, M. Stealey, and W. J. Kaiser. High resolution
river hydraulic and water quality characterization using rapidly deployable. Technical report, CENS,
2006.
[10] S. C. H. Hoi, R. Jin, J. Zhu, and M. R. Lyu. Batch mode active learning and its application to medical
image classification. In Proc. of the 23rd Int'l Conf. on Machine Learning (ICML 2006), pages 417–424,
2006.
[11] R. Horst and H. Tuy. Global Optimization (Deterministic Approaches). Springer, 3rd edition, 1996.
[12] A. Krause, H. B. McMahan, C. Guestrin, and A. Gupta. Robust submodular observation selection. Journal
of Machine Learning Research, 9:2761–2801, 2008.
[13] A. Krause, A. Singh, and C. Guestrin. Near-optimal sensor placements in Gaussian processes: Theory,
efficient algorithms and empirical studies. Journal of Machine Learning Research, 9:235–284, 2009.
[14] H. Lee, G. L. Nemhauser, and Y. Wang. Maximizing a submodular function by integer programming:
Polyhedral results for the quadratic case. European Journal of Operational Research, 94:154–166, 1996.
[15] L. Lovász. Submodular functions and convexity. In A. Bachem, M. Grötschel, and B. Korte, editors,
Mathematical Programming: The State of the Art, pages 235–257. 1983.
[16] K. Murota. Discrete Convex Analysis, volume 10 of SIAM Monographs on Discrete Mathematics and Applications.
Society for Industrial and Applied Mathematics, 2000.
[17] G. L. Nemhauser and L. A. Wolsey. Maximizing submodular set functions: formulations and analysis of
algorithms. In P. Hansen, editor, Studies on Graphs and Discrete Programming, volume 11 of Annals of
Discrete Mathematics. 1981.
[18] G. L. Nemhauser and L. A. Wolsey. Integer and Combinatorial Optimization. Wiley-Interscience, 1988.
[19] G. L. Nemhauser, L. A. Wolsey, and M. L. Fisher. An analysis of approximations for maximizing
submodular set functions I. Mathematical Programming, 14:265–294, 1978.
[20] M. Porembski. Finitely convergent cutting planes for concave minimization. Journal of Global Optimization, 20(2):109–132, 2001.
[21] M. Sviridenko. A note on maximizing a submodular set function subject to a knapsack constraint. Operations Research Letters, 32(1):41–43, 2004.
[22] M. Thoma, H. Cheng, A. Gretton, J. Han, H. P. Kriegel, A. J. Smola, L. Song, P. S. Yu, X. Yan, and K. M.
Borgwardt. Near-optimal supervised feature selection among frequent subgraphs. In Proc. of the 2009
SIAM Conference on Data Mining (SDM 2009), pages 1075–1086, 2009.
[23] H. Tuy. Concave programming under linear constraints. Soviet Mathematics Doklady, 5:1437–1440,
1964.
infeasible:2 allow:1 neighbor:6 matroids:1 tolerance:1 regard:1 ghz:1 calculated:3 world:2 concavity:1 horst:1 compact:1 cutting:16 keep:1 dealing:1 hanani:1 global:5 active:3 ver:1 conclude:1 xi:3 continuous:11 iterative:5 search:3 kiyohito:1 table:4 yoshinobu:1 reasonably:2 robust:1 operational:2 improving:1 alg:8 necessarily:2 european:2 assured:1 substituted:1 da:1 pk:1 main:1 linearly:5 s2:1 reuters:1 unproblematic:1 edition:2 fig:6 tl:2 deployed:1 wiley:1 precision:2 xl:2 mcmahan:1 pe:2 theorem:4 formula:1 down:1 specific:3 showing:1 gupta:1 dl:4 exists:1 false:3 adding:2 nk:2 entropy:2 intersection:1 contained:1 scalar:1 springer:1 corresponds:2 satisfies:2 acm:2 prop:1 sized:1 formulated:1 towards:1 jeff:1 replace:1 price:2 feasible:8 hard:3 fisher:2 included:2 determined:1 hyperplane:3 lemma:5 called:5 experimental:1 rarely:1 select:2 support:1 dept:2 d1:14 |
3,049 | 3,763 | Matrix Completion from Power-Law Distributed
Samples
Raghu Meka, Prateek Jain, and Inderjit S. Dhillon
Department of Computer Sciences
University of Texas at Austin
Austin, TX 78712
{raghu,pjain,inderjit}@cs.utexas.edu
Abstract
The low-rank matrix completion problem is a fundamental problem with many
important applications. Recently, [4],[13] and [5] obtained the first non-trivial
theoretical results for the problem assuming that the observed entries are sampled
uniformly at random. Unfortunately, most real-world datasets do not satisfy this
assumption, but instead exhibit power-law distributed samples. In this paper, we
propose a graph theoretic approach to matrix completion that solves the problem
for more realistic sampling models. Our method is simpler to analyze than previous methods with the analysis reducing to computing the threshold for complete
cascades in random graphs, a problem of independent interest. By analyzing the
graph theoretic problem, we show that our method achieves exact recovery when
the observed entries are sampled from the Chung-Lu-Vu model, which can generate power-law distributed graphs. We also hypothesize that our algorithm solves
the matrix completion problem from an optimal number of entries for the popular preferential attachment model and provide strong empirical evidence for the
claim. Furthermore, our method is easy to implement and is substantially faster
than existing methods. We demonstrate the effectiveness of our method on random instances where the low-rank matrix is sampled according to the prevalent
random graph models for complex networks and present promising preliminary
results on the Netflix challenge dataset.
1 Introduction
Completing a matrix from a few given entries is a fundamental problem with many applications in
machine learning, statistics, and compressed sensing. Since completion of arbitrary matrices is not
a well-posed problem, it is often assumed that the underlying matrix comes from a restricted class.
Here we address the matrix completion problem under the natural assumption that the underlying
matrix is low-rank.
Formally, for an unknown matrix M ∈ R^{m×n} of rank at most k, given Ω ⊆ [m] × [n], P_Ω(M)¹ and k, the low-rank matrix completion problem is to find a matrix X ∈ R^{m×n} such that

    rank(X) ≤ k   and   P_Ω(X) = P_Ω(M).    (1.1)
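As a concrete illustration (with toy data of our own, not from the paper), the projection P_Ω simply masks out the unobserved entries:

```python
import numpy as np

def project_omega(M, omega):
    """P_Omega: keep the entries of M indexed by omega, zero out the rest."""
    P = np.zeros_like(M)
    for (i, j) in omega:
        P[i, j] = M[i, j]
    return P

# Toy rank-1 matrix observed on three entries.
M = np.outer([1.0, 2.0], [3.0, 4.0, 5.0])   # [[3, 4, 5], [6, 8, 10]]
omega = {(0, 0), (0, 2), (1, 1)}
P = project_omega(M, omega)
```

The completion problem asks for a rank-k matrix agreeing with M on exactly these masked entries.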
Recently, Candes and Recht [4], Keshavan et al. [13], and Candes and Tao [5] obtained the first non-trivial
guarantees for the above problem under a few additional assumptions on the matrix M and the set of
known entries Ω. At a high level, the assumptions made in the above papers can be stated as follows.
A1 M is incoherent, in the sense that the singular vectors of M are not correlated with the
standard basis vectors.
¹ Throughout this paper P_Ω : R^{m×n} → R^{m×n} will denote the projection of a matrix onto the pairs of indices in Ω: (P_Ω(X))_{ij} = X_{ij} for (i, j) ∈ Ω and (P_Ω(X))_{ij} = 0 otherwise.
A2 The observed entries are sampled uniformly at random.
In this work we address some of the issues with assumption [A2]. For Ω ⊆ [m] × [n], let the sampling graph G_Ω = (U, V, Ω) be the bipartite graph with vertices U = {u_1, ..., u_m}, V = {v_1, ..., v_n} and edges given by the ordered pairs in Ω². Then, assumption [A2] can be reformulated as follows:
A3 The sampling graph G_Ω is an Erdős–Rényi random graph³.
A prominent feature of Erdős–Rényi graphs is that the degrees of vertices are Poisson distributed and are sharply concentrated about their mean. The techniques of [4, 5], [13], as will be explained later, crucially rely on these properties of Erdős–Rényi graphs. However, for most large real-world graphs such as the World Wide Web ([1]), the degree distribution deviates significantly from the Poisson distribution and has high variance. In particular, most large matrix-completion datasets such as the much publicized Netflix prize dataset and the Yahoo Music dataset exhibit power-law distributed degrees, i.e., the number of vertices of degree d is proportional to d^{−β} for a constant β (Figure 1).
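The sampling graph's degree distribution can be read directly off the list of observed (row, column) pairs; a small sketch with made-up ratings data:

```python
from collections import Counter

# Hypothetical (user, item) observations, i.e. the edge list of the sampling graph.
ratings = [(0, 10), (0, 11), (1, 10), (2, 10), (2, 12), (3, 10)]

item_degree = Counter(item for _, item in ratings)   # degree of each right vertex
degree_hist = Counter(item_degree.values())          # how many items have degree d
print(dict(degree_hist))                             # {4: 1, 1: 2}
```

For a real dataset, plotting degree_hist on log-log axes is the standard quick check for power-law behaviour.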
In this paper, we overcome some of the shortcomings of assumption [A3] above by considering
more realistic random graph models for the sampling graph G_Ω. We propose a natural graph theoretic approach for matrix completion (referred to as ICMC, for information cascading matrix completion) that we prove can handle sampling graphs with power-law distributed degrees. Our approach
is motivated by the models for information cascading in social networks proposed by Kempe et
al. [11, 12]. Moreover, the analysis of ICMC reduces to the problem of finding density thresholds
for complete cascades in random graphs, a problem of independent interest.
By analyzing the threshold for complete cascades in the random graph model of Chung, Lu & Vu
[6] (CLV model), we show that ICMC solves the matrix completion problem for sampling graphs
drawn from the CLV model. The bounds we obtain for matrix-completion on the CLV model are
incomparable to the main results of [4, 5, 13]. The methods of the latter papers do not apply to
models such as the CLV model that generate graphs with skewed degrees. On the other hand, for
Erdős–Rényi graphs the density requirements for ICMC are stronger than those of the above papers.
We also empirically investigate the threshold for complete cascading in other popular random graph
models such as the preferential attachment model [1], the forest-fire model [17] and the affiliation
networks model [16]. The empirical estimates we obtain for the threshold for complete cascading
in the preferential attachment model strongly suggest that ICMC solves the exact matrix-completion
problem from an optimal number of entries for sampling procedures with preferential attachment.
Our experiments demonstrate that for sampling graphs drawn from more realistic models such as
the preferential attachment, forest-fire and affiliation network models, ICMC outperforms the methods of [4, 5, 3, 13], in both accuracy and time, by an order of magnitude.
In summary, our main contributions are:
• We formulate the sampling process in matrix completion as generating random graphs (G_Ω) and demonstrate that the sampling assumption [A3] does not hold for real-world datasets.
• We propose a novel graph theoretic approach to matrix completion (ICMC) that extensively uses the link structure of the sampling graph. We emphasize that previously none of the methods exploited the structure of the sampling graph.
• We prove that our method solves the matrix completion problem exactly for sampling graphs generated from the CLV model, which can generate power-law distributed graphs.
• We empirically evaluate our method on more complex random graph models and on the Netflix Challenge dataset, demonstrating the effectiveness of our method over those of [4, 5, 3, 13].
2 Previous Work and Preliminaries
The Netflix challenge has recently drawn much attention to the low-rank matrix completion problem. Most methods for matrix completion and the more general rank minimization problem with
affine constraints are based on either relaxing the non-convex rank function to a convex function
or assuming a factorization of the matrix and optimizing the resulting non-convex problem using
alternating minimization and its variants [2, 15, 18].
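To make the second family concrete, here is a minimal masked alternating least squares sketch; it is our own illustration (the regularization lam, iteration count and problem sizes are arbitrary choices, not taken from the cited papers):

```python
import numpy as np

def als_complete(M, mask, k, lam=0.1, iters=50, seed=0):
    """Masked alternating least squares for low-rank completion (sketch).
    M: matrix whose entries are trusted only where mask == 1."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    X = rng.standard_normal((m, k))
    Y = rng.standard_normal((n, k))
    for _ in range(iters):
        for i in range(m):                      # fix Y, ridge-solve rows of X
            obs = mask[i] == 1
            A = Y[obs]
            X[i] = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ M[i, obs])
        for j in range(n):                      # fix X, ridge-solve rows of Y
            obs = mask[:, j] == 1
            A = X[obs]
            Y[j] = np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ M[obs, j])
    return X @ Y.T

# Rank-2 ground truth with roughly 60% of entries observed.
rng = np.random.default_rng(1)
M_true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
mask = (rng.random((20, 15)) < 0.6).astype(int)
M_hat = als_complete(M_true, mask, k=2)
rel_err = np.linalg.norm(M_hat - M_true) / np.linalg.norm(M_true)
```

The non-convexity of this objective is exactly why such heuristics are hard to analyze, which motivates the graph-theoretic approach of this paper.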
² We will often abuse notation and identify edges (u_i, v_j) with ordered pairs (i, j).
³ We consider the Erdős–Rényi model, where each edge (u_i, v_j) ∈ E appears independently with probability p for (i, j) ∈ [m] × [n], and p is the density parameter.
Until recently, most methods for rank minimization subject to affine constraints were heuristic in
nature, with few known rigorous guarantees. In a recent breakthrough, Recht et al. [20] extend the techniques of compressed sensing to rank minimization with affine constraints. However, the results of Recht et al. do not apply to the case of matrix completion, as the constraints in matrix completion do not satisfy the restricted isometry property they assume.
Building on the work of Recht et al. [20], Candes and Recht [4] and Candes and Tao [5] showed that
minimizing the trace-norm recovers the unknown low-rank matrix exactly under certain conditions.
However, these approaches require the observed entries to be sampled uniformly at random and as
suggested by our experiments, do not work well when the observed entries are not drawn uniformly.
Independent of [4, 5], Keshavan et al. [13] also obtained similar results for matrix completion using
different techniques that generalize the works of Friedman et al. [9], Feige and Ofek [8] on the
spectrum of random graphs. However, the results of [13] crucially rely on the regularity of Erdős–Rényi graphs and do not extend to sampling graphs with skewed degree distributions, even for rank one matrices. This is mainly because the results of Friedman et al. and Feige and Ofek on the spectral gap of Erdős–Rényi graphs do not hold for graph models with skewed expected degrees (see
[6, 19]).
We also remark that several natural variants of the trimming phase of [8] and [13] did not improve
the performance in our experiments. A similar observation was made in [19], [10] who address the
problem of re-weighting the edges of graphs with skewed degrees in the context of LSA.
2.1 Random Graph Models
We focus on four popular models of random graphs all of which can generate graphs with power-law
distributed degrees. In contrast to the common descriptions of the models, we need to work with
bipartite graphs; however, the models we consider generalize naturally to bipartite graphs. Due to
space limitations we only give a brief description of the Chung et al. [6] model, and refer to the original
papers for the preferential attachment [1], forest-fire [17] and affiliation networks [16] models.
The CLV model [6] generates graphs with arbitrary expected degree sequences p_1, ..., p_m, q_1, ..., q_n with p_1 + ... + p_m = q_1 + ... + q_n = w. In the model, a bipartite graph G = (U, V, E) with U = {u_1, ..., u_m}, V = {v_1, ..., v_n} is generated by independently placing an edge between vertices u_i, v_j with probability p_i q_j / w for all i ∈ [m], j ∈ [n]. We define the density of an instance of the CLV model to be the expected average degree (p_1 + ... + p_m)/(mn) = w/mn.
The CLV model is more general than the standard Erdős–Rényi model, with the case p_i = np, q_j = mp corresponding to the standard Erdős–Rényi model with density p for bipartite random graphs.
Further, by choosing weights that are power-law distributed, the CLV model can generate graphs
with power-law distributed degrees, a prominent feature of real-world graphs.
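Sampling a CLV instance is straightforward; the sketch below is our own illustration, with arbitrary weight choices (power-law left weights, uniform right weights):

```python
import numpy as np

def sample_clv_bipartite(p, q, seed=0):
    """Sample a bipartite CLV graph: edge (u_i, v_j) appears independently
    with probability p_i * q_j / w, where w = sum(p) = sum(q)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    w = p.sum()
    prob = np.clip(np.outer(p, q) / w, 0.0, 1.0)
    rng = np.random.default_rng(seed)
    return rng.random(prob.shape) < prob        # boolean biadjacency matrix

# Power-law left weights, uniform right weights, equal total weight w.
m, n, w = 200, 300, 3000.0
p = np.arange(1, m + 1, dtype=float) ** -0.5
p *= w / p.sum()
q = np.full(n, w / n)
G = sample_clv_bipartite(p, q, seed=0)
n_edges = int(G.sum())          # expected number of edges is w = 3000
```

With these weights no entry of the probability matrix exceeds 1, so the clip is a no-op; it is kept as a safeguard for more skewed weight choices.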
3 Matrix Completion from Information Cascading
We now present our algorithm ICMC. Consider the following standard formulation of the low-rank
matrix completion problem: Given k, Ω, and P_Ω(M) for a rank-k matrix M, find X, Y such that

    P_Ω(XY^T) = P_Ω(M),   X ∈ R^{m×k}, Y ∈ R^{n×k}.    (3.1)
Note that given X we can find Y and vice versa by solving a linear least squares regression problem. This observation is the basis for the popular alternate minimization heuristic and its variants
which outperform most methods in practice. However, analyzing the performance of alternate minimization is a notoriously hard problem. Our algorithm can be seen as a more refined version of the
alternate minimization heuristic that is more amenable to analysis. We assume that the target matrix
M is non-degenerate in the following sense.
Definition 3.1 A rank-k matrix Z is non-degenerate if there exist X ∈ R^{m×k}, Y ∈ R^{n×k} with Z = XY^T such that any k rows of X are linearly independent and any k rows of Y are linearly independent.
Though reminiscent of the incoherence property used by Candes and Recht and by Keshavan et al., non-degeneracy appears to be incomparable to the incoherence property used in the above works. Observe that a random low-rank matrix is almost surely non-degenerate.
Our method progressively computes rows of X and Y so that Equation (3.1) is satisfied. Call a vertex u_i ∈ U infected if the i-th row of X has been computed (the term infected is used to reflect that infection spreads by contact, as in an epidemic). Similarly, call a vertex v_j ∈ V infected if the j-th row of Y has been computed. Suppose that at an intermediate iteration, vertices L ⊆ U and R ⊆ V are marked as infected. That is, the rows of X with indices in L and the rows of Y with indices in R have been computed exactly.
Now, for an uninfected j ∈ [n], to compute the corresponding row of Y, y_j^T ∈ R^k, we only need k independent linear equations. Thus, if M is non-degenerate, to compute y_j^T we only need k entries of the j-th column of M with row indices in L. Casting the condition in terms of the sampling graph G_Ω, y_j^T can be computed and vertex v_j ∈ V marked as infected if there are at least k edges from v_j to infected vertices in L. Analogously, x_i^T can be computed and the vertex u_i ∈ U marked as infected if there are at least k edges from u_i to previously infected vertices R.
Observe that M = XY^T = (XW)(W^{−1}Y^T) for any invertible matrix W ∈ R^{k×k}. Thus, for non-degenerate M, without loss of generality a set of k rows of X can be fixed to be the k × k identity matrix I_k. This suggests the following cascading procedure for infecting vertices in G_Ω and progressively computing the rows of X, Y. Here L_0 ⊆ U with |L_0| = k.
ICMC(G_Ω, P_Ω(M), L_0):
1. Start with initially infected sets L = L_0 ⊆ U, R = ∅. Set the k × k sub-matrix of X with rows in L_0 to be I_k.
2. Repeat until convergence:
(a) Mark as infected all uninfected vertices in V that have at least k edges to previously infected vertices L, and add the newly infected vertices to R.
(b) For each newly infected vertex v_j ∈ R, compute the j-th row of Y using the observed entries of M corresponding to edges from v_j to L.
(c) Mark as infected all uninfected vertices in U that have at least k edges to previously infected vertices R, and add the newly infected vertices to L.
(d) For each newly infected vertex u_i ∈ L, compute the i-th row of X using the observed entries of M corresponding to edges from u_i to R.
3. Output M̂ = XY^T.
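The procedure above translates almost line-for-line into code. The following is our own simplified sketch, assuming noiseless observations and that every k × k system encountered is invertible (which non-degeneracy ensures):

```python
import numpy as np

def icmc(M_obs, omega, k, L0):
    """Sketch of ICMC for a noiseless, non-degenerate rank-k matrix.
    M_obs holds the observed entries; omega is a set of observed (i, j)
    pairs; L0 lists the k initially infected row indices (rows of X = I_k)."""
    m, n = M_obs.shape
    X, Y = np.zeros((m, k)), np.zeros((n, k))
    X[list(L0)] = np.eye(k)
    L, R = set(L0), set()
    col_nbrs = [[i for i in range(m) if (i, j) in omega] for j in range(n)]
    row_nbrs = [[j for j in range(n) if (i, j) in omega] for i in range(m)]
    changed = True
    while changed:
        changed = False
        for j in sorted(set(range(n)) - R):          # steps 2(a), 2(b)
            known = [i for i in col_nbrs[j] if i in L][:k]
            if len(known) == k:                      # k equations available
                Y[j] = np.linalg.solve(X[known], M_obs[known, j])
                R.add(j); changed = True
        for i in sorted(set(range(m)) - L):          # steps 2(c), 2(d)
            known = [j for j in row_nbrs[i] if j in R][:k]
            if len(known) == k:
                X[i] = np.linalg.solve(Y[known], M_obs[i, known])
                L.add(i); changed = True
    return X @ Y.T, L, R

# Demo: fully observed rank-2 matrix; the cascade infects everything.
rng = np.random.default_rng(0)
M = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 6))
omega = {(i, j) for i in range(6) for j in range(6)}
M_hat, L, R = icmc(M, omega, k=2, L0=[0, 1])
```

If the cascade stalls before infecting everything, the returned sets L and R identify the recovered rows and columns; Section 5 describes the alternating least squares fallback for the rest.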
We abstract the cascading procedure from above using the framework of Kempe et al. [11] for information cascades in social networks. Let G = (W, E) be an undirected graph and fix A ⊆ W, k > 0. Define σ_{G,k}(A, 0) = A and for t ≥ 0 define σ_{G,k}(A, t+1) inductively by

    σ_{G,k}(A, t+1) = σ_{G,k}(A, t) ∪ {u ∈ W : u has at least k edges to σ_{G,k}(A, t)}.

Definition 3.2 The influence of a set A ⊆ W, σ_{G,k}(A), is the number of vertices infected by the cascading process upon termination when starting at A. That is, σ_{G,k}(A) = |∪_t σ_{G,k}(A, t)|. We say A is completely cascading of order k if σ_{G,k}(A) = |W|.
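σ_{G,k}(A) can be computed in time linear in the number of edges with a queue-based graph search; a sketch (our own illustration):

```python
from collections import deque

def influence(adj, A, k):
    """sigma_{G,k}(A): size of the final infected set of the k-threshold
    cascade started at A. adj maps each vertex to its neighbours; runs in
    time linear in the number of edges."""
    infected = set(A)
    hits = {}                       # infected-neighbour counts
    queue = deque(A)
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v in infected:
                continue
            hits[v] = hits.get(v, 0) + 1
            if hits[v] >= k:        # v now has k infected neighbours
                infected.add(v)
                queue.append(v)
    return len(infected)

adj = {0: [2], 1: [2], 2: [0, 1, 3, 4], 3: [2, 4], 4: [2, 3]}
s1 = influence(adj, [0, 1], 2)      # cascade stalls: only {0, 1, 2}
adj2 = {0: [2, 3], 1: [2], 2: [0, 1, 3, 4], 3: [0, 2, 4], 4: [2, 3]}
s2 = influence(adj2, [0, 1], 2)     # one extra edge completes the cascade
```

Because the cascade is monotone, the final infected set does not depend on the order in which vertices are processed.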
We remark that using a variant of the standard depth-first search algorithm, the cascading process
above can be computed in linear time for any set A. From the discussion preceding ICMC it follows
that ICMC recovers M exactly if the cascading process starting at L_0 infects all vertices of G_Ω, and we get the following theorem.
Theorem 3.1 Let M be a non-degenerate matrix of rank k. Then, given G_Ω = (U, V, Ω), P_Ω(M) and L_0 ⊆ U with |L_0| = k, ICMC(G_Ω, P_Ω(M), L_0) recovers the matrix M exactly if L_0 is a completely cascading set of order k in G_Ω.
Thus, we have reduced the matrix-completion problem to the graph-theoretic problem of finding
a completely cascading set (if it exists) in a graph. A more general case of the problem, finding a set of vertices that maximizes influence, was studied by Kempe et al. [11] for more general cascading processes. They show the general problem of maximizing influence to be NP-hard and give
approximation algorithms for several classes of instances.
However, it appears that for most reasonable random graph models, the highest degree vertices have
large influence with high probability. In the following we investigate completely cascading sets
in random graphs and show that for CLV graphs, the k highest degree vertices form a completely
cascading set with high probability.
4 Information Cascading in Random Graphs
We now show that for sufficiently dense CLV graphs and fixed k, the k highest degree vertices form
a completely cascading set with high probability.
Theorem 4.1 For every α > 0, there exists a constant c(α) such that the following holds. Consider an instance of the CLV model given by weights p_1, ..., p_m, q_1, ..., q_n with density p and min(p_i, q_j) ≥ c(α) k log n / p^k. Then, for G = (U, V, E) generated from the model, the k highest degree vertices of U form a completely cascading set of order k with probability at least 1 − n^{−α}.
Proof sketch. We will show that the highest weight vertices L_0 = {u_1, ..., u_k} form a completely cascading set with high probability; the theorem follows from the above statement and the observation that the highest degree vertices of G will almost surely correspond to vertices with large weights in the model; we omit these details for lack of space. Let w = Σ_i p_i = Σ_j q_j = mnp and m ≤ n.

Fix a vertex u_i ∉ L_0 and consider an arbitrary vertex v_j ∈ V. Let P_j^i be the indicator variable that is 1 if (u_i, v_j) ∈ E and v_j is connected to all vertices of L_0. Note that vertex u_i will be infected after two rounds by the cascading process starting at L_0 if Σ_j P_j^i ≥ k. Now, Pr[P_j^i = 1] = (p_i q_j / w) ∏_{1≤l≤k} (p_l q_j / w), and

    E[P_1^i + ... + P_n^i] = Σ_{j=1}^{n} (p_i q_j / w) ∏_{l≤k} (p_l q_j / w) = (p_i / w^{k+1}) (∏_{l≤k} p_l) Σ_{j=1}^{n} q_j^{k+1}.    (4.1)

Observe that Σ_i p_i = w ≤ nk + p_k (m − k). Thus, p_k ≥ (w − nk)/(m − k). Now, using the power-mean inequality we get

    q_1^{k+1} + q_2^{k+1} + ... + q_n^{k+1} ≥ n ((q_1 + ... + q_n)/n)^{k+1} = n (w/n)^{k+1},    (4.2)

with equality occurring only if q_j = w/n for all j. From Equations (4.1), (4.2) we have

    E[P_1^i + ... + P_n^i] ≥ p_i ((w − nk)/(m − k))^k (1/n)^k = p_i (1 − nk/w)^k (1 − k/m)^{−k} (w/mn)^k.    (4.3)

It is easy to check that under our assumptions, w ≥ nk² and m ≥ k². Thus, (1 − nk/w)^k ≥ 1/e and (1 − k/m)^{−k} ≥ 1/2e. From Equation (4.3) and our assumption p_i ≥ c(α) k log n / p^k, we get E[P_1^i + ... + P_n^i] ≥ c(α) k log n / 4e².

Now, since the indicator variables P_1^i, ..., P_n^i are independent of each other, using the above lower bound for the expectation of their sum and Chernoff bounds we get Pr[P_1^i + ... + P_n^i ≤ k] ≤ exp(−Ω(c(α) log n)). Thus, for a sufficiently large constant c(α), the probability that the vertex u_i is uninfected after two rounds is Pr[P_1^i + ... + P_n^i ≤ k] ≤ 1/(2m^{α+1}). By taking a union bound over all vertices u_{k+1}, ..., u_m, the probability that there is an uninfected vertex in the left partition after two steps of cascading starting from L_0 is at most 1/(2m^α). The theorem now follows by observing that if the left partition is completely infected, for a suitably large constant c(α), all vertices in the right will be infected with probability at least 1 − 1/(2m^α), as q_j ≥ c(α) k log n. □
Combining the above with Theorem 3.1 we obtain exact matrix-completion for sampling graphs
drawn from the CLV model.
Theorem 4.2 Let M be a non-degenerate matrix of rank k. Then, for sampling graphs G_Ω generated from a CLV model satisfying the conditions of Theorem 4.1, ICMC recovers the matrix M exactly with high probability.
Remark: The above results show exact recovery for CLV graphs with densities up to n^{−1/k} = o(1). As mentioned in the introduction, the above result is incomparable to the main results of [4, 5], [13]. The main bottleneck for the density requirements in the proof of Theorem 4.1 is Equation (4.2), relating Σ_j q_j^{k+1} to (Σ_j q_j)^{k+1}, where we used the power-mean inequality. However, when the
expected degrees q_j are skewed, say with a power-law distribution, it should be possible to obtain
much better bounds than those of Equation (4.2), hence also improving the density requirements.
Thus, in a sense the Erdős–Rényi graphs are the worst-case examples for our analysis.
Our empirical simulations also suggest that completely cascading sets are more likely to exist in random graph models with power-law distributed expected degrees than in Erdős–Rényi graphs. Intuitively, this is for the following reasons.
• In graphs with power-law distributed degrees, the high degree vertices have much higher degrees than the average degree of the graph. So, infecting the highest degree vertices is more likely to infect more vertices in the first step.
• More importantly, as observed in the seminal work of Kleinberg [14], in most real-world graphs there are a small number of vertices (hubs) that have much higher connectivity than most vertices. Thus, infecting the hubs is likely to infect a large fraction of vertices.
Thus, we expect ICMC to perform better on models that are closer to real-world graphs and have
power-law distributed degrees. In particular, as strongly supported by experiments (see Figure 3),
we hypothesize that ICMC solves exact matrix completion from an almost optimal number of entries
for sampling graphs drawn from the preferential attachment model.
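One way to probe this hypothesis empirically is the rough, self-contained sketch below; the simplified generator and all parameter values are our own illustrations, not the paper's experimental setup:

```python
import random
from collections import deque

def pa_bipartite(m, n, k1, seed=0):
    """Toy bipartite preferential attachment: each left vertex attaches k1
    edges to right vertices chosen proportionally to (degree + 1).
    A simplified stand-in for the model of [1], for illustration only."""
    rng = random.Random(seed)
    deg = [1] * n                                   # +1 smoothing
    adj = {('L', i): set() for i in range(m)}
    adj.update({('R', j): set() for j in range(n)})
    for i in range(m):
        for _ in range(k1):
            r = rng.uniform(0, sum(deg))
            j, acc = 0, deg[0]
            while acc < r:
                j += 1
                acc += deg[j]
            adj[('L', i)].add(('R', j))
            adj[('R', j)].add(('L', i))
            deg[j] += 1
    return adj

def cascades_completely(adj, seeds, k):
    infected, hits, q = set(seeds), {}, deque(seeds)
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v in infected:
                continue
            hits[v] = hits.get(v, 0) + 1
            if hits[v] >= k:
                infected.add(v)
                q.append(v)
    return len(infected) == len(adj)

k = 3
adj = pa_bipartite(m=150, n=150, k1=4 * k, seed=1)
top = sorted((v for v in adj if v[0] == 'L'),
             key=lambda v: len(adj[v]), reverse=True)[:k]
ok = cascades_completely(adj, top, k)
print(ok)
```

Repeating this over many seeds while varying k1 gives a crude estimate of the cascading threshold as a function of k.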
Conjecture 4.3 There exists a universal constant C such that for all k ≥ 1, k_1, k_2 ≥ Ck the following holds. For G = (U, V, E) generated from the preferential attachment model with parameters m, n, k_1, k_2, the k highest degree vertices of U form a completely cascading set of order k with high probability.
If true, the above combined with Theorem 3.1 would imply the following.
Conjecture 4.4 Let M be a non-degenerate matrix of rank k. Then, for sampling graphs G_Ω generated from a PA model with parameters k_1, k_2 ≥ Ck, ICMC recovers the matrix M exactly with high probability.
Remark: To solve the matrix completion problem we need to sample at least (m + n)k entries. Thus, the bounds above are optimal up to a constant factor. Moreover, the bounds above are stronger than those obtainable, even information-theoretically, for Erdős–Rényi graphs, as for Erdős–Rényi graphs we need to sample Ω(n log n) entries even for k = 1.
5 Experimental Results
We first demonstrate that for many real-world matrix completion datasets, the observed entries are
far from being sampled uniformly with the sampling graph having power-law distributed degrees.
We then use various random graph models to compare our method against the trace-norm based
singular value thresholding algorithm of [3], the spectral matrix completion algorithm (SMC) of
[13] and the regularized alternating least squares minimization (ALS) heuristic. Finally, we present
empirical results on the Netflix challenge dataset. For comparing with SVT and SMC, we use the
code provided by the respective authors, while we use our own implementation for ALS. Below we
provide a few implementation details for our algorithm ICMC.
Implementation Details
Consider step 2(b) of our algorithm ICMC. Let L_j be the set of vertices in L that have an edge to v_j, L_j^k be any size-k subset of L_j, and let X(L_j^k, :) be the sub-matrix of X containing the rows corresponding to vertices in L_j^k. If the underlying matrix is indeed low-rank and there is no noise in the observed entries, then for a newly infected vertex v_j, the corresponding row of Y, y_j^T, can be computed by solving the linear system M(L_j^k, j) = X(L_j^k, :) y_j. To account for noise in measurements, we compute y_j by solving the regularized least squares problem y_j = argmin_y ||M(L_j, j) − X(L_j, :) y||_2² + λ||y||_2², where λ is a regularization parameter. Similarly, we compute x_i^T by solving x_i = argmin_x ||M(i, R_i)^T − Y(R_i, :) x||_2² + λ||x||_2².
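Each such update is a small ridge regression with a closed-form solution; a sketch with toy numbers (the dimensions and λ below are illustrative):

```python
import numpy as np

def ridge_row(A, b, lam):
    """argmin_y ||b - A y||_2^2 + lam * ||y||_2^2, in closed form."""
    k = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(k), A.T @ b)

# Toy numbers: recover a row of Y from 8 noisy observed column entries.
rng = np.random.default_rng(0)
X_L = rng.standard_normal((8, 3))                  # factors of infected rows
y_true = np.array([1.0, -2.0, 0.5])
b = X_L @ y_true + 0.01 * rng.standard_normal(8)   # noisy observations
y_hat = ridge_row(X_L, b, lam=1e-3)
```

With λ > 0 the system is always well-posed, even when fewer than k independent equations happen to be available.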
Note that if ICMC fails to infect all the vertices, i.e. L ⊊ U or R ⊊ V, then rows of X and Y will not be computed for vertices in U\L and V\R. Let X = [X_L ; X_L'], where X_L is the set of computed rows of X (for vertices in L) and X_L' denotes the remaining rows of X. Similarly, let Y = [Y_R ; Y_R']. We estimate X_L' and Y_R' using an alternating least squares based heuristic that solves the following:

    min_{X_L', Y_R'}  || P_Ω( M − [X_L ; X_L'] [Y_R ; Y_R']^T ) ||_F² + λ ||X_L'||_F² + λ ||Y_R'||_F²,
Figure 1: Cumulative degree distribution of (a) movies, (b) users (Netflix dataset) and (c) artists,
(d) users (Yahoo Music dataset). Note that the degree distributions in all four cases closely follow a power-law distribution and deviate heavily from the Poisson distribution, which is assumed by SVT [3] and SMC [13].
Figure 2: Results on synthetic datasets for fixed sampling density with sampling graph coming from
different graph models: (a) Erdős–Rényi model, (b) Chung–Lu–Vu model, (c) preferential attachment model, and (d) forest-fire model. Note that for the three power-law-generating models our method (ICMC) achieves considerably lower RMSE than the existing methods.
(a) Erdős–Rényi graphs
n       SMC      SVT     ALS    ICMC
500     45.51     8.88   1.09    1.28
1000    93.85    17.07   2.39    3.30
1500   214.65    38.81   4.85    6.28
2000   343.76    59.88   7.20    9.89

(b) Chung–Lu–Vu graphs
n       SMC      SVT     ALS    ICMC
500     35.32    14.69   1.24    0.49
1000   144.19    17.55   2.24    2.02
1500   443.48    30.99   3.89    3.91
2000   836.99    46.69   5.67    5.50

(c) Preferential attachment graphs
n       SMC      SVT     ALS    ICMC
500     15.05    14.40   3.97    1.94
1000    67.96    16.49   5.06    2.01
1500   178.35    24.48   9.83    3.65
2000   417.54    32.06  15.07    7.46

(d) Forest-fire graphs
n       SMC      SVT     ALS    ICMC
500     22.63     5.53   0.57    0.39
1000    85.26    11.32   1.75    1.23
1500   186.81    21.39   3.30    2.99
2000   350.98    27.37   4.84    5.06

Table 1: Time required (in seconds) by various methods on synthetic datasets for fixed sampling density with sampling graph coming from different graph models: (a) Erdős–Rényi model, (b) Chung–Lu–Vu model, (c) preferential attachment model, and (d) forest-fire model. Note that our method (ICMC) is significantly faster than SVT and SMC, and has similar run-time to that of ALS.
where λ ≥ 0 is the regularization parameter.
Sampling distribution in Netflix and Yahoo Music Datasets
The Netflix challenge dataset contains the incomplete user-movie ratings matrix while the Yahoo
Music dataset contains the incomplete user-artist ratings matrix. For both datasets we form the corresponding bipartite sampling graphs and plot the left (users) and right (movies/artists) cumulative
degree distributions of the bipartite sampling graphs.
Figure 1 shows the cumulative degree distributions of the bipartite sampling graphs, the best power-law fit computed using the code provided by Clauset et al. [7], and the best Poisson distribution fit. The figure clearly shows that the sampling graphs for the Netflix and Yahoo Music datasets are far from regular, as assumed in [4], [5], [13], and have power-law distributed degrees.
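The degree-distribution check described above is simple to carry out in code. The sketch below is our own minimal illustration (synthetic mask, hypothetical helper names; not the authors' code): it forms the cumulative (complementary) degree distributions of both vertex sets of a bipartite sampling graph.

```python
import numpy as np

def cumulative_degree_distributions(mask):
    """Given a boolean user-by-item sampling mask (True = rating observed),
    return the cumulative degree distributions of the left (user) and
    right (item) vertex sets of the bipartite sampling graph.

    Each side is returned as (degrees, P(deg >= d))."""
    left_deg = mask.sum(axis=1)   # number of items rated by each user
    right_deg = mask.sum(axis=0)  # number of users rating each item

    def ccdf(degrees):
        vals = np.sort(np.unique(degrees))
        # fraction of vertices with degree >= d, for each observed degree d
        probs = np.array([(degrees >= d).mean() for d in vals])
        return vals, probs

    return ccdf(left_deg), ccdf(right_deg)

# Toy example: a mask whose user degrees are heavy-tailed (Pareto-distributed).
rng = np.random.default_rng(0)
n_users, n_items = 200, 100
degs = np.minimum(np.round(rng.pareto(1.5, size=n_users) + 1).astype(int), n_items)
mask = np.zeros((n_users, n_items), dtype=bool)
for u, deg in enumerate(degs):
    mask[u, rng.choice(n_items, size=deg, replace=False)] = True

(lv, lp), (rv, rp) = cumulative_degree_distributions(mask)
assert lp[0] == 1.0 and np.all(np.diff(lp) <= 0)  # a CCDF starts at 1 and decreases
```

Plotting `lp` against `lv` on log–log axes and comparing a power-law fit with a Poisson fit is exactly the kind of inspection Figure 1 performs on the real datasets.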
Experiments using Random Graph Models
To compare various methods, we first generate random low-rank matrices X ∈ R^{n×n} for varying n, and sample from the generated matrices using Erdős–Rényi, CLV, PA and forest-fire random graph models. We omit the results for the affiliation networks model from this paper due to lack of space; we observed similar trends on the affiliation networks model.
[Figure 3 plots. Left: fraction of infected rows/columns vs. sampling density p for Erdős–Rényi, Chung–Lu, and preferential attachment graphs; a clear sampling-density threshold is visible. Middle: threshold value of m (number of edges) vs. k (rank of the matrix) in the preferential attachment model, with the linear fit m = Ck + C. Right: table for the Netflix dataset, reproduced here:]

k     Fraction of infected rows & columns    RMSE
5     0.98                                   0.9603
10    0.95                                   0.9544
20    0.87                                   0.9437
25    0.84                                   0.9416
30    0.46 × 10^-5                           0.9602
Figure 3: Left: Fraction of infected nodes as edge density increases. Note the existence of a clear threshold. The threshold is quite small for CLV and PA, suggesting good performance of ICMC for these models. Middle: Threshold for parameters k1, k2 (the number of edges per node) in PA as k increases. The threshold varies linearly with k, supporting Conjecture 4.3. Right: Fraction of infected rows and columns using ICMC for the Netflix challenge dataset.
For each random graph model we compare the relative mean square error (RMSE) on the unknown entries achieved by our method ICMC against several existing methods. We also compare the total time taken by each of the methods. All results represent the average over 20 runs.
Figure 2 compares the RMSE achieved by ICMC to that of SVT, SMC and ALS when the rank k is fixed to be 10, the sampling density p = 0.1, and the sampling graphs are generated from the four random graph models. Note that for the three more realistic models (CLV, PA, and forest-fire) ICMC outperforms both SVT and SMC significantly and performs noticeably better than ALS. Table 1 compares the computational time taken by each of the methods. The table shows that for all three models, ICMC is faster than SVT and SMC by an order of magnitude and is also competitive with ALS. Note that the performance of our method for Erdős–Rényi graphs (Figure 2(a)) is poor, with other methods achieving low RMSE. This is expected, as Erdős–Rényi graphs are in a sense the worst-case examples for ICMC, as explained in Section 4.
Threshold for Complete Cascading
Here we investigate the threshold for complete cascading in the random graph models. Besides
being interesting on its own, the existence of completely cascading sets is closely tied to the success
of ICMC by Theorem 3.1. Figure 3 shows the fraction of vertices infected by the cascading process
starting from the k highest degree vertices for graphs generated from the random graph models as
the edge density increases.
The left plot of Figure 3 shows the existence of a clear threshold for the density p, beyond which the fraction of infected vertices is almost surely one. Note that the threshold is quite small for the CLV, PA and forest-fire models, suggesting good performance of ICMC on these models. As was explained in Section 4, the threshold is larger for the Erdős–Rényi graph model.
The right plot of Figure 3 shows the threshold value (the minimum value above which the infected fraction is almost surely one) for k1, k2 as a function of k in the PA model. The plot shows that the threshold is of the form Ck for a universal constant C, strongly supporting Conjectures 4.3 and 4.4.
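The cascading process measured in these plots is easy to simulate directly. The sketch below is our own minimal implementation (not the paper's code), under the assumption used in the analysis that an uninfected vertex becomes infected once at least k of its neighbors are infected:

```python
import numpy as np

def cascade_fraction(adj, seeds, k):
    """Simulate the k-cascading (infection) process on an undirected graph.

    adj   : n x n boolean adjacency matrix (symmetric, zero diagonal)
    seeds : iterable of initially infected vertex indices
    k     : an uninfected vertex becomes infected once it has >= k infected neighbors
    Returns the fraction of vertices infected when the process stops."""
    A = adj.astype(np.int64)
    infected = np.zeros(adj.shape[0], dtype=bool)
    infected[list(seeds)] = True
    while True:
        counts = A @ infected.astype(np.int64)  # infected-neighbor count per vertex
        newly = (~infected) & (counts >= k)
        if not newly.any():
            break
        infected |= newly
    return infected.mean()

# Toy check on a G(n, p) graph, seeded at the k highest-degree vertices.
rng = np.random.default_rng(1)
n, p, k = 300, 0.05, 3
upper = np.triu(rng.random((n, n)) < p, 1)
adj = upper | upper.T
seeds = np.argsort(adj.sum(axis=1))[-k:]
frac = cascade_fraction(adj, seeds, k)
assert k / n <= frac <= 1.0
```

Sweeping the edge density p (or the edges-per-node parameter in a preferential attachment generator) and recording `frac` reproduces threshold curves of the kind shown in Figure 3.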
Netflix Challenge Dataset
Finally, we evaluate our method on the Netflix Challenge dataset which contains an incomplete
matrix with about 100 million ratings given by 480,189 users for 17,770 movies. The rightmost
table in Figure 3 shows the fraction of rows and columns infected by ICMC on the dataset for
several values of the rank parameter k. Note that even for a reasonably high rank of 25, ICMC
infects a high percentage (84%) of rows and columns. Also, for rank 30 the fraction of infected
rows and columns drops to almost zero, suggesting that the sampling density of the matrix is below
the sampling threshold for rank 30.
For rank k = 20, the RMSE incurred over the probe set (provided by Netflix) is 0.9437 which is
comparable to the RMSE=0.9404 achieved by the regularized Alternating Least Squares method.
More importantly, the time required by our method is 1.59 × 10^3 seconds compared to 6.15 × 10^4 seconds required by ALS. We remark that noise (or higher rank of the underlying matrix) can offset
our method leading to somewhat inferior results. In such a case, our method can be used for a good
initialization of the ALS method and other state-of-the-art collaborative filtering methods to achieve
better RMSE.
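For context, one sweep of the regularized alternating least squares baseline referred to here can be sketched as follows. This is a generic textbook version with our own variable names, not the authors' implementation; the factors recovered by ICMC could serve as the initialization of U and V:

```python
import numpy as np

def als_step(R, mask, U, V, lam):
    """One sweep of regularized alternating least squares for R ~ U V^T,
    fit only on the observed entries (mask True)."""
    k = U.shape[1]
    for i in range(R.shape[0]):              # ridge solve for each row of U
        cols = mask[i]
        if cols.any():
            Vi = V[cols]
            U[i] = np.linalg.solve(Vi.T @ Vi + lam * np.eye(k), Vi.T @ R[i, cols])
    for j in range(R.shape[1]):              # then for each row of V
        rows = mask[:, j]
        if rows.any():
            Uj = U[rows]
            V[j] = np.linalg.solve(Uj.T @ Uj + lam * np.eye(k), Uj.T @ R[rows, j])
    return U, V

# Sanity check: ALS drives down the residual on the observed entries.
rng = np.random.default_rng(2)
n, m, k = 40, 30, 3
R = rng.standard_normal((n, k)) @ rng.standard_normal((k, m))   # exact rank-k target
mask = rng.random((n, m)) < 0.5
U = rng.standard_normal((n, k))
V = rng.standard_normal((m, k))
err0 = np.linalg.norm((R - U @ V.T)[mask])
for _ in range(10):
    U, V = als_step(R, mask, U, V, lam=0.1)
err1 = np.linalg.norm((R - U @ V.T)[mask])
assert err1 < err0
```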
References
[1] Albert-László Barabási and Réka Albert. Emergence of scaling in random networks. Science, 286:509, 1999.
[2] Matthew Brand. Fast online SVD revisions for lightweight recommender systems. In SDM, 2003.
[3] Jian-Feng Cai, Emmanuel J. Candès, and Zuowei Shen. A singular value thresholding algorithm for matrix completion, 2008.
[4] Emmanuel J. Candès and Benjamin Recht. Exact matrix completion via convex optimization. CoRR, abs/0805.4471, 2008.
[5] Emmanuel J. Candès and Terence Tao. The power of convex relaxation: Near-optimal matrix completion. CoRR, abs/0903.1476, 2009.
[6] Fan R. K. Chung, Linyuan Lu, and Van H. Vu. The spectra of random graphs with given expected degrees. Internet Mathematics, 1(3), 2003.
[7] A. Clauset, C. R. Shalizi, and M. E. J. Newman. Power-law distributions in empirical data. SIAM Review, to appear, 2009.
[8] Uriel Feige and Eran Ofek. Spectral techniques applied to sparse random graphs. Random Struct. Algorithms, 27(2):251–275, 2005.
[9] Joel Friedman, Jeff Kahn, and Endre Szemerédi. On the second eigenvalue in random regular graphs. In STOC, pages 587–598, 1989.
[10] Christos Gkantsidis, Milena Mihail, and Ellen W. Zegura. Spectral analysis of internet topologies. In INFOCOM, 2003.
[11] David Kempe, Jon M. Kleinberg, and Éva Tardos. Maximizing the spread of influence through a social network. In KDD, pages 137–146, 2003.
[12] David Kempe, Jon M. Kleinberg, and Éva Tardos. Influential nodes in a diffusion model for social networks. In ICALP, pages 1127–1138, 2005.
[13] Raghunandan H. Keshavan, Sewoong Oh, and Andrea Montanari. Matrix completion from a few entries. CoRR, abs/0901.3150, 2009.
[14] Jon M. Kleinberg. Hubs, authorities, and communities. ACM Comput. Surv., 31(4es):5, 1999.
[15] Yehuda Koren. Factorization meets the neighborhood: a multifaceted collaborative filtering model. In KDD, pages 426–434, 2008.
[16] Silvio Lattanzi and D. Sivakumar. Affiliation networks. In STOC, 2009.
[17] Jure Leskovec, Jon M. Kleinberg, and Christos Faloutsos. Graph evolution: Densification and shrinking diameters. TKDD, 1(1), 2007.
[18] Robert M. Bell and Yehuda Koren. Scalable collaborative filtering with jointly derived neighborhood interpolation weights. In ICDM, pages 43–52, 2007.
[19] Milena Mihail and Christos H. Papadimitriou. On the eigenvalue power law. In RANDOM, pages 254–262, 2002.
[20] Benjamin Recht, Maryam Fazel, and Pablo A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization, 2007.
fMRI-Based Inter-Subject Cortical Alignment Using
Functional Connectivity
Bryan R. Conroy1 Benjamin D. Singer2 James V. Haxby3,* Peter J. Ramadge1
1
Department of Electrical Engineering, 2 Neuroscience Institute, Princeton University
3
Department of Psychology, Dartmouth College
Abstract
The inter-subject alignment of functional MRI (fMRI) data is important for improving the statistical power of fMRI group analyses. In contrast to existing
anatomically-based methods, we propose a novel multi-subject algorithm that derives a functional correspondence by aligning spatial patterns of functional connectivity across a set of subjects. We test our method on fMRI data collected
during a movie viewing experiment. By cross-validating the results of our algorithm, we show that the correspondence successfully generalizes to a secondary
movie dataset not used to derive the alignment.
1 Introduction
Functional MRI (fMRI) studies of human neuroanatomical organization commonly analyze fMRI
data across a population of subjects. The effective use of this data requires deriving a spatial correspondence across the set of subjects, i.e., the data must be aligned, or registered, into a common
coordinate space. Current inter-subject registration techniques derive this correspondence by aligning anatomically-defined features, e.g. major sulci and gyri, across subjects, either in the volume or
on extracted cortical surfaces. Talairach normalization [1], for example, derives a piecewise affine
transformation by matching a set of major anatomical landmarks in the brain volume. More advanced techniques match a denser set of anatomical features, such as cortical curvature [2], and
derive nonlinear transformations between a reference space and each subject?s cortical surface.
It is known, however, that an accurate inter-subject functional correspondence cannot be derived
using only anatomical features, since the size, shape and anatomical location of functional loci
vary across subjects [3], [4]. Because of this deficiency in current alignment methods, it is common practice to spatially smooth each subject?s functional data prior to a population based analysis.
However, this incurs the penalty of blurring the functional data within and across distinct cortical
regions. Thus, the functional alignment of multi-subject fMRI data remains an important problem.
We propose to register functional loci directly by using anatomical and functional data to learn an
inter-subject cortical correspondence. This approach was first explored in [5], where subject cortices
were registered by maximizing the inter-subject correlation of the functional response elicited by a
common stimulus (a movie viewing). In essence, the correspondence was selected to maximize the
correlation of the fMRI time series between subjects. This relies on the functional response being
time-locked with the experimental stimulus. Large regions of visual and auditory cortex stimulated
by a movie viewing do indeed show consistent inter-subject synchrony [6]. However, other areas in
the intrinsic [7] or default [8] system fail to exhibit significant correlations across repeated stimulus
trials. The technique of [5] is hence not expected to improve alignment in these intrinsic regions.
In contrast to [5], we propose to achieve inter-subject alignment by aligning intra-subject patterns
of cortical functional connectivity. By functional connectivity, we mean within-subject similarity of
* This work was funded by a grant from the National Institute of Mental Health (5R01MH075706-02)
the temporal response of remote regions of cortex [9]. This can be estimated from fMRI data, for
example, by correlating the functional time series between pairs of cortical nodes within a subject.
This yields a dense set of functional features for each subject from which we learn an inter-subject
correspondence. Unlike other functional connectivity work (see e.g. [10]), we define connectivity
between pairs of cortical nodes rather than with respect to anatomical regions of interest. Our
approach is inspired by studies showing that the patterns of functional connectivity in the intrinsic
network are consistent across subjects [7], [11]. This suggests that our method has the potential to
learn an inter-subject functional correspondence within both extrinsic and intrinsic cortical networks.
In summary, we formulate a multi-subject cortical alignment algorithm that minimizes the difference
between functional connectivity vectors of corresponding cortical nodes across subjects. We do so
by learning a dense-deformation field on the cortex of each subject, suitably regularized to preserve
cortical topology [2]. Our key contributions are: a) the novel alignment objective, b) a principled
algorithm for accomplishing the alignment, and c) experimental verification on fMRI data.
The paper is organized as follows. In §2 we formulate the multi-subject alignment problem, followed by a detailed exposition of the algorithm in §3 and §4. Finally, we exhibit results of the algorithm applied to multi-subject fMRI data in §5 and draw conclusions in §6.
2 Formulation of the Multi-Subject Alignment Problem
For each subject we are given volumetric anatomical MRI data and fMRI data. The anatomical
data is used to extract a two-dimensional surface model of cortex. This greatly facilitates cortical
based analysis and subsequent visualization [12], [13], [14]. Cortex is segmented, then each cortical
hemisphere is inflated to obtain a smooth surface, which is projected to the sphere, S^2, represented by a discrete spherical mesh M_s = {p_k ∈ S^2 ; 1 ≤ k ≤ N_v/2}. The two cortical hemispheres are hence modeled by the disjoint union S = S^2 ⊎ S^2, represented by the corresponding disjoint union of mesh points M = M_s ⊎ M_s. Anatomical cortical features, such as cortical curvature, are functions D_a : S → R^{N_a} sampled on M. Thus, our analysis is restricted to cortex only.
The fMRI volumetric data is first aligned with the anatomical scan, then mapped onto S. This assigns each mesh node p_k ∈ M a "volumetric cortical voxel" v_k ∈ R^3, with associated functional time series f_k ∈ R^{N_t}. The functional time series data is then a function D_f : S → R^{N_t} sampled on M.
As indicated in the introduction, we do not directly register the fMRI time series but instead register the functional connectivity derived from the time series. Let ρ(f_1, f_2) denote a similarity measure on pairs of time series f_1, f_2 ∈ R^{N_t}. A useful example is empirical correlation: ρ(f_1, f_2) = corr(f_1, f_2); another possibility is an estimate of the mutual information between the pairwise entries of f_1, f_2. Define the functional connectivity of the fMRI data under ρ as the map C(p_i, p_j) = ρ(D_f(p_i), D_f(p_j)), i.e., the similarity of the functional time series at pairs of cortical nodes. Functional connections both within and across cortical hemispheres are considered.
Functional connectivity can be conceptualized as the adjacency matrix of an edge-weighted graph on all cortical nodes. The edge between nodes p_i, p_j is weighted by the pairwise similarity measure ρ(f_i, f_j), codifying the functional similarity of p_i and p_j. In the case of correlation, C is the correlation matrix of the time series data. For typical values of N_v (≈ 72,000), the functional connectivity data structure is huge. Hence we need efficient mechanisms for working with C.
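With correlation as the similarity measure, one subject's functional connectivity is just the correlation matrix of its node time series. A minimal sketch (our own code and naming; at full mesh resolution one would work with the low-rank representation of §3 rather than forming C explicitly):

```python
import numpy as np

def functional_connectivity(T):
    """T: Nt x Nv array of fMRI time series, one column per cortical node.
    Returns the Nv x Nv connectivity matrix C with C[i, j] = corr(f_i, f_j)."""
    T = T - T.mean(axis=0)             # center each node's time series
    T = T / np.linalg.norm(T, axis=0)  # scale each to unit norm
    return T.T @ T                     # for centered, unit-norm data C = T^T T

rng = np.random.default_rng(3)
T = rng.standard_normal((100, 5))      # 100 time points, 5 nodes
C = functional_connectivity(T)
assert np.allclose(np.diag(C), 1.0)    # each node correlates perfectly with itself
assert np.allclose(C, C.T)             # connectivity is symmetric
```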
We are given the data discussed above for N_s subjects. Subject k's training data is specified by samples of the functions D_{a,k} : S_k → R^{N_a}, D_{f,k} : S_k → R^{N_t}, and the derived functional connectivity C_k, all sampled on the mesh M_k, k = 1, . . . , N_s. Our objective is to learn a relation consisting of N_s-tuples of corresponding points across the set of cortices. To do so, we could select a node from M_1 for subject 1 and learn the corresponding points on the cortices of the remaining N_s − 1 subjects through smooth and invertible mappings g_k : S_1 → S_k, k = 2, . . . , N_s. However, this arbitrarily and undesirably gives special status to one subject. Instead, we introduce a reference model S_ref = S^2 ⊎ S^2 with mesh M_ref. For each node p ∈ M_ref on S_ref, we seek to learn the N_s-tuple of corresponding points (g_1(p), g_2(p), . . . , g_{N_s}(p)), parameterized by g_k : S_ref → S_k, k = 1, . . . , N_s.
In general terms, we can now summarize our task as follows: use the functional connectivity data
Ck , in conjunction with the anatomical data Da,k , k = 1, . . . , Ns , to estimate warping functions
{gk : k = 1, . . . , Ns }, subject to specified regularity conditions, that bring some specified balance
of anatomy and functional connectivity into alignment across subjects. That said, for the remainder
of the paper we restrict attention to aligning only functional connectivity across subjects. There is
no doubt that anatomy must be an integral part of a full solution; but that aspect is not new, and is
already well understood. Restricting attention to the alignment of functional connectivity will allow
us to concentrate on the most novel and important aspects of our approach.
To proceed, assume a reference connectivity C_ref, such that for each subject k = 1, . . . , N_s,

    C_k(g_k(p_i), g_k(p_j)) = C_ref(p_i, p_j) + ε_k(p_i, p_j),    p_i, p_j ∈ M_ref    (1)

where C_k(g_k(p_i), g_k(p_j)) = ρ(D_{f,k}(g_k(p_i)), D_{f,k}(g_k(p_j))), and ε_k is zero-mean random noise. Since g_k(p) may not be a mesh point, computation of D_{f,k}(g_k(p)) requires interpolation of the time series using mesh nodes in a neighborhood of g_k(p). This will be important as we proceed.
Given (1), we estimate g by maximizing a regularized log likelihood:

    ĝ = arg max_{g=(g_1,...,g_{N_s})} log P(C_1, . . . , C_{N_s} | g) − λ Σ_k Reg(g_k)    (2)

where Reg(g_k) constrains each warping function g_k to be smooth and invertible. Here, we will focus on the log likelihood term and delay the discussion of regularization to §3. Optimization of (2) is complicated by the fact that C_ref is a latent variable, so it must be estimated along with g. We use Expectation-Maximization to iteratively alternate between computing an expectation of C_ref (E-step), and a maximum likelihood estimate of g given both the observed and estimated unobserved data (M-step) [15]. In the E-step, the expectation of C_ref, written C̄_ref, conditioned on the current estimate ĝ of g, is computed by averaging the connectivity across subjects:

    C̄_ref(p_i, p_j) = (1/N_s) Σ_{k=1}^{N_s} C_k(ĝ_k(p_i), ĝ_k(p_j)),    p_i, p_j ∈ S_ref    (3)
In the M-step, the estimate ĝ is refined to maximize the likelihood of the full data:

    ĝ = arg max_{g=(g_1,...,g_{N_s})} log P(C̄_ref, C_1, C_2, . . . , C_{N_s} | g)    (4a)
      = arg min_{g=(g_1,...,g_{N_s})} Σ_{k=1}^{N_s} Σ_{p_i,p_j ∈ S_ref} (C̄_ref(p_i, p_j) − C_k(g_k(p_i), g_k(p_j)))^2    (4b)

where we have assumed that the noise in (1) is i.i.d. Gaussian. Because (4b) decouples, we can optimize over each subject's warp separately, i.e., these optimizations can be done in parallel:

    ĝ_k = arg min_{g_k} Σ_{p_i,p_j ∈ S_ref} (C̄_ref(p_i, p_j) − C_k(g_k(p_i), g_k(p_j)))^2    (5)
However, an interesting alternative is to perform these sequentially with an E-step after each that updates the reference estimate C̄_ref. This also allows some other interesting adaptations. We note that, as a function of g_k,

    (C̄_ref(p_i, p_j) − C_k(g_k(p_i), g_k(p_j)))^2 ∝ (C̄_k(p_i, p_j) − C_k(g_k(p_i), g_k(p_j)))^2    (6a)

where

    C̄_k(p_i, p_j) = (1/(N_s − 1)) Σ_{n≠k} C_n(ĝ_n(p_i), ĝ_n(p_j)),    p_i, p_j ∈ M_ref,    (7)

is the leave-one-out template for subject k, which is independent of g_k. Thus, we replace (5) by:

    ĝ_k = arg min_{g_k} Σ_{p_i,p_j ∈ S_ref} (C̄_k(p_i, p_j) − C_k(g_k(p_i), g_k(p_j)))^2    (8)

From (5) and (8) we observe that the multi-subject alignment problem reduces to a sequence of pairwise registrations, each of which registers one subject to an average of connectivity matrices. If we use (5), each round of pairwise registrations can be done in parallel and the results used to update the average template. The difficulty is the computational update of C̄_ref. Alternatively, using (8) we do the pairwise registrations sequentially and compute a new leave-one-out template after each registration. This is the approach we pursue. An algorithm for solving the pairwise registration is derived in the next section and we examine the computation of leave-one-out templates in §4.
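The leave-one-out templates of Eq. (7) can all be obtained from a single running sum of the warped connectivities, so updating them after each pairwise registration is cheap. A sketch with our own helper names, treating each subject's warped connectivity as a plain matrix:

```python
import numpy as np

def leave_one_out_templates(C_warped):
    """C_warped: list of Ns warped connectivity matrices C_k(g_k(p_i), g_k(p_j)).
    Returns the leave-one-out templates of Eq. (7) for every subject k."""
    Ns = len(C_warped)
    total = np.sum(C_warped, axis=0)                 # one pass over all subjects
    return [(total - C_warped[k]) / (Ns - 1) for k in range(Ns)]

rng = np.random.default_rng(4)
Cs = [rng.standard_normal((6, 6)) for _ in range(4)]
templates = leave_one_out_templates(Cs)
# Consistency with the full E-step average of Eq. (3): mixing subject k back in
# with weight 1/Ns recovers the overall template.
C_ref = np.mean(Cs, axis=0)
assert np.allclose(C_ref, (3 * templates[0] + Cs[0]) / 4)
```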
3 Pairwise Cortical Alignment
We now develop an algorithm for aligning one subject, with connectivity C_F, to a reference, with connectivity C_R, where C_F, C_R ∈ R^{N_v×N_v}. For concreteness, from this point forward we let ρ(f_1, f_2) = corr(f_1, f_2) and assume that the time series have zero mean and unit norm.
A function g : M_R → S_F maps a reference mesh point p_i ∈ M_R to g(p_i) ∈ S_F. By interpolating the floating subject's time series at the points g(p_i) ∈ S_F we obtain the associated warped functional connectivity: C̃_F = [ρ(f^F_{g(p_i)}, f^F_{g(p_j)})]. We seek ĝ that best matches C̃_F to C_R in the sense:

    ĝ = arg min_g ‖C̃_F − C_R‖_f^2 + λ Reg(g)    (9)

Here ‖·‖_f is the matrix Frobenius norm and the regularization term Reg(g) serves as a prior over the space of allowable mappings. In the following steps, we examine how to efficiently solve (9).
Step 1: Parameterizing the dependence of C̃_F on the warp. We first develop the dependence of the matrix C̃_F on the warping function g. This requires specifying how the time series at the warped points g(p_i) ∈ S_F is interpolated using the time series data {f_i^F ∈ R^{N_t}, i = 1, . . . , N_v} at the mesh points {p_i^F ∈ M, i = 1, . . . , N_v}. Here, we employ linear interpolation with a spherical kernel φ: f^F(p) = Σ_{i=1}^{N_v} f_i^F φ(p, p_i), p ∈ S_F. The kernel should be matched to the following specific objectives: (a) The kernel should be monomodal. Since the gradient of the registration objective depends on the derivative of the interpolation kernel, this will reduce the likelihood of the algorithm converging to a local minimum; (b) The support of the kernel should be finite. This will limit interpolation complexity. However, as the size of the support decreases, so will the capture range of the algorithm. At the initial stages of the algorithm, the kernel should have a broad extent, due to higher initial uncertainty, and become increasingly more localized as the algorithm converges. Thus, (c) The support of the kernel should be easily adjustable.
With these considerations in mind, we select φ(p, p_i) to be a spherical radial basis function φ_i : S^2 → R centered at p_i ∈ S^2 and taking the form φ_i(p) = φ(d(p, p_i)), p ∈ S^2, where φ : [0, π] → R and d(p, p_i) is the spherical geodesic distance between p and p_i [16]. Then φ_i(p) is monomodal with a maximum at p_i; it depends only on the distance between p and p_i and is radially symmetric.
In detail, we employ the particular spherical radial basis function:

    φ_i(p) = φ(d(p, p_i)) = (1 − (2/r) sin(d(p, p_i)/2))_+^4 ((8/r) sin(d(p, p_i)/2) + 1)    (10)

where r is a fixed parameter, and (a)_+ = a·1{a ≥ 0}. φ_i(p) has two continuous derivatives and its support is {p ∈ S^2 : d(p, p_i) < 2 sin^{-1}(r/2)}. Note that the support can be easily adjusted through the parameter r. So the kernel has all of our desired properties.
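The radial profile in (10) is straightforward to implement. The sketch below (our code) evaluates it as a function of geodesic distance and verifies the stated properties: value 1 at d = 0, monomodality, and compact support on {d < 2 sin^{-1}(r/2)}:

```python
import numpy as np

def spherical_rbf(d, r):
    """Radial profile phi(d) of Eq. (10): a compactly supported, C^2,
    Wendland-type kernel of the geodesic distance d in [0, pi]."""
    s = (2.0 / r) * np.sin(d / 2.0)
    # (8/r) sin(d/2) + 1 = 4 s + 1
    return np.maximum(1.0 - s, 0.0) ** 4 * (4.0 * s + 1.0)

r = 0.5
d = np.linspace(0.0, np.pi, 1001)
vals = spherical_rbf(d, r)
cutoff = 2.0 * np.arcsin(r / 2.0)                # support boundary from Eq. (10)
assert np.isclose(vals[0], 1.0)                  # maximum value 1 at d = 0
assert np.all(vals[d > cutoff + 1e-9] == 0.0)    # compactly supported
assert np.all(np.diff(vals) <= 1e-12)            # monomodal: nonincreasing in d
```

Shrinking r narrows the support, giving the coarse-to-fine behavior described in objectives (b) and (c).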
We can now make the dependence of C̃_F on g more explicit. Let T_F = [f_1^F, f_2^F, · · · , f_{N_v}^F]. Then

    T̃_F = [f^F(g(p_1))  f^F(g(p_2))  · · ·  f^F(g(p_{N_v}))] = T_F A

where A = [φ_i(g(p_j))] is the N_v × N_v matrix of interpolation coefficients dependent on g and the interpolation kernel. Next, noting that C_F = T_F^T T_F, we use A to write the post-warp correlation matrix as:

    C̃_F = D A^T C_F A D    (11)

where D = diag(d_1, d_2, · · · , d_{N_v}) serves to normalize the updated data to unit norm: d_j = ‖f^F(g(p_j))‖^{−1}. Finally, we use Ã = AD to write:

    ‖C̃_F − C_R‖_f^2 = ‖Ã^T C_F Ã − C_R‖_f^2    (12)

Here, (12) encodes the dependence of the registration objective on g through the matrix Ã. It is also important to note that since the interpolation kernel is locally supported, Ã is a sparse matrix.
Step 2: Efficient Representation/Computation of the Registration Objective. We now consider the N_v × N_v matrices C_F and C_R. At a spatial resolution of 2 mm, the spherical model of human cortex can yield N_v ≈ 72,000 total mesh points. In this situation, direct computation with C_F and C_R is prohibitive. Hence we need an efficient way to represent and compute the objective (12).
For fMRI data it is reasonable to assume that N_t ≪ N_v. Hence, since the data has been centered, the rank of C_F = T_F^T T_F and of C_R = T_R^T T_R is at most N_t − 1. For simplicity, we make the reasonable assumption that rank(T_F) = rank(T_R) = d. Then C_F and C_R can be efficiently represented by compact d-dimensional SVDs C_F = V_F Σ_F V_F^T and C_R = V_R Σ_R V_R^T. Moreover, these can be computed directly from SVDs of the data matrices: T_F = U_{T_F} Σ_{T_F} V_{T_F}^T and T_R = U_{T_R} Σ_{T_R} V_{T_R}^T. In detail: V_F = V_{T_F}, V_R = V_{T_R}, Σ_F = Σ_{T_F}^T Σ_{T_F}, and Σ_R = Σ_{T_R}^T Σ_{T_R}.
The above representation avoids computing CF and CR , but we must also show that it enables
efficient evaluation of (12). To this end, introduce the following linear transformation:
? R
B = WFT AW
(13)
where WF = VF VF? , WR = VR VR? , are orthogonal with the Nv ? d columns of VF? and
VR? forming orthonormal bases for range(VF )? and range(VR )? , respectively. Write B as:
B1 B2
B=
(14)
B3 B4
with B1 ? Rd?d , B2 ? Rd?Nv , B3 ? R(Nv ?d)?d and B4 ? R(Nv ?d)?(Nv ?d) . Substituting (13)
and (14) into (12) and simplifying yields:
kC?F ? CR k2f = kB1T ?F B1 ? ?R k2f + 2kB1T ?F B2 k2f + kB2T ?F B2 k2f
(15)
with
? R
B1 = VFT AV
? R?
and B2 = VFT AV
(16)
The d × d matrix B_1 is readily computed since V_F, V_R are of manageable size. Computation of the d × (N_v − d) matrix B_2 depends on V_R^⊥, which has orthonormal (ON) columns spanning the (N_v − d)-dimensional subspace null(C_R). Since there is residual freedom in the choice of V_R^⊥ and B_2 is large, its selection merits closer examination. Now (16) can be viewed as a projection of the rows of V_F^T Ã onto the columns of V_R and V_R^⊥. The columns of Ã^T V_F − V_R B_1^T lie in null(C_R), and B_2^T = (V_R^⊥)^T (Ã^T V_F − V_R B_1^T). Hence a QR-factorization QR = Ã^T V_F − V_R B_1^T yields d ON vectors in null(C_R). Choosing these as the first d columns of V_R^⊥ yields B_2 = [R^T, 0], i.e., B_2 is very sparse.
In summary, we have derived the following efficient means of evaluating the objective. By one-time preprocessing of the time series data we obtain Σ_F, Σ_R and V_F, V_R. Then, given a warp g, we compute the interpolation matrix Ã, B_1 = V_F^T Ã V_R, and finally B_2 via QR-factorization of Ã^T V_F − V_R B_1^T. Then we evaluate (15).
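As an illustration only (not the authors' code; NumPy assumed), the whole evaluation above fits in a few lines. Given the compact factors (Σ_F, V_F), (Σ_R, V_R) and the interpolation matrix Ã, the objective (15) requires only small d × d blocks, since with the QR-based choice of V_R^⊥ the nonzero part of B_2 is just R^T.

```python
import numpy as np

def objective(Sigma_F, V_F, Sigma_R, V_R, A_tilde):
    # Evaluates (15), i.e. ||A~^T C_F A~ - C_R||_f^2, from compact SVD
    # factors; only d x d blocks of B are ever formed.
    B1 = V_F.T @ A_tilde @ V_R                 # B1 = V_F^T A~ V_R, per (16)
    M = A_tilde.T @ V_F - V_R @ B1.T           # columns lie in null(C_R)
    _, R = np.linalg.qr(M)                     # with this basis, B2 = [R^T, 0]
    t1 = np.linalg.norm(B1.T @ Sigma_F @ B1 - Sigma_R, 'fro') ** 2
    t2 = 2 * np.linalg.norm(B1.T @ Sigma_F @ R.T, 'fro') ** 2
    t3 = np.linalg.norm(R @ Sigma_F @ R.T, 'fro') ** 2
    return t1 + t2 + t3
```

On a small dense example this agrees with the brute-force ‖Ã^T C_F Ã − C_R‖_f^2 while never forming an N_v × N_v matrix.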
Step 3: The Transformation Space and Regularization. We now examine the specification of g in greater detail. We allow each mesh point to move freely (locally) in two directions. The use of such nonlinear warp models for inter-subject cortical alignment has been validated over, for example, rigid-body transformations [17]. To specify g, we first need to set up a coordinate system on the sphere. Let U = {(θ, φ); 0 < θ < π, 0 < φ < 2π}. Then the sphere can be parameterized by x : U → R^3 with x(θ, φ) = (sin θ cos φ, sin θ sin φ, cos θ). Here, θ is a zenith angle measured against one of the principal axes, and φ is an azimuthal angle measured in one of the projective planes (i.e., xy-plane, xz-plane, or yz-plane). Note that x omits a semicircle of S^2; so at least two such parameterizations are required to cover the entire sphere [18].

Consider p_i ∈ S^2 parameterized by x(θ, φ) such that p_i = x(θ_i, φ_i). Then the warp field at p_i is:

\[ g(p_i) = x(\theta_i + \Delta\theta_i,\; \phi_i + \Delta\phi_i) = x(\tilde\theta_i, \tilde\phi_i) \qquad (17) \]

for displacements Δθ_i and Δφ_i. The warp g is thus parameterized by {Δθ_i, Δφ_i, i = 1, . . . , N_v}.
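A minimal sketch of this parameterization (illustrative only, NumPy assumed): the warp perturbs each node's spherical coordinates, so warped points remain on the unit sphere by construction.

```python
import numpy as np

def sphere(theta, phi):
    # x : U -> R^3, the spherical parameterization of Step 3.
    return np.array([np.sin(theta) * np.cos(phi),
                     np.sin(theta) * np.sin(phi),
                     np.cos(theta)])

def warp_point(theta_i, phi_i, dtheta_i, dphi_i):
    # (17): g(p_i) = x(theta_i + dtheta_i, phi_i + dphi_i).
    return sphere(theta_i + dtheta_i, phi_i + dphi_i)
```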
The warp g must be regularized to avoid undesired topological distortions (e.g. folding and excessive
expansion) and to avoid over-fitting the data. This is achieved by adding a regularization term to the
objective that penalizes such distortions. There are several ways this can be done. Here we follow
[14] and regularize g by penalizing both metric and areal distortion. The metric distortion term
penalizes warps that disrupt local distances between neighboring mesh nodes. This has the effect of
limiting the expansion/contraction of cortex. The areal distortion term seeks to preserve a consistent
orientation of the surface. Given a triangulation of the spherical mesh, each triangle is given an oriented normal vector that initially points radially outward from the sphere. Constraining the oriented area of all triangles to be positive prevents folds in the surface [14].
Step 4: Optimization of the objective. We optimize (3) over g by gradient descent. Denote the objective by S(g), let ã_ij = a_ij d_j be the (i, j)-th entry of Ã = AD, and let a(p) = [φ_1(p) φ_2(p) · · · φ_{N_v}(p)]^T. From the parameterization of the warp (17), we see that ã_ij = φ_i(x(θ̃_j, φ̃_j)) ‖T_F a(x(θ̃_j, φ̃_j))‖^{−1} depends only on the warp parameters of the j-th mesh node, θ̃_j and φ̃_j.
Algorithm 1 Pairwise algorithm
1: Given: SVD of floating dataset Σ_F, V_F and reference dataset Σ_R, V_R
2: Given: Initial warp estimate g^(0)
3: Given: Sequence r_1 > r_2 > · · · > r_M of spatial resolutions
4: for m = 1 to M do
5:   Set the kernel φ_i in (10), with r = r_m
6:   Smooth the reference to resolution r_m
7:   Solve for ĝ in (9) by gradient descent with initial condition g^(m−1)
8:   Set g^(m) = ĝ
9: end for
10: Output result: g^(M)

Algorithm 2 Multi-subject algorithm
1: Given: SVD of datasets, {Σ_k, V_k}_{k=1}^{N_s}
2: Initialize g_k^(0) to identity, k = 1, . . . , N_s
3: for t = 1 to T do
4:   for k = 1 to N_s do
5:     Construct C̄_k as explained in §4
6:     Align C_k to C̄_k by Algorithm 1 with initial condition g_k^(t−1)
7:     Set g_k^(t) to the output of the alignment
8:     Use g_k^(t) to update Σ_k, V_k
9:   end for
10: end for
11: Output result: g = {g_1^(T), . . . , g_{N_s}^(T)}

Figure 1: The registration algorithms.
Then, by the chain rule, the partial derivative of S(g) with respect to θ̃_j is given by:

\[ \frac{\partial S(g)}{\partial \tilde\theta_j} = \sum_{i=1}^{N_v} \frac{\partial \|\tilde{C}_F - C_R\|_f^2}{\partial \tilde{a}_{ij}} \, \frac{\partial \tilde{a}_{ij}}{\partial \tilde\theta_j} + \lambda \frac{\partial \mathrm{Reg}(g)}{\partial \tilde\theta_j} \qquad (18) \]

A similar expression is obtained for the partial derivative with respect to φ̃_j. Since the interpolation kernel is supported locally, the summation in (18) is taken over a small number of terms. A full expression for ∂S/∂θ̃_j is given in the supplemental material, and that of ∂Reg(g)/∂θ̃_j in [14].
To help avoid local minima we take a multi-resolution optimization approach [19]. The registration is run on a sequence of spatial resolutions r_1 > r_2 > · · · > r_M, with r_M given by the original resolution of the data. The result at resolution r_m is used to initialize the alignment at resolution r_{m+1}. The alignment for r_m is performed by matching the kernel parameter r in (10) to r_m. Note that the reference dataset is also spatially smoothed at each r_m by the transformation in (11), with A = [a(p_1) a(p_2) · · · a(p_{N_v})]. The pairwise algorithm is summarized as Algorithm 1 in Figure 1.
4 Multi-Subject Alignment: Computing Leave-one-out Templates
We now return to the multi-subject alignment problem, which is summarized as Algorithm 2 in
Figure 1. It only remains to discuss efficient computation of the leave-one-out template (7). Since C̄_k is an average of N_s − 1 positive semi-definite matrices, each of rank d, the rank d̄ of C̄_k is bounded as follows: d ≤ d̄ ≤ (N_s − 1)d. Assume that C̃_n, the connectivity matrix of subject n after warp g_n (see (11)), has an efficient d ≪ N_v dimensional SVD representation C̃_n = Ṽ_n Σ̃_n Ṽ_n^T.
To compute the SVD for C̄_k, we exploit the sequential nature of the multi-subject alignment algorithm by refining the SVD of the leave-one-out template for subject k−1, C̄_{k−1} = V̄_{k−1} Σ̄_{k−1} V̄_{k−1}^T, computed in the previous iteration. This is achieved by expressing C̄_k in terms of C̄_{k−1}:

\[ \bar{C}_k = \bar{C}_{k-1} + \frac{1}{N_s - 1}\big(\tilde{C}_{k-1} - \tilde{C}_k\big) \qquad (19) \]

and computing matrix decompositions for the singular vectors of C̃_{k−1} and C̃_k in terms of V̄_{k−1}:

\[ \tilde{V}_{k-1} = \bar{V}_{k-1} P_{k-1} + Q_{k-1} R_{k-1} \qquad (20a) \]
\[ \tilde{V}_k = \bar{V}_{k-1} P_k \qquad (20b) \]

where P_j = V̄_{k−1}^T Ṽ_j ∈ R^{d̄×d}, for j = k − 1, k, projects the columns of Ṽ_j onto the columns of V̄_{k−1}. The second term of (20a), Q_{k−1} R_{k−1}, is the QR-decomposition of the residual components of Ṽ_{k−1} after projection onto range(V̄_{k−1}). Since C̄_{k−1} is an average of positive semi-definite matrices that includes C̃_k, we are sure that range(Ṽ_k) ⊆ range(V̄_{k−1}) (supplementary material).
Using the matrix decompositions (20a) and (20b), C̄_k in (19) above can be expressed as:

\[ \bar{C}_k = \begin{bmatrix} \bar{V}_{k-1} & Q_{k-1} \end{bmatrix} G \begin{bmatrix} \bar{V}_{k-1} & Q_{k-1} \end{bmatrix}^T \qquad (21) \]

where G is the symmetric (d̄ + d) × (d̄ + d) matrix:

\[ G = \begin{bmatrix} \bar\Sigma_{k-1} & 0 \\ 0 & 0 \end{bmatrix} + \frac{1}{N_s - 1}\left( \begin{bmatrix} P_{k-1}\tilde\Sigma_{k-1}P_{k-1}^T & P_{k-1}\tilde\Sigma_{k-1}R_{k-1}^T \\ R_{k-1}\tilde\Sigma_{k-1}P_{k-1}^T & R_{k-1}\tilde\Sigma_{k-1}R_{k-1}^T \end{bmatrix} - \begin{bmatrix} P_k\tilde\Sigma_k P_k^T & 0 \\ 0 & 0 \end{bmatrix} \right) \qquad (22) \]

We now compute the SVD of G = V_G Σ_G V_G^T. Then, using (21), we obtain the SVD for C̄_k as:

\[ \bar{V}_k = \begin{bmatrix} \bar{V}_{k-1} & Q_{k-1} \end{bmatrix} V_G \quad \text{and} \quad \bar\Sigma_k = \Sigma_G \qquad (23) \]
For a moderate number of subjects, (d̄ + d) ≤ N_s d ≪ N_v, this approach is more efficient than a brute-force O(N_v^3) SVD. Additionally, it works directly on the singular values Σ̃_k and vectors Ṽ_k of each warped connectivity matrix C̃_k, alleviating the need to store large N_v × N_v matrices.
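The update (19)-(23) translates directly into a small routine. The sketch below is illustrative (NumPy assumed; the function and variable names are ours, not the paper's): it advances the compact SVD of the leave-one-out template from subject k−1 to subject k using only thin factors.

```python
import numpy as np

def update_template(Vbar, Sbar, V_prev, S_prev, V_new, S_new, Ns):
    # Advances the compact SVD of the leave-one-out template, (19)-(23):
    # Cbar_k = Cbar_{k-1} + (Ctil_{k-1} - Ctil_k) / (Ns - 1).
    # (Vbar, Sbar): factors of Cbar_{k-1}; (V_prev, S_prev): Ctil_{k-1};
    # (V_new, S_new): Ctil_k, whose range lies inside range(Vbar).
    P_prev = Vbar.T @ V_prev                     # projection in (20a)
    Q, R = np.linalg.qr(V_prev - Vbar @ P_prev)  # residual, QR-decomposed
    P_new = Vbar.T @ V_new                       # (20b)
    db, d = Sbar.shape[0], S_prev.shape[0]
    coords = np.vstack([P_prev, R])              # Vtil_{k-1} in [Vbar, Q] basis
    G = np.zeros((db + d, db + d))
    G[:db, :db] = Sbar
    G += (coords @ S_prev @ coords.T
          - np.pad(P_new @ S_new @ P_new.T, ((0, d), (0, d)))) / (Ns - 1)
    W, s, _ = np.linalg.svd(G)                   # small (db+d) SVD, (22)-(23)
    basis = np.hstack([Vbar, Q])
    return basis @ W, np.diag(s)                 # new Vbar_k, Sigbar_k
```

On small test problems, reconstructing V̄_k Σ̄_k V̄_k^T from the returned factors reproduces the directly averaged template.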
5 Experimental Results
We tested the algorithm using fMRI data collected from 10 subjects viewing a movie split into 2 sessions separated by a short break. The data was preprocessed following [5]. For each subject, a structural scan was acquired before each session, from which the cortical surface model was derived (§2) and then anatomically aligned to a template using FreeSurfer (Fischl, http://surfer.nmr.mgh.harvard.edu). Similar to [5], we find that anatomical alignment based on cortical curvature serves as a superior starting point for functional alignment over Talairach alignment.

First, functional connectivity was found for each subject and session: C_{k,i}, k = 1, . . . , N_s, i = 1, 2. These were then aligned within subjects, C_{k,1} → C_{k,2}, and across subjects, C_{k,1} → C_{j,2}, using Algorithm 1. Since the data starts in anatomical correspondence, we expect small warp displacements within subject and larger ones across subjects. The mean intra-subject warp displacement was 0.72 mm (σ = 0.48 mm), with 77% of the mesh nodes warped less than 1 mm and fewer than 1.5% warped by more than the data spatial resolution (2 mm). In contrast, the mean inter-subject warp displacement was 1.46 mm (σ = 0.92 mm), with 22% of nodes warped more than 2 mm. See Figures 2(a)-(b).

In a separate analysis, each subject was aligned to its leave-one-out template on each session using Algorithm 1, yielding a set of warps g_{k,i}(p_j), k = 1, . . . , N_s, i = 1, 2, j = 1, . . . , N_v. To evaluate the consistency of the correspondence derived from different sessions, we compared the warps g_{k,1} to g_{k,2} for each subject k. Here, we only consider nodes that are warped by at least the data resolution. This analysis provides a measure of the sensitivity to noise present in the fMRI data. At node p_j, we compute the angle 0 ≤ ψ ≤ π between the warp tangent vectors of g_{k,1}(p_j) and g_{k,2}(p_j). This measures the consistency of the direction of the warp across sessions: smaller values of ψ suggest a greater warp coherence across sessions. Figure 2(c) shows a histogram of ψ averaged across the cortical nodes of all 10 subjects. The tight distribution centered near ψ = 0 suggests significant consistency in the warp direction across sessions. In particular, 93% of the density for ψ lies inside π/2, 81% inside π/4, and 58% inside π/8. As a secondary comparison, we compute a normalized consistency measure WNC(p_j) = d(g_{k,1}(p_j), g_{k,2}(p_j)) / (d(g_{k,1}(p_j), p_j) + d(g_{k,2}(p_j), p_j)), where d(·, ·) is spherical geodesic distance. The measure takes variability in both warp angle and magnitude into account; it is bounded between 0 and 1, and WNC(p_j) = 0 only if g_{k,1}(p_j) = g_{k,2}(p_j). A histogram for WNC is given in Figure 2(d); WNC exhibits a peak at 0.15, with a mean of 0.28 (σ = 0.22).

Finally, Algorithm 2 was applied to the first-session fMRI data to learn a set of warps g = (g_1, . . . , g_{N_s}) for 10 subjects. The alignment required approximately 10 hours on an Intel 3.8 GHz Nehalem quad-core processor with 12 GB RAM. To evaluate the alignment, we apply the warps to the held-out second-session fMRI data, where subjects viewed a different segment of the movie. This warping yields data {f^k_{g_k(p_i)}} for each subject k, with interpolation performed in the original volume to avoid artificial smoothing. The cross-validated inter-subject correlation ISC(p_i) is the mean
Figure 2: Consistency Histograms. (a) Intra-subject warp distances; (b) Inter-subject warp distances; (c) Angle
between warp vectors across sessions; (d) Across-session normalized warp consistency measure WNC.
Figure 3: Map of ISC on right cortical hemisphere, alignment: anatomical (top), functional (bottom); panels (a)-(f) show lateral, medial, and ventral views.
correlation of each subject's functional time series with the mean time series of the other subjects:

\[ \mathrm{ISC}(p_i) = \frac{1}{N_s} \sum_{k=1}^{N_s} \mathrm{corr}\Big( f^k_{g_k(p_i)},\; \sum_{n \neq k} f^n_{g_n(p_i)} \Big), \quad p_i \in M_{\mathrm{ref}} \qquad (24) \]

We also compute the mean inter-subject correlation, \(\overline{\mathrm{ISC}} = (1/N_v) \sum_{i=1}^{N_v} \mathrm{ISC}(p_i)\).
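For concreteness, (24) can be computed per mesh node as in the sketch below (an illustration, not the authors' code; NumPy assumed). Note that correlating against the sum of the other subjects is equivalent to correlating against their mean, since correlation is scale-invariant.

```python
import numpy as np

def isc(F):
    # F: (Ns, Nt, Nv) array of warped time series f^k_{g_k(p_i)}.
    # Returns ISC(p_i) for every node p_i, per (24).
    Ns, Nt, Nv = F.shape
    out = np.zeros(Nv)
    for i in range(Nv):
        for k in range(Ns):
            others = F[np.arange(Ns) != k, :, i].sum(axis=0)
            out[i] += np.corrcoef(F[k, :, i], others)[0, 1]
    return out / Ns

def mean_isc(F):
    # Mean ISC over all mesh nodes.
    return isc(F).mean()
```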
We compare the cross-validated ISC map with the ISC map of the second-session movie viewing computed under anatomical correspondence. Mean ISC improved by 18%, from 0.072 to 0.085. In addition, the number of significant inter-subject correlations (ISC(p_i) > 0.1, P < 0.01) increased by 22.9%, from 19,362 to 23,789. Figure 3 shows the ISC maps computed under anatomical alignment and functional alignment on the inflated right cortical hemisphere. As expected, the areas of improvement in inter-subject correlation are consistent with the extrinsic regions of cortex [6].
6 Conclusion
We have proposed a novel cortical registration algorithm that produces a functional correspondence
across a set of subjects. The algorithm uses the fMRI data directly to align the spatial patterns of
functional response elicited by a movie viewing. Despite the high-dimensionality of the data under
consideration, the algorithm is efficient in both space and time complexity.
By comparing the inter-subject alignments derived from different fMRI experimental sessions, we
show that the correspondence is consistent and robust to noise and variability in the fMRI temporal
response. We also cross-validate the correspondence on independent test data that was not used
to derive the alignment. On the test data, the algorithm produces a consistent increase in intersubject correlation of fMRI time series, suggesting that functional alignment of extrinsic regions of
cortex that are directly driven by the movie viewing experiment, such as visual and auditory areas,
is improved considerably. Further testing is warranted to evaluate improvement in intrinsic areas of
cortex whose response is not temporally synchronized with the experimental stimulus.
References
[1] J. Talairach and P. Tournoux. Co-planar Stereotaxic Atlas of the Human Brain. Thieme Publishing Group, 1988.
[2] B. Fischl, R.B.H. Tootell, and A.M. Dale. High-resolution intersubject averaging and a coordinate system for the cortical surface. Human Brain Mapping, 8:272-284, 1999.
[3] J.D.G. Watson, R. Myers, R.S.F. Frackowiak, J.V. Hajnal, R.P. Woods, J.C. Mazziotta, S. Shipp, and S. Zeki. Area V5 of the human brain: evidence from a combined study using positron emission tomography and magnetic resonance imaging. Cerebral Cortex, 3:79-94, 1993.
[4] J. Rademacher, V.S. Caviness, H. Steinmetz, and A.M. Galaburda. Topographical variation of the human primary cortices: implications for neuroimaging, brain mapping and neurobiology. Cerebral Cortex, 3:313-329, 1995.
[5] M.R. Sabuncu, B.D. Singer, B. Conroy, R.E. Bryan, P.J. Ramadge, and J.V. Haxby. Function-based inter-subject alignment of human cortical anatomy. Cerebral Cortex, Advance Access published on May 6, 2009, DOI 10.1093/cercor/bhp085.
[6] U. Hasson, Y. Nir, G. Fuhrmann, and R. Malach. Intersubject synchronization of cortical activity during natural vision. Science, 303:1634-1640, 2004.
[7] Y. Golland, S. Bentin, H. Gelbard, Y. Benjamini, R. Heller, Y. Nir, U. Hasson, and R. Malach. Extrinsic and intrinsic systems in the posterior cortex of the human brain revealed during natural sensory stimulation. Cerebral Cortex, 17:766-777, 2007.
[8] M.E. Raichle, A.M. MacLeod, A.Z. Snyder, W.J. Powers, D.A. Gusnard, and G.L. Shulman. A default mode of brain function. PNAS, 98:676-682, 2001.
[9] K.J. Friston. Functional and effective connectivity in neuroimaging. Human Brain Mapping, 2:56-78, 1994.
[10] Michael D. Greicius, Ben Krasnow, Allan L. Reiss, and Vinod Menon. Functional connectivity in the resting brain: A network analysis of the default mode hypothesis. PNAS, 100:253-258, 2003.
[11] J.L. Vincent, A.Z. Snyder, M.D. Fox, B.J. Shannon, J.R. Andrews, M.E. Raichle, and R.L. Buckner. Coherent spontaneous activity identifies a hippocampal-parietal memory network. J. Neurophysiol., 96:3517-3531, 2006.
[12] D.C. Van Essen, H.A. Drury, J. Dickson, J. Harwell, D. Hanlon, and C.H. Anderson. An integrated software suite for surface-based analyses of cerebral cortex. J. Am. Med. Inform. Assoc., 8:443-459, 2001.
[13] A.M. Dale, B. Fischl, and M.I. Sereno. Cortical surface-based analysis. I. Segmentation and surface reconstruction. NeuroImage, 9:179-194, 1999.
[14] B. Fischl, M.I. Sereno, and A.M. Dale. Cortical surface-based analysis. II. Inflation, flattening, and a surface-based coordinate system. NeuroImage, 9:195-207, 1999.
[15] G.J. McLachlan and T. Krishnan. The EM Algorithm and Extensions. Wiley, 1997.
[16] G.E. Fasshauer and L.L. Schumaker. Scattered data fitting on the sphere. Proceedings of the International Conference on Mathematical Methods for Curves and Surfaces II, pages 117-166, 1998.
[17] B.A. Ardekani, A.H. Bachman, S.C. Strother, Y. Fujibayashi, and Y. Yonekura. Impact of inter-subject image registration on group analysis of fMRI data. International Congress Series, 1265:49-59, 2004.
[18] M. Do Carmo. Differential Geometry of Curves and Surfaces. Prentice Hall, 1976.
[19] R. Bajcsy and S. Kovacic. Multiresolution elastic matching. Computer Vision, Graphics, and Image Processing, 46:1-21, 1989.
A unified framework for high-dimensional analysis of M-estimators with decomposable regularizers
Sahand Negahban
Department of EECS
UC Berkeley
sahand [email protected]
Pradeep Ravikumar
Department of Computer Sciences
UT Austin
[email protected]
Martin J. Wainwright
Department of Statistics
Department of EECS
UC Berkeley
[email protected]
Bin Yu
Department of Statistics
Department of EECS
UC Berkeley
[email protected]
Abstract
High-dimensional statistical inference deals with models in which the number of parameters p is comparable to or larger than the sample size n. Since it is usually impossible to obtain consistent procedures unless p/n → 0, a line of recent work has studied models with various types of structure (e.g., sparse vectors; block-structured matrices; low-rank matrices; Markov assumptions). In such settings, a general approach to estimation is to solve a regularized convex program (known as a regularized M-estimator) which combines a loss function (measuring how well the model fits the data) with some regularization function that encourages the assumed structure. The goal of this paper is to provide a unified framework for establishing consistency and convergence rates for such regularized M-estimators under high-dimensional scaling. We state one main theorem and show how it can be used to re-derive several existing results, and also to obtain several new results on consistency and convergence rates. Our analysis also identifies two key properties of loss and regularization functions, referred to as restricted strong convexity and decomposability, that ensure the corresponding regularized M-estimators have fast convergence rates.
1 Introduction
In many fields of science and engineering (among them genomics, financial engineering, natural language processing, remote sensing, and social network analysis), one encounters statistical inference
problems in which the number of predictors p is comparable to or even larger than the number of
observations n. Under this type of high-dimensional scaling, it is usually impossible to obtain statistically consistent estimators unless one restricts to subclasses of models with particular structure.
For instance, the data might be sparse in a suitably chosen basis, could lie on some manifold, or the
dependencies among the variables might have Markov structure specified by a graphical model.
In such settings, a common approach to estimating model parameters is is through the use of a
regularized M -estimator, in which some loss function (e.g., the negative log-likelihood of the data)
is regularized by a function appropriate to the assumed structure. Such estimators may also be
interpreted from a Bayesian perspective as maximum a posteriori estimates, with the regularizer
reflecting prior information. In this paper, we study such regularized M -estimation procedures,
and attempt to provide a unifying framework that both recovers some existing results and provides
new results on consistency and convergence rates under high-dimensional scaling. We illustrate
some applications of this general framework via three running examples of constrained parametric
structures. The first class is that of sparse vector models; we consider both the case of 'hard-sparse' models, which involve an explicit constraint on the number of non-zero model parameters, and also a class of 'weak-sparse' models in which the ordered coefficients decay at a certain rate. Second,
we consider block-sparse models, in which the parameters are matrix-structured, and entire rows are
either zero or not. Our third class is that of low-rank matrices, which arise in system identification,
collaborative filtering, and other types of matrix completion problems.
To motivate the need for a unified analysis, let us provide a brief (and hence necessarily incomplete)
overview of the broad range of past and on-going work on high-dimensional inference. For the case
of sparse regression, a popular regularizer is the ℓ1-norm of the parameter vector, which is the sum of the absolute values of the parameters. A number of researchers have studied the Lasso [15, 3] as well as the closely related Dantzig selector [2] and provided conditions on various aspects of its behavior, including ℓ2-error bounds [7, 1, 21, 2] and model selection consistency [22, 19, 6, 16]. For generalized linear models (GLMs) and exponential family models, estimators based on ℓ1-regularized maximum likelihood have also been studied, including results on risk consistency [18] and model selection consistency [11]. A body of work has focused on the case of estimating Gaussian graphical
models, including convergence rates in Frobenius and operator norm [14], and results on operator
norm and model selection consistency [12]. Motivated by inference problems involving block-sparse
matrices, other researchers have proposed block-structured regularizers [17, 23], and more recently,
high-dimensional consistency results have been obtained for model selection and parameter consistency [4, 8].
In this paper, we derive a single main theorem, and show how we are able to rederive a wide range
of known results on high-dimensional consistency, as well as some novel ones, including estimation error rates for low-rank matrices, sparse matrices, and 'weakly'-sparse vectors. Due to space
constraints, many of the technical details are deferred to the full-length version of this conference
paper.
2 Problem formulation and some key properties
In this section, we begin with a precise formulation of the problem, and then develop some key properties of the regularizer and loss function. In particular, we define a notion of decomposability for regularizing functions r, and then prove that when it is satisfied, the error Δ̂ = θ̂ − θ* of the regularized M-estimator must satisfy certain constraints. We use these constraints to define a notion of restricted strong convexity that the loss function must satisfy.
2.1 Problem set-up
Consider a random variable Z with distribution P taking values in a set Z. Let Z_1^n := {Z_1, . . . , Z_n} denote n observations drawn in an i.i.d. manner from P, and suppose θ* ∈ R^p is some parameter of this distribution. We consider the problem of estimating θ* from the data Z_1^n, and in order to do so, we consider the following class of regularized M-estimators. Let L : R^p × Z^n → R be some loss function that assigns a cost to any parameter θ ∈ R^p, for a given set of observations Z_1^n. Let r : R^p → R denote a regularization function. We then consider the regularized M-estimator given by

\[ \hat{\theta} \in \arg\min_{\theta \in \mathbb{R}^p} \big\{ L(\theta; Z_1^n) + \lambda_n r(\theta) \big\} \qquad (1) \]

where λ_n > 0 is a user-defined regularization penalty. For ease of notation, in the sequel, we adopt the shorthand L(θ) for L(θ; Z_1^n). Throughout the paper, we assume that the loss function L is convex and differentiable, and that the regularizer r is a norm.
Our goal is to provide general techniques for deriving bounds on the error θ̂ − θ* in some error metric d. A common example is the ℓ_2-norm d(θ̂ − θ*) := ‖θ̂ − θ*‖_2. As discussed earlier, high-dimensional parameter estimation is made possible by structural constraints on θ* such as sparsity, and we will see that the behavior of the error is determined by how well these constraints are captured by the regularization function r(·). We now turn to the properties of the regularizer r and the loss function L that underlie our analysis.
2.2 Decomposability
Our first condition requires that the regularization function r be decomposable, in a sense to be
defined precisely, with respect to a family of subspaces. This notion is a formalization of the manner
in which the regularization function imposes constraints on possible parameter vectors θ* ∈ R^p. We
begin with some abstract definitions, which we then illustrate with a number of concrete examples.
Take some arbitrary inner product space H, and let ‖·‖_2 denote the norm induced by the inner product. Consider a pair (A, B) of subspaces of H such that A ⊆ B^⊥. For a given subspace A and vector u ∈ H, we let Π_A(u) := argmin_{v ∈ A} ‖u − v‖_2 denote the orthogonal projection of u onto A. We let V = {(A, B) | A ⊆ B^⊥} be a collection of subspace pairs. For a given statistical model, our goal is to construct subspace collections V such that for any given θ* from our model class, there exists a pair (A, B) ∈ V with ‖Π_A(θ*)‖_2 ≈ ‖θ*‖_2 and ‖Π_B(θ*)‖_2 ≈ 0. Of most interest to us are subspace pairs (A, B) in which this property holds but the subspace A is relatively small and B is relatively large. Note that A represents the constraints underlying our model class, and imposed by our regularizer. For the bulk of the paper, we assume that H = R^p and use the standard Euclidean inner product (which should be assumed unless otherwise specified).
As a first concrete (but toy) example, consider the model class of all vectors θ* ∈ R^p, and the subspace collection T that consists of the single subspace pair (A, B) = (R^p, 0). We refer to this choice (V = T) as the trivial subspace collection. In this case, for any θ* ∈ R^p, we have Π_A(θ*) = θ* and Π_B(θ*) = 0. Although this collection satisfies our desired property, it is not so useful since A = R^p is a very large subspace. As a second example, consider the class of s-sparse parameter vectors θ* ∈ R^p, meaning that θ_i* ≠ 0 only if i ∈ S, where S is some s-sized subset of {1, 2, …, p}. For any given subset S and its complement S^c, let us define the subspaces

    A(S) = {θ ∈ R^p | θ_{S^c} = 0},  and  B(S) = {θ ∈ R^p | θ_S = 0},

and the s-sparse subspace collection S = {(A(S), B(S)) | S ⊆ {1, …, p}, |S| = s}. With this set-up, for any s-sparse parameter vector θ*, we are guaranteed that there exists some (A, B) ∈ S such that Π_A(θ*) = θ* and Π_B(θ*) = 0. In this case, the property is more interesting, since the subspaces A(S) are relatively small as long as |S| = s ≪ p.
With this set-up, we say that the regularizer r is decomposable with respect to a given subspace pair (A, B) if

    r(u + z) = r(u) + r(z)  for all u ∈ A and z ∈ B.    (2)

In our subsequent analysis, we impose the following condition on the regularizer:

Definition 1. The regularizer r is decomposable with respect to a given subspace collection V, meaning that it is decomposable for each subspace pair (A, B) ∈ V.

Note that any regularizer is decomposable with respect to the trivial subspace collection T = {(R^p, 0)}. It will be of more interest to us when the regularizer decomposes with respect to a larger collection V that includes subspace pairs (A, B) in which A is relatively small and B is relatively large. Let us illustrate with some examples.
• Sparse vectors and ℓ_1-norm regularization. Consider a model involving s-sparse regression vectors θ* ∈ R^p, and recall the definition of the s-sparse subspace collection S discussed above. We claim that the ℓ_1-norm regularizer r(u) = ‖u‖_1 is decomposable with respect to S. Indeed, for any s-sized subset S and vectors u ∈ A(S) and v ∈ B(S), we have ‖u + v‖_1 = ‖u‖_1 + ‖v‖_1, as required.
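The decomposability of the ℓ_1 norm over complementary supports is easy to verify numerically. The sketch below (plain Python; the helper `l1` is ours, not from the paper) checks equation (2) on a small example with p = 5 and support S = {0, 1}.

```python
def l1(v):
    """The l1 norm of a vector represented as a list of floats."""
    return sum(abs(x) for x in v)

# u lives in A(S): zero off the support S = {0, 1}.
# z lives in B(S): zero on the support.
u = [1.5, -2.0, 0.0, 0.0, 0.0]
z = [0.0, 0.0, 0.25, -1.0, 3.0]

total = [a + b for a, b in zip(u, z)]
# Decomposability (2): r(u + z) = r(u) + r(z) for u in A, z in B.
assert l1(total) == l1(u) + l1(z)
```

The same identity fails for, say, the ℓ_2 norm, which is why decomposability is a genuine restriction on the regularizer rather than a generic property of norms.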
• Group-structured sparse matrices and ℓ_{1,q} matrix norms. Various statistical problems involve matrix-valued parameters Θ ∈ R^{k×m}; examples include multivariate regression problems or (inverse) covariance matrix estimation. We can define an inner product on such matrices via ⟨⟨Θ, Γ⟩⟩ = trace(Θ^T Γ) and the induced (Frobenius) norm ( Σ_{i=1}^k Σ_{j=1}^m Θ_{ij}^2 )^{1/2}. Let us suppose that Θ satisfies a group sparsity condition, meaning that the ith row, denoted Θ_i, is non-zero only if i ∈ S ⊆ {1, …, k} and the cardinality of S is controlled. For a given subset S, we can define the subspace pair

    A(S) = { Θ ∈ R^{k×m} | Θ_i = 0 for all i ∈ S^c },  and  B(S) = (A(S))^⊥.

For some fixed s ≤ k, we then consider the collection

    V = {(A(S), B(S)) | S ⊆ {1, …, k}, |S| = s},

which is a group-structured analog of the s-sparse set S for vectors. For any q ∈ [1, ∞], now suppose that the regularizer is the ℓ_1/ℓ_q matrix norm, given by r(Θ) = Σ_{i=1}^k [ Σ_{j=1}^m |Θ_{ij}|^q ]^{1/q}, corresponding to applying the ℓ_q norm to each row and then taking the ℓ_1-norm of the result. It can be seen that the regularizer r(Θ) = |||Θ|||_{1,q} is decomposable with respect to the collection V.
• Low-rank matrices and nuclear norm. The estimation of low-rank matrices arises in various contexts, including principal component analysis, spectral clustering, collaborative filtering, and matrix completion. In particular, consider the class of matrices Θ ∈ R^{k×m} that have rank r ≤ min{k, m}. For any given matrix Θ, we let row(Θ) ⊆ R^m and col(Θ) ⊆ R^k denote its row space and column space respectively. For a given pair of r-dimensional subspaces U ⊆ R^k and V ⊆ R^m, we define a pair of subspaces A(U, V) and B(U, V) of R^{k×m} as follows:

    A(U, V) := { Θ ∈ R^{k×m} | row(Θ) ⊆ V, col(Θ) ⊆ U },    (3a)
    B(U, V) := { Θ ∈ R^{k×m} | row(Θ) ⊆ V^⊥, col(Θ) ⊆ U^⊥ }.    (3b)

Note that A(U, V) ⊆ B^⊥(U, V), as is required by our construction. We then consider the collection V = {(A(U, V), B(U, V)) | U ⊆ R^k, V ⊆ R^m}, where (U, V) range over all pairs of r-dimensional subspaces. Now suppose that we regularize with the nuclear norm r(Θ) = |||Θ|||_1, corresponding to the sum of the singular values of the matrix Θ. It can be shown that the nuclear norm is decomposable with respect to V. Indeed, since any pair of matrices M ∈ A(U, V) and M′ ∈ B(U, V) have orthogonal row and column spaces, we have |||M + M′|||_1 = |||M|||_1 + |||M′|||_1 (e.g., see the paper [13]).
Thus, we have demonstrated various models and regularizers in which decomposability is satisfied with interesting subspace collections V. We now show that decomposability has important consequences for the error Δ̂ = θ̂ − θ*, where θ̂ ∈ R^p is any optimal solution of the regularized M-estimation procedure (1). In order to state a lemma that captures this fact, we need to define the dual norm of the regularizer, given by r*(v) := sup_{u ∈ R^p} ⟨u, v⟩ / r(u). For the regularizers of interest, the dual norm can be obtained via some easy calculations. For instance, given a vector θ ∈ R^p and r(θ) = ‖θ‖_1, we have r*(θ) = ‖θ‖_∞. Similarly, given a matrix Θ ∈ R^{k×m} and the nuclear norm regularizer r(Θ) = |||Θ|||_1, we have r*(Θ) = |||Θ|||_2, corresponding to the operator norm (or maximal singular value).
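The ℓ_1/ℓ_∞ duality claim can be sanity-checked numerically. The sketch below (plain Python, purely illustrative) confirms that for r = ℓ_1 the supremum in the dual-norm definition is attained at a signed coordinate vector and equals ‖v‖_∞, and that no other direction does better, by Hölder's inequality.

```python
import random

def l1(v):
    return sum(abs(x) for x in v)

def linf(v):
    return max(abs(x) for x in v)

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

v = [0.3, -2.5, 1.1, 0.0]

# r*(v) = sup_u <u, v> / r(u); for r = l1 the sup is attained at the
# signed coordinate vector e_j with j = argmax_j |v_j|.
j = max(range(len(v)), key=lambda i: abs(v[i]))
e = [0.0] * len(v)
e[j] = 1.0 if v[j] >= 0 else -1.0
assert inner(e, v) == linf(v)  # the coordinate vector achieves ||v||_inf

# No random direction exceeds it: <u, v> <= ||u||_1 ||v||_inf (Hoelder).
random.seed(0)
for _ in range(1000):
    u = [random.uniform(-1, 1) for _ in range(len(v))]
    assert inner(u, v) <= l1(u) * linf(v) + 1e-12
```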
Lemma 1. Suppose θ̂ is an optimal solution of the regularized M-estimation procedure (1), with associated error Δ̂ = θ̂ − θ*. Furthermore, suppose that the regularization penalty is strictly positive with λ_n ≥ 2 r*(∇L(θ*)). Then for any (A, B) ∈ V,

    r(Π_B(Δ̂)) ≤ 3 r(Π_{B^⊥}(Δ̂)) + 4 r(Π_{A^⊥}(θ*)).

This property plays an essential role in our definition of restricted strong convexity and subsequent analysis.
2.3 Restricted Strong Convexity

Next we state our assumption on the loss function L. In general, guaranteeing that L(θ̂) − L(θ*) is small is not sufficient to show that θ̂ and θ* are close. (As a trivial example, consider a loss function that is identically zero.) The standard way to ensure that a function is "not too flat" is via the notion of strong convexity: in particular, by requiring that there exist some constant κ > 0 such that L(θ* + Δ) − L(θ*) − ⟨∇L(θ*), Δ⟩ ≥ κ d^2(Δ) for all Δ ∈ R^p. In the high-dimensional setting, where the number of parameters p may be much larger than the sample size, the strong convexity assumption need not be satisfied. As a simple example, consider the usual linear regression model y = Xθ* + w, where y ∈ R^n is the response vector, θ* ∈ R^p is the unknown parameter vector, X ∈ R^{n×p} is the design matrix, and w ∈ R^n is a noise vector with i.i.d. zero-mean elements. The least-squares loss is given by L(θ) = (1/2n) ‖y − Xθ‖_2^2, and has the Hessian H(θ) = (1/n) X^T X. It is easy to check that the p × p matrix H(θ) will be rank-deficient whenever p > n, showing that the least-squares loss cannot be strongly convex (with respect to d(·) = ‖·‖_2) when p > n.
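The rank-deficiency argument is easy to see numerically. The NumPy sketch below (illustrative dimensions of our choosing) builds a random design with p > n, checks that the Hessian of the least-squares loss has rank at most n < p, and exhibits a null-space direction along which the quadratic form vanishes.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 10, 25  # high-dimensional regime: p > n
X = rng.standard_normal((n, p))

# Hessian of the least-squares loss L(theta) = ||y - X theta||_2^2 / (2n).
H = X.T @ X / n  # p x p

rank = np.linalg.matrix_rank(H)
assert rank <= n < p  # rank-deficient: at most n nonzero eigenvalues

# Any direction in the null space of X certifies the failure of
# (unrestricted) strong convexity: the quadratic form is zero there.
_, _, Vt = np.linalg.svd(X)
delta = Vt[-1]  # right singular vector with singular value 0
assert abs(delta @ H @ delta) < 1e-10
```

This is precisely why the analysis below only asks for strong convexity over a restricted set of directions.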
Herein lies the utility of Lemma 1: it guarantees that the error Δ̂ must lie within a restricted set,
so that we only need the loss function to be strongly convex for a limited set of directions. More
precisely, we have:
Definition 2. Given some subset C ⊆ R^p and error norm d(·), we say that the loss function L satisfies restricted strong convexity (RSC) (with respect to d(·)) with parameter κ(L) > 0 over C if

    L(θ* + Δ) − L(θ*) − ⟨∇L(θ*), Δ⟩ ≥ κ(L) d^2(Δ)  for all Δ ∈ C.    (4)

In the statement of our results, we will be interested in loss functions that satisfy RSC over sets C(A, B, ε) that are indexed by a subspace pair (A, B) and a tolerance ε ≥ 0 as follows:

    C(A, B, ε) := { Δ ∈ R^p | r(Π_B(Δ)) ≤ 3 r(Π_{B^⊥}(Δ)) + 4 r(Π_{A^⊥}(θ*)), d(Δ) ≥ ε }.    (5)

In the special case of least-squares regression with hard sparsity constraints, the RSC condition corresponds to a lower bound on the sparse eigenvalues of the Hessian matrix X^T X, and is essentially equivalent to a restricted eigenvalue condition introduced by Bickel et al. [1].
3 Convergence rates
We are now ready to state a general result that provides bounds, and hence convergence rates, for the error d(θ̂ − θ*). Although it may appear somewhat abstract at first sight, we illustrate that this result has a number of concrete consequences for specific models. In particular, we recover the best known results about estimation in s-sparse models with general designs [1, 7], as well as a number of new results, including convergence rates for estimation under ℓ_q-sparsity constraints, estimation in sparse generalized linear models, estimation of block-structured sparse matrices, and estimation of low-rank matrices.
In addition to the regularization parameter λ_n and RSC constant κ(L) of the loss function, our general result involves a quantity that relates the error metric d to the regularizer r; in particular, for any set A ⊆ R^p, we define

    Ψ(A) := sup_{u ∈ A, d(u)=1} r(u),    (6)

so that r(u) ≤ Ψ(A) d(u) for u ∈ A.
Theorem 1 (Bounds for general models). For a given subspace collection V, suppose that the regularizer r is decomposable, and consider the regularized M-estimator (1) with λ_n ≥ 2 r*(∇L(θ*)). Then, for any pair of subspaces (A, B) ∈ V and tolerance ε ≥ 0 such that the loss function L satisfies restricted strong convexity over C(A, B, ε), we have

    d(θ̂ − θ*) ≤ max{ ε, (1/κ(L)) [ 2 Ψ(B^⊥) λ_n + 2 √( λ_n κ(L) r(Π_{A^⊥}(θ*)) ) ] }.    (7)

The proof is motivated by arguments used in past work on high-dimensional estimation (e.g., [9, 14]); we provide the details in the full-length version. The remainder of this paper is devoted to illustrations of the consequences of Theorem 1 for specific models. In all of these uses of Theorem 1, we choose the regularization parameter as small as possible, namely λ_n = 2 r*(∇L(θ*)). Although Theorem 1 allows for more general choices, in this conference version, we focus exclusively on the case where d(·) is the ℓ_2-norm. In addition, we choose a tolerance parameter ε = 0 for all of the results except for the weak-sparse models treated in Section 3.1.2.
3.1 Bounds for linear regression
Consider the standard linear regression model y = Xθ* + w, where θ* ∈ R^p is the regression vector, X ∈ R^{n×p} is the design matrix, and w ∈ R^n is a noise vector. Given the observations (y, X), our goal is to estimate the regression vector θ*. Without any structural constraints on θ*, we can apply Theorem 1 with the trivial subspace collection T = {(R^p, 0)} to establish a rate ‖θ̂ − θ*‖_2 = O(σ √(p/n)) for ridge regression, which holds as long as X is full-rank (and hence requires n > p). Here we consider the sharper bounds that can be obtained when it is assumed that θ* is an s-sparse vector.
3.1.1 Lasso estimates of hard sparse models
More precisely, let us consider estimating an s-sparse regression vector θ* by solving the Lasso program

    θ̂ ∈ arg min_{θ ∈ R^p} { (1/2n) ‖y − Xθ‖_2^2 + λ_n ‖θ‖_1 }.

The Lasso is a special case of our M-estimator (1) with r(θ) = ‖θ‖_1 and L(θ) = (1/2n) ‖y − Xθ‖_2^2. Recall the definition of the s-sparse subspace collection S from Section 2.2. For this problem, let us set ε = 0 so that the restricted strong convexity set (5) reduces to C(A, B, 0) = {Δ ∈ R^p | ‖Δ_{S^c}‖_1 ≤ 3 ‖Δ_S‖_1}. Establishing restricted strong convexity for the least-squares loss is equivalent to ensuring the following bound on the design matrix:

    ‖XΔ‖_2^2 / n ≥ κ(L) ‖Δ‖_2^2  for all Δ ∈ R^p such that ‖Δ_{S^c}‖_1 ≤ 3 ‖Δ_S‖_1.    (8)

As mentioned previously, this condition is essentially the same as the restricted eigenvalue condition developed by Bickel et al. [1]. In very recent work, Raskutti et al. [10] show that condition (8) holds with high probability for various random ensembles of Gaussian matrices with non-i.i.d. elements.

In addition to the RSC condition, we assume that X has bounded column norms (specifically, ‖X_i‖_2 ≤ √(2n) for all i = 1, …, p), and that the noise vector w ∈ R^n has i.i.d. elements with zero mean and sub-Gaussian tails (i.e., there exists some constant σ > 0 such that P[|w_i| > t] ≤ exp(−t^2 / 2σ^2) for all t > 0). Under these conditions, we recover as a corollary of Theorem 1 the following known result [1, 7].
Corollary 1. Suppose that the true vector θ* ∈ R^p is exactly s-sparse with support S, and that the design matrix X satisfies condition (8). If we solve the Lasso with λ_n^2 = 16σ^2 (log p)/n, then with probability at least 1 − c_1 exp(−c_2 n λ_n^2), the solution satisfies

    ‖θ̂ − θ*‖_2 ≤ (8σ/κ(L)) √( s log p / n ).    (9)

Proof. As noted previously, the ℓ_1-regularizer is decomposable for the sparse subspace collection S, while condition (8) ensures that RSC holds for all sets C(A, B, 0) with (A, B) ∈ S. We must verify that the given choice of regularization satisfies λ_n ≥ 2 r*(∇L(θ*)). Note that r*(·) = ‖·‖_∞, and moreover that ∇L(θ*) = X^T w / n. Under the column normalization condition on the design matrix X and the sub-Gaussian nature of the noise, it follows that ‖X^T w / n‖_∞ ≤ √( 4σ^2 log p / n ) with high probability. The bound in Theorem 1 is thus applicable, and it remains to compute the form that its different terms take in this special case. For the ℓ_1-regularizer and the ℓ_2 error metric, we have Ψ(A_S) = √|S|. Given the hard sparsity assumption, r(Π_{A^⊥}(θ*)) = 0, so that Theorem 1 implies that ‖θ̂ − θ*‖_2 ≤ (2/κ(L)) √s λ_n = (8σ/κ(L)) √( s log p / n ), as claimed.
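To make Corollary 1 concrete, here is a minimal proximal-gradient (ISTA) solver for the Lasso program above, written in NumPy. The problem dimensions, step size, and iteration count are arbitrary illustrative choices, not part of the paper's analysis; only the λ_n ∝ σ √(log p / n) scaling comes from the corollary.

```python
import numpy as np

def soft_threshold(v, t):
    """Prox operator of t * ||.||_1: elementwise soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, iters=2000):
    """Minimize (1/2n) ||y - X theta||_2^2 + lam * ||theta||_1 by ISTA."""
    n, p = X.shape
    step = n / np.linalg.norm(X, 2) ** 2  # 1/L, L = Lipschitz const of grad
    theta = np.zeros(p)
    for _ in range(iters):
        grad = X.T @ (X @ theta - y) / n
        theta = soft_threshold(theta - step * grad, step * lam)
    return theta

# Hard-sparse ground truth: s = 3 nonzeros in p = 200 dimensions.
rng = np.random.default_rng(1)
n, p, s, sigma = 400, 200, 3, 0.5
theta_star = np.zeros(p)
theta_star[:s] = [2.0, -1.5, 1.0]
X = rng.standard_normal((n, p))
y = X @ theta_star + sigma * rng.standard_normal(n)

lam = 4 * sigma * np.sqrt(np.log(p) / n)  # the scaling from Corollary 1
theta_hat = lasso_ista(X, y, lam)
err = np.linalg.norm(theta_hat - theta_star)
assert err < 1.0  # small relative to ||theta*||_2, consistent with (9)
```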
3.1.2 Lasso estimates of weak sparse models
We now consider models that satisfy a weak sparsity assumption. More concretely, suppose that θ* lies in the ℓ_q-"ball" of radius R_q, namely the set B_q(R_q) := {θ ∈ R^p | Σ_{i=1}^p |θ_i|^q ≤ R_q} for some q ∈ (0, 1]. Our analysis exploits the fact that any θ* ∈ B_q(R_q) can be well approximated by an s-sparse vector (for an appropriately chosen sparsity index s). It is natural to approximate θ* by a vector supported on the set S = {i | |θ_i*| ≥ τ}. For any choice of threshold τ > 0, it can be shown that |S| ≤ R_q τ^{−q}, and it is optimal to choose τ equal to the same regularization parameter λ_n from Corollary 1 (see the full-length version for details). Accordingly, we consider the s-sparse subspace collection S with subsets of size s = R_q λ_n^{−q}. We assume that the noise vector w ∈ R^n is as defined above and that the columns are normalized as in the previous section. We also assume that the matrix X satisfies the condition

    ‖Xv‖_2 / √n ≥ κ_1 ‖v‖_2 − κ_2 ( log p / n )^{1/2} ‖v‖_1  for constants κ_1, κ_2 > 0.    (10)

Raskutti et al. [10] show that this property holds with high probability for suitable Gaussian random matrices. Under this condition, it can be verified that RSC holds with κ(L) = κ_1/2 over the set C(A(S), B(S), ε_n), where ε_n^2 = (4/κ_1 + 4/√κ_1)^2 R_q ( 16σ^2 log p / n )^{1−q/2}. The following result, which we obtain by applying Theorem 1 in this setting, is new to the best of our knowledge:
Corollary 2. Suppose that the true vector θ* ∈ B_q(R_q), and the design matrix X satisfies condition (10). If we solve the Lasso with λ_n^2 = 16σ^2 (log p)/n, then with probability 1 − c_1 exp(−c_2 n λ_n^2), the solution satisfies

    ‖θ̂ − θ*‖_2^2 ≤ R_q ( 16σ^2 log p / n )^{1−q/2} [ 2/κ(L) + 2/√κ(L) ]^2.    (11)

We note that both of the rates, for hard sparsity in Corollary 1 and weak sparsity in Corollary 2, are known to be optimal¹ in a minimax sense [10].
3.2 Bounds for generalized linear models
Our next example is a generalized linear model with canonical link function, where the distribution of the response y ∈ Y based on a predictor x ∈ R^p is given by p(y | x; θ*) = exp( y ⟨θ*, x⟩ − a(⟨θ*, x⟩) + d(y) ), for some fixed functions a : R → R and d : Y → R, where ‖x‖_∞ ≤ A and |y| ≤ B. We consider estimating θ* from observations {(x_i, y_i)}_{i=1}^n by ℓ_1-regularized maximum likelihood:

    θ̂ ∈ arg min_{θ ∈ R^p} { −(1/n) Σ_{i=1}^n y_i ⟨θ, x_i⟩ + (1/n) Σ_{i=1}^n a(⟨θ, x_i⟩) + λ_n ‖θ‖_1 }.

This is a special case of our M-estimator (1) with L(θ) = −⟨θ, (1/n) Σ_{i=1}^n y_i x_i⟩ + (1/n) Σ_{i=1}^n a(⟨θ, x_i⟩) and r(θ) = ‖θ‖_1. Let X ∈ R^{n×p} denote the matrix with ith row x_i. For the analysis, we again use the s-sparse subspace collection S and ε = 0. With these choices, it can be verified that an appropriate version of the RSC will hold if the second derivative a′′ is strongly convex, and the design matrix X satisfies a version of the condition (8).

Corollary 3. Suppose that the true vector θ* ∈ R^p is exactly s-sparse with support S, and the model (a, X) satisfies an RSC condition. Suppose that we compute the ℓ_1-regularized MLE with λ_n^2 = 32 A^2 B^2 (log p)/n. Then with probability 1 − c_1 exp(−c_2 n λ_n^2), the solution satisfies

    ‖θ̂ − θ*‖_2 ≤ (16AB/κ(L)) √( s log p / n ).    (12)

We defer the proof to the full-length version due to space constraints.
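As a concrete instance of this estimator, the self-contained NumPy sketch below fits an ℓ_1-regularized logistic model (canonical link with a(t) = log(1 + e^t), y ∈ {0, 1}) by proximal gradient descent. The dimensions, step size, and the exact λ_n used are illustrative assumptions of ours, not prescriptions from the paper; the design has ‖x‖_∞ ≤ A = 1 and |y| ≤ B = 1 as in the corollary's setup.

```python
import numpy as np

def prox_l1(v, t):
    """Prox of t * ||.||_1: elementwise soft thresholding."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def logistic_lasso(X, y, lam, step=1.0, iters=1500):
    """l1-regularized MLE for the logistic GLM, a(t) = log(1 + e^t)."""
    n, p = X.shape
    theta = np.zeros(p)
    for _ in range(iters):
        # Gradient of L(theta) = (1/n) sum_i [a(<theta,x_i>) - y_i <theta,x_i>].
        mu = 1.0 / (1.0 + np.exp(-(X @ theta)))  # a'(<theta, x_i>)
        grad = X.T @ (mu - y) / n
        theta = prox_l1(theta - step * grad, step * lam)
    return theta

rng = np.random.default_rng(2)
n, p, s = 2000, 100, 3
theta_star = np.zeros(p)
theta_star[:s] = [1.0, -1.0, 0.5]
X = rng.choice([-1.0, 1.0], size=(n, p))  # ||x_i||_inf <= A = 1
probs = 1.0 / (1.0 + np.exp(-(X @ theta_star)))
y = rng.binomial(1, probs).astype(float)  # |y_i| <= B = 1

lam = np.sqrt(np.log(p) / n)  # the sqrt(log p / n) scaling of Corollary 3
theta_hat = logistic_lasso(X, y, lam)
assert np.linalg.norm(theta_hat - theta_star) < 1.0
```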
3.3 Bounds for sparse matrices
In this section, we consider some extensions of our results to estimation of regression matrices. Various authors have proposed extensions of the Lasso based on regularizers that have more structure than the ℓ_1 norm (e.g., [17, 20, 23, 5]). Such regularizers allow one to impose various types of block-sparsity constraints, in which groups of parameters are assumed to be active (or inactive) simultaneously. We assume that the observation model takes the form Y = XΘ* + W, where Θ* ∈ R^{k×m} is the unknown fixed set of parameters, X ∈ R^{n×k} is the design matrix, and W ∈ R^{n×m} is the noise matrix. As a loss function, we use the (squared) Frobenius norm L(Θ) = (1/n) |||Y − XΘ|||_F^2, and as a regularizer, we use the ℓ_{1,q}-matrix norm for some q ≥ 1, which takes the form |||Θ|||_{1,q} = Σ_{i=1}^k ‖(Θ_{i1}, …, Θ_{im})‖_q. We refer to the resulting estimator as the q-group Lasso. We define the quantity Ψ(m; q) = 1 if q ∈ (1, 2] and Ψ(m; q) = m^{1/2−1/q} if q > 2. We then set the regularization parameter as follows:

    λ_n = (4σ/√n) [ Ψ(m; q) √(log k) + C_q m^{1−1/q} ]  if q > 1,
    λ_n = 4σ √( log(km) / n )  for q = 1.
Corollary 4. Suppose that the true parameter matrix Θ* has non-zero rows only for indices i ∈ S ⊆ {1, …, k} where |S| = s, and that the design matrix X ∈ R^{n×k} satisfies condition (8). Then with probability at least 1 − c_1 exp(−c_2 n λ_n^2), the q-block Lasso solution satisfies

    |||Θ̂ − Θ*|||_F ≤ (2/κ(L)) Ψ(S) λ_n.    (13)

¹ Raskutti et al. [10] show that the rate (11) is achievable by solving the computationally intractable problem of minimizing L(θ) over the ℓ_q-ball.
The proof is provided in the full-length version; here we consider three special cases of the above result. A simple argument shows that Ψ(S) = √s if q ≥ 2, and Ψ(S) = m^{1/q−1/2} √s if q ∈ [1, 2]. For q = 1, solving the group Lasso is identical to solving a Lasso problem with sparsity sm and ambient dimension km, and the resulting upper bound (8σ/κ(L)) √( s m log(km) / n ) reflects this fact (compare to Corollary 1). For the case q = 2, Corollary 4 yields the upper bound (8σ/κ(L)) [ √( s log k / n ) + √( sm / n ) ], which also has a natural interpretation: the term √(s log k / n) captures the difficulty of finding the s non-zero rows out of the total k, whereas the term √(sm/n) captures the difficulty of estimating the sm free parameters in the matrix (once the non-zero rows have been determined). We note that recent work by Lounici et al. [4] established the bound O( (σ/κ(L)) [ √( m s log k / n ) + √( sm / n ) ] ), which is equivalent apart from a term √m. Finally, for q = ∞, we obtain the upper bound (8σ/κ(L)) [ √( s log k / n ) + m √( s / n ) ].
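For q = 2, the proximal operator of λ|||·|||_{1,2} soft-thresholds the ℓ_2 norm of each row, so the q-group Lasso can be solved with the same proximal-gradient template as the ordinary Lasso. The NumPy sketch below shows just that row-wise prox; it is an illustrative implementation of a standard operator, not code from the paper.

```python
import numpy as np

def prox_group_l2(Theta, t):
    """Prox of t * |||.|||_{1,2}: shrink each row's l2 norm by t."""
    norms = np.linalg.norm(Theta, axis=1, keepdims=True)
    scale = np.maximum(1.0 - t / np.maximum(norms, 1e-12), 0.0)
    return scale * Theta

# A row with small norm is zeroed out entirely (the row-sparsity effect),
# while a strong row is shrunk toward zero but kept.
Theta = np.array([[3.0, 4.0],    # row norm 5.0
                  [0.1, 0.0]])   # row norm 0.1
out = prox_group_l2(Theta, t=0.5)
assert np.allclose(out[1], 0.0)          # weak row eliminated
assert np.allclose(out[0], [2.7, 3.6])   # row norm shrunk from 5.0 to 4.5
```

This all-or-nothing behavior on rows is exactly the block-sparsity that the ℓ_{1,q} regularizers are designed to encourage.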
3.4 Bounds for estimating low rank matrices
Finally, we consider the implications of our main result for the problem of estimating low-rank matrices. This structural assumption is a natural variant of sparsity, and has been studied by various authors (see the paper [13] and references therein). To illustrate our main theorem in this context, let us consider the following instance of low-rank matrix learning. Given a low-rank matrix Θ* ∈ R^{k×m}, suppose that we are given n noisy observations of the form Y_i = ⟨⟨X_i, Θ*⟩⟩ + W_i, where W_i ~ N(0, 1) and ⟨⟨A, B⟩⟩ := trace(A^T B). Such an observation model arises in system identification settings in control theory [13]. The following regularized M-estimator can be considered in order to estimate the desired low-rank matrix Θ*:

    min_{Θ ∈ R^{k×m}} (1/2n) Σ_{i=1}^n | Y_i − ⟨⟨X_i, Θ⟩⟩ |^2 + λ_n |||Θ|||_1,    (14)

where the regularizer |||Θ|||_1 is the nuclear norm, or the sum of the singular values of Θ. Recall the rank-r collection V defined for low-rank matrices in Section 2.2. Let Θ* = U Σ W^T be the singular value decomposition (SVD) of Θ*, so that U ∈ R^{k×r} and W ∈ R^{m×r} are orthogonal, and Σ ∈ R^{r×r} is a diagonal matrix. If we let A = A(U, W) and B = B(U, W), then Π_B(Θ*) = 0, so that by Lemma 1 we have |||Π_B(Δ)|||_1 ≤ 3 |||Π_{B^⊥}(Δ)|||_1. Thus, for restricted strong convexity to hold, it can be shown that the design matrices X_i must satisfy

    (1/n) Σ_{i=1}^n |⟨⟨X_i, Δ⟩⟩|^2 ≥ κ(L) |||Δ|||_F^2  for all Δ such that |||Π_B(Δ)|||_1 ≤ 3 |||Π_{B^⊥}(Δ)|||_1,    (15)

and satisfy the appropriate analog of the column-normalization condition. As with analogous conditions for sparse linear regression, these conditions hold w.h.p. for various non-i.i.d. Gaussian random matrices.²
Corollary 5. Suppose that the true matrix Θ* has rank r ≪ min(k, m), and that the design matrices {X_i} satisfy condition (15). If we solve the regularized M-estimator (14) with λ_n = 16 (√k + √m)/√n, then with probability at least 1 − c_1 exp(−c_2 (k + m)), we have

    |||Θ̂ − Θ*|||_F ≤ (16/κ(L)) [ √( rk / n ) + √( rm / n ) ].    (16)
Proof. Note that if rank(Θ*) = r, then |||Θ*|||_1 ≤ √r |||Θ*|||_F, so that Ψ(B^⊥) = √(2r), since the subspace B(U, V)^⊥ consists of matrices with rank at most 2r. All that remains is to show that λ_n ≥ 2 r*(∇L(Θ*)). Standard analysis gives that the dual norm to |||·|||_1 is the operator norm |||·|||_2. Applying this observation, we may construct a bound on the operator norm of ∇L(Θ*) = (1/n) Σ_{i=1}^n X_i W_i. Given unit vectors u ∈ R^k and v ∈ R^m, we have (1/n) Σ_{i=1}^n |⟨⟨X_i, vu^T⟩⟩|^2 ≤ |||vu^T|||_F^2 = 1. Therefore, (1/n) Σ_{i=1}^n (u^T X_i v) W_i ~ N(0, 1/n). A standard argument shows that the supremum over all unit vectors u and v is bounded above by 8(√k + √m)/√n with probability at least 1 − c_1 exp(−c_2 (k + m)), verifying that λ_n ≥ 2 r*(∇L(Θ*)) with high probability.

² This claim involves some use of concentration of measure and Gaussian comparison inequalities analogous to arguments in Raskutti et al. [10]; see the full-length version for details.
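The M-estimator (14) can again be attacked by proximal gradient methods: the prox of the nuclear norm soft-thresholds the singular values, in direct analogy with the elementwise soft thresholding used for the Lasso. The NumPy sketch below shows this prox on a rank-2 matrix; it is an illustrative implementation of a standard operator, not code from the paper.

```python
import numpy as np

def prox_nuclear(Theta, t):
    """Prox of t * |||.|||_1 (nuclear norm): soft-threshold singular values."""
    U, s, Vt = np.linalg.svd(Theta, full_matrices=False)
    return U @ np.diag(np.maximum(s - t, 0.0)) @ Vt

# Build a rank-2 matrix with singular values exactly 3.0 and 0.4 from
# orthonormal factor pairs; thresholding at t = 0.5 removes the weak
# direction, leaving a rank-1 matrix with top singular value 2.5.
rng = np.random.default_rng(3)
u1, v1 = rng.standard_normal(4), rng.standard_normal(5)
u1, v1 = u1 / np.linalg.norm(u1), v1 / np.linalg.norm(v1)
u2 = rng.standard_normal(4); u2 -= (u2 @ u1) * u1; u2 /= np.linalg.norm(u2)
v2 = rng.standard_normal(5); v2 -= (v2 @ v1) * v1; v2 /= np.linalg.norm(v2)
Theta = 3.0 * np.outer(u1, v1) + 0.4 * np.outer(u2, v2)

out = prox_nuclear(Theta, t=0.5)
assert np.linalg.matrix_rank(out, tol=1e-8) == 1
assert abs(np.linalg.svd(out, compute_uv=False)[0] - 2.5) < 1e-8
```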
References
[1] P. Bickel, Y. Ritov, and A. Tsybakov. Simultaneous analysis of Lasso and Dantzig selector. Submitted to Annals of Statistics, 2008.
[2] E. Candes and T. Tao. The Dantzig selector: Statistical estimation when p is much larger than n. Annals of Statistics, 35(6):2313–2351, 2007.
[3] S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM J. Sci. Computing, 20(1):33–61, 1998.
[4] K. Lounici, M. Pontil, A. B. Tsybakov, and S. van de Geer. Taking advantage of sparsity in multi-task learning. Arxiv, 2009.
[5] L. Meier, S. van de Geer, and P. Bühlmann. The group lasso for logistic regression. Journal of the Royal Statistical Society, Series B, 70:53–71, 2008.
[6] N. Meinshausen and P. Bühlmann. High-dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436–1462, 2006.
[7] N. Meinshausen and B. Yu. Lasso-type recovery of sparse representations for high-dimensional data. Annals of Statistics, 37(1):246–270, 2009.
[8] G. Obozinski, M. J. Wainwright, and M. I. Jordan. Union support recovery in high-dimensional multivariate regression. Technical report, Department of Statistics, UC Berkeley, August 2008.
[9] S. Portnoy. Asymptotic behavior of M-estimators of p regression parameters when p²/n is large: I. Consistency. Annals of Statistics, 12(4):1296–1309, 1984.
[10] G. Raskutti, M. J. Wainwright, and B. Yu. Minimax rates of estimation for high-dimensional linear regression over ℓ_q-balls. Technical Report arXiv:0910.2042, UC Berkeley, Department of Statistics, 2009.
[11] P. Ravikumar, M. J. Wainwright, and J. Lafferty. High-dimensional Ising model selection using ℓ_1-regularized logistic regression. Annals of Statistics, 2008. To appear.
[12] P. Ravikumar, M. J. Wainwright, G. Raskutti, and B. Yu. High-dimensional covariance estimation by minimizing ℓ_1-penalized log-determinant divergence. Technical Report 767, Department of Statistics, UC Berkeley, September 2008.
[13] B. Recht, M. Fazel, and P. A. Parrilo. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. Allerton Conference, 2007.
[14] A. J. Rothman, P. J. Bickel, E. Levina, and J. Zhu. Sparse permutation invariant covariance estimation. Electron. J. Statist., 2:494–515, 2008.
[15] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58(1):267–288, 1996.
[16] J. Tropp. Just relax: Convex programming methods for identifying sparse signals in noise. IEEE Trans. Info Theory, 52(3):1030–1051, March 2006.
[17] B. Turlach, W. N. Venables, and S. J. Wright. Simultaneous variable selection. Technometrics, 27:349–363, 2005.
[18] S. van de Geer. High-dimensional generalized linear models and the lasso. Annals of Statistics, 36(2):614–645, 2008.
[19] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ_1-constrained quadratic programming (Lasso). IEEE Trans. Information Theory, 55:2183–2202, May 2009.
[20] M. Yuan and Y. Lin. Model selection and estimation in regression with grouped variables. Journal of the Royal Statistical Society B, 1(68):49, 2006.
[21] C. Zhang and J. Huang. Model selection consistency of the lasso selection in high-dimensional linear regression. Annals of Statistics, 36:1567–1594, 2008.
[22] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2567, 2006.
[23] P. Zhao, G. Rocha, and B. Yu. Grouped and hierarchical model selection through composite absolute penalties. Annals of Statistics, 37(6A):3468–3497, 2009.
3,052 | 3,766 | Region-based Segmentation and Object Detection
Stephen Gould1, Tianshi Gao1, Daphne Koller2
1 Department of Electrical Engineering, Stanford University
2 Department of Computer Science, Stanford University
{sgould,tianshig,koller}@cs.stanford.edu
Abstract
Object detection and multi-class image segmentation are two closely related tasks
that can be greatly improved when solved jointly by feeding information from
one task to the other [10, 11]. However, current state-of-the-art models use a
separate representation for each task making joint inference clumsy and leaving
the classification of many parts of the scene ambiguous.
In this work, we propose a hierarchical region-based approach to joint object
detection and image segmentation. Our approach simultaneously reasons about
pixels, regions and objects in a coherent probabilistic model. Pixel appearance
features allow us to perform well on classifying amorphous background classes,
while the explicit representation of regions facilitates the computation of more sophisticated features necessary for object detection. Importantly, our model gives
a single unified description of the scene: we explain every pixel in the image and
enforce global consistency between all random variables in our model.
We run experiments on the challenging Street Scene dataset [2] and show significant improvement over state-of-the-art results for object detection accuracy.
1 Introduction
Object detection is one of the great challenges of computer vision, having received continuous
attention since the birth of the field. The most common modern approaches scan the image for
candidate objects and score each one. This is typified by the sliding-window object detection approach [22, 20, 4], but is also true of most other detection schemes (such as centroid-based methods [13] or boundary edge methods [5]). The most successful approaches combine cues from
inside the object boundary (local features) with cues from outside the object (contextual cues),
e.g., [9, 20, 6]. Recent works are adopting a more holistic approach by combining the output of multiple vision tasks [10, 11] and are reminiscent of some of the earliest work in computer vision [1].
However, these recent works use a different representation for each subtask, forcing information
sharing to be done through awkward feature mappings. Another difficulty with these approaches
is that the subtask representations can be inconsistent. For example, a bounding-box based object
detector includes many pixels within each candidate detection window that are not part of the object itself. Furthermore, multiple overlapping candidate detections contain many pixels in common.
How these pixels should be treated is ambiguous in such approaches. A model that uniquely identifies each pixel is not only more elegant, but is also more likely to produce reliable results since it
encodes a bias of the true world (i.e., a visible pixel belongs to only one object).
In this work, we propose a more integrated region-based approach that combines multi-class image segmentation with object detection. Specifically, we propose a hierarchical model that reasons
simultaneously about pixels, regions and objects in the image, rather than scanning arbitrary windows. At the region level we label pixels as belonging to one of a number of background classes
(currently sky, tree, road, grass, water, building, mountain) or a single foreground class. The foreground class is then further classified, at the object level, into one of our known object classes
(currently car and pedestrian) or unknown.
Our model builds on the scene decomposition model of Gould et al. [7] which aims to decompose
an image into coherent regions by dynamically moving pixel between regions and evaluating these
moves relative to a global energy objective. These bottom-up pixel moves result in regions with coherent appearance. Unfortunately, complex objects such as people or cars are composed of several
dissimilar regions which will not be combined by this bottom-up approach. Our new hierarchical approach facilitates both bottom-up and top-down reasoning about the scene. For example, we
can propose an entire object comprised of multiple regions and evaluate this joint move against our
global objective. Thus, our hierarchical model enjoys the best of two worlds: Like multi-class image
segmentation, our model uniquely explains every pixel in the image and groups these into semantically coherent regions. Like object detection, our model uses sophisticated shape and appearance
features computed over candidate object locations with precise boundaries. Furthermore, our joint
model over regions and objects allows context to be encoded through direct semantic relationships
(e.g., "car" is usually found on "road").
2 Background and Related Work
Our method inherits features from the sliding-window object detector works, such as Torralba et al.
[19] and Dalal and Triggs [4], and the multi-class image segmentation work of Shotton et al. [16].
We further incorporate into our model many novel ideas for improving object detection via scene
context. The innovative works that inspire ours include predicting camera viewpoint for estimating the real world size of object candidates [12], relating "things" (objects) to nearby "stuff" (regions) [9], co-occurrence of object classes [15], and general scene "gist" [18].
Recent works go beyond simple appearance-based context and show that holistic scene understanding (both geometric [11] and more general [10]) can significantly improve performance by
combining related tasks. These works use the output of one task (e.g., object detection) to provide
features for other related tasks (e.g., depth perception). While they are appealing in their simplicity, current models are not tightly coupled and may result in incoherent outputs (e.g., the pixels in
a bounding box identified as "car" by the object detector may be labeled as "sky" by an image
segmentation task). In our method, all tasks use the same region-based representation which forces
consistency between variables. Intuitively this leads to more robust predictions.
The decomposition of a scene into regions to provide the basis for vision tasks exists in some
scene parsing works. Notably, Tu et al. [21] describe an approach for identifying regions in the
scene. Their approach has only be shown to be effective on text and faces, leaving much of the
image unexplained. Sudderth et al. [17] relate scenes, objects and parts in a single hierarchical
framework, but do not provide an exact segmentation of the image. Gould et al. [7] provides a complete description of the scene using dynamically evolving decompositions that explain every pixel
(both semantically and geometrically). However, the method cannot distinguish between
foreground objects and often leaves them segmented into multiple dissimilar pieces. Our work builds
on this approach with the aim of classifying objects.
Other works attempt to integrate tasks such as object detection and multi-class image segmentation into a single CRF model. However, these models either use a different representation for object
and non-object regions [23] or rely on a pixel-level representation [16]. The former does not enforce
label consistency between object bounding boxes and the underlying pixels while the latter does not
distinguish between adjacent objects of the same class.
Recent work by Gu et al. [8] also use regions for object detection instead of the traditional slidingwindow approach. However, unlike our method, they use a single over-segmentation of the image
and make the strong assumption that each segment represents a (probabilistically) recognizable object part. Our method, on the other hand, assembles objects (and background regions) using segments from multiple different over-segmentations. The multiple over-segmentations avoids errors
made by any one segmentation. Furthermore, we incorporate background regions which allows us to
eliminate large portions of the image thereby reducing the number of component regions that need
to be considered for each object.
Liu et al. [14] use a non-parametric approach to image labeling by warping a given image onto a
large set of labeled images and then combining the results. This is a very effective approach since it
scales easily to a large number of classes. However, the method does not attempt to understand the
scene semantics. In particular, their method is unable to break the scene into separate objects (e.g., a
row of cars will be parsed as a single region) and cannot capture combinations of classes not present
in the training set. As a result, the approach performs poorly on most foreground object classes.
3 Region-based Model for Object Detection
We now present an overview of our joint object detection and scene segmentation model. This model
combines scene structure and semantics in a coherent energy function.
3.1 Energy Function
Our model builds on the work of Gould et al. [7] which aims to decompose a scene into a number (K)
of semantically consistent regions. In that work, each pixel p in the image I belongs to exactly one
region, identified by its region-correspondence variable Rp ∈ {1, . . . , K}. The r-th region is then
simply the set of pixels Pr whose region-correspondence variable equals r, i.e., Pr = {p : Rp = r}.
In our notation we will always use p and q to denote pixels, r and s to denote regions, and o to denote
objects. Double indices indicate pairwise terms between adjacent entities (e.g., pq or rs).
Regions, while visually coherent, may not encompass entire objects. Indeed, in the work of Gould
et al. [7] foreground objects tended to be over-segmented into multiple regions. We address this deficiency by allowing an object to be composed of many regions (rather than trying to force dissimilar
regions to merge). The object to which a region belongs is denoted by its object-correspondence
variable Or ∈ {∅, 1, . . . , N}. Some regions, such as background, do not belong to any object,
which we denote by Or = ∅. Like regions, the set of pixels that comprise the o-th object is denoted
by $P_o = \bigcup_{r : O_r = o} P_r$. Currently, we do not allow a single region or object to be composed of
multiple disconnected components.
Random variables are associated with the various entities (pixels, regions and objects) in our
model. Each pixel has a local appearance feature vector φp ∈ R^n (see [7]). Each region has an
appearance variable Ar that summarizes the appearance of the region as a whole, a semantic class
label Sr (such as "road" or "foreground object"), and an object-correspondence variable Or. Each
object, in turn, has an associated object class label Co (such as "car" or "pedestrian"). The final
component in our model is the horizon which captures global geometry information. We assume
that the image was taken by a camera with horizontal axis parallel to the ground and model the
horizon v^hz ∈ [0, 1] as the normalized row in the image corresponding to its location. We quantize
v hz into the same number of rows as the image.
We combine the variables in our model into a single coherent energy function that captures the
structure and semantics of the scene. The energy function includes terms for modeling the location
of the horizon, region label preferences, region boundary quality, object labels, and contextual relationships between objects and regions. These terms are described in detail below. The combined
energy function E(R, S, O, C, v^hz | I, θ) has the form:

$$E = \psi^{hz}(v^{hz}) + \sum_r \psi^{reg}_r(S_r, v^{hz}) + \sum_{r,s} \psi^{bdry}_{rs} + \sum_o \psi^{obj}_o(C_o, v^{hz}) + \sum_{o,r} \psi^{ctxt}_{or}(C_o, S_r) \qquad (1)$$
where for notational clarity the subscripts on the factors indicate that they are functions of the pixels
(appearance and shape) belonging to the regions, i.e., ψ^reg_r is also a function of Pr, etc. It is assumed
that all terms are conditioned on the observed image I and model parameters θ. The summation
over context terms includes all ordered pairs of adjacent objects and regions, while the summation
over boundary terms is over unordered pairs of regions. An illustration of the variables in the energy
function is shown in Figure 1.
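The additive structure of Equation (1) can be sketched directly: the total energy is just the sum of the five kinds of terms. All names below are illustrative placeholders, assuming the individual term functions are supplied by the caller; this is a sketch, not the authors' implementation.

```python
def total_energy(v_hz, regions, region_pairs, objects, obj_region_pairs,
                 psi_hz, psi_reg, psi_bdry, psi_obj, psi_ctxt):
    # E = psi_hz(v_hz) + sum_r psi_reg + sum_{r,s} psi_bdry
    #     + sum_o psi_obj + sum_{o,r} psi_ctxt   (Equation 1)
    energy = psi_hz(v_hz)
    energy += sum(psi_reg(r, v_hz) for r in regions)
    energy += sum(psi_bdry(r, s) for (r, s) in region_pairs)
    energy += sum(psi_obj(o, v_hz) for o in objects)
    energy += sum(psi_ctxt(o, r) for (o, r) in obj_region_pairs)
    return energy
```

Because the energy is a sum over local terms, a move that touches only a few regions only requires recomputing those regions' terms, which the inference procedure exploits.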
The first three energy terms are adapted from the model of [7]. We briefly review them here:
Horizon term. The ψ^hz term captures the a priori location of the horizon in the scene and, in our
model, is implemented as a log-Gaussian $\psi^{hz}(v^{hz}) = -\log \mathcal{N}(v^{hz}; \mu, \sigma^2)$ with parameters μ and σ
learned from labeled training images.
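As a minimal sketch, the log-Gaussian horizon prior can be written out directly (assuming μ and σ have already been fit to the training horizons):

```python
import math

def psi_hz(v_hz, mu, sigma):
    # Negative log-density of N(mu, sigma^2): lowest energy when the
    # horizon sits at its mean training-set location.
    return 0.5 * math.log(2.0 * math.pi * sigma ** 2) \
        + (v_hz - mu) ** 2 / (2.0 * sigma ** 2)
```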
Knowing the location of the horizon allows us to compute the world height of an object in the
scene. Using the derivation from Hoiem et al. [12], it can be shown that the height y_k of an object
(or region) in the scene can be approximated as $y_k \approx h \, \frac{v_t - v_b}{v^{hz} - v_b}$, where h is the height of the camera
origin above the ground, and v_t and v_b are the rows of the top-most and bottom-most pixels in the
object/region, respectively. In our current work, we assume that all images were taken from the
same height above the ground, allowing us to use $\frac{v_t - v_b}{v^{hz} - v_b}$ as a feature in our region and object terms.
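A sketch of the height computation, assuming rows are normalized to [0, 1] and the object lies below the horizon; the function name and argument order are illustrative:

```python
def world_height(v_t, v_b, v_hz, camera_height):
    # y_k ~= h * (v_t - v_b) / (v_hz - v_b), with v_t and v_b the
    # normalized rows of the top-most and bottom-most object pixels.
    return camera_height * (v_t - v_b) / (v_hz - v_b)
```

Note that a taller span of image rows, relative to the horizon, yields a larger estimated world height, which is what makes this ratio a useful size cue.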
Region term. The region term ψ^reg in our energy function captures the preference for a region
to be assigned different semantic labels (currently sky, tree, road, grass, water, building, mountain,
foreground). For convenience we include the v hz variable in this term to provide rough geometry
information. If a region is associated with an object, then we constrain the assignment of its class
label to foreground (e.g., a "sky" region cannot be part of a "car" object).
Procedure SceneInference
  Generate over-segmentation dictionary Ω
  Initialize Rp using any of the over-segmentations
  Repeat until convergence
    Phase 1:
      Propose a pixel move {Rp : p ∈ ω} → r
      Update region and boundary features
      Run inference over regions S and v^hz
    Phase 2:
      Propose a pixel move {Rp} → r or region move {Or} → o
      Update region, boundary and object features
      Run inference over regions and objects (S, C) and v^hz
    Compute total energy E
    If (E < E_min) then
      Accept move and set E_min = E
    Else reject move
Figure 1: Illustration of the entities in our model (left) and inference algorithm (right). See text for details.
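The accept/reject loop in the procedure above can be sketched as a generic greedy hill-climb; the proposal and energy functions here are illustrative placeholders, not the authors' implementation:

```python
def hill_climb(initial_state, propose_moves, energy, max_iters=1000):
    # Greedy search: accept a proposed move only if it lowers the global
    # energy; otherwise keep the previous configuration.
    state = initial_state
    best_e = energy(state)
    for _ in range(max_iters):
        improved = False
        for move in propose_moves(state):
            candidate = move(state)
            e = energy(candidate)
            if e < best_e:
                state, best_e, improved = candidate, e, True
        if not improved:
            break  # no proposal lowers the energy: a local minimum
    return state, best_e
```

Because every proposal is scored against the single global objective, the algorithm never accepts a move that makes the overall decomposition worse, though it can stop in a local minimum, which is why large top-down proposal moves matter.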
More formally, let N_r be the number of pixels in region r, i.e., $N_r = \sum_p \mathbf{1}\{R_p = r\}$, and let
$\phi_r : (P_r, v^{hz}, I) \to \mathbb{R}^n$ denote the features for the r-th region. The region term is then

$$\psi^{reg}_r(S_r, v^{hz}) = \begin{cases} \infty & \text{if } O_r \neq \emptyset \text{ and } S_r \neq \text{foreground} \\ -\eta^{reg}\, N_r \log \sigma\!\left(S_r \mid \phi_r; \theta^{reg}\right) & \text{otherwise} \end{cases} \qquad (2)$$

where $\sigma(\cdot)$ is the multi-class logit $\sigma(y \mid x; \theta) = \frac{\exp\{\theta_y^T x\}}{\sum_{y'} \exp\{\theta_{y'}^T x\}}$ and η^reg is the relative weight of the
region term versus the other terms in the model.
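A minimal sketch of the multi-class logit and the region term of Equation (2); the dictionary-of-weights representation is a readability assumption, not the authors' parameterization:

```python
import math

def softmax_logit(x, thetas):
    # sigma(y | x; theta) = exp(theta_y^T x) / sum_y' exp(theta_y'^T x)
    scores = {y: sum(t * xi for t, xi in zip(th, x))
              for y, th in thetas.items()}
    m = max(scores.values())  # subtract the max for numerical stability
    exps = {y: math.exp(s - m) for y, s in scores.items()}
    z = sum(exps.values())
    return {y: e / z for y, e in exps.items()}

def psi_reg(s_r, x_r, n_r, thetas, eta_reg, attached_to_object=False):
    # Equation (2): a non-foreground label on an object region is
    # forbidden (infinite energy); otherwise a size-weighted negative
    # log-likelihood under the multi-class logit.
    if attached_to_object and s_r != "foreground":
        return float("inf")
    return -eta_reg * n_r * math.log(softmax_logit(x_r, thetas)[s_r])
```

Weighting by N_r makes large regions contribute proportionally more energy, so mislabeling a large region is costlier than mislabeling a small one.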
Boundary term. The term ψ^bdry penalizes two adjacent regions with similar appearance or lack
of boundary contrast. This helps to merge coherent pixels into a single region. We combine two
metrics in this term: the first captures region similarity as a whole, the second captures contrast along
the common boundary between the regions. Specifically, let $d(x, y; S) = \sqrt{(x - y)^T S^{-1} (x - y)}$
denote the Mahalanobis distance between vectors x and y, and let E_rs be the set of pixels along the
boundary. Then the boundary term is

$$\psi^{bdry}_{rs} = \eta^{bdry}_A \cdot |E_{rs}| \cdot e^{-\frac{1}{2} d(A_r, A_s; \Sigma_A)^2} + \eta^{bdry}_\phi \sum_{(p,q) \in E_{rs}} e^{-\frac{1}{2} d(\phi_p, \phi_q; \Sigma_\phi)^2} \qquad (3)$$
where Σ_A and Σ_φ are the image-specific pixel appearance covariance matrices computed over all
pixels and neighboring pixels, respectively. In our experiments we restrict Σ_A to be diagonal and set
$\Sigma_\phi = \beta I$ with $\beta = \mathbb{E}\left[\|\phi_p - \phi_q\|^2\right]$ as in Shotton et al. [16]. The parameters $\eta^{bdry}_A$ and $\eta^{bdry}_\phi$ encode
the trade-off between the region similarity and boundary contrast terms and weight them against the
other terms in the energy function (Equation 1).
Note that the boundary term does not include semantic class or object information. The term
purely captures segmentation coherence in terms of appearance.
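The boundary term of Equation (3) can be sketched as follows, assuming a diagonal Σ (as the experiments restrict it) represented by its inverse diagonal; all names are illustrative:

```python
import math

def mahalanobis(x, y, s_inv_diag):
    # d(x, y; S) for a diagonal S, passed as the diagonal of S^-1.
    return math.sqrt(sum((xi - yi) ** 2 * s
                         for xi, yi, s in zip(x, y, s_inv_diag)))

def psi_bdry(a_r, a_s, boundary_pairs, sigma_a_inv, sigma_phi_inv,
             eta_a, eta_phi):
    # Equation (3): similar whole-region appearance and low boundary
    # contrast both add energy, making it cheaper to merge the regions.
    n_edges = len(boundary_pairs)
    region_sim = eta_a * n_edges * math.exp(
        -0.5 * mahalanobis(a_r, a_s, sigma_a_inv) ** 2)
    contrast = eta_phi * sum(
        math.exp(-0.5 * mahalanobis(p, q, sigma_phi_inv) ** 2)
        for p, q in boundary_pairs)
    return region_sim + contrast
```

Two adjacent regions with nearly identical appearance therefore pay a high price for staying separate, which is exactly the pressure that drives merges.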
Object term. Going beyond the model in [7], we include object terms ψ^obj in our energy function
that score the likelihood of a group of regions being assigned a given object label. We currently
classify objects as either car, pedestrian or unknown. The unknown class includes objects like trash
cans, street signs, telegraph poles, traffic cones, bicycles, etc. Like the region term, the object term
is defined by a logistic function that maps object features $\phi_o : (P_o, v^{hz}, I) \to \mathbb{R}^n$ to a probability for
each object class. However, since our region layer already identifies foreground regions, we would
like our energy to improve only when we recognize known object classes. We therefore bias the
object term to give zero contribution to the energy for the class unknown.1 Formally we have

$$\psi^{obj}_o(C_o, v^{hz}) = -\eta^{obj} N_o \left( \log \sigma\!\left(C_o \mid \phi_o; \theta^{obj}\right) - \log \sigma\!\left(\text{unknown} \mid \phi_o; \theta^{obj}\right) \right) \qquad (4)$$

where N_o is the number of pixels belonging to the object.
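A sketch of the object term of Equation (4); by construction, scoring the class unknown contributes exactly zero energy. The helper and weight names are illustrative assumptions:

```python
import math

def _logit(x, thetas):
    # Multi-class logit over a raw feature vector x.
    scores = {y: sum(t * xi for t, xi in zip(th, x))
              for y, th in thetas.items()}
    m = max(scores.values())
    exps = {y: math.exp(s - m) for y, s in scores.items()}
    z = sum(exps.values())
    return {y: e / z for y, e in exps.items()}

def psi_obj(c_o, x_o, n_o, thetas, eta_obj):
    # Equation (4): energy measured relative to the "unknown" baseline,
    # so psi_obj("unknown", ...) is exactly zero.
    probs = _logit(x_o, thetas)
    return -eta_obj * n_o * (math.log(probs[c_o])
                             - math.log(probs["unknown"]))
```

When a known class is more probable than unknown, the term is negative, so recognizing a car or pedestrian lowers the total energy while leaving unrecognized clutter untouched.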
Context term. Intuitively, contextual information which relates objects to their local background
can improve object detection. For example, Heitz and Koller [9] showed that detection rates improve by relating "things" (objects) to "stuff" (background). Our model has a very natural way of
1 This results in the technical condition of allowing Or to take the value ∅ for unknown foreground regions
without affecting the energy.
encoding such relationships through pairwise energy terms between objects Co and regions Sr . We
do not encode contextual relationships between region classes (i.e., Sr and Ss ) since these rarely
help.2 Contextual relationships between foreground objects (i.e., Co and Cm ) may be beneficial
(e.g., people found on bicycles), but are not considered in this work. Formally, the context term is
$$\psi^{ctxt}_{or}(C_o, S_r) = -\eta^{ctxt} \log \sigma\!\left(C_o \times S_r \mid \phi_{or}; \theta^{ctxt}\right) \qquad (5)$$

where $\phi_{or} : (P_o, P_r, I) \to \mathbb{R}^n$ is a pairwise feature vector for object o and region r, σ(·) is the
multi-class logit, and η^ctxt weights the strength of the context term relative to the other terms in the
energy function. Since the pairwise context term is between objects and (background) regions it
grows linearly with the number of object classes. This has a distinct advantage over approaches
which include a pairwise term between all classes resulting in quadratic growth.
3.2 Object Detectors
Performing well at object detection requires more than simple region appearance features. Indeed,
the power of state-of-the-art object detectors is their ability to model localized appearance and general shape characteristics of an object class. Thus, in addition to raw appearance features, we append
to our object feature vector ?o features derived from such object detection models. We discuss two
methods for adapting state-of-the-art object detector technologies for this purpose.
In the first approach, we treat the object detector as a black-box that returns a score per (rectangular) candidate window. However, recall that an object in our model is defined by a contiguous
set of pixels Po , not a rectangular window. In the black-box approach, we naively place a bounding
box (at the correct aspect ratio) around these pixels and classify the entire contents of the box. To
make classification more robust we search candidate windows in a small neighborhood (defined over
scale and position) around this bounding box, and take as our feature the output of highest scoring
window. In our experiments we test this approach using the HOG detector of Dalal and Triggs [4]
which learns a linear SVM classifier over feature vectors constructed by computing histograms of
gradient orientations in fixed-size overlapping cells within the candidate window.
Note that in the above black-box approach many of the pixels within the bounding box are not
actually part of the object (consider, for example, an L-shaped region). A better approach is to mask
out all pixels not belonging to the object. In our implementation, we use a soft mask that attenuates
the intensity of pixels outside the object based on their distance to the object boundary (see Figure 2).
This has the dual advantage of preventing hard edge artifacts and being less sensitive to segmentation
errors. The masked window is used at both training and test time. In our experiments we test this
more integrated approach using the patch-based features of Torralba et al. [19, 20]. Here features
are extracted by matching small rectangular patches at various locations within the masked window
and combining these weak responses using boosting. Object appearance and shape are captured by
operating on both the original (intensity) image and the edge-filtered image.
For both approaches, we append the score (for each object) from the object detection classifiers (linear SVM or boosted decision trees) to the object feature vector φ_o.
Figure 2: Illustration of soft mask for proposed object regions: (a) full window, (b) hard region mask, (c) hard window, (d) soft region mask, (e) soft window.
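The soft mask can be sketched with a simple distance-based attenuation. The Chebyshev distance and the falloff parameter are illustrative choices for this sketch; the paper does not specify the exact attenuation function:

```python
def soft_mask(image, region, falloff=2.0):
    # Attenuate pixels outside the candidate object region by their
    # Chebyshev distance to the nearest region pixel. 'image' maps
    # (row, col) -> intensity; 'region' is a set of (row, col) pairs.
    masked = {}
    for (r, c), v in image.items():
        if (r, c) in region:
            masked[(r, c)] = v  # object pixels pass through unchanged
        else:
            d = min(max(abs(r - rr), abs(c - cc)) for rr, cc in region)
            masked[(r, c)] = v / (1.0 + d / falloff)
    return masked
```

A gradual falloff avoids hard edge artifacts at the region boundary and makes the detector features less sensitive to small segmentation errors, which is the motivation given above.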
An important parameter for sliding-window detectors is the base scale at which features are extracted. Scale-invariance is achieved by successively down-sampling the image. Below the base scale, feature matching becomes inaccurate, so most detectors will only find objects above some
minimum size. Clearly there exists a trade-off between the desire to detect small objects, feature
quality, and computational cost. To reduce the computational burden of running our model on
high-resolution images while still being able to identify small objects, we employ a multi-scale approach. Here we run our scene decomposition algorithm on a low-resolution (320 × 240) version
of the scene, but extract features from the original high-resolution version. That is, when we extract
object-detector features we map the object pixels Po onto the original image and extract our features
at the higher resolution.
2 The most informative region-to-region relationship is that sky tends to be above ground (road, grass, or
water). This information is already captured by including the horizon in our region term.
4 Inference and Learning
We now describe how we perform inference and learn the parameters of our energy function.
4.1 Inference
We use a modified version of the hill-climbing inference algorithm described in Gould et al. [7],
which uses multiple over-segmentations to propose large moves in the energy space. An overview
of this procedure is shown in the right of Figure 1. We initialize the scene by segmenting the
image using an off-the-shelf unsupervised segmentation algorithm (in our experiments we use mean-shift [3]). We then run inference using a two-phase approach.
In the first phase, we want to build up a good set of initial regions before trying to classify them as
objects. Thus we remove the object variables O and C from the model and artificially increase the
boundary term weights ($\eta^{bdry}_\phi$ and $\eta^{bdry}_A$) to promote merging. In this phase, the algorithm behaves
exactly as in [7] by iteratively proposing re-assignments of pixels to regions (variables R) and recomputing the optimal assignment to the remaining variables (S and v^hz). If the overall energy for the
new configuration is lower, the move is accepted, otherwise the previous configuration is restored
and the algorithm proposes a different move. The algorithm proceeds until no further reduction in
energy can be found after exhausting all proposal moves from a pre-defined set (see Section 4.2).
In the second phase, we anneal the boundary term weights and introduce object variables over
all foreground regions. We then iteratively propose merges and splits of objects (variables O) as
well as high-level proposals (see Section 4.2 below) of new regions generated from sliding-window
object candidates (affecting both R and O). After a move is proposed, we recompute the optimal
assignment to the remaining variables (S, C and v hz ). Again, this process repeats until the energy
cannot be reduced by any of the proposal moves.
Since only part of the scene is changing during any iteration we only need to recompute the
features and energy terms for the regions affected by a move. However, inference is still slow given
the sophisticated features that need to be computed and the large number of moves considered.
To improve running time, we leave the context terms ψ^ctxt out of the model until the last iteration
through the proposal moves. This allows us to maximize each region term independently during
each proposal step; we use an iterated conditional modes (ICM) update to optimize v^hz after the
region labels have been inferred. After introducing the context term, we use max-product belief
propagation to infer the optimal joint assignment to S and C. Using this approach we can process
an image in under five minutes.
4.2 Proposal Moves
We now describe the set of pixel and region proposal moves considered by our algorithm. These
moves are relative to the current best scene decomposition and are designed to take large steps in
the energy space to avoid local minima. As discussed above, each move is accepted if it results in a
lower overall energy after inferring the optimal assignment for the remaining variables.
The main set of pixel moves are described in [7] but briefly repeated here for completeness.
The most basic move is to merge two adjacent regions. More sophisticated moves involve local
re-assignment of pixels to neighboring regions. These moves are proposed from a pre-computed
dictionary of image segments Ω. The dictionary is generated by varying the parameters of an unsupervised over-segmentation algorithm (in our case mean-shift [3]) and adding each segment ω to
the dictionary. During inference, these segments are used to propose a re-assignment of all pixels
in the segment to a neighboring region or the creation of a new region. These bottom-up proposal moves
work well for background classes, but tend to result in over-segmented foreground classes which
have heterogeneous appearance, for example, one would not expect the wheels and body of a car to
be grouped together by a bottom-up approach.
An analogous set of moves can be used for merging two adjacent objects or assigning regions
to objects. However, if an object is decomposed into multiple regions, this bottom-up approach is
problematic as multiple such moves may be required to produce a complete object. When performed
independently, these moves are unlikely to improve the energy. We get around this difficulty by
introducing a new set of powerful top-down proposal moves based on object detection candidates.
Here we use pre-computed candidates from a sliding-window detector to propose new foreground
regions with corresponding object variable. Instead of proposing the entire bounding-box from the
detector, we propose the set of intersecting segments (from our segmentation dictionary Ω) that are
fully contained within the bounding-box in a single move.
Experiment               Cars   Ped.
Patch baseline           0.40   0.15
HOG baseline             0.35   0.37
Patch RB (w/o cntxt)     0.55   0.22
Patch RB (full model)    0.56   0.21
HOG RB (w/o cntxt)       0.58   0.35
HOG RB (full model)      0.57   0.35
Figure 3: PR curves for car (left) and pedestrian (right) detection on the Street Scene dataset [2]. The table
shows 11-pt average precision for variants of the baseline sliding-window and our region-based (RB) approach.
4.3 Learning
We learn the parameters of our model from labeled training data in a piecewise fashion. First, the
individual terms are learned using the maximum-likelihood objective for the subset of variables
within each term. The relative weights (η^reg, η^obj, etc.) between the terms are learned through cross-validation on a subset of the training data. Boosted pixel appearance features (see [7]) and object
detectors are learned separately and their output provided as input features to the combined model.
For both the base object detectors and the parameters of the region and object terms, we use a
closed-loop learning technique where we first learn an initial set of parameters from training data.
We then run inference on our training set and record mistakes made by the algorithm (false-positives
for object detection and incorrect moves for the full algorithm). We augment the training data with
these mistakes and re-train. This process gives a significant improvement to the final results.
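The closed-loop scheme can be sketched as a generic mine-and-retrain loop; train_fn and predict_fn are placeholders for any trainable classifier, not the authors' detectors:

```python
def closed_loop_train(train_fn, predict_fn, data, labels, rounds=2):
    # Train, find training-set mistakes, append them, and retrain;
    # duplicating hard examples up-weights them in the next fit.
    data, labels = list(data), list(labels)
    model = train_fn(data, labels)
    for _ in range(rounds):
        mistakes = [(x, y) for x, y in zip(data, labels)
                    if predict_fn(model, x) != y]
        if not mistakes:
            break
        xs, ys = zip(*mistakes)
        data += list(xs)
        labels += list(ys)
        model = train_fn(data, labels)
    return model
```

For the object detectors this corresponds to mining false positives as hard negatives; for the full algorithm, recorded incorrect moves play the same role.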
5 Experiments
We conduct experiments on the challenging Street Scene dataset [2]. This is a dataset consisting of
3547 high-resolution images of urban environments. We rescaled the images to 320 × 240 before
running our algorithm. The dataset comes with hand-annotated region labels and object boundaries.
However, the annotations use rough overlapping polygons, so we used Amazon's Mechanical Turk
to improve the labeling of the background classes only. We kept the original object polygons to be
consistent with other results on this dataset.
We divided the dataset into five folds: the first fold (710 images) was used for testing and the
remaining four used for training. The multi-class image segmentation component of our model
achieves an overall pixel-level accuracy of 84.2% across the eight semantic classes compared to
83.0% for the pixel-based baseline method described in [7]. More interesting was our object detection performance. The test set contained 1183 cars and 293 pedestrians with average size of 86 × 48
and 22 × 49 pixels, respectively. Many objects are occluded making this a very difficult dataset.
Since our algorithm produces MAP estimation for the scene we cannot simply generate a
precision-recall curve by varying the object classifier threshold as is usual for reporting object detection results. Instead we take the max-marginals for each C_o variable at convergence of our algorithm
and sweep over thresholds for each object separately to generate a curve. An attractive aspect of this
approach is that our method does not have overlapping candidates and hence does not require arbitrary post-processing such as non-maximal suppression of sliding-window detections.
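Sweeping a threshold over the per-object max-marginal scores to trace the curve can be sketched as follows (a standard threshold-sweep, not the authors' evaluation code):

```python
def precision_recall(scores, labels):
    # One score per detected object (the max-marginal of its class
    # variable); labels are 1 for true detections, 0 otherwise.
    order = sorted(zip(scores, labels), reverse=True)
    total_pos = sum(labels)
    pts, tp, fp = [], 0, 0
    for _, y in order:
        tp += y
        fp += 1 - y
        pts.append((tp / (tp + fp), tp / total_pos))
    return pts
```

Because each region yields at most one candidate, the sweep needs no non-maximum suppression; it simply orders the surviving detections by confidence.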
Our results are shown in Figure 3. We also include a comparison to two baseline sliding-window
approaches. Our method significantly improves over the baselines for car detection. For pedestrian
detection, our method shows comparable performance to the HOG baseline which has been specifically engineered for this task. Notice that our method does not achieve 100% recall (even at low
precision) due to the curves being generated from the MAP assignment in which pixels have already
been grouped into regions. Unlike the baselines, this forces only one candidate object per region.
However, by trading-off the strength (and hence operating point) of the energy terms in our model
we can increase the maximum recall for a given object class (e.g., by increasing the weight of the
object term by a factor of 30 we were able to increase pedestrian recall from 0.556 to 0.673).
Removing the pairwise context term does not have a significant effect on our results. This is
due to the encoding of semantic context through the region term and the fact that all images were
of urban scenes. However, we believe that on a dataset with more varied backgrounds (e.g., rural
scenes) context would play a more important role.
We show some example output from our algorithm in Figure 4. The first row shows the original
image (left) together with annotated regions and objects (middle-left), regions (middle-right) and
predicted horizon (right). Notice how multiple regions get grouped together into a single object.
The remaining rows show a selection of results (image and annotated output) from our method.
Figure 4: Qualitative results from our experiments. Top row shows original image, annotated regions and
objects, region boundaries, and predicted horizon. Other examples show original image (left) and overlay
colored by semantic class and detected objects (right).
6 Discussion
In this paper we have presented a hierarchical model for joint object detection and image segmentation. Our novel approach overcomes many of the problems associated with trying to combine related
vision tasks. Importantly, our method explains every pixel in the image and enforces consistency between random variables from different tasks. Furthermore, our model is encapsulated in a modular
energy function which can be easily analyzed and improved as new computer vision technologies
become available.
One of the difficulties in our model is learning the trade-off between energy terms: too strong a
boundary penalty and all regions will be merged together, while too weak a penalty and the scene
will be split into too many segments. We found that a closed-loop learning regime where mistakes
from running inference on the training set are used to increase the diversity of training examples
made a big difference to performance.
Our work suggests a number of interesting directions for future work. First, our greedy inference
procedure can be replaced with a more sophisticated approach that makes more global steps. More
importantly, our region-based model has the potential for providing holistic unified understanding
of an entire scene. This has the benefit of eliminating many of the implausible hypotheses that
plague current computer vision algorithms. Furthermore, by clearly delineating what is recognized,
our framework directly presents hypotheses for objects that are currently unknown, providing the
potential for increasing our library of characterized objects using a combination of supervised and
unsupervised techniques.
Acknowledgments. This work was supported by the NSF under grant IIS 0917151, MURI contract
N000140710747, and The Boeing Company. We thank Pawan Kumar and Ben Packer for helpful discussions.
References
[1] H.G. Barrow and J.M. Tenenbaum. Computational vision. IEEE, 1981.
[2] S. Bileschi and L. Wolf. A unified system for object detection, texture recognition, and context analysis
based on the standard model feature set. In BMVC, 2005.
[3] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. PAMI, 2002.
[4] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[5] V. Ferrari, L. Fevrier, F. Jurie, and C. Schmid. Groups of adjacent contour segments for object detection.
PAMI, 2008.
[6] M. Fink and P. Perona. Mutual boosting for contextual inference. In NIPS, 2003.
[7] S. Gould, R. Fulton, and D. Koller. Decomposing a scene into geometric and semantically
consistent regions. In ICCV, 2009.
[8] C. Gu, J. J. Lim, P. Arbelaez, and J. Malik. Recognition using regions. In CVPR, 2009.
[9] G. Heitz and D. Koller. Learning spatial context: Using stuff to find things. In ECCV, 2008.
[10] G. Heitz, S. Gould, A. Saxena, and D. Koller. Cascaded classification models: Combining models for
holistic scene understanding. In NIPS, 2008.
[11] D. Hoiem, A. A. Efros, and M. Hebert. Closing the loop on scene interpretation. CVPR, 2008.
[12] D. Hoiem, A. A. Efros, and M. Hebert. Putting objects in perspective. IJCV, 2008.
[13] B. Leibe, A. Leonardis, and B. Schiele. Combined object categorization and segmentation with an implicit
shape model. In ECCV, 2004.
[14] C. Liu, J. Yuen, and A. Torralba. Nonparametric scene parsing: Label transfer via dense scene alignment.
In CVPR, 2009.
[15] A. Rabinovich, A. Vedaldi, C. Galleguillos, E. Wiewiora, and S. Belongie. Objects in context. In ICCV,
2007.
[16] J. Shotton, J. Winn, C. Rother, and A. Criminisi. TextonBoost: Joint appearance, shape and context
modeling for multi-class object recognition and segmentation. In ECCV, 2006.
[17] E. Sudderth, A. Torralba, W. Freeman, and A. Willsky. Describing visual scenes using transformed objects
and parts. In IJCV, 2007.
[18] A. Torralba, K. P. Murphy, W. T. Freeman, and M. A. Rubin. Context-based vision system for place and
object recognition. In ICCV, 2003.
[19] A. Torralba, K. Murphy, and W. Freeman. Sharing features: efficient boosting procedures for multiclass
object detection. In CVPR, 2004.
[20] A. Torralba, K. Murphy, and W. Freeman. Contextual models for object detection using boosted random
fields. In NIPS, 2004.
[21] Z. Tu, X. Chen, A. L. Yuille, and S.-C. Zhu. Image parsing: Unifying segmentation, detection, and
recognition. In ICCV, 2003.
[22] P. Viola and M. J. Jones. Robust real-time face detection. IJCV, 2004.
[23] C. Wojek and B. Schiele. A dynamic conditional random field model for joint labeling of object and scene
classes. In ECCV, 2008.
A Generalized Natural Actor-Critic Algorithm
Tetsuro Morimura†, Eiji Uchibe‡, Junichiro Yoshimoto‡, Kenji Doya‡
†: IBM Research - Tokyo, Kanagawa, Japan
‡: Okinawa Institute of Science and Technology, Okinawa, Japan
[email protected], {uchibe,jun-y,doya}@oist.jp
Abstract
Policy gradient Reinforcement Learning (RL) algorithms have received substantial attention, seeking stochastic policies that maximize the average (or discounted
cumulative) reward. In addition, extensions based on the concept of the Natural
Gradient (NG) show promising learning efficiency because these regard metrics
for the task. Though there are two candidate metrics, Kakade's Fisher Information Matrix (FIM) for the policy (action) distribution and Morimura's FIM for the state-action joint distribution, all RL algorithms with NG have followed Kakade's approach. In this paper, we describe a generalized Natural Gradient (gNG) that
linearly interpolates the two FIMs and propose an efficient implementation for the
gNG learning based on a theory of the estimating function, the generalized Natural Actor-Critic (gNAC) algorithm. The gNAC algorithm involves a near optimal
auxiliary function to reduce the variance of the gNG estimates. Interestingly, the
gNAC can be regarded as a natural extension of the current state-of-the-art NAC
algorithm [1], as long as the interpolating parameter is appropriately selected. Numerical experiments showed that the proposed gNAC algorithm can estimate gNG
efficiently and outperformed the NAC algorithm.
1 Introduction
Policy Gradient Reinforcement Learning (PGRL) attempts to find a policy that maximizes the average (or time-discounted) reward, based on gradient ascent in the policy parameter space [2, 3, 4].
Since it is possible to handle the parameters controlling the randomness of the policy, the PGRL,
rather than the value-based RL, can find the appropriate stochastic policy and has succeeded in several practical applications [5, 6, 7]. However, depending on the tasks, PGRL methods often require
an excessively large number of learning time-steps to construct a good stochastic policy, due to the
learning plateau where the optimization process falls into a stagnant state, as was observed for a
very simple Markov Decision Process (MDP) with only two states [8]. In this paper, we propose a
new PGRL algorithm, a generalized Natural Actor-Critic (gNAC) algorithm, based on the natural
gradient [9].
Because "natural gradient" learning is the steepest gradient method in a Riemannian space and the
direction of the natural gradient is defined on that metric, it is an important issue how to design
the Riemannian metric. In the framework of PGRL, the stochastic policies are represented as parametric probability distributions. Thus the Fisher Information Matrices (FIMs) with respect to the
policy parameter induce appropriate Riemannian metrics. Kakade [8] used an average FIM for the
policy over the states and proposed a natural policy gradient (NPG) learning. Kakade's FIM has
been widely adopted and various algorithms for the NPG learning have been developed by many
researchers [1, 10, 11]. These are based on the actor-critic framework, called the natural actor-critic
(NAC) [1]. Recently, the concept of "Natural State-action Gradient" (NSG) learning has been proposed in [12], which shows potential to reduce the learning time spent by being better at avoiding
the learning plateaus than the NPG. This natural gradient is on the FIM of the state-action joint
distribution as the Riemannian metric for RL, which is directly associated with the average rewards
as the objective function. Morimura et al. [12] showed that the metric of the NSG corresponds with
the changes in the stationary state-action joint distribution. In contrast, the metric of the NPG takes
into account only changes in the action distribution and ignores changes in the state distribution,
which also depends on the policy in general. They also showed experimental results with exact gradients where the NSG learning outperformed NPG learning, especially with large numbers of states
in the MDP. However, no algorithm for estimating the NSG has been proposed, probably because
the estimation for the derivative of log stationary state distribution was difficult [13]. Therefore, the
development of a tractable algorithm for NSG would be of great importance, and this is the one of
the primary goals of this paper.
Meanwhile, it would be very difficult to select an appropriate FIM because it would be dependent on
the given task. Accordingly, we created a linear interpolation of both of the FIMs as a generalized
Natural Gradient (gNG) and derived an efficient approach to estimate the gNG by applying the
theory of the estimating function for stochastic models [14] in Section 3. In Section 4, we derive
a gNAC algorithm with an instrumental variable, where a policy parameter is updated by a gNG
estimate that is a solution of the estimating function derived in Section 3, and show that the gNAC
can be regarded as a natural extension of the current state-of-the-art NAC algorithm [1]. To validate
the performance of the proposed algorithm, numerical experiments are shown in Section 5, where
the proposed algorithm can estimate the gNG efficiently and outperformed the NAC algorithm [1].
2 Background of Policy Gradient and Natural Gradient for RL
We briefly review the policy gradient and natural gradient learning as gradient ascent methods for
RL and also present the motivation of the gNAC approach.
2.1 Policy Gradient Reinforcement Learning
PGRL is modeled on a discrete-time Markov Decision Process (MDP) [15, 16]. It is defined by the
quintuplet (S, A, p, r, ?), where S 3 s and A 3 a are finite sets of states and actions, respectively.
Also, p : S ? A ? S ? [0, 1] is a state transition probability function of a state s, an action a,
and the following state s+1 , i.e.1 , p(s+1 |s, a) , Pr(s+1 |s, a). R : S ? A ? S ? R is a bounded
reward function of s, a, and s+1 , which defines an immediate reward r = R(s, a, s+1 ) observed by
a learning agent at each time step. The action probability function ? : A ? S ? R d ? [0, 1] uses a,
s, and a policy parameter ? ? Rd to define the decision-making rule of the learning agent, which is
also called a policy, i.e., ?(a|s; ?) , Pr(a|s, ?). The policy is normally parameterized by users and
is controlled by tuning ?. Here, we make two assumptions in the MDP.
Assumption 1 The policy is always differentiable with respect to ? and is non-redundant for the
task, i.e., the statistics F a (?) ? Rd?d (defined in Section 2.2) are always bounded and non-singular.
Assumption 2 The Markov chain M(?) , {S, A, p, ?, ?} is always ergodic (irreducible and aperiodic).
Under Assumption 2, there exists a unique stationary state distribution d^π(s) ≜ Pr(s|M(θ)), which is equal to the limiting distribution and independent of the initial state, d^π(s′) = lim_{t→∞} Pr(S+t = s′ | S = s, M(θ)), ∀s ∈ S. This distribution satisfies the balance equation:
d^π(s+1) = Σ_{s∈S} Σ_{a∈A} p(s+1|s, a) π(a|s;θ) d^π(s).
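The balance equation can be checked numerically on a toy chain. The following is a minimal sketch (not from the paper): it computes the stationary state distribution of a small ergodic Markov chain by repeatedly applying the transition operator, which converges under ergodicity (Assumption 2).

```python
def stationary_distribution(P, iters=1000):
    """P[i][j] = Pr(next state = j | current state = i) for an ergodic chain.

    Starting from the uniform distribution, iterate d <- d P until the
    balance equation d = d P holds (up to numerical precision).
    """
    n = len(P)
    d = [1.0 / n] * n
    for _ in range(iters):
        d = [sum(d[i] * P[i][j] for i in range(n)) for j in range(n)]
    return d
```

For a policy-dependent chain, P would be the state transition matrix induced by p(s+1|s, a) and π(a|s;θ).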
The goal of PGRL is to find the policy parameter θ* that maximizes the average of the immediate rewards, the average reward,
η(θ) ≜ E_θ[r] = Σ_{s∈S} Σ_{a∈A} Σ_{s+1∈S} d^π(s) π(a|s;θ) p(s+1|s, a) R(s, a, s+1),   (1)
where E_θ[a] denotes the expectation of a on the Markov chain M(θ). The derivative of the average reward (1) with respect to the policy parameter, ∇_θ η(θ) ≜ [∂η(θ)/∂θ_1, ..., ∂η(θ)/∂θ_d]ᵀ, which is referred to as the Policy Gradient (PG), is
∇_θ η(θ) = E_θ[ r ∇_θ ln{d^π(s) π(a|s;θ)} ].
¹ Although to be precise it should be Pr(S+1 = s+1 | S = s, A = a) for the random variables S+1, S, and A, we write Pr(s+1|s, a) for simplicity. The same simplification is applied to the other distributions.
Therefore, the average reward η(θ) will be increased by updating the policy parameter as θ := θ + α∇_θ η(θ), where := denotes the right-to-left substitution and α is a sufficiently small learning rate. This framework is called the PGRL [4].
It is noted that the ordinary PGRL methods omit the differences in sensitivities and the correlations between the elements of θ, as defined by the probability distributions of the MDP, while most
probability distributions expressed in the MDP have some form of a manifold structure instead of
a Euclidean structure. Accordingly, the updating direction of the policy parameter by the ordinary
gradient method will be different from the steepest directions on these manifolds. Therefore, the
optimization process sometimes falls into a stagnant state, commonly called a plateau [8, 12].
2.2 Natural Gradients for PGRL
To avoid the plateau problem, the concept of the natural gradient was proposed by Amari [9], which
is a gradient method on a Riemannian space. The parameter space being a Riemannian space implies
that the parameter θ ∈ ℝ^d is on the manifold with the Riemannian metric G(θ) ∈ ℝ^{d×d} (a semi-positive definite matrix), instead of being on a Euclidean manifold of an arbitrarily parameterized policy, and the squared length of a small incremental vector Δθ connecting θ to θ + Δθ is given by ‖Δθ‖²_G = Δθᵀ G(θ) Δθ, where ᵀ denotes the transpose. Under the constraint ‖Δθ‖²_G = ε² for a sufficiently small constant ε, the steepest ascent direction of the function η(θ) on the manifold G(θ) is given by
∇̃_{G(θ)} η(θ) = G(θ)⁻¹ ∇_θ η(θ),
which is called the natural gradient (NG). Accordingly, to (locally) maximize η(θ), θ is incrementally updated with
θ := θ + α ∇̃_{G(θ)} η(θ).
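As a concrete illustration, the natural gradient can be obtained by solving the linear system G(θ) x = ∇θ η(θ) rather than forming the inverse explicitly. A minimal pure-Python sketch with a toy metric (not the paper's implementation):

```python
def natural_gradient(G, grad):
    """Solve G x = grad by Gaussian elimination with partial pivoting.

    The solution is the steepest ascent direction under the Riemannian
    metric G, i.e. Amari's natural gradient for that metric.
    """
    n = len(G)
    A = [row[:] + [g] for row, g in zip(G, grad)]  # augmented matrix [G | grad]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[pivot] = A[pivot], A[col]
        p = A[col][col]
        A[col] = [v / p for v in A[col]]
        for r in range(n):
            if r != col and A[r][col] != 0.0:
                f = A[r][col]
                A[r] = [v - f * w for v, w in zip(A[r], A[col])]
    return [A[r][n] for r in range(n)]
```

With G the identity, this reduces to the ordinary (vanilla) gradient, so the metric is exactly what distinguishes NG from PG.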
The direction of the NG is defined using a Riemannian metric. Thus, an appropriate choice of the Riemannian metric for the task is required. With RL, two kinds of Fisher Information Matrices (FIMs) F(θ) have been proposed as the Riemannian metric matrices G(θ):²
(I) Kakade [8] focuses only on the changes in the policy (action) distributions and proposes defining the metric matrix, with the notation ∇_θ a_θ b_θ ≜ (∇_θ a_θ) b_θ, as
F̄_a(θ) ≜ E_θ[ ∇_θ ln π(a|s;θ) ∇_θ ln π(a|s;θ)ᵀ ] = E_θ[ F_a(θ, s) ],   (2)
where F_a(θ, s) ≜ E_θ[ ∇_θ ln π(a|s;θ) ∇_θ ln π(a|s;θ)ᵀ | s ] is the FIM of the policy at a state s. The NG on this FIM, ∇̃_{F̄_a(θ)} η(θ) = F̄_a(θ)⁻¹ ∇_θ η(θ), is called the Natural Policy Gradient (NPG).
(II) Considering that the average reward η(θ) in (1) is affected not only by the policy distributions π(a|s;θ) but also by the stationary state distribution d^π(s), Morimura et al. [12] proposed the use of the FIM of the state-action joint distribution for RL,
F_{s,a}(θ) ≜ E_θ[ ∇_θ ln{d^π(s)π(a|s;θ)} ∇_θ ln{d^π(s)π(a|s;θ)}ᵀ ] = F_s(θ) + F̄_a(θ),   (3)
where F_s(θ) ≜ Σ_{s∈S} d^π(s) ∇_θ ln d^π(s) ∇_θ ln d^π(s)ᵀ is the FIM of d^π(s). The NG on this FIM, ∇̃_{F_{s,a}(θ)} η(θ) = F_{s,a}(θ)⁻¹ ∇_θ η(θ), is called the Natural State-action Gradient (NSG).
Some algorithms for the NPG learning, such as NAC [1] and NTD [10, 11], can be successfully implemented using modifications of the actor-critic frameworks based on the LSTDQ(λ) [18] and TD(λ) [16]. In contrast, no tractable algorithm for the NSG learning has been proposed to date. However, it has been suggested that the NSG learning is better than the NPG learning due to three differences [12]: (a) The NSG learning appropriately benefits from the concepts of Amari's NG learning, since the metric F_{s,a}(θ) necessarily and sufficiently accounts for the probability distribution that the average reward depends on. (b) F_{s,a}(θ) is an analogy to the Hessian matrix of the average reward. (c) Numerical experiments show a strong tendency to avoid entrapment in a learning plateau³, especially with large numbers of states. Therefore, the development of a tractable algorithm for NSG is important, and this is one of the goals of our work.
The reason for using F (?) as G(?) is because the FIM Fx (?) is a unique metric matrix of the secondorder Taylor expansion of the Kullback-Leibler divergence Pr(x|?+??) from Pr(x|?) [17].
3
Although there were numerical experiments involving the NSG in [12], they computed the NSG analytically with the state transition probabilities and the reward function, which is typically unknown in RL.
3
On the other hand, it was proven that the metric of NPG learning, F a (?), accounts for the infinite
time-steps joint distribution in the Markov chain M(?) [19, 1], while the metric of NSG learning,
Fs,a (?) accounts only for the single time-step distribution, which is the stationary state-action joint
distribution d?(s)?(a|s;?). Accordingly, the mixing time of M(?) might be drastically changed with
NSG learning compared to NPG learning, since the mixing time depends on the multiple (not necessarily infinite) time-steps rather than the single time-step, i.e., while various policies can lead to the
same stationary state distribution, Markov chains associated with these policies have different mixing times. A larger mixing time makes it difficult for the learning agent to explore the environment
and to estimate the gradient with finite samples. The ranking of the performances of the NPG and
NSG learning will be dependent on the RL task properties. Thus, we consider a mixture of NPG and
NSG as a generalized NG (gNG) and propose the approach of ?generalized Natural Actor-Critic?
(gNAC), in which the policy parameter of an actor is updated by an estimate of the gNG of a critic.
3 Generalized Natural Gradient for RL
First we explain the definition and properties of the generalized Natural Gradient (gNG). Then we
introduce the estimating functions to build up a foundation for an efficient estimation of the gNG.
3.1 Definition of gNG for RL
In order to define an interpolation between NPG and NSG with a parameter κ ∈ [0, 1], we consider a linear interpolation from the FIM of (2) for the NPG to the FIM of (3) for the NSG, written as
F̆_{s,a}(θ, κ) ≜ κ F_s(θ) + F̄_a(θ).   (4)
Then the natural gradient of the interpolated FIM is
∇̃_{F̆_{s,a}(θ,κ)} η(θ) = F̆_{s,a}(θ, κ)⁻¹ ∇_θ η(θ),   (5)
which we call the "generalized natural gradient" for RL with the interpolating parameter κ, gNG(κ). Obviously, gNG(κ = 0) and gNG(κ = 1) are equivalent to the NPG and the NSG, respectively. When κ is equal to 1/t, this FIM F̆_{s,a}(θ, κ) is equivalent to the FIM of the t time-steps joint distribution from the stationary state distribution d^π(s) on M(θ) [12]. Thus, this interpolation controlled by κ can be interpreted as a continuous interpolation with respect to the time-steps of the joint distribution, so that κ : 1 → 0 is inversely proportional to t : 1 → ∞. The term "generalized" of gNG(κ) reflects the generalization as the time steps on the joint distribution that the NG follows.
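A toy numerical sketch of the interpolation follows. It assumes diagonal FIMs so the solve is componentwise; the matrices are illustrative stand-ins, not derived from any MDP:

```python
def interpolated_fim(F_s, F_a_bar, kappa):
    """Interpolated FIM: kappa * F_s + F_a_bar, for matrices given as
    lists of lists (elementwise linear combination)."""
    return [[kappa * fs + fa for fs, fa in zip(row_s, row_a)]
            for row_s, row_a in zip(F_s, F_a_bar)]

def gng_direction(F_s, F_a_bar, kappa, grad):
    """gNG direction for *diagonal* toy FIMs: solve the interpolated
    FIM against the vanilla gradient componentwise."""
    F = interpolated_fim(F_s, F_a_bar, kappa)
    return [grad[i] / F[i][i] for i in range(len(grad))]
```

Setting the interpolating parameter to 0 recovers the NPG direction (policy FIM only), and setting it to 1 recovers the NSG direction (state FIM plus policy FIM), matching the two endpoints noted in the text.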
3.2 Estimating Function of gNG(κ)
We provide a general view of the estimation of the gNG(κ) using the theory of the estimating function, which provides well-established results for parameter estimation [14].
Such a function g ∈ ℝ^d for an estimator ω ∈ ℝ^d (and a variable x) is called an estimating function when it satisfies these conditions for all θ:
E_θ[g(x, ω*)] = 0,   (6)
det | E_θ[∇_ω g(x, ω)] | ≠ 0,   (7)
E_θ[ g(x, ω)ᵀ g(x, ω) ] < ∞,   (8)
where ω* and det | · | denote the exact solution of this estimation and the determinant, respectively.
Proposition 1 The d-dimensional (random) function
g⁰_{κ,θ}(s, a; ω) ≜ ∇_θ ln{d^π(s)π(a|s;θ)} ( r − ∇_θ ln{d^π(s)^κ π(a|s;θ)}ᵀ ω )   (9)
is an estimating function for gNG(κ), such that the unique solution of E_θ[g⁰_{κ,θ}(s, a; ω)] = 0 with respect to ω is equal to the gNG(κ).
Proof: From (1) and (4), the equation
0 = E_θ[g⁰_{κ,θ}(s, a; ω*)] = ∇_θ η(θ) − F̆_{s,a}(θ, κ) ω*
holds. Thus, ω* is equal to the gNG(κ) from (5). The remaining conditions from (7) and (8), which the estimating function must satisfy, also obviously hold (under Assumption 1).
In order to estimate gNG(κ) by using the estimating function (9) with finite T samples on M(θ), the simultaneous equation
(1/T) Σ_{t=0}^{T−1} g⁰_{κ,θ}(s_t, a_t; ω̂) = 0
is solved with respect to ω. The solution ω̂, which is also called the M-estimator [20], is an unbiased estimate of gNG(κ), so that ω̂ = ω* holds in the limit as T → ∞.
Note that solving the estimating function (9) is equivalent to the linear regression with the instrumental variable ∇_θ ln{d^π(s)π(a|s;θ)}, where the regressand, the regressor, and the model parameter (estimator) are r (or R(s, a)), ∇_θ ln{d^π(s)^κ π(a|s;θ)}, and ω, respectively [21], so that the regression residuals (r − ∇_θ ln{d^π(s)^κ π(a|s;θ)}ᵀ ω) are not correlated with the instrumental variables ∇_θ ln{d^π(s)π(a|s;θ)}.
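Solving the empirical estimating equation is then a finite linear solve: with z_t the instrumental variable and x_t the regressor, the estimate is A⁻¹b where A averages z_t x_tᵀ and b averages z_t r_t. A minimal sketch for a 2-dimensional parameter using Cramer's rule (toy data; the function name is a hypothetical stand-in, not the paper's implementation):

```python
def iv_estimate(Z, X, r):
    """Instrumental-variable solution of (1/T) sum_t z_t (r_t - x_t . w) = 0
    for 2-dimensional w: w = A^{-1} b with A = (1/T) sum z x^T and
    b = (1/T) sum z r, solved by Cramer's rule.

    Z: list of 2-vectors (instrumental variables z_t),
    X: list of 2-vectors (regressors x_t), r: list of scalars (rewards r_t).
    """
    T = len(r)
    A = [[sum(Z[t][i] * X[t][j] for t in range(T)) / T for j in range(2)]
         for i in range(2)]
    b = [sum(Z[t][i] * r[t] for t in range(T)) / T for i in range(2)]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [(b[0] * A[1][1] - b[1] * A[0][1]) / det,
            (A[0][0] * b[1] - A[1][0] * b[0]) / det]
```

When Z = X the estimator reduces to ordinary least squares; using a distinct instrument is what removes the correlation between regressor noise and residuals.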
3.3 Auxiliary Function of Estimating Function
Although we made a simple algorithm implementing the gNAC approach with the M-estimator of
the estimating function in (9), the performance of the estimation of gNG(κ) may be unacceptable for real RL applications, since the variance of the estimates of gNG(κ) tends to become too large.
For that reason, we extend the estimating function using (9) by embedding an auxiliary function to
create space for improvement in (9).
Lemma 1 The d-dimensional (random) function is an estimating function for gNG(?),
g?,?(s, a; ?) , ?? ln{d?(s)?(a|s;?)} r ? ?? ln{d?(s)? ?(a|s;?)}> ? ? ?(s, s+1 ) ,
(10)
where ?(s, s+1 ) is called the auxiliary function for (9):
(11)
?(s, s+1 ) , c + b(s) ? b(s+1 ).
The c and b(s) are an arbitrary bounded constant and an arbitrary bounded function of the state.
respectively.
Proof: See supplementary material.
Let G_ζ denote the class of such functions g_{κ,θ} with various auxiliary functions ζ. An optimal
auxiliary function, which leads to minimizing the variance of the gNG estimate ω̂, is defined by the
optimality criterion of the estimating functions [22]. An estimating function g*_{κ,θ} is optimal in G_ζ if

det | Γ_{g*_{κ,θ}} | ≤ det | Γ_{g_{κ,θ}} |   for all g_{κ,θ} ∈ G_ζ,   where   Γ_g ≜ E_θ[ g_{κ,θ}(s, a; ω*) g_{κ,θ}(s, a; ω*)ᵀ ].
Lemma 2 Let us approximate (or assume)

r ≈ E_θ[R(s, a, s_{+1}) | s, a] ≜ R(s, a),   (12)
ζ(s, s_{+1}) ≈ E_θ[ζ(s, s_{+1}) | s, a] ≜ ζ(s, a).   (13)

If the policy is non-degenerate for the task (so that the dimension of θ, d, is equal to Σ_{i=1}^{|S|}(|A_i| − 1),
where |S| and |A_i| are the numbers of states and of available actions at state s_i, respectively) and
ω* denotes the gNG(κ), then the "near" optimal auxiliary function ζ* in the "near" optimal estimating
function g*_{κ,θ}(s, a; ω) satisfies⁴

R(s, a) = ∇_θ ln{d^π(s)^κ π(a|s;θ)}ᵀ ω* + E_θ[ζ*(s, s_{+1}) | s, a].   (14)

Proof Sketch: The covariance matrix for the criterion of the auxiliary function ζ is approximated as

Γ_g ≈ E_θ[ ∇_θ ln{d^π(s)π(a|s;θ)} ∇_θ ln{d^π(s)π(a|s;θ)}ᵀ ( R(s, a) − ∇_θ ln{d^π(s)^κ π(a|s;θ)}ᵀ ω* − ζ(s, a) )² ] ≜ Γ̃_g.   (15)

The function ζ(s, a) usually has |S| degrees of freedom over all of the (s, a) couplets with the ergodicity of M(θ), because {b(s) − b(s_{+1})} in ζ has (|S| − 1) degrees of freedom over all of the (s, s_{+1})

⁴ The "near" of the near estimating function comes from the approximations of (12) and (13), which implicitly assume that the sum of the (co)variances, E_θ[(r − R(s, a))² + (ζ(s, a, s_{+1}) − ζ(s, a))² − 2(r −
R(s, a))(ζ(s, a, s_{+1}) − ζ(s, a)) | s, a], is not large. This assumption seems to hold in many RL tasks.
couplets. The value of ∇_θ ln{d^π(s)^κ π(a|s;θ)}ᵀ ω has Σ_{i=1}^{|S|}(|A_i| − 1) degrees of freedom. R(s, a)
has at most Σ_{i=1}^{|S|}|A_i| degrees of freedom. Therefore, there exist ζ* and ∇_θ ln{d^π(s)^κ π(a|s;θ)}ᵀ ω*
that satisfy (14). Remembering that ∇_θ ln{d^π(s)^κ π(a|s;θ)}ᵀ ω* is the approximator of R(s, a) (or r)
and that ω* is independent of the choice of ζ due to Lemma 1, we know that the solution with ζ*
coincides with ω*. Therefore, if the estimating function has the auxiliary function ζ* satisfying (14),
the criterion of the optimality for ζ is minimized, with det | Γ̃_g | = 0 due to (15).
From Lemma 2, the near optimal auxiliary function ζ* can be regarded as minimizing to zero
the mean squared residuals between R(s, a) and the estimator R̂*(s, a; ω) ≜
∇_θ ln{d^π(s)^κ π(a|s;θ)}ᵀ ω + ζ*(s, s_{+1}). Thus, the near optimality of g*_{κ,θ}(s, a; ω̂)
is interpreted as a near minimization of the Euclidean distance between r and its approximator
R̂*(s, a; ω̂), so that ζ* works to reduce the distance between the regressand r and the subspace of the
regressor ∇_θ ln{d^π(s)^κ π(a|s;θ)} of the M-estimator ω̂. In particular, R(s, a) is almost in this
subspace at the point ω̂ = ω*. Lemma 2 leads directly to Corollary 1.
Corollary 1 Let b*_{κ=0}(s) and c*_{κ=0} be the functions in the near optimal auxiliary function ζ*(s, s_{+1})
at κ = 0; then b*_{κ=0}(s) and c*_{κ=0} are equal to the (un-discounted) state value function [23] and the
average reward, respectively.

Proof: For all s, θ, and ω, the following equation holds:

E_θ[ ∇_θ ln{d^π(s)^0 π(a|s;θ)}ᵀ ω | s ] = ωᵀ Σ_{a∈A} ∇_θ π(a|s;θ) = 0.

Therefore, the following equation, which is the same as the definition of the value function b*_{κ=0}(s)
with the average reward c*_{κ=0} as the solution of the Poisson equation [23], can be derived from (14):

b*_{κ=0}(s) + c*_{κ=0} = E_θ[ r + b*_{κ=0}(s_{+1}) | s ],   ∀s.
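Corollary 1's characterization can be checked numerically on a small Markov chain (the chain, rewards, and helper steps below are our own illustration, not from the paper): solving the Poisson equation b(s) + c = E[r + b(s_{+1}) | s], with the free constant pinned by E_d[b] = 0, recovers the average reward c and the undiscounted value function b.

```python
import numpy as np

# A hypothetical 3-state Markov chain with state-dependent expected rewards.
P = np.array([[0.9, 0.1, 0.0],
              [0.0, 0.8, 0.2],
              [0.3, 0.0, 0.7]])
r = np.array([1.0, 0.0, 2.0])  # expected one-step reward from each state

# Stationary distribution d: left eigenvector of P for eigenvalue 1.
evals, evecs = np.linalg.eig(P.T)
d = np.real(evecs[:, np.argmax(np.real(evals))])
d = d / d.sum()

c = d @ r  # average reward

# Poisson equation: (I - P) b = r - c, with d @ b = 0 fixing the free constant.
A = np.vstack([np.eye(3) - P, d])
b = np.linalg.lstsq(A, np.append(r - c, 0.0), rcond=None)[0]

# b(s) + c = E[r + b(s')|s] then holds for every state.
holds = np.allclose(b + c, r + P @ b)
```

The normalization d @ b = 0 is one conventional choice; any other pinning of the additive constant in b would satisfy the same Poisson equation.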
4 A Generalized NAC Algorithm
We now propose a useful instrumental variable for the gNG(κ) estimation and then derive a gNAC
algorithm along with an algorithm for ∇_θ ln d^π(s) estimation.
4.1 Bias from Estimation of ∇_θ ln d^π(s)
For computation of the M-estimator of g_{κ,θ}(s, a; ω) as the gNG(κ) estimate on M(θ), both of the derivatives ∇_θ ln π(a|s;θ) and ∇_θ ln d^π(s) are required. While we can easily
compute ∇_θ ln π(a|s;θ), since we have parameterized the policy, we cannot compute the Logarithm
stationary State distribution Derivative (LSD) ∇_θ ln d^π(s) analytically unless the state transition
probabilities and the reward function are known. Thus, we use the LSD estimate from the algorithm LSLSD [13]. These LSD estimates ∇̂_θ ln d^π(s) are unbiased but usually have some estimation error with
finite samples, so that ∇̂_θ ln d^π(s) = ∇_θ ln d^π(s) + ε(s), where
ε(s) is a d-dimensional random variable satisfying E{ε(s) | s} = 0.
In such cases, the estimate of gNG(κ) from the estimating function (9) or (10) would be biased,
because the first condition (6) for g_{κ,θ} is violated unless E_θ[ε(s)ε(s)ᵀ] = 0. Thus, in Section
4.2, we consider a refinement of the instrumental variable, i.e., of the part ∇_θ ln{d^π(s)π(a|s;θ)} in the
estimating function (10), since the instrumental variable can be replaced with any function I that
satisfies the following conditions⁵ for any s, θ, and ω [22] and makes the solution ω* become the gNG(κ):

E_θ[ I ( r − {κ ∇̂_θ ln d^π(s) + ∇_θ ln π(a|s;θ)}ᵀ ω* − ζ(s, s_{+1}) ) ] = 0,   (16)
det | E_θ[ I {κ ∇̂_θ ln d^π(s) + ∇_θ ln π(a|s;θ)}ᵀ ] | ≠ 0,   (17)
E_θ[ ( r − {κ ∇̂_θ ln d^π(s) + ∇_θ ln π(a|s;θ)}ᵀ ω* − ζ(s, s_{+1}) )² IᵀI ] < ∞.   (18)
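The role of the instrumental variable can be seen in a toy errors-in-variables regression (purely illustrative; not the paper's setting): when the regressor is observed with zero-mean noise, ordinary least squares is biased toward zero, while an instrument correlated with the true regressor but independent of the observation noise remains consistent.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 200_000
w_true = 2.0

x = rng.normal(size=T)            # true (unobserved) regressor
x_noisy = x + rng.normal(size=T)  # observed regressor with zero-mean noise
z = x + 0.1 * rng.normal(size=T)  # instrument: correlated with x, not with the noise

r = w_true * x + 0.1 * rng.normal(size=T)

w_ols = (x_noisy @ r) / (x_noisy @ x_noisy)  # attenuated toward zero
w_iv = (z @ r) / (z @ x_noisy)               # consistent
```

Here the OLS estimate settles near w_true · Var(x)/(Var(x) + Var(noise)) = 1.0, while the IV estimate stays near the true value 2.0; this mirrors why the noisy LSD estimate must be kept out of the instrument.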
4.2 Instrumental variables of the near optimal estimating function for gNG(κ)
We use a linear function to introduce the auxiliary function (defined in (11)):

ζ(s, s_{+1}; ν) ≜ ( φ̄(s) − [φ(s_{+1})ᵀ, 0]ᵀ )ᵀ ν,

⁵ These correspond to the conditions for the estimating function, (6), (7), and (8).
A Generalized Natural Actor-Critic Algorithm with LSLSD(β)
Given: A policy π(a|s;θ) with an adjustable θ and a feature vector function of the state, φ(s).
Initialize: θ; κ ∈ [0, 1]; α, β ∈ [0, 1).
Set: A := 0; B := 0; C := 0; D := 0; E := 0; x := 0; y := 0.
For t = 0 to T − 1 do
  Critic: Compute the gNG(κ) estimate ω̂*
    A := βA + ∇_θ ln π(a_t|s_t;θ) ∇_θ ln π(a_t|s_t;θ)ᵀ;  B := βB + ∇_θ ln π(a_t|s_t;θ) φ̄(s_t, s_{t+1})ᵀ;
    C := βC + φ̄(s_t) ∇_θ ln π(a_t|s_t;θ)ᵀ;  D := βD + φ̄(s_t) φ(s_t)ᵀ;  E := βE + φ̄(s_t) φ̄(s_t, s_{t+1})ᵀ;
    x := βx + r_t ∇_θ ln π(a_t|s_t;θ);  y := βy + r_t φ̄(s_t);  Λ := "LSLSD(β) algorithm" [13];
    ω̂* := { A + κ C̃ᵀΛ − B E⁻¹ (C + κDΛ) }⁻¹ ( x − B E⁻¹ y )
  Actor: Update θ by the gNG(κ) estimate ω̂*
    θ := θ + α ω̂*
End
Return: the policy π(a|s;θ).
* C̃ is the sub-matrix of C obtained by removing its lowest row.
where ν ∈ R^{|S|+1} and φ(s) ∈ R^{|S|} are the model parameter and the regressor (feature vector
function) of the state s, respectively, and φ̄(s) ≜ [φ(s)ᵀ, 1]ᵀ. We assume that the set of φ(s) is
linearly independent. Accordingly, the whole model parameter of the estimating function is now
[ωᵀ, νᵀ]ᵀ ≜ ϖ.
We propose the following instrumental variable:

I*(s, a) ≜ [∇_θ ln π(a|s;θ)ᵀ, φ̄(s)ᵀ]ᵀ.   (19)

Because this instrumental variable I* has the desirable property shown in Theorem 1, the estimating function g*_{κ,θ}(s, a; ϖ) with I* is a useful function, even if the LSD is estimated.
Theorem 1 To estimate gNG(κ), let I*(s, a) be used for the estimating function as

g*_{κ,θ}(s, a; ϖ) = I*(s, a) ( r − (κ ∇̂_θ ln d^π(s) + ∇_θ ln π(a|s;θ))ᵀ ω − ζ(s, s_{+1}; ν) ),   (20)

and let ω* and ν* be the solutions. Then ω* is equal to the gNG(κ), ω* = F̃_{s,a}(κ, θ)⁻¹ ∇_θ η(θ), and the auxiliary function with ν* is the near optimal auxiliary function provided in Lemma 2, ζ(s, s_{+1}; ν*) =
ζ*(s, s_{+1}), even if the LSD estimates include (zero-mean) random noise.
Proof Sketch: (i) The condition (18) for the instrumental variable is satisfied due to Assumption 1. (ii) Considering E_θ[∇̂_θ ln d^π(s) ∇_θ ln π(a|s;θ)ᵀ] = 0 and Assumption 1, the condition (17),
det | E_θ[∇_ϖ g*_{κ,θ}] | ≠ 0, is satisfied. This guarantees that the solution ϖ* ≜ [ω*ᵀ, ν*ᵀ]ᵀ of (20) that
satisfies E_θ[g*_{κ,θ}] = 0 is unique. (iii) Assuming that Theorem 1 is true, so that ω* = F̃_{s,a}(κ, θ)⁻¹ ∇_θ η(θ)
and ζ(s, s_{+1}; ν*) = ζ*(s, s_{+1}) hold, then E_θ[g*_{κ,θ}(s, a; ϖ*) | s, a] becomes I*(s, a) ( r − R(s, a) )
from (14), and its expectation over M(θ) becomes equal to 0. This means that (20) also satisfies the
condition (16). From (i), (ii), and (iii), this theorem is proven.

The optimal instrumental variable I°(s, a) with respect to variance minimization can be derived
straightforwardly with the results of [21, 24]. However, since I° usually has to be estimated, we do
not address I° here. Note that the proposed I*(s, a) of (19) can be computed analytically.
4.3 A Generalized Natural Actor-Critic Algorithm with LSLSD
We can straightforwardly derive a generalized Natural Actor-Critic algorithm, gNAC(κ), by solving the estimating function g*_{κ,θ}(s, a; ϖ) in (20), using the LSD estimate ∇̂_θ ln d^π(s) ≜ Λᵀ φ(s).
However, since ν in the model parameter is not required in updating the policy parameter θ, to reduce the computational cost we compute only ω, by using the results of the block matrices. The
above algorithm table shows an instance of the gNAC(κ) algorithm with LSLSD(β) [13], with the
forgetting parameter β for the statistics, the learning rate of the policy α, and the definitions
φ(s_{t−1}, s_t) ≜ [φ(s_{t−1}) − φ(s_t)] and φ̄(s_{t−1}, s_t) ≜ [φ(s_{t−1}, s_t)ᵀ, 1]ᵀ.
Note that the LSD estimate is not used at all in the proposed gNAC(κ = 0). In addition, note
that gNAC(κ = 0) is equivalent to a non-episodic NAC algorithm modified to optimize the average
reward instead of the discounted cumulative reward [1]. This interpretation is consistent with the
results of Corollary 1.
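The recursive statistics in the algorithm table can be sketched as follows. This is a schematic only: the matrix shapes follow our reconstruction of the (partly garbled) table, the LSLSD sub-routine and the final ω̂* solve are omitted, and all inputs are random placeholders.

```python
import numpy as np

def update_critic_stats(stats, beta, grad_logpi, phi_s, phibar_pair, r):
    """One forgetting-factor update of the statistics A, B, C, D, E, x, y.

    grad_logpi : grad_theta log pi(a_t|s_t; theta), shape (d,)
    phi_s      : state feature phi(s_t), shape (m,)
    phibar_pair: pair feature [phi(s_t) - phi(s_{t+1}), 1], shape (m + 1,)
    """
    A, B, C, D, E, x, y = stats
    phibar_s = np.append(phi_s, 1.0)  # [phi(s_t), 1]
    A = beta * A + np.outer(grad_logpi, grad_logpi)
    B = beta * B + np.outer(grad_logpi, phibar_pair)
    C = beta * C + np.outer(phibar_s, grad_logpi)
    D = beta * D + np.outer(phibar_s, phi_s)
    E = beta * E + np.outer(phibar_s, phibar_pair)
    x = beta * x + r * grad_logpi
    y = beta * y + r * phibar_s
    return A, B, C, D, E, x, y

d, m = 3, 4  # policy-parameter and state-feature dimensions (made up)
stats = (np.zeros((d, d)), np.zeros((d, m + 1)), np.zeros((m + 1, d)),
         np.zeros((m + 1, m)), np.zeros((m + 1, m + 1)), np.zeros(d),
         np.zeros(m + 1))
rng = np.random.default_rng(2)
for _ in range(10):
    stats = update_critic_stats(stats, 0.9, rng.normal(size=d),
                                rng.normal(size=m), rng.normal(size=m + 1), 1.0)
```

With β < 1 these running sums implement exponential forgetting, so the critic tracks a slowly changing policy instead of averaging over its whole history.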
[Figure 1 appears here: panel (A) plots Angle [radian] (0 to 2) against Time Step (0 to 1000) for gNG(1), gNG(0.5), and gNG(0.25) estimates with and without ζ; panel (B) plots Average Reward (0 to 1) against Time Step (0 to 5×10⁵) for gNAC(1), gNAC(0.5), gNAC(0.25), NAC = gNAC(0), and AC.]

Figure 1: Averages and standard deviations over 50 independent episodes: (A) The angles between
the true gNG(κ) and estimates with and without the auxiliary function ζ(s, s_{+1}; ν) on the 5-state
MDP; (B) The learning performances (average rewards) for the various (N)PGRL algorithms with
the auxiliary functions on the 30-state MDP.
5 Numerical Experiment
We studied the results of the proposed gNAC algorithm with the various κ ∈ {0, 0.25, 0.5, 1} and
randomly synthesized MDPs with |S| ∈ {5, 30} states and |A| = 2 actions. As the performance baseline among the existing PG methods, we used Konda's actor-critic algorithm [23]. This
algorithm uses a baseline function in which the state value estimates are computed by LSTD(0)
[25], while the original version did not use any baseline function. Note that gNAC(κ = 0) can be
regarded as the NAC proposed by [1], which serves as the baseline for the current state-of-the-art
PGRL algorithm. We initialized the setting of the MDP in each episode so that the set of the actions was
always A = {l, m}. The state transition probability function was set by using the Dirichlet distribution Dir(α ∈ R²) and the uniform distribution U(a; b), generating an integer from 1 to a other than
b: we first initialized it such that p(s′|s, a) := 0, ∀(s′, s, a), and then, with q(s, a) ∼ Dir(α = [.3, .3])
and x\b ∼ U(|S|; b), set

p(s|s, m) := q₁(s, m),    p(x\s|s, m) := q₂(s, m),
p(s+1|s, l) := q₁(s, l),    p(x\s+1|s, l) := q₂(s, l),

where the states s = 1 and s = |S| + 1 are identified. The reward function R(s, a, s_{+1}) was first
drawn from the Gaussian distribution N(μ = 0, σ² = 1) and then normalized so that max_θ η(θ) = 1 and
min_θ η(θ) = −1; R(s, a, s_{+1}) := 2(R(s, a, s_{+1}) − min_θ η(θ))/(max_θ η(θ) − min_θ η(θ)) − 1. The
policy is represented by the sigmoidal function π(l|s; θ) = 1/(1 + exp(−θᵀφ(s))). Each ith element of the initial policy parameter θ₀ ∈ R^{|S|} and the feature vector of the jth state, φ(s_j) ∈ R^{|S|},
were drawn from N(0, 1) and N(δ_{ij}, 0.5), respectively, where δ_{ij} is the Kronecker delta. Figure 1 (A)
shows the angles between the true gNG(κ) and the gNG(κ) estimates with and without the auxiliary
function ζ(s, s_{+1}; ν) at α := 0 (fixed policy), β := 1, λ := 0. The estimation without the auxiliary function was implemented by solving the estimating function (9). We can confirm that the
estimate using g*_{κ,θ}(s, a; ϖ) in (20), which implements the near-optimal estimating function, is
a much more efficient estimator than the one without the auxiliary function. Figure 1 (B) shows the comparison results in terms of the learning performances, where the learning rates for the gNACs and
Konda's actor-critic were set as α := 3 × 10⁻⁴ and α_Konda := 60α. The other hyper-parameters
β := 1 − α and λ := 0 were the same in each of the algorithms. We thus confirmed that our
gNAC(κ > 0) algorithm outperformed the current state-of-the-art NAC algorithm (gNAC(κ = 0)).
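The random MDP construction above can be sketched as follows (a simplified reimplementation: the helper names, seed, and the exact successor-selection convention are our own reading of the setup, since the extracted text is ambiguous about the wraparound details):

```python
import numpy as np

def make_random_mdp(n_states, seed=0):
    """Random MDP in the style of Section 5: two actions {l, m}; each (s, a)
    puts Dirichlet([.3, .3]) mass on two successor states."""
    rng = np.random.default_rng(seed)
    P = np.zeros((2, n_states, n_states))  # P[a, s, s']
    for s in range(n_states):
        for a in range(2):
            q = rng.dirichlet([0.3, 0.3])
            # action index 0 ~ 'l' (advance, wrapping around), 1 ~ 'm' (stay)
            succ1 = (s + 1) % n_states if a == 0 else s
            others = [t for t in range(n_states) if t != succ1]
            succ2 = others[rng.integers(len(others))]  # uniform over the rest
            P[a, s, succ1] += q[0]
            P[a, s, succ2] += q[1]
    # Rewards ~ N(0, 1), then rescaled into [-1, 1].
    R = rng.normal(size=(2, n_states))
    R = 2 * (R - R.min()) / (R.max() - R.min()) - 1
    return P, R

def policy(theta, phi):
    """pi(l | s) as the sigmoid of theta^T phi(s)."""
    return 1.0 / (1.0 + np.exp(-phi @ theta))

P, R = make_random_mdp(5)
```

Note the rescaling here normalizes the sampled reward table directly, which stands in for the paper's normalization of η(θ) over policies.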
6 Summary
In this paper, we proposed a generalized NG (gNG) learning algorithm that combines two Fisher
information matrices for RL. The theory of the estimating function provided insight to prove some
important theoretical results from which our proposed gNAC algorithm was derived. Numerical experiments showed that the gNAC algorithm can estimate gNGs efficiently and that it can outperform
a current state-of-the-art NAC algorithm. In order to utilize the auxiliary function of the estimating
function for the gNG, we defined an auxiliary function on the criterion of the near optimality of the
estimating function, by minimizing the distance between the immediate reward as the regressand
and the subspace of the regressors of the gNG at the solution of the gNG. However, it may be possible to use a different criterion, such as optimality under the Fisher information matrix metric instead
of the Euclidean metric. Also, an analysis of the properties of the gNG itself will be necessary to more
deeply understand the properties and efficacy of our proposed gNAC algorithm.
References
[1] J. Peters, S. Vijayakumar, and S. Schaal. Natural actor-critic. In European Conference on Machine Learning, 2005.
[2] V. Gullapalli. A stochastic reinforcement learning algorithm for learning real-valued functions. Neural Networks, 3(6):671-692, 1990.
[3] R. J. Williams. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine Learning, 8:229-256, 1992.
[4] J. Baxter and P. Bartlett. Infinite-horizon policy-gradient estimation. Journal of Artificial Intelligence Research, 15:319-350, 2001.
[5] R. Tedrake, T. W. Zhang, and H. S. Seung. Stochastic policy gradient reinforcement learning on a simple 3D biped. In IEEE International Conference on Intelligent Robots and Systems, 2004.
[6] J. Peters and S. Schaal. Policy gradient methods for robotics. In IEEE International Conference on Intelligent Robots and Systems, 2006.
[7] S. Richter, D. Aberdeen, and J. Yu. Natural actor-critic for road traffic optimisation. In Advances in Neural Information Processing Systems. MIT Press, 2007.
[8] S. Kakade. A natural policy gradient. In Advances in Neural Information Processing Systems, volume 14. MIT Press, 2002.
[9] S. Amari. Natural gradient works efficiently in learning. Neural Computation, 10(2):251-276, 1998.
[10] T. Morimura, E. Uchibe, and K. Doya. Utilizing natural gradient in temporal difference reinforcement learning with eligibility traces. In International Symposium on Information Geometry and its Applications, pages 256-263, 2005.
[11] S. Bhatnagar, R. Sutton, M. Ghavamzadeh, and M. Lee. Incremental natural actor-critic algorithms. In Advances in Neural Information Processing Systems, pages 105-112. MIT Press, 2008.
[12] T. Morimura, E. Uchibe, J. Yoshimoto, and K. Doya. A new natural policy gradient by stationary distribution metric. In European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, 2008.
[13] T. Morimura, E. Uchibe, J. Yoshimoto, J. Peters, and K. Doya. Derivatives of logarithmic stationary distributions for policy gradient reinforcement learning. Neural Computation. (in press).
[14] V. Godambe. Estimating Functions. Oxford Science, 1991.
[15] D. P. Bertsekas. Dynamic Programming and Optimal Control, Volumes 1 and 2. Athena Scientific, 1995.
[16] R. S. Sutton and A. G. Barto. Reinforcement Learning. MIT Press, 1998.
[17] S. Amari and H. Nagaoka. Methods of Information Geometry. Oxford University Press, 2000.
[18] M. G. Lagoudakis and R. Parr. Least-squares policy iteration. Journal of Machine Learning Research, 4:1107-1149, 2003.
[19] D. Bagnell and J. Schneider. Covariant policy search. In Proceedings of the International Joint Conference on Artificial Intelligence, July 2003.
[20] S. Amari and M. Kawanabe. Information geometry of estimating functions in semi-parametric statistical models. Bernoulli, 3(1), 1997.
[21] A. C. Singh and R. P. Rao. Optimal instrumental variable estimation for linear models with stochastic regressors using estimating functions. In Symposium on Estimating Functions, pages 177-192, 1996.
[22] B. Chandrasekhar and B. K. Kale. Unbiased statistical estimating functions in presence of nuisance parameters. Journal of Statistical Planning and Inference, 9:45-54, 1984.
[23] V. S. Konda and J. N. Tsitsiklis. On actor-critic algorithms. SIAM Journal on Control and Optimization, 42(4):1143-1166, 2003.
[24] T. Ueno, M. Kawanabe, T. Mori, S. Maeda, and S. Ishii. A semiparametric statistical approach to model-free policy evaluation. In International Conference on Machine Learning, pages 857-864, 2008.
[25] J. A. Boyan. Technical update: Least-squares temporal difference learning. Machine Learning, 49(2-3):233-246, 2002.
3,054 | 3,768 | Relax then Compensate:
On Max-Product Belief Propagation and More
Adnan Darwiche
Computer Science Department
University of California, Los Angeles
Los Angeles, CA 90095
[email protected]
Arthur Choi
Computer Science Department
University of California, Los Angeles
Los Angeles, CA 90095
[email protected]
Abstract
We introduce a new perspective on approximations to the maximum a posteriori
(MAP) task in probabilistic graphical models, that is based on simplifying a given
instance, and then tightening the approximation. First, we start with a structural
relaxation of the original model. We then infer from the relaxation its deficiencies, and compensate for them. This perspective allows us to identify two distinct
classes of approximations. First, we find that max-product belief propagation can
be viewed as a way to compensate for a relaxation, based on a particular idealized
case for exactness. We identify a second approach to compensation that is based
on a more refined idealized case, resulting in a new approximation with distinct
properties. We go on to propose a new class of algorithms that, starting with a
relaxation, iteratively seeks tighter approximations.
1 Introduction
Relaxations are a popular approach for tackling intractable optimization problems. Indeed, for finding the maximum a posteriori (MAP) assignment in probabilistic graphical models, relaxations play
a key role in a variety of algorithms. For example, tree-reweighted belief propagation (TRW-BP) can
be thought of as a linear programming relaxation of an integer program for a given MAP problem
[1, 2]. Branch-and-bound search algorithms for finding optimal MAP solutions, such as [3, 4], rely
on structural relaxations, such as mini-bucket approximations, to provide upper bounds [4, 5].
Whether a relaxation is used as an approximation on its own, or as a guide for finding optimal
solutions, a trade-off is typically made between the quality of an approximation and the complexity
of computing it. We illustrate here instead how it is possible to tighten a given relaxation itself,
without impacting its structural complexity.
More specifically, we propose here an approach to approximating a given MAP problem by performing two steps. First, we relax the structure of a given probabilistic graphical model, which results in
a simpler model whose MAP solution provides an upper bound on that of the original. Second, we
compensate for the relaxation by introducing auxiliary parameters, which we use to restore certain
properties, leading to a tighter approximation. We shall in fact propose two distinct properties on
which a compensation can be based. The first is based on a simplified case where a compensation
can be guaranteed to yield exact results. The second is based on a notion of an ideal compensation,
that seeks to correct for a relaxation more directly. As we shall see, the first approach leads to a
new semantics for the max-product belief propagation algorithm. The second approach leads to
another approximation that further yields upper bounds on the MAP solution. We further propose
an algorithm for finding such a compensation, that starts with a relaxation and iteratively provides
monotonically decreasing upper bounds on the MAP solution (at least empirically).
Proofs of results are given in the auxiliary Appendix.
2 MAP Assignments
Let M be a factor graph over a set of variables X, inducing a distribution Pr(x) ∝ ∏_a ψ_a(x_a),
where x = {X₁ = x₁, . . . , Xₙ = xₙ} is an assignment of factor graph variables X_i to states x_i, and
where a is an index to the factor ψ_a(X_a) over the domain X_a ⊆ X. We seek the maximum a
posteriori (MAP) assignment x* = argmax_x ∏_a ψ_a(x_a). We denote the log of the value of a MAP
assignment x* by:

map* = log max_x ∏_a ψ_a(x_a) = max_x Σ_a log ψ_a(x_a)

which we refer to more simply as the MAP value. Note that there may be multiple MAP assignments
x*, so we may refer to just the value map* when the particular assignment is not relevant. Next,
if z is an assignment over variables Z ⊆ X, then let x ∼ z denote that x and z are compatible
assignments, i.e., they set their common variables to the same states. Consider then the MAP value
under a partial assignment z:

map(z) = max_{x∼z} Σ_a log ψ_a(x_a).

We will, in particular, be interested in the MAP value map(X = x) where we assume a single variable X is set to a particular state x. We shall also refer to these MAP values more generally as
map(.), without reference to any particular assignment.
3 Relaxation
The structural relaxations that we consider here are based on the relaxation of equivalence constraints from a model M, where an equivalence constraint X_i ≡ X_j is a factor ψ_eq(X_i, X_j) over
two variables X_i and X_j that have the same states. Further, ψ_eq(x_i, x_j) is 1 if x_i = x_j and 0 otherwise. We call an assignment x valid, with respect to an equivalence constraint X_i ≡ X_j, if it sets
variables X_i and X_j to the same state, and invalid otherwise. Note that when we remove an equivalence constraint from a model M, the values map(x) for valid configurations x do not change, since
log 1 = 0. However, the values map(x) for invalid configurations can increase, since they are −∞
prior to the removal. In fact, they could overtake the optimal value map*. Thus, the MAP value
after relaxing an equivalence constraint in M is an upper bound on the original MAP value.
It is straightforward to augment a model M to another where equivalence constraints can be relaxed.
Consider, for example, a factor ψ₁(A, B, C). We can replace the variable C in this factor with a
clone variable C′, resulting in a factor ψ₁′(A, B, C′). When we now add the factor ψ₂(C, C′) for the
equivalence constraint C ≡ C′, we have a new model M′ which is equivalent to the original model
M, in that an assignment x in M corresponds to an assignment x′ in M′, where assignment x′ sets
a variable and its clone to the same state. Moreover, the value map(x) in model M is the same as
the value map′(x′) in model M′.
We note that a number of structural relaxations can be reduced to the removal of equivalence constraints, including relaxations found by deleting edges [6, 7], as well as mini-bucket approximations
[5, 4]. In fact, the example above can be considered a relaxation where we delete a factor graph
edge C − ψ₁, substituting clone C′ in place of variable C. Note that mini-bucket approximations
in particular have enabled algorithms for solving MAP problems via branch-and-bound search [3, 4].
4 Compensation
Suppose that we have a model M with MAP values map(.). Say that we remove the equivalence
constraints in M, resulting in a relaxed model with MAP values r-map(.). Our goal is to identify
a compensated model M′ with MAP values c-map(.) that is as tractable to compute as the values
r-map(.), but yielding tighter approximations of the original values map(.).
To this end, we introduce into the relaxation additional factors θ_{ij;i}(X_i) and θ_{ij;j}(X_j) for each
equivalence constraint X_i ≡ X_j that we remove. Equivalently, we can introduce the log factors
θ(X_i) = log θ_{ij;i}(X_i) and θ(X_j) = log θ_{ij;j}(X_j) (we omit the additional factor indices, as they
will be unambiguous from the context). These new factors add new parameters into the approximation, which we shall use to recover a weaker notion of equivalence into the model. More specifically,
given a set of equivalence constraints X_i ≡ X_j to relax, we have the original MAP values map(.),
the relaxation r-map(.) and the compensation c-map(.), where:

• map(z) = max_{x∼z} Σ_a log ψ_a(x_a) + Σ_{X_i≡X_j} log ψ_eq(X_i = x_i, X_j = x_j)
• r-map(z) = max_{x∼z} Σ_a log ψ_a(x_a)
• c-map(z) = max_{x∼z} Σ_a log ψ_a(x_a) + Σ_{X_i≡X_j} ( θ(X_i = x_i) + θ(X_j = x_j) )
? c-map(z) = maxx?z a log ?a (xa ) + Xi?Xj ?(Xi = xi ) + ?(Xj = xj )
Note that the auxiliary factors θ of the compensation do not introduce additional complexity to the
relaxation, in the sense that the treewidth of the resulting model is the same as that of the relaxation.
Consider then the case where an optimal assignment x* for the relaxation happens to set variables
Xi and Xj to the same state x, for each equivalence constraint Xi ≡ Xj that we relaxed. In this
case, the optimal solution for the relaxation is also an optimal solution for the original model, i.e.,
r-map* = map*. On the other hand, if a relaxation's optimal assignment sets Xi and Xj to different
states, then it is not a valid assignment for the original model M, as it violates the equivalence
constraint and thus has log probability −∞.
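To ground these definitions, here is a brute-force sketch on a made-up two-factor chain A–B–C, where the copy of B inside ψ2 is replaced by a clone Bh and the constraint B ≡ Bh is relaxed. All factor values and names here are invented for illustration, not taken from the paper:

```python
import itertools
import math

# Toy chain A - B - C with binary variables and two factors psi1(A, B),
# psi2(B, C).  We relax the model by substituting a clone Bh for B inside
# psi2 and dropping the constraint B == Bh.  Values are made up.
psi1 = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.6}
psi2 = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.9, (1, 1): 0.1}

def map_value():
    # map*: maximize the sum of log factors with B == Bh enforced exactly.
    return max(math.log(psi1[a, b]) + math.log(psi2[b, c])
               for a, b, c in itertools.product([0, 1], repeat=3))

def r_map_value():
    # r-map*: the constraint is dropped, so B and Bh range independently;
    # the value can only increase (an upper bound on map*).
    return max(math.log(psi1[a, b]) + math.log(psi2[bh, c])
               for a, b, bh, c in itertools.product([0, 1], repeat=4))

def c_map_value(theta_b, theta_bh):
    # c-map*: the relaxation augmented with log factors theta(B), theta(Bh).
    return max(math.log(psi1[a, b]) + math.log(psi2[bh, c])
               + theta_b[b] + theta_bh[bh]
               for a, b, bh, c in itertools.product([0, 1], repeat=4))

print(map_value(), r_map_value())  # for these values, r-map* is strictly larger
```

With the auxiliary factors set to zero, c-map(.) coincides with r-map(.); this is the starting point that the compensations below improve on.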
Consider, for a given equivalence constraint Xi ≡ Xj, the relaxation's MAP values r-map(Xi = x)
and r-map(Xj = x) when we set, respectively, a single variable Xi or Xj to a state x. If for all states
x we find that r-map(Xi = x) ≠ r-map(Xj = x), then we can infer that the MAP assignment sets
variables Xi and Xj to different states: the MAP value when we set Xi to a state x is different than
the MAP value when we set Xj to the same state. We can then ask of a compensation, for all states
x, that c-map(Xi = x) = c-map(Xj = x), enforcing a weaker notion of equivalence. In this case, if
there is a MAP assignment that sets variable Xi to a state x, then there is at least a MAP assignment
that sets variable Xj to the same state, even if there is no MAP assignment that sets both Xi and Xj
to the same state at the same time.
We now want to identify parameters θ(Xi) and θ(Xj) to compensate for a relaxation in this manner.
We propose two approaches: (1) based on a condition for exactness in a special case, and (2) based
on a notion of ideal compensations. To get the intuitions behind these approaches, we consider first
the simplified case where a single equivalence constraint is relaxed.
4.1 Intuitions: Splitting a Model into Two
Consider the case where relaxing a single equivalence constraint Xi ≡ Xj splits a model M into two
independent sub-models, Mi and Mj, where sub-model Mi contains variable Xi and sub-model
Mj contains variable Xj. Intuitively, we would like the parameters added in one sub-model to
summarize the relevant information about the other sub-model. In this way, each sub-model could
independently identify their optimal sub-assignments. For example, we can use the parameters:

  θ(Xi = x) = mapj(Xj = x)  and  θ(Xj = x) = mapi(Xi = x).

Since sub-models Mi and Mj become independent after relaxing the single equivalence constraint
Xi ≡ Xj, computing these parameters is sufficient to reconstruct the MAP solution for the original
model M. In particular, we have that θ(Xi = x) + θ(Xj = x) = map(Xi = x, Xj = x), and further
that map* = max_x [θ(Xi = x) + θ(Xj = x)].
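A quick numerical check of this split case, on a hypothetical chain A–B–C where relaxing the single constraint B ≡ Bh (the clone of B used inside ψ2) splits the model into {A, B} and {Bh, C}; factor values are made up:

```python
import itertools
import math

# Invented factor values for the two sub-models after the split.
psi1 = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.6}  # sub-model Mi
psi2 = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.9, (1, 1): 0.1}  # sub-model Mj

# Sub-model MAP values with the shared variable clamped to x:
map_i = {x: max(math.log(psi1[a, x]) for a in (0, 1)) for x in (0, 1)}
map_j = {x: max(math.log(psi2[x, c]) for c in (0, 1)) for x in (0, 1)}

# The parameters proposed above: each side summarizes the other sub-model.
theta_b = map_j    # theta(B = x)  = map_j(Bh = x)
theta_bh = map_i   # theta(Bh = x) = map_i(B = x)

# True MAP value of the original (unrelaxed) chain:
map_star = max(math.log(psi1[a, b]) + math.log(psi2[b, c])
               for a, b, c in itertools.product([0, 1], repeat=3))

# map* = max_x [ theta(B = x) + theta(Bh = x) ]
recovered = max(theta_b[x] + theta_bh[x] for x in (0, 1))
assert abs(recovered - map_star) < 1e-12
```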
We propose then that the parameters of a compensation, with MAP values c-map(.), should satisfy
the following condition:

  c-map(Xi = x) = c-map(Xj = x) = θ(Xi = x) + θ(Xj = x) + γ        (1)

for all states x. Here γ is an arbitrary normalization constant, but the choice γ = ½ c-map* results in
simpler semantics. The following proposition confirms that this choice of parameters does indeed
reflect our earlier intuitions, showing that this choice allows us to recover exact solutions in the
idealized case when a model is split into two.

Proposition 1 Let map(.) denote the MAP values of a model M, and let c-map(.) denote the MAP
values of a compensation that results from relaxing an equivalence constraint Xi ≡ Xj that split M
into two independent sub-models. Then the compensation has parameters satisfying Equation 1 iff
c-map(Xi = x) = c-map(Xj = x) = map(Xi = x, Xj = x) + γ.
Note that the choice γ = ½ c-map* implies that θ(Xi = x) + θ(Xj = x) = map(Xi = x, Xj = x) in
the case where relaxing an equivalence constraint splits a model into two.
In the case where relaxing an equivalence constraint does not split a model into two, a compensation
satisfying Equation 1 at least satisfies a weaker notion of equivalence. We might expect that such a
compensation may lead to more meaningful, and hopefully more accurate, approximations than a relaxation. Indeed, this compensation will eventually lead to a generalized class of belief propagation
approximations. Thus, we call a compensation satisfying Equation 1 a REC-BP approximation.
4.2 Intuitions: An Ideal Compensation
In the case where a single equivalence constraint Xi ≡ Xj is relaxed, we may imagine the possibility
of an "ideal" compensation where, as far as computing the MAP solution is concerned, a compensated model is as good as a model where the equivalence constraint was not relaxed. Consider then
the following proposal of an ideal compensation, which has the following two properties. First, it
has valid configurations:

  c-map(Xi = x) = c-map(Xj = x) = c-map(Xi = x, Xj = x)

for all states x. Second, it has scaled values for valid configurations:

  c-map(Xi = x, Xj = x) = α · map(Xi = x, Xj = x)

for all states x, and for some α > 1. If a compensation has valid configurations, then its optimal
solution sets variables Xi and Xj to the same state, and is thus a valid assignment for the original instance (it satisfies the equivalence constraint). Moreover, if it has scaled values, then the
compensation further allows us to recover the MAP value as well. A compensation having valid
configurations and scaled values is thus ideal, as it is sufficient for us to recover the exact solution.
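To make the sufficiency argument explicit, the two properties combine as follows (a short derivation in our own notation, not reproduced from the paper):

```latex
\begin{align*}
\text{c-map}^{*}
  &= \max_x \, \text{c-map}(X_i = x,\, X_j = x)
     && \text{(valid configurations)} \\
  &= \max_x \, \alpha \cdot \text{map}(X_i = x,\, X_j = x)
     && \text{(scaled values)} \\
  &= \alpha \cdot \text{map}^{*}.
\end{align*}
% Hence map* = c-map*/alpha, and any maximizer of the compensation is a
% valid assignment that is optimal for the original model M.
```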
It may not always be possible to find parameters that lead to an ideal compensation. However, we
propose that a compensation's parameters should satisfy:

  c-map(Xi = x) = c-map(Xj = x) = 2 · [θ(Xi = x) + θ(Xj = x)]        (2)

for all states x, where we choose α = 2. As the following proposition tells us, if a compensation is
an ideal one, then it must at least satisfy Equation 2.

Proposition 2 Let map(.) denote the MAP values of a model M, and let c-map(.) denote the MAP
values of a compensation that results from relaxing an equivalence constraint Xi ≡ Xj in M. If
c-map(.) has valid configurations and scaled values, then c-map(.) satisfies Equation 2.

We thus call a compensation satisfying Equation 2 a REC-I compensation.
We note that other values of α > 1 can be used, but the choice α = 2 given above results in simpler semantics. In particular, if a compensation happens to satisfy c-map(Xi = x) =
c-map(Xj = x) = c-map(Xi = x, Xj = x) for some state x, we have that θ(Xi = x) + θ(Xj = x) =
map(Xi = x, Xj = x) (i.e., the parameters alone can recover an original MAP value).
Before we discuss the general case where we relax multiple equivalence constraints, we highlight
first a few properties shared by both REC-BP and REC-I compensations, that shall follow from more
general results that we shall present. First, if the optimal assignment x* for a compensation sets the
variables Xi and Xj to the same state, then: (1) the assignment x* is also optimal for the original
model M; and (2) ½ c-map* = map*. In the case where x* does not set variables Xi and Xj to the
same state, the value c-map* gives at least an upper bound that is no worse than the bound given by
the relaxation alone. In particular:

  map* ≤ ½ c-map* ≤ r-map*.

Thus, at least in the case where a single equivalence constraint is relaxed, the compensations implied
by Equations 1 and 2 do indeed tighten a relaxation (see the auxiliary Appendix for further details).
4.3 General Properties
In this section, we identify the conditions that compensations should satisfy in the more general case
where multiple equivalence constraints are relaxed, and further highlight some of their properties.
Suppose that k equivalence constraints Xi ≡ Xj are relaxed from a given model M. Then compensations REC-BP and REC-I seek to recover into the relaxation two weaker notions of equivalence.
First, a REC-BP compensation has auxiliary parameters satisfying:

  c-map(Xi = x) = c-map(Xj = x) = θ(Xi = x) + θ(Xj = x) + γ        (3)

where γ = k/(1+k) · c-map*. We then approximate the exact MAP value map* by the value
1/(1+k) · c-map*.
The following theorem relates REC-BP to max-product belief propagation.

Theorem 1 Let map(.) denote the MAP values of a model M, and let c-map(.) denote the MAP
values of a compensation that results from relaxing enough equivalence constraints Xi ≡ Xj in M
to render it fully disconnected. Then a compensation whose parameters satisfy Equation 3 has
values exp{c-map(Xi = x)} that correspond to the max-marginals of a fixed point of max-product
belief propagation run on M, and vice versa.

Loopy max-product belief propagation is thus the degenerate case of a REC-BP compensation, when
the approximation is fully disconnected (by deleting every factor graph edge, as defined in Section 3). Approximations need not be this extreme, and more structured approximations correspond
to instances in the more general class of iterative joingraph propagation approximations [8, 6].
Next, a REC-I compensation has parameters satisfying:

  c-map(Xi = x) = c-map(Xj = x) = (1 + k) · [θ(Xi = x) + θ(Xj = x)]        (4)

We again approximate the exact MAP value map* with the value 1/(1+k) · c-map*.
In both compensations, it is possible to determine if the optimal assignment x* of a compensation is
an optimal assignment for the original model M: we need only check that it is a valid assignment.

Theorem 2 Let map(.) denote the MAP values of a model M, and let c-map(.) denote the MAP
values of a compensation that results from relaxing k equivalence constraints Xi ≡ Xj. If the compensation has parameters satisfying either Eqs. 3 or 4, and if x* is an optimal assignment for the
compensation that is also valid, then: (1) x* is optimal for the model M, and (2) 1/(1+k) · c-map* = map*.
This result is analogous to results for max-product BP, TRW-BP, and related algorithms [9, 2, 10].
A REC-I compensation has additional properties over a REC-BP compensation. First, a REC-I compensation yields upper bounds on the MAP value, whereas REC-BP does not yield a bound in general.

Theorem 3 Let map(.) denote the MAP values of a model M, and let c-map(.) denote the MAP
values of a compensation that results from relaxing k equivalence constraints Xi ≡ Xj. If the compensation has parameters satisfying Equation 4, then map* ≤ 1/(1+k) · c-map*.
We remark now that a relaxation alone has analogous properties. If an assignment x* is optimal
for a relaxation with MAP values r-map(.), and it is also a valid assignment for a model M (i.e.,
it does not violate the equivalence constraints Xi ≡ Xj), then x* is also optimal for M, where
r-map(x*) = map(x*) (since they are composed of the same factor values). If an assignment x* of
a relaxation is not valid for model M, then the MAP value of the relaxation is an upper bound on
the original MAP value. On the other hand, REC-I compensations are tighter approximations than
the corresponding relaxation, at least in the case when a single equivalence constraint is relaxed:
map* ≤ ½ c-map* ≤ r-map*. When we relax multiple equivalence constraints we find, at least
empirically, that REC-I bounds are never worse than relaxations, although we leave this point open.
The following theorem has implications for MAP solvers that rely on relaxations for upper bounds.

Theorem 4 Let map(.) denote the MAP values of a model M, and let c-map(.) denote the MAP values of a compensation that results from relaxing k equivalence constraints Xi ≡ Xj. If the compensation has parameters satisfying Eq. 4, and if z is a partial assignment that sets the same state to variables Xi and Xj, for any equivalence constraint Xi ≡ Xj relaxed, then: map(z) ≤ 1/(1+k) · c-map(z).

Algorithms, such as those in [3, 4], perform a depth-first branch-and-bound search to find an optimal
MAP solution. They rely on upper bounds of a MAP solution, under partial assignments, in order to
prune the search space. Thus, any method capable of providing upper bounds tighter than those of a
relaxation can potentially have an impact on the performance of a branch-and-bound MAP solver.
Algorithm 1 RelaxEq-and-Compensate (REC)
input: a model M with k equivalence constraints Xi ≡ Xj
output: a compensation M′
main:
 1: M′0 ← result of relaxing all Xi ≡ Xj in M
 2: add to M′0 the factors θ(Xi), θ(Xj), for each Xi ≡ Xj
 3: initialize all parameters θ0(Xi = x), θ0(Xj = x), e.g., to ½ r-map*
 4: t ← 0
 5: while parameters have not converged do
 6:   t ← t + 1
 7:   for each equivalence constraint Xi ≡ Xj do
 8:     update parameters θt(Xi = x), θt(Xj = x), computed using compensation M′t−1, by:
 9:       for REC-BP: Equations 5 & 6
10:       for REC-I: Equations 7 & 8
11:     θt(Xi) ← q · θt(Xi) + (1 − q) · θt−1(Xi)  and  θt(Xj) ← q · θt(Xj) + (1 − q) · θt−1(Xj)
12: return M′t
5 An Algorithm to Find Compensations
Up to this point, we have not discussed how to actually find the auxiliary parameters θ(Xi = x) and
θ(Xj = x) of a compensation. However, Equations 3 and 4 naturally suggest iterative algorithms for
finding REC-BP and REC-I compensations. Consider, for the case of REC-BP, the fact that parameters
satisfy Equation 3 iff they satisfy:

  θ(Xi = x) = c-map(Xj = x) − θ(Xj = x) − γ
  θ(Xj = x) = c-map(Xi = x) − θ(Xi = x) − γ

This suggests an iterative fixed-point procedure for finding the parameters of a compensation that
satisfy Equation 3. First, we start with an initial compensation with MAP values c-map0(.), where
parameters have been initialized to some value. For an iteration t > 0, we can update our parameters
using the compensation from the previous iteration:

  θt(Xi = x) = c-mapt−1(Xj = x) − θt−1(Xj = x) − γt−1        (5)
  θt(Xj = x) = c-mapt−1(Xi = x) − θt−1(Xi = x) − γt−1        (6)
where γt−1 = k/(1+k) · c-map*t−1. If at some point the parameters of one iteration do not change in
the next, then we can say that the iterations have converged, and that the compensation satisfies
Equation 3. Similarly, for REC-I compensations, we use the update equations:

  θt(Xi = x) = 1/(1+k) · c-mapt−1(Xj = x) − θt−1(Xj = x)        (7)
  θt(Xj = x) = 1/(1+k) · c-mapt−1(Xi = x) − θt−1(Xi = x)        (8)
to identify compensations that satisfy Equation 4.
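As a concrete sketch (our own illustration, not the authors' code), the REC-I updates can be run by brute force on a tiny chain A–B–C whose single relaxed constraint B ≡ Bh splits the model in two; factor values are invented:

```python
import itertools
import math

# Toy chain A - B - C; relaxing B == Bh (clone of B inside psi2) gives k = 1.
psi1 = {(0, 0): 0.9, (0, 1): 0.1, (1, 0): 0.4, (1, 1): 0.6}
psi2 = {(0, 0): 0.2, (0, 1): 0.3, (1, 0): 0.9, (1, 1): 0.1}
k = 1

def c_map(theta_b, theta_bh, clamp_b=None, clamp_bh=None):
    # MAP value of the compensated relaxation, optionally clamping B or Bh.
    best = -math.inf
    for a, b, bh, c in itertools.product([0, 1], repeat=4):
        if clamp_b is not None and b != clamp_b:
            continue
        if clamp_bh is not None and bh != clamp_bh:
            continue
        best = max(best, math.log(psi1[a, b]) + math.log(psi2[bh, c])
                   + theta_b[b] + theta_bh[bh])
    return best

zero = {0: 0.0, 1: 0.0}
r_map_star = c_map(zero, zero)                    # the relaxation's MAP value
theta_b = {x: 0.5 * r_map_star for x in (0, 1)}   # Line 3 of Algorithm 1
theta_bh = dict(theta_b)

q = 0.5  # damping, Line 11 of Algorithm 1
for _ in range(500):
    # Both updates use the previous iteration's compensation.
    new_b = {x: c_map(theta_b, theta_bh, clamp_bh=x) / (1 + k) - theta_bh[x]
             for x in (0, 1)}                     # Equation 7
    new_bh = {x: c_map(theta_b, theta_bh, clamp_b=x) / (1 + k) - theta_b[x]
              for x in (0, 1)}                    # Equation 8
    theta_b = {x: q * new_b[x] + (1 - q) * theta_b[x] for x in (0, 1)}
    theta_bh = {x: q * new_bh[x] + (1 - q) * theta_bh[x] for x in (0, 1)}

bound = c_map(theta_b, theta_bh) / (1 + k)        # approximates map*
map_star = max(math.log(psi1[a, b]) + math.log(psi2[b, c])
               for a, b, c in itertools.product([0, 1], repeat=3))
assert map_star <= bound + 1e-6                   # Theorem 3: an upper bound
```

On this example the bound decreases monotonically from r-map* and, because the relaxation splits the model in two, converges all the way down to map* itself; in general Theorem 3 only guarantees an upper bound.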
Algorithm 1 summarizes our proposal to compensate for a relaxation, using the iterative procedures
for REC-BP and REC-I. We refer to this algorithm more generically as RelaxEq-and-Compensate
(REC). Note that in Line 11, we further damp the updates by q, which is typical for such algorithms
(we use q = ½). Note also that in Line 3, we suggest that we initialize parameters by ½ r-map*. The
consequence of this is that our initial compensation has the MAP value 1/(1+k) · c-map*0 = r-map*.¹ That
is, the initial compensation is equivalent to the relaxation, for both REC-BP and REC-I. Typically,
both algorithms tend to have compensations with decreasing MAP values. REC-BP may eventually
have MAP values that oscillate, however, and may not converge. On the other hand, by Theorem 3,
we know that a REC-I compensation must yield an upper bound on the true MAP value map*.
Starting with an initial upper bound r-map* from the relaxation, REC-I yields, at least empirically,
monotonically decreasing upper bounds on the true MAP value from iteration to iteration. We
explore this point further in the following section.
¹ c-map*0 = max_x c-map0(x) = max_x [r-map(x) + Σ_{Xi≡Xj} θ(Xi = x) + θ(Xj = x)]
= max_x [r-map(x) + k · r-map*] = r-map* + k · r-map* = (1 + k) · r-map*.
[Figure 1: six panels plotting approximation error (y-axis, 0.0 to 1.0) against iterations (x-axis, 0 to 5000); panel titles include "random grid (REC-BP)", "random grid (REC-I)", "random frustrated grid (REC-BP)", and "random frustrated grid (REC-I)".]
Figure 1: The REC algorithm in 10 × 10 grids. Left column: random grids, using REC-BP (top) and
REC-I (bottom). Center column: frustrated grids, using REC-I with p = ½ (top), p = ⅓ (bottom).
Right column: frustrated grids, using REC-BP (top) with a fully disconnected relaxation, and REC-I
(bottom) with a relaxation with max cluster size 3.
6 Experiments
Our goal in this section is to highlight the degree to which different types of compensations can
tighten a relaxation, as well as to highlight the differences in the iterative algorithms to find them.
We evaluated our compensations using randomly parametrized 10 × 10 grid networks. We judge the
quality of an approximation by the degree to which a compensation is able to improve a relaxation.
In particular, we measured the error

  E = [ 1/(1+k) · c-map* − map* ] / [ r-map* − map* ],

which is zero when the compensation is exact, and one when the compensation is equivalent to the
relaxation (remember that we initialize the REC algorithm, for both types of compensations, with
parameters that led to an initial compensation with an optimal MAP value 1/(1+k) · c-map*0 = r-map*).
Note also that we use no instances where the error E is undefined, i.e., r-map* − map* = 0, where the
relaxation alone was able to recover the exact solution.
We first consider grid networks where factors ψa(xi, xj) were assigned to grid edges (i, j), with
values drawn uniformly at random from 0 to 1 (we assigned no factors to nodes). We assumed first
the coarsest possible relaxation, one that results in a fully disconnected approximation, and where
the MAP value is found by maximizing factors independently.² We expect a relaxation's upper
bound to be quite loose in this case.
Consider first Figure 1 (left), where we generated ten random grid networks (we plotted only ten
for clarity) and plotted the compensation errors (y-axis) as they evolved over iterations (x-axis). At
iteration 0, the MAP value of each compensation is equivalent to that of the relaxation (by design).
We see that, once we start iterating, both methods of compensation can tighten the approximation
of our very coarse relaxation. For REC-BP, we do so relatively quickly (in fewer iterations), and to
exact or near-exact levels (note that the 10 instances plotted behave similarly). For REC-I,
convergence is slower, but the compensation is still a significant improvement over the relaxation.
Moreover, it is apparent that further iterations would benefit the compensation further.
We next generated random grid networks with frustrated interactions. In particular, each edge was
given either an attractive factor or a repulsive factor, at random each with probability ½. An attractive
factor ψa(Xi, Xj) was given a value at random from 1 − p to 1 if xi = xj and a value from 0 to
p if xi ≠ xj, which favors configurations xi = xj when p ≤ ½. Similarly for repulsive factors,
which favor instead configurations where xi ≠ xj. It is well known that belief propagation tends to
not converge in networks with frustrated interactions [11]. Non-convergence is the primary failure
mode for belief propagation, and in such cases, we may try to use instead REC-I. We generated 10
random grid networks with p = ½ and another 10 networks with p = ⅓. Although the frustration
in these networks is relatively mild, REC-BP did not converge in any of these cases. On the other
hand, REC-I compensations were relatively well behaved, and produced monotonically decreasing
upper bounds on the MAP value; see Figure 1 (center). Although the degree of compensation is not
as dramatic, we note that we are compensating for a very coarse relaxation (fully disconnected).

² For each factor ψa and for each variable X in ψa, we replaced variable X with a unique clone X̂ and
introduced the equivalence constraint X ≡ X̂. When we then relax all equivalence constraints, the resulting
factor graph is fully disconnected. This corresponds to deleting all factor graph edges, as described in Section 3.
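The frustrated factors just described can be generated as follows (a sketch with invented function names):

```python
import random

# An attractive factor draws values in [1-p, 1] for equal states and [0, p]
# for unequal ones; a repulsive factor swaps the two ranges.
def make_factor(p, rng, attractive):
    same = lambda: rng.uniform(1 - p, 1.0)
    diff = lambda: rng.uniform(0.0, p)
    hi, lo = (same, diff) if attractive else (diff, same)
    return {(xi, xj): (hi() if xi == xj else lo())
            for xi in (0, 1) for xj in (0, 1)}

rng = random.Random(0)
p = 1 / 3
factor = make_factor(p, rng, attractive=True)
# With p <= 1/2, equal configurations always receive the larger values:
assert min(factor[0, 0], factor[1, 1]) > max(factor[0, 1], factor[1, 0])
```

For an actual grid, one such factor would be drawn per edge, choosing attractive versus repulsive with probability ½ each.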
In Figure 1 (right), we considered frustrated grid networks where p = 1/10, where REC-BP converged
in only one of the 10 networks generated. Moreover, we can see in that one instance that REC-BP converges
below the true MAP value; remember that, by Theorem 3, REC-I compensations always yield upper
bounds. In the case of REC-I, the compensations did not improve significantly on the fully disconnected relaxations (not shown). It is, however, straightforward to try less extreme relaxations. For
example, we used the mini-buckets-based approach to relaxation proposed in [4], and identified relaxed models M′ with jointrees that had a maximum cluster size of 3 (cf. [12], which re-introduced
constraints over triples). Surprisingly, this was enough for REC-I to compensate for the relaxation
completely (to within 10⁻⁸) in 7 of the 10 instances plotted. REC-BP benefits from added structure
as well, converging and compensating completely (to within 10⁻⁴) in 9 of 10 instances (not plotted).
7 Discussion
There are two basic concepts underlying our proposed framework. The first is to relax a problem by
dropping equivalence constraints. The second is that of compensating for a relaxation in ways that
can capture existing algorithms as special cases, and in ways that allow us to design new algorithms.
The idea of using structural relaxations for upper-bounding MAP solutions in probabilistic graphical
models goes back to mini-bucket approximations [13], which can be considered to be a particular
way of relaxing equivalence constraints from a model [4]. In this paper, we propose further a way
to compensate for these relaxations, by restoring a weaker notion of equivalence. One approach to
compensation identified a generalized class of max-product belief propagation approximations. We
then identified a second approach that led to another class of approximations that we have observed
to yield tighter upper bounds on MAP solutions as compared to a relaxation alone.
An orthogonal approach to upper-bounding MAP solutions is based on linear programming (LP)
relaxations, which has seen significant interest in recent years [1, 2]. This perspective is based on
formulating MAP problems as integer programs, whose solutions are upper-bounded by tractable LP
relaxations. A related approach based on Lagrangian relaxations is further capable of incorporating
structural simplifications [14]. Indeed, there has been significant interest in identifying a precise
connection between belief propagation and LP relaxations [2, 10].
In contrast to the above approaches, compensations further guarantee, in Theorem 4, upper bounds
on MAP solutions under any partial assignment (without rerunning the algorithm). This property
has the potential to impact algorithms, such as [3, 4], that rely on such upper bounds, under partial
assignments, to perform a branch-and-bound search for optimal MAP solutions.³ Further, as we
approximate MAP by computing it exactly in a compensated model, we avoid a difficulty faced by
max-product BP and related algorithms, which infer MAP assignments using max-marginals (which
may not have unique maximal states) and are thus based on local information only [1]. The perspective that we propose further allows us to identify the intuitive differences between belief propagation
and an upper-bound approximation, namely that they arise from different notions of compensation.
We hope that this perspective will enable the design of new approximations, especially in domains
where specific notions of compensation may suggest themselves.
Acknowledgments
This work has been partially supported by NSF grant #IIS-0916161.
³ We investigated the use of REC-I approximations in depth-first branch-and-bound search for solving
weighted Max-SAT problems, where we were able to use a more specialized iterative algorithm [15].
References
[1] Martin J. Wainwright, Tommi Jaakkola, and Alan S. Willsky. MAP estimation via agreement
on trees: message-passing and linear programming. IEEE Transactions on Information Theory,
51(11):3697–3717, 2005.
[2] Amir Globerson and Tommi Jaakkola. Fixing max-product: Convergent message passing algorithms for MAP LP-relaxations. In NIPS, pages 553–560, 2008.
[3] Radu Marinescu, Kalev Kask, and Rina Dechter. Systematic vs. non-systematic algorithms for
solving the MPE task. In UAI, pages 394–402, 2003.
[4] Arthur Choi, Mark Chavira, and Adnan Darwiche. Node splitting: A scheme for generating
upper bounds in Bayesian networks. In UAI, pages 57–66, 2007.
[5] Rina Dechter and Irina Rish. Mini-buckets: A general scheme for bounded inference. J. ACM,
50(2):107–153, 2003.
[6] Arthur Choi and Adnan Darwiche. An edge deletion semantics for belief propagation and its
practical impact on approximation quality. In AAAI, pages 1107–1114, 2006.
[7] Arthur Choi and Adnan Darwiche. Approximating the partition function by deleting and then
correcting for model edges. In UAI, pages 79–87, 2008.
[8] Rina Dechter, Kalev Kask, and Robert Mateescu. Iterative join-graph propagation. In UAI,
pages 128–136, 2002.
[9] Martin J. Wainwright, Tommi Jaakkola, and Alan S. Willsky. Tree consistency and bounds on
the performance of the max-product algorithm and its generalizations. Statistics and Computing, 14:143–166, 2004.
[10] Yair Weiss, Chen Yanover, and Talya Meltzer. MAP estimation, linear programming and belief
propagation with convex free energies. In UAI, 2007.
[11] Gal Elidan, Ian McGraw, and Daphne Koller. Residual belief propagation: Informed scheduling for asynchronous message passing. In UAI, 2006.
[12] David Sontag, Talya Meltzer, Amir Globerson, Tommi Jaakkola, and Yair Weiss. Tightening
LP relaxations for MAP using message passing. In UAI, pages 503–510, 2008.
[13] Rina Dechter. Mini-buckets: a general scheme for approximation in automated reasoning.
In Proc. International Joint Conference on Artificial Intelligence (IJCAI), pages 1297–1302,
1997.
[14] Jason K. Johnson, Dmitry M. Malioutov, and Alan S. Willsky. Lagrangian relaxation for
MAP estimation in graphical models. In Proceedings of the 45th Allerton Conference on
Communication, Control and Computing, pages 672–681, 2007.
[15] Arthur Choi, Trevor Standley, and Adnan Darwiche. Approximating weighted Max-SAT problems by compensating for relaxations. In Proceedings of the 15th International Conference on
Principles and Practice of Constraint Programming (CP), pages 211–225, 2009.
Fast, smooth and adaptive regression in metric spaces
Samory Kpotufe
UCSD CSE
Abstract
It was recently shown that certain nonparametric regressors can escape the curse
of dimensionality when the intrinsic dimension of data is low ([1, 2]). We prove
some stronger results in more general settings. In particular, we consider a regressor which, by combining aspects of both tree-based regression and kernel regression, adapts to intrinsic dimension, operates on general metrics, yields a smooth
function, and evaluates in time O(log n). We derive a tight convergence rate of
the form $n^{-2/(2+d)}$ where d is the Assouad dimension of the input space.
1 Introduction
Relative to parametric methods, nonparametric regressors require few structural assumptions on the
function being learned. However, their performance tends to deteriorate as the number of features
increases. This so-called curse of dimensionality is quantified by various lower bounds on the convergence rates of the form $n^{-2/(2+D)}$ for data in $\mathbb{R}^D$ (see e.g. [3, 4]). In other words, one might
require a data size exponential in D in order to attain a low risk.
Fortunately, it is often the case that data in $\mathbb{R}^D$ has low intrinsic complexity, e.g. the data is near a manifold or is sparse, and we hope to exploit such situations. One simple approach, termed manifold
learning (e.g. [5, 6, 7]), is to embed the data into a lower dimensional space where the regressor
might work well. A recent approach with theoretical guarantees for nonparametric regression is
the study of adaptive procedures, i.e. ones that operate in $\mathbb{R}^D$ but attain convergence rates that
depend just on the intrinsic dimension of data. An initial result [1] shows that for data on a d-dimensional manifold, the asymptotic risk at a point $x \in \mathbb{R}^D$ depends just on d and on the behavior
of the distribution in a neighborhood of x. Later, [2] showed that a regressor based on the RPtree
of [8] (a hierarchical partitioning procedure) is not only fast to evaluate, but is adaptive to Assouad
dimension, a measure which captures notions such as manifold dimension and data sparsity. The
related notion of box dimension (see e.g. [9]) was shown in an earlier work [10] to control the risk
of nearest neighbor regression, although adaptivity was not a subject of that result.
This work extends the applicability of such adaptivity results to more general uses of nonparametric
regression. In particular, we present an adaptive regressor which, unlike RPtree, operates on a
general metric space where only distances are provided, and yields a smooth function, an important
property in many domains (see e.g. [11] which considers the smooth control of a robotic tool based
on noisy outside input). In addition, our regressor can be evaluated in time just O(log n), unlike
kernel or nearest neighbor regression. The evaluation time for these two forms of regression is
lower bounded by the number of sample points contributing to the regression estimate. For nearest
neighbor regression, this number is given by a parameter $k_n$ whose optimal setting (see [12]) is
$O(n^{2/(2+d)})$. For kernel regression, given an optimal bandwidth $h \propto n^{-1/(2+d)}$ (see [12]), we
would expect about $n h^d \propto n^{2/(2+d)}$ points in the ball $B(x, h)$ around a query point x.
We note that there exist many heuristics for speeding up kernel regression, which generally combine
fast proximity search procedures with other elaborate methods for approximating the kernel weights
(see e.g. [13, 14, 15]). There are no rigorous bounds on either the achievable speedup or the risk of
the resulting regressor.
Figure 1: Left and Middle- Two r-nets at different scales r, each net inducing a partition of the sample X. In
each case, the gray points are the r-net centers. For regression each center contributes the average Y value of
the data points assigned to them (points in the cells). Right- Given an r-net and a bandwidth h, a kernel around
a query point x weights the Y -contribution of each center to the regression estimate for x.
Our regressor integrates aspects of both tree-based regression and kernel regression. It constructs
partitions of the input dataset $X = \{X_i\}_1^n$, and uses a kernel to select a few sets within a given
partition, each set contributing its average output Y value to the estimate. We show that such a
regressor achieves an excess risk of $O(n^{-2/(2+d)})$, where d is the Assouad dimension of the input
data space. This is a tighter convergence rate than the $O(n^{-2/(2+O(d \log d))})$ of RPtree regression
(see [2]). Finally, the evaluation time of $O(\log n)$ is arrived at by modifying the cover tree proximity
search procedure of [16]. Unlike in [16], this guarantee requires no growth assumption on the data
distribution.
We'll now proceed with a more detailed presentation of the results in the next section, followed by
technical details in sections 3 and 4.
2 Detailed overview of results

We're given i.i.d. training data $(\mathbf{X}, \mathbf{Y}) = \{(X_i, Y_i)\}_1^n$, where the input variable X belongs to a metric
space $\mathcal{X}$ where the distance between points is given by the metric $\rho$, and the output Y belongs to a
subset $\mathcal{Y}$ of some Euclidean space. We'll let $\Delta_X$ and $\Delta_Y$ denote the diameters of $\mathcal{X}$ and $\mathcal{Y}$.
Assouad dimension: The Assouad or doubling dimension of $\mathcal{X}$ is defined as the smallest d such
that any ball can be covered by $2^d$ balls of half its radius.
Examples: A d-dimensional affine subspace of a Euclidean space $\mathbb{R}^D$ has Assouad dimension O(d)
[9]. A d-dimensional submanifold of a Euclidean space $\mathbb{R}^D$ has Assouad dimension O(d) subject
to a bound on its curvature [8]. A d-sparse data space in $\mathbb{R}^D$, i.e. one where each data point has at
most d nonzero coordinates, has Assouad dimension $O(d \log D)$ [8, 2].
The algorithm has no knowledge of the dimension d, nor of $\Delta_Y$, although we assume $\Delta_X$ is known
(or can be upper-bounded).
Regression function: We assume the regression function $f(x) \doteq \mathbb{E}[Y \mid X = x]$ is Lipschitz, i.e.
there exists $\lambda$, unknown, such that for all $x, x' \in \mathcal{X}$, $\|f(x) - f(x')\| \le \lambda \cdot \rho(x, x')$.

Excess risk: Our performance criterion for a regressor $f_n(x)$ is the integrated excess $l_2$ risk:

$$\|f_n - f\|^2 \doteq \mathbb{E}_X \|f_n(X) - f(X)\|^2 = \mathbb{E}_{X,Y} \|f_n(X) - Y\|^2 - \mathbb{E}_{X,Y} \|f(X) - Y\|^2. \quad (1)$$

2.1 Algorithm overview
We'll consider a set of partitions of the data induced by a hierarchy of r-nets of $\mathbf{X}$. Here an r-net $Q_r$
is understood to be both an r-cover of $\mathbf{X}$ (all points in $\mathbf{X}$ are within r of some point in $Q_r$), and an
r-packing (the points in $Q_r$ are at least r apart). The details on how to build the r-nets are covered
in section 4. For now, we'll consider a class of regressors defined over these nets (as illustrated in
Figure 1), and we'll describe how to select a good regressor out of this class.

Partitions of X: The r-nets are denoted by $Q_r$, $r \in \{\Delta_X/2^i\}_0^{I+2}$, where $I \doteq \lceil \log n \rceil$, and
$Q_r \subset \mathbf{X}$. Each $Q \in \{Q_r, r \in \{\Delta_X/2^i\}_0^{I+2}\}$ induces a partition $\{X(q), q \in Q\}$ of $\mathbf{X}$, where
$X(q)$ designates all those points in $\mathbf{X}$ whose closest point in Q is q. We set $n_q \doteq |X(q)|$, and
$\bar{Y}_q \doteq \frac{1}{n_q} \sum_{i: X_i \in X(q)} Y_i$.
Admissible kernels: We assume that K(u) is a non-increasing function of $u \in [0, \infty)$; K is
positive on $u \in [0, 1)$, maximal at u = 0, and vanishes for $u \ge 1$. To simplify notation, we'll often
let K(x, q, h) denote $K(\rho(x, q)/h)$.

Regressors: For each $Q \in \{Q_r, r \in \{\Delta_X/2^i\}_0^{I+2}\}$, and given a bandwidth h, we define the following regressor:

$$f_{n,Q}(x) = \sum_{q \in Q} w_q(x) \bar{Y}_q, \quad \text{where } w_q = \frac{n_q (K(x, q, h) + \epsilon)}{\sum_{q' \in Q} n_{q'} (K(x, q', h) + \epsilon)}. \quad (2)$$

The positive constant $\epsilon$ ensures that the estimate remains well defined when K(x, q, h) = 0. We
assume $\epsilon \le K(1/2)/n^2$. We can view $(K(\cdot) + \epsilon)$ as the effective kernel, which never vanishes. It is
clear that the learned function $f_{n,Q}$ inherits any degree of smoothness from the kernel function K,
i.e. if K is of class $C^k$, then so is $f_{n,Q}$.
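To make Eq. (2) concrete, here is a minimal sketch of the estimate over one net. The triangle kernel and all names are illustrative assumptions for the sketch, not the paper's code:

```python
import numpy as np

def net_regressor(centers, counts, ybar, x, h, eps=1e-12):
    """Evaluate the kernel-weighted r-net regressor f_{n,Q}(x) of Eq. (2).

    centers: (m, D) array of net centers q in Q
    counts:  (m,) array of n_q, the number of sample points assigned to q
    ybar:    (m,) array of per-cell averages \\bar{Y}_q
    h:       bandwidth; eps plays the role of the small constant epsilon
    Uses the triangle kernel K(u) = max(0, 1 - u), one admissible choice.
    """
    u = np.linalg.norm(centers - x, axis=1) / h
    K = np.maximum(0.0, 1.0 - u)          # vanishes for u >= 1
    w = counts * (K + eps)                # unnormalized weights n_q (K + eps)
    return float(np.dot(w / w.sum(), ybar))
```

Centers outside $B(x, h)$ still receive the tiny weight $n_q \epsilon$, which keeps the estimate defined everywhere while leaving it dominated by the centers the kernel actually selects.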
Selecting the final regressor: For fixed n, $K(\cdot)$, and $\{Q_r, r \in \{\Delta_X/2^i\}_0^{I+2}\}$, equation (2) above
defines a class of regressors parameterized by $r \in \{\Delta_X/2^i\}_0^{I+2}$ and the bandwidth h. We want
to pick a good regressor out of this class. We can reduce the search space right away by noticing
that we need $r = \Theta(h)$: if $r \gg h$ then $B(x, h) \cap Q_r$ is empty for most x since the points in $Q_r$
are over r apart, and if $r \ll h$ then $B(x, h) \cap Q_r$ might contain a lot of points, thus increasing
evaluation time. So for each choice of h, we will set r = h/4, which will yield good guarantees on
computational and prediction performance. The final regressor is selected as follows.

Draw a new sample $(\mathbf{X}', \mathbf{Y}')$ of size n. As before let $I \doteq \lceil \log n \rceil$, and define $H \doteq \{\Delta_X/2^i\}_0^I$. For
every $h \in H$, pick the r-net $Q_{h/4}$ and test $f_{n,Q_{h/4}}$ on $(\mathbf{X}', \mathbf{Y}')$; let the empirical risk be minimized
at $h_o$, i.e. $h_o = \operatorname{argmin}_{h \in H} \frac{1}{n} \sum_{i=1}^n \left\| f_{n,Q_{h/4}}(X_i') - Y_i' \right\|^2$. Return $f_{n,Q_{h_o/4}}$ as the final regressor.
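The selection step is empirical risk minimization over a dyadic grid of bandwidths; a minimal sketch (the `fit_at` callback and all names are hypothetical stand-ins for an implementation of Eq. (2) over an (h/4)-net):

```python
import numpy as np

def select_bandwidth(fit_at, X_val, Y_val, delta_X, n):
    """Pick h from the dyadic grid H = {delta_X / 2^i} by empirical risk on a
    held-out sample (X_val, Y_val), mirroring the model-selection step above.

    fit_at(h, x) should evaluate the regressor f_{n, Q_{h/4}} at x.
    """
    I = int(np.ceil(np.log(n)))
    H = [delta_X / 2**i for i in range(I + 1)]

    def risk(h):
        preds = np.array([fit_at(h, x) for x in X_val])
        return float(np.mean((preds - Y_val) ** 2))

    return min(H, key=risk)  # h_o, the empirical-risk minimizer over H
```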
Fast evaluation: Each regressor $f_{n,Q_{h/4}}(x)$ can be estimated quickly at points x by traversing
(nested) r-nets as described in detail in section 4.
2.2 Computational and prediction performance

The cover property ensures that for some h, $Q_{h/4}$ is a good summary of local information (for
prediction performance), while the packing property ensures that few points in $Q_{h/4}$ fall in $B(x, h)$
(for fast evaluation). We have the following main result.
Theorem 1. Let d be the Assouad dimension of $\mathcal{X}$ and let $n \ge \max\left(9, \left(\frac{\Delta_Y}{\lambda \Delta_X}\right)^2, \left(\frac{\lambda \Delta_X}{\Delta_Y}\right)^2\right)$.

(a) The final regressor selected satisfies

$$\mathbb{E} \left\| f_{n,Q_{h_o/4}} - f \right\|^2 \le C (\lambda \Delta_X)^{2d/(2+d)} \left( \frac{\Delta_Y^2}{n} \right)^{2/(2+d)} + \frac{3 \Delta_Y^2 \ln(n \log n)}{n},$$

where C depends on the Assouad dimension d and on K(0)/K(1/2).

(b) $f_{n,Q_{h_o/4}}(x)$ can be computed in time $C' \log n$, where $C'$ depends just on d.

Part (a) of Theorem 1 is given by Corollary 1 of section 3, and does not depend on how the r-nets
are built; part (b) follows from Lemma 4 of section 4, which specifies the nets.
3 Risk analysis
Throughout this section we assume $0 < h < \Delta_X$ and we let $Q = Q_{h/4}$. We'll bound the risk for
$f_{n,Q}$ for any fixed choice of h, and then show that the final $h_o$ selected yields a good risk. The results
in this section only require the fact that Q is a cover of the data and thus preserves local information,
while the packing property is needed in the next section for fast evaluation.

Define $\tilde{f}_{n,Q}(x) \doteq \mathbb{E}_{\mathbf{Y}|\mathbf{X}} f_{n,Q}(x)$, i.e. the conditional expectation of the estimate, for $\mathbf{X}$ fixed. We
have the following standard decomposition of the excess risk into variance and bias terms:

$$\forall x \in \mathcal{X}, \quad \mathbb{E}_{\mathbf{Y}|\mathbf{X}} \|f_{n,Q}(x) - f(x)\|^2 = \mathbb{E}_{\mathbf{Y}|\mathbf{X}} \left\| f_{n,Q}(x) - \tilde{f}_{n,Q}(x) \right\|^2 + \left\| \tilde{f}_{n,Q}(x) - f(x) \right\|^2. \quad (3)$$

We'll proceed by bounding each term separately in the following two lemmas, and then combining
these bounds in Lemma 3. We'll let $\mu$ denote the marginal measure over $\mathcal{X}$ and $\mu_n$ denote the
corresponding empirical measure.
Lemma 1 (Variance at x). Fix $\mathbf{X}$, and let Q be an $\frac{h}{4}$-net of $\mathbf{X}$, $0 < h < \Delta_X$. Consider $x \in \mathcal{X}$ such
that $\mathbf{X} \cap B(x, h/4) \ne \emptyset$. We have

$$\mathbb{E}_{\mathbf{Y}|\mathbf{X}} \left\| f_{n,Q}(x) - \tilde{f}_{n,Q}(x) \right\|^2 \le \frac{2 K(0) \Delta_Y^2}{K(1/2) \cdot n \mu_n(B(x, h/4))}.$$

Proof. Remember that for independent random vectors $v_i$ with expectation 0, $\mathbb{E}\|\sum_i v_i\|^2 = \sum_i \mathbb{E}\|v_i\|^2$. We apply this fact twice in the inequalities below, given that, conditioned on $\mathbf{X}$ and
$Q \subset \mathbf{X}$, the $Y_i$ values are mutually independent and so are the $\bar{Y}_q$ values. We have

$$\mathbb{E}_{\mathbf{Y}|\mathbf{X}} \left\| f_{n,Q}(x) - \tilde{f}_{n,Q}(x) \right\|^2 = \mathbb{E}_{\mathbf{Y}|\mathbf{X}} \left\| \sum_{q \in Q} w_q(x) \left( \bar{Y}_q - \mathbb{E}_{\mathbf{Y}|\mathbf{X}} \bar{Y}_q \right) \right\|^2 = \sum_{q \in Q} w_q^2(x)\, \mathbb{E}_{\mathbf{Y}|\mathbf{X}} \left\| \bar{Y}_q - \mathbb{E}_{\mathbf{Y}|\mathbf{X}} \bar{Y}_q \right\|^2$$

$$= \sum_{q \in Q} w_q^2(x)\, \mathbb{E}_{\mathbf{Y}|\mathbf{X}} \left\| \sum_{i: X_i \in X(q)} \frac{1}{n_q} \left( Y_i - \mathbb{E}_{\mathbf{Y}|\mathbf{X}} Y_i \right) \right\|^2 \le \sum_{q \in Q} w_q^2(x) \frac{\Delta_Y^2}{n_q} \le \left( \max_{q \in Q} \left\{ \frac{w_q(x)}{n_q} \right\} \right) \Delta_Y^2 \sum_{q \in Q} w_q$$

$$= \max_{q \in Q} \left\{ \frac{w_q(x)}{n_q} \right\} \Delta_Y^2 = \max_{q \in Q} \frac{(K(x, q, h) + \epsilon) \Delta_Y^2}{\sum_{q' \in Q} n_{q'} (K(x, q', h) + \epsilon)} \le \frac{2 K(0) \Delta_Y^2}{\sum_{q \in Q} n_q K(x, q, h)}. \quad (4)$$

To bound the fraction in (4), we lower-bound the denominator as:

$$\sum_{q \in Q} n_q K(x, q, h) \ge \sum_{q: \rho(x,q) \le h/2} n_q K(x, q, h) \ge K(1/2) \sum_{q: \rho(x,q) \le h/2} n_q \ge K(1/2) \cdot n \mu_n(B(x, h/4)).$$

The last inequality follows by remarking that, since Q is an $\frac{h}{4}$-cover of $\mathbf{X}$, the ball $B(x, h/4)$ can
only contain points from $\bigcup_{q: \rho(x,q) \le h/2} X(q)$. Plug this last inequality into (4) and conclude.
Lemma 2 (Bias at x). As before, fix $\mathbf{X}$, and let Q be an $\frac{h}{4}$-net of $\mathbf{X}$, $0 < h < \Delta_X$. Consider $x \in \mathcal{X}$
such that $\mathbf{X} \cap B(x, h/4) \ne \emptyset$. We have

$$\left\| \tilde{f}_{n,Q}(x) - f(x) \right\|^2 \le 2 \lambda^2 h^2 + \frac{\Delta_Y^2}{n}.$$

Proof. We have

$$\left\| \tilde{f}_{n,Q}(x) - f(x) \right\|^2 = \left\| \sum_{q \in Q} \frac{w_q(x)}{n_q} \sum_{X_i \in X(q)} (f(X_i) - f(x)) \right\|^2 \le \sum_{q \in Q} \frac{w_q(x)}{n_q} \sum_{X_i \in X(q)} \|f(X_i) - f(x)\|^2,$$

where we just applied Jensen's inequality on the norm square. We bound the r.h.s. by breaking the
summation over two subsets of Q as follows. For the centers near x,

$$\sum_{q: \rho(x,q) < h} \frac{w_q(x)}{n_q} \sum_{X_i \in X(q)} \|f(X_i) - f(x)\|^2 \le \sum_{q: \rho(x,q) < h} \frac{w_q(x)}{n_q} \sum_{X_i \in X(q)} \lambda^2 \rho(X_i, x)^2$$
$$\le \sum_{q: \rho(x,q) < h} \frac{w_q(x)}{n_q} \sum_{X_i \in X(q)} \lambda^2 \left( \rho(x, q) + \rho(q, X_i) \right)^2 \le \frac{25}{16} \lambda^2 h^2 \le 2 \lambda^2 h^2.$$

Next, for the centers far from x, the kernel vanishes and only the $\epsilon$ term of $w_q$ survives:

$$\sum_{q: \rho(x,q) \ge h} \frac{w_q(x)}{n_q} \sum_{X_i \in X(q)} \|f(X_i) - f(x)\|^2 \le \sum_{q: \rho(x,q) \ge h} w_q(x) \Delta_Y^2 = \Delta_Y^2 \frac{\epsilon \sum_{q: \rho(x,q) \ge h} n_q}{\sum_{q' \in Q} n_{q'} (K(x, q', h) + \epsilon)}$$
$$\le \Delta_Y^2 \left( 1 + \frac{K(1/2)}{\epsilon \sum_{q: \rho(x,q) \ge h} n_q} \right)^{-1} \le \Delta_Y^2 \left( 1 + \frac{K(1/2)}{\epsilon n} \right)^{-1} \le \frac{\Delta_Y^2}{1 + n} \le \frac{\Delta_Y^2}{n},$$

where the second inequality uses the fact that, since $\mu_n(B(x, h/4)) > 0$, the set $B(x, h/2) \cap Q$
cannot be empty (remember that Q is an $\frac{h}{4}$-cover of $\mathbf{X}$), and the next-to-last inequality uses $\epsilon \le K(1/2)/n^2$. This concludes the argument.
Lemma 3 (Integrated excess risk). Let Q be an $\frac{h}{4}$-net of $\mathbf{X}$, $0 < h < \Delta_X$. We have

$$\mathbb{E}_{(\mathbf{X},\mathbf{Y})} \|f_{n,Q} - f\|^2 \le C_0 \frac{\Delta_Y^2}{n \cdot (h/\Delta_X)^d} + 2 \lambda^2 h^2,$$

where $C_0$ depends on the Assouad dimension d and on K(0)/K(1/2).
Proof. Applying Fubini's theorem, the expected excess risk $\mathbb{E}_{(\mathbf{X},\mathbf{Y})} \|f_{n,Q} - f\|^2$ can be written as

$$\mathbb{E}_X \mathbb{E}_{(\mathbf{X},\mathbf{Y})} \|f_{n,Q}(X) - f(X)\|^2 \left( \mathbf{1}_{\{\mu_n(B(X, h/4)) > 0\}} + \mathbf{1}_{\{\mu_n(B(X, h/4)) = 0\}} \right).$$

By lemmas 1 and 2 we have for X = x fixed,

$$\mathbb{E}_{(\mathbf{X},\mathbf{Y})} \|f_{n,Q}(x) - f(x)\|^2 \mathbf{1}_{\{\mu_n(B(x,h/4))>0\}} \le C_1 \mathbb{E}_{\mathbf{X}} \left[ \frac{\Delta_Y^2 \mathbf{1}_{\{\mu_n(B(x,h/4))>0\}}}{n \mu_n(B(x, h/4))} \right] + 2\lambda^2 h^2 + \frac{\Delta_Y^2}{n}$$
$$\le C_1 \frac{2 \Delta_Y^2}{n \mu(B(x, h/4))} + 2\lambda^2 h^2 + \frac{\Delta_Y^2}{n}, \quad (5)$$

where for the last inequality we used the fact that for a binomial b(n, p), $\mathbb{E}\left[\frac{\mathbf{1}_{\{b(n,p)>0\}}}{b(n,p)}\right] \le \frac{2}{np}$ (see
lemma 4.1 of [12]).

For the case where $B(x, h/4) \cap \mathbf{X}$ is empty, we have

$$\mathbb{E}_{(\mathbf{X},\mathbf{Y})} \|f_{n,Q}(x) - f(x)\|^2 \mathbf{1}_{\{\mu_n(B(x,h/4))=0\}} \le \Delta_Y^2\, \mathbb{E}_{\mathbf{X}}\, \mathbf{1}_{\{\mu_n(B(x,h/4))=0\}} = \Delta_Y^2 \left( 1 - \mu(B(x, h/4)) \right)^n$$
$$\le \Delta_Y^2 e^{-n\mu(B(x,h/4))} \le \frac{\Delta_Y^2}{n \mu(B(x, h/4))}. \quad (6)$$

Combining (6) and (5), we can then bound the expected excess risk as

$$\mathbb{E}_{(\mathbf{X},\mathbf{Y})} \|f_{n,Q} - f\|^2 \le \frac{3 C_1 \Delta_Y^2}{n} \mathbb{E}_X \left[ \frac{1}{\mu(B(X, h/4))} \right] + 2\lambda^2 h^2 + \frac{\Delta_Y^2}{n}. \quad (7)$$

The expectation on the r.h.s. is bounded using a standard covering argument (see e.g. [12]). Let
$\{z_i\}_1^N$ be an $\frac{h}{8}$-cover of $\mathcal{X}$. Notice that for any $z_i$, $x \in B(z_i, h/8)$ implies $B(x, h/4) \supset B(z_i, h/8)$.
We therefore have

$$\mathbb{E}_X \left[ \frac{1}{\mu(B(X, h/4))} \right] \le \sum_{i=1}^N \mathbb{E}_X \left[ \frac{\mathbf{1}_{\{X \in B(z_i, h/8)\}}}{\mu(B(X, h/4))} \right] \le \sum_{i=1}^N \mathbb{E}_X \left[ \frac{\mathbf{1}_{\{X \in B(z_i, h/8)\}}}{\mu(B(z_i, h/8))} \right] \le N \le C_2 \left( \frac{\Delta_X}{h} \right)^d,$$

where $C_2$ depends just on d. We conclude by combining the above with (7) to obtain

$$\mathbb{E}_{(\mathbf{X},\mathbf{Y})} \|f_{n,Q} - f\|^2 \le \frac{3 C_1 C_2 \Delta_Y^2}{n (h/\Delta_X)^d} + 2\lambda^2 h^2 + \frac{\Delta_Y^2}{n}.$$
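The bound of Lemma 3 trades off a variance term, decreasing in h, against a squared-bias term, increasing in h; balancing the two shows where the rate of Corollary 1 comes from (a standard calculation, sketched here with constants suppressed):

```latex
\min_{h}\; \underbrace{C_0\,\frac{\Delta_Y^2}{n\,(h/\Delta_X)^d}}_{\text{variance}}
\;+\; \underbrace{2\lambda^2 h^2}_{\text{bias}^2}
\quad\Longrightarrow\quad
h^\ast \;\propto\; \Delta_X \left( \frac{\Delta_Y^2}{\lambda^2 \Delta_X^2\, n} \right)^{1/(2+d)},
```

and substituting $h^\ast$ back into either term gives an excess risk of order $(\lambda \Delta_X)^{2d/(2+d)} (\Delta_Y^2/n)^{2/(2+d)}$, matching Corollary 1 up to the model-selection penalty.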
Corollary 1. Let $n \ge \max\left(9, \left(\frac{\Delta_Y}{\lambda \Delta_X}\right)^2, \left(\frac{\lambda \Delta_X}{\Delta_Y}\right)^2\right)$. The final regressor selected satisfies

$$\mathbb{E} \left\| f_{n,Q_{h_o/4}} - f \right\|^2 \le C (\lambda \Delta_X)^{2d/(2+d)} \left( \frac{\Delta_Y^2}{n} \right)^{2/(2+d)} + \frac{3 \Delta_Y^2 \ln(n \log n)}{n},$$

where C depends on the Assouad dimension d and on K(0)/K(1/2).
Proof outline. Let $\tilde{h} = C_3 \Delta_X^{d/(2+d)} \left( \frac{\Delta_Y^2}{\lambda^2 n} \right)^{1/(2+d)}$. We note that n is lower bounded so
that such an $\tilde{h}$ is in H. We have by Lemma 3 that for $\tilde{h}$,

$$\mathbb{E}_{\mathbf{X},\mathbf{Y}} \left\| f_{n,Q_{\tilde{h}/4}} - f \right\|^2 \le C_0 (\lambda \Delta_X)^{2d/(2+d)} \left( \frac{\Delta_Y^2}{n} \right)^{2/(2+d)}.$$

Applying McDiarmid's inequality to the empirical risk followed by a union bound over H, we have that, with
probability at least $1 - 1/\sqrt{n}$ over the choice of $(\mathbf{X}', \mathbf{Y}')$, for all $h \in H$,

$$\left| \mathbb{E}_{X,Y} \left\| f_{n,Q_{h/4}}(X) - Y \right\|^2 - \frac{1}{n} \sum_{i=1}^n \left\| f_{n,Q_{h/4}}(X_i') - Y_i' \right\|^2 \right| \le \Delta_Y^2 \sqrt{\frac{\ln(|H|\sqrt{n})}{n}}.$$

It follows that $\mathbb{E}_{X,Y} \|f_{n,Q_{h_o/4}}(X) - Y\|^2 \le \mathbb{E}_{X,Y} \|f_{n,Q_{\tilde{h}/4}}(X) - Y\|^2 + 2\Delta_Y^2 \sqrt{\frac{\ln(|H|\sqrt{n})}{n}}$, which
by (1) implies $\|f_{n,Q_{h_o/4}} - f\|^2 \le \|f_{n,Q_{\tilde{h}/4}} - f\|^2 + 2\Delta_Y^2 \sqrt{\frac{\ln(|H|\sqrt{n})}{n}}$. Take the expectation (given
the randomness in the two samples) over this last inequality and conclude.
4 Fast evaluation

In this section we show how to modify the cover-tree procedure of [16] to enable fast evaluation of
$f_{n,Q_{h/4}}$ for any $h \in H \doteq \{\Delta_X/2^i\}_1^I$, $I = \lceil \log n \rceil$.

The cover-tree performs proximity search by navigating a hierarchy of nested r-nets of $\mathbf{X}$. The
navigating-nets of [17] implement the same basic idea. They require additional book-keeping to
enable range queries of the form $\mathbf{X} \cap B(x, h)$, for a query point x. Here we need to perform range
searches of the form $Q_{h/4} \cap B(x, h)$ and our book-keeping will therefore be different from [17].
Note that, for each h and $Q_{h/4}$, one could use a generic range search procedure such as [17] with
the data in $Q_{h/4}$ as input, but this requires building a separate data structure for each h, which is
expensive. We use a single data structure.
4.1 The hierarchy of nets

Consider an ordering $\{X_{(i)}\}_1^n$ of the data points obtained as follows: $X_{(1)}$ and $X_{(2)}$ are the farthest
points in $\mathbf{X}$; inductively for $2 < i < n$, $X_{(i)}$ in $\mathbf{X}$ is the farthest point from $\{X_{(1)}, \ldots, X_{(i-1)}\}$,
where the distance to a set is defined as the minimum distance to a point in the set.

For $r \in \{\Delta_X/2^i\}_0^{I+2}$, define $Q_r = \{X_{(1)}, \ldots, X_{(i)}\}$ where $i \ge 1$ is the highest index such that
$\rho(X_{(i)}, \{X_{(1)}, \ldots, X_{(i-1)}\}) \ge r$. Notice that, by construction, $Q_r$ is an r-net of $\mathbf{X}$.
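The ordering above is greedy farthest-point traversal, and every $Q_r$ is a prefix of it. A minimal sketch (it seeds with the first sample point rather than the farthest pair, a small simplification of the construction above; names are illustrative):

```python
import numpy as np

def farthest_point_ordering(X):
    """Order the sample by iterated farthest-point selection; prefixes of the
    returned ordering are the r-nets Q_r.

    Returns (order, dist) where dist[k] is the distance from the k-th chosen
    point to the previously chosen ones (non-increasing), so
    Q_r = {X[order[0]], ..., X[order[k]]} for the largest k with dist[k] >= r.
    """
    n = len(X)
    d2 = np.linalg.norm(X - X[0], axis=1)   # distances to the current prefix
    order, dist = [0], [np.inf]             # the first point is in every net
    for _ in range(n - 1):
        j = int(np.argmax(d2))              # farthest remaining point
        order.append(j)
        dist.append(float(d2[j]))
        d2 = np.minimum(d2, np.linalg.norm(X - X[j], axis=1))
    return order, dist
```

Each step costs O(n) distance updates, so building the whole hierarchy this way costs O(n²); the cover-tree machinery below exists precisely to avoid paying that at query time.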
4.2 Data structure

The data structure consists of an acyclic directed graph, and range sets defined below.

Neighborhood graph: The nodes of the graph are the $\{X_{(i)}\}_1^n$, and the edges are given by the
following parent-child relationship: starting at $r = \Delta_X/2$, the parent of each node in $Q_r \setminus Q_{2r}$ is
the point it is closest to in $Q_{2r}$. The graph is implemented by maintaining an ordered list of children
for each node, where the order is given by the children's appearance in the sequence $\{X_{(i)}\}_1^n$. These
relationships are depicted in Figure 2.
Figure 2: The r-nets (rows of left subfigure) are implicit to an ordering of the data. They define a parent-child
relationship implemented by the neighborhood graph (right), the structure traversed for fast evaluation.
These ordered lists of children are used to implement the operation nextChildren, defined iteratively as follows. Given $Q \subset \{X_{(i)}\}_1^n$, let visited children denote any child of $q \in Q$ that a previous
call to nextChildren has already returned. The call nextChildren(Q) returns children of $q \in Q$
that have not yet been visited, starting with the unvisited child with lowest index in $\{X_{(i)}\}_1^n$, say
$X_{(i)}$, and returning all unvisited children in $Q_r$, the first net containing $X_{(i)}$, i.e. $X_{(i)} \in Q_r \setminus Q_{2r}$;
r is also returned. The children returned are then marked off as visited. The time complexity of this
routine is just the number of children returned.

Range sets: For each node $X_{(i)}$ and each $r \in \{\Delta_X/2^i\}_0^\infty$, we maintain a set of neighbors of $X_{(i)}$
in $Q_r$ defined as $R_{(i),r} \doteq \{q \in Q_r : \rho(X_{(i)}, q) \le 8r\}$.
4.3 Evaluation

Procedure evaluate(x, h)
  Q ← Q_{Δ_X};
  repeat
    Q', r ← nextChildren(Q);
    Q'' ← Q ∪ Q';
    if r < h/4 or Q' = ∅ then  // We reached past Q_{h/4}.
      X_{(i)} ← argmin_{q∈Q} ρ(x, q);  // Closest point to x in Q_{h/4}.
      Q ← R_{(i),h/4} ∩ B(x, h);  // Search in a range of 2h around X_{(i)}.
      Break loop;
    if ρ(x, Q'') ≥ h + 2r then  // The set Q_{h/4} ∩ B(x, h) is empty.
      Q ← ∅;
      Break loop;
    Q ← {q ∈ Q'' : ρ(x, q) < ρ(x, Q'') + 2r};
  until ...;
  // At this point Q = Q_{h/4} ∩ B(x, h).
  return
    f_{n,Q_{h/4}}(x) ←
      [ Σ_{q∈Q} n_q (K(x, q, h) + ε) \bar{Y}_q + ε ( Σ_{q∈Q_{h/4}} n_q \bar{Y}_q − Σ_{q∈Q} n_q \bar{Y}_q ) ]
      / [ Σ_{q∈Q} n_q (K(x, q, h) + ε) + ε ( n − Σ_{q∈Q} n_q ) ];
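The pruning loop of evaluate can be illustrated on a simplified explicit-level layout. The `levels`/`children` representation and all names here are assumptions made for the sketch; the paper's single cover-tree-like structure with nextChildren replaces them:

```python
import numpy as np

def descend(x, coords, root, levels):
    """Prune-and-descend through nested nets, the core idea of `evaluate`.

    coords: (n, D) array of all points; root: index of the top-level center;
    levels: list of (r, children) pairs from coarse to fine, where `children`
    maps a parent index to its child indices one net finer.
    Returns the index of the surviving candidate closest to x.
    """
    Q = [root]
    for r, children in levels:
        Qpp = Q + [c for q in Q for c in children.get(q, [])]  # Q'' = Q ∪ Q'
        best = min(np.linalg.norm(coords[q] - x) for q in Qpp)  # rho(x, Q'')
        # keep only points that could still be (ancestors of) the closest point
        Q = [q for q in Qpp if np.linalg.norm(coords[q] - x) < best + 2 * r]
    return min(Q, key=lambda q: float(np.linalg.norm(coords[q] - x)))
```

The pruning threshold `best + 2r` is safe because any surviving descendant lies within 2r of its ancestor at the current level, which is exactly the invariant argued in the proof of Lemma 4.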
The evaluation procedure consists of quickly identifying the closest point $X_{(i)}$ to x in $Q_{h/4}$ and
then searching in the range of $X_{(i)}$ for the points in $Q_{h/4} \cap B(x, h)$. The identification of $X_{(i)}$ is
done by going down the levels of nested nets, and discarding those points (and their descendants)
that are certain to be farther from x than $X_{(i)}$ (we will argue that $\rho(x, Q'') + 2r$ is an upper bound
on $\rho(x, X_{(i)})$). Also, if x is far enough from all points at the current level (second if-clause), we
can safely stop early because $B(x, h)$ is unlikely to contain points from $Q_{h/4}$ (we'll see that points
in $Q_{h/4}$ are all within 2r of their ancestor at the current level).

Lemma 4. The call to procedure evaluate(x, h) correctly evaluates $f_{n,Q_{h/4}}(x)$ and has time
complexity $C \log(\Delta_X/h) + \log n$, where C is at most $2^{8d}$.
Proof. We first show that the algorithm correctly returns $f_{n,Q_{h/4}}(x)$, and we then argue its run time.

Correctness of evaluate. The procedure works by first finding the closest point to x in $Q_{h/4}$,
say $X_{(i)}$, and then identifying all nodes in $R_{(i),h/4} \cap B(x, h) = Q_{h/4} \cap B(x, h)$ (see the first
if-clause). We just have to show that this closest point $X_{(i)}$ is correctly identified.

We'll argue the following loop invariant I: at the beginning of the loop, $X_{(i)}$ is either in $Q'' = Q \cup Q'$ or is a descendant of a node in $Q'$. Let's consider some iteration where I holds (it certainly
does in the first iteration).

If the first if-clause is entered, then Q is contained in $Q_{h/4}$ but $Q'$ is not, so $X_{(i)}$ must be in Q and
we correctly return.

Suppose the first if-clause is not entered. Now let $X_{(j)}$ be the ancestor in $Q''$ of $X_{(i)}$, or let it be $X_{(i)}$
itself if it's in $Q''$. Let r be as defined in evaluate; we have $\rho(X_{(i)}, X_{(j)}) < \sum_{k=0}^\infty r/2^k = 2r$ by
going down the parent-child relations. It follows that

$$\rho(x, Q'') \le \rho(x, X_{(j)}) \le \rho(x, X_{(i)}) + \rho(X_{(i)}, X_{(j)}) < \rho(x, X_{(i)}) + 2r.$$

In other words, we have $\rho(x, X_{(i)}) > \rho(x, Q'') - 2r$. Thus, if the second if-clause is entered, we
necessarily have $\rho(x, X_{(i)}) > h$, i.e. $B(x, h) \cap Q_{h/4} = \emptyset$ and we correctly return.

Now assume none of the if-clauses is entered. Let $X_{(j)} \in Q''$ be any of the points removed from
$Q''$ to obtain the next Q. Let $X_{(k)}$ be a child of $X_{(j)}$ that has not yet been visited, or a descendant
of such a child. If neither such $X_{(j)}$ nor $X_{(k)}$ is $X_{(i)}$ then, by definition, I must hold at the next
iteration. We surely have $X_{(j)} \ne X_{(i)}$: every point of $Q''$ has a descendant in $Q_{h/4}$ within 2r of it,
so $\rho(x, X_{(i)}) < \rho(x, Q'') + 2r \le \rho(x, X_{(j)})$. Now notice that, by the same argument as above,
$\rho(X_{(j)}, X_{(k)}) < \sum_{k=0}^\infty r/2^k = 2r$. We thus have $\rho(x, X_{(k)}) > \rho(x, X_{(j)}) - 2r \ge \rho(x, X_{(i)})$, so
we know $X_{(k)} \ne X_{(i)}$.

Runtime of evaluate. Starting from $Q_{\Delta_X}$, a different net $Q_r$ is reached at every iteration, and the
loop stops when we reach past $Q_{h/4}$. Therefore the loop is entered at most $\log(4\Delta_X/h)$ times. In
each iteration, most of the work is done parsing through $Q''$, besides time spent on the range search
in the last iteration. So the total runtime is $O(\log(4\Delta_X/h) \cdot \max|Q''|)$ plus the range search time. We
just need to bound $\max|Q''| \le \max|Q| + \max|Q'|$ and the range search time.

The following fact (see e.g. Lemma 4.1 of [9]) will come in handy: consider $r_1$ and $r_2$ such that
$r_1/r_2$ is a power of 2, and let $B \subset \mathcal{X}$ be a ball of radius $r_1$; since $\mathcal{X}$ has Assouad dimension d,
the smallest $r_2$-cover of B is of size at most $(r_1/r_2)^d$, and the largest $r_2$-packing of B is of size at
most $(r_1/r_2)^{2d}$. This is true for any metric space, and therefore holds for $\mathbf{X}$, which is of Assouad
dimension at most d by inclusion in $\mathcal{X}$.

Let $Q' \subset Q_r$ so that $Q \subset Q_{2r}$ at the beginning of some iteration. Let $q \in Q$; the children of q in $Q'$
are not in $Q_{2r}$ and therefore are all within 2r of Q; since these children form an r-packing of $B(q, 2r)$,
there are at most $2^{2d}$ of them. Thus, $\max|Q'| \le 2^{2d} \max|Q|$.

Initially $Q = Q_{\Delta_X}$, so we have $|Q| \le 2^{2d}$ since $Q_{\Delta_X}$ is a $\Delta_X$-packing of $\mathbf{X} \subset B(X_{(1)}, 2\Delta_X)$. At
the end of each iteration we have $Q \subset B(x, \rho(x, Q'') + 2r)$. Now $\rho(x, Q'') \le h + 2r \le 4r + 2r$
since the if-clauses were not entered if we got to the end of the iteration. Thus, Q is an r-packing of
$B(x, 8r)$, and therefore $\max|Q| \le 2^{8d}$.

To finish, the range search around $X_{(i)}$ takes time $|R_{(i),h/4}| \le 2^{8d}$ since $R_{(i),h/4}$ is an $\frac{h}{4}$-packing
of $B(X_{(i)}, 2h)$.
Acknowledgements
This work was supported by the National Science Foundation (under grants IIS-0347646, IIS-0713540, and IIS-0812598) and by a fellowship from the Engineering Institute at the Los Alamos
National Laboratory. Many thanks to the anonymous NIPS reviewers for the useful comments, and
thanks to Sanjoy Dasgupta for advice on the presentation.
References
[1] P. Bickel and B. Li. Local polynomial regression on unknown manifolds. Tech. Rep., Dep. of Stats., UC Berkeley, 2006.
[2] S. Kpotufe. Escaping the curse of dimensionality with a tree-based regressor. COLT, 2009.
[3] C. J. Stone. Optimal rates of convergence for non-parametric estimators. Ann. Statist., 8:1348–1360, 1980.
[4] C. J. Stone. Optimal global rates of convergence for non-parametric estimators. Ann. Statist., 10:1340–1353, 1982.
[5] S. Roweis and L. Saul. Nonlinear dimensionality reduction by locally linear embedding. Science, 290:2323–2326, 2000.
[6] M. Belkin and P. Niyogi. Laplacian eigenmaps for dimensionality reduction and data representation. Neural Computation, 15:1373–1396, 2003.
[7] J. B. Tenenbaum, V. de Silva, and J. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290:2319–2323, 2000.
[8] S. Dasgupta and Y. Freund. Random projection trees and low dimensional manifolds. STOC, 2008.
[9] K. Clarkson. Nearest-neighbor searching and metric space dimensions. Nearest-Neighbor Methods for Learning and Vision: Theory and Practice, 2005.
[10] S. Kulkarni and S. Posner. Rates of convergence of nearest neighbor estimation under arbitrary sampling. IEEE Transactions on Information Theory, 41, 1995.
[11] S. Schaal and C. Atkeson. Robot juggling: An implementation of memory-based learning. Control Systems Magazine, IEEE, 1994.
[12] L. Györfi, M. Kohler, A. Krzyzak, and H. Walk. A Distribution Free Theory of Nonparametric Regression. Springer, New York, NY, 2002.
[13] D. Lee and A. Gray. Faster Gaussian summation: Theory and experiment. UAI, 2006.
[14] D. Lee and A. Gray. Fast high-dimensional kernel summations using the Monte Carlo multipole method. NIPS, 2008.
[15] C. Atkeson, A. Moore, and S. Schaal. Locally weighted learning. AI Review, 1997.
[16] A. Beygelzimer, S. Kakade, and J. Langford. Cover trees for nearest neighbors. ICML, 2006.
[17] R. Krauthgamer and J. Lee. Navigating nets: Simple algorithms for proximity search. SODA, 2004.
Lithography Using a Neural Network
Robert C. Frye
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 08854

Kevin D. Cummings*
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 08854

Edward A. Rietman
AT&T Bell Laboratories
600 Mountain Avenue
Murray Hill, NJ 08854
Abstract
We have used a neural network to compute corrections for images written
by electron beams to eliminate the proximity effects caused by electron
scattering. Iterative methods are effective, but require prohibitively
long computation time. We have instead trained a neural network to perform
equivalent corrections, resulting in a significant speed-up. We have
examined hardware implementations using both analog and digital
electronic networks. Both had an acceptably small error of 0.5% compared
to the iterative results. Additionally, we verified that the neural network
correctly generalized the solution of the problem to include patterns not
contained in its training set. We have experimentally verified this approach
on a Cambridge Instruments EBMF 10.5 exposure system.
1 INTRODUCTION
Scattering imposes limitations on the minImum feature sizes that can be reliably
obtained with electron beam lithography. Linewidth corrections can be used to control
the dimensions of isolated features (i.e., intraproximity; Sewell, 1978), but meet with
little success when dealing with the same features in a practical context. where they are
surrounded by other features (i.e. interproximity). Local corrections have been
proposed using a self-consistent method of computation for the desired incident dose
pattern (Parikh, 1978). Such techniques require inversion of large matrices and
prohibitive amounts of computation time. Lynch et al., 1982, have proposed an
analytical method for proximity corrections based on a solution of a set of approximate
equations, resulting in a considerable improvement in speed.
The method that we present here, using a neural network, combines the computational
simplicity of the method of Lynch et al. with the accuracy of the self-consistent
methods. The first step is to determine the scattered energy profile of the electron
beam which depends on the substrate structure, beam size and electron energy. This is
* Present address: Motorola Inc., Phoenix Corporate Research Laboratories, 2100 East Elliot Rd., Tempe,
AZ 85284.
then used to compute spatial variations in the dosage that result when a particular
image is scattered. This can be used to iteratively compute a corrected image for the
input pattern. The goal of the correction is to adjust the written image so that the
incident pattern of dose after scattering approximates the desired one as closely as
possible. We have used this iterative method on a test image to form a training set for
a neural network. The architecture of this network: was chosen to incorporate the basic
mathematical structure as the analytical method of Lynch et ai., but relies on an
adaptive procedure to determine its characteristic parameters.
2 CALCULATING PROXIMITY CORRECTED PATTERNS
We determined the radial distribution of scattered dose from a single pixel by using a
Monte-Carlo simulation for a variety of substrates and electron beam energies
(Cummings, 1989). As an example problem, we looked at resist on a heavy metal
substrate. (These are of interest in the fabrication of masks for x-ray lithography.) For
a 20 KeV electron beam this distribution, or "proximity function," can be approximated
by the analytical expression
    f(r) = [ (1/α²) e^{−(r/α)²} + (ν/γ²) e^{−(r/γ)²} + (η/β²) e^{−(r/β)²} ] / [π(1 + ν + η)]

where α = 0.038 μm, γ = 0.045 μm, β = 0.36 μm, ν = 3.49 and η = 6.42.
The unscattered image is assumed to be composed of an array of pixels, Io(x,y). For a
beam with a proximity function fer) like the one given above, the image after scattering
will be
    I_s(x, y) = Σ_{m=−∞}^{∞} Σ_{n=−∞}^{∞} I_0(x − m, y − n) f(√(m² + n²)),
which is the discrete convolution of the original image with the lineshape f(r). The
approach suggested by analogy with signal processing is to deconvolve the image by
an inverse filtering operation. This method cannot be used, however, because it is
impossible to generate negative amounts of electron exposure. Restricting the beam to
positive exposures makes the problem inherently nonlinear, and we must rely instead
on an iterative, rather than analytical, solution.
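As a concrete sketch of the scattering computation, the discrete convolution of the written image with the radial proximity function can be written in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code: it assumes a sum-of-Gaussians form for f(r) with the parameter values quoted in the text, and uses the 0.125 μm pixel pitch and 9-pixel truncation radius of the test pattern.

```python
import numpy as np

# Proximity function f(r), r in micrometers (parameter values from the text).
ALPHA, GAMMA, BETA = 0.038, 0.045, 0.36
NU, ETA = 3.49, 6.42

def proximity(r):
    """Radial scattered-dose profile, modeled as a weighted sum of Gaussians."""
    return (np.exp(-(r / ALPHA) ** 2) / ALPHA ** 2
            + NU * np.exp(-(r / GAMMA) ** 2) / GAMMA ** 2
            + ETA * np.exp(-(r / BETA) ** 2) / BETA ** 2) / (np.pi * (1 + NU + ETA))

def scatter(dose, pixel_um=0.125, r0_pixels=9):
    """I_s(x,y) = sum_{m,n} I_0(x-m, y-n) f(sqrt(m^2+n^2)), truncated at r0.

    The kernel is symmetric in m and n, so correlation and convolution agree.
    """
    k = np.arange(-r0_pixels, r0_pixels + 1)
    mm, nn = np.meshgrid(k, k)
    kern = proximity(pixel_um * np.sqrt(mm ** 2 + nn ** 2))
    kern /= kern.sum()                      # renormalize the truncated kernel
    pad = r0_pixels
    padded = np.pad(dose.astype(float), pad)
    out = np.zeros_like(dose, dtype=float)
    for i in k:
        for j in k:
            out += kern[i + pad, j + pad] * padded[pad + i:pad + i + dose.shape[0],
                                                   pad + j:pad + j + dose.shape[1]]
    return out

dose = np.zeros((40, 40))
dose[10:30, 10:30] = 100.0                  # a 2.5 um square at 100% relative dose
scattered = scatter(dose)
```

Because the kernel is normalized, the total dose is conserved; pixels near the edge of a feature receive less dose than the interior, which is exactly the interproximity effect discussed above.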
Figure 1 shows the pattern that we used to generate a training set for the neural
network. This pattern was chosen to include examples of the kinds of features that are
difficult to resolve because of proximity effects. Minimum feature sizes in the pattern
are 0.25 μm and the overall image, using 0.125 μm pixels, is 180 pixels (22.5 μm) on a
side, for a total of 32,400 pixels. The initial incident dose pattern for the iterative
correction of this image started with a relative exposure value of 100% for exposed
pixels and 0 for unexposed ones. The scattered intensity distribution was computed
from this incident dose using the discrete two-dimensional convolution with the
summation truncated to a finite range, r₀. For the example proximity function 95% of
the scattered intensity is contained within a radius of 1.125 μm (9 pixels) and this
value was used for r₀. The scattered intensity distribution was computed and compared
with the desired pattern of 100% for exposed and 0 for unexposed pixels. The
difference between the resulting scattered and desired distributions is the error. This
error was subtracted from the dose pattern to be used for the next iteration. However,
since negative doses are not allowed, negative regions in the correction were truncated
to zero.
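The iterative correction loop just described (compute the scattered image, subtract the error from the dose, truncate negative doses to zero) can be sketched as follows. The kernel and test pattern here are small stand-ins so the sketch runs quickly; they are not the parameters used in the experiments.

```python
import numpy as np

def scatter(dose, kern):
    """Discrete convolution of a dose image with a normalized, symmetric kernel."""
    pad = kern.shape[0] // 2
    padded = np.pad(dose.astype(float), pad)
    out = np.zeros(dose.shape, dtype=float)
    for i in range(kern.shape[0]):
        for j in range(kern.shape[1]):
            out += kern[i, j] * padded[i:i + dose.shape[0], j:j + dose.shape[1]]
    return out

def correct(desired, kern, iterations=4):
    """Iteratively adjust the written dose so the scattered dose matches `desired`."""
    dose = desired.astype(float).copy()          # start from the desired pattern
    for _ in range(iterations):
        error = scatter(dose, kern) - desired    # scattered minus desired
        dose = np.clip(dose - error, 0.0, None)  # negative doses truncated to zero
    return dose

# Toy example: a short-range Gaussian kernel and a single exposed rectangle.
k = np.arange(-4, 5)
kern = np.exp(-(k[:, None] ** 2 + k[None, :] ** 2) / 4.0)
kern /= kern.sum()
desired = np.zeros((30, 30))
desired[5:15, 5:25] = 100.0
corrected = correct(desired, kern)
```

After a few iterations the dose at feature edges rises above 100% to compensate for energy scattered away, while unexposed pixels are pinned at zero by the clipping step.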
Figure 1: Training pattern (180 × 180 pixels, 22.5 μm on a side).
Using this algorithm, a pixel that receives a dosage that is too small will have a
negative error, and on the next iteration its intensity will be increased. Unexposed
pixels (i.e. regions where resist is to be removed) will always have some dosage
scattered into them from adjacent features, and will consequently always show a
positive error. Because the written dose in these regions is always zero, rather than
negative, it is impossible for the iterative solution to completely eliminate the error in
the final scattered distribution. However, the nonlinear exposure properties of the resist
will compensate for this. Moreover, since all exposed features receive a uniform dose
after correction, it is possible to choose a resist with the optimal contrast properties for
the pattern.
Although this iterative method is effective, it is also time consuming. Each iteration on
the test pattern required about 1 hour to run on a 386 based computer. Four iterations
were required before the smallest features in the resist were properly resolved. Even
the expected order of magnitude speed increase from a large mainframe computer is
not sufficient to correct the image from a full sized chip consisting of several billion
pixels. The purpose of the neural network is to do these same calculations, but in a
much shorter time.
3 NETWORK ARCHITECTURE AND TRAINING
Figure 2 shows the relationship between the image being corrected and the neural
network. The correction for one pixel takes into account the image surrounding it.
Since the neighborhood must include all of the pixels that contribute appreciable
scattered intensity to the central pixel being corrected, the size of the network was
determined by the same maximum radius, r₀ = 1.125 μm, that characterized the
scattering proximity function. The large number of inputs would be difficult to manage
in an analog network if these inputs were general analog signals, but fortunately the
input data are binary, and can be loaded into an analog network using digital shift
registers.
Figure 3 shows a schematic diagram of the analog network. The binary signals from
the shift registers representing a portion of the image were connected to the buffer
amplifiers through 10 kΩ resistors. Each was connected to only one summing node,
corresponding to its radial distance from the center pixel. This stage converted the 19 x
19 binary representation of the image into 10 analog voltages that represented the
radial distribution of the surrounding intensity. The summing amplifier at the output
was connected to these 10 nodes by variable resistors. This resulted in an output that
was a weighted sum of the radial components.
Figure 2: Network configuration (the network computes the corrected value of the center pixel from its surrounding neighborhood).
Figure 3: Schematic diagram of the analog network (binary inputs I₀(x, y) are loaded through shift registers with fixed first-layer weights; buffer amplifiers feed adjustable weights that produce the analog output).
Functionally, this network performs the operation

    V_out = Σ_{r=0}^{9} w_r ⟨I₀⟩_r,

where w_r are the weight coefficients set by the adjustable resistors and ⟨I₀⟩_r are the
average values of the pixel intensity at radius r. The form of this relationship is
identical to the one proposed by Lynch et al., but uses an adaptive method, rather than
an analytical one, to determine the coefficients w_r.
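In software, the same radial-summing architecture might be sketched as below. The binning of each of the 19 × 19 inputs by its rounded radial distance, and the exclusion of corner pixels beyond the 9-pixel scattering range, are modeling choices made for this sketch rather than details taken from the circuit.

```python
import numpy as np

SIZE = 19                                   # 19 x 19 neighborhood of the pixel
CENTER = SIZE // 2
yy, xx = np.mgrid[0:SIZE, 0:SIZE]
RADIUS = np.rint(np.hypot(xx - CENTER, yy - CENTER)).astype(int)

def radial_averages(patch, n_bins=10):
    """Collapse a 19 x 19 binary patch into the ten radial means <I0>_r.

    Corner pixels whose rounded distance exceeds 9 lie outside the scattering
    range and are ignored, mirroring the 10-node first layer.
    """
    return np.array([patch[RADIUS == r].mean() for r in range(n_bins)])

def network_output(patch, weights):
    """V_out = sum_{r=0}^{9} w_r <I0>_r."""
    return float(radial_averages(patch) @ weights)

w = np.full(10, 0.1)                        # placeholder weights
patch = np.ones((SIZE, SIZE))
v = network_output(patch, w)
```

Collapsing the 361 binary inputs to 10 radial features is what makes the second-layer weights cheap to adapt, in hardware or in software.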
The prototype analog hardware network was built on a wire wrap board using
74HC164 8-bit CMOS static shift registers and LM324 quad operational amplifiers for
the active devices. The resistors in the first layer were 10 kΩ thin-film resistors in
dual-in-line packages and had a tolerance of 1%. The ten adjustable resistors in the
second layer of the network were 10 turn precision trimmers. Negative weights were
made by inverting the sign of the voltage at the buffer amplifiers. For comparison. we
also evaluated a digital hardware implementation of this network. It was implemented
on a floating point array processor built by Eighteen Eight Laboratories using an
AT&T DSP-32 chip operating at 8 MFLOPS peak rate. The mathematical operation
performed by the network is equivalent to a two-dimensional convolution of the input
image with an adaptively learned floating point kernel.
The adjustable weight values for both networks were determined using the delta rule of
Widrow and Hoff (1960). For each pixel in the trial pattern of Figure 1 there was a
corresponding desired output computed by the iterative method. Each pixel in the test
image. its surroundings and corresponding analog corrected value (computed by the
iterative method) constituted a single learning trial, and the overall image contained
32,400 of them. We found that the weight values stabilized after two passes through
the test image.
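The delta rule of Widrow and Hoff is a one-line update. The sketch below trains ten radial weights on synthetic feature/target pairs standing in for the (neighborhood, iteratively corrected value) learning trials described above; the learning rate, number of passes, and the data itself are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in training set: 10 radial features per trial and a target correction
# that is, by construction, a fixed linear function of those features.
true_w = rng.normal(size=10)
X = rng.uniform(0.0, 1.0, size=(2000, 10))      # <I0>_r feature vectors
targets = X @ true_w                            # "iterative method" outputs

w = np.zeros(10)
lr = 0.05
for _ in range(3):                              # a few passes over the trials
    for x_i, t_i in zip(X, targets):
        y_i = w @ x_i                           # network output for this trial
        w += lr * (t_i - y_i) * x_i             # Widrow-Hoff delta rule
```

Because the target is linear in the features, the delta rule drives the weights to the underlying coefficients, consistent with the weight values stabilizing after two passes through the training image.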
4 NEURAL NETWORK PERFORMANCE
The accuracy of both the analog and digital networks, compared to the iterative
solution, was comparable. Both showed an average error for the test image of 0.5%,
and a maximum error of 9% on any particular pixel. The accuracy of the networks on
images other than the one used to train them was comparable. averaging about 0.5%
overall.
Convolution with an adaptively-learned kernel is itself a relatively efficient
computational algorithm. The iterative method required 4 hours to compute the
correction for the 32,400 pixel example. Equivalent results were obtained by
convolution in about 6.5 minutes using the same computer. Examination of the
assembled code for the software network showed that the correction for each pixel
required the execution of about 30 times fewer instructions than for the iterative
method.
The analog hardware generated corrections for the same example in 13.5 seconds.
Almost 95% of this time was used for input/output operations between the network and
the computer. It was the time required for the I/O, rather than the speed of the circuit,
that limited the dynamic performance of this system. Clearly, with improved I/O
hardware. the analog network could be made to compute these corrections much more
quickly.
The same algorithm, running on the digital floating point array processor, performed the
correction for this example problem in 4.5 seconds. The factor of three improvement
over the analog hardware was primarily a result of the decreased time needed for I/O
in the DSP-based network. The digital network was not appreciably more accurate than
the analog one, indicating that the overall accuracy of operation was determined
primarily by the network architecture rather than by limitations in the implementation.
These results are summarized in Table 1.
Table 1: Comparison of computational speed for various methods.
METHOD                       SPEED
Iteration                    6 years/mm²
Software network             100 days/mm²
Analog hardware network      2 days/mm²
Digital hardware network     18 hours/mm²
5 EXPERIMENTAL VERIFICATION
Recently, we have evaluated this method experimentally using a Cambridge
Instruments EBMF 10.5 exposure system (Cummings et al., 1990). The test image
was 1 mm² and contained 11,165 Cambridge shapes and 6.7×10⁷ pixels. The substrate
was silicon with 0.5 μm of SAL601-ER7 resist exposed at 20 KeV beam energy. The
range of the scattered electrons is more than three times greater for these conditions
than in the tests described above, requiring a network about ten times larger. The
neural network computations were done using the digital floating point array processor,
and required about 18 hours to correct the entire image. Input to the program was
Cambridge source code, which was converted to a bit-mapped array, corrected by the
neural network and then decomposed into new Cambridge source code.
Figure 4 shows SEM micrographs comparing one of the test structures written with
and without the neural network correction. This test structure consists of a 10 μm
square pad next to a 1 μm wide line, separated by a gap of 0.5 μm. Note in the
uncorrected pattern that the line widens in the region adjacent to the large pad, and that
webs of resist extend into the gap. This is caused by excess dosage scattered into
these regions from the large pad. In the corrected pattern, the dosage in these regions
has been adjusted, resulting in a uniform exposure after scattering and greatly
improved pattern resolution.
6 CONCLUSIONS
The results of our trial experiments clearly demonstrate the computational benefits of a
neural network for this particular application. The trained analog hardware network
performed the corrections more than 1000 times faster than the iterative method using
the same computer, and the digital processor was 3000 times faster. This technique is
readily applicable to a variety of direct write exposure systems that have the capability
to write with variable exposure times. Implementation of the network on more
sophisticated computers with readily available coprocessors can directly lead to another
order of magnitude improvement in speed, making it practical to correct full chip-sized
images.
The performance of the analog network suggests that with improved speed of I/O
between the computer and the network, it would be possible to obtain much faster
operation. The added flexibility and generality of the digital approach, however, is a
considerable advantage.
Figure 4: Comparison of a test structure written with and without correction
Acknowledgments
We thank S. Waaben and W. T. Lynch for useful discussions, suggestions and
information, and J. Brereton who assisted in building the hardware and trial patterns
for initial evaluation. We also thank C. Biddick, C. Lockstampfor, S. Moccio and B.
Vogel for technical support in the experimental verification.
References
H. Sewell, "Control of Pattern Dimensions in Electron Lithography," J. Vac. Sci.
Technol. 15, 927 (1978).
M. Parikh, "Self-Consistent Proximity Effect Correction Technique for Resist Exposure
(SPECTRE)," J. Vac. Sci. Technol. 15, 931 (1978).
W. T. Lynch, T. E. Smith and W. Fichtner, "An Algorithm for Proximity Effect
Correction with E-Beam Exposure," Intl. Conf. on Microlithography, Microcircuit
Engineering, pp. 309-314, Grenoble (1982).
K. D. Cummings "Determination of Proximity Parameters for Electron Beam
Lithography," AT&T Bell Laboratories Internal Memorandum.
B. Widrow and M. E. Hoff, "Adaptive Switching Circuits," IRE WESCON Convention
Record, Part 4, 96-104 (1960).
K. D. Cummings, R. C. Frye and E. A. Rietman, "Using a Neural Network to
Proximity Correct Patterns Written with a Cambridge EBMF 10.5 Electron Beam
Exposure System," Applied Phys. Lett. 57 1431 (1990).
Heavy-Tailed Symmetric Stochastic Neighbor
Embedding
Zhirong Yang
The Chinese University of Hong Kong
Helsinki University of Technology
[email protected]
Irwin King
The Chinese University of Hong Kong
[email protected]
Zenglin Xu
The Chinese University of Hong Kong
Saarland University & MPI for Informatics
[email protected]
Erkki Oja
Helsinki University of Technology
[email protected]
Abstract
Stochastic Neighbor Embedding (SNE) has shown to be quite promising for data
visualization. Currently, the most popular implementation, t-SNE, is restricted to
a particular Student t-distribution as its embedding distribution. Moreover, it uses
a gradient descent algorithm that may require users to tune parameters such as
the learning step size, momentum, etc., in finding its optimum. In this paper, we
propose the Heavy-tailed Symmetric Stochastic Neighbor Embedding (HSSNE)
method, which is a generalization of t-SNE that accommodates various heavy-tailed embedding similarity functions. With this generalization, we are presented
with two difficulties. The first is how to select the best embedding similarity
among all heavy-tailed functions and the second is how to optimize the objective
function once the heavy-tailed function has been selected. Our contributions then
are: (1) we point out that various heavy-tailed embedding similarities can be characterized by their negative score functions. Based on this finding, we present a parameterized subset of similarity functions for choosing the best tail-heaviness for
HSSNE; (2) we present a fixed-point optimization algorithm that can be applied to
all heavy-tailed functions and does not require the user to set any parameters; and
(3) we present two empirical studies, one for unsupervised visualization showing
that our optimization algorithm runs as fast as, and performs as well as, the best known t-SNE
implementation and the other for semi-supervised visualization showing quantitative superiority using the homogeneity measure as well as qualitative advantage
in cluster separation over t-SNE.
1 Introduction
Visualization as an important tool for exploratory data analysis has attracted much research effort
in recent years. A multitude of visualization approaches, especially the nonlinear dimensionality
reduction techniques such as Isomap [9], Laplacian Eigenmaps [1], Stochastic Neighbor Embedding
(SNE) [6], manifold sculpting [5], and kernel maps with a reference point [8], have been proposed.
Although they are reported with good performance on tasks such as unfolding an artificial manifold,
they are often not successful at visualizing real-world data with high dimensionalities.
A common problem of the above methods is that most mapped data points are crowded together in
the center without distinguished gaps that isolate data clusters. It was recently pointed out by van
der Maaten and Hinton [10] that the ?crowding problem? can be alleviated by using a heavy-tailed
distribution in the low-dimensional space. Their method, called t-Distributed Stochastic Neighbor
Embedding (t-SNE), is adapted from SNE with two major changes: (1) it uses a symmetrized cost
function; and (2) it employs a Student t-distribution with a single degree of freedom (T1 ). In this
way, t-SNE can achieve remarkable superiority in the discovery of clustering structure in high-dimensional data.
The t-SNE development procedure in [10] is restricted to the T1 distribution as its embedding similarity. However, different data sets or other purposes of dimensionality reduction may require generalizing t-SNE to other heavy-tailed functions. The original t-SNE derivation provides little information for users on how to select the best embedding similarity among all heavy-tailed functions.
Furthermore, the original t-SNE optimization algorithm is not convenient when the symmetric SNE
is generalized to use various heavy-tailed embedding similarity functions since it builds on the gradient descent approach with momenta. As a result, several optimization parameters need to be
manually specified. The performance of the t-SNE algorithm depends on laborious selection of the
optimization parameters. For instance, a large learning step size might cause the algorithm to diverge, while a conservative one might lead to slow convergence or poor annealed results. Although
comprehensive strategies have been used to improve the optimization performance, they might be
still problematic when extended to other applications or embedding similarity functions.
In this paper we generalize t-SNE to accommodate various heavy-tailed functions with two major
contributions: (1) we propose to characterize heavy-tailed embedding similarities in symmetric SNE
by their negative score functions. This further leads to a parameterized subset facilitating the choice
of the best tail-heaviness; and (2) we present a general algorithm for optimizing the symmetric SNE
objective with any heavy-tailed embedding similarities.
The paper is organized as follows. First we briefly review the related work of SSNE and t-SNE in
Section 2. In Section 3, we present the generalization of t-SNE to our Heavy-tailed Symmetric SNE
(HSSNE) method. Next, a fixed-point optimization algorithm for HSSNE is provided and its convergence is discussed in Section 4. In Section 5, we relate the EM-like behavior of the fixed-point
algorithm to a pairwise local mixture model for an in-depth analysis of HSSNE. Section 6 presents
two sets of experiments, one for unsupervised and the other for semi-supervised visualization. Finally, conclusions are drawn in Section 7.
2 Symmetric Stochastic Neighbor Embedding
Suppose the pairwise similarities of a set of m-dimensional data points X = {x_i}_{i=1}^n are encoded
in a symmetric matrix P ∈ R_+^{n×n}, where P_ii = 0 and Σ_{ij} P_ij = 1. Symmetric Stochastic Neighbor
Embedding (SSNE) [4, 10] seeks r-dimensional (r ≪ m) representations of X, denoted by Y = {y_i}_{i=1}^n,
such that

    J(Y) = D_KL(P ‖ Q) = Σ_{i≠j} P_ij log(P_ij / Q_ij)                                (1)

is minimized, where Q_ij = q_ij / Σ_{a≠b} q_ab are the normalized similarities in the low-dimensional
embedding and

    q_ij = exp(−‖y_i − y_j‖²),   q_ii = 0.                                            (2)

The optimization of SSNE uses the gradient descent method with

    ∂J/∂y_i = 4 Σ_j (P_ij − Q_ij)(y_i − y_j).                                         (3)
A momentum term is added to the gradient in order to speed up the optimization:
    Y^{(t+1)} = Y^{(t)} − η (∂J/∂Y)|_{Y=Y^{(t)}} + β^{(t)} (Y^{(t)} − Y^{(t−1)}),     (4)

where Y^{(t)} = [y_1 . . . y_n] ∈ R^{r×n} is the solution in matrix form at iteration t, η is the learning
rate, and β^{(t)} is the momentum amount at iteration t. Compared with an earlier method, Stochastic
Neighbor Embedding (SNE) [6], SSNE uses a symmetrized cost function with simpler gradients.
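For illustration, the SSNE gradient (3) and the momentum update (4) can be sketched in NumPy as below; the data set, learning rate, and momentum amount are arbitrary choices for the example, not values from the paper.

```python
import numpy as np

def ssne_q(Y):
    """Unnormalized Gaussian similarities q_ij = exp(-||y_i - y_j||^2), q_ii = 0."""
    D2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    q = np.exp(-D2)
    np.fill_diagonal(q, 0.0)
    return q

def ssne_grad(Y, P):
    """Eq. (3): dJ/dy_i = 4 * sum_j (P_ij - Q_ij)(y_i - y_j)."""
    W = P - ssne_q(Y) / ssne_q(Y).sum()
    return 4.0 * (W.sum(1)[:, None] * Y - W @ Y)

def ssne_obj(Y, P):
    """KL divergence D(P || Q) over off-diagonal pairs."""
    Q = ssne_q(Y)
    Q = Q / Q.sum()
    m = P > 0
    return float((P[m] * np.log(P[m] / Q[m])).sum())

rng = np.random.default_rng(1)
X = rng.normal(size=(20, 5))
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
P = np.exp(-d2)
np.fill_diagonal(P, 0.0)
P /= P.sum()

Y0 = 1e-2 * rng.normal(size=(20, 2))          # small random initialization
Y, prev = Y0.copy(), Y0.copy()
eta, beta = 0.1, 0.5
for _ in range(200):                          # gradient descent with momentum, Eq. (4)
    Y, prev = Y - eta * ssne_grad(Y, P) + beta * (Y - prev), Y
```

Note that the learning rate and momentum must be hand-tuned here; this is exactly the inconvenience the fixed-point algorithm of Section 4 removes.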
Most mapped points in the SSNE visualizations are often compressed near the center of the visualizing map without clear gaps that separate clusters of the data. The t-Distributed Stochastic Neighbor
Embedding (t-SNE) [10] addresses this crowding problem by using the Student t-distribution with a
single degree of freedom
    q_ij = (1 + ‖y_i − y_j‖²)^{−1},   q_ii = 0,                                       (5)
as the embedding similarity distribution, which has a heavier tail than the Gaussian used in SNE and
SSNE. For brevity we denote such distribution by T1 . Using this distribution yields the gradient of
t-SNE:
    ∂J/∂y_i = 4 Σ_j (P_ij − Q_ij)(y_i − y_j)(1 + ‖y_i − y_j‖²)^{−1}.                  (6)
In addition, t-SNE employs a number of strategies to overcome the difficulties in the optimization
based on gradient descent.
3 Heavy-tailed SNE characterized by negative score functions
As the gradient derivation in [10] is restricted to the T1 distribution, we derive the gradient with a
general function that converts squared distances to similarities, with T1 as a special case. In addition,
the direct chain rule used in [10] may cause notational clutter and conceal the working components in
the gradients. We instead employ the Lagrangian technique to simplify the derivation. Our approach
can provide more insights of the working factor brought by the heavy-tailed functions.
Minimizing J (Y ) in Equation (1) with respect to Y is equivalent to the optimization problem:
    maximize_{q,Y}   L(q, Y) = Σ_{ij} P_ij log(q_ij / Σ_{a≠b} q_ab)                   (7)
    subject to       q_ij = H(‖y_i − y_j‖²),                                          (8)

where the embedding similarity function H(τ) ≥ 0 can be any function that is monotonically
decreasing with respect to τ for τ > 0. Note that H is not required to be defined as a probability
function because the symmetric SNE objective already involves normalization over all data pairs.
The extended objective using the Lagrangian technique is given by
    L̃(q, Y) = Σ_{ij} P_ij log(q_ij / Σ_{a≠b} q_ab) + Σ_{ij} λ_ij (q_ij − H(‖y_i − y_j‖²)).   (9)
Setting ∂L̃(q, Y)/∂q_ij = 0 yields λ_ij = 1/Σ_{a≠b} q_ab − P_ij/q_ij. Inserting these Lagrangian
multipliers into the gradient with respect to y_i, we have
    ∂J(Y)/∂y_i = −∂L̃(q, Y)/∂y_i = 4 Σ_j (P_ij/q_ij − 1/Σ_{a≠b} q_ab)(−h(‖y_i − y_j‖²))(y_i − y_j)   (10)
               = 4 Σ_j (P_ij − Q_ij) S(‖y_i − y_j‖²)(y_i − y_j),                                      (11)
where h(τ) = dH(τ)/dτ and

    S(τ) = −d log H(τ)/dτ                                                             (12)

is the negative score function of H. For notational simplicity, we also write S_ij = S(‖y_i − y_j‖²).
We propose to characterize the tail heaviness of the similarity function H, relative to the one that
leads to the Gaussian, by its negative score function S, also called tail-heaviness function in this
paper. In this characterization, there is a functional operator S that maps every similarity function to
a tail-heaviness function. For the baseline Gaussian similarity, H(τ) = exp(−τ), we have S(H) =
1, i.e. S(H)(τ) = 1 for all τ. As for the Student t-distribution of a single degree of freedom,
H(τ) = (1 + τ)^{−1} and thus S(H) = H.
The above observation inspires us to further parameterize a family of tail-heaviness functions by
the power of H: S(H, α) = H^α for α ≥ 0, where a larger α value corresponds to a heavier-tailed
embedding similarity function. Such a function H can be determined by solving the first-order
differential equation −d log H(τ)/dτ = [H(τ)]^α, which gives

    H(τ) = (ατ + c)^{−1/α}                                                            (13)
Figure 1: Several functions in the power family H(τ) = (1 + ατ)^{−1/α}: α → 0 (Gaussian), α = 0.5, α = 1 (T1), α = 1.5, and α = 2.
with c a constant. Here we set c = 1 for a consistent generalization of SNE and t-SNE. Thus the
Gaussian embedding similarity function, i.e. H(τ) = exp(−τ), is achieved when α → 0. Figure 1
shows a number of functions in the power family.
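The power family and its negative score function are easy to check numerically; the finite-difference derivative below is only a verification device for this sketch, not part of any algorithm in the paper.

```python
import numpy as np

def H(tau, alpha):
    """Power-family similarity H(tau) = (1 + alpha*tau)^(-1/alpha).

    alpha -> 0 recovers the Gaussian exp(-tau); alpha = 1 gives the
    Student-t (T1) similarity 1/(1 + tau).
    """
    if alpha == 0.0:
        return np.exp(-tau)
    return (1.0 + alpha * tau) ** (-1.0 / alpha)

def S(tau, alpha, eps=1e-6):
    """Negative score S(tau) = -d log H(tau) / d tau, by central differences."""
    return -(np.log(H(tau + eps, alpha)) - np.log(H(tau - eps, alpha))) / (2 * eps)

tau = np.linspace(0.0, 5.0, 11)
```

Numerically, S(τ) matches H(τ)^α for each α, which is the defining property S(H, α) = H^α of the family.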
4 A fixed-point optimization algorithm
Unlike many other dimensionality reduction approaches that can be solved by eigendecomposition
in a single step, SNE and its variants require iterative optimization methods. Substantial efforts have
been devoted to improving the efficiency and robustness of t-SNE optimization. However, it remains
unknown whether such a comprehensive implementation also works for other types of embedding
similarity functions. Manually adjusting the involved parameters such as the learning rate and the
momentum for every function is rather time-consuming and infeasible in practice.
Here we propose to optimize symmetric SNE by a fixed-point algorithm. After rearranging the terms
in ∂J/∂y_i = 0 (see Equation (11)), we obtain the following update rule:

    Y_ki^{(t+1)} = [ Y_ki^{(t)} Σ_j B_ij + Σ_j (A_ij − B_ij) Y_kj^{(t)} ] / Σ_j A_ij,          (14)

where A_ij = P_ij S(‖y_i^{(t)} − y_j^{(t)}‖²) and B_ij = Q_ij S(‖y_i^{(t)} − y_j^{(t)}‖²). Our optimization algorithm
for HSSNE simply involves the iterative application of Equation (14). Compared with the original
t-SNE optimization algorithm, our method requires no user-provided parameters such as the learning step size and momentum, which is more convenient for applications. The fixed-point algorithm
usually converges, with the result satisfying the stationary condition ∂J/∂Y = 0. However, it is
known that the update rule (14) can diverge in some cases, for example, when the Y_ki are large. Therefore, a proof without extra conditions cannot be constructed. Here we provide two approximate
theoretical justifications for the algorithm.
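As an illustrative sketch (not the authors' implementation), the fixed-point update (14) for the power-family similarity can be written in NumPy as follows; the two-cluster toy data and the iteration count are arbitrary choices.

```python
import numpy as np

def hssne_fixed_point(P, Y, alpha=1.0, iters=50):
    """Parameter-free fixed-point iteration (14) for HSSNE.

    P     : (n, n) symmetric input similarities, zero diagonal, summing to 1.
    Y     : (n, r) initial embedding; rows are the y_i.
    alpha : tail heaviness (alpha = 1 reproduces t-SNE's T1 similarity).
    """
    for _ in range(iters):
        D2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
        q = (1.0 + alpha * D2) ** (-1.0 / alpha)
        np.fill_diagonal(q, 0.0)
        Q = q / q.sum()
        S = q ** alpha                 # S(tau) = H(tau)^alpha for this family
        A, B = P * S, Q * S
        Y = (Y * B.sum(1)[:, None] + (A - B) @ Y) / A.sum(1)[:, None]
    return Y

def kl_obj(P, Y, alpha=1.0):
    D2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    q = (1.0 + alpha * D2) ** (-1.0 / alpha)
    np.fill_diagonal(q, 0.0)
    Q = q / q.sum()
    m = P > 0
    return float((P[m] * np.log(P[m] / Q[m])).sum())

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 0.3, (10, 4)), rng.normal(3, 0.3, (10, 4))])
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
P = np.exp(-d2)
np.fill_diagonal(P, 0.0)
P /= P.sum()

Y0 = 1e-2 * rng.normal(size=(20, 2))
Y_out = hssne_fixed_point(P, Y0.copy(), alpha=1.0, iters=50)
```

Unlike the momentum-based update (4), this loop has no learning rate or momentum to tune; each sweep only needs the current A_ij and B_ij.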
Denote Δ = Y − Y^{(t)} and ∇ the gradient of J with respect to Y. Let us first approximate the HSSNE objective by the first-order Taylor expansion at the current estimate Y^{(t)}:

    J(Y) ≈ J_lin(Y) = J(Y^{(t)}) + Σ_ki ∇_ki Δ_ki .          (15)

Then we can construct an upper bound of J_lin(Y):

    G(Y, Y^{(t)}) = J_lin(Y) + (1/2) Σ_ki Δ_ki² Σ_a A_ia          (16)
as P_ia and S_ia are all nonnegative. The bound is tight at Y = Y^{(t)}, i.e. G(Y^{(t)}, Y^{(t)}) = J_lin(Y^{(t)}). Equating ∂G(Y, Y^{(t)})/∂Y = 0 implements minimization of G(Y, Y^{(t)}) and yields the update rule (14). Iteratively applying the update rule (14) thus results in a monotonically decreasing sequence of the linear approximation of the HSSNE objective: J_lin(Y^{(t)}) ≥ G(Y^{(t+1)}, Y^{(t)}) ≥ J_lin(Y^{(t+1)}).
Even if the second-order terms in the Taylor expansion of J(Y) are also considered, the update rule (14) is still justified if Y_ki^{(t)} or Y_ki^{(t+1)} − Y_ki^{(t)} are small. Let D^A and D^B be diagonal matrices with D_ii^A = Σ_j A_ij and D_ii^B = Σ_j B_ij. We can write J(Y) = J_quad(Y) + O(Δ³), where

    J_quad(Y) = J_lin(Y) + (1/2) Σ_ijkl Δ_ki Δ_lj H_ijkl .          (17)
With the approximated Hessian H_ijkl = δ_kl [ (D^A − A) − (D^B − B) ]_ij, the updating term U_ki in Newton's method Y_ki^{(t)} = Y_ki^{(t−1)} − U_ki can be determined by Σ_ki H_ijkl U_ki = ∇_lj. Solving this equation by directly inverting the huge tensor H is however infeasible in practice and thus usually implemented by iterative methods such as

    U_ki^{(v+1)} = [ (A + D^B − B) U^{(v)} + ∇^{(t)} ]_ki / D_ii^A .          (18)
Such iterations, however, still form a costly inner loop over v. To overcome this, we initialize U^{(0)} = 0 and only employ the first iteration of each inner loop. Then one can find that such an approximated Newton's update rule Y_ki^{(t+1)} = Y_ki^{(t)} − ∇_ki^{(t)} / D_ii^A is identical to Equation (14). Such a first-step approximation technique has also been used in the Mean Shift algorithm as a generalized Expectation-Maximization solution [2].
5 A local mixture interpretation
Further rearranging the update rule can give us more insights into the properties of SSNE solutions:

    Y_ki^{(t+1)} = Σ_j A_ij [ Y_kj^{(t)} + (Q_ij/P_ij)(Y_ki^{(t)} − Y_kj^{(t)}) ] / Σ_j A_ij .          (19)
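The equivalence of this form with Equation (14) can be checked directly: since A_ij (Q_ij/P_ij) = B_ij, expanding the bracket in (19) recovers the numerator of (14). A quick numeric check of this identity, with arbitrary positive matrices standing in for P, Q, and S:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 6
P, Q, S = rng.random((n, n)), rng.random((n, n)), rng.random((n, n))
Y = rng.standard_normal((2, n))
A, B = P * S, Q * S                        # A_ij = P_ij S_ij, B_ij = Q_ij S_ij

# Numerator of Eq. (14): Y_ki * sum_j B_ij + sum_j (A_ij - B_ij) Y_kj
num14 = Y * B.sum(axis=1) + Y @ (A - B).T

# Numerator of Eq. (19): sum_j A_ij [ Y_kj + (Q_ij/P_ij)(Y_ki - Y_kj) ]
num19 = np.empty_like(Y)
for i in range(n):
    bracket = Y + (Q[i] / P[i]) * (Y[:, [i]] - Y)   # one column per j
    num19[:, i] = (A[i] * bracket).sum(axis=1)

assert np.allclose(num14, num19)
```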
One can see that the above update rule mimics the maximization step in the EM-algorithm for
classical Gaussian mixture model (e.g. [7]), or more particularly, the Mean Shift method [3, 2].
This resemblance inspires us to find an alternative interpretation of the SNE behavior in terms of a
particular mixture model.
Given the current estimate Y^{(t)}, the fixed-point update rule actually performs minimization of

    Σ_ij P_ij S_ij ‖y_i − μ_ij^{(t)}‖² ,          (20)

where μ_ij^{(t)} = y_j^{(t)} + (Q_ij/P_ij)(y_i^{(t)} − y_j^{(t)}). This problem is equivalent to maximizing the Jensen lower bound of

    log Σ_ij P_ij S_ij exp( −‖y_i − μ_ij^{(t)}‖² ) .          (21)
In this form, μ_ij^{(t)} can be regarded as the mean of the j-th mixture component for the i-th embedded data point, while the product P_ij S_ij can be thought of as the mixing coefficients¹. Note that each data sample has its own mixing coefficients because of locality sensitivity.
For the converged estimate, i.e., Y^{(t+1)} = Y^{(t)} = Y*, we can rewrite the mixture without the logarithm as

    Σ_ij P_ij S_ij exp{ −(1 − Q_ij/P_ij)² ‖y_i* − y_j*‖² } .          (22)
Maximizing this quantity clearly explains the ingredients of symmetric SNE: (1) P_ij reflects that symmetric SNE favors close pairs in the input space, which is also adopted by most other locality preserving methods. (2) As discussed in Section 3, S_ij characterizes the tail heaviness of the embedding similarity function. For the baseline Gaussian similarity, this reduces to one and thus has no effect. For heavy-tailed similarities, S_ij can compensate for mismatched dimensionalities between the input space and its embedding. (3) The first factor in the exponential emphasizes the distance graph matching, which underlies the success of SNE and its variants for capturing the global data structure, compared with many other approaches that rely only on variance constraints [10]. A pair of Q_ij that approximates P_ij well can increase the exponential, while a pair with a poor mismatch yields little contribution to the mixture. (4) Finally, as credited in many other continuity preserving methods, the second factor in the exponential forces close pairs in the input space to also be situated nearby in the embedding space.

¹ The data samples in such a symmetric mixture model do not follow the independent and identically distributed (i.i.d.) assumption because the mixing coefficient rows do not sum to the same number. Nevertheless, this does not affect our subsequent pairwise analysis.
6 Experiments
6.1 t-SNE for unsupervised visualization
In this section we present experiments on unsupervised visualization with the T1 distribution, where our Fixed-Point t-SNE is compared with the original Gradient t-SNE optimization method as well as another dimensionality reduction approach, Laplacian Eigenmap [1]. Due to space limitations, we focus on three data sets, iris, wine, and segmentation (training subset), from the UCI repository². We followed the instructions in [10] for calculating P_ij and choosing the learning rate η and momentum amount α(t) for Gradient t-SNE. We excluded two tricks described in [10], "early compression" and "early exaggeration", from the comparison of long-run optimization because they belong to the initialization stage. Both Fixed-Point and Gradient t-SNEs execute with the same initialization, which uses the "early compression" trick and pre-runs Gradient t-SNE for 50 iterations as suggested in [10].
The visualization quality can be quantified using the ground truth class information. We adopt the measurement of the homogeneity of nearest neighbors:

    homogeneity = κ/n ,          (23)

where κ is the number of mapped points belonging to the same class as their nearest neighbor and n again is the total number of points. A larger homogeneity generally indicates better separability of the classes.
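The homogeneity measure in (23) is easy to compute from an embedding; a small sketch (function and variable names are ours):

```python
import numpy as np

def homogeneity(Y, labels):
    """Fraction of points whose nearest neighbor in the embedding shares
    their class label, as in Eq. (23). Y: (n, dim) points, labels: (n,)."""
    Y = np.asarray(Y, dtype=float)
    labels = np.asarray(labels)
    d2 = ((Y[:, None, :] - Y[None, :, :]) ** 2).sum(axis=2)  # pairwise distances
    np.fill_diagonal(d2, np.inf)          # a point is not its own neighbor
    nn = d2.argmin(axis=1)                # index of each point's nearest neighbor
    return np.mean(labels == labels[nn])
```

For two tight, well-separated clusters with consistent labels this returns 1.0; with fully interleaved labels it returns 0.0.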
The experimental results are shown in Figure 2. Despite having a globally optimal solution, the Laplacian Eigenmap yields poor visualizations, since none of the classes can be isolated. By contrast, both t-SNE methods achieve much higher homogeneities and most clusters are well separated in the visualization plots. Comparing the two t-SNE implementations, one can see that our simple fixed-point algorithm converges even slightly faster than the comprehensive and carefully tuned Gradient t-SNE. Besides efficiency, our approach performs as well as Gradient t-SNE in terms of both t-SNE objectives and homogeneities of nearest neighbors for these data sets.
6.2 Semi-supervised visualization
Unsupervised symmetric SNE or t-SNE may perform poorly for some data sets in terms of identifying classes. In such cases it is better to include some supervised information and apply semi-supervised learning to enhance the visualization.
Let us consider another data set, vehicle, from the LIBSVM repository³. The top-left plot in Figure 3 demonstrates a poor visualization using unsupervised Gradient t-SNE. Next, suppose 10% of the intra-class relationships are known. We can construct a supervised matrix u where u_ij = 1 if x_i and x_j are known to belong to the same class and 0 otherwise. After normalizing U_ij = u_ij / Σ_{a≠b} u_ab, we calculate the semi-supervised similarity matrix P̃ = (1 − β)P + βU, where the trade-off parameter β is set to 0.5 in our experiments. All SNE learning algorithms remain unchanged except that P is replaced with P̃.
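Building the blended similarity matrix is a short computation; a sketch under the convention above, with the trade-off weight (called beta here; the paper's symbol choice is garbled in this copy) defaulting to 0.5, and all names ours:

```python
import numpy as np

def semi_supervised_P(P, same_class_pairs, n, beta=0.5):
    """Blend the unsupervised similarities P with a normalized supervision
    matrix U built from known intra-class pairs: P_tilde = (1-beta)*P + beta*U."""
    u = np.zeros((n, n))
    for i, j in same_class_pairs:        # known same-class relationships
        u[i, j] = u[j, i] = 1.0
    U = u / u.sum()                      # U_ij = u_ij / sum_{a != b} u_ab (zero diagonal)
    return (1.0 - beta) * P + beta * U
```

Since P and U each sum to 1, the blended matrix P̃ also sums to 1, so it can be used in place of P without further changes to the learning algorithm.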
² http://archive.ics.uci.edu/ml/
³ http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
[Figure 2 plots. The first row shows learning-time curves (t-SNE cost vs. learning time in seconds) comparing Gradient t-SNE and Fixed-Point t-SNE on each data set. The scatter-plot panels report: Laplacian Eigenmap (homogeneity=0.47, D_KL=1.52; homogeneity=0.38, D_KL=1.67; homogeneity=0.42, D_KL=1.86), Gradient t-SNE (homogeneity=0.96, D_KL=0.36; homogeneity=0.96, D_KL=0.15; homogeneity=0.86, D_KL=0.24), and Fixed-Point t-SNE (homogeneity=0.95, D_KL=0.16; homogeneity=0.83, D_KL=0.24; homogeneity=0.97, D_KL=0.37).]
Figure 2: Unsupervised visualization on three data sets. Columns 1 to 3 are results of iris, wine and segmentation, respectively. The first row comprises the learning times of Gradient and Fixed-Point t-SNEs. The second to fourth rows are visualizations using Laplacian Eigenmap, Gradient t-SNE, and Fixed-Point t-SNE, respectively.
[Figure 3 plots, with their reported scores: unsupervised Gradient t-SNE (homogeneity=0.69, D_KL=3.24), semi-supervised Gradient t-SNE (homogeneity=0.92, D_KL=2.58), and the fixed-point power family at α=0 (homogeneity=0.79, D_KL=2.78), α=0.5 (homogeneity=0.87, D_KL=2.71), α=1 (homogeneity=0.94, D_KL=2.60), and α=1.5 (homogeneity=0.96, D_KL=2.61).]
Figure 3: Semi-supervised visualization for the vehicle data set. The plots titled with α values are produced using the fixed-point algorithm of the power family of HSSNE.
The top-middle plot in Figure 3 shows that inclusion of some supervised information improves the homogeneity (0.92) and the visualization, where Classes 3 and 4 become identifiable, but the classes are still very close to each other, with Classes 1 and 2 especially heavily mixed. We then tried the power family of HSSNE with α ranging from 0 to 1.5, using our fixed-point algorithm. It can be seen that as α increases, the cyan and magenta clusters become more separate and Classes 1 and 2 can also be identified. With α = 1 and α = 1.5, the HSSNEs implemented by our fixed-point algorithm achieve even higher homogeneities (0.94 and 0.96, respectively) than the Gradient t-SNE. On the other hand, too large an α may increase the number of outliers and the Kullback-Leibler divergence.
7 Conclusions
The working mechanism of Heavy-tailed Symmetric Stochastic Neighbor Embedding (HSSNE) has been investigated rigorously. The main findings are: (1) we propose to use a negative score function to characterize and parameterize the heavy-tailed embedding similarity functions; (2) this finding has provided us with a power family of functions that convert distances to embedding similarities; and (3) we have developed a fixed-point algorithm for optimizing SSNE, which greatly saves the effort of tuning program parameters and facilitates the extensions and applications of heavy-tailed SSNE. We have compared HSSNE against t-SNE and Laplacian Eigenmap using the UCI and LIBSVM repositories. Two sets of experimental results from unsupervised and semi-supervised visualization indicate that our method is efficient, accurate, and versatile compared with the other two approaches.
Our future work might include further empirical studies on the learning speed and robustness of
HSSNE by using more extensive, especially large-scale, experiments. It also remains important to
investigate acceleration techniques in both initialization and long-run stages of the learning.
8 Acknowledgement
The authors appreciate the reviewers for their extensive and informative comments for the improvement of this paper. This work is supported by a grant from the Research Grants Council of the Hong Kong Special Administrative Region, China (Project No. CUHK 4128/08E).
References
[1] M. Belkin and P. Niyogi. Laplacian eigenmaps and spectral techniques for embedding and clustering. Advances in Neural Information Processing Systems, 14:585–591, 2002.
[2] M. A. Carreira-Perpiñán. Gaussian mean-shift is an EM algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(5):767–776, 2007.
[3] D. Comaniciu and P. Meer. Mean shift: A robust approach toward feature space analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence, 24(5):603–619, 2002.
[4] J. A. Cook, I. Sutskever, A. Mnih, and G. E. Hinton. Visualizing similarity data with a mixture of maps. In Proceedings of the 11th International Conference on Artificial Intelligence and Statistics, volume 2, pages 67–74, 2007.
[5] M. Gashler, D. Ventura, and T. Martinez. Iterative non-linear dimensionality reduction with manifold sculpting. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 513–520. MIT Press, Cambridge, MA, 2008.
[6] G. Hinton and S. Roweis. Stochastic neighbor embedding. Advances in Neural Information Processing Systems, 15:833–840, 2003.
[7] G. J. McLachlan and D. Peel. Finite Mixture Models. Wiley, 2000.
[8] J. A. K. Suykens. Data visualization and dimensionality reduction using kernel maps with a reference point. IEEE Transactions on Neural Networks, 19(9):1501–1517, 2008.
[9] J. B. Tenenbaum, V. de Silva, and J. C. Langford. A global geometric framework for nonlinear dimensionality reduction. Science, 290(5500):2319–2323, Dec. 2000.
[10] L. van der Maaten and G. Hinton. Visualizing data using t-SNE. Journal of Machine Learning Research, 9:2579–2605, 2008.
Human Rademacher Complexity
Xiaojin Zhu¹, Timothy T. Rogers², Bryan R. Gibson¹
Department of {¹Computer Sciences, ²Psychology}
University of Wisconsin-Madison, Madison, WI 53706
[email protected], [email protected], [email protected]
Abstract
We propose to use Rademacher complexity, originally developed in computational learning theory, as a measure of human learning capacity. Rademacher complexity measures a learner's ability to fit random labels, and can be used to bound the learner's true error based on the observed training sample error. We first review the definition of Rademacher complexity and its generalization bound. We then describe a "learning the noise" procedure to experimentally measure human Rademacher complexities. The results from empirical studies showed that: (i) human Rademacher complexity can be successfully measured, (ii) the complexity depends on the domain and training sample size in intuitive ways, (iii) human learning respects the generalization bounds, (iv) the bounds can be useful in predicting the danger of overfitting in human learning. Finally, we discuss the potential applications of human Rademacher complexity in cognitive science.
1 Introduction
Many problems in cognitive psychology arise from questions of capacity. How much information can human beings hold in mind and deploy in simple memory tasks [19, 15, 6]? What kinds of functions can humans easily acquire when learning to classify items [29, 7], and do they have biases for learning some functions over others [10]? Is there a single domain-general answer to these questions, or is the answer domain-specific [28]? How do human beings avoid over-fitting learning examples when acquiring knowledge that allows them to generalize [20]? Such questions are central to a variety of research in cognitive psychology, but only recently have they begun to be placed on a formal mathematical footing [7, 9, 5].
Machine learning offers a variety of formal approaches to measuring the capacity of a learning system, with concepts such as Vapnik-Chervonenkis (VC) dimension [27, 25, 12] and Rademacher complexity [1, 13, 24]. Based on these notions of capacity, one can quantify the generalization performance of a classifier, and the danger of over-fitting, by bounding its future test error using its observed training sample error. In this paper, we show how one such concept, Rademacher complexity, can be measured in humans, based on their performance in a "learning the noise" procedure. We chose Rademacher complexity (rather than the better-known VC dimension) because it is particularly amenable to experimental studies, as discussed in Section 5. We assess whether human capacity varies depending on the nature of the materials to be categorized, and empirically test whether human generalization behavior respects the error bounds in a variety of categorization tasks. The results validate Rademacher complexity as a meaningful measure of human learning capacity, and provide a new perspective on the human tendency to overfit training data in category learning tasks. We note that our aim is not to develop a new formal approach to complexity, but rather to show how a well-studied formal measure can be computed for human beings.
2 Rademacher Complexity
Background and definitions. Let X be a domain of interest, which in psychology corresponds to a stimulus space. For example, X could be an infinite set of images parametrized by some continuous parameters, a finite set of words, etc. We will use x ∈ X to denote an instance (e.g., an image or a word) from the domain; precisely how x is represented is immaterial. We assume there is an underlying marginal distribution P_X on X, such that x is sampled with probability P_X(x) during training and testing. For example, P_X can be uniform on the set of words.
Let f : X → ℝ be a real-valued function. This corresponds to a hypothesis that predicts the label of any instance in the domain. The label can be a continuous value for regression, or {−1, 1} for binary classification. Let F be the set of such functions, or the hypothesis space, that we consider. For example, in machine learning F may be the set of linear classifiers. In the present work, we will take F to be the (possibly infinite) set of hypotheses from X to binary classes {−1, 1} that humans can come up with.

Rademacher complexity (see for example [1]) measures the capacity of the hypothesis space F by how easy it is for F to fit random noise. Consider a sample of n instances: x_1, ..., x_n drawn i.i.d. from P_X. Now generate n random numbers σ_1, ..., σ_n, each taking value −1 or 1 with equal probability. For a given function f ∈ F, its fit to the random numbers is defined as | Σ_{i=1}^n σ_i f(x_i) |. This is easier to understand when f produces −1, 1 binary labels. In this case, the σ's can be thought of as random labels, and {(x_i, σ_i)}_{i=1}^n as a training sample. The fit measures how f's predictions match the random labels on the training sample: if f perfectly predicts the σ's, or completely the opposite by flipping the classes, then the fit is maximized at n; if f's predictions are orthogonal to the σ's, the fit is minimized at 0.

The fit of a set of functions F is defined as sup_{f∈F} | Σ_{i=1}^n σ_i f(x_i) |. That is, we are fitting the particular training sample by finding the hypothesis in F with the best fit. If F is rich, it will be easier to find a hypothesis f ∈ F that matches the random labels, and its fit will be large. On the other hand, if F is simple (e.g., in the extreme containing only one function f), it is unlikely that f(x_i) will match σ_i, and its fit will be close to zero.
Finally, recall that {(x_i, σ_i)}_{i=1}^n is a particular random training sample. If, for every random training sample of size n, there always exists some f ∈ F (which may be different each time) that matches it, then F is very good at fitting random noise. This also means that F is prone to overfitting, whose very definition is to learn noise. This is captured by taking the expectation over training samples:

Definition 1 (Rademacher Complexity). For a set of real-valued functions F with domain X, a distribution P_X on X, and a size n, the Rademacher complexity R(F, X, P_X, n) is

    R(F, X, P_X, n) = E_{x,σ} [ (2/n) sup_{f∈F} | Σ_{i=1}^n σ_i f(x_i) | ] ,          (1)

where the expectation is over x = x_1, ..., x_n ~iid P_X, and σ = σ_1, ..., σ_n ~iid Bernoulli(1/2, 1/2) with values ±1.
Rademacher complexity depends on the hypothesis space F, the domain X, the distribution P_X on the domain, as well as the training sample size n. The size n is relevant because for a fixed F, it will be increasingly difficult to fit random noise as n gets larger. On the other hand, it is worth noting that Rademacher complexity is independent of any future classification tasks. For example, in Section 4 we will discuss two different tasks on the same X (set of words): classifying a word by its emotional valence, or by its length. These two tasks will share the same Rademacher complexity. In general, the value of Rademacher complexity will depend on the range of F. In the special case when F is a set of functions mapping x to {−1, 1}, R(F, X, P_X, n) is between 0 and 2.

A particularly important property of Rademacher complexity is that it can be estimated from random samples. Let {(x_i^{(1)}, σ_i^{(1)})}_{i=1}^n, ..., {(x_i^{(m)}, σ_i^{(m)})}_{i=1}^n be m random samples of size n each. In Section 3, these will correspond to m different subjects. The following theorem is an extension of Theorem 11 in [1]. The proof follows from McDiarmid's inequality [16].
Theorem 1. Let F be a set of functions mapping to [−1, 1]. For any integers n, m, and any ε > 0,

    P( | R(F, X, P_X, n) − (1/m) Σ_{j=1}^m sup_{f∈F} (2/n) | Σ_{i=1}^n σ_i^{(j)} f(x_i^{(j)}) | | ≥ ε ) ≤ 2 exp( −ε² n m / 8 ) .          (2)
Theorem 1 allows us to estimate the expectation in (1) with random samples, which is of practical
importance. It remains to compute the supremum in (1). In Section 3, we will discuss our procedure
to approximate the supremum in the case of human learning.
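For a finite hypothesis class, the empirical quantity inside Theorem 1 can be computed directly; a sketch with a toy threshold class on [0, 1] standing in for F (the class and all names are our own illustration, not from the paper):

```python
import numpy as np

def estimate_rademacher(hypotheses, sample_x, n, m, rng):
    """Monte Carlo estimate of R(F, X, P_X, n) per Theorem 1.
    hypotheses: list of functions x -> {-1, +1} (a finite stand-in for F);
    sample_x: callable returning n i.i.d. instances from P_X."""
    total = 0.0
    for _ in range(m):
        xs = sample_x(n)
        sigma = rng.choice([-1.0, 1.0], size=n)          # random +/-1 labels
        fits = [abs(np.sum(sigma * np.array([h(x) for x in xs])))
                for h in hypotheses]
        total += (2.0 / n) * max(fits)                   # sup over the class
    return total / m

# Illustration: threshold classifiers on X = [0, 1] with uniform P_X.
rng = np.random.default_rng(0)
F = [lambda x, t=t: 1.0 if x >= t else -1.0 for t in np.linspace(0, 1, 21)]
R_hat = estimate_rademacher(F, lambda n: rng.random(n), n=20, m=50, rng=rng)
print(round(R_hat, 2))   # for this simple class, larger n drives the estimate toward 0
```

By construction the estimate lies in [0, 2], matching the range noted above for {−1, 1}-valued classes.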
Generalization Error Bound. We state a generalization error bound by Bartlett and Mendelson (Theorem 5 in [1]) as an important application of Rademacher complexity. Consider any binary classification task of predicting a label in Y = {−1, 1} for x ∈ X. For example, the label y could be the emotional valence (positive = 1 vs. negative = −1) of a word x. In general, a binary classification task is characterized by a joint distribution P_XY on X × {−1, 1}. Training data and future test data consist of instance-label pairs (x, y) ~iid P_XY. Let F be a set of binary classifiers that map X to {−1, 1}. For f ∈ F, let 1(y ≠ f(x)) be an indicator function which is 1 if y ≠ f(x), and 0 otherwise. On a training sample {(x_i, y_i)}_{i=1}^n of size n, the observed training sample error of f is ê(f) = (1/n) Σ_{i=1}^n 1(y_i ≠ f(x_i)). The more interesting quantity is the true error of f, i.e., how well f can generalize to future test data: e(f) = E_{(x,y) ~iid P_XY} [1(y ≠ f(x))]. Rademacher complexity allows us to bound the true error using the training sample error as follows.

Theorem 2. (Bartlett and Mendelson) Let F be a set of functions mapping X to {−1, 1}. Let P_XY be a probability distribution on X × {−1, 1} with marginal distribution P_X on X. Let {(x_i, y_i)}_{i=1}^n ~iid P_XY be a training sample of size n. For any δ > 0, with probability at least 1 − δ, every function f ∈ F satisfies

    e(f) ≤ ê(f) + R(F, X, P_X, n)/2 + √( ln(1/δ) / (2n) ) .          (3)

The probability 1 − δ is over random draws of the training sample. That is, if one draws a large number of training samples of size n each, then (3) is expected to hold on a 1 − δ fraction of those samples. The bound has two factors, one from the Rademacher complexity and the other from the confidence parameter δ and training sample size n. When the bound is tight, training sample error is a good indicator of true error, and we can be confident that overfitting is unlikely. A tight bound requires the Rademacher complexity to be close to zero. On the other hand, if the Rademacher complexity is large, or n is too small, or the requested confidence 1 − δ is overly stringent, the bound can be loose. In that case, there is a danger of overfitting. We will demonstrate this generalization error bound on four different human classification tasks in Section 4.
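Once R(F, X, P_X, n) has been measured, evaluating the bound (3) is a one-line computation; a sketch with made-up numbers for illustration:

```python
import math

def true_error_bound(train_err, rademacher, n, delta=0.05):
    """Upper bound on the true error e(f) from Eq. (3):
    e(f) <= train_err + R/2 + sqrt(ln(1/delta) / (2n))."""
    return train_err + rademacher / 2.0 + math.sqrt(math.log(1.0 / delta) / (2.0 * n))

# e.g. a subject with 20% training error, measured R = 0.6, n = 40 training items:
print(round(true_error_bound(0.20, 0.6, 40), 3))   # prints 0.694
```

As the discussion above suggests, the bound tightens as n grows and loosens as the measured complexity or the requested confidence increases.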
3 Measuring Human Rademacher Complexity by Learning the Noise
Our aim is to measure the Rademacher complexity of the human learning system for a given stimulus
space X , distribution of instances PX , and sample-size n. By ?human learning system,? we mean
the set of binary classification functions that an average human subject can come up with on the
domain X , under the experiment conditions described below. We will denote this set of functions F
with Ha , that is, ?average human.?
We make two assumptions. The first is the assumption of universality [2]: every individual has the
same $H_a$. It allows us to pool subjects together. This assumption can be loosened in the future.
For instance, $\mathcal{F}$ could be defined as the set of functions that a particular individual or group can
employ in the learning task, such as a given age group, education level, or other special population.
A second assumption is required to compute the supremum on $H_a$. Obviously, we cannot measure
and compare the performance of every single function contained in $H_a$. Instead, we assume that,
when making their classification judgments, participants use the best function at their disposal, so
that the errors they make when tested on the training instances reflect the error generated by the
best-performing function in $H_a$, thus providing a direct measure of the supremum. In essence, the
assumption is that participants are doing their best to perform the task.
With the two assumptions above, we can compute human Rademacher complexity for a given
stimulus domain $\mathcal{X}$, distribution $P_X$, and sample size $n$, by assessing how well human participants are able to learn randomly-assigned labels. Each participant is presented with a training
sample $\{(x_i, \sigma_i)\}_{i=1}^n$ where the $\sigma$'s are random $\pm 1$ labels, and asked to learn the instance-label
mapping. The subject is not told that the labels are random. We assume that the subject will
search within $H_a$ for the best hypothesis ("rule"), which is the one that minimizes training error:
$f^* = \operatorname{argmax}_{f \in H_a} \sum_{i=1}^n \sigma_i f(x_i) = \operatorname{argmin}_{f \in H_a} \hat{e}(f)$. We do not directly observe $f^*$. Later, we
ask the subject to classify the same training instances $\{x_i\}_{i=1}^n$ using what she has learned. Her classification labels will be $f^*(x_1), \ldots, f^*(x_n)$, which
we do observe. We then approximate the supremum as follows: $\sup_{f \in H_a} \frac{2}{n} \sum_{i=1}^n \sigma_i f(x_i) \approx \frac{2}{n} \sum_{i=1}^n \sigma_i f^*(x_i)$. For the measured Rademacher
complexity to reflect actual learning capacity on the set Ha , it is important to prevent participants
from simply doing rote learning. With these considerations, we propose the following procedure to
estimate human Rademacher complexity.
Procedure. Given domain $\mathcal{X}$, distribution $P_X$, training sample size $n$, and number of subjects $m$,
we generate $m$ random samples of size $n$ each: $\{(x_i^{(1)}, \sigma_i^{(1)})\}_{i=1}^n, \ldots, \{(x_i^{(m)}, \sigma_i^{(m)})\}_{i=1}^n$, where
$x_i^{(j)} \stackrel{iid}{\sim} P_X$ and $\sigma_i^{(j)} \stackrel{iid}{\sim} \mathrm{Bernoulli}(\frac{1}{2}, \frac{1}{2})$ with values $\pm 1$, for $j = 1 \ldots m$. The procedure is paper-and-pencil based, and consists of three steps:

Step 1. Participant $j$ is shown a printed sheet with the training sample $\{(x_i^{(j)}, \sigma_i^{(j)})\}_{i=1}^n$, where
each instance $x_i^{(j)}$ is paired with its random label $\sigma_i^{(j)}$ (shown as "A" and "B" instead of -1, 1 for
convenience). The participant is informed that there are only two categories; the order does not matter; they have three minutes to study the sheet; and later they will be asked to use what they have
learned to categorize more instances into "A" or "B".

Step 2. After three minutes the sheet is taken away. To prevent active maintenance of the training
items in working memory, the participant performs a filler task consisting of ten two-digit addition/subtraction questions.

Step 3. The participant is given another sheet with the same training instances $\{x_i^{(j)}\}_{i=1}^n$ but no
labels. The order of the $n$ instances is randomized and different from Step 1. The participant is not
told that they are the same training instances, is asked to categorize each instance as "A" or "B",
and is encouraged to guess if necessary. There is no time limit.

Let $f^{(j)}(x_1^{(j)}), \ldots, f^{(j)}(x_n^{(j)})$ be subject $j$'s answers (encoded as $\pm 1$). We estimate
$R(H_a, \mathcal{X}, P_X, n)$ by $\frac{1}{m} \sum_{j=1}^m \frac{2}{n} \sum_{i=1}^n \sigma_i^{(j)} f^{(j)}(x_i^{(j)})$. We also conduct a post-experiment interview where the subject reports any insights or hypotheses they may have on the categories.
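The estimation procedure above can be mirrored in simulation: replace the human subject with an explicit hypothesis class, let the "subject" pick the best-fitting rule for each random labeling, and average $\frac{2}{n}\sum_i \sigma_i f^*(x_i)$ over label draws. The sketch below uses a toy class of threshold rules on $[0, 1]$; the class, the instance grids, and all parameter choices are illustrative assumptions, not part of the paper's experiments.

```python
import random

def empirical_rademacher(hypotheses, xs, num_draws=200, seed=0):
    """Monte Carlo estimate of E_sigma sup_f (2/n) * sum_i sigma_i * f(x_i)."""
    rng = random.Random(seed)
    n = len(xs)
    total = 0.0
    for _ in range(num_draws):
        sigmas = [rng.choice([-1, 1]) for _ in range(n)]
        # The ideal "subject" finds the hypothesis best correlated with the labels.
        best = max(sum(s * f(x) for s, x in zip(sigmas, xs)) for f in hypotheses)
        total += 2.0 * best / n
    return total / num_draws

# Toy "learner": all threshold rules on [0, 1], in both orientations.
hypotheses = [
    (lambda x, t=t / 10, s=s: s * (1 if x >= t else -1))
    for t in range(11) for s in (-1, 1)
]

r5 = empirical_rademacher(hypotheses, [i / 4 for i in range(5)])
r40 = empirical_rademacher(hypotheses, [i / 39 for i in range(40)])
print(r5 > r40)  # complexity shrinks as the sample size grows
```

As with the human measurements, the estimate lies in $[0, 2]$ and decreases with $n$ for a fixed class.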
Materials. To instantiate the general procedure, one needs to specify the domain $\mathcal{X}$ and an associated
marginal distribution $P_X$. For simplicity, in all our experiments $P_X$ is the uniform distribution over
the corresponding domain. We conducted experiments on two example domains. They are not of specific interest in themselves, but nicely illustrate many interesting properties of human Rademacher
complexity: (1) The "Shape" Domain. $\mathcal{X}$ consists of 321 computer-generated 3D shapes [3]. The
shapes are parametrized by a real number $x \in [0, 1]$, such that small $x$ produces spiky shapes, while
large $x$ produces smooth ones. A few instances and their parameters are shown in Figure 1(a). It
is important to note that this internal structure is unnecessary to the definition or measurement of
Rademacher complexity per se. However, in Section 4 we will define some classification tasks that
utilize this internal structure. Participants have little existing knowledge about this domain. (2) The
"Word" Domain. $\mathcal{X}$ consists of 321 English words. We start with the Wisconsin Perceptual Attribute Ratings Database [18], which contains words rated by 350 undergraduates for their emotional
valence. We sort the words by their emotional valence, and take the 161 most positive and the 160
most negative ones for use in the study. A few instances and their emotion ratings are shown in
Figure 1(b). Participants have rich knowledge about this domain. The size of the domain for shapes
and words was matched to facilitate comparison.
Participants were 80 undergraduate students, participating for partial course credit. They were
divided evenly into eight groups. Each group of $m = 10$ subjects worked on a unique combination
of domain (Shape or Word) and training sample size $n \in \{5, 10, 20, 40\}$, using the procedure
defined previously.
Figure 1: Human Rademacher complexity on the "Shape" and "Word" domains. (a) Examples from the Shape domain, with parameters 0, 1/4, 1/2, 3/4, 1. (b) Examples from the Word domain, with emotion ratings: rape −5.60, funeral −5.47, killer −5.55, fun 4.91, laughter 4.95, joy 5.19. (c) $R(H_a, \text{Shape}, \text{uniform}, n)$ and (d) $R(H_a, \text{Word}, \text{uniform}, n)$, plotted against $n$.
Results. Figures 1(c,d) show the measured human Rademacher complexities on the domains
$\mathcal{X}$ = Shape and Word respectively, with distribution $P_X$ = uniform, and with different training sample sizes $n$. The error bars are 95% confidence intervals. Several interesting observations can be
made from the data:
Observation 1: human Rademacher complexities in both domains decrease as $n$ increases. This is
anticipated, as it should be harder to learn a larger number of random labels. Indeed, when $n = 5$,
our interviews show that, in both domains, 9 out of 10 participants offered some spurious rules
for the random labels. For example, one subject thought the shape categories were determined by
whether the shape "faces" downward; another thought the word categories indicated whether the
word contains the letter T. Such beliefs, though helpful in learning the particular training samples,
amount to over-fitting the noise. In contrast, when $n = 40$, about half the participants indicated that
they believed the labels to be random, as spurious "rules" are more difficult to find.
Observation 2: human Rademacher complexities are significantly higher in the Word domain than
in the Shape domain, for $n$ = 10, 20, 40 respectively (t-tests, $p < 0.05$). The higher complexity
indicates that, for the same sample sizes, participants are better able to find spurious explanations of
the training data for the Words than for the Shapes. Two distinct strategies were apparent in the Word
domain interviews: (i) Some participants created mnemonics. For example, one subject received the
training sample (grenade, B), (skull, A), (conflict, A), (meadow, B), (queen, B), and came up with
the following story: "a queen was sitting in a meadow and then a grenade was thrown (B = before),
then this started a conflict ending in bodies & skulls (A = after)." (ii) Other participants came up
with idiosyncratic, but often imperfect, rules: for instance, whether the item "tastes good," "relates
to motel service," or is "physical vs. abstract." We speculate that human Rademacher complexities on
other domains can be drastically different too, reflecting the richness of the participant's pre-existing
knowledge about the domain.
Observation 3: many of these human Rademacher complexities are relatively large. This means that
under those $\mathcal{X}$, $P_X$, $n$, humans have a large capacity to learn arbitrary labels, and so will be more
prone to overfit on real (i.e., non-random) tasks. We will present human generalization experiments
in Section 4. It is also interesting to note that both Rademacher complexities at $n = 5$ are less than 2:
under our procedure, participants are not perfect at remembering the labels of merely five instances.
4 Bounding Human Generalization Errors
We reiterate the interpretation of human Rademacher complexity for psychology. It does not predict
$\hat{e}$ (how well humans perform when training for a given task). Instead, Theorem 2 bounds $e - \hat{e}$,
the "amount of overfitting" (sometimes also called "instability") when the subject switches from
training to testing. A tight (close to 0) bound guarantees no severe overfitting: humans' future
Table 1: Human learning performance abides by the generalization error bounds.

condition        ID    ê      bound e   e     |  condition          ID    ê      bound e   e
Shape-+ n=5       81   0.00   1.35      0.05  |  WordEmotion n=5    101   0.00   1.43      0.58
Shape-+ n=5       82   0.00   1.35      0.22  |  WordEmotion n=5    102   0.00   1.43      0.46
Shape-+ n=5       83   0.00   1.35      0.10  |  WordEmotion n=5    103   0.00   1.43      0.04
Shape-+ n=5       84   0.00   1.35      0.09  |  WordEmotion n=5    104   0.00   1.43      0.03
Shape-+ n=5       85   0.00   1.35      0.07  |  WordEmotion n=5    105   0.00   1.43      0.31
Shape-+ n=40      86   0.05   0.39      0.04  |  WordEmotion n=40   106   0.70   1.23      0.65
Shape-+ n=40      87   0.03   0.36      0.14  |  WordEmotion n=40   107   0.00   0.53      0.04
Shape-+ n=40      88   0.03   0.36      0.03  |  WordEmotion n=40   108   0.00   0.53      0.00
Shape-+ n=40      89   0.00   0.34      0.04  |  WordEmotion n=40   109   0.62   1.15      0.53
Shape-+ n=40      90   0.00   0.34      0.01  |  WordEmotion n=40   110   0.00   0.53      0.05
Shape-+- n=5      91   0.00   1.35      0.23  |  WordLength n=5     111   0.00   1.43      0.46
Shape-+- n=5      92   0.00   1.35      0.27  |  WordLength n=5     112   0.00   1.43      0.69
Shape-+- n=5      93   0.00   1.35      0.21  |  WordLength n=5     113   0.00   1.43      0.55
Shape-+- n=5      94   0.00   1.35      0.40  |  WordLength n=5     114   0.00   1.43      0.26
Shape-+- n=5      95   0.20   1.55      0.18  |  WordLength n=5     115   0.00   1.43      0.57
Shape-+- n=40     96   0.12   0.46      0.16  |  WordLength n=40    116   0.12   0.65      0.51
Shape-+- n=40     97   0.32   0.66      0.50  |  WordLength n=40    117   0.45   0.98      0.55
Shape-+- n=40     98   0.15   0.49      0.08  |  WordLength n=40    118   0.00   0.53      0.00
Shape-+- n=40     99   0.15   0.49      0.11  |  WordLength n=40    119   0.15   0.68      0.29
Shape-+- n=40    100   0.03   0.36      0.10  |  WordLength n=40    120   0.15   0.68      0.37
test performance $e$ will be close to their training performance $\hat{e}$. This does not mean they will do
well: $\hat{e}$ could be large and thus $e$ is similarly large. A loose bound, in contrast, is a warning sign for
overfitting: good training performance (small $\hat{e}$) may not reflect learning of the correct categorization
rule, and so does not entail good performance on future samples (i.e., $e$ can be much larger than $\hat{e}$).
We now present four non-random category-learning tasks to illustrate these points.
Materials. We consider four very different binary classification tasks to assess whether Theorem 2
holds for all of them. The tasks are: (1) Shape-+: Recall the Shape domain is parametrized by
$x \in [0, 1]$. The task has a linear decision boundary at $x = 0.5$, i.e., $P(y = 1|x) = 0$ if $x < 0.5$,
and 1 if $x \geq 0.5$. It is well known that people can easily learn such boundaries, so this is a fairly
easy task on the domain. (2) Shape-+-: This task is also on the Shape domain, but with a nonlinear
decision boundary. The negative class is on both ends while the positive class is in the middle:
$P(y = 1|x) = 0$ if $x \in [0, 0.25) \cup (0.75, 1]$, and 1 if $x \in [0.25, 0.75]$. Prior research suggests that
people have difficulty learning nonlinearly separable categories [28, 7], so this is a harder task. Note,
however, that the two shape tasks share the same Rademacher complexity, and therefore have the
same bound for the same $n$. (3) WordEmotion: This task is on the Word domain. $P(y = 1|x) = 0$
if word $x$ has a negative emotion rating in the Wisconsin Perceptual Attribute Ratings Database, and
$P(y = 1|x) = 1$ otherwise. (4) WordLength: $P(y = 1|x) = 0$ if word $x$ has 5 or fewer letters,
and $P(y = 1|x) = 1$ otherwise. The two word tasks are drastically different in that one focuses on
semantics and the other on orthography, but they share the same Rademacher complexity and thus
the same bound (for the same $n$), because the underlying domain is the same.
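As a sketch, the deterministic labeling rules above can be written down directly (the $\pm 1$ encoding matches Section 3; the function names are ours, not the paper's):

```python
def shape_plus(x):
    """Shape-+: linear decision boundary at x = 0.5."""
    return 1 if x >= 0.5 else -1

def shape_plus_minus(x):
    """Shape-+-: positive class is the middle band [0.25, 0.75]."""
    return 1 if 0.25 <= x <= 0.75 else -1

def word_length(word):
    """WordLength: positive iff the word has more than 5 letters."""
    return 1 if len(word) > 5 else -1

print(shape_plus(0.3), shape_plus_minus(0.3), word_length("laughter"))  # -1 1 1
```

Note that the same instance $x = 0.3$ gets opposite labels under the two shape tasks, even though both tasks share one Rademacher complexity.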
Procedure. The procedure is identical to that in Section 3 except for two things: (i) Instead
of random labels $\sigma$, we sample labels $y \stackrel{iid}{\sim} P(y|x)$ appropriate for each task. (ii) In Step 3,
in addition to the training instances $\{x_i^{(j)}\}_{i=1}^n$, the $j$th subject is also given 100 test instances
$\{x_i^{(j)}\}_{i=n+1}^{n+100}$, sampled from $P_X$. The order of the training and test instances is randomized.
The true test labels $y$ are sampled from $P(y|x)$. We compute the participant's training sample error as $\hat{e}(f^{(j)}) = \frac{1}{n} \sum_{i=1}^{n} \mathbb{1}[y_i \neq f^{(j)}(x_i^{(j)})]$, and estimate her generalization error as
$e(f^{(j)}) = \frac{1}{100} \sum_{i=n+1}^{n+100} \mathbb{1}[y_i \neq f^{(j)}(x_i^{(j)})]$.

Participants were 40 additional students, randomly divided into 8 groups of five each. Each group
worked on one of the four tasks, with training sample size $n = 5$ or 40.
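Both error estimates are simple disagreement rates over the relevant items. A minimal sketch (the subject's answers below are hypothetical, chosen only to illustrate the computation):

```python
def error_rate(true_labels, answers):
    """Disagreement fraction; used for both ê (n training items) and e (100 test items)."""
    return sum(t != a for t, a in zip(true_labels, answers)) / len(true_labels)

# Hypothetical subject on n = 5 training items, ±1 encoding:
y_train = [1, -1, 1, 1, -1]   # true labels sampled from P(y|x)
f_train = [1, -1, -1, 1, -1]  # the subject's classifications f^(j)(x_i)
e_hat = error_rate(y_train, f_train)
print(e_hat)  # 0.2
```

The generalization estimate $e$ is the same computation applied to the 100 held-out test items.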
Figure 2: Human Rademacher complexity predicts the trend of overfitting. (The figure plots the bound on $e - \hat{e}$ and the observed $e - \hat{e}$ against Rademacher complexity, for the Shape and Word domains at $n$ = 5 and 40.)

Results. We present the performance of individual participants in Table 1: $\hat{e}$ is the observed training error for that subject, "bound e" is the 95% confidence (i.e., $\delta = 0.05$) bound on test error
$\hat{e}(f) + R(\mathcal{F}, \mathcal{X}, P_X, n)/2 + \sqrt{\ln(1/\delta)/2n}$, and $e$ is the observed test error. We also present the
aggregated results across subjects and tasks in Figure 2, comparing the bound on $e - \hat{e}$ (the "amount
of overfitting," RHS of (3)) vs. the observed $e - \hat{e}$, as the underlying Rademacher complexity varies.
We make two more observations:
Observation 4: Theorem 2 holds for every participant. Table 1 provides empirical support that our
application of computational learning theory to human learning is viable. In fact, for our choice of
$\delta = 0.05$, Theorem 2 allows the bound to fail on about two (5% of 40) participants, which did
not happen. Of course, some of the "bound e" values are vacuous (greater than 1), as it is well known that
bounds in computational learning theory are not always tight [14], but others are reasonably tight
(e.g., on Shape-+ with $n = 40$).
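The "bound e" column can be re-derived directly from the RHS of the bound. A minimal sketch (the function name is ours; the Rademacher value 1.76 for the Word domain at $n = 5$ is the one reported in Observation 5):

```python
import math

def generalization_bound(e_hat, rademacher, n, delta=0.05):
    """RHS of (3): e <= e_hat + R/2 + sqrt(ln(1/delta) / (2n))."""
    return e_hat + rademacher / 2 + math.sqrt(math.log(1 / delta) / (2 * n))

# With zero training error, this reproduces the 1.43 entries in Table 1
# for the Word tasks at n = 5.
b = generalization_bound(e_hat=0.0, rademacher=1.76, n=5)
print(round(b, 2))  # 1.43
```

A nonzero $\hat{e}$ shifts the bound up by the same amount, which is why, e.g., subject 95's bound of 1.55 exceeds the 1.35 of the other Shape subjects at $n = 5$ by exactly their training error 0.20.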
Observation 5: the larger the Rademacher complexity, the worse the actual amount of overfitting
$e - \hat{e}$. Figure 2 shows that as $R$ increases, $e - \hat{e}$ increases (solid line; error bar ± standard error;
averaged over the two different tasks with the same domain and $n$, as noted in the graph). The bound
on $e - \hat{e}$ (dotted line; RHS of (3)) has the same trend, although, being loose, it is higher up. This
seems to be true regardless of the classification task. For example, the Word domain at $n = 5$ has
a large Rademacher complexity of 1.76, and both task WordLength and task WordEmotion severely
overfit: In task WordLength with $n = 5$, all subjects had zero training error but large test error,
suggesting that their good performance on the training items reflects overfitting. Accordingly, the
explanations offered during the post-test interviews for this group spuriously fit the training items
but did not reflect the true categorization rule. Subject 111 thought that the class decision indicated
"things you can go inside," while subject 114 thought the class indicated an odd or even number of
syllables. Similarly, on task WordEmotion with $n = 5$, three out of five subjects overfit the training
items. Subject 102 received the training items (daylight, 1), (hospital, -1), (termite, -1), (envy, -1),
(scream, -1), and concluded that class 1 is "anything related to omitting [sic] light," and proceeded
to classify the test items as such.
5 Discussions and Future Work
Is our study on memory or learning? This distinction is not necessarily relevant here, as we adopt
an abstract perspective which analyzes the human system as a black box that produces labels, and
both learning and memory contribute to the process being executed in that black box. We do have
evidence from post-interviews that Figure 1 does not merely reflect list-length effects from memory
studies: (i) participants treated the study as a category-learning and not a memory task; they were
not told that the training and test items are the same when we estimate $R$; (ii) the memory load was
identical in the Shape and the Word domains, yet the curves differ markedly; (iii) participants were
able to articulate the "rules" they were using to categorize the items.
Much recent research has explored the relationship between the statistical complexity of some categorization task and the ease with which humans learn the task [7, 5, 9, 11]. Rademacher complexity
is different: it indexes not the complexity of the $\mathcal{X} \mapsto \mathcal{Y}$ categorization task, but the sophistication
of the learner in domain $\mathcal{X}$ (note that $\mathcal{Y}$ does not appear in Rademacher complexity). Greater complexity indicates, not a more difficult categorization task, but a greater tendency to overfit sparse data.
On the other hand, our definition of Rademacher complexity depends only on the domain, distribution, and sample size. In human learning, other factors also contribute to learnability, such as the
instructions, motivation, and time to study, and these should probably be incorporated into the complexity.
Human Rademacher complexity has interesting connections to other concepts. The VC-dimension [27, 25, 12] is another capacity measure. Let $\{x_1, \ldots, x_m\} \subseteq \mathcal{X}$ be a subset of
the domain, and let $(f(x_1), \ldots, f(x_m))$ be the $\pm 1$-valued vector of classifications made
by some $f \in \mathcal{F}$. If $\mathcal{F}$ is rich enough that its members can produce all $2^m$ vectors,
$\{(f(x_1), \ldots, f(x_m)) : f \in \mathcal{F}\} = \{-1, 1\}^m$, then we say that the subset is shattered by $\mathcal{F}$. The
VC-dimension of $\mathcal{F}$ is the size of the largest subset that can be shattered by $\mathcal{F}$, or $\infty$ if $\mathcal{F}$ can shatter
arbitrarily large subsets. Unfortunately, human VC-dimension seems difficult to measure experimentally: First, shattering requires validating an exponential ($2^m$) number of classifications on a
given subset. Second, to determine that the VC-dimension is $m$, one needs to show that no subset
of size $m + 1$ can be shattered; however, the number of such subsets can be infinite. A variant,
"effective VC-dimension", may be empirically estimated from a training sample [26]. This is left
for future research. Normalized Maximum Likelihood (NML) uses a similar complexity measure
for a model class [21]; the connection merits further study ([23], p. 50).
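The combinatorial burden described above is easy to see concretely: certifying that even one $m$-point subset is shattered requires realizing all $2^m$ labelings. A small illustrative sketch with a toy threshold class (our own construction, not from the paper):

```python
def shatters(hypotheses, points):
    """True iff the finite class realizes all 2^m sign patterns on `points`."""
    realized = {tuple(f(x) for x in points) for f in hypotheses}
    return len(realized) == 2 ** len(points)

# Toy class: one-sided thresholds f_t(x) = +1 iff x >= t.
H = [lambda x, t=t / 10: (1 if x >= t else -1) for t in range(-10, 21)]

print(shatters(H, [0.5]))       # a single point gets both labels
print(shatters(H, [0.3, 0.7]))  # (+1, -1) is unrealizable, so VC-dim is 1
```

Ruling a value of the VC-dimension *out* is even harder, since every subset of the next size must fail this check, and for a human learner neither the class nor the subsets can be enumerated.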
Human Rademacher complexity might help to advance theories of human cognition in many ways.
First, human Rademacher complexity can provide a means of testing computational models of human concept learning. Traditionally, such models are assessed by comparing their performance to
human performance in terms of classification error. A new approach would be to derive or empirically estimate the Rademacher complexity of the computational models, and compare that to human
Rademacher complexity. A good computational model should match humans in this regard.
Second, our procedure could be used to measure human Rademacher complexity in individuals or
special populations, including typically and atypically-developing children and adults. Relating individual Rademacher complexity to standard measures of learning ability or IQ may shed light on
the relationship between complexity, learning, and intelligence. Many IQ tests measure the participant's ability to generalize the pattern in words or images. Individuals with very high Rademacher
complexity may actually perform worse by being "distracted" by other potential hypotheses.
Third, human Rademacher complexity may help explain the human tendency to discern patterns in
random stimuli, such as the well-known Rorschach inkblot test, "illusory correlations" [4], or the "false-memory" effect [22]. These effects may be viewed as spurious rule-fitting to (or generalization of)
the observed data, and human Rademacher complexity may quantify the possibility of observing
such an effect.
Fourth, cognitive psychologists have long entertained an interest in characterizing the capacity of
different mental processes such as, for instance, the capacity limitations of short-term memory [19,
6]. In this vein, our work suggests a different kind of metric for assessing the capacity of the human
learning system.
Finally, human Rademacher complexity can help experimental psychologists to determine the
propensity of overfitting in their stimulus materials. We have seen that human Rademacher complexity can be much higher in some domains (e.g. Word) than others (e.g. Shape). Our procedure could
be used to measure the human Rademacher complexity of many standard concept-learning materials
in cognitive science, such as the Greebles used by Tarr and colleagues [8] and the circle-and-line
stimuli of McKinley & Nosofsky [17].
Acknowledgment: We thank the reviewers for their helpful comments. XZ thanks Michael Coen for discussions that led to the realization of the difficulties in measuring human VC-dimension. This work is supported
in part by AFOSR grant FA9550-09-1-0313 and the Wisconsin Alumni Research Foundation.
References
[1] P. L. Bartlett and S. Mendelson. Rademacher and Gaussian complexities: risk bounds and structural results. Journal of Machine Learning Research, 3:463–482, 2002.
[2] A. Caramazza and M. McCloskey. The case for single-patient studies. Cognitive Neuropsychology, 5(5):517–527, 1988.
[3] R. Castro, C. Kalish, R. Nowak, R. Qian, T. Rogers, and X. Zhu. Human active learning. In Advances in Neural Information Processing Systems (NIPS) 22. 2008.
[4] L. J. Chapman. Illusory correlation in observational report. Journal of Verbal Learning and Verbal Behavior, 6:151–155, 1967.
[5] N. Chater and P. Vitanyi. Simplicity: A unifying principle in cognitive science? Trends in Cognitive Science, 7(1):19–22, 2003.
[6] N. Cowan. The magical number 4 in short-term memory: A reconsideration of mental storage capacity. Behavioral and Brain Sciences, 24:87–185, 2000.
[7] J. Feldman. Minimization of boolean complexity in human concept learning. Nature, 407:630–633, 2000.
[8] I. Gauthier and M. Tarr. Becoming a "greeble" expert: Exploring mechanisms for face recognition. Vision Research, 37(12):1673–1682, 1998.
[9] N. Goodman, J. B. Tenenbaum, J. Feldman, and T. L. Griffiths. A rational analysis of rule-based concept learning. Cognitive Science, 32(1):108–133, 2008.
[10] T. L. Griffiths, B. R. Christian, and M. L. Kalish. Using category structures to test iterated learning as a method for identifying inductive biases. Cognitive Science, 32:68–107, 2008.
[11] T. L. Griffiths and J. B. Tenenbaum. From mere coincidences to meaningful discoveries. Cognition, 103(2):180–226, 2007.
[12] M. J. Kearns and U. V. Vazirani. An Introduction to Computational Learning Theory. MIT Press, 1994.
[13] V. I. Koltchinskii and D. Panchenko. Rademacher processes and bounding the risk of function learning. In E. Gine, D. Mason, and J. Wellner, editors, High Dimensional Probability II, pages 443–459. MIT Press, 2000.
[14] J. Langford. Tutorial on practical prediction theory for classification. Journal of Machine Learning Research, 6:273–306, 2005.
[15] R. L. Lewis. Interference in short-term memory: The magical number two (or three) in sentence processing. Journal of Psycholinguistic Research, 25(1):93–115, 1996.
[16] C. McDiarmid. On the method of bounded differences. In Surveys in Combinatorics 1989, pages 148–188. Cambridge University Press, 1989.
[17] S. C. McKinley and R. M. Nosofsky. Selective attention and the formation of linear decision boundaries. Journal of Experimental Psychology: Human Perception & Performance, 22(2):294–317, 1996.
[18] D. Medler, A. Arnoldussen, J. Binder, and M. Seidenberg. The Wisconsin perceptual attribute ratings database, 2005. http://www.neuro.mcw.edu/ratings/.
[19] G. Miller. The magical number seven plus or minus two: Some limits on our capacity for processing information. Psychological Review, 63(2):81–97, 1956.
[20] R. C. O'Reilly and J. L. McClelland. Hippocampal conjunctive encoding, storage, and recall: Avoiding a tradeoff. Hippocampus, 4:661–682, 1994.
[21] J. Rissanen. Strong optimality of the normalized ML models as universal codes and information in data. IEEE Transactions on Information Theory, 47(5):1712–1717, 2001.
[22] H. L. Roediger and K. B. McDermott. Creating false memories: Remembering words not presented in lists. Journal of Experimental Psychology: Learning, Memory and Cognition, 21(4):803–814, 1995.
[23] T. Roos. Statistical and Information-Theoretic Methods for Data Analysis. PhD thesis, Department of Computer Science, University of Helsinki, 2007.
[24] J. Shawe-Taylor and N. Cristianini. Kernel Methods for Pattern Analysis. Cambridge University Press, New York, NY, USA, 2004.
[25] V. Vapnik. Statistical Learning Theory. Wiley-Interscience, 1998.
[26] V. Vapnik, E. Levin, and Y. Le Cun. Measuring the VC-dimension of a learning machine. Neural Computation, 6:851–876, 1994.
[27] V. N. Vapnik and A. Y. Chervonenkis. On the uniform convergence of relative frequencies of events to their probabilities. Theory of Probability and its Applications, 16(2):264–280, 1971.
[28] W. D. Wattenmaker. Knowledge structures and linear separability: Integrating information in object and social categorization. Cognitive Psychology, 28(3):274–328, 1995.
[29] W. D. Wattenmaker, G. I. Dewey, T. D. Murphy, and D. L. Medin. Linear separability and concept learning: Context, relational properties, and concept naturalness. Cognitive Psychology, 18(2):158–194, 1986.
Reconstruction of Sparse Circuits Using
Multi-neuronal Excitation (RESCUME)
Tao Hu and Dmitri B. Chklovskii
Janelia Farm Research Campus, HHMI
19700 Helix Drive, Ashburn, VA 20147
hut, [email protected]
Abstract
One of the central problems in neuroscience is reconstructing synaptic
connectivity in neural circuits. Synapses onto a neuron can be probed by
sequentially stimulating potentially pre-synaptic neurons while monitoring
the membrane voltage of the post-synaptic neuron. Reconstructing a large
neural circuit using such a "brute force" approach is rather time-consuming
and inefficient because the connectivity in neural circuits is sparse. Instead,
we propose to measure a post-synaptic neuron?s voltage while stimulating
sequentially random subsets of multiple potentially pre-synaptic neurons.
To reconstruct these synaptic connections from the recorded voltage we
apply a decoding algorithm recently developed for compressive sensing.
Compared to the brute force approach, our method promises significant
time savings that grow with the size of the circuit. We use computer
simulations to find optimal stimulation parameters and explore the
feasibility of our reconstruction method under realistic experimental
conditions including noise and non-linear synaptic integration. Multi-neuronal stimulation allows reconstructing synaptic connectivity just from
the spiking activity of post-synaptic neurons, even when sub-threshold
voltage is unavailable. By using calcium indicators, voltage-sensitive dyes,
or multi-electrode arrays one could monitor activity of multiple postsynaptic neurons simultaneously, thus mapping their synaptic inputs in
parallel, potentially reconstructing a complete neural circuit.
1 Introduction
Understanding information processing in neural circuits requires systematic characterization
of synaptic connectivity [1, 2]. The most direct way to measure synapses between a pair of
neurons is to stimulate potentially pre-synaptic neuron while recording intra-cellularly from
the potentially post-synaptic neuron [3-8]. This method can be scaled to reconstruct multiple
synaptic connections onto one neuron by combining intracellular recordings from the post-synaptic neuron with photo-activation of pre-synaptic neurons using glutamate uncaging [9-13] or channelrhodopsin [14, 15], or with multi-electrode arrays [16, 17]. Neurons are
sequentially stimulated to fire action potentials by scanning a laser beam (or electrode
voltage) over a brain slice, while synaptic weights are measured by recording post-synaptic
voltage.
Although sequential excitation of single potentially pre-synaptic neurons could reveal
connectivity, such a "brute force" approach is inefficient because the connectivity among
neurons is sparse. Even among nearby neurons in the cerebral cortex, the probability of
connection is only about ten percent [3-8]. Connection probability decays rapidly with the
distance between neurons and falls below one percent on the scale of a cortical column [3,
8]. Thus, most single-neuron stimulation trials would result in zero response making the
brute force approach slow, especially for larger circuits.
Another drawback of the brute force approach is that single-neuron stimulation cannot be
combined efficiently with methods allowing parallel recording of neural activity, such as
calcium imaging [18-22], voltage-sensitive dyes [23-25] or multi-electrode arrays [17, 26].
As these techniques do not reliably measure sub-threshold potential but report only spiking
activity, they would reveal only the strongest connections that can drive a neuron to fire [27-30]. Therefore, such combination would reveal only a small fraction of the circuit.
We propose to circumvent the above limitations of the brute force approach by stimulating
multiple potentially pre-synaptic neurons simultaneously and reconstructing individual
connections by using a recently developed method called compressive sensing (CS) [31-35].
In each trial, we stimulate F neurons randomly chosen out of N potentially pre-synaptic
neurons and measure post-synaptic activity. Although each measurement yields only a
combined response to stimulated neurons, if synaptic inputs sum linearly in a post-synaptic
neuron, one can reconstruct the weights of individual connections by using an optimization
algorithm. Moreover, if the synaptic connections are sparse, i.e. only K << N potentially
pre-synaptic neurons make synaptic connections onto a post-synaptic neuron, the required
number of trials M ~ K log(N/K), which is much less than N [31-35].
The proposed method can be used even if only spiking activity is available. Because multiple
neurons are driven to fire simultaneously, if several of them synapse on the post-synaptic
neuron, they can induce one or more spikes in that neuron. As quantized spike counts carry
less information than analog sub-threshold voltage recordings, reconstruction requires a
larger number of trials. Yet, the method can be used to reconstruct a complete feedforward
circuit from spike recordings.
Reconstructing neural circuit with multi-neuronal excitation may be compared with mapping
retinal ganglion cell receptive fields. Typically, photoreceptors are stimulated by white-noise
checkerboard stimulus and the receptive field is obtained by Reverse Correlation (RC) in
case of sub-threshold measurements or Spike-Triggered Average (STA) of the stimulus [36,
37]. Although CS may use the same stimulation protocol, for a limited number of trials, the
reconstruction quality is superior to RC or STA.
2 Mapping synaptic inputs onto one neuron
We start by formalizing the problem of mapping synaptic connections from a population of
N potentially pre-synaptic neurons onto a single neuron, as exemplified by granule cells
synapsing onto a Purkinje cell (Figure 1a). Our experimental protocol can be illustrated
using linear algebra formalism, Figure 1b. We represent synaptic weights as components of a
column vector x, where zeros represent non-existing connections. Each row in the
stimulation matrix A represents a trial, ones indicating neurons driven to spike once and
zeros indicating non-spiking neurons. The number of rows in the stimulation matrix A is
equal to the number of trials M. The column vector y represents M measurements of
membrane voltage obtained by an intra-cellular recording from the post-synaptic neuron:
y = Ax.   (1)
In order to recover individual synaptic weights, Eq. (1) must be solved for x. The RC (or STA) solution to this problem is $x = (A^T A)^{-1} A^T y$, which minimizes $(y - Ax)^2$ if $M > N$. In the case $M < N$, the corresponding expression $x = A^T (A A^T)^{-1} y$ is a solution to the following problem:
$$\min_x \|x\|_{\ell_2} = \sqrt{\sum_{i=1}^{N} x_i^2}, \quad \text{subject to } y = Ax.$$
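As a quick sanity check (not part of the paper's experiments), the minimum-norm expression above can be verified numerically against NumPy's pseudoinverse; all names and dimensions here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 20, 50                        # underdetermined: fewer trials than neurons
A = rng.standard_normal((M, N))
y = rng.standard_normal(M)

# Minimum l2-norm solution x = A^T (A A^T)^{-1} y for M < N.
x_mn = A.T @ np.linalg.solve(A @ A.T, y)

assert np.allclose(A @ x_mn, y)                  # consistent with y = Ax
assert np.allclose(x_mn, np.linalg.pinv(A) @ y)  # equals the pseudoinverse solution
```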
Given prior knowledge that the connectivity is sparse, we propose to recover x by solving
instead:
$$\min_x \|x\|_{\ell_0}, \quad \text{subject to } y = Ax,$$
where $\|x\|_{\ell_0}$ is the $\ell_0$-norm of $x$, i.e. the number of non-zero elements. Under certain conditions
[25-29], this solution can be obtained by minimizing the $\ell_1$-norm $\|x\|_{\ell_1} = \sum_{i=1}^{N} |x_i|$ using linear
programming [38] or by iterative greedy algorithms [39, 40]. In this paper, we used a
particularly efficient Compressive Sampling Matching Pursuit (CoSaMP) algorithm [41, 42].
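For concreteness, here is a minimal NumPy sketch of the CoSaMP iteration along the lines of [41, 42]. This is an illustrative re-implementation, not the authors' code; the stopping criteria and parameter defaults are assumptions.

```python
import numpy as np

def cosamp(A, y, K, max_iter=50, tol=1e-10):
    """Recover a K-sparse x from y = A x by CoSaMP (illustrative sketch)."""
    M, N = A.shape
    x = np.zeros(N)
    residual = y.astype(float).copy()
    for _ in range(max_iter):
        proxy = A.T @ residual                       # signal proxy
        omega = np.argsort(np.abs(proxy))[-2 * K:]   # 2K largest proxy components
        support = np.union1d(omega, np.flatnonzero(x))
        b = np.zeros(N)
        b[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
        keep = np.argsort(np.abs(b))[-K:]            # prune back to K largest
        x = np.zeros(N)
        x[keep] = b[keep]
        residual = y - A @ x
        if np.linalg.norm(residual) <= tol * max(np.linalg.norm(y), 1.0):
            break
    return x
```

On a random Gaussian stimulation matrix with modest oversampling (M a few times K log(N/K)), this sketch recovers the sparse weight vector exactly up to least-squares round-off.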
We simulate the proposed reconstruction method in silico by generating a neural network,
simulating experimental measurements, and recovering synaptic weights. We draw unitless
synaptic weights from a distribution derived from electrophysiological measurements [4, 5,
43, 44] containing a delta-function at zero and an exponential distribution with a unit mean
(Figure 2a). We generate an M-by-N stimulation matrix A by setting F randomly chosen
entries in each row to one, and the rest to zero. We compute the measurement vector y by
multiplying A and x. Then, we use the CoSaMP algorithm to recover synaptic weights, xr,
from A and y. Figure 2b compares a typical result of such reconstruction and a result of RC
with originally generated non-zero synaptic weights x. Despite using fewer measurements
than required in the brute force approach, CS achieves perfect reconstruction while RC
yields a worse result [45].
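The in-silico protocol described above can be sketched as follows. Parameter values are taken from Figure 2; the weight distribution is simplified to zeros plus a unit-mean exponential, and all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, M, F = 500, 30, 200, 50      # Figure 2 parameters

# Sparse synaptic weights: delta function at zero plus unit-mean exponential.
x = np.zeros(N)
x[rng.choice(N, size=K, replace=False)] = rng.exponential(scale=1.0, size=K)

# Stimulation matrix A: each trial (row) drives F randomly chosen neurons.
A = np.zeros((M, N))
for trial in range(M):
    A[trial, rng.choice(N, size=F, replace=False)] = 1.0

y = A @ x                          # simulated membrane-voltage measurements
```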
Figure 1: Mapping synapses onto one neuron. a) A potentially post-synaptic neuron (red) receives synaptic connection (blue) from K neurons out of N potentially pre-synaptic neurons. b) Linear algebra representation of the experimental protocol. The column vector x contains synaptic weights from N potentially pre-synaptic neurons: K blue squares represent existing connections, white squares represent absent connections. The matrix A represents the sequence of stimulation: black squares in each row represent stimulated neurons in each trial. The column vector y contains measured membrane voltage in the red neuron.
[Figure 2 panels omitted; simulation parameters: K = 30 non-zero synaptic weights, N = 500 potential pre-synaptic neurons, M = 200 trials, F = 50 neurons stimulated per trial.]
Figure 2: Reconstruction of synaptic weights onto one neuron. a) Synaptic weights are drawn from the empirically motivated probability distribution. b) Reconstruction by CS (red) coincides perfectly with generated synaptic weights (blue), achieving 60% improvement in the number of trials over the brute force approach. RC result (green) is significantly worse.
3 Minimum number of measurements as a function of network size and sparseness
In order to understand intuitively why the number of trials can be less than the number of
potential synapses, note that the minimum number of trials, i.e. information or entropy, is
given by the logarithm of the total number of possible connectivity patterns. If connections
are binary, the number of different connectivity patterns onto a post-synaptic neuron from N
neurons is 2N, and hence the minimum number of trials is N. However, prior knowledge that
only K connections are present reduces the number of possible connectivity patterns from 2N
to the binomial coefficient, $C_N^K \sim (N/K)^K$. Thus, the number of trials dramatically reduces
from N to K log(N/K) << N for a sparse circuit.
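The counting argument can be checked with a few lines of arithmetic (values are illustrative):

```python
import math

N, K = 500, 30
bits_dense  = N                               # log2 of the 2^N binary patterns
bits_sparse = math.log2(math.comb(N, K))      # log2 of the C(N, K) sparse patterns
scaling     = K * math.log2(N / K)            # the K log(N/K) scaling

assert bits_sparse < bits_dense               # sparsity prior shrinks the search space
assert scaling < bits_sparse < 2 * scaling    # same order of magnitude as K log(N/K)
```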
In this section we search computationally for the minimum number of trials required for
exact reconstruction as a function of the number of non-zero synaptic weights K out of N
potentially pre-synaptic neurons. First, note that the number of trials depends on the number
of stimulated neurons F. If F = 1 we revert to the brute force approach and the number of
measurements is N, while for F = N, the measurements are redundant and no finite number
suffices. As the minimum number of measurements is expected to scale as K logN, there
must be an optimal F which makes each measurement most informative about x.
To determine the optimal number of stimulated neurons F for given K and N, we search for
the minimum number of trials M, which allows a perfect reconstruction of the synaptic
connectivity x. For each F, we generate 50 synaptic weight vectors and attempt
reconstruction from sequentially increasing numbers of trials. The value of M, at which all
50 recoveries are successful (up to computer round-off error), estimates the number of trial
needed for reconstruction with probability higher than 98%. By repeating this procedure 50
times for each F, we estimate the mean and standard deviation of M. We find that, for given
N and K, the minimum number of trials, M, as a function of the number of stimulated
neurons, F, has a shallow minimum. As K decreases, the minimum shifts towards larger F
because more neurons should be stimulated simultaneously for sparser x. For the explored
range of simulation parameters, the minimum is located close to 0.1N.
Next, we set F = 0.1N and explore how the minimum number of measurements required for
exact reconstruction depends on K and N. Results of the simulations following the recipe
described above are shown in Figure 3a. As expected, when x is sparse, M grows
approximately linearly with K (Figure 3b), and logarithmically with N (Figure 3c).
Figure 3: a) Minimum number of measurements M required for reconstruction as a function of the number of actual synapses, K, and the number of potential synapses, N. b) For given N, we find M ~ K. c) For given K, we find M ~ logN (note semi-logarithmic scale in c).
4 Robustness of reconstructions to noise and violation of simplifying assumptions
To make our simulation more realistic we now take into account three possible sources of
noise: 1) In reality, post-synaptic voltage on a given synapse varies from trial to trial [4, 5,
46-52], an effect we call synaptic noise. Such noise detrimentally affects reconstructions
because each row of A is multiplied by a different instantiation of vector x. 2) Stimulation of
neurons may be imprecise exciting a slightly different subset of neurons than intended
and/or firing intended neurons multiple times. We call this effect stimulation noise. Such
noise detrimentally affects reconstructions because, in its presence, the actual measurement
matrix A is different from the one used for recovery. 3) A synapse may fail to release neurotransmitter with some probability.
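Two of these noise sources (synaptic noise and synaptic failures) act directly on the measurement model and can be simulated in a few lines. The sketch below is illustrative, with assumed parameter values; stimulation noise, which perturbs A itself, is omitted.

```python
import numpy as np

rng = np.random.default_rng(2)
M, N, K, F = 200, 500, 30, 50
cv, p_fail = 0.1, 0.1              # synaptic noise level, failure probability

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.exponential(1.0, K)
A = np.zeros((M, N))
for trial in range(M):
    A[trial, rng.choice(N, F, replace=False)] = 1.0

# 1) Synaptic noise: each trial sees its own instantiation of the weights.
X_trial = x * (1.0 + cv * rng.standard_normal((M, N)))
y_synaptic = (A * X_trial).sum(axis=1)

# 3) Synaptic failures: each activated synapse releases with prob 1 - p_fail.
released = rng.random((M, N)) >= p_fail
y_failures = (A * released * x).sum(axis=1)
```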
Naturally, in the presence of noise, reconstructions cannot be exact. We quantify the
reconstruction error by the normalized $\ell_2$-error $\|x - x_r\|_{\ell_2} / \|x\|_{\ell_2}$, where $\|x - x_r\|_{\ell_2} = \sqrt{\sum_{i=1}^{N} (x_i - x_{ri})^2}$. We plot normalized reconstruction error in brute force
approach (M = N = 500 trials) as a function of noise, as well as CS and RC reconstruction
errors (M = 200, 600 trials), Figure 4.
[Figure 4 panels omitted; curves compare RC (M = 200, 600), the brute force method (M = 500), and CS (M = 200, 600).]
Figure 4: Impact of noise on the reconstruction quality for N = 500, K = 30, F = 50. a) Recovery error due to trial-to-trial variation in synaptic weight. The response y is calculated using the synaptic connectivity x perturbed by an additive Gaussian noise. The noise level is given by the coefficient of variation of synaptic weight. b) Recovery error due to stimulation noise. The matrix A used for recovery is obtained from the binary matrix used to calculate the measurement vector y by shifting, in each row, a fraction of ones specified by the noise level to random positions. c) Recovery error due to synaptic failures.
The detrimental effect of the stimulation noise on the reconstruction can be eliminated by
monitoring spiking activity of potentially pre-synaptic neurons. By using calcium imaging
[18-22], voltage-sensitive dyes [23] or multi-electrode arrays [17, 26] one could record the
actual stimulation matrix. Because most random matrices satisfy the reconstruction
requirements [31, 34, 35], the actual stimulation matrix can be used for a successful recovery
instead of the intended one.
If neuronal activity can be monitored reliably, experiments can be done in a different mode
altogether. Instead of stimulating designated neurons with high fidelity by using highly
localized and intense light, one could stimulate all neurons with low probability. Random
firing events can be detected and used in the recovery process. The light intensity can be
tuned to stimulate the optimal number of neurons per trial.
Next, we explore the sensitivity of the proposed reconstruction method to the violation of
simplifying assumptions. First, whereas our simulation assumes that the actual number of
connections, K, is known, in reality, connectivity sparseness is known a priori only
approximately. Will this affect reconstruction results? In principle, CS does not require prior
knowledge of K for reconstruction [31, 34, 35]. For the CoSaMP algorithm, however, it is
important to provide value K larger than the actual value (Figure 5a). Then, the algorithm
will find all the actual synaptic weights plus some extra non-zero weights, negligibly small
when compared to actual ones. Thus, one can provide the algorithm with the value of K
safely larger than the actual one and then threshold the reconstruction result according to the
synaptic noise level.
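Such a thresholding step might look like the following sketch; the noise floor is an assumed input, e.g. estimated from the synaptic noise level.

```python
import numpy as np

def threshold_recovery(x_recovered, noise_floor):
    """Zero out spurious small weights left by running CoSaMP with K larger than actual."""
    x_clean = x_recovered.copy()
    x_clean[np.abs(x_clean) < noise_floor] = 0.0
    return x_clean
```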
Second, whereas we assumed a linear summation of inputs [53], synaptic integration may be
non-linear [54]. We model non-linearity by setting $y = y_l + \alpha y_l^2$, where $y_l$ represents
linearly summed synaptic inputs. Results of simulations (Figure 5b) show that although nonlinearity can significantly degrade CS reconstruction quality, it still performs better than RC.
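The non-linear response model is simple enough to state in code (a sketch; alpha controls the relative strength of the quadratic term, and the function name is illustrative):

```python
import numpy as np

def nonlinear_response(A, x, alpha):
    """Quadratic deviation from linear synaptic summation: y = y_l + alpha * y_l^2."""
    y_linear = A @ x
    return y_linear + alpha * y_linear ** 2
```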
Figure 5: Sensitivity of reconstruction error to the violation of simplifying assumptions for N = 500, K = 30, M = 200, F = 50. a) The quality of the reconstruction is not affected if the CoSaMP algorithm is fed with the value of K larger than actual. b) Reconstruction error computed in 100 realizations for each value of the quadratic term relative to the linear term.
5 Mapping synaptic inputs onto a neuronal population
Until now, we considered reconstruction of synaptic inputs onto one neuron using sub-threshold measurements of its membrane potential. In this section, we apply CS to
reconstructing synaptic connections onto a population of potentially post-synaptic neurons.
Because in CS the choice of stimulated neurons is non-adaptive, by recording from all
potentially post-synaptic neurons in response to one sequence of trials one can reconstruct a
complete feedforward network (Figure 6).
Figure 6: Mapping of a complete feedforward network. a) Each post-synaptic neuron (red) receives synapses from a sparse subset of potentially pre-synaptic neurons (blue). b) Linear algebra representation of the experimental protocol. c) Probability of firing as a function of synaptic current. d) Comparison of CS and STA reconstruction error using spike trains for N = 500, K = 30 and F = 50.
Although attractive, such parallelization raises several issues. First, patching a large number
of neurons is unrealistic and, therefore, monitoring membrane potential requires using
different methods, such as calcium imaging [18-22], voltage-sensitive dyes [23-25] or multi-electrode arrays [17, 26]. As these methods can report reliably only spiking activity, the
measurement is not analog but discrete. Depending on the strength of summed synaptic
inputs compared to the firing threshold, the postsynaptic neuron may be silent, fire once or
multiple times. As a result, the measured response y is quantized by the integer number of
spikes. Such quantized measurements are less informative than analog measurements of the
sub-threshold membrane potential. In the extreme case of only two quantization levels, spike
or no spike, each measurement contains only 1 bit of information. Therefore, to achieve
reasonable reconstruction quality using quantized measurements, a larger number of trials
M>>N is required.
We simulate circuit reconstruction from spike recordings in silico as follows. First, we draw
synaptic weights from an experimentally motivated distribution. Second, we generate a
random stimulation matrix and calculate the product Ax. Third, we linear half-wave rectify
this product and use the result as the instantaneous firing rate for the Poisson spike generator
(Figure 6c). We used a rectifying threshold that results in 10% of spiking trials as typically
observed in experiments. Fourth, we reconstruct synaptic weights using STA and CS and
compare the results with the generated weights. We calculated mean error over 100
realizations of the simulation protocol (Figure 6d).
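The first three steps of this protocol can be sketched as follows (illustrative parameter values; the rectifying threshold is set so that roughly 10% of trials can spike, as in the text):

```python
import numpy as np

rng = np.random.default_rng(3)
M, N, K, F = 2000, 500, 30, 50

x = np.zeros(N)
x[rng.choice(N, K, replace=False)] = rng.exponential(1.0, K)
A = np.zeros((M, N))
for trial in range(M):
    A[trial, rng.choice(N, F, replace=False)] = 1.0

drive = A @ x                              # linearly summed synaptic input
theta = np.quantile(drive, 0.9)            # threshold giving ~10% spiking trials
rate = np.maximum(drive - theta, 0.0)      # linear half-wave rectification
spikes = rng.poisson(rate)                 # quantized spike counts per trial
```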
Due to the non-linear spike generating procedure, x can be recovered only up to a scaling
factor. We propose to calibrate x with a few brute-force measurements of synaptic weights.
Thus, in calculating the reconstruction error using l2 norm, we normalize both the generated
and recovered synaptic weights. Such definition is equivalent to the angular error, which is
often used to evaluate the performance of STA in mapping receptive field [37, 55].
Why is CS superior to STA for a given number of trials (Figure 6d)? Note that spikeless
trials, which typically constitute a majority, also carry information about connectivity. While
STA discards these trials, CS takes them into account. In particular, CoSaMP starts with the
STA solution as zeroth iteration and improves on it by using the results of all trials and the
sparseness prior.
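For reference, a basic spike-count-weighted STA over the same binary stimulation protocol might be computed as below; this is an illustrative sketch, not necessarily the exact estimator used in the simulations.

```python
import numpy as np

def spike_triggered_average(A, spikes):
    """STA estimate: spike-count-weighted mean stimulus minus the mean stimulus."""
    return (A * spikes[:, None]).sum(axis=0) / spikes.sum() - A.mean(axis=0)
```

Note that trials with zero spikes contribute only through the subtracted mean, which is exactly the information CS exploits and STA discards.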
6 Discussion
We have demonstrated that sparse feedforward networks can be reconstructed by stimulating
multiple potentially pre-synaptic neurons simultaneously and monitoring either sub-threshold or spiking response of potentially post-synaptic neurons. When sub-threshold
voltage is recorded, significantly fewer measurements are required than in the brute force
approach. Although our method is sensitive to noise (with stimulation noise worse than
synapse noise), it is no less robust than the brute force approach or RC.
The proposed reconstruction method can also recover inputs onto a neuron from spike
counts, albeit with more trials than from sub-threshold potential measurements. This is
particularly useful when intra-cellular recordings are not feasible and only spiking can be
detected reliably, for example, when mapping synaptic inputs onto multiple neurons in
parallel. For a given number of trials, our method yields smaller error than STA.
The proposed reconstruction method assumes linear summation of synaptic inputs (both
excitatory and inhibitory) and is sensitive to non-linearity of synaptic integration. Therefore,
it is most useful for studying connections onto neurons, in which synaptic integration is
close to linear. On the other hand, multi-neuron stimulation is closer than single-neuron
stimulation to the intrinsic activity in the live brain and can be used to study synaptic
integration under realistic conditions.
In contrast to circuit reconstruction using intrinsic neuronal activity [56, 57], our method
relies on extrinsic stimulation of neurons. Can our method use intrinsic neuronal activity
instead? We see two major drawbacks of such an approach. First, activity of non-monitored pre-synaptic neurons may significantly distort reconstruction results. Thus, successful
reconstruction would require monitoring all active pre-synaptic neurons, which is rather
challenging. Second, reliable reconstruction is possible only when the activity of pre-synaptic neurons is uncorrelated. Yet, their activity may be correlated, for example, due to
common input.
We thank Ashok Veeraraghavan for introducing us to CS, Anthony Leonardo for making a retina
dataset available for the analysis, Lou Scheffer and Hong Young Noh for commenting on the
manuscript and anonymous reviewers for helpful suggestions.
References
[1] Luo, L., Callaway, E.M. & Svoboda, K. (2008) Genetic dissection of neural circuits. Neuron
57(5):634-660.
[2] Helmstaedter, M., Briggman, K.L. & Denk, W. (2008) 3D structural imaging of the brain with
photons and electrons. Current opinion in neurobiology 18(6):633-641.
[3] Holmgren, C., Harkany, T., Svennenfors, B. & Zilberter, Y. (2003) Pyramidal cell communication
within local networks in layer 2/3 of rat neocortex. Journal of Physiology 551:139-153.
[4] Markram, H. (1997) A network of tufted layer 5 pyramidal neurons. Cerebral Cortex 7(6):523-533.
[5] Markram, H., Lubke, J., Frotscher, M., Roth, A. & Sakmann, B. (1997) Physiology and anatomy of
synaptic connections between thick tufted pyramidal neurones in the developing rat neocortex.
Journal of Physiology 500(2):409-440.
[6] Thomson, A.M. & Bannister, A.P. (2003) Interlaminar connections in the neocortex. Cerebral
Cortex 13(1):5-14.
[7] Thomson, A.M., West, D.C., Wang, Y. & Bannister, A.P. (2002) Synaptic connections and small
circuits involving excitatory and inhibitory neurons in layers 2-5 of adult rat and cat neocortex: triple
intracellular recordings and biocytin labelling in vitro. Cerebral Cortex 12(9):936-953.
[8] Song, S., Sjostrom, P.J., Reigl, M., Nelson, S. & Chklovskii, D.B. (2005) Highly nonrandom
features of synaptic connectivity in local cortical circuits. PLoS Biology 3(3):e68.
[9] Callaway, E.M. & Katz, L.C. (1993) Photostimulation using caged glutamate reveals functional
circuitry in living brain slices. Proceedings of the National Academy of Sciences of the United States
of America 90(16):7661-7665.
[10] Dantzker, J.L. & Callaway, E.M. (2000) Laminar sources of synaptic input to cortical inhibitory
interneurons and pyramidal neurons. Nature Neuroscience 3(7):701-707.
[11] Shepherd, G.M. & Svoboda, K. (2005) Laminar and columnar organization of ascending
excitatory projections to layer 2/3 pyramidal neurons in rat barrel cortex. Journal of Neuroscience
25(24):5670-5679.
[12] Nikolenko, V., Poskanzer, K.E. & Yuste, R. (2007) Two-photon photostimulation and imaging of
neural circuits. Nature Methods 4(11):943-950.
[13] Shoham, S., O'connor, D.H., Sarkisov, D.V. & Wang, S.S. (2005) Rapid neurotransmitter
uncaging in spatially defined patterns. Nature Methods 2(11):837-843.
[14] Gradinaru, V., Thompson, K.R., Zhang, F., Mogri, M., Kay, K., Schneider, M.B. & Deisseroth, K.
(2007) Targeting and readout strategies for fast optical neural control in vitro and in vivo. Journal of
Neuroscience 27(52):14231-14238.
[15] Petreanu, L., Huber, D., Sobczyk, A. & Svoboda, K. (2007) Channelrhodopsin-2-assisted circuit
mapping of long-range callosal projections. Nature Neuroscience 10(5):663-668.
[16] Na, L., Watson, B.O., Maclean, J.N., Yuste, R. & Shepard, K.L. (2008) A 256×256 CMOS
Microelectrode Array for Extracellular Neural Stimulation of Acute Brain Slices. Solid-State Circuits
Conference, 2008. ISSCC 2008. Digest of Technical Papers. IEEE International.
[17] Fujisawa, S., Amarasingham, A., Harrison, M.T. & Buzsaki, G. (2008) Behavior-dependent short-term assembly dynamics in the medial prefrontal cortex. Nature Neuroscience 11(7):823-833.
[18] Ikegaya, Y., Aaron, G., Cossart, R., Aronov, D., Lampl, I., Ferster, D. & Yuste, R. (2004) Synfire
chains and cortical songs: temporal modules of cortical activity. Science 304(5670):559-564.
[19] Ohki, K., Chung, S., Ch'ng, Y.H., Kara, P. & Reid, R.C. (2005) Functional imaging with cellular
resolution reveals precise micro-architecture in visual cortex. Nature 433(7026):597-603.
[20] Stosiek, C., Garaschuk, O., Holthoff, K. & Konnerth, A. (2003) In vivo two-photon calcium
imaging of neuronal networks. Proceedings of the National Academy of Sciences of the United States
of America 100(12):7319-7324.
[21] Svoboda, K., Denk, W., Kleinfeld, D. & Tank, D.W. (1997) In vivo dendritic calcium dynamics in
neocortical pyramidal neurons. Nature 385(6612):161-165.
[22] Sasaki, T., Minamisawa, G., Takahashi, N., Matsuki, N. & Ikegaya, Y. (2009) Reverse optical
trawling for synaptic connections in situ. Journal of Neurophysiology 102(1):636-643.
[23] Zecevic, D., Djurisic, M., Cohen, L.B., Antic, S., Wachowiak, M., Falk, C.X. & Zochowski, M.R.
(2003) Imaging nervous system activity with voltage-sensitive dyes. Current Protocols in
Neuroscience Chapter 6:Unit 6.17.
[24] Cacciatore, T.W., Brodfuehrer, P.D., Gonzalez, J.E., Jiang, T., Adams, S.R., Tsien, R.Y., Kristan,
W.B., Jr. & Kleinfeld, D. (1999) Identification of neural circuits by imaging coherent electrical
activity with FRET-based dyes. Neuron 23(3):449-459.
[25] Taylor, A.L., Cottrell, G.W., Kleinfeld, D. & Kristan, W.B., Jr. (2003) Imaging reveals synaptic
targets of a swim-terminating neuron in the leech CNS. Journal of Neuroscience 23(36):11402-11410.
[26] Hutzler, M., Lambacher, A., Eversmann, B., Jenkner, M., Thewes, R. & Fromherz, P. (2006)
High-resolution multitransistor array recording of electrical field potentials in cultured brain slices.
Journal of Neurophysiology 96(3):1638-1645.
[27] Egger, V., Feldmeyer, D. & Sakmann, B. (1999) Coincidence detection and changes of synaptic
efficacy in spiny stellate neurons in rat barrel cortex. Nature Neuroscience 2(12):1098-1105.
[28] Feldmeyer, D., Egger, V., Lubke, J. & Sakmann, B. (1999) Reliable synaptic connections between
pairs of excitatory layer 4 neurones within a single 'barrel' of developing rat somatosensory cortex.
Journal of Physiology 521:169-190.
[29] Peterlin, Z.A., Kozloski, J., Mao, B.Q., Tsiola, A. & Yuste, R. (2000) Optical probing of neuronal
circuits with calcium indicators. Proceedings of the National Academy of Sciences of the United States
of America 97(7):3619-3624.
[30] Thomson, A.M., Deuchars, J. & West, D.C. (1993) Large, deep layer pyramid-pyramid single
axon EPSPs in slices of rat motor cortex display paired pulse and frequency-dependent depression,
mediated presynaptically and self-facilitation, mediated postsynaptically. Journal of Neurophysiology
70(6):2354-2369.
[31] Baraniuk, R.G. (2007) Compressive sensing. IEEE Signal Processing Magazine 24(4):118-120.
[32] Candes, E.J. (2008) Compressed Sensing. Twenty-Second Annual Conference on Neural
Information Processing Systems, Tutorials.
[33] Candes, E.J., Romberg, J.K. & Tao, T. (2006) Stable signal recovery from incomplete and
inaccurate measurements. Communications on Pure and Applied Mathematics 59(8):1207-1223.
[34] Candes, E.J. & Tao, T. (2006) Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Transactions on Information Theory 52(12):5406-5425.
[35] Donoho, D.L. (2006) Compressed sensing. IEEE Transactions on Information Theory 52(4):1289-1306.
[36] Ringach, D. & Shapley, R. (2004) Reverse Correlation in Neurophysiology. Cognitive Science
28:147-166.
[37] Schwartz, O., Pillow, J.W., Rust, N.C. & Simoncelli, E.P. (2006) Spike-triggered neural
characterization. Journal of Vision 6(4):484-507.
[38] Candes, E.J. & Tao, T. (2005) Decoding by linear programming. IEEE Transactions on Information
Theory 51(12):4203-4215.
[39] Needell, D. & Vershynin, R. (2009) Uniform Uncertainty Principle and Signal Recovery via
Regularized Orthogonal Matching Pursuit. Foundations of Computational Mathematics 9(3):317-334.
[40] Tropp, J.A. & Gilbert, A.C. (2007) Signal recovery from random measurements via orthogonal
matching pursuit. IEEE Transactions on Information Theory 53(12):4655-4666.
[41] Dai, W. & Milenkovic, O. (2009) Subspace Pursuit for Compressive Sensing Signal Reconstruction. IEEE Transactions on Information Theory 55(5):2230-2249.
[42] Needell, D. & Tropp, J.A. (2009) CoSaMP: Iterative signal recovery from incomplete and
inaccurate samples. Applied and Computational Harmonic Analysis 26(3):301-321.
[43] Varshney, L.R., Sjostrom, P.J. & Chklovskii, D.B. (2006) Optimal information storage in noisy
synapses under resource constraints. Neuron 52(3):409-423.
[44] Brunel, N., Hakim, V., Isope, P., Nadal, J.P. & Barbour, B. (2004) Optimal information storage
and the distribution of synaptic weights: perceptron versus Purkinje cell. Neuron 43(5):745-757.
[45] Napoletani, D. & Sauer, T.D. (2008) Reconstructing the topology of sparsely connected
dynamical networks. Physical Review E 77(2):026103.
[46] Allen, C. & Stevens, C.F. (1994) An evaluation of causes for unreliability of synaptic
transmission. Proceedings of the National Academy of Sciences of the United States of America
91(22):10380-10383.
[47] Hessler, N.A., Shirke, A.M. & Malinow, R. (1993) The probability of transmitter release at a
mammalian central synapse. Nature 366(6455):569-572.
[48] Isope, P. & Barbour, B. (2002) Properties of unitary granule cell-->Purkinje cell synapses in adult
rat cerebellar slices. Journal of Neuroscience 22(22):9668-9678.
[49] Mason, A., Nicoll, A. & Stratford, K. (1991) Synaptic transmission between individual pyramidal
neurons of the rat visual cortex in vitro. Journal of Neuroscience 11(1):72-84.
[50] Raastad, M., Storm, J.F. & Andersen, P. (1992) Putative Single Quantum and Single Fibre
Excitatory Postsynaptic Currents Show Similar Amplitude Range and Variability in Rat Hippocampal
Slices. European Journal of Neuroscience 4(1):113-117.
[51] Rosenmund, C., Clements, J.D. & Westbrook, G.L. (1993) Nonuniform probability of glutamate
release at a hippocampal synapse. Science 262(5134):754-757.
[52] Sayer, R.J., Friedlander, M.J. & Redman, S.J. (1990) The time course and amplitude of EPSPs
evoked at synapses between pairs of CA3/CA1 neurons in the hippocampal slice. Journal of
Neuroscience 10(3):826-836.
[53] Cash, S. & Yuste, R. (1999) Linear summation of excitatory inputs by CA1 pyramidal neurons.
Neuron 22(2):383-394.
[54] Polsky, A., Mel, B.W. & Schiller, J. (2004) Computational subunits in thin dendrites of pyramidal
cells. Nature Neuroscience 7(6):621-627.
[55] Paninski, L. (2003) Convergence properties of three spike-triggered analysis techniques.
Network: Computation in Neural Systems 14(3):437-464.
[56] Okatan, M., Wilson, M.A. & Brown, E.N. (2005) Analyzing functional connectivity using a
network likelihood model of ensemble neural spiking activity. Neural Computation 17(9):1927-1961.
[57] Timme, M. (2007) Revealing network connectivity from response dynamics. Physical Review
Letters 98(22):224101.
| 3772 |@word neurophysiology:4 trial:41 milenkovic:1 norm:3 hu:1 simulation:7 pulse:1 simplifying:2 postsynaptically:1 solid:1 briggman:1 carry:2 deisseroth:1 contains:3 efficacy:1 united:4 tuned:1 genetic:1 existing:2 current:4 recovered:2 nt:1 luo:1 clements:1 activation:1 yet:2 si:2 must:2 ust:1 cottrell:1 realistic:3 additive:1 informative:2 motor:1 plot:1 medial:1 greedy:1 fewer:3 half:1 nervous:1 record:1 characterization:2 quantized:4 org:1 zhang:1 rc:14 direct:1 isscc:1 shapley:1 commenting:1 huber:1 expected:2 rapid:1 behavior:1 nto:2 multi:7 brain:6 actual:13 increasing:1 campus:1 formalizing:1 circuit:22 moreover:1 linearity:2 barrel:3 minimizes:1 nadal:1 developed:2 compressive:5 ca1:2 nonrandom:1 temporal:1 safely:1 scaled:1 schwartz:1 brute:15 unit:2 control:1 unreliability:1 reid:1 okatan:1 aat:1 local:2 io:4 despite:1 encoding:1 analyzing:1 jiang:1 firing:5 approximately:2 black:1 plus:1 zeroth:1 evoked:1 challenging:1 limited:1 callaway:3 range:3 xr:7 procedure:2 universal:1 significantly:4 physiology:4 matching:3 imprecise:1 pre:18 induce:1 inp:2 projection:3 shoham:1 revealing:1 onto:13 cannot:2 close:2 targeting:1 romberg:1 storage:2 silico:2 live:1 gilbert:1 equivalent:1 demonstrated:1 reviewer:1 roth:1 thompson:1 resolution:2 recovery:13 needell:2 pure:1 array:7 facilitation:1 kay:1 population:2 variation:2 pli:1 target:1 cultured:1 svoboda:4 exact:3 programming:2 magazine:1 us:1 element:1 logarithmically:1 ze:1 particularly:2 located:1 mammalian:1 holthoff:1 sparsely:1 observed:1 negligibly:1 module:1 coincidence:1 solved:1 wang:2 electrical:2 calculate:2 readout:1 connected:1 plo:1 decrease:2 noi:1 leech:1 mu:1 denk:2 dynamic:3 terminating:1 raise:1 solving:1 algebra:3 po:1 cat:1 neurotransmitter:2 america:4 chapter:1 laser:1 revert:1 train:1 fast:1 detected:2 larger:7 s:1 reconstruct:6 compressed:2 farm:1 noisy:1 triggered:3 sequence:2 reconstruction:50 propose:4 mb:1 product:2 photostimulation:2 poskanzer:1 westbrook:1 combining:1 
realization:2 rapidly:1 achieve:1 cacciatore:1 academy:4 buzsaki:1 normalize:1 recipe:1 convergence:1 electrode:5 cosamp:7 requirement:1 transmission:2 generating:2 perfect:2 cmos:1 adam:1 egger:2 redman:1 depending:1 measured:3 eq:1 epsps:2 recovering:1 c:20 somatosensory:1 quantify:1 anatomy:1 drawback:2 thick:1 stevens:1 opinion:1 require:2 suffices:1 anonymous:1 stellate:1 dendritic:1 summation:3 assisted:1 hut:1 considered:1 mapping:8 electron:1 circuitry:1 major:1 achieves:1 sensitive:9 gaussian:1 cossart:1 rather:2 ck:1 cash:1 voltage:16 wilson:1 ax:6 l0:3 derived:1 release:3 improvement:1 transmitter:1 likelihood:1 contrast:1 kristan:2 helpful:1 dependent:2 inaccurate:2 typically:3 tao:4 microelectrode:1 tank:1 issue:1 among:2 fidelity:1 noh:1 logn:2 priori:1 integration:5 summed:2 uc:2 frotscher:1 field:4 equal:1 saving:1 once:2 ng:3 sampling:1 eliminated:1 biology:1 represents:4 thin:1 report:2 stimulus:2 micro:1 few:1 retina:1 sta:11 randomly:2 simultaneously:5 falk:1 national:4 individual:4 intended:3 cns:1 fire:4 attempt:1 detection:1 organization:1 aronov:1 interneurons:1 highly:2 intra:3 situ:1 evaluation:1 violation:2 extreme:1 light:2 chain:1 konnerth:1 fu:1 closer:1 necessary:1 sauer:1 intense:1 orthogonal:2 incomplete:2 taylor:1 logarithm:1 re:1 column:5 formalism:1 isope:2 purkinje:3 calibrate:1 ca3:1 introducing:1 deviation:1 subset:3 entry:1 uniform:1 successful:3 scanning:1 varies:1 perturbed:1 e68:1 feldmeyer:2 combined:2 vershynin:1 st:1 international:1 sensitivity:2 systematic:1 off:1 yl:3 decoding:2 na:4 connectivity:17 andersen:1 central:2 recorded:2 containing:1 prefrontal:1 worse:4 cognitive:1 inefficient:2 chung:1 checkerboard:1 account:2 potential:13 photon:3 takahashi:1 retinal:1 coefficient:2 satisfy:1 depends:2 red:4 start:2 recover:4 wave:1 parallel:3 candes:4 vivo:3 rectifying:1 square:3 lubke:2 efficiently:1 sy:2 yield:3 unitless:1 subthreshold:2 ensemble:1 identification:1 monitoring:5 multiplying:1 drive:2 synapsis:10 
strongest:1 synaptic:103 definition:1 distort:1 failure:2 pp:2 frequency:1 storm:1 naturally:2 mi:1 monitored:2 con:1 dataset:1 knowledge:3 ut:2 improves:1 electrophysiological:1 veeraraghavan:1 amplitude:2 uncaging:2 ea:1 manuscript:1 originally:1 higher:1 response:7 synapse:6 done:1 nikolenko:1 just:1 angular:1 correlation:2 until:1 hand:1 receives:2 tropp:2 synfire:1 kleinfeld:3 mode:1 quality:5 reveal:3 stimulate:4 grows:1 effect:3 normalized:6 brown:1 hence:1 spatially:1 illustrated:1 white:2 attractive:1 round:1 ringach:1 self:1 excitation:3 mel:1 coincides:1 rat:10 hong:1 hippocampal:3 complete:4 thomson:3 neocortical:1 performs:2 l1:2 allen:1 percent:2 tro:1 instantaneous:1 kozloski:1 recently:2 harmonic:1 patching:1 superior:2 common:1 stimulation:23 spiking:10 empirically:1 vitro:3 functional:3 cohen:1 rust:1 shepard:1 cerebral:4 physical:2 analog:3 katz:1 significant:1 measurement:30 connor:1 mathematics:2 nonlinearity:1 janelia:2 rectify:1 stable:1 cortex:11 acute:1 pu:1 dye:6 driven:2 reverse:3 discard:1 certain:1 binary:2 watson:1 minimum:11 dai:1 schneider:1 ashok:1 determine:1 redundant:1 living:1 semi:1 signal:7 multiple:9 simoncelli:1 ohki:1 reduces:2 technical:1 hhmi:2 long:1 post:17 paired:1 va:1 feasibility:1 impact:1 involving:1 vision:1 poisson:1 fujisawa:1 iteration:1 represent:5 cerebellar:1 pyramid:2 achieved:1 cell:8 beam:1 ion:1 whereas:2 chklovskii:3 sjostrom:2 harrison:1 grow:1 source:3 pyramidal:9 extra:1 rest:1 parallelization:1 sure:1 shepherd:1 recording:12 subject:2 caged:1 call:2 integer:1 structural:1 near:1 presence:2 unitary:1 feedforward:4 affect:3 architecture:1 perfectly:1 topology:1 silent:1 absent:1 shift:1 expression:1 recoverd:2 motivated:2 swim:1 wo:1 song:2 multineuronal:1 neurones:2 cause:1 constitute:1 action:1 depression:1 deep:1 dramatically:1 useful:2 se:1 repeating:1 neocortex:4 ten:1 ashburn:1 generate:3 inhibitory:3 tutorial:1 neuroscience:14 delta:1 per:2 extrinsic:1 blue:4 discrete:1 probed:1 promise:1 
affected:1 threshold:10 monitor:1 drawn:1 achieving:1 bannister:2 imaging:10 dmitri:1 fraction:2 sum:1 fibre:1 synapsing:1 letter:1 fourth:1 baraniuk:1 uncertainty:1 reasonable:1 putative:1 draw:2 gonzalez:1 scaling:1 bit:1 layer:6 multielectrode:1 barbour:2 display:1 laminar:2 quadratic:1 annual:1 activity:19 strength:2 constraint:1 leonardo:1 nearby:1 simulate:2 min:2 optical:3 extracellular:1 reigl:1 designated:1 according:1 developing:2 combination:1 membrane:6 smaller:1 slightly:1 reconstructing:8 postsynaptic:4 pti:2 jr:2 spiny:1 shallow:1 making:2 mitya:1 intuitively:1 dissection:1 computationally:1 resource:1 nicoll:1 count:2 fail:1 xi2:1 needed:1 fed:2 ascending:1 photo:1 studying:1 available:2 pursuit:4 polsky:1 multiplied:1 apply:2 simulating:1 altogether:1 lampl:1 binomial:1 assumes:2 assembly:1 calculating:1 especially:1 granule:2 presynaptically:1 spike:16 digest:1 receptive:3 strategy:2 detrimental:1 subspace:1 distance:1 thank:1 lou:1 schiller:1 majority:1 degrade:1 nelson:1 presynaptic:2 fy:1 cellular:3 matsuki:1 minimizing:1 nc:1 tufted:2 potentially:20 xri:1 reliably:4 calcium:7 sakmann:3 twenty:1 allowing:1 neuron:96 finite:1 subunit:1 neurobiology:1 communication:2 precise:1 variability:1 nonuniform:1 intensity:1 pair:3 required:7 specified:1 connection:29 coherent:1 nu:1 adult:2 biocytin:1 below:1 exemplified:1 pattern:4 dynamical:1 including:1 green:1 reliable:2 shifting:1 unrealistic:1 event:1 force:15 circumvent:1 regularized:1 glutamate:3 indicator:2 ne:4 shortterm:1 mediated:2 prior:4 understanding:1 l2:5 review:2 friedlander:1 relative:2 suggestion:1 limitation:1 yuste:5 versus:1 localized:1 generator:1 triple:1 foundation:1 channelrhodopsin:2 exciting:1 principle:2 helix:1 uncorrelated:1 row:6 ata:1 excitatory:6 course:1 detrimentally:2 understand:1 hakim:1 perceptron:1 fall:1 markram:2 sparse:9 slice:8 calculated:2 cortical:5 pillow:1 quantum:1 adaptive:1 transaction:5 reconstructed:1 varshney:1 sequentially:4 instantiation:1 active:1 
reveals:3 photoreceptors:1 assumed:1 consuming:1 xi:1 search:2 iterative:2 why:2 reality:2 stimulated:10 nature:10 rescume:1 robust:1 helmstaedter:1 unavailable:1 dendrite:1 european:1 anthony:1 protocol:6 sp:1 intracellular:2 linearly:3 noise:24 kara:1 stratford:1 neuronal:7 west:2 scheffer:1 slow:1 probing:1 axon:1 sub:7 position:1 mao:1 exponential:1 xl:1 third:1 young:1 rk:1 sensing:6 explored:1 decay:1 ikegaya:2 mason:1 intrinsic:3 quantization:1 albeit:1 sequential:1 labelling:1 tio:3 sparseness:2 sparser:1 columnar:1 entropy:1 tsien:1 logarithmic:1 paninski:1 explore:3 timme:1 ganglion:1 visual:2 malinow:1 brunel:1 ch:1 relies:1 ma:2 stimulating:5 donoho:1 towards:1 ferster:1 feasible:1 experimentally:1 change:1 typical:1 called:1 total:1 sasaki:1 experimental:5 la:2 indicating:2 maclean:1 aaron:1 evaluate:1 correlated:1 |
3,060 | 3,773 | Modeling Social Annotation Data
with Content Relevance using a Topic Model
Tomoharu Iwata
Takeshi Yamada
Naonori Ueda
NTT Communication Science Laboratories
2-4 Hikaridai, Seika-cho, Soraku-gun, Kyoto, Japan
{iwata,yamada,ueda}@cslab.kecl.ntt.co.jp
Abstract
We propose a probabilistic topic model for analyzing and extracting content-related annotations from noisy annotated discrete data such as web pages stored
in social bookmarking services. In these services, since users can attach annotations freely, some annotations do not describe the semantics of the content, thus
they are noisy, i.e. not content-related. The extraction of content-related annotations can be used as a preprocessing step in machine learning tasks such as text
classification and image recognition, or can improve information retrieval performance. The proposed model is a generative model for content and annotations, in
which the annotations are assumed to originate either from topics that generated
the content or from a general distribution unrelated to the content. We demonstrate
the effectiveness of the proposed method by using synthetic data and real social
annotation data for text and images.
1 Introduction
Recently there has been great interest in social annotations, also called collaborative tagging or
folksonomy, created by users freely annotating objects such as web pages [7], photographs [9],
blog posts [23], videos [26], music [19], and scientific papers [5]. Delicious [7], which is a social
bookmarking service, and Flickr [9], which is an online photo sharing service, are two representative
social annotation services, and they have succeeded in collecting huge numbers of annotations. Since
users can attach annotations freely in social annotation services, the annotations include those that do
not describe the semantics of the content, and are, therefore, not content-related [10]. For example,
annotations such as "nikon" or "canon" in a social photo service often represent the name of the
manufacturer of the camera with which the photographs were taken, or annotations such as "2008"
or "november" indicate when they were taken. Other examples of content-unrelated annotations
include those designed to remind the annotator such as "toread", those identifying qualities such as
"great", and those identifying ownership.
Content-unrelated annotations can often constitute noise if used for training samples in machine
learning tasks, such as automatic text classification and image recognition. Although the performance of a classifier can generally be improved by increasing the number of training samples, noisy
training samples have a detrimental effect on the classifier. We can improve classifier performance
if we can employ huge amounts of social annotation data from which the content-unrelated annotations have been filtered out. Content-unrelated annotations may also constitute noise in information
retrieval. For example, a user may wish to retrieve a photograph of a Nikon camera rather than a
photograph taken by a Nikon camera.
In this paper, we propose a probabilistic topic model for analyzing and extracting content-related
annotations from noisy annotated data. A number of methods for automatic annotation have been
proposed [1, 2, 8, 16, 17]. However, they implicitly assume that all annotations are related to content,
Table 1: Notation
Symbol | Description
D      | number of documents
W      | number of unique words
T      | number of unique annotations
K      | number of topics
N_d    | number of words in the dth document
M_d    | number of annotations in the dth document
w_dn   | nth word in the dth document, w_dn ∈ {1, · · · , W}
z_dn   | topic of the nth word in the dth document, z_dn ∈ {1, · · · , K}
t_dm   | mth annotation in the dth document, t_dm ∈ {1, · · · , T}
c_dm   | topic of the mth annotation in the dth document, c_dm ∈ {1, · · · , K}
r_dm   | relevance to the content of the mth annotation of the dth document; r_dm = 1 if relevant, r_dm = 0 otherwise
and to the best of our knowledge, no attempt has been made to extract content-related annotations
automatically. The extraction of content-related annotations can improve performance of machine
learning and information retrieval tasks. The proposed model can also be used for the automatic
generation of content-related annotations.
The proposed model is a generative model for content and annotations. It first generates content, and
then generates the annotations. We assume that each annotation is associated with a latent variable
that indicates whether it is related to the content or not, and the annotation originates either from
the topics that generated the content or from a content-unrelated general distribution depending on
the latent variable. The inference can be achieved based on collapsed Gibbs sampling. Intuitively
speaking, this approach considers an annotation to be content-related when it is almost always attached to objects in a specific topic. As regards real social annotation data, the annotations are not
explicitly labeled as content related/unrelated. The proposed model is an unsupervised model, and
so can extract content-related annotations without content relevance labels.
The proposed method is based on topic models. A topic model is a hierarchical probabilistic model,
in which a document is modeled as a mixture of topics, and where a topic is modeled as a probability distribution over words. Topic models are successfully used for a wide variety of applications
including information retrieval [3, 13], collaborative filtering [14], and visualization [15] as well as
for modeling annotated data [2].
The proposed method is an extension of the correspondence latent Dirichlet allocation (Corr-LDA) [2], which is a generative topic model for contents and annotations. Since Corr-LDA assumes
that all annotations are related to the content, it cannot be used for separating content-related annotations from content-unrelated ones. A topic model with a background distribution [4] assumes
that words are generated either from a topic-specific distribution or from a corpus-wide background
distribution. Although this is a generative model for documents without annotations, the proposed
model is related to the model in the sense that data may be generated from a topic-unrelated distribution depending on a latent variable.
In the rest of this paper, we assume that the given data are annotated document data, in which the
content of each document is represented by words appearing in the document, and each document
has both content-related and content-unrelated annotations. The proposed model is applicable to a
wide range of discrete data with annotations. These include annotated image data, where each image
is represented with visual words [6], and annotated movie data, where each movie is represented by
user ratings.
2 Proposed method
Suppose that we have a set of D documents, and each document consists of a pair of words and annotations (w_d, t_d), where w_d = \{w_{dn}\}_{n=1}^{N_d} is the set of words in a document that represents the content, and t_d = \{t_{dm}\}_{m=1}^{M_d} is the set of assigned annotations, or tags. Our notation is summarized in Table 1.
[Figure 1: plate diagram omitted. Variables shown: \alpha, \theta, z, w (N plate); \phi (K plate); c, r, t (M plate, within D plate); \lambda, \eta; \psi (K+1 plate); \beta, \gamma.]
Figure 1: Graphical model representation of the proposed topic model with content relevance.
The proposed topic model first generates the content, and then generates the annotations. The generative process for the content is the same as basic topic models, such as latent Dirichlet allocation
(LDA) [3]. Each document has topic proportions \theta_d that are sampled from a Dirichlet distribution.
For each of the N_d words in the document, a topic z_{dn} is chosen from the topic proportions, and then
word w_{dn} is generated from a topic-specific multinomial distribution \phi_{z_{dn}}. In the generative process for annotations, each annotation is assessed as to whether it is related to the content or not. In
particular, each annotation is associated with a latent variable r_{dm} with value r_{dm} = 0 if annotation
t_{dm} is not related to the content; r_{dm} = 1 otherwise. If the annotation is not related to the content,
r_{dm} = 0, annotation t_{dm} is sampled from the general topic-unrelated multinomial distribution \psi_0. If
the annotation is related to the content, r_{dm} = 1, annotation t_{dm} is sampled from the topic-specific
multinomial distribution \psi_{c_{dm}}, where c_{dm} is the topic for the annotation. Topic c_{dm} is sampled
uniformly at random from the topics z_d = \{z_{dn}\}_{n=1}^{N_d} that have previously generated the content. This
means that topic c_{dm} is generated from a multinomial distribution in which P(c_{dm} = k) = N_{kd}/N_d,
where N_{kd} is the number of words that are assigned to topic k in the dth document.
In summary, the proposed model assumes the following generative process for a set of annotated
documents \{(w_d, t_d)\}_{d=1}^{D}:
1. Draw relevance probability \lambda \sim \mathrm{Beta}(\eta)
2. Draw content-unrelated annotation probability \psi_0 \sim \mathrm{Dirichlet}(\gamma)
3. For each topic k = 1, · · · , K:
   (a) Draw word probability \phi_k \sim \mathrm{Dirichlet}(\beta)
   (b) Draw annotation probability \psi_k \sim \mathrm{Dirichlet}(\gamma)
4. For each document d = 1, · · · , D:
   (a) Draw topic proportions \theta_d \sim \mathrm{Dirichlet}(\alpha)
   (b) For each word n = 1, · · · , N_d:
       i. Draw topic z_{dn} \sim \mathrm{Multinomial}(\theta_d)
       ii. Draw word w_{dn} \sim \mathrm{Multinomial}(\phi_{z_{dn}})
   (c) For each annotation m = 1, · · · , M_d:
       i. Draw topic c_{dm} \sim \mathrm{Multinomial}(\{N_{kd}/N_d\}_{k=1}^{K})
       ii. Draw relevance r_{dm} \sim \mathrm{Bernoulli}(\lambda)
       iii. Draw annotation t_{dm} \sim \mathrm{Multinomial}(\psi_0) if r_{dm} = 0, and t_{dm} \sim \mathrm{Multinomial}(\psi_{c_{dm}}) otherwise
where \alpha, \beta and \gamma are Dirichlet distribution parameters, and \eta is a beta distribution parameter. Figure 1 shows a graphical model representation of the proposed model, where shaded and unshaded
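To make the generative process concrete, it can be simulated in a few lines of plain Python. This is an illustrative sketch, not the authors' implementation; the helper functions (`dirichlet`, `categorical`), the function name, and the default document sizes are our own assumptions.

```python
import random

def generate_corpus(D, K, W, T, alpha, beta, gamma, eta,
                    n_words=50, n_annots=5, seed=0):
    """Sample documents (words, annotations, latent topics, relevance flags)
    from the generative process of the proposed model."""
    rng = random.Random(seed)

    def dirichlet(dim, conc):
        # Symmetric Dirichlet draw via normalized Gamma variates.
        g = [rng.gammavariate(conc, 1.0) for _ in range(dim)]
        s = sum(g)
        return [x / s for x in g]

    def categorical(p):
        u, acc = rng.random(), 0.0
        for i, pi in enumerate(p):
            acc += pi
            if u < acc:
                return i
        return len(p) - 1

    lam = rng.betavariate(eta, eta)                # lambda ~ Beta(eta)
    psi0 = dirichlet(T, gamma)                     # psi_0 ~ Dirichlet(gamma)
    phi = [dirichlet(W, beta) for _ in range(K)]   # phi_k ~ Dirichlet(beta)
    psi = [dirichlet(T, gamma) for _ in range(K)]  # psi_k ~ Dirichlet(gamma)

    corpus = []
    for _ in range(D):
        theta = dirichlet(K, alpha)                # theta_d ~ Dirichlet(alpha)
        z = [categorical(theta) for _ in range(n_words)]
        w = [categorical(phi[zn]) for zn in z]
        t, c, r = [], [], []
        for _ in range(n_annots):
            cm = rng.choice(z)                     # c_dm ~ Mult({N_kd / N_d})
            rm = 1 if rng.random() < lam else 0    # r_dm ~ Bernoulli(lambda)
            tm = categorical(psi[cm] if rm == 1 else psi0)
            c.append(cm); r.append(rm); t.append(tm)
        corpus.append({"w": w, "t": t, "z": z, "c": c, "r": r})
    return corpus
```

Note that drawing `cm` uniformly from the list of word topics `z` is exactly a draw from Multinomial({N_kd/N_d}), since topic k occurs N_kd times among the N_d entries of `z`.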
As with Corr-LDA, the proposed model first generates the content and then generates the annotations
by modeling the conditional distribution of latent topics for annotations given the topics for the
content. Therefore, it achieves a comprehensive fit of the joint distribution of content and annotations
and finds superior conditional distributions of annotations given content [2].
The joint distribution on words, annotations, topics for words, topics for annotations, and relevance
given parameters is described as follows:
P(W, T, Z, C, R \mid \alpha, \beta, \gamma, \eta) = P(Z \mid \alpha) P(W \mid Z, \beta) P(T \mid C, R, \gamma) P(R \mid \eta) P(C \mid Z),   (1)
where W = \{w_d\}_{d=1}^{D}, T = \{t_d\}_{d=1}^{D}, Z = \{z_d\}_{d=1}^{D}, C = \{c_d\}_{d=1}^{D}, c_d = \{c_{dm}\}_{m=1}^{M_d}, R = \{r_d\}_{d=1}^{D}, and r_d = \{r_{dm}\}_{m=1}^{M_d}. We can integrate out the multinomial distribution parameters \{\theta_d\}_{d=1}^{D}, \{\phi_k\}_{k=1}^{K} and \{\psi_{k'}\}_{k'=0}^{K}, because we use Dirichlet distributions for their priors, which are conjugate to multinomial distributions. The first term on the right hand side of (1) is calculated by P(Z|\alpha) = \prod_d \int P(z_d|\theta_d) P(\theta_d|\alpha) \, d\theta_d, and we have the following equation by integrating out \{\theta_d\}_{d=1}^{D}:

P(Z|\alpha) = \left( \frac{\Gamma(\alpha K)}{\Gamma(\alpha)^{K}} \right)^{D} \prod_d \frac{\prod_k \Gamma(N_{kd}+\alpha)}{\Gamma(N_d+\alpha K)},

where \Gamma(\cdot) is the gamma function. Similarly, the second term is given as follows:

P(W|Z,\beta) = \left( \frac{\Gamma(\beta W)}{\Gamma(\beta)^{W}} \right)^{K} \prod_k \frac{\prod_w \Gamma(N_{kw}+\beta)}{\Gamma(N_k+\beta W)},

where N_{kw} is the number of times word w has been assigned to topic k, and N_k = \sum_w N_{kw}. The third term is given as follows:

P(T|C,R,\gamma) = \left( \frac{\Gamma(\gamma T)}{\Gamma(\gamma)^{T}} \right)^{K+1} \prod_{k'} \frac{\prod_t \Gamma(M_{k't}+\gamma)}{\Gamma(M_{k'}+\gamma T)},

where k' \in \{0, \cdots, K\}, and k' = 0 indicates irrelevant to the content. M_{k't} is the number of times annotation t has been identified as content-unrelated if k' = 0, or as content-related topic k' if k' \neq 0, and M_{k'} = \sum_t M_{k't}. The Bernoulli parameter \lambda can also be integrated out because we use a beta distribution for the prior, which is conjugate to a Bernoulli distribution. The fourth term is given as follows:

P(R|\eta) = \frac{\Gamma(2\eta)}{\Gamma(\eta)^{2}} \, \frac{\Gamma(M_0+\eta)\,\Gamma(M-M_0+\eta)}{\Gamma(M+2\eta)},

where M is the number of annotations, and M_0 is the number of content-unrelated annotations. The fifth term is given as follows:

P(C|Z) = \prod_d \prod_k \left( \frac{N_{kd}}{N_d} \right)^{M'_{kd}},

where M'_{kd} is the number of annotations that are assigned to topic k in the dth document.
The inference of the latent topics Z given content W and annotations T can be efficiently computed
using collapsed Gibbs sampling [11]. Given the current state of all but one variable, zj , where
j = (d, n), the assignment of a latent topic to the nth word in the dth document is sampled from,
P(z_j = k \mid W, T, Z_{\backslash j}, C, R) \;\propto\; \frac{N_{kd\backslash j}+\alpha}{N_{d\backslash j}+\alpha K} \cdot \frac{N_{k w_j \backslash j}+\beta}{N_{k\backslash j}+\beta W} \cdot \left( \frac{N_{kd\backslash j}+1}{N_{kd\backslash j}} \right)^{M'_{kd}} \left( \frac{N_d - 1}{N_d} \right)^{M_d},
where \j represents the count when excluding the nth word in the dth document. Given the current
state of all but one variable, ri , where i = (d, m), the assignment of either relevant or irrelevant to
the mth annotation in the dth document is estimated as follows,
P(r_i = 0 \mid W, T, Z, C, R_{\backslash i}) \;\propto\; \frac{M_{0\backslash i}+\eta}{M_{\backslash i}+2\eta} \cdot \frac{M_{0 t_i \backslash i}+\gamma}{M_{0\backslash i}+\gamma T},

P(r_i = 1 \mid W, T, Z, C, R_{\backslash i}) \;\propto\; \frac{M_{\backslash i}-M_{0\backslash i}+\eta}{M_{\backslash i}+2\eta} \cdot \frac{M_{c_i t_i \backslash i}+\gamma}{M_{c_i \backslash i}+\gamma T}.   (2)
The assignment of a topic to a content-unrelated annotation is estimated as follows,
P(c_i = k \mid r_i = 0, W, T, Z, C_{\backslash i}, R_{\backslash i}) \;\propto\; \frac{N_{kd}}{N_d},   (3)
and the assignment of a topic to a content-related annotation is estimated as follows,
P(c_i = k \mid r_i = 1, W, T, Z, C_{\backslash i}, R_{\backslash i}) \;\propto\; \frac{M_{k t_i \backslash i}+\gamma}{M_{k\backslash i}+\gamma T} \cdot \frac{N_{kd}}{N_d}.   (4)
The parameters \alpha, \beta, \gamma, and \eta can be estimated by maximizing the joint distribution (1) by the
fixed-point iteration method described in [21].
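The full sampler can be sketched end-to-end in plain Python. This is a minimal illustration under our own state layout (slot 0 of the annotation counts plays the role of \psi_0, slot k+1 that of topic k); it is not the authors' code. For the z_j update, the P(C|Z) factor is computed as the full product \prod_k N_{kd}^{M'_{kd}} rather than the simplified ratio above, which avoids a division by zero when a topic loses its last word; factors that are constant in k, such as N_{d\backslash j}+\alpha K and N_d^{M_d}, are dropped. For each annotation, r_i is drawn from Eq. (2) and then c_i from Eq. (3) or (4), matching the sequential scheme in the text.

```python
import random

def sample_index(weights, rng):
    """Draw an index with probability proportional to the given weights."""
    s = sum(weights)
    u, acc = rng.random() * s, 0.0
    for i, w in enumerate(weights):
        acc += w
        if u < acc:
            return i
    return len(weights) - 1

def init_state(docs, K, W, T, rng):
    """Randomly initialize assignments z, c, r and all count statistics."""
    st = {"docs": docs, "K": K, "W": W, "T": T,
          "Nkd": [[0] * K for _ in docs], "Nkw": [[0] * W for _ in range(K)],
          "Nk": [0] * K, "Mkt": [[0] * T for _ in range(K + 1)],
          "Msl": [0] * (K + 1), "Mp": [[0] * K for _ in docs],
          "z": [], "c": [], "r": [], "M": 0, "M0": 0}
    for d, doc in enumerate(docs):
        zd = [rng.randrange(K) for _ in doc["w"]]
        st["z"].append(zd)
        for zn, wn in zip(zd, doc["w"]):
            st["Nkd"][d][zn] += 1; st["Nkw"][zn][wn] += 1; st["Nk"][zn] += 1
        cd, rd = [], []
        for t in doc["t"]:
            c, r = rng.choice(zd), rng.randrange(2)
            cd.append(c); rd.append(r)
            slot = 0 if r == 0 else c + 1
            st["Mkt"][slot][t] += 1; st["Msl"][slot] += 1; st["Mp"][d][c] += 1
            st["M"] += 1; st["M0"] += (r == 0)
        st["c"].append(cd); st["r"].append(rd)
    return st

def gibbs_sweep(st, alpha, beta, gamma, eta, rng):
    """One collapsed Gibbs sweep: resample every z_j, then every (r_i, c_i)."""
    K, W, T = st["K"], st["W"], st["T"]
    Nkd, Nkw, Nk = st["Nkd"], st["Nkw"], st["Nk"]
    Mkt, Msl, Mp = st["Mkt"], st["Msl"], st["Mp"]
    for d, doc in enumerate(st["docs"]):
        for n, wj in enumerate(doc["w"]):
            k0 = st["z"][d][n]
            Nkd[d][k0] -= 1; Nkw[k0][wj] -= 1; Nk[k0] -= 1
            ws = []
            for k in range(K):
                w = (Nkd[d][k] + alpha) * (Nkw[k][wj] + beta) / (Nk[k] + beta * W)
                for k2 in range(K):            # P(C|Z) factor: prod_k N_kd^{M'_kd}
                    if Mp[d][k2]:
                        w *= (Nkd[d][k2] + (k2 == k)) ** Mp[d][k2]
                ws.append(w)
            k1 = sample_index(ws, rng)
            st["z"][d][n] = k1
            Nkd[d][k1] += 1; Nkw[k1][wj] += 1; Nk[k1] += 1
        for m, t in enumerate(doc["t"]):
            r0, c0 = st["r"][d][m], st["c"][d][m]
            slot = 0 if r0 == 0 else c0 + 1    # remove annotation i from counts
            Mkt[slot][t] -= 1; Msl[slot] -= 1; Mp[d][c0] -= 1
            st["M0"] -= (r0 == 0)
            Mi, M0i = st["M"] - 1, st["M0"]
            p0 = (M0i + eta) * (Mkt[0][t] + gamma) / (Msl[0] + gamma * T)
            p1 = (Mi - M0i + eta) * (Mkt[c0 + 1][t] + gamma) / (Msl[c0 + 1] + gamma * T)
            r1 = 0 if rng.random() * (p0 + p1) < p0 else 1          # Eq. (2)
            if r1 == 0:                                             # Eq. (3)
                c1 = sample_index([Nkd[d][k] for k in range(K)], rng)
            else:                                                   # Eq. (4)
                c1 = sample_index([(Mkt[k + 1][t] + gamma) / (Msl[k + 1] + gamma * T)
                                   * Nkd[d][k] for k in range(K)], rng)
            st["r"][d][m], st["c"][d][m] = r1, c1
            slot = 0 if r1 == 0 else c1 + 1
            Mkt[slot][t] += 1; Msl[slot] += 1; Mp[d][c1] += 1
            st["M0"] += (r1 == 0)
```

After a burn-in of sweeps, the fraction of annotations with r = 1 estimates content relevance, and the count matrices yield the point estimates used in the experiments below.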
3 Experiments
3.1 Synthetic content-unrelated annotations
We evaluated the proposed method quantitatively by using labeled text data from the 20 Newsgroups
corpus [18] and adding synthetic content-unrelated annotations. The corpus contains about 20,000
articles categorized into 20 discussion groups. We considered these 20 categories as content-related
annotations, and we also randomly attached dummy categories to training samples as contentunrelated annotations. We created two types of training data, 20News1 and 20News2, where the
4
former was used for evaluating the proposed method when analyzing data with different numbers
of content-unrelated annotations per document, and the latter was used with different numbers of
unique content-unrelated annotations. Specifically, in the 20News1 data, the unique number of
content-unrelated annotations was set at ten, and the number of content-unrelated annotations per
document was set at {1, ? ? ? , 10}. In the 20News2 data, the unique number of content-unrelated
annotations was set at {1, ? ? ? , 10}, and the number of content-unrelated annotations per document
was set at one. We omitted stop-words and words that occurred only once. The vocabulary size was
52,647. We sampled 100 documents from each of the 20 categories, for a total of 2,000 documents.
We used 10 % of the samples as test data.
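The construction of this synthetic training data can be sketched as follows. The dummy category names, the function name, and the choice of sampling dummies without replacement per document are our own assumptions about one reasonable reading of the setup.

```python
import random

def add_dummy_annotations(doc_labels, n_unique_dummies, n_per_doc, seed=0):
    """Attach content-unrelated dummy categories to each document's true
    category label, mirroring the 20News1/20News2 construction."""
    rng = random.Random(seed)
    dummies = ["dummy%d" % i for i in range(n_unique_dummies)]
    annotated = []
    for label in doc_labels:
        # Each document keeps its true (content-related) category and gains
        # n_per_doc randomly chosen content-unrelated dummy categories.
        annotated.append([label] + rng.sample(dummies, n_per_doc))
    return annotated
```

For 20News1, `n_unique_dummies` is fixed at 10 and `n_per_doc` varies from 1 to 10; for 20News2, `n_per_doc` is fixed at 1 and `n_unique_dummies` varies from 1 to 10.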
We compared the proposed method with MaxEnt and Corr-LDA. MaxEnt represents a maximum
entropy model [22] that estimates the probability distribution that maximizes entropy under the
constraints imposed by the given data. MaxEnt is a discriminative classifier and achieves high performance as regards text classification. In MaxEnt, the hyper-parameter that maximizes the performance was chosen from {10^{-3}, 10^{-2}, 10^{-1}, 1}, and the input word count vector was normalized so
that the sum of the elements was one. Corr-LDA [2] is a topic model for words and annotations that
does not take the relevance to content into consideration. For the proposed method and Corr-LDA,
we set the number of latent topics, K, to 20, and estimated latent topics and parameters by using
collapsed Gibbs sampling and the fixed-point iteration method, respectively.
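The text does not spell out the fixed-point iteration itself. As an illustration, the sketch below shows one standard update of this kind: Minka's fixed point for a symmetric Dirichlet-multinomial, applied to \alpha, with a hand-rolled digamma function since the Python standard library lacks one. Treat the exact update formula as our assumption about what the cited method [21] prescribes.

```python
import math

def digamma(x):
    """Digamma function psi(x) for x > 0, via the recurrence
    psi(x) = psi(x+1) - 1/x plus an asymptotic series for large x."""
    r = 0.0
    while x < 6.0:
        r -= 1.0 / x
        x += 1.0
    inv2 = 1.0 / (x * x)
    return r + math.log(x) - 0.5 / x - inv2 * (1/12. - inv2 * (1/120. - inv2 / 252.))

def update_alpha(alpha, Nkd, n_iter=100):
    """Fixed-point updates for a symmetric Dirichlet parameter alpha,
    maximizing prod_d Gamma(aK)/Gamma(N_d+aK) prod_k Gamma(N_kd+a)/Gamma(a)."""
    D, K = len(Nkd), len(Nkd[0])
    Nd = [sum(row) for row in Nkd]
    for _ in range(n_iter):
        num = sum(digamma(Nkd[d][k] + alpha) - digamma(alpha)
                  for d in range(D) for k in range(K))
        den = K * sum(digamma(Nd[d] + K * alpha) - digamma(K * alpha)
                      for d in range(D))
        alpha *= num / den
    return alpha
```

The same multiplicative update applies, with the obvious substitutions, to \beta, \gamma, and the beta parameter \eta.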
We evaluated the predictive performance of each method using the perplexity of held-out content-related annotations given the content. A lower perplexity represents higher predictive performance.
In the proposed method, we calculated the probability of content-related annotation t in the dth
document given the training samples as follows, P(t|d, D) \propto \sum_k \hat{\theta}_{dk} \hat{\psi}_{kt}, where \hat{\theta}_{dk} = \frac{N_{kd}}{N_d} is a
point estimate of the topic proportions for annotations, and \hat{\psi}_{kt} = \frac{M_{kt}+\gamma}{M_k+\gamma T} is a point estimate of the
annotation multinomial distribution. Note that no content-unrelated annotations were attached to the
test samples. The average perplexities and standard deviations over ten experiments on the 20News1
and 20News2 data are shown in Figure 2 (a). In all cases, when content-unrelated annotations were
included, the proposed method achieved the lowest perplexity, indicating that it can appropriately
predict content-related annotations. Although the perplexity achieved by MaxEnt was slightly lower
than that of the proposed method without content-unrelated annotations, the performance of MaxEnt
deteriorated greatly when even one content-unrelated annotation was attached. Since MaxEnt is a
supervised classifier, it considers all attached annotations to be content-related even if they are not.
Therefore, its perplexity is significantly high when there are fewer content-related annotations per
document than unrelated annotations as with the 20News1 data. In contrast, since the proposed
method considers the relevance to the content for each annotation, it always offered low perplexity
even if the number of content-unrelated annotations was increased. The perplexity achieved by
Corr-LDA was high because it does not consider the relevance to the content as in MaxEnt.
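The held-out annotation perplexity used above can be computed directly from the count statistics. Here is a minimal sketch with our own function and argument names; since both point estimates are normalized, \sum_k \hat{\theta}_{dk} \hat{\psi}_{kt} is already a proper probability.

```python
import math

def annotation_perplexity(test_annots, Nkd, Mkt, gamma, T):
    """Perplexity exp(-mean log P(t|d, D)) of held-out annotations.

    test_annots[d] : list of held-out annotation ids for test document d
    Nkd[d][k]      : topic counts for the words of test document d (theta-hat)
    Mkt[k][t]      : counts of annotation t assigned to content-related topic k
    """
    K = len(Mkt)
    Mk = [sum(row) for row in Mkt]
    log_lik, n = 0.0, 0
    for d, annots in enumerate(test_annots):
        Nd = sum(Nkd[d])
        theta = [Nkd[d][k] / Nd for k in range(K)]          # theta-hat_dk
        for t in annots:
            # P(t|d, D) = sum_k theta-hat_dk * psi-hat_kt
            p = sum(theta[k] * (Mkt[k][t] + gamma) / (Mk[k] + gamma * T)
                    for k in range(K))
            log_lik += math.log(p)
            n += 1
    return math.exp(-log_lik / n)
```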
We evaluated the performance in terms of extracting content-related annotations. We considered extraction as a binary classification problem, in which each annotation is classified as either content-related or content-unrelated. As the evaluation measure, we used the F-measure, which is the
harmonic mean of precision and recall. We compared the proposed method to a baseline method
in which the annotations are considered to be content-related if any of the words in the annotations
appear in the document. In particular, when the category name is "comp.graphics", if "computer" or "graphics" appears in the document, it is considered to be content-related. We assume that the baseline method knows that content-unrelated annotations do not appear in any document. Therefore,
the precision of the baseline method is always one, because the number of false positive samples is
zero. Note that this baseline method does not support image data, because words in the annotations
never appear in the content. F-measures for the 20News1 and 20News2 data are shown in Figure 2 (b). A higher F-measure represents higher classification performance. The proposed method
achieved high F-measures with a wide range of ratios of content-unrelated annotations. All of the
F-measures achieved by the proposed method exceeded 0.89, and the F-measure without unrelated
annotations was one. This result implies that it can flexibly handle cases with different ratios of
content-unrelated annotations. The F-measures achieved by the baseline method were low because
annotations might be related to the content even if the annotations did not appear in the document.
On the other hand, the proposed method considers that annotations are related to the content when
the topic, or latent semantics, of the content and the topic of the annotations are similar even if the
annotations did not appear in the document.
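For reference, the baseline rule and the F-measure can be sketched as below. Splitting the category name on "." to obtain candidate words is our own assumption about the matching rule; the function names are ours as well.

```python
def baseline_relevance(doc_words, annotation):
    """Baseline: an annotation is judged content-related iff any word derived
    from it (e.g. 'comp' or 'graphics' from 'comp.graphics') occurs in the
    document's word set."""
    tokens = annotation.replace(".", " ").split()
    return any(tok in doc_words for tok in tokens)

def f_measure(predicted, actual):
    """Harmonic mean of precision and recall over binary relevance labels."""
    tp = sum(p and a for p, a in zip(predicted, actual))
    fp = sum(p and not a for p, a in zip(predicted, actual))
    fn = sum(a and not p for p, a in zip(predicted, actual))
    if tp == 0:
        return 0.0
    prec, rec = tp / (tp + fp), tp / (tp + fn)
    return 2 * prec * rec / (prec + rec)
```

Because the baseline never marks a content-unrelated annotation as related under the stated assumption, its false positives are zero and its precision is one; its recall (and hence F-measure) suffers whenever a related annotation shares no word with the document.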
[Figure 2: plots omitted. Top row: 20News1 (x-axis: number of content-unrelated annotations per document); bottom row: 20News2 (x-axis: number of unique content-unrelated annotations). Panels compare Proposed, Corr-LDA, and MaxEnt (perplexity), Proposed and Baseline (F-measure), and estimated versus true \hat{\lambda}.]
Figure 2: (a) Perplexities of the held-out content-related annotations, (b) F-measures of content
relevance, and (c) estimated content-related annotation ratios in 20News data.
Figure 2 (c) shows the content-related annotation ratios as estimated by the following equation,
\hat{\lambda} = \frac{M - M_0 + \eta}{M + 2\eta}, with the proposed method. The estimated ratios are about the same as the true
ratios.
3.2 Social annotations
We analyzed the following three sets of real social annotation data taken from two social bookmarking services and a photo sharing service, namely Hatena, Delicious, and Flickr.
From the Hatena data, we used web pages and their annotations in Hatena::Bookmark [12], which
is a social bookmarking service in Japan, that were collected using a similar method to that used
in [25, 27]. Specifically, first, we obtained a list of URLs of popular bookmarks for October 2008.
We then obtained a list of users who had bookmarked the URLs in the list. Next, we obtained a new
list of URLs that had been bookmarked by the users. By iterating the above process, we collected
a set of web pages and their annotations. We omitted stop-words and words and annotations that
occurred in fewer than ten documents. We omitted documents with fewer than ten unique words
and also omitted those without annotations. The numbers of documents, unique words, and unique
annotations were 39,132, 8,885, and 43,667, respectively. From the Delicious data, we used web
pages and their annotations [7] that were collected using the same method used for the Hatena data.
The numbers of documents, unique words, and unique annotations were 65,528, 30,274, and 21,454,
respectively. From the Flickr data, we used photographs and their annotations Flickr [9] that were
collected in November 2008 using the same method used for the Hatena data. We transformed photo
images into visual words by using scale-invariant feature transformation (SIFT) [20] and k-means as
described in [6]. We omitted annotations that were attached to fewer than ten images. The numbers
of images, unique visual words, and unique annotations were 12,711, 200, and 2,197, respectively.
For the experiments, we used 5,000 documents that were randomly sampled from each data set.
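The visual-word construction described above (quantizing local descriptors against k-means centers) can be sketched as follows; the random arrays stand in for SIFT descriptors and learned centroids, and the sizes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
descriptors = rng.normal(size=(300, 32))  # stand-in for SIFT descriptors of one image
centroids = rng.normal(size=(50, 32))     # stand-in for k-means cluster centers

# Assign each descriptor to its nearest centroid: the "visual word".
dists = np.linalg.norm(descriptors[:, None, :] - centroids[None, :, :], axis=2)
words = dists.argmin(axis=1)

# The image is then represented by its histogram of visual-word counts.
histogram = np.bincount(words, minlength=len(centroids))
```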
Figure 3 (a)(b)(c) shows the average perplexities over ten experiments and their standard deviation
for held-out annotations in the three real social annotation data sets with different numbers of topics.
Figure 3 (d) shows the result with the Patent data as an example of data without content unrelated
annotations. The Patent data consist of patents published in Japan from January to March in 2004,
to which International Patent Classification (IPC) codes were attached by experts according to their
content. The numbers of documents, unique words, and unique annotations (IPC codes) were 9,557,
[Figure 3 plot data omitted: perplexity as a function of the number of topics (0-100) for the Proposed method and CorrLDA, in panels (a) Hatena, (b) Delicious, (c) Flickr, and (d) Patent.]
Figure 3: Perplexities of held-out annotations with different numbers of topics in social annotation
data (a)(b)(c), and in data without content unrelated annotations (d).
canada banking toread
London river london history reference imported England
blog ruby rails cell person misc Ruby plugin cpu ajax javascript exif php
future distribution internet prediction Internet computer computers no_tag bandwidth
film Art good mindfuck movies list blog
ricette cucina cooking italy search recipes italian food cook news reference searchengine list italiano links
ruby git diff useful triage imported BookmarksBar blog
SSD toread ssd
c# interview programming C# .net todo language tips microsoft
google gmail googlecalendar Web-2.0 Gmail via:mento.info
Figure 4: Examples of content-related annotations in the Delicious data extracted by the proposed
method. Each row shows annotations attached to a document; content-unrelated annotations are
shaded.
104,621, and 6,117, respectively. With the Patent data, the perplexities of the proposed method
and Corr-LDA were almost the same. On the other hand, with the real social annotation data, the
proposed method achieved much lower perplexities than Corr-LDA. This result implies that it is
important to consider relevance to the content when analyzing noisy social annotation data. The
perplexity of Corr-LDA with social annotation data gets worse as the number of topics increases
because Corr-LDA overfits noisy content-unrelated annotations.
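Perplexity here is the usual held-out measure, the exponential of the negative mean log-likelihood of the held-out annotations (lower is better). A minimal sketch, with placeholder log-probabilities standing in for the model's predictive probabilities:

```python
import numpy as np

def perplexity(log_probs):
    """exp(- mean log-likelihood) over held-out annotation tokens."""
    log_probs = np.asarray(log_probs, dtype=float)
    return float(np.exp(-log_probs.mean()))

# Sanity check: uniform predictions over V choices give perplexity V.
V = 50
uniform_lp = np.full(200, np.log(1.0 / V))  # 200 held-out tokens
pp = perplexity(uniform_lp)
```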
The upper half of each table in Table 2 shows probable content-unrelated annotations in the leftmost
column, and probable annotations for some topics, which were estimated with the proposed method
using 50 topics. The lower half in (a) and (b) shows probable words in the content for each topic.
With the Hatena data, we translated Japanese words into English, and we omitted words that had
the same translated meaning in a topic. For content-unrelated annotations, words that seemed to
be irrelevant to the content were extracted, such as "toread", "later", "*", "?", "imported", "2008",
"nikon", and "cannon". Each topic has characteristic annotations and words, for example, Topic1
in the Hatena data is about programming, Topic2 is about games, and Topic3 is about economics.
Figure 4 shows some examples of the extraction of content-related annotations.
Table 2: The ten most probable content-unrelated annotations (leftmost column), and the ten most
probable annotations for some topics (other columns), estimated with the proposed method using 50
topics. Each column represents one topic. The lower half in (a) and (b) shows probable words in the
content.
(a) Hatena
unrelated: toread, web, later, great, document, troll, *, ?, summary, memo
Topic1 annotations: programming, development, dev, webdev, php, java, software, ruby, opensource, softwaredev; words: development, web, series, hp, technology, management, source, usage, project, system
Topic2 annotations: game, animation, movie, Nintendo, movie, event, xbox360, DS, PS3, animation; words: game, animation, movie, story, work, create, PG, mr, interesting, world
Topic3 annotations: economics, finance, society, business, economy, reading, investment, japan, money, company; words: year, article, finance, economics, investment, company, day, management, information, nikkei
Topic4 annotations: science, research, biology, study, psychology, mathematics, pseudoscience, knowledge, education, math; words: science, researcher, answer, spirit, question, human, ehara, proof, mind, brain
Topic5 annotations: food, cooking, gourmet, recipe, cook, life, fooditem, foods, alcohol, foodie; words: eat, use, omission, water, decision, broil, face, input, miss, food
Topic6 annotations: linux, tips, windows, security, server, network, unix, mysql, mail, Apache; words: in, setting, file, server, case, mail, address, connection, access, security
Topic7 annotations: politics, international, oversea, society, history, china, world, international, usa, news; words: japan, country, usa, china, politics, aso, mr, korea, human, people
Topic8 annotations: pc, apple, iphone, hardware, gadget, mac, cupidity, technology, ipod, electronics; words: yen, product, digital, pc, support, in, note, price, equipment, model
Topic9 annotations: medical, health, lie, government, agriculture, food, mentalhealth, mental, environment, science; words: rice, banana, medical, diet, hospital, poison, eat, incident, korea, jelly
(b) Delicious
unrelated: reference, web, imported, design, internet, online, cool, toread, tools, blog
Topic1 annotations: money, finance, economics, business, economy, Finance, financial, investing, bailout, finances; words: money, financial, credit, market, economic, october, economy, banks, government, bank
Topic2 annotations: video, music, videos, fun, entertainment, funny, movies, media, Video, film; words: music, video, link, tv, movie, itunes, film, amazon, play, interview
Topic3 annotations: opensource, software, programming, development, linux, tools, rails, ruby, webdev, rubyonrails; words: project, code, server, ruby, rails, source, file, version, files, development
Topic4 annotations: food, recipes, recipe, cooking, Food, Recipes, baking, health, vegetarian, diy; words: recipe, food, recipes, make, wine, made, add, love, eat, good
Topic5 annotations: windows, linux, sysadmin, Windows, security, computer, microsoft, network, Linux, ubuntu; words: windows, system, microsoft, linux, software, file, server, user, files, ubuntu
Topic6 annotations: art, photo, photography, photos, Photography, Art, inspiration, music, foto, fotografia; words: art, photography, photos, camera, vol, digital, images, 2008, photo, tracks
Topic7 annotations: shopping, shop, Shopping, home, wishlist, buy, store, fashion, gifts, house; words: buy, online, price, cheap, product, order, free, products, rating, card
Topic8 annotations: iphone, mobile, hardware, games, iPhone, apple, tech, gaming, mac, game; words: iphone, apple, ipod, mobile, game, games, pc, phone, mac, touch
Topic9 annotations: education, learning, books, book, language, library, school, teaching, Education, research; words: book, legal, theory, books, law, university, students, learning, education, language
(c) Flickr
unrelated: 2008, nikon, canon, white, yellow, red, photo, italy, california, color
Topic1: dance, bar, dc, digital, concert, bands, music, washingtondc, dancing, work
Topic2: sea, sunset, sky, clouds, mountains, ocean, panorama, south, ireland, oregon
Topic3: autumn, trees, tree, mountain, fall, garden, bortescristian, geotagged, mud, natura
Topic4: rock, house, party, park, inn, coach, creature, halloween, mallory, night
Topic5: beach, travel, vacation, camping, landscape, texas, lake, cameraphone, md, sun
Topic6: family, portrait, cute, baby, boy, kids, brown, closeup, 08, galveston
Topic7: island, asia, landscape, rock, blue, tour, plant, tourguidesoma, koh, samui
Conclusion
We have proposed a topic model for extracting content-related annotations from noisy annotated
data. We have confirmed experimentally that the proposed method can extract content-related annotations appropriately, and can be used for analyzing social annotation data. In future work, we will
determine the number of topics automatically by extending the proposed model to a nonparametric Bayesian model such as the Dirichlet process mixture model [24]. Since the proposed method
is, theoretically, applicable to various kinds of annotation data, we will confirm this in additional
experiments.
References
[1] K. Barnard, P. Duygulu, D. Forsyth, N. de Freitas, D. M. Blei, and M. I. Jordan. Matching words and pictures. Journal of Machine Learning Research, 3:1107-1135, 2003.
[2] D. M. Blei and M. I. Jordan. Modeling annotated data. In SIGIR '03: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 127-134, 2003.
[3] D. M. Blei, A. Y. Ng, and M. I. Jordan. Latent Dirichlet allocation. Journal of Machine Learning Research, 3:993-1022, 2003.
[4] C. Chemudugunta, P. Smyth, and M. Steyvers. Modeling general and specific aspects of documents with a probabilistic topic model. In B. Schölkopf, J. Platt, and T. Hoffman, editors, Advances in Neural Information Processing Systems 19, pages 241-248. MIT Press, 2007.
[5] CiteULike. http://www.citeulike.org.
[6] G. Csurka, C. Dance, J. Willamowski, L. Fan, and C. Bray. Visual categorization with bags of keypoints. In ECCV International Workshop on Statistical Learning in Computer Vision, 2004.
[7] Delicious. http://delicious.com.
[8] S. Feng, R. Manmatha, and V. Lavrenko. Multiple Bernoulli relevance models for image and video annotation. In CVPR '04: Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 2, pages 1002-1009, 2004.
[9] Flickr. http://flickr.com.
[10] S. Golder and B. A. Huberman. Usage patterns of collaborative tagging systems. Journal of Information Science, 32(2):198-208, 2006.
[11] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy of Sciences, 101 Suppl 1:5228-5235, 2004.
[12] Hatena::Bookmark. http://b.hatena.ne.jp.
[13] T. Hofmann. Probabilistic latent semantic analysis. In UAI '99: Proceedings of the 15th Conference on Uncertainty in Artificial Intelligence, pages 289-296, 1999.
[14] T. Hofmann. Collaborative filtering via Gaussian probabilistic latent semantic analysis. In Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 259-266. ACM Press, 2003.
[15] T. Iwata, T. Yamada, and N. Ueda. Probabilistic latent semantic visualization: topic model for visualizing documents. In KDD '08: Proceedings of the 14th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pages 363-371. ACM, 2008.
[16] J. Jeon, V. Lavrenko, and R. Manmatha. Automatic image annotation and retrieval using cross-media relevance models. In SIGIR '03: Proceedings of the 26th Annual International ACM SIGIR Conference on Research and Development in Information Retrieval, pages 119-126. ACM, 2003.
[17] J. Jeon and R. Manmatha. Using maximum entropy for automatic image annotation. In CIVR '04: Proceedings of the 3rd International Conference on Image and Video Retrieval, pages 24-32, 2004.
[18] K. Lang. NewsWeeder: learning to filter netnews. In ICML '95: Proceedings of the 12th International Conference on Machine Learning, pages 331-339, 1995.
[19] Last.fm. http://www.last.fm.
[20] D. G. Lowe. Distinctive image features from scale-invariant keypoints. International Journal of Computer Vision, 60(2):91-110, 2004.
[21] T. Minka. Estimating a Dirichlet distribution. Technical report, M.I.T., 2000.
[22] K. Nigam, J. Lafferty, and A. McCallum. Using maximum entropy for text classification. In Proceedings of the IJCAI-99 Workshop on Machine Learning for Information Filtering, pages 61-67, 1999.
[23] Technorati. http://technorati.com.
[24] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei. Hierarchical Dirichlet processes. Journal of the American Statistical Association, 101(476):1566-1581, 2006.
[25] X. Wu, L. Zhang, and Y. Yu. Exploring social annotations for the semantic web. In WWW '06: Proceedings of the 15th International Conference on World Wide Web, pages 417-426. ACM, 2006.
[26] YouTube. http://www.youtube.com.
[27] D. Zhou, J. Bian, S. Zheng, H. Zha, and C. L. Giles. Exploring social annotations for information retrieval. In WWW '08: Proceedings of the 17th International Conference on World Wide Web, pages 715-724. ACM, 2008.
No evidence for active sparsification
in the visual cortex
Pietro Berkes, Benjamin L. White, and József Fiser
Volen Center for Complex Systems
Brandeis University, Waltham, MA 02454
Abstract
The proposal that cortical activity in the visual cortex is optimized for sparse neural activity is one of the most established ideas in computational neuroscience.
However, direct experimental evidence for optimal sparse coding remains inconclusive, mostly due to the lack of reference values on which to judge the measured
sparseness. Here we analyze neural responses to natural movies in the primary
visual cortex of ferrets at different stages of development and of rats while awake
and under different levels of anesthesia. In contrast with predictions from a sparse
coding model, our data shows that population and lifetime sparseness decrease
with visual experience, and increase from the awake to anesthetized state. These
results suggest that the representation in the primary visual cortex is not actively
optimized to maximize sparseness.
1 Introduction
It is widely believed that one of the main principles underlying functional organization of the early
visual system is the reduction of the redundancy of relayed input from the retina. Such a transformation would form an optimally efficient code, in the sense that the amount of information transmitted
to higher visual areas would be maximal. Sparse coding refers to a possible implementation of this
general principle, whereby each stimulus is encoded by a small subset of neurons. This would allow
the visual system to transmit information efficiently and with a small number of spikes, improving
the signal-to-noise ratio, reducing the energy cost of encoding, improving the detection of "suspicious coincidences", and increasing storage capacity in associative memories [1, 2]. Computational
models that optimize the sparseness of the responses of hidden units to natural images have been
shown to reproduce the basic features of the receptive fields (RFs) of simple cells in V1 [3, 4, 5].
Moreover, manipulation of the statistics of the environment of developing animals leads to changes
in the RF structure that can be predicted by sparse coding models [6].
Unfortunately, attempts to verify this principle experimentally have so far remained inconclusive.
Electrophysiological studies performed in primary visual cortex agree in reporting high sparseness
values for neural activity [7, 8, 9, 10, 11, 12]. However, it is contested whether the high degree of
sparseness is due to a neural representation which is optimally sparse, or is an epiphenomenon due
to neural selectivity [10, 12]. This controversy is mostly due to a lack of reference measurement
with which to judge the sparseness of the neural representation in relative, rather than absolute
terms. Another problem is that most of these studies have been performed on anesthetized animals
[7, 9, 10, 11, 12], even though the effect of anesthesia might bias sparseness measurements (cf.
Sec. 6).
In this paper, we report results from electrophysiological recordings from primary visual cortex (V1)
of ferrets at various stages of development, from eye opening to adulthood, and of rats at different
levels of anesthesia, from awake to deeply anesthetized, with the goal of testing the optimality of
the neural code by studying changes in sparseness under different conditions. We compare this data
with theoretical predictions: 1) sparseness should increase with visual experience, and thus with
age, as the visual system adapts to the statistics of the visual environment; 2) sparseness should be
maximal in the "working regime" of the animal, i.e. for alert animals, and decrease with deeper
levels of anesthesia. In both cases, the neural data shows a trend opposite to the one expected in a
sparse coding system, suggesting that the visual system is not actively optimizing the sparseness of
its representation.
The paper is organized as follows: We first introduce and discuss the lifetime and population sparseness measures we will be using throughout the paper. Next, we present the classical, linear sparse
coding model of natural images, and derive an equivalent, stochastic neural network, whose output
firing rates correspond to Monte Carlo samples from the posterior distribution of visual elements
given an image. In the rest of the paper, we make use of this neural architecture in order to predict
changes in sparseness over development and under anesthesia, and compare these predictions with
electrophysiological recordings.
2 Lifetime and population sparseness
The diverse benefits of sparseness mentioned in the introduction rely on different aspects of the
neural code, which are captured to a different extent by two sparseness measures, referred to as
lifetime and population sparseness. Lifetime sparseness measures the distribution of the response
of an individual cell to a set of stimuli, and is thus related to the cell?s selectivity. This quantity
characterizes the energy costs of coding with a set of neurons. On the other hand, the assessment
of coding efficiency, as used by Treves and Rolls [13], is based upon the assumption that different
stimuli activate small, distinct subsets of cells. These requirements of efficient coding are based upon
the instantaneous population activity to stimuli and need to take into consideration the population
sparseness of neural response. Average lifetime and population sparseness are identical if the units
are statistically independent, in which case the distribution is called ergodic [10, 14]. In practice,
neural dependencies (Fig. 3C) and residual dependencies in models [15] cause the two measures to
be different.
Here we will use three measures of sparseness, two quantifying population sparseness, and one
lifetime sparseness. To make a comparison with previous studies easier, we computed population
and lifetime sparseness using a common measure introduced by Treves and Rolls [13] and perfected
by Vinje and Gallant [8]:
\mathrm{TR} = \left( 1 - \frac{\left( \sum_{i=1}^{N} |r_i| / N \right)^2}{\sum_{i=1}^{N} r_i^2 / N} \right) \Big/ \left( 1 - \frac{1}{N} \right) ,   (1)
where ri represents firing rates, and i indexes time in the case of lifetime sparseness, and neurons
for population sparseness. TR is defined between zero (less sparse) and one (more sparse), and
depends on the shape of the distribution. For monotonic, non-negative distributions, such as that of
firing rates, an exponential decay corresponds to TR = 0.5, and values smaller and larger than 0.5
indicate distributions with lighter and heavier tails, respectively [14]. For population sparseness, we
rescale the firing rate distribution by its standard deviation in time for the modelling results, and by
\sqrt{\sum_{t=1}^{T} r_t^2 / T} for experimental data, as firing rate is non-negative. Moreover, in neural recordings
we discard bins with no neural activity, as population TR is undefined in this case. TR does not
depend on multiplicative changes in firing rate, since it is invariant to rescaling the rates by a constant
factor. However, it is not invariant to additive firing rate changes. This seems to be adequate for
our purposes, as the arguments for sparseness involve metabolic costs and coding arguments like
redundancy reduction that are sensitive to overall firing rates. Previous studies have shown that
alternative measures of population and lifetime sparseness are highly correlated, therefore our choice
does not affect the final results [15, 10].
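Eq. 1 is a short computation on a vector of firing rates; a minimal sketch:

```python
import numpy as np

def treves_rolls(r):
    """Treves-Rolls/Vinje-Gallant sparseness (Eq. 1): 0 = dense, 1 = sparse."""
    r = np.abs(np.asarray(r, dtype=float))
    n = r.size
    a = (r.sum() / n) ** 2 / (np.sum(r ** 2) / n)
    return (1 - a) / (1 - 1 / n)

# A constant response is maximally dense; a one-hot response maximally sparse.
print(treves_rolls([3.0, 3.0, 3.0, 3.0]))  # 0.0
print(treves_rolls([1.0, 0.0, 0.0, 0.0]))  # 1.0
```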
We also report a second measure of population sparseness known as activity sparseness (AS), which
is a direct translation of the definition of sparse codes as having a small number of neurons active at
any time [15]:
\mathrm{AS} = 1 - n_t / N ,   (2)
Figure 1: Generative weights of the sparse coding model at the beginning (A) and end (B) of learning.
where nt is defined as the number of neurons with activity larger than a given threshold at time t,
and N is the number of units. AS = 1 means that no neuron was active above the threshold, while
AS = 0 means that all neurons were active. The threshold is set to be one standard deviation
for the modeling results, or equivalently the upper 68th percentile of the distribution for neural
firing rates. AS gives a very intuitive account of population sparseness, and is invariant to both
multiplicative and additive changes in firing rate. However, since it discards most of the information
about the shape of the distribution, it is a less sensitive measure than TR.
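Eq. 2 with the percentile threshold described above can be sketched as:

```python
import numpy as np

def activity_sparseness(rates):
    """Eq. 2: AS = 1 - n_t/N, with n_t the number of units whose rate
    exceeds the upper 68th percentile of the rate distribution."""
    rates = np.asarray(rates, dtype=float)
    threshold = np.percentile(rates, 68)
    n_active = np.sum(rates > threshold)
    return 1 - n_active / rates.size

# One unit out of four responds strongly:
print(activity_sparseness([0.0, 0.0, 0.0, 10.0]))  # 0.75
```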
3 Sparse coding model
The sparseness assumption that natural scenes can be described by a small number of elements
is generally translated into a model with sparsely distributed hidden units x_k, representing visual
elements, that combine linearly to form an image y [3]:
p(x_k) = p_{\mathrm{sparse}}(x_k) \propto \exp(f(x_k)) , \quad k = 1, \ldots, K   (3)
p(y|x) = \mathrm{Normal}(y; Gx, \sigma_y^2)   (4)
where K is the number of hidden units, G is the mixing matrix (also called the generative weights)
and ?y2 is the variance of the input noise. Here we set the sparse prior distribution to a Student-t
distribution with α degrees of freedom,

p(x_k) = \frac{1}{Z} \left( 1 + \frac{1}{\alpha} \left( \frac{x_k}{\lambda} \right)^2 \right)^{-\frac{\alpha+1}{2}} ,   (5)

with λ chosen such that the distribution has unit variance. This is a common prior for sparse coding models [3], and its analytical form allows the development of efficient inference and learning
algorithms [16, 17].
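As a concrete illustration of the generative model in Eqs. 3-5, the following sketch draws hidden causes from a unit-variance Student-t prior and mixes them linearly into a noisy patch; the weights G here are random stand-ins for the learned basis functions:

```python
import numpy as np

rng = np.random.default_rng(0)
D, K = 36, 48                     # input and hidden dimensions, as in the paper
alpha, sigma_y = 2.5, np.sqrt(0.08)

# Random generative weights with unit-norm columns (learned in the real model).
G = rng.normal(size=(D, K))
G /= np.linalg.norm(G, axis=0)

# Student-t samples rescaled to unit variance (Var of t_alpha is alpha/(alpha-2)).
x = rng.standard_t(alpha, size=K) * np.sqrt((alpha - 2) / alpha)

# Linear mixing plus Gaussian pixel noise (Eq. 4).
y = G @ x + sigma_y * rng.normal(size=D)
```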
The goal of learning is to adapt the model's parameters in order to best explain the observed data,
i.e., to maximize the marginal likelihood
X
XZ
log p(yt |G) =
log p(yt |x, G)p(x)dx
(6)
t
t
with respect to G. We learn the weights using a Variational Expectation Maximization (VEM)
algorithm, as described by Berkes et al. [17], with the difference that the generative weights are
not treated as random variables, but as parameters with norm fixed to 1, in order to avoid potential
confounds in successive analysis.
The model was applied to 9 × 9 pixel natural image patches, randomly chosen from 36 natural
images from the van Hateren database, preprocessed as described in [5]. The dimensionality of the
patches was reduced to 36 and the variances normalized by Principal Component Analysis. The
model parameters were chosen to be K = 48 and α = 2.5, a very sparse, slightly overcomplete
representation. These parameters are very close to the ones that were found to be optimal for natural
images [17]. The input noise was fixed to σ_y² = 0.08. The generative weights were initialized at
random, with norm 1. We performed 1500 iterations of the VEM algorithm, using a new batch of
3600 patches at each iteration. Fig. 1 shows the generative weights at the start and at the end of
learning. As expected from previous studies [3, 5], after learning the basis vectors are shaped like
Gabor wavelets and resemble simple cell RFs.
Figure 2: Neural implementation of Gibbs sampling in a sparse coding model. A) Neural network architecture.
B) Mode of the activation probability of a neuron as a function of the total (feed-forward and recurrent) input,
for a Student-t prior with α = 2.05 and unit variance.
4 Sampling, sparse coding neural network
In order to gain some intuition about the neural operations that may underlie inference in this model,
we derive an equivalent neural network architecture. It has been suggested that neural activity is
best interpreted as samples from the posterior probability of an internal, probabilistic model of the
sensory input. This assumption is consistent with many experimental observations, including high
trial-by-trial variability and spontaneous activity in awake animals [18, 19, 20]. Moreover, sampling
can be performed in parallel and asynchronously, making it suitable for a neural architecture. Assuming that neural activity corresponds to Gibbs sampling from the posterior probability over visual
elements in the sparse coding model, we obtain the following expression for the distribution of the
firing rate of a neuron, given a visual stimulus and the current state of the other neurons representing
the image [18]:
p(x_k | x_{i≠k}, y) ∝ p(y | x) p(x_k)    (7)
∝ exp( −(1/(2σ_y²)) (yᵀy − 2yᵀGx − xᵀRx) + f(x_k) ),    (8)
where R = −GᵀG. Expanding the exponent, eliminating the terms that do not depend on x_k, and
noting that R_kk = −1, since the generative weights have unit norm, we get
p(x_k | x_{i≠k}, y) ∝ exp( (1/σ_y²)(Σ_i G_ik y_i) x_k + (1/σ_y²)(Σ_{j≠k} R_jk x_j) x_k − (1/(2σ_y²)) x_k² + f(x_k) ).    (9)
Sampling in a sparse coding model can thus be achieved by a simple neural network, where the k-th
neuron integrates visual information through feed-forward connections from input y_i with weights
G_ik/σ_y², and information from other neurons via recurrent connections R_jk/σ_y² (Fig. 2A). Neural
activity is then generated stochastically according to Eq. 9: The exponential activation function gives
higher probability to higher rates with increasing input to the neuron, while the terms depending on
x_k² and f(x_k) penalize large firing rates. Fig. 2B shows the mode of the activation probability (Eq. 9)
as a function of the total input to a neuron.
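A minimal sketch of the Gibbs conditional of Eq. 9, evaluated on a grid of candidate values for one neuron. The problem sizes and the sparse log-prior f are assumed for illustration; the final check verifies that Eq. 9 agrees, up to an additive constant, with log p(y|x) + f(x_k) as x_k varies. The lam argument scales the recurrent term (lam = 1 recovers Eq. 9).

```python
import numpy as np

rng = np.random.default_rng(1)
D, K, sigma2 = 8, 12, 0.08
G = rng.standard_normal((D, K))
G /= np.linalg.norm(G, axis=0)            # unit-norm generative weights
R = -G.T @ G                              # recurrent weights; R[k, k] = -1
x = rng.standard_t(2.5, size=K)           # current state of the other neurons
y = G @ rng.standard_t(2.5, size=K) + np.sqrt(sigma2) * rng.standard_normal(D)

def log_prior(v):
    # Assumed Student-t-style sparse log-prior f(x_k); any f works for the check.
    return -2.5 * np.log1p(v ** 2 / 2.5)

def conditional_logp(k, grid, lam=1.0):
    """Unnormalized log p(x_k | x_{i!=k}, y) of Eq. 9 on a grid of candidate
    values; lam < 1 weakens the recurrent term."""
    ff = (G[:, k] @ y) / sigma2                                      # feed-forward drive
    rec = lam * (np.delete(R[:, k], k) @ np.delete(x, k)) / sigma2   # recurrent drive
    return (ff + rec) * grid - grid ** 2 / (2 * sigma2) + log_prior(grid)

# Sanity check: Eq. 9 matches log p(y|x) + f(x_k) up to an additive constant.
grid = np.linspace(-3.0, 3.0, 7)
direct = np.array([
    -np.sum((y - G @ np.concatenate(([v], x[1:]))) ** 2) / (2 * sigma2) + log_prior(v)
    for v in grid
])
eq9 = conditional_logp(0, grid)
assert np.allclose(eq9 - eq9[0], direct - direct[0])
```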
5 Active sparsification over learning
A simple, intuitive prediction for a system that optimizes for sparseness is that the sparseness of its
representation should increase over learning. Since a sparse coding system, including our model,
might not directly maximize our measures of sparseness, TR and AS, we verify this intuition by
analyzing the model's representation of images at various stages of learning. We selected at random
a new set of 1800 patches to be used as test stimuli. For every patch, we collected 50 Monte Carlo
samples, using Gibbs sampling (Eq. 9) combined with an annealing scheme that starts by drawing
samples from the model's prior distribution and continues to sample as the prior is deformed into
the posterior [21]. This procedure ensures that the final samples come from the whole posterior distribution, which is highly multimodal in overcomplete models, and therefore that our analysis is not
Figure 3: Development of sparseness, (A) over learning for the sparse coding model of natural images and (B)
over age for neural responses in ferrets. (A) The lines indicate the average sparseness over units and samples.
Error bars are one standard deviation over samples. Since the three measures have very different values, we
report the change in sparseness in percent of the first iteration. Colored text: absolute values of sparseness at
the end of learning. (B) The lines indicate the average sparseness for different animals. Error bars represent
standard error of the mean (SEM). (C) KL divergence between the distribution of neural responses and the
factorized distribution of neural responses. Error bars are SEM.
biased by the posterior distribution becoming more (or less) complex over learning. Fig. 3A shows
the evolution of sparseness with learning. As anticipated, both population and lifetime sparseness
increase monotonically.
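For concreteness, here is one standard convention for the Treves-Rolls sparseness, applied both as a lifetime measure (one unit across samples) and as a population measure (all units at one sample); the paper's exact normalization of TR and AS is defined earlier and may differ.

```python
import numpy as np

def treves_rolls(r):
    """Treves-Rolls sparseness of a nonnegative response vector r: 0 for
    perfectly uniform activity, approaching 1 as activity concentrates in a
    single entry. One common convention; normalizations vary in the
    literature."""
    r = np.asarray(r, dtype=float)
    n = r.size
    a = r.mean() ** 2 / np.mean(r ** 2)
    return (1.0 - a) / (1.0 - 1.0 / n)

# Toy rate matrix: rows are samples/time bins, columns are units.
rates = np.abs(np.random.default_rng(0).standard_t(2.5, size=(50, 48)))
lifetime = treves_rolls(rates[:, 0])      # one unit across samples (lifetime TR)
population = treves_rolls(rates[0, :])    # all units for one sample (population TR)
```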
Having confirmed our intuition with the sparse coding model, we turn to data from electrophysiological recordings. We analyzed multi-unit recordings from arrays of 16 electrodes implanted in the
primary visual cortex of 15 ferrets at various stages of development, from eye opening at postnatal
day 29 or 30 (P29-30) to adulthood at P151 (see Suppl Mat for experimental details). Over this
maturation period, the visual system of ferrets adapts to the statistics of the environment [22, 23].
For each animal, neural activity was recorded and collected in 10 ms bins for 15 sessions of 100
seconds each (for a total of 25 minutes), during which the animal was shown scenes from a movie.
We find that all three measures of sparseness decrease significantly with age¹. Thus, during a period
when the cortex actively adapts to the visual environment, the representation in primary visual cortex becomes less sparse, suggesting that the optimization of sparseness is not a primary objective for
learning in the visual system. The decrease in population sparseness seems to be due to an increase
in the dependencies between neurons: Fig. 3C shows the Kullback-Leibler divergence between the
joint distribution P of neural activity in 2 ms bins and the same distribution, factorized to eliminate
neural dependencies, i.e., P̃(r_1, . . . , r_N) := ∏_{i=1}^{N} P(r_i). The KL divergence increases with age
(Spearman's ρ = 0.73, P < 0.01), indicating an increase in neural dependencies.
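The comparison between the joint distribution and its factorized counterpart can be estimated from binarized data with a simple plug-in computation; the toy data here are hypothetical, and a real analysis would need bias corrections for limited sample sizes.

```python
import numpy as np
from collections import Counter

def kl_joint_vs_factorized(spikes):
    """Plug-in estimate (in bits) of KL(P(r_1,...,r_N) || prod_i P(r_i)) from
    binarized activity; spikes is a (time_bins, units) 0/1 array."""
    T = spikes.shape[0]
    joint = Counter(map(tuple, spikes))            # empirical joint pattern counts
    marg = spikes.mean(axis=0)                     # P(r_i = 1) for each unit
    kl = 0.0
    for pattern, count in joint.items():
        p = count / T
        q = np.prod([m if r else 1.0 - m for r, m in zip(pattern, marg)])
        kl += p * np.log2(p / q)
    return kl

rng = np.random.default_rng(0)
indep = (rng.random((2000, 4)) < 0.2).astype(int)              # independent units
shared = rng.random((2000, 1)) < 0.5                           # common drive
corr = ((rng.random((2000, 4)) < 0.2) | shared).astype(int)    # correlated units
```

For independent units the estimate stays near zero (up to finite-sample bias), while shared drive between units makes it substantially positive.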
6 Active sparsification and anesthesia
The sparse coding neural network architecture of Fig. 2 makes explicit that an optimal sparse coding
representation requires a process of active sparsification: In general, because of input noise and the
overcompleteness of the representation, there are multiple possible combinations of visual elements
that could account for a given image. To select among these combinations the most sparse solution,
a competition between possible alternative interpretations must occur.
Consider for example a simple system with one input variable and two hidden units, such that
y = x1 + 1.3·x2 + ε, with Gaussian noise ε. Given an observed value, y, there are infinitely many
solutions to this equality, as shown by the dotted line in Fig. 4B for y = 2. These stimulus-induced
correlations in the posterior are known as explaining away. Among all the solutions, the ones compatible with the sparse prior over x1 and x2 are given higher probability, giving rise to a bimodal
¹ Lifetime sparseness, TR: effect of age is significant, Spearman's ρ = −0.65, P < 0.01; differences in
mean between the four age groups in Fig. 3 are significant, ANOVA, P = 0.02, multiple comparison tests with
Tukey-Kramer correction shows the mean of group P29-30 is different from that of groups P83-92 and P129-151 with P < 0.05; Population sparseness, TR: Spearman's ρ = −0.75, P < 0.01; ANOVA P < 0.01,
multiple comparison shows the mean of group P29-30 is different from that of group P129-151 with P < 0.05;
Activity sparseness, AS: Spearman's ρ = −0.79, P < 0.01; ANOVA P < 0.01, multiple comparison shows
the mean of group P29-30 is different from that of groups P83-92 and P129-151 with P < 0.05.
Figure 4: Active sparsification. Contour lines correspond to the 5, 25, 50, 75, 90, and 95 percentile of the
distributions. A) Prior probability. B) Posterior probability given observed value y = 2. The dotted line
indicates all solutions to 2 = x1 + 1.3·x2. C) Posterior probability with weakened recurrent weights (λ = 0.5).
Figure 5: Active sparsification and anesthesia. A) Percent change in sparseness as the recurrent connections are
weakened for various values of λ. Error bars are one standard deviation over samples. Colored text: absolute
values of sparseness at the end of learning. B) Average sparseness measures for V1 responses at various levels
of anesthesia. Error bars are SEM.
distribution centered around the two sparse solutions x1 = 0, x2 = 1.54, and x1 = 2, x2 = 0.
From Eq. 9, it is clear that the recurrent connections are necessary in order to keep the activity of the
neurons on the solution line, while the stochastic activation function makes sparse neural responses
more likely. This active sparsification process is stronger for overcomplete representations, for when
the generative weights are non-orthogonal (in which case |R_jk| > 0), and for when the input noise
is large, which makes the contribution from the prior more important.
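The bimodal explaining-away geometry can be reproduced with a small grid computation; the sparse prior's exponent and scale below are assumed for illustration and are not taken from the paper.

```python
import numpy as np

# Posterior over (x1, x2) for y = x1 + 1.3*x2 + noise, evaluated on a grid.
sigma2, y = 0.01, 2.0
g = np.linspace(-1.0, 3.0, 401)
x1, x2 = np.meshgrid(g, g, indexing="ij")
log_prior = -2.5 * (np.log1p(x1 ** 2 / 0.05) + np.log1p(x2 ** 2 / 0.05))
log_lik = -((y - x1 - 1.3 * x2) ** 2) / (2 * sigma2)
logp = log_lik + log_prior
post = np.exp(logp - logp.max())
post /= post.sum()

def p_at(a, b):
    # Posterior value at the grid point nearest (a, b).
    return post[np.abs(g - a).argmin(), np.abs(g - b).argmin()]

# Mass piles up near the two sparse solutions on the line x1 + 1.3*x2 = 2,
# i.e. near (2, 0) and (0, 1.54), while the ridge midpoint is unlikely.
assert p_at(0.0, 1.54) > 10 * p_at(1.0, 0.77)
assert p_at(2.0, 0.0) > 10 * p_at(1.0, 0.77)
```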
In a system that optimizes sparseness, disrupting the active sparsification process will lead to lower
lifetime and population sparseness. For example, if we reduce the strength of the recurrent connections in the neural network architecture (Eq. 9) by a factor λ,
p(x_k | x_{i≠k}, y) ∝ exp( (1/σ_y²)(Σ_i G_ik y_i) x_k + (λ/σ_y²)(Σ_{j≠k} R_jk x_j) x_k − (1/(2σ_y²)) x_k² + f(x_k) ),    (10)
the neurons become more decoupled, and try to separately account for the input, as illustrated in
Fig. 4C. The decoupling will result in a reduction of population sparseness, as multiple neurons
become active to explain the same input. Also, lifetime sparseness will decrease, as the lack of
competition between units means that individual units will be active more often.
Fig. 5 shows the effect of reducing the strength of recurrent connections in the model of natural images. We analyzed the parameters of the sparse coding model at the end of learning, and substituted
the Gibbs sampling posterior distribution of Eq. 9 with the one in Eq. 10 for various values of λ. As
predicted, decreasing λ leads to a decrease in all sparseness measures.
We argue that a similar disruption of the active sparsification process can be obtained in electrophysiological experiments by comparing neural responses at different levels of isoflurane anesthesia. In
general, the evoked, feed-forward responses of V1 neurons under anesthesia are thought to remain
Figure 6: Neuronal response to a 3.75 Hz full-field stimulation under different levels of anesthesia. Error bars
are SEM. A) Signal and noise amplitudes. B) Signal-to-noise ratio.
largely intact: Despite a decrease in average firing rate, the selectivity of neurons to orientation,
frequency, and direction of motion has been shown to be very similar in awake and anesthetized
animals [24, 25, 26]. On the other hand, anesthesia disrupts contextual effects like figure-ground
modulation [26] and pattern motion [27], which are known to be mediated by top-down and recurrent connections. Other studies have shown that, at low concentrations, isoflurane anesthesia leaves
the visual input to the cortex mostly intact, while the intracortical recurrent and top-down signals
are disrupted [28, 29]. Thus, if the representation in the visual cortex is optimally sparse, disrupting
the active sparsification by anesthesia should decrease sparseness.
We analyzed multi-unit neural activity from bundles of 16 electrodes implanted in primary visual
cortex of 3 adult Long-Evans rats (5-11 units per recording session, for a total of 39 units). Recordings were made in the awake state and under four levels of anesthesia, from very light to deep (corresponding to concentrations of isoflurane between 0.6 and 2.0%) (see Suppl Mat for experimental
details). In order to confirm that the effect of the anesthetic does not prevent visual information to
reach the cortex, we presented the animals with a full-field periodic stimulus (flashing) at 3.75 Hz
for 2 min in the awake state, and 3 min under anesthesia. The Fourier spectrum of the spike trains
on individual channels shows sharp peaks at the stimulation frequency in all states. We measured the
response to the signal by the average amplitude of the Fourier spectrum between 3.7 and 3.8 Hz, and
defined the amplitude of the noise, due to spontaneous activity and neural variability, as the average
amplitude between 1 and 3.65 Hz (the amplitudes in this band are found to be noisy but uniform).
The amplitude of the evoked signal decreases with increasing isoflurane concentration, due to a decrease in overall firing rate; however, the background noise is also suppressed with anesthesia, so
that overall the signal-to-noise ratio does not decrease significantly with anesthesia (Fig. 6, ANOVA,
P=0.46).
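The signal and noise definitions can be reproduced on synthetic data: an inhomogeneous-Poisson spike train driven at 3.75 Hz, with the same signal (3.7-3.8 Hz) and noise (1-3.65 Hz) bands. The rates, duration, and binning below are made-up illustrative values, not the experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur, f0 = 1000.0, 120.0, 3.75      # 1 ms bins, 2 min of data, 3.75 Hz flash
t = np.arange(int(fs * dur)) / fs
rate = 0.03 * (1.0 + 0.9 * np.sin(2 * np.pi * f0 * t))   # spikes per bin
spikes = rng.poisson(rate)                               # inhomogeneous Poisson train

amp = np.abs(np.fft.rfft(spikes - spikes.mean()))        # amplitude spectrum
freqs = np.fft.rfftfreq(spikes.size, d=1.0 / fs)
signal = amp[(freqs >= 3.70) & (freqs <= 3.80)].mean()   # signal band, as in the text
noise = amp[(freqs >= 1.00) & (freqs <= 3.65)].mean()    # noise band, as in the text
snr = signal / noise
```

The spectrum shows a sharp peak at the stimulation frequency, and the band-averaged ratio quantifies how well the drive survives the shot noise.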
We recorded neural responses while the rats were shown a two-minute movie recorded from a camera mounted on the head of a person walking in the woods. Neural activity was collected in 25 ms
bins. All three sparseness measures increase significantly with increasing concentration of isoflurane² (Fig. 5B). Contrary to what is expected in a sparse-coding system, the data suggest that the
contribution of lateral and top-down connections in the awake state leads to a less sparse code.
7 Discussion
We examined multi-electrode recordings from primary visual cortex of ferrets over development,
and of rats at different levels of anesthesia. We found that, contrary to predictions based on theoretical considerations regarding optimal sparse coding systems, sparseness decreases with visual
experience, and increases with increasing concentration of anesthetic. These data suggest that the
² Lifetime sparseness, TR: ANOVA with different anesthesia groups, P < 0.01; multiple comparison tests
with Tukey-Kramer correction shows the mean of awake group is different from the mean of all other groups
with P < 0.05; Population sparseness, TR: ANOVA, P < 0.01; multiple comparison shows the mean of
the awake group is different from that of the light, medium, and deep anesthesia groups, P < 0.05; Activity
sparseness, AS: ANOVA P < 0.01, multiple comparison shows the mean of the awake group is different from
that of the light, medium, and deep anesthesia groups, P < 0.05.
high sparseness levels that have been reported in previous accounts of sparseness in the visual cortex
[7, 8, 9, 10, 11, 12], and which are otherwise consistent with our measurements (Fig. 3B, 5), are
most likely a side effect of the high selectivity of neurons, or an overestimation due to the effect of
anesthesia (Fig. 5; with the exception of [8], where sparseness was measured on awake animals),
but do not indicate an active optimization of sparse responses (cf. [10]).
Our measurements of sparseness from neural data are based on multi-unit recording. By collecting
spikes from multiple cells, we are in fact reporting a lower bound of the true sparseness values.
While a precise measurement of the absolute value of these quantities would require single-unit
measurement, our conclusions are based on relative comparisons of sparseness under different conditions, and are thus not affected.
Our theoretical predictions were verified with a common sparse coding model [3]. The model assumes linear summation in the generative process, and a particular sparse prior over the hidden units.
Despite these specific choices, we expect the model results to be general to the entire class of sparse
coding models. In particular, the choice of comparing neural responses with Monte Carlo samples
from the model's posterior distribution was taken in agreement with experimental results that report
high neural variability. Alternatively, one could assume a deterministic neural architecture, with
a network dynamic that would drive the activity of the units to values that maximize the image
probability [3, 30, 31]. In this scenario, neural activity would converge to one of the modes of the
distributions in Fig. 4, leading us to the same conclusions regarding the evolution of sparseness.
Although our analysis found no evidence for active sparsification in the primary visual cortex, ideas
derived from and closely related to the sparse coding principle are likely to remain important for our
understanding of visual processing. Efficient coding remains a most plausible functional account of
coding in more peripheral parts of the sensory pathway, and particularly in the retina, from where
raw visual input has to be sent through the bottleneck formed by the optic nerve without significant
loss of information [32, 33]. Moreover, computational models of natural images are being extended
from being strictly related to energy constraints and information transmission, to the more general
view of density estimation in probabilistic, generative models [34, 35]. This view is compatible with
our finding that the representation in the visual cortex becomes more dependent with age, and is less
sparse in the awake condition than under anesthesia: We speculate that such dependencies reflect
inference in a hierarchical generative model, where signals from lateral, recurrent connections in V1
and from feedback projections from higher areas are integrated with incoming evidence, in order to
solve ambiguities at the level of basic image features using information from a global interpretation
of the image [26, 19, 27, 20].
References
[1] D.J. Field. What is the goal of sensory coding? Neural Computation, 6(4):559–601, 1994.
[2] B.A. Olshausen and D.J. Field. Sparse coding of sensory inputs. Current Opinion in Neurobiology, 14(4):481–487, 2004.
[3] B.A. Olshausen and D.J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 381(6583):607–609, 1996.
[4] A.J. Bell and T.J. Sejnowski. The "independent components" of natural scenes are edge filters. Vision Research, 37(23):3327–3338, 1997.
[5] J.H. van Hateren and A. van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proc. R. Soc. Lond. B, 265:359–366, 1998.
[6] A.S. Hsu and P. Dayan. An unsupervised learning model of neural plasticity: Orientation selectivity in goggle-reared kittens. Vision Research, 47(22):2868–2877, 2007.
[7] R. Baddeley, L.F. Abbott, M.C.A. Booth, F. Sengpiel, T. Freeman, E. Wakeman, and E.T. Rolls. Responses of neurons in primary and inferior temporal visual cortices to natural scenes. Proceedings of the Royal Society B: Biological Sciences, 264(1389):1775–1783, 1997.
[8] W.E. Vinje and J.L. Gallant. Sparse coding and decorrelation in primary visual cortex during natural vision. Science, 297(5456):1273–1276, 2000.
[9] M. Weliky, J. Fiser, R.H. Hunt, and D.N. Wagner. Coding of natural scenes in primary visual cortex. Neuron, 37(4):703–718, 2003.
[10] S.R. Lehky, T.J. Sejnowski, and R. Desimone. Selectivity and sparseness in the responses of striate complex cells. Vision Research, 45(1):57–73, 2005.
[11] S.C. Yen, J. Baker, and C.M. Gray. Heterogeneity in the responses of adjacent neurons to natural stimuli in cat striate cortex. Journal of Neurophysiology, 97(2):1326–1341, 2007.
[12] D.J. Tolhurst, D. Smyth, and I.D. Thompson. The sparseness of neuronal responses in ferret primary visual cortex. Journal of Neuroscience, 29(9):2355–2370, 2009.
[13] A. Treves and E.T. Rolls. What determines the capacity of autoassociative memories in the brain? Network: Computation in Neural Systems, 2(4):371–397, 1991.
[14] P. Foldiak and D. Endres. Sparse coding. Scholarpedia, 3(1):2984, 2008.
[15] B. Willmore and D.J. Tolhurst. Characterizing the sparseness of neural codes. Network: Computation in Neural Systems, 12:255–270, 2001.
[16] S. Osindero, M. Welling, and G.E. Hinton. Topographic product models applied to natural scene statistics. Neural Computation, 18:381–344, 2006.
[17] P. Berkes, R. Turner, and M. Sahani. On sparsity and overcompleteness in image models. In Advances in Neural Information Processing Systems, volume 20. MIT Press, Cambridge, MA, 2008.
[18] P.O. Hoyer and A. Hyvarinen. Interpreting neural response variability as Monte Carlo sampling of the posterior. In Advances in Neural Information Processing Systems, volume 15. MIT Press, Cambridge, MA, 2003.
[19] T.S. Lee and D. Mumford. Hierarchical Bayesian inference in the visual cortex. Journal of the Optical Society of America A, 20(7):1434–1448, 2003.
[20] P. Berkes, G. Orban, M. Lengyel, and J. Fiser. Matching spontaneous and evoked activity in V1: a hallmark of probabilistic inference. Frontiers in Systems Neuroscience, 2009. Conference abstract: Computational and systems neuroscience.
[21] S. Kirkpatrick, C.D. Gelatt, and M.P. Vecchi. Optimization by simulated annealing. Science, 220:671–680, 1983.
[22] B. Chapman and M.P. Stryker. Development of orientation selectivity in ferret visual cortex and effects of deprivation. Journal of Neuroscience, 13:5251–5262, 1993.
[23] L.E. White, D.M. Coppola, and D. Fitzpatrick. The contribution of sensory experience to the maturation of orientation selectivity in ferret visual cortex. Nature, 411:1049–1052, 2001.
[24] P.H. Schiller, B.L. Finlay, and S.F. Volman. Quantitative studies of single-cell properties in monkey striate cortex. I. Spatiotemporal organization of receptive fields. Journal of Neurophysiology, 39(6):1288–1319, 1976.
[25] D.M. Snodderly and M. Gur. Organization of striate cortex of alert, trained monkeys (Macaca fascicularis): ongoing activity, stimulus selectivity, and widths of receptive field activating regions. Journal of Neurophysiology, 74(5):2100–2125, 1995.
[26] V.A.F. Lamme, K. Zipser, and H. Spekreijse. Figure-ground activity in primary visual cortex is suppressed by anesthesia. PNAS, 95:3263–3268, 1998.
[27] K. Guo, P.J. Benson, and C. Blakemore. Pattern motion is present in V1 of awake but not anaesthetized monkeys. European Journal of Neuroscience, 19:1055–1066, 2004.
[28] O. Detsch, C. Vahle-Hinz, E. Kochs, M. Siemers, and B. Bromm. Isoflurane induces dose-dependent changes of thalamic somatosensory information transfer. Brain Research, 829:77–89, 1999.
[29] H. Hentschke, C. Schwarz, and A. Bernd. Neocortex is the major target of sedative concentrations of volatile anaesthetics: strong depression of firing rates and increase of GABA-A receptor-mediated inhibition. European Journal of Neuroscience, 21(1):93–102, 2005.
[30] P. Dayan and L.F. Abbott. Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. MIT Press, 2001.
[31] C.J. Rozell, D.H. Johnson, R.G. Baraniuk, and B.A. Olshausen. Sparse coding via thresholding and local competition in neural circuits. Neural Computation, 20:2526–2563, 2008.
[32] J.J. Atick. Could information theory provide an ecological theory of sensory processing? Network: Computation in Neural Systems, 3(2):213–251, 1992.
[33] V. Balasubramanian and M.J. Berry. Evidence for metabolically efficient codes in the retina. Network: Computation in Neural Systems, 13(4):531–553, 2002.
[34] Y. Karklin and M.S. Lewicki. A hierarchical Bayesian model for learning non-linear statistical regularities in non-stationary natural signals. Neural Computation, 17(2):397–423, 2005.
[35] M.J. Wainwright and E.P. Simoncelli. Scale mixtures of Gaussians and the statistics of natural images. In Advances in Neural Information Processing Systems. MIT Press, Cambridge, MA, 2000.
Riffled Independence for Ranked Data
Jonathan Huang, Carlos Guestrin
School of Computer Science,
Carnegie Mellon University
{jch1,guestrin}@cs.cmu.edu
Abstract
Representing distributions over permutations can be a daunting task due to
the fact that the number of permutations of n objects scales factorially in n.
One recent way that has been used to reduce storage complexity has been to
exploit probabilistic independence, but as we argue, full independence assumptions impose strong sparsity constraints on distributions and are unsuitable
for modeling rankings. We identify a novel class of independence structures,
called riffled independence, which encompasses a more expressive family of
distributions while retaining many of the properties necessary for performing
efficient inference and reducing sample complexity. In riffled independence, one
draws two permutations independently, then performs the riffle shuffle, common
in card games, to combine the two permutations to form a single permutation.
In ranking, riffled independence corresponds to ranking disjoint sets of objects
independently, then interleaving those rankings. We provide a formal introduction
and present algorithms for using riffled independence within Fourier-theoretic
frameworks which have been explored by a number of recent papers.
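The generative picture just described, ranking two disjoint sets of items independently and then interleaving the rankings with a riffle shuffle, can be sketched as follows. The item sets and the uniform (unbiased) interleaving are illustrative assumptions; the paper's biased riffle shuffles generalize the uniform choice of slots.

```python
import random

def riffle(ranking_a, ranking_b, rng):
    """Interleave two rankings: choose which positions come from A uniformly
    at random over interleavings, preserving the relative order within each
    set (an unbiased riffle shuffle)."""
    n = len(ranking_a) + len(ranking_b)
    slots_a = set(rng.sample(range(n), len(ranking_a)))
    a_iter, b_iter = iter(ranking_a), iter(ranking_b)
    return [next(a_iter) if i in slots_a else next(b_iter) for i in range(n)]

rng = random.Random(0)
foods = ["apple", "banana", "cherry"]        # hypothetical item sets
drinks = ["tea", "coffee"]
rank_a = rng.sample(foods, len(foods))       # rank set A independently
rank_b = rng.sample(drinks, len(drinks))     # rank set B independently
full = riffle(rank_a, rank_b, rng)

# Within each set, the relative order survives the interleaving.
assert [v for v in full if v in foods] == rank_a
assert [v for v in full if v in drinks] == rank_b
```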
1 Introduction
Distributions over permutations play an important role in applications such as multi-object tracking,
visual feature matching, and ranking. In tracking, for example, permutations represent joint
assignments of individual identities to track positions, and in ranking, permutations represent
the preference orderings of a list of items. Representing distributions over permutations is a
notoriously difficult problem since there are n! permutations, and standard representations, such
as graphical models, are ineffective due to the mutual exclusivity constraints typically associated
with permutations. The quest for exploitable problem structure has led researchers to consider a
number of possibilities including distribution sparsity [17, 9], exponential family parameterizations [15, 5, 14, 16], algebraic/Fourier structure [13, 12, 6, 7], and probabilistic independence [8].
While sparse distributions have been successfully applied in certain tracking domains, we argue that
they are less suitable in ranking problems where it might be necessary to model indifference over a
number of objects. In contrast, Fourier methods handle smooth distributions well but are not easily
scalable without making aggressive independence assumptions [8]. In this paper, we argue that
while probabilistic independence might be useful in tracking, it is a poor assumption in ranking.
We propose a novel generalization of independence, called riffled independence, which we argue to
be far more suitable for modeling distributions over rankings, and develop algorithms for working
with riffled independence in the Fourier domain. Our major contributions are as follows.
• We introduce an intuitive generalization of independence on permutations, which we call riffled independence, and show it to be a more appropriate notion of independence for ranked data, offering possibilities for efficient inference and reduced sample complexity.
• We introduce a novel family of distributions, called biased riffle shuffles, that are useful for riffled independence, and propose an algorithm for computing its Fourier transform.
• We provide algorithms that can be used in the Fourier-theoretic framework of [13, 8, 7] for joining riffle independent factors (RiffleJoin) and for teasing apart the riffle independent factors from a joint (RiffleSplit), and provide theoretical and empirical evidence that our algorithms perform well.
Figure 1: Example first-order matrices with X = {1, 2, 3}, X̄ = {4, 5, 6} independent, where black means h(σ : σ(j) = i) = 0 (panels correspond to different 3-subsets Y, e.g. {1, 2, 3}, {1, 2, 5}, {2, 4, 5}, {4, 5, 6}). In each case, there is some 3-subset Y which X is constrained to map to with probability one. By rearranging rows, one sees that independence imposes block-diagonal structure on the matrices.
2 Distributions on permutations and independence relations
In the context of ranking, a permutation σ = [σ_1, . . . , σ_n] represents a one-to-one mapping from n objects to n ranks, where, by σ_j = i (or σ(j) = i), we mean that the j-th object is assigned rank i under σ. If we are ranking a list of fruits/vegetables enumerated as (1) Artichoke, (2) Broccoli, (3) Cherry, and (4) Dates, then the permutation σ = [σ_A σ_B σ_C σ_D] = [2 3 1 4] ranks Cherry first, Artichoke second, Broccoli third, and Dates last. The set of permutations of {1, . . . , n} forms a group with respect to function composition, called the symmetric group (written S_n). We write τσ to denote the permutation resulting from σ composed with τ (thus [τσ](j) = τ(σ(j))). A distribution h(σ), defined over S_n, can be viewed as a joint distribution over the n variables (σ_1, . . . , σ_n) (where σ_j ∈ {1, . . . , n}), subject to mutual exclusivity constraints ensuring that objects i and j never map to the same rank (h(σ_i = σ_j) = 0 whenever i ≠ j). Since there are n! permutations, it is intractable to represent entire distributions, and one can hope only to maintain compact summaries.
There have been a variety of methods proposed for summarizing distributions over permutations, ranging from older ad-hoc methods such as maintaining k-best hypotheses [17] to the more recent Fourier-based methods which maintain a set of low-order summary statistics [18, 2, 11, 7]. The first-order summary, for example, stores a marginal probability of the form h(σ : σ(j) = i) for every pair (i, j) and thus requires storing a matrix of only O(n^2) numbers. For example, we might store the probability that apples are ranked first. More generally, one might store the s-th-order marginals, which are marginal probabilities of s-tuples. The second-order marginals, for example, take the form h(σ : σ(k, ℓ) = (i, j)), and require O(n^4) storage. Low-order marginals correspond, in a certain sense, to the low-frequency Fourier coefficients of a distribution over permutations. For example, the first-order matrix of h(σ) can be reconstructed exactly from O(n^2) of the lowest frequency Fourier coefficients of h(σ), and the second-order matrix from O(n^4) of the lowest frequency Fourier coefficients. In general, one requires O(n^{2s}) coefficients to exactly reconstruct s-th-order marginals, which quickly becomes intractable for moderately large n. To scale to larger problems,
Huang et al. [8] demonstrated that, by exploiting probabilistic independence, one could dramatically
improve the scalability of Fourier-based methods, e.g., for tracking problems, since confusion in
data association only occurs over small independent subgroups of objects in many problems.
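As a concrete illustration of the first-order summary described above (not part of the paper's code), the marginal matrix h(σ : σ(j) = i) can be computed by brute force for small n; a minimal sketch, storing permutations as 0-indexed Python tuples and probabilities as exact fractions:

```python
import itertools
from fractions import Fraction

def first_order_matrix(h, n):
    """M[i][j] = h(sigma : sigma(j) = i), the probability that object j gets rank i.

    h maps each permutation (a tuple sigma, with sigma[j] the 0-indexed rank of
    object j) to its probability. Brute force, so only feasible for small n.
    """
    M = [[Fraction(0)] * n for _ in range(n)]
    for sigma, prob in h.items():
        for obj, rank in enumerate(sigma):
            M[rank][obj] += prob
    return M

# Uniform distribution over S_3: each object is equally likely to occupy each
# rank, so every entry of the first-order matrix is 1/3.
n = 3
h_unif = {s: Fraction(1, 6) for s in itertools.permutations(range(n))}
M = first_order_matrix(h_unif, n)
```

Each column of M sums to one (object j must land in some rank), mirroring the mutual exclusivity constraints discussed above.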
Probabilistic independence on permutations. Probabilistic independence assumptions on the
symmetric group can simply be stated as follows. Consider a distribution h defined over S_n. Let X be a p-subset of {1, . . . , n}, say {1, . . . , p}, and let X̄ be its complement ({p + 1, . . . , n}), with size q = n − p. We say that σ_X = (σ_1, σ_2, . . . , σ_p) and σ_X̄ = (σ_{p+1}, . . . , σ_n) are independent if
h(σ) = f(σ_1, σ_2, . . . , σ_p) · g(σ_{p+1}, . . . , σ_n).
Storing the parameters for the above distribution requires keeping O(p! + q!) probabilities instead
of the much larger O(n!) size required for general distributions. Of course, O(p! + q!) can still be
quite large. Typically, one decomposes the distribution recursively and stores factors exactly for
small enough factors, or compresses factors using Fourier coefficients (but using higher frequency
terms than what would be possible without the independence assumption). In order to exploit
probabilistic independence in the Fourier domain, Huang et al. [8] proposed algorithms for joining
factors and splitting distributions into independent components in the Fourier domain.
Restrictive first-order conditions. Despite its utility for many tracking problems, however, we
argue that the independence assumption on permutations implies a rather restrictive constraint on
distributions, rendering independence highly unrealistic in ranking applications. In particular, using
the mutual exclusivity property, it can be shown [8] that, if σ_X and σ_X̄ are independent, then for some fixed p-subset Y ⊂ {1, . . . , n}, σ_X is a permutation of elements in Y and σ_X̄ is a permutation of its complement, Ȳ, with probability 1. Continuing with our vegetable/fruit example with n = 4,
if the vegetable and fruit rankings, σ_veg = [σ_A σ_B] and σ_fruit = [σ_C σ_D], are known to be independent, then for Y = {1, 2}, the vegetables are ranked first and second with probability one, and
condition because of the block structure imposed upon first-order marginals (see Fig. 1). In sports
tracking, the first-order condition might say, quite reasonably, that there is potential identity confusion within tracks for the red team and within tracks for the blue team but no confusion between the
two teams. In our ranking example however, the first-order condition forces the probability of any
vegetable being in third place to be zero, even though both vegetables will, in general, have nonzero
marginal probability of being in second place, which seems quite unrealistic. In the next section, we
overcome the restrictive first-order condition with the more flexible notion of riffled independence.
3 Going beyond full independence: Riffled independence
The riffle (or dovetail) shuffle [1] is perhaps the most popular method of card shuffling, in which one cuts a deck of n cards into two piles, X = {1, . . . , p} and X̄ = {p + 1, . . . , n}, of sizes p and q = n − p, respectively, and successively drops the cards, one by one, so that the piles are interleaved (see Fig. 2) into one deck again. Inspired by riffle shuffles, we present a novel relaxation of full independence, which we call riffled independence.
Rankings that are riffle independent are formed by independently selecting rankings for two disjoint subsets of objects, then interleaving the rankings using a riffle shuffle to form a ranking over all objects. For example, we might first "cut the deck" into two piles, vegetables (X) and fruits (X̄), independently decide that Broccoli is preferred over Artichoke (σ_B < σ_A) and that Dates is preferred over Cherry (σ_D < σ_C), then interleave the fruit and vegetable rankings to form σ_B < σ_D < σ_A < σ_C (i.e., σ = [3 1 4 2]). Intuitively, riffled independence models complex relationships within each of the sets X and X̄ while allowing correlations between sets to be modeled only by a constrained form of shuffling.
Figure 2: Riffle shuffling a standard deck of cards.
Riffle shuffling distributions. Mathematically, shuffles are modeled as random walks on S_n. The ranking σ′ after a shuffle is generated from the ranking prior to that shuffle, σ, by drawing a permutation τ from a shuffling distribution m(τ) and setting σ′ = τσ. Given the distribution over σ, we can find the distribution h′(σ′) after the shuffle via convolution:
h′(σ′) = [m ∗ h](σ′) = Σ_{τ,σ : σ′=τσ} m(τ)h(σ).
Note that we use the ∗ symbol to denote the convolution operation.
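For intuition, the convolution above can be computed directly for small n; a brute-force sketch (our own helper names, 0-indexed tuples, not from the paper):

```python
import itertools
from fractions import Fraction

def compose(tau, sigma):
    """Return tau composed with sigma, i.e. the permutation j -> tau(sigma(j))."""
    return tuple(tau[sigma[j]] for j in range(len(sigma)))

def convolve(m, h):
    """h'(sigma') = sum over (tau, sigma) with sigma' = tau*sigma of m(tau)h(sigma)."""
    h_new = {}
    for tau, p_tau in m.items():
        for sigma, p_sigma in h.items():
            key = compose(tau, sigma)
            h_new[key] = h_new.get(key, 0) + p_tau * p_sigma
    return h_new

# Convolving with the delta distribution at the identity leaves h unchanged.
h = {s: Fraction(1, 6) for s in itertools.permutations(range(3))}
identity = (0, 1, 2)
h_prime = convolve({identity: Fraction(1)}, h)  # equals h
```

This double loop costs O(|support(m)| · |support(h)|), which is why the paper works with Fourier coefficients rather than explicit convolutions for larger n.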
The question is, what are the shuffling distributions m that correspond to riffle shuffles? To answer this question, we use the distinguishing property of the riffle shuffle: after cutting the deck into two piles of size p and q = n − p, the relative ranking relations within each pile are preserved. Thus, if the i-th card lies above the j-th in one of the piles, then after shuffling, the i-th card remains above the j-th. In our example, relative rank preservation says that if Artichoke is preferred over Broccoli prior to shuffling, it is preferred over Broccoli after shuffling. Any allowable riffle shuffling distribution must therefore assign zero probability to permutations which do not preserve relative ranking relations. The set of permutations which do preserve these relations has a simple description.
Definition 1 (Riffle shuffling distribution). Define the set of (p, q)-interleavings as:
Ω_{p,q} ≡ {τ_Y = [Y(1) Y(2) . . . Y(p) Ȳ(1) Ȳ(2) . . . Ȳ(q)] : Y ⊂ {1, . . . , n}, |Y| = p} ⊂ S_n, n = p + q,
where Y(1) represents the smallest element of Y, Y(2) the second smallest, etc. A distribution m_{p,q} on S_n is called a riffle shuffling distribution if it assigns nonzero probability only to elements in Ω_{p,q}.
The (p, q)-interleavings can be shown to preserve the relative ranking relations within each of the subsets X = {1, . . . , p} and X̄ = {p + 1, . . . , n} upon multiplication. In our vegetable/fruit example, we have n = 4, p = 2, and so the collection of subsets of size p is {{1, 2}, {1, 3}, {1, 4}, {2, 3}, {2, 4}, {3, 4}}, and the set of (2, 2)-interleavings is given by: Ω_{2,2} = {[1 2 3 4], [1 3 2 4], [1 4 2 3], [2 3 1 4], [2 4 1 3], [3 4 1 2]}. Note that |Ω_{p,q}| = (n choose p) = (n choose q) = 4!/(2!2!) = 6. One possible riffle shuffling distribution on S_4 might, for example, assign uniform probability (m^unif_{2,2}(τ) = 1/6) to each permutation in Ω_{2,2} and zero probability to everything else, reflecting indifference between vegetables and fruits. We now formally define our generalization of independence, where a distribution which fully factors independently is allowed to undergo a single riffle shuffle.
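The set Ω_{p,q} is small enough to enumerate directly for toy examples. A sketch (0-indexed, helper names ours): each interleaving is a sorted p-subset Y followed by its sorted complement, and converting back to the paper's 1-indexed notation recovers the Ω_{2,2} listed above.

```python
import itertools
from math import comb

def interleavings(p, q):
    """Enumerate Omega_{p,q} (0-indexed): for each p-subset Y of {0, ..., n-1},
    tau_Y = (Y(1), ..., Y(p), Ybar(1), ..., Ybar(q)), each half in increasing order."""
    n = p + q
    for Y in itertools.combinations(range(n), p):  # combinations are already sorted
        Ybar = tuple(i for i in range(n) if i not in Y)
        yield Y + Ybar

Omega_22 = list(interleavings(2, 2))
# |Omega_{p,q}| = (n choose p); here 4!/(2!2!) = 6 interleavings.
```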
Definition 2 (Riffled independence). The subsets X = {1, . . . , p} and X̄ = {p + 1, . . . , n} are said to be riffle independent if h = m_{p,q} ∗ (f(σ_p) · g(σ_q)), with respect to some riffle shuffling distribution m_{p,q} and distributions f, g, respectively. We denote riffled independence by h = f ⊗_{m_{p,q}} g, and refer to f, g as riffled factors.
To draw from h, one independently draws a permutation σ_p of cards {1, . . . , p}, a permutation σ_q of cards {p + 1, . . . , n}, and a (p, q)-interleaving τ_Y, then shuffles to obtain σ = τ_Y [σ_p σ_q]. In our example, the rankings σ_p = [2 1] (Broccoli preferred to Artichoke) and σ_q = [4 3] (Dates preferred to Cherry) are selected, then shuffled (multiplied by τ_{1,3} = [1 3 2 4]) to obtain σ = [3 1 4 2]. We remark that setting m_{p,q} to be the delta distribution on any of the (p, q)-interleavings in Ω_{p,q} recovers the definition of ordinary probabilistic independence, and thus riffled independence is a strict generalization thereof. Just as in the full independence regime, where the distributions f and g are marginal distributions of rankings of X and X̄, in the riffled independence regime they can be thought of as marginal distributions of the relative rankings of X and X̄.
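The generative process of Definition 2 can be sketched directly (our own code, 0-indexed; with delta factors it reproduces the running example, i.e., σ = [3 1 4 2] in the paper's 1-indexed notation):

```python
import random

def compose(tau, sigma):
    """tau composed with sigma: j -> tau(sigma(j))."""
    return tuple(tau[sigma[j]] for j in range(len(sigma)))

def draw_riffle_independent(f, g, m, p):
    """Draw sigma = tau_Y [sigma_p sigma_q]: sample sigma_p ~ f (over S_p),
    sigma_q ~ g (over S_q), and an interleaving tau_Y ~ m, then shuffle."""
    def sample(dist):
        perms, probs = zip(*dist.items())
        return random.choices(perms, weights=probs)[0]
    sigma_p, sigma_q, tau_Y = sample(f), sample(g), sample(m)
    # [sigma_p sigma_q]: the first p objects occupy ranks 0..p-1,
    # the remaining q objects occupy ranks p..n-1.
    concat = sigma_p + tuple(p + r for r in sigma_q)
    return compose(tau_Y, concat)

# Delta factors matching the running example (1-indexed: sigma_p=[2 1],
# sigma_q=[4 3], tau_Y=[1 3 2 4]):
f = {(1, 0): 1.0}          # Broccoli preferred to Artichoke
g = {(1, 0): 1.0}          # Dates preferred to Cherry
m = {(0, 2, 1, 3): 1.0}    # the interleaving [1 3 2 4]
sigma = draw_riffle_independent(f, g, m, p=2)  # -> (2, 0, 3, 1), i.e. [3 1 4 2]
```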
Biased riffle shuffles. There is, in the general case, a significant increase in storage required for riffled independence over full independence. In addition to the O(p! + q!) storage required for distributions f and g, we now require O((n choose p)) storage for the nonzero terms of the riffle shuffling distribution m_{p,q}. Instead of representing all possible riffle shuffling distributions, however, we now introduce a family of useful riffle shuffling distributions which can be described using only a handful of parameters. The simplest riffle shuffling distribution is the uniform riffle shuffle, m^unif_{p,q}, which assigns uniform probability to all (p, q)-interleavings and zero probability to all other elements in S_n. Used in the context of riffled independence, m^unif_{p,q} models potentially complex relations within X and X̄, but only captures the simplest possible correlations across subsets. We might, for example, have complex preference relations amongst vegetables and amongst fruits, but be completely indifferent with respect to the subsets, vegetables and fruits, as a whole.
There is a simple recursive method for uniformly drawing (p, q)-interleavings. Starting with a deck of n cards cut into a left pile ({1, . . . , p}) and a right pile ({p + 1, . . . , n}), pick one of the piles with probability proportional to its size (p/n for the left pile, q/n for the right) and drop the bottommost card, thus mapping either card p or card n to rank n. Then recurse on the n − 1 remaining undropped cards, drawing a (p, q − 1)-interleaving if the right pile was picked, or a (p − 1, q)-interleaving if the left pile was picked. See Alg. 1.
DrawRiffleUnif(p, q, n)    // (p + q = n)
  with prob q/n:    // drop from right pile
    τ′ ← DrawRiffleUnif(p, q − 1, n − 1)
    foreach i do τ(i) ← τ′(i) if i < n; n if i = n
  otherwise:    // drop from left pile
    τ′ ← DrawRiffleUnif(p − 1, q, n − 1)
    foreach i do τ(i) ← τ′(i) if i < p; n if i = p; τ′(i − 1) if i > p
  return τ
Algorithm 1: Recurrence for drawing τ ∼ m^unif_{p,q} (base case: return τ = [1] if n = 1).
It is natural to consider generalizations where one is preferentially biased towards dropping cards from the left hand over the right hand (or vice-versa). We model this bias using a simple
one-parameter family of distributions in which cards from the left and right piles drop with
probability proportional to αp and (1 − α)q, respectively, instead of p and q. We will refer to α as the bias parameter, and the family of distributions parameterized by α as the biased riffle shuffles.¹ In the context of rankings, biased riffle shuffles provide a simple model for expressing groupwise preferences (or indifference) for an entire subset X over X̄ or vice-versa. The bias parameter α can be thought of as a knob controlling the preference for one subset over the other, and might reflect, for example, a preference for fruits over vegetables, or perhaps indifference between the two subsets. Setting α = 0 or 1 recovers the full independence assumption, preferring objects in X (vegetables) over objects in X̄ (fruits) with probability one (or vice-versa), and setting α = .5 recovers the uniform riffle shuffle (see Fig. 3). Finally, there are a number of straightforward generalizations of the biased riffle shuffle that one can use to realize richer distributions. For example, α might depend on the number of cards that have been dropped from each pile (allowing, perhaps, for distributions to prefer crunchy fruits over crunchy vegetables, but soft vegetables over soft fruits).
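A biased interleaving can be sampled with the same recurrence as Alg. 1, simply reweighting the drop probabilities by α. A sketch (our own function name, 0-indexed tuples; α = 0.5 recovers the uniform riffle shuffle, while α = 0 or 1 degenerates to a single fixed interleaving):

```python
import random

def draw_biased_interleaving(p, q, alpha=0.5):
    """Draw a (p, q)-interleaving tau, where at each step cards drop from the
    left/right piles with probability proportional to alpha*p and (1-alpha)*q."""
    n = p + q
    if n == 1:
        return (0,)
    if p == 0:
        drop_right = True
    elif q == 0:
        drop_right = False
    else:
        w_left, w_right = alpha * p, (1 - alpha) * q
        drop_right = random.random() * (w_left + w_right) < w_right
    if drop_right:
        # bottommost card of the right pile (object n-1) takes the last rank
        return draw_biased_interleaving(p, q - 1, alpha) + (n - 1,)
    # bottommost card of the left pile (object p-1) takes the last rank
    tau = draw_biased_interleaving(p - 1, q, alpha)
    return tau[:p - 1] + (n - 1,) + tau[p - 1:]
```

For example, with p = 2, q = 1, setting α = 0 always yields the identity interleaving (the left pile keeps the top ranks), and α = 1 always pushes the left pile to the bottom ranks.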
¹The recurrence in Alg. 1 has appeared in various forms in the literature [1]. We are the first to (1) use the recurrence to Fourier transform m_{p,q}, and (2) consider biased versions. The biased riffle shuffles in [4] are not similar to our biased riffle shuffles. See Appendix for details.
Figure 3: First-order matrices with a deck of 20 cards, X = {1, . . . , 10}, X̄ = {11, . . . , 20}, riffle independent, under various settings of α (α = 0, 0.15, 0.5, 0.85, 1). Note that nonzero blocks "bleed" into zero regions (compare to Fig. 1). Setting α = 0 or 1 recovers full independence, where a subset of objects is preferred over the other with probability one.
4 Between independence and conditional independence
We have presented riffle independent distributions as fully independent distributions which have
been convolved by a certain class of shuffling distributions. In this section, we provide an alternative
view of riffled independence based on conditional independence, showing that the notion of riffled
independence lies somewhere between full and conditional independence.
In Section 3, we formed a ranking by first independently drawing permutations σ_p and σ_q of object sets {1, . . . , p} (vegetables) and {p + 1, . . . , n} (fruits), respectively, drawing a (p, q)-interleaving (i.e., a relative ranking permutation τ_Y ∈ Ω_{p,q}), and shuffling to form σ = τ_Y [σ_p σ_q]. Thus, an object i ∈ {1, . . . , p} is ranked in position τ_Y(σ_p(i)) after shuffling (and an object j ∈ {p + 1, . . . , n} is ranked in position τ_Y(σ_q(j))). An equivalent way to form the same σ, however, is to first draw an interleaving τ_Y ∈ Ω_{p,q}, then, conditioned on the choice of Y, draw independent permutations of the sets Y and Ȳ. In our example, we might first draw the (2, 2)-interleaving [1 3 2 4] (so that after shuffling, we would obtain σ_Veg < σ_Fruit < σ_Veg < σ_Fruit). Then we would draw a permutation of the vegetable ranks (Y = {1, 3}), say [3 1], and a permutation of the fruit ranks (Ȳ = {2, 4}), say [4 2], to obtain a final ranking over all items: σ = [3 1 4 2], or σ_B < σ_D < σ_A < σ_C.
It is tempting to think that riffled independence is exactly the conditional independence assumption, in which case the distribution would factor as h(σ) = h(Y) · h(σ_X | Y) · h(σ_X̄ | Y). The general case of conditional independence, however, has O((n choose p)(p! + q! + 1)) parameters, while riffled independence requires only O((n choose p) + p! + q!) parameters.
We now provide a simple correspondence between the conditional independence view of riffled independence presented in this section and the shuffle theoretic definition from Section 3 (Def. 2). Define the map φ which, given a permutation of Y (or Ȳ), returns the permutation σ_p ∈ S_p (or S_q) such that [σ_p]_i is the rank of [σ_X]_i relative to the set Y. For example, if the permutation of the vegetable ranks is σ_X = [3 1] (with Artichoke ranked third, Broccoli first), then φ(σ_X) = [2 1] since, relative to the set of vegetables, Artichoke is ranked second, and Broccoli first.
Proposition 3. Consider a riffle independent h = f ⊗_{m_{p,q}} g. For each σ ∈ S_n, h factors as h(σ) = h(Y) · h(σ_X | Y) · h(σ_X̄ | Y), with h(Y) = m(τ_Y), h(σ_X | Y) = f(φ(σ_X)), and h(σ_X̄ | Y) = g(φ(σ_X̄)).
Proposition 3 is useful because it shows that the probability of a single ranking can be computed without summing over the entire symmetric group (a convolution), a fact that might not be obvious from Definition 2. The factorization h(σ) = m(τ_Y) f(φ(σ_X)) g(φ(σ_X̄)) also suggests that riffled independence behaves essentially like full independence (without the first-order condition), where, in addition to the independent variables σ_X and σ_X̄, we also independently randomize over the subset Y. An immediate consequence is that, just as in the full independence regime, conditioning operations on certain observations and MAP (maximum a posteriori) assignment problems decompose according to riffled independence structure.
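The factorization of Proposition 3 gives a direct, convolution-free way to evaluate h(σ); a minimal sketch (0-indexed, helper names ours, continuing the delta-factor example from Section 3):

```python
def phi(ranks):
    """Relative ranking map: re-express a tuple of ranks as a permutation of S_k."""
    order = sorted(ranks)
    return tuple(order.index(r) for r in ranks)

def riffle_prob(sigma, f, g, m, p):
    """Evaluate h(sigma) = m(tau_Y) * f(phi(sigma_X)) * g(phi(sigma_Xbar))."""
    ranks_X, ranks_Xbar = sigma[:p], sigma[p:]
    # Y is the set of ranks occupied by X; tau_Y lists sorted(Y) then sorted(Ybar)
    tau_Y = tuple(sorted(ranks_X)) + tuple(sorted(ranks_Xbar))
    return (m.get(tau_Y, 0.0)
            * f.get(phi(ranks_X), 0.0)
            * g.get(phi(ranks_Xbar), 0.0))

# With the delta factors of the running example, sigma = [3 1 4 2]
# (0-indexed (2, 0, 3, 1)) has probability one, all other rankings zero.
f = {(1, 0): 1.0}
g = {(1, 0): 1.0}
m = {(0, 2, 1, 3): 1.0}
prob = riffle_prob((2, 0, 3, 1), f, g, m, p=2)
```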
Proposition 4 (Probabilistic inference decompositions). Consider riffle independent prior and likelihood functions, h_prior and h_like, on S_n which factor as h_prior = f_prior ⊗_{m_prior} g_prior and h_like = f_like ⊗_{m_like} g_like, respectively. The posterior distribution under Bayes rule can be written as the riffle independent distribution h_post ∝ (f_prior ⊙ f_like) ⊗_{m_prior ⊙ m_like} (g_prior ⊙ g_like), where the ⊙ symbol denotes the pointwise product operation.
A similar result allows us to also perform MAP assignments by maximizing each of the distributions m_{p,q}, f, and g independently and combining the results. As a corollary, it follows that conditioning
on simple pairwise ranking likelihood functions (that depend only on whether object i is preferred
to object j) decomposes along riffled independence structures.
RiffleJoin(f̂, ĝ)
  ĥ′ ← Join(f̂, ĝ)
  foreach frequency level i do
    ĥ_i ← [m̂_{p,q}]_i · ĥ′_i
  return ĥ
Algorithm 2: Pseudocode for RiffleJoin

RiffleSplit(ĥ)
  foreach frequency level i do
    ĥ′_i ← [m̂^unif_{p,q}]_i^T · ĥ_i
  [f̂, ĝ] ← Split(ĥ′)
  normalize f̂ and ĝ
  return f̂, ĝ
Algorithm 3: Pseudocode for RiffleSplit

5 Fourier domain algorithms: RiffleJoin and RiffleSplit
In this section, we present two algorithms for working with riffled independence in the Fourier theoretic framework of [13, 8, 7]: one algorithm for merging riffled factors to form a joint distribution (RiffleJoin), and one for extracting riffled factors from a joint (RiffleSplit). We begin with a brief introduction to Fourier theoretic inference on permutations (see [11, 7] for a detailed exposition).
Unlike its analog on the real line, the Fourier transform of a function on S_n takes the form of a collection of Fourier coefficient matrices ordered with respect to frequency. Discussing the analog of frequency for functions on S_n is beyond the scope of our paper, and, given a distribution h, we simply index the Fourier coefficient matrices of h as ĥ_0, ĥ_1, . . . , ĥ_K, ordered with respect to some measure of increasing complexity. We use ĥ to denote the complete collection of Fourier coefficient matrices. One rough way to understand this complexity, as mentioned in Section 2, is by the fact that the low-frequency Fourier coefficient matrices of a distribution can be used to reconstruct low-order marginals. For example, the first-order matrix of marginals of h can always be reconstructed from the matrices ĥ_0 and ĥ_1. As on the real line, many of the familiar properties of the Fourier transform continue to hold. The following are several basic properties used in this paper:
Proposition 5 (Properties of the Fourier transform, see [2]). Consider any f, g : S_n → R.
• (Linearity) For any α, β ∈ R, [αf + βg]̂_i = αf̂_i + βĝ_i holds at all frequency levels i.
• (Convolution) The Fourier transform of a convolution is a product of Fourier transforms: [f ∗ g]̂_i = f̂_i · ĝ_i for each frequency level i, where the operation · is matrix multiplication.
• (Normalization) The first coefficient matrix, f̂_0, is a scalar and equals Σ_{σ∈S_n} f(σ).
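The convolution property can be checked numerically on a small group using the permutation-matrix representation, which yields one block of the Fourier transform. This is a sketch with our own helper names, using the paper's convention σ′ = τσ (this block is an illustration, not the paper's implementation):

```python
import itertools
import numpy as np

def perm_matrix(sigma):
    """M(sigma)[i, j] = 1 iff sigma(j) = i, so M(tau sigma) = M(tau) @ M(sigma)."""
    n = len(sigma)
    M = np.zeros((n, n))
    for j, i in enumerate(sigma):
        M[i, j] = 1.0
    return M

def fourier_block(f):
    """sum_sigma f(sigma) M(sigma): one matrix of Fourier coefficients of f."""
    return sum(p * perm_matrix(s) for s, p in f.items())

def convolve(m, h):
    """[m * h](sigma') = sum over sigma' = tau sigma of m(tau) h(sigma)."""
    out = {}
    for tau, pt in m.items():
        for sigma, ps in h.items():
            key = tuple(tau[sigma[j]] for j in range(len(sigma)))
            out[key] = out.get(key, 0.0) + pt * ps
    return out

# Random distributions on S_3: the transform of the convolution equals
# the matrix product of the transforms.
rng = np.random.default_rng(0)
perms = list(itertools.permutations(range(3)))
w1, w2 = rng.random(6), rng.random(6)
f = dict(zip(perms, w1 / w1.sum()))
g = dict(zip(perms, w2 / w2.sum()))
lhs = fourier_block(convolve(f, g))
rhs = fourier_block(f) @ fourier_block(g)
```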
A number of papers in recent years ([13, 6, 8, 7]) have considered approximating distributions
over permutations using a truncated (bandlimited) set of Fourier coefficients and have proposed
inference algorithms that operate on these Fourier coefficient matrices. For example, one can
perform generic marginalization, Markov chain prediction, and conditioning operations using only
Fourier coefficients without ever having to perform an inverse Fourier transform. Huang et al. [8]
introduced Fourier domain algorithms, Join and Split, for combining independent factors to form
joints and for extracting the factors from a joint distribution, respectively.
In this section, we provide generalizations of the algorithms in [8] that we call RiffleJoin and RiffleSplit. We will assume that X = {1, . . . , p}, X̄ = {p + 1, . . . , n}, and that we are given a riffle independent distribution h : S_n → R (h = f ⊗_{m_{p,q}} g). We also, for the purposes of this section, assume that the parameters for the distribution m_{p,q} are known, though this will not matter for the RiffleSplit algorithm. Although we begin each of the following discussions as if all of the Fourier coefficients are provided, we will be especially interested in algorithms that work well in cases where only a truncated set of Fourier coefficients is present, and where h is only approximately riffle independent.
RiffleJoin. Given the Fourier coefficients of f, g, and m, we can compute the Fourier coefficients of h using Definition 2 by applying the Join algorithm from [8] and the Convolution Theorem (Prop. 5), which tells us that the Fourier transform of a convolution can be written as a pointwise product of Fourier transforms. To compute ĥ, our RiffleJoin algorithm simply calls the Join algorithm on f̂ and ĝ, and convolves the result by m̂ (see Alg. 2). In general, it may be intractable to Fourier transform the riffle shuffling distribution m_{p,q}. However, for the class of biased riffle shuffles from Section 3, one can efficiently compute the low-frequency terms of m̂_{p,q} by employing the recurrence relation in Alg. 1. In particular, Alg. 1 expresses a biased riffle shuffle on S_n as a linear combination of biased riffle shuffles on S_{n−1}. By invoking linearity of the Fourier transform (Prop. 5), one can efficiently compute m̂_{p,q} via a dynamic programming approach. To the best of our knowledge, we are the first to compute the Fourier transform of riffle shuffling distributions.
RiffleSplit. Given the Fourier coefficients of the riffle independent distribution h, we would like to tease apart the riffle factors f and g. From the RiffleJoin algorithm, we saw that for each frequency level i, ĥ_i = [m̂_{p,q}]_i · (f ⊗ g)̂_i. The first solution to the splitting problem that might occur is to perform a deconvolution by multiplying each ĥ_i term by the inverse of the matrix [m̂_{p,q}]_i (to form [m̂_{p,q}]_i^{-1} · ĥ_i) and call the Split algorithm from [8] on the result. Unfortunately, the matrix [m̂_{p,q}]_i is, in general, non-invertible. Instead, our RiffleSplit algorithm left-multiplies each ĥ_i term by [m̂^unif_{p,q}]_i^T, which can be shown to be equivalent to convolving the distribution h by the "dual shuffle" m*, defined as m*(τ) = m^unif_{p,q}(τ^{-1}). While convolving by m* does not produce a distribution that factors independently, the Split algorithm from [8] can still be shown to recover the Fourier transforms f̂ and ĝ:
Theorem 6. If h = f ⊗_{m_{p,q}} g, then RiffleSplit (Alg. 3), with ĥ as input, returns f̂ and ĝ exactly.
As with RiffleJoin, it is necessary to Fourier transform m^unif_{p,q}, which we can again accomplish via the recurrence in Alg. 1. One must also normalize the output of Split to sum to one via Prop. 5.
Theoretical guarantees. We now briefly summarize several results which show (1) how our algorithms perform when called with a truncated set of Fourier coefficients, and (2) what happens when RiffleSplit is called on a distribution which is only approximately riffle independent.
Theorem 7. Given enough Fourier terms to reconstruct the k-th-order marginals of f and g, RiffleJoin returns enough Fourier terms to exactly reconstruct the k-th-order marginals of h. Likewise, given enough Fourier terms to reconstruct the k-th-order marginals of h, RiffleSplit returns enough Fourier terms to exactly reconstruct the k-th-order marginals of both f and g.
Theorem 8. Let h be any distribution on S_n and m_{p,q} any riffle shuffling distribution on S_n. If [f̂′, ĝ′] = RiffleSplit(ĥ), then (f′, g′) is the minimizer of the problem:
minimize_{f,g} D_KL(h ‖ f ⊗_{m_{p,q}} g), subject to: Σ_{σ_p} f(σ_p) = 1 and Σ_{σ_q} g(σ_q) = 1,
where D_KL is the Kullback-Leibler divergence.
6 Experiments
In this section, we validate our algorithms and show that riffled independence exists in real data.
APA dataset. The APA dataset [3] is a collection of 5738 ballots from a 1980 presidential election of the American Psychological Association, where members ordered five candidates from favorite to least favorite. We first perform an exhaustive search for subsets X and X̄ that are closest to riffle independent (with respect to D_KL), and find that candidate 2 is nearly riffle independent of the remaining candidates. In Fig. 4(a) we plot the true vote distribution and the best approximation by a distribution in which candidate 2 is riffle independent of the rest. For comparison, we plot the result of splitting off candidate 3 instead of candidate 2, which one can see to be an inferior approximation.
The APA, as described by Diaconis [3], is divided into "academicians and clinicians who are on uneasy terms". In 1980, candidates {1, 3} and {4, 5} fell on opposite ends of this political spectrum, with candidate 2 being somewhat independent. Diaconis conjectured that voters choose one group over the other, and then choose within. We are now able to verify his conjecture in a riffled independence sense. After removing candidate 2 from the distribution, we perform a search within candidates {1, 3, 4, 5} to again find nearly riffle independent subsets. We find that X = {1, 3} and X̄ = {4, 5} are very nearly riffle independent, and thus are able to verify that the candidate sets {2}, {1, 3}, and {4, 5} are indeed grouped in a riffle independent sense in the APA data. Finally, since there are two opposing groups within the APA, the riffle shuffling distribution for sets {1, 3} and {4, 5} is not well approximated by a biased riffle shuffle. Instead, we fit a mixture of two biased riffle shuffles to the data and found the bias parameters of the mixture components to be α_1 ≈ .67 and α_2 ≈ .17, indicating that the two components oppose each other (since α_1 and α_2 lie on either side of .5).
Sushi dataset. The sushi dataset [10] consists of 5000 full rankings of ten types of sushi. Compared to the APA data, it has more objects but fewer examples. We divided the data into training and test sets and estimated the true distribution in three ways: (1) directly from samples, (2) using a riffle independent distribution (split evenly into two groups of five) with the optimal shuffling distribution m, and (3) with a biased riffle shuffle (and optimal bias α). Fig. 4(b) plots test-set log-likelihood as a function of training set size; we see that riffle independence assumptions can help significantly to lower the sample complexity of learning. Biased riffle shuffles, as can be seen,
Figure 4: Experiments. (a) Purple line: approximation to the vote distribution when candidate 2 is riffle independent; blue line: approximation when candidate 3 is riffle independent. (b) Average log-likelihood of held-out test examples from the Sushi dataset. (c) First-order probabilities of Uni (sea urchin) rankings (Sushi dataset). (d) Estimating a riffle independent distribution using various sample sizes. (e) Running time plot of RiffleJoin.
are a useful learning bias with very small samples. As an illustration, see Fig. 4(c) which shows the
first-order marginals of Uni (Sea Urchin) rankings, and the biased riffle approximation.
Approximation accuracy. To understand the behavior of RiffleSplit in approximately riffle independent
situations, we draw sample sets of varying sizes from a riffle independent distribution on
S8 (with bias parameter α = .25) and use RiffleSplit to estimate the riffle factors from the empirical
distribution. In Fig. 4(d), we plot the KL-divergence between the true distribution and that obtained
by applying RiffleJoin to the estimated riffle factors. With small sample sizes (far less than 8!), we
are able to recover accurate approximations despite the fact that the empirical distributions are not
exactly riffle independent. For comparison, we ran the experiment using the Split algorithm [8]
to recover the riffle factors. Somewhat surprisingly, one can show (see Appendix) that Split also
recovers the riffle factors, albeit without the optimality guarantee that we have shown for RiffleSplit
(Theorem 8), and it therefore requires far more samples to reliably approximate h.
Running times. In general, the complexity of Split is cubic (O(d³)) in the dimension of each
Fourier coefficient matrix [8]. The complexity of RiffleJoin/RiffleSplit is O(n²d³) in the worst case,
when p ∈ O(n). If we precompute the Fourier coefficients of mp,q (which requires O(n²d³)) for
each coefficient matrix, then the complexity of RiffleSplit is also O(d³). In Fig. 4(e), we plot running
times of RiffleJoin (no precomputation) as a function of n (setting p = ⌈n/2⌉), scaling up to n = 40.
7 Future Directions and Conclusions
There are many open questions. For example, several papers note that graphical models cannot
compactly represent distributions over permutations due to mutual exclusivity. An interesting
question that our paper opens is whether it is possible to use something similar to graphical
models by substituting conditional generalizations of riffled independence for ordinary conditional
independence. Other possibilities include going beyond the algebraic approach, studying riffled
independence in non-Fourier frameworks, and developing statistical (riffled) independence tests.
In summary, we have introduced riffled independence and discussed how to exploit such structure
in a Fourier-theoretic framework. Riffled independence is a new tool for analyzing ranked data and
has the potential to offer novel insights into datasets both new and old. We believe that it will lead
to the development of fast inference and low sample complexity learning algorithms.
Acknowledgements
This work is supported in part by the ONR under MURI N000140710747, and the Young Investigator Program grant N00014-08-1-0752. We thank K. El-Arini for feedback on an initial draft.
References
[1] D. Bayer and P. Diaconis. Trailing the dovetail shuffle to its lair. The Annals of Probability, 1992.
[2] P. Diaconis. Group Representations in Probability and Statistics. IMS Lecture Notes, 1988.
[3] Persi Diaconis. A generalization of spectral analysis with application to ranked data. The Annals of
Statistics, 17(3):949–979, 1989.
[4] J. Fulman. The combinatorics of biased riffle shuffles. Combinatorica, 18(2):173–184, 1998.
[5] D. P. Helmbold and M. K. Warmuth. Learning permutations with exponential weights. In COLT, 2007.
[6] J. Huang, C. Guestrin, and L. Guibas. Efficient inference for distributions on permutations. In NIPS,
2007.
[7] J. Huang, C. Guestrin, and L. Guibas. Fourier theoretic probabilistic inference over permutations. JMLR,
10, 2009.
[8] J. Huang, C. Guestrin, X. Jiang, and L. Guibas. Exploiting probabilistic independence for permutations.
In AISTATS, 2009.
[9] S. Jagabathula and D. Shah. Inferring rankings under constrained sensing. In NIPS, 2008.
[10] Toshihiro Kamishima. Nantonac collaborative filtering: recommendation based on order responses. In
KDD, pages 583–588, 2003.
[11] R. Kondor. Group Theoretical Methods in Machine Learning. PhD thesis, Columbia University, 2008.
[12] R. Kondor and K. M. Borgwardt. The skew spectrum of graphs. In ICML, pages 496–503, 2008.
[13] R. Kondor, A. Howard, and T. Jebara. Multi-object tracking with representations of the symmetric group.
In AISTATS, 2007.
[14] G. Lebanon and Y. Mao. Non-parametric modeling of partially ranked data. In NIPS, 2008.
[15] M. Meila, K. Phadnis, A. Patterson, and J. Bilmes. Consensus ranking under the exponential model.
Technical Report 515, University of Washington, Statistics Department, April 2007.
[16] J. Petterson, T. Caetano, J. McAuley, and J. Yu. Exponential family graph matching and ranking. CoRR,
abs/0904.2623, 2009.
[17] D.B. Reid. An algorithm for tracking multiple targets. IEEE Trans. on Automatic Control, 6:843–854,
1979.
[18] J. Shin, N. Lee, S. Thrun, and L. Guibas. Lazy inference on object identities in wireless sensor networks.
In IPSN, 2005.
Charles Kemp & Alan Jern
Department of Psychology
Carnegie Mellon University
{ckemp,ajern}@cmu.edu
Fei Xu
Department of Psychology
University of California, Berkeley
fei [email protected]
Abstract
Humans are typically able to infer how many objects their environment contains
and to recognize when the same object is encountered twice. We present a simple statistical model that helps to explain these abilities and evaluate it in three
behavioral experiments. Our ?rst experiment suggests that humans rely on prior
knowledge when deciding whether an object token has been previously encountered. Our second and third experiments suggest that humans can infer how many
objects they have seen and can learn about categories and their properties even
when they are uncertain about which tokens are instances of the same object.
From an early age, humans and other animals [1] appear to organize the ?ux of experience into a
series of encounters with discrete and persisting objects. Consider, for example, a young child who
grows up in a home with two dogs. At a relatively early age the child will solve the problem of object
discovery and will realize that her encounters with dogs correspond to views of two individuals rather
than one or three. The child will also solve the problem of identi?cation, and will be able to reliably
identify an individual (e.g. Fido) each time it is encountered.
This paper presents a Bayesian approach that helps to explain both object discovery and identi?cation. Bayesian models are appealing in part because they help to explain how inferences are guided
by prior knowledge. Imagine, for example, that you see some photographs taken by your friends
Alice and Bob. The ?rst shot shows Alice sitting next to a large statue and eating a sandwich,
and the second is similar but features Bob rather than Alice. The statues in each photograph look
identical, and probably you will conclude that the two photographs are representations of the same
statue. The sandwiches in the photographs also look identical, but probably you will conclude that
the photographs show different sandwiches. The prior knowledge that contributes to these inferences appears rather complex, but we will explore some much simpler cases where prior knowledge
guides identi?cation.
A second advantage of Bayesian models is that they help to explain how learners cope with uncertainty. In some cases a learner may solve the problem of object discovery but should maintain
uncertainty when faced with identi?cation problems. For example, I may be quite certain that I have
met eight different individuals at a dinner party, even if I am unable to distinguish between two
guests who are identical twins. In other cases a learner may need to reason about several related
problems even if there is no de?nitive solution to any one of them. Consider, for example, a young
child who must simultaneously discover which objects her world contains (e.g. Mother, Father, Fido,
and Rex) and organize them into categories (e.g. people and dogs). Many accounts of categorization
seem to implicitly assume that the problem of identi?cation must be solved before categorization
can begin, but we will see that a probabilistic approach can address both problems simultaneously.
Identi?cation and object discovery have been discussed by researchers from several disciplines,
including psychology [2, 3, 4, 5, 6], machine learning [7, 8], statistics [9], and philosophy [10].
Many machine learning approaches can handle identity uncertainty, or uncertainty about whether
two tokens correspond to the same object. Some approaches such as BLOG [8] are able in
addition to handle problems where the number of objects is not specified in advance. We propose
that some of these approaches can help to explain human learning, and this paper uses a simple
BLOG-style approach [8] to account for human inferences.
There are several existing psychological models of identification, and the work of Shepard [11],
Nosofsky [3] and colleagues is probably the most prominent. Models in this tradition usually focus
on problems where the set of objects is specified in advance and where identity uncertainty arises
as a result of perceptual noise. In contrast, we focus on problems where the number of objects
must be inferred and where identity uncertainty arises from partial observability rather than noise.
A separate psychological tradition focuses on problems where the number of objects is not fixed in
advance. Developmental psychologists, for example, have used displays where only one object token
is visible at any time to explore whether young infants can infer how many different objects have
been observed in total [4]. Our work emphasizes some of the same themes as this developmental
research, but we go beyond previous work in this area by presenting and evaluating a computational
approach to object identification and discovery.
The problem of deciding how many objects have been observed is sometimes called individuation [12], but here we treat individuation as a special case of object discovery. Note, however, that
object discovery can also refer to cases where learners infer the existence of objects that have never
been observed. Unobserved-object discovery has received relatively little attention in the psychological literature, but is addressed by statistical models including species-sampling models [9] and capture-recapture models [13]. Simple statistical models of this kind will not address
some of the most compelling examples of unobserved-object discovery, such as the discovery of the
planet Neptune, or the ability to infer the existence of a hidden object by following another person's
gaze [14]. We will show, however, that a simple statistical approach helps to explain how humans
infer the existence of objects that they have never seen.
1 A probabilistic account of object discovery and identification
Object discovery and identification may depend on many kinds of observations and may be supported by many kinds of prior knowledge. This paper considers a very simple setting where these
problems can be explored. Suppose that an agent is learning about a world that contains nw white
balls and n − nw gray balls. Let f(oi) indicate the color of ball oi, where each ball is white
(f (oi ) = 1) or gray (f (oi ) = 0). An agent learns about the world by observing a sequence of object
tokens. Suppose that label l(j) is a unique identifier of token j; in other words, suppose that the
jth token is a token of object ol(j) . Suppose also that the jth token is observed to have feature value
g(j). Note the difference between f and g: f is a vector that specifies the color of the n balls in the
world, and g is a vector that specifies the color of the object tokens observed thus far.
We define a probability distribution over token sequences by assuming that a world is sampled from
a prior P (n, nw ) and that tokens are sampled from this world. The full generative model is:
P(n) ∝ 1/n if n ≤ 1000, and 0 otherwise    (1)
nw | n ∼ Uniform(0, n)    (2)
l(j) | n ∼ Uniform(1, n)    (3)
g(j) = f(ol(j))    (4)
A prior often used for inferences about a population of unknown size is the scale-invariant Jeffreys
prior P(n) = 1/n [15]. We follow this standard approach here but truncate at n = 1000. Choosing
some upper bound is convenient when implementing the model, and has the advantage of producing
a prior that is proper (note that the Jeffreys prior is improper). Equation 2 indicates that the number
of white balls nw is sampled from a discrete uniform distribution. Equation 3 indicates that each
token is generated by sampling one of the n balls in the world uniformly at random, and Equation 4
indicates that the color of each token is observed without noise.
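Written as code, the generative process of Equations 1 through 4 is just a few lines. The sketch below is our own illustration (the function and variable names are not from the paper), assuming the truncated Jeffreys prior and noise-free observation described above.

```python
import random

def sample_world_and_tokens(num_tokens, n_max=1000, rng=random):
    """Sample a world (n, nw, f) and a sequence of observed tokens from it."""
    # Equation 1: truncated Jeffreys prior, P(n) proportional to 1/n for n <= n_max.
    weights = [1.0 / n for n in range(1, n_max + 1)]
    n = rng.choices(range(1, n_max + 1), weights=weights)[0]
    # Equation 2: the number of white balls is uniform on {0, ..., n}.
    nw = rng.randint(0, n)
    f = [1] * nw + [0] * (n - nw)          # ball colors: 1 = white, 0 = gray
    # Equation 3: each token is a ball drawn uniformly with replacement;
    # Equation 4: its color is observed without noise.
    labels = [rng.randint(1, n) for _ in range(num_tokens)]
    colors = [f[l - 1] for l in labels]
    return n, nw, labels, colors
```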
The generative assumptions just described can be used to define a probabilistic approach to object discovery and identification. Suppose that the observations available to a learner consist of a
fully-observed feature vector g and a partially-observed label vector lobs. Object discovery and identification can be addressed by using the posterior distribution P(l|g, lobs) to make inferences about
the number of distinct objects observed and about the identity of each token. Computing the posterior distribution P (n|g, lobs ) allows the learner to make inferences about the total number of objects
in the world. In some cases, the learner may solve the problem of unobserved-object discovery by
realizing that the world contains more objects than she has observed thus far.
The next sections explore the idea that the inferences made by humans correspond approximately
to the inferences of this ideal learner. Since the ideal learner allows for the possible existence of
objects that have not yet been observed, we refer to our model as the open world model. Although
we make no claim about the psychological mechanisms that might allow humans to approximate
the predictions of the ideal learner, in practice we need some method for computing the predictions
of our model. Since the domains we consider are relatively small, all results in this paper were
computed by enumerating and summing over the complete set of possible worlds.
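For domains of this size, the enumeration is straightforward. As a simple illustration (our own sketch, not the authors' code), the posterior over the number of balls when token colors are observed but no labels have been revealed can be computed by marginalizing over nw and over the token labels; with labels unobserved, the tokens are i.i.d. given n and nw.

```python
def posterior_num_balls(colors, n_max=1000):
    """P(n | observed token colors), assuming no token labels are observed.

    Marginalizes over nw (uniform on 0..n); each token is then white
    with probability nw/n, independently of the others.
    """
    n_white = sum(colors)
    n_gray = len(colors) - n_white
    post = {}
    for n in range(1, n_max + 1):
        like = 0.0
        for nw in range(n + 1):
            p_white = nw / n
            like += p_white ** n_white * (1 - p_white) ** n_gray
        like /= (n + 1)            # uniform prior on nw given n
        post[n] = like / n         # truncated Jeffreys prior P(n) ∝ 1/n
    z = sum(post.values())
    return {n: p / z for n, p in post.items()}
```

When some labels are observed, the same sum would simply be restricted to label vectors consistent with lobs.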
2 Experiment 1: Prior knowledge and identification
The introduction described a scenario (the statue and sandwiches example) where prior knowledge
appears to guide identification. Our first experiment explores a very simple instance of this idea. We
consider a setting where participants observe balls that are sampled with replacement from an urn.
In one condition, participants sample the same ball from the urn on four consecutive occasions and
are asked to predict whether the token observed on the fifth draw is the same ball that they saw on
the first draw. In a second condition participants are asked exactly the same question about the fifth
token but sample four different balls on the first four draws. We expect that these different patterns
of data will shape the prior beliefs that participants bring to the identification problem involving the
fifth token, and that participants in the first condition will be substantially more likely to identify the
fifth token as a ball that they have seen before.
Although we consider an abstract setting involving balls and urns, the problem we explore has some
real-world counterparts. Suppose, for example, that a colleague wears the same tie to four formal
dinners. Based on this evidence you might be able to estimate the total number of ties that he owns,
and might guess that he is less likely to wear a new tie to the next dinner than a colleague who wore
different ties to the first four dinners.
Method. 12 adults participated for course credit. Participants interacted with a computer interface
that displayed an urn, a robotic arm and a beam of UV light. The arm randomly sampled balls from
the urn, and participants were told that each ball had a unique serial number that was visible only
under UV light. After some balls were sampled, the robotic arm moved them under the UV light and
revealed their serial numbers before returning them to the urn. Other balls were returned directly to
the urn without having their serial numbers revealed. The serial numbers were alphanumeric strings
such as "QXR182"; note that these serial numbers provide no information about the total number
of objects, and that our setting is therefore different from the Jeffreys tramcar problem [15].
The experiment included five within-participant conditions shown in Figure 1. The observations for
each condition can be summarized by a string that indicates the number of tokens and the serial
numbers of some but perhaps not all tokens. The 1 1 1 1 1 condition in Figure 1a is a case
where the same ball (without loss of generality, we call it ball 1) is drawn from the urn on five
consecutive occasions. The 1 2 3 4 5 condition in Figure 1b is a case where five different balls
are drawn from the urn. The 1 condition in Figure 1d is a case where five draws are
made, but only the serial number of the first ball is revealed. Within any of the five conditions,
all of the balls had the same color (white or gray), but different colors were used across different
conditions. For simplicity, all draws in Figure 1 are shown as white balls.
On the second and all subsequent draws, participants were asked two questions about any token that
was subsequently identified. They first indicated whether the token was likely to be the same as the
ball they observed on the first draw (the ball labeled 1 in Figure 1). They then indicated whether
the token was likely to be a ball that they had never seen before. Both responses were provided on a
scale from 1 (very unlikely) to 7 (very likely). At the end of each condition, participants were asked
to estimate the total number of balls in the urn. Twelve options were provided ranging from "exactly
1" to "exactly 12," and a thirteenth option was labeled "more than 12." Responses to each option
were again provided on a seven point scale.
Model predictions and results. The comparisons of primary interest involve the identification
questions in conditions 1a and 1b. In condition 1a the open world model infers that the total number
of balls is probably low, and becomes increasingly confident that each new token is the same as the
first object observed. In condition 1b the model infers that the number of balls is probably high, and
becomes increasingly confident that each new token is probably a new ball.

[Figure 1, panels (a) through (e): bar charts comparing human responses with the predictions of the
open world, DP mixture, and PY mixture models for the 1 1 1 1 1 condition, the 1 2 3 4 5
condition, and the three control conditions.]

Figure 1: Model predictions and results for the five conditions in experiment 1. The left columns
in (a) and (b) show inferences about the identification questions. In each plot, the first group of
bars shows predictions about the probability that each new token is the same ball as the first ball
drawn from the urn. The second group of bars shows the probability that each new token is a ball
that has never been seen before. The right columns in (a) and (b) and the plots in (c) through (e)
show inferences about the total number of balls in each urn. All human responses are shown on
the 1-7 scale used for the experiment. Model predictions are shown as probabilities (identification
questions) or ranks (population size questions).
The rightmost charts in Figures 1a and 1b show inferences about the total number of balls and
confirm that humans expect the number of balls to be low in condition 1a and high in condition 1b.
Note that participants in condition 1b have solved the problem of unobserved-object discovery and
inferred the existence of objects that they have never seen. The leftmost charts in 1a and 1b show
responses to the identification questions, and the final bar in each group of four shows predictions
about the fifth token sampled. As predicted by the model, participants in 1a become increasingly
confident that each new token is the same object as the first token, but participants in 1b become
increasingly confident that each new token is a new object. The increase in responses to the new ball
questions in Figure 1b is replicated in conditions 2d and 2e of Experiment 2, and therefore appears
to be reliable.
The third and fourth rows of Figures 1a and 1b show the predictions of two alternative models that
are intuitively appealing but that fail to account for our results. The first is the Dirichlet Process (DP)
mixture model, which was proposed by Anderson [16] as an account of human categorization. Unlike most psychological models of categorization, the DP mixture model reserves some probability
mass for outcomes that have not yet been observed. The model incorporates a prior distribution over
partitions; in most applications of the model these partitions organize objects into categories, but
Anderson suggests that the model can also be used to organize object tokens into classes that correspond to individual objects. The DP mixture model successfully predicts that the ball 1 questions
will receive higher ratings in 1a than 1b, but predicts that responses to the new ball question will
be identical across these two conditions. According to this model, the probability that a new token
corresponds to a new object is α/(m + α), where α is a hyperparameter and m is the number of tokens
observed thus far. Note that this probability is the same regardless of the identities of the m tokens
previously observed.
The Pitman-Yor (PY) mixture model in the fourth row is a generalization of the DP mixture model
that uses a prior over partitions defined by two hyperparameters [17]. According to this model, the
probability that a new token corresponds to a new object is (α + kθ)/(m + α), where α and θ are
hyperparameters and k is the number of distinct objects observed so far. The flexibility offered by a
second hyperparameter allows the model to predict a difference in responses to the new ball questions
across the two conditions, but the model does not account for the increasing pattern observed in
condition 1b. Most settings of α and θ predict that the responses to the new ball questions will
decrease in condition 1b. A non-generic setting of these hyperparameters with α = 0 can generate
the flat predictions in Figure 1, but no setting of the hyperparameters predicts the increase in the
human responses.
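The two update rules are easy to compare numerically. A minimal sketch (our own illustration), using the new-object probabilities as given for the two models and evaluated on data like condition 1b, where every token so far has been distinct (k = m):

```python
def dp_new_prob(m, alpha):
    """DP mixture: P(next token is a new object) after m tokens."""
    return alpha / (m + alpha)

def py_new_prob(m, k, alpha, theta):
    """PY mixture: same probability, with k distinct objects seen so far."""
    return (alpha + k * theta) / (m + alpha)

# In condition 1b every draw so far has been a new ball, so k = m.
dp_curve = [dp_new_prob(m, 1.0) for m in range(1, 5)]
py_curve = [py_new_prob(m, m, 0.0, 0.5) for m in range(1, 5)]
```

The DP curve decreases with m regardless of the data, while setting α = 0 makes the PY curve flat at θ; neither rule produces the increasing pattern seen in the human responses.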
Although the PY and DP models both make predictions about the identification questions, neither
model can predict the total number of balls in the urn. Both models assume that the population of
balls is countably infinite, which does not seem appropriate for the tasks we consider.
Figures 1c through 1d show results for three control conditions. Like condition 1a, 1c and 1d are
cases where exactly one serial number is observed. Like conditions 1a and 1b, 1d and 1e are cases
where exactly ?ve tokens are observed. None of these control conditions produces results similar to
conditions 1a and 1b, suggesting that methods which simply count the number of tokens or serial
numbers will not account for our results.
In each of the ?nal three conditions our model predicts that the posterior distribution on the number
of balls n should decay as n increases. This prediction is not consistent with our data, since most
participants assigned equal ratings to all 13 options, including ?exactly 12 balls? and ?more than
12 balls.? The ?at responses in Figures 1c through 1e appear to indicate a generic desire to express
uncertainty, and suggest that our ideal learner model accounts for human responses only after several
informative observations have been made.
3 Experiment 2: Object discovery and identity uncertainty
Our second experiment focuses on object discovery rather than identification. We consider cases
where learners make inferences about the number of objects they have seen and the total number
of objects in the urn even though there is substantial uncertainty about the identities of many of the
tokens observed. Our probabilistic model predicts that observations of unidentified tokens can influence inferences about the total number of objects, and our second experiment tests this prediction.
Method. 12 adults participated for course credit. The same participants took part in Experiments
1 and 2, and Experiment 2 was always completed after Experiment 1. Participants interacted with
the same computer interface in both conditions, and the seven conditions in Experiment 2 are shown
in Figure 2. Note that each condition now includes one or more gray tokens. In 2a, for example,
there are four gray tokens and none of these tokens is identified. All tokens were sampled with
replacement, and the condition labels in Figure 2 summarize the complete set of tokens presented in
each condition. Within each condition the tokens were presented in a pseudo-random order; in 2a,
for example, the gray and white tokens were interspersed with each other.
Model predictions and results. The cases of most interest are the inferences about the total number
of balls in conditions 2a and 2c. In both conditions participants observe exactly four white tokens
and all four tokens are revealed to be the same ball. The gray tokens in each condition are never
identified, but the number of these tokens varies across the conditions. Even though the identities
[Figure 2 bar charts: each condition (a)-(g) pairs the open world model's predictions with human responses, showing ratings for the identification options (e.g. "= 1 (ball 1)" vs. "= NEW") and distributions over the total number of balls (1, 3, 5, 7, 9, 11, 12+).]
Figure 2: Model predictions and results for the seven conditions in Experiment 2. The left columns
in (a) through (e) show inferences about the identification questions, and the remaining plots show
inferences about the total number of balls in each urn.
of the gray tokens are never revealed, the open world model can use these observations to guide its
inference about the total number of balls. In 2a, the proportions of white tokens and gray tokens
are equal and there appears to be only one white ball, suggesting that the total number of balls is
around two. In 2c grey tokens are now three times more common, suggesting that the total number
of balls is larger than two. As predicted, the human responses in Figure 2 show that the peak of the
distribution in 2a shifts to the right in 2c. Note, however, that the model does not accurately predict
the precise location of the peak in 2c.
Some of the remaining conditions in Figure 2 serve as controls for the comparison between 2a and
2c. Conditions 2a and 2c differ in the total number of tokens observed, but condition 2b shows that
this difference is not the critical factor. The number of tokens observed is the same across 2b and 2c,
yet the inference in 2b is more similar to the inference in 2a than in 2c. Conditions 2a and 2c also
differ in the proportion of white tokens observed, but conditions 2f and 2g show that this difference
is not sufficient to explain our results. The proportion of white tokens observed is the same across
conditions 2a, 2f, and 2g, yet only 2a provides strong evidence that the total number of balls is
low. The human inferences for 2f and 2g show the hint of an alternating pattern consistent with the
inference that the total number of balls in the urn is even. Only 2 out of 12 participants generated
this pattern, however, and the majority of responses are near uniform. Finally, conditions 2d and
2e replicate our finding from Experiment 1 that the identity labels play an important role. The only
difference between 2a and 2e is that the four labels are distinct in the latter case, and this single
difference produces a predictable divergence in human inferences about the total number of balls.
4 Experiment 3: Categorization and identity uncertainty
Experiment 2 suggested that people make robust inferences about the existence and number of unobserved objects in the presence of identity uncertainty. Our final experiment explores categorization
in the presence of identity uncertainty. We consider an extreme case where participants make inferences about the variability of a category even though the tokens of that category have never been
identified.
Method. The experiment included two between subject conditions, and 20 adults were recruited for
each condition. Participants were asked to reason about a category including eggs of a given species,
where eggs in the same category might vary in size. The interface used in Experiments 1 and 2 was
adapted so that the urn now contained two kinds of objects: notepads and eggs. Participants were
told that each notepad had a unique color and a unique label written on the front. The UV light
played no role in the experiment and was removed from the interface: notepads could be identified
by visual inspection, and identifying labels for the eggs were never shown.
In both conditions participants observed a sequence of 16 tokens sampled from the urn. Half of the
tokens were notepads and the others were eggs, and all egg tokens were identical in size. Whenever
an egg was sampled, participants were told that this egg was a Kwiba egg. At the end of the condition, participants were shown a set of 11 eggs that varied in size and asked to rate the probability
that each one was a Kwiba egg. Participants then made inferences about the total number of eggs
and the total number of notepads in the urn.
The two conditions were intended to lead to different inferences about the total number of eggs in
the urn. In the 4 egg condition, all items (notepads and eggs) were sampled with replacement. The
8 notepad tokens included two tokens of each of 4 notepads, suggesting that the total number of
notepads was 4. Since the proportion of egg tokens and notepad tokens was equal, we expected
participants to infer that the total number of eggs was roughly four. In the 1 egg condition, four
notepads were observed in total, but the first three were sampled without replacement and never
returned to the urn. The final notepad and the egg tokens were always sampled with replacement.
After the first three notepads had been removed from the urn, the remaining notepad was sampled
about half of the time. We therefore expected participants to infer that the urn probably contained
a single notepad and a single egg by the end of the experiment, and that all of the eggs they had
observed were tokens of a single object.
Model. We can simultaneously address identification and categorization by combining the open
world model with a Gaussian model of categorization. Suppose that the members of a given category
(e.g. Kwiba eggs) vary along a single continuous dimension (e.g. size). We assume that the egg
sizes are distributed according to a Gaussian with known mean and unknown variance σ². For
convenience, we assume that the mean is zero (i.e. we measure size with respect to the average) and
use the standard inverse-gamma prior on the variance: p(σ²) ∝ (σ²)^(−(α+1)) e^(−β/σ²). Since we are
interested only in qualitative predictions of the model, the precise values of the hyperparameters are
not very important. To generate the results shown in Figure 3 we set α = 0.5 and β = 2.
Before observing any eggs, the marginal distribution on sizes is p(x) = ∫ p(x|σ²) p(σ²) dσ². Suppose now that we observe m random samples from the category and that each one has size zero.
If m is large then these observations provide strong evidence that the variance σ² is small, and the
posterior distribution p(x|m) will be tightly peaked around zero. If m is small, however, then the
posterior distribution will be broader.
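As a sanity check on this behavior, the following sketch integrates out σ² numerically rather than invoking the closed-form posterior predictive; the grid bounds are arbitrary choices, and α = 0.5, β = 2 match the values used for Figure 3.

```python
import numpy as np

def posterior_predictive(x, m, alpha=0.5, beta=2.0):
    """p(x | m tokens of size zero), integrating out sigma^2 on a grid.

    Prior: p(sigma^2) proportional to (sigma^2)^-(alpha+1) * exp(-beta/sigma^2).
    m observations of size zero multiply the likelihood by (sigma^2)^(-m/2),
    so the posterior over sigma^2 is inverse-gamma(alpha + m/2, beta).
    """
    s2 = np.linspace(1e-3, 50.0, 20000)      # grid over sigma^2
    ds = s2[1] - s2[0]
    post = s2 ** (-(alpha + m / 2.0) - 1) * np.exp(-beta / s2)
    post /= post.sum() * ds                  # normalize on the grid
    like = np.exp(-x ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)
    return (like * post).sum() * ds

# More same-size observations -> a tighter predictive distribution:
assert posterior_predictive(0.0, m=4) > posterior_predictive(0.0, m=1)
assert posterior_predictive(2.0, m=4) < posterior_predictive(2.0, m=1)
```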
[Figure 3 plots: category pdfs p1(x) and p4(x) inferred in the 1 egg and 4 egg conditions, the model and human difference curves p4(x) − p1(x) plotted against size x, and human inferences about the number of eggs in each condition.]
Figure 3: (a) Model predictions for Experiment 3. The first two panels show the size distributions
inferred for the two conditions, and the final panel shows the difference of these distributions. The
difference curve for the model rises to a peak of around 1.6 but has been truncated at 0.1. (b)
Human inferences about the total number of eggs in the urn. As predicted, participants in the 4
egg condition believe that the urn contains more eggs. (c) The difference of the size distributions
generated by participants in each condition. The central peak is absent but otherwise the curve is
qualitatively similar to the model prediction.
The categorization model described so far is entirely standard, but note that our experiment considers
a case where T, the observed stream of object tokens, is not sufficient to determine m, the number of
distinct objects observed. We therefore use the open world model to generate a posterior distribution
over m, and compute a marginal distribution over size by integrating out both m and σ²:
p(x|T) = ∫∫ p(x|σ²) p(σ²|m) p(m|T) dσ² dm. Figure 3a shows predictions of this "open world + Gaussian"
model for the two conditions in our experiment. Note that the difference between the curves for the
two conditions has the characteristic Mexican-hat shape produced by a difference of Gaussians.
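The Mexican-hat shape appears for any pair of zero-mean densities of different widths; the variances below are illustrative stand-ins, not the posterior predictives actually computed by the model.

```python
import numpy as np

def gauss(x, s2):
    """Zero-mean Gaussian density with variance s2."""
    return np.exp(-x ** 2 / (2 * s2)) / np.sqrt(2 * np.pi * s2)

# Narrow "4 eggs" curve minus broad "1 egg" curve:
assert gauss(0.0, 0.5) - gauss(0.0, 2.0) > 0          # positive central peak
assert gauss(2.0, 0.5) - gauss(2.0, 2.0) < 0          # negative side lobes
assert abs(gauss(4.0, 0.5) - gauss(4.0, 2.0)) < 0.02  # decays in the tails
```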
Results. Inferences about the total number of eggs suggested that our manipulation succeeded.
Figure 3b indicates that participants in the 4 egg condition believed that they had seen more eggs
than participants in the 1 egg condition. Participants in both conditions generated a size distribution
for the category of Kwiba eggs, and the difference of these distributions is shown in Figure 3c.
Although the magnitude of the differences is small, the shape of the difference curve is consistent
with the model predictions. The x = 0 bar is the only case that diverges from the expected Mexican
hat shape, and this result is probably due to a ceiling effect: 80% of participants in both conditions
chose the maximum possible rating for the egg with mean size (size zero), leaving little opportunity
for a difference between conditions to emerge. To support the qualitative result in Figure 3c we
computed the variance of the curve generated by each individual participant and tested the hypothesis
that the variances were greater in the 1 egg condition than in the 4 egg condition. A Mann-Whitney
test indicated that this difference was marginally significant (p < 0.1, one-sided).
5 Conclusion
Parsing the world into stable and recurring objects is arguably our most basic cognitive achievement [2, 10]. This paper described a simple model of object discovery and identification and evaluated it in three behavioral experiments. Our first experiment confirmed that people rely on prior
knowledge when solving identification problems. Our second and third experiments explored problems where the identities of many object tokens were never revealed. Despite the resulting uncertainty, we found that participants in these experiments were able to track the number of objects they
had seen, to infer the existence of unobserved objects, and to learn and reason about categories.
Although the tasks in our experiments were all relatively simple, future work can apply our approach to more realistic settings. For example, a straightforward extension of our model can handle
problems where objects vary along multiple perceptual dimensions and where observations are corrupted by perceptual noise. Discovery and identification problems may take several different forms,
but probabilistic inference can help to explain how all of these problems are solved.
Acknowledgments We thank Bobby Han, Faye Han and Maureen Satyshur for running the experiments.
References
[1] E. A. Tibbetts and J. Dale. Individual recognition: it is good to be different. Trends in Ecology
and Evolution, 22(10):529–537, 2007.
[2] W. James. Principles of psychology. Holt, New York, 1890.
[3] R. M. Nosofsky. Attention, similarity, and the identification-categorization relationship. Journal of Experimental Psychology: General, 115:39–57, 1986.
[4] F. Xu and S. Carey. Infants' metaphysics: the case of numerical identity. Cognitive Psychology,
30:111–153, 1996.
[5] L. W. Barsalou, J. Huttenlocher, and K. Lamberts. Basing categorization on individuals and
events. Cognitive Psychology, 36:203–272, 1998.
[6] L. J. Rips, S. Blok, and G. Newman. Tracing the identity of objects. Psychological Review,
113(1):1–30, 2006.
[7] A. McCallum and B. Wellner. Conditional models of identity uncertainty with application
to noun coreference. In L. K. Saul, Y. Weiss, and L. Bottou, editors, Advances in Neural
Information Processing Systems 17, pages 905–912. MIT Press, Cambridge, MA, 2005.
[8] B. Milch, B. Marthi, S. Russell, D. Sontag, D. L. Ong, and A. Kolobov. BLOG: Probabilistic
models with unknown objects. In Proceedings of the 19th International Joint Conference on
Artificial Intelligence, pages 1352–1359, 2005.
[9] J. Bunge and M. Fitzpatrick. Estimating the number of species: a review. Journal of the
American Statistical Association, 88(421):364–373, 1993.
[10] R. G. Millikan. On clear and confused ideas: an essay about substance concepts. Cambridge
University Press, New York, 2000.
[11] R. N. Shepard. Stimulus and response generalization: a stochastic model relating generalization to distance in psychological space. Psychometrika, 22:325–345, 1957.
[12] A. M. Leslie, F. Xu, P. D. Tremoulet, and B. J. Scholl. Indexing and the object concept:
developing "what" and "where" systems. Trends in Cognitive Science, 2(1):10–18, 1998.
[13] J. D. Nichols. Capture-recapture models. Bioscience, 42(2):94–102, 1992.
[14] G. Csibra and A. Volein. Infants can infer the presence of hidden objects from referential gaze
information. British Journal of Developmental Psychology, 26:1–11, 2008.
[15] H. Jeffreys. Theory of Probability. Oxford University Press, Oxford, 1961.
[16] J. R. Anderson. The adaptive nature of human categorization. Psychological Review, 98(3):
409–429, 1991.
[17] J. Pitman. Combinatorial stochastic processes, 2002. Notes for Saint Flour Summer School.
3,064 | 3,777 | Matrix Completion from Noisy Entries
Raghunandan H. Keshavan*, Andrea Montanari*†, and Sewoong Oh*
Abstract
Given a matrix M of low-rank, we consider the problem of reconstructing it from
noisy observations of a small, random subset of its entries. The problem arises
in a variety of applications, from collaborative filtering (the "Netflix problem")
to structure-from-motion and positioning. We study a low complexity algorithm
introduced in [1], based on a combination of spectral techniques and manifold
optimization, that we call here OptSpace. We prove performance guarantees
that are order-optimal in a number of circumstances.
1 Introduction
Spectral techniques are an authentic workhorse in machine learning, statistics, numerical analysis,
and signal processing. Given a matrix M, its largest singular values (and the associated singular
vectors) "explain" the most significant correlations in the underlying data source. A low-rank approximation of M can further be used for low-complexity implementations of a number of linear
algebra algorithms [2].
In many practical circumstances we have access only to a sparse subset of the entries of an m × n
matrix M. It has recently been discovered that, if the matrix M has rank r, and unless it is too
"structured", a small random subset of its entries allows one to reconstruct it exactly. This result was first
proved by Candès and Recht [3] by analyzing a convex relaxation introduced by Fazel [4]. A tighter
analysis of the same convex relaxation was carried out in [5]. A number of iterative schemes to solve
the convex optimization problem appeared soon thereafter [6, 7, 8] (also see [9] for a generalization).
In an alternative line of work, the authors of [1] attacked the same problem using a combination
of spectral techniques and manifold optimization: we will refer to their algorithm as OptSpace.
OptSpace is intrinsically of low complexity, the most complex operation being computing r singular values and the corresponding singular vectors of a sparse m × n matrix. The performance
guarantees proved in [1] are comparable with the information theoretic lower bound: roughly
nr max{r, log n} random entries are needed to reconstruct M exactly (here we assume m of order n). A related approach was also developed in [10], although without performance guarantees for
matrix completion.
The above results crucially rely on the assumption that M is exactly a rank r matrix. For many
applications of interest, this assumption is unrealistic and it is therefore important to investigate
their robustness. Can the above approaches be generalized when the underlying data is "well approximated" by a rank r matrix? This question was addressed in [11] within the convex relaxation
approach of [3]. The present paper proves a similar robustness result for OptSpace. Remarkably, the guarantees we obtain are order-optimal in a variety of circumstances, and improve over the
analogous results of [11].
* Department of Electrical Engineering, Stanford University
† Department of Statistics, Stanford University
1.1 Model definition
Let M be an m × n matrix of rank r, that is

    M = U Σ Vᵀ ,                                                (1)

where U has dimensions m × r, V has dimensions n × r, and Σ is a diagonal r × r matrix. We
assume that each entry of M is perturbed, thus producing an "approximately" low-rank matrix N,
with

    N_ij = M_ij + Z_ij ,                                        (2)

where the matrix Z will be assumed to be "small" in an appropriate sense.
Out of the m × n entries of N, a subset E ⊆ [m] × [n] is revealed. We let N^E be the m × n matrix
that contains the revealed entries of N, and is filled with 0's in the other positions:

    N^E_ij = N_ij if (i, j) ∈ E,  and  N^E_ij = 0 otherwise.    (3)

The set E will be uniformly random given its size |E|.
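A minimal sketch of this observation model, with Bernoulli sampling of E as a stand-in for a uniformly random subset of fixed size, and arbitrary dimensions and noise level:

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, r = 60, 50, 2

# Rank-r signal M = U Sigma V^T, perturbed entrywise by noise Z.
U = rng.standard_normal((m, r))
V = rng.standard_normal((n, r))
M = U @ np.diag([5.0, 3.0]) @ V.T
Z = 0.1 * rng.standard_normal((m, n))
N = M + Z

# Reveal each entry independently with probability p; mask encodes E.
p = 0.3
mask = rng.random((m, n)) < p
NE = np.where(mask, N, 0.0)   # N^E: revealed entries of N, zeros elsewhere

assert NE[~mask].sum() == 0.0
assert np.allclose(NE[mask], N[mask])
```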
1.2 Algorithm
For the reader's convenience, we recall the algorithm introduced in [1], which we will analyze here.
The basic idea is to minimize the cost function F(X, Y), defined by

    F(X, Y) ≡ min_{S ∈ R^{r×r}} F(X, Y, S) ,                          (4)

    F(X, Y, S) ≡ (1/2) Σ_{(i,j)∈E} ( N_ij − (X S Yᵀ)_ij )² .          (5)

Here X ∈ R^{m×r}, Y ∈ R^{n×r} are orthogonal matrices, normalized by XᵀX = m·1, YᵀY = n·1.
Minimizing F(X, Y) is an a priori difficult task, since F is a non-convex function. The key insight
is that the singular value decomposition (SVD) of N^E provides an excellent initial guess, and that the
minimum can be found with high probability by standard gradient descent after this initialization.
Two caveats must be added to this description: (1) In general the matrix N^E must be "trimmed"
to eliminate over-represented rows and columns; (2) For technical reasons, we consider a slightly
modified cost function, to be denoted by F̃(X, Y).
OptSpace( matrix N^E )
  1: Trim N^E, and let Ñ^E be the output;
  2: Compute the rank-r projection of Ñ^E : T_r(Ñ^E) = X₀ S₀ Y₀ᵀ ;
  3: Minimize F̃(X, Y) through gradient descent, with initial condition (X₀, Y₀).

We may note here that the rank of the matrix M, if not known, can be reliably estimated from Ñ^E.
We refer to the journal version of this paper for further details.
The various steps of the above algorithm are defined as follows.
Trimming. We say that a row is "over-represented" if it contains more than 2|E|/m revealed entries
(i.e. more than twice the average number of revealed entries). Analogously, a column is over-represented if it contains more than 2|E|/n revealed entries. The trimmed matrix Ñ^E is obtained
from N^E by setting to 0 over-represented rows and columns. M̃^E and Z̃^E are defined similarly.
Hence, Ñ^E = M̃^E + Z̃^E.
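The trimming step translates directly into array operations; this is a sketch under the convention that mask records the revealed set E:

```python
import numpy as np

def trim(NE, mask):
    """Zero out rows with more than 2|E|/m revealed entries and columns
    with more than 2|E|/n revealed entries (twice the average count)."""
    m, n = NE.shape
    E = mask.sum()
    out = NE.copy()
    out[mask.sum(axis=1) > 2 * E / m, :] = 0.0   # over-represented rows
    out[:, mask.sum(axis=0) > 2 * E / n] = 0.0   # over-represented columns
    return out
```

The same function applied to the mask of Z yields Z̃^E in the notation above.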
Rank-r projection. Let

    Ñ^E = Σ_{i=1}^{min(m,n)} σᵢ xᵢ yᵢᵀ                              (6)

be the singular value decomposition of Ñ^E, with singular values σ₁ ≥ σ₂ ≥ · · · . We then define

    T_r(Ñ^E) = (mn / |E|) Σ_{i=1}^{r} σᵢ xᵢ yᵢᵀ .                   (7)

Apart from an overall normalization, T_r(Ñ^E) is the best rank-r approximation to Ñ^E in Frobenius
norm.
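Equation (7) amounts to a truncated SVD with a rescaling; a sketch:

```python
import numpy as np

def rank_r_projection(NE_trim, E_size, r):
    """T_r of the trimmed matrix: (mn / |E|) times its best rank-r
    approximation, computed from the truncated SVD."""
    m, n = NE_trim.shape
    X, s, Yt = np.linalg.svd(NE_trim, full_matrices=False)
    return (m * n / E_size) * (X[:, :r] * s[:r]) @ Yt[:r, :]
```

When every entry is revealed (|E| = mn) the rescaling factor is 1, so the projection of a rank-r matrix returns the matrix itself.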
Minimization. The modified cost function F̃ is defined as

    F̃(X, Y) = F(X, Y) + ρ G(X, Y)                                                                        (8)
             ≡ F(X, Y) + ρ Σ_{i=1}^{m} G₁( ‖X^(i)‖² / (3μ₀r) ) + ρ Σ_{j=1}^{n} G₁( ‖Y^(j)‖² / (3μ₀r) ) ,  (9)

where X^(i) denotes the i-th row of X, and Y^(j) the j-th row of Y. See Section 1.3 below for the
definition of μ₀. The function G₁ : R₊ → R is such that G₁(z) = 0 if z ≤ 1 and G₁(z) =
e^{(z−1)²} − 1 otherwise. Further, we can choose ρ = Θ(nε).
Let us stress that the regularization term is mainly introduced for our proof technique to work (and
a broad family of functions G₁ would work as well). In numerical experiments we did not find any
performance loss in setting ρ = 0.
One important feature of OptSpace is that F(X, Y) and F̃(X, Y) are regarded as functions of
the r-dimensional subspaces of R^m and R^n generated (respectively) by the columns of X and Y.
This interpretation is justified by the fact that F(X, Y) = F(XA, Y B) for any two orthogonal
matrices A, B ∈ R^{r×r} (the same property holds for F̃). The set of r-dimensional subspaces of R^m
is a differentiable Riemannian manifold G(m, r) (the Grassmann manifold). The gradient descent
algorithm is applied to the function F̃ : M(m, n) ≡ G(m, r) × G(n, r) → R. For further details on
optimization by gradient descent on matrix manifolds we refer to [12, 13].
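The inner minimization in Eq. (4) is the easy part of evaluating F: for fixed X and Y, each observed entry gives one linear equation in the r² entries of S, so the optimal S is an ordinary least-squares solution. A sketch (the manifold gradient descent over X and Y is not shown):

```python
import numpy as np

def best_S(X, Y, N, mask):
    """argmin over S of the sum over observed (i, j) of (N_ij - (X S Y^T)_ij)^2.

    (X S Y^T)_ij = sum_{a,b} X[i,a] S[a,b] Y[j,b] is linear in S, so each
    observed entry contributes the row kron(X[i], Y[j]) acting on vec(S).
    """
    r = X.shape[1]
    idx = np.argwhere(mask)
    A = np.stack([np.kron(X[i], Y[j]) for i, j in idx])
    b = np.array([N[i, j] for i, j in idx])
    s, *_ = np.linalg.lstsq(A, b, rcond=None)
    return s.reshape(r, r)
```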
1.3 Main results
Our first result shows that, in great generality, the rank-r projection of Ñ^E provides a reasonable
approximation of M. Throughout this paper, without loss of generality, we assume α ≡ m/n ≥ 1.
Theorem 1.1. Let N = M + Z, where M has rank r and |M_ij| ≤ M_max for all (i, j) ∈ [m] × [n],
and assume that the subset of revealed entries E ⊆ [m] × [n] is uniformly random with size |E|.
Then there exist numerical constants C and C′ such that

    (1/√(mn)) ‖M − T_r(Ñ^E)‖_F ≤ C M_max ( nrα^{3/2} / |E| )^{1/2} + C′ ( n√(rα) / |E| ) ‖Z̃^E‖₂ ,   (10)

with probability larger than 1 − 1/n³.
Projection onto rank-r matrices through SVD is pretty standard (although trimming is crucial for
achieving the above guarantee). The key point here is that a much better approximation is obtained
by minimizing the cost F̃(X, Y) (step 3 in the pseudocode above), provided M satisfies an appropriate incoherence condition. Let M = U Σ Vᵀ be a low rank matrix, and assume, without loss of
generality, UᵀU = m·1 and VᵀV = n·1. We say that M is (μ₀, μ₁)-incoherent if the following
conditions hold.

A1. For all i ∈ [m], j ∈ [n] we have Σ_{k=1}^{r} U²_{ik} ≤ μ₀r and Σ_{k=1}^{r} V²_{jk} ≤ μ₀r.

A2. There exists μ₁ such that | Σ_{k=1}^{r} U_{ik} (Σ_k / Σ₁) V_{jk} | ≤ μ₁ r^{1/2}.
Theorem 1.2. Let N = M + Z, where M is a (μ₀, μ₁)-incoherent matrix of rank r, and assume
that the subset of revealed entries E ⊆ [m] × [n] is uniformly random with size |E|. Further, let
Σ_min = Σ₁ ≤ · · · ≤ Σ_r = Σ_max with Σ_max/Σ_min ≡ κ. Let M̂ be the output of OptSpace on
input N^E. Then there exist numerical constants C and C′ such that if

    |E| ≥ C n √α κ² max{ μ₀ r √α log n ; μ₀² r² α κ⁴ ; μ₁² r² α κ⁴ } ,   (11)
then, with probability at least 1 ? 1/n3 ,
?
1
? 2 n ?r
c
?
||M ? M ||F ? C ?
||Z E ||2 .
|E|
mn
(12)
provided that the right-hand side is smaller than ?min .
Apart from capturing the effect of additive noise, these two theorems improve over the work of [1]
even in the noiseless case. Indeed they provide quantitative bounds in finite dimensions, while the
results of [1] were only asymptotic.
1.4 Noise models
In order to make sense of the above results, it is convenient to consider a couple of simple models
for the noise matrix Z:
Independent entries model. We assume that Z's entries are independent random variables with zero mean, E{Z_ij} = 0, and sub-Gaussian tails. The latter means that
$$\mathbb{P}\{|Z_{ij}| \ge x\} \;\le\; 2\,e^{-\frac{x^2}{2\sigma^2}}\,, \qquad (13)$$
for some bounded constant σ².
Worst case model. In this model Z is arbitrary, but we have a uniform bound on the size of its entries: |Z_ij| ≤ Z_max.
The basic parameter entering our main results is the operator norm of Z̃^E, which is bounded as follows.
Theorem 1.3. If Z is a random matrix drawn according to the independent entries model, then there is a constant C such that
$$\|\widetilde Z^E\|_2 \;\le\; C\,\sigma\left(\frac{\sqrt{\alpha}\,|E|\,\log|E|}{n}\right)^{1/2}, \qquad (14)$$
with probability at least 1 − 1/n³. If Z is a matrix from the worst case model, then
$$\|\widetilde Z^E\|_2 \;\le\; \frac{2|E|}{n\sqrt{\alpha}}\,Z_{\max}\,, \qquad (15)$$
for any realization of E.
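The worst-case bound (15) can be sanity-checked numerically. The sketch below (illustrative sizes, not from the paper's code) draws bounded ±Zmax noise on a random revealed set and compares its operator norm against 2|E|Zmax/(n√α):

```python
import numpy as np

rng = np.random.default_rng(2)
m = n = 300                                  # alpha = m/n = 1
Zmax = 0.5
mask = rng.random((m, n)) < 0.1              # revealed set E
ZE = np.where(mask, Zmax * rng.choice([-1.0, 1.0], size=(m, n)), 0.0)

op_norm = np.linalg.norm(ZE, 2)              # largest singular value
bound = 2 * mask.sum() / (n * np.sqrt(m / n)) * Zmax
print(op_norm, bound)                        # the bound holds with a lot of slack here
```

For a uniformly random mask no row or column is over-represented, so the bound holds comfortably; trimming is what enforces this in general.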
Note that for |E| = Ω(n log n), no row or column is over-represented with high probability. It follows that in the regime of |E| for which the conditions of Theorem 1.2 are satisfied, we have Z^E = Z̃^E. Then, among other things, this result implies that for the independent entries model the right-hand side of our error estimate, Eq. (12), is with high probability smaller than Σmin if |E| ≥ C r α^{3/2} n log n · κ⁴ (σ/Σmin)². For the worst case model, the same statement is true if Z_max ≤ Σmin/(C√r κ²).
Due to space constraints, the proof of Theorem 1.3 will be given in the journal version of this paper.
1.5 Comparison with related work
Let us begin by mentioning that a statement analogous to our preliminary Theorem 1.1 was proved in [14]. Our result however applies to any number of revealed entries, while the one of [14] requires |E| ≥ (8 log n)⁴ n (which for n ≤ 5 × 10⁸ is larger than n²).
As for Theorem 1.2, we will mainly compare our algorithm with the convex relaxation approach
recently analyzed in [11]. Our basic setting is indeed the same, while the algorithms are rather
different.
[Figure 1 appears here: RMSE plotted against |E|/n, comparing the convex relaxation, the information-theoretic lower bound, the rank-r projection, and OptSpace after 1, 2, 3, and 10 iterations.]
Figure 1: Root mean square error achieved by OptSpace for reconstructing a random rank-2 matrix, as a function of the number of observed entries |E|, and of the number of line minimizations. The performance of nuclear norm minimization and an information theory lower bound are also shown.
Figure 1 compares the average root mean square error for the two algorithms as a function of |E|. Here M is a random rank r = 2 matrix of dimension m = n = 600, generated by letting M = Ũ Ṽ^T with Ũ_ij, Ṽ_ij i.i.d. N(0, 20/√n). The noise is distributed according to the independent entries model with Z_ij ∼ N(0, 1). This example is taken from [11] Figure 2, from which we took the data for the convex relaxation approach, as well as the information theory lower bound. After one iteration, OptSpace has a smaller root mean square error than [11], and in about 10 iterations it becomes indistinguishable from the information theory lower bound.
Next let us compare our main result with the performance guarantee in [11], Theorem 7. Let us stress that we require some bound on the condition number κ, while the analysis of [11, 5] requires a stronger incoherence assumption. As far as the error bound is concerned, [11] proved
$$\frac{1}{\sqrt{mn}}\,\|\widehat M - M\|_F \;\le\; 7\,\sqrt{\frac{n}{|E|}}\,\|Z^E\|_F \;+\; \frac{2}{n\sqrt{\alpha}}\,\|Z^E\|_F\,. \qquad (16)$$
(The constant in front of the first term is in fact slightly smaller than 7 in [11], but in any case larger than 4√2.)
Theorem 1.2 improves over this result in several respects: (1) we do not have the second term on the right-hand side of (16), which actually increases with the number of observed entries; (2) our error decreases as n/|E| rather than (n/|E|)^{1/2}; (3) the noise enters Theorem 1.2 through the operator norm ||Z^E||₂ instead of its Frobenius norm ||Z^E||_F ≥ ||Z^E||₂. For E uniformly random, one expects ||Z^E||_F to be roughly of order ||Z^E||₂ √n. For instance, within the independent entries model with bounded variance σ, ||Z^E||_F = Θ(√|E|) while ||Z^E||₂ is of order √(|E|/n) (up to logarithmic terms).
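The gap between the two norms is easy to see numerically. The following sketch (arbitrary sizes, not from the paper) draws i.i.d. Gaussian noise on a random E and checks that ||Z^E||_F scales like σ√|E| while ||Z^E||₂ scales only like √(|E|/n):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400
sigma = 1.0
mask = rng.random((n, n)) < 0.05             # revealed set E
ZE = np.where(mask, sigma * rng.normal(size=(n, n)), 0.0)

E = mask.sum()
fro = np.linalg.norm(ZE, "fro")
op = np.linalg.norm(ZE, 2)
print(fro / np.sqrt(E))                      # close to sigma
print(op / np.sqrt(E / n))                   # an O(1) constant, up to log factors
```

The ratio fro/op here is roughly √n, which is exactly the factor separating the bound (16) from (12).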
2 Some notations
The matrix M to be reconstructed takes the form (1) where U ∈ R^{m×r}, V ∈ R^{n×r}. We write U = [u₁, u₂, …, u_r] and V = [v₁, v₂, …, v_r] for the columns of the two factors, with ||u_i|| = √m, ||v_i|| = √n, and u_i^T u_j = 0, v_i^T v_j = 0 for i ≠ j (there is no loss of generality in this, since normalizations can be absorbed by redefining Σ).
We shall write Σ = diag(Σ₁, …, Σ_r) with Σ₁ ≥ Σ₂ ≥ ··· ≥ Σ_r > 0. The maximum and minimum singular values will also be denoted by Σmax = Σ₁ and Σmin = Σ_r. Further, the maximum size of an entry of M is Mmax ≡ max_{ij} |M_ij|. Probability is taken with respect to the uniformly random subset E ⊆ [m] × [n] given |E| and (eventually) the noise matrix Z. Define ε ≡ |E|/√(mn). In the case when m = n, ε corresponds to the average number of revealed entries per row or column. Then it is convenient to work with a model in which each entry is revealed independently with probability ε/√(mn). Since, with high probability, |E| ∈ [ε√α n − A√(n log n), ε√α n + A√(n log n)], any guarantee on the algorithm performances that holds within one model holds within the other model as well if we allow for a vanishing shift in ε. We will use C, C′ etc. to denote universal numerical constants.
Given a vector x ∈ R^n, ||x|| will denote its Euclidean norm. For a matrix X ∈ R^{n×n}, ||X||_F is its Frobenius norm, and ||X||₂ its operator norm (i.e. ||X||₂ = sup_{u≠0} ||Xu||/||u||). The standard scalar product between vectors or matrices will sometimes be indicated by ⟨x, y⟩ or ⟨X, Y⟩, respectively. Finally, we use the standard combinatorics notation [N] = {1, 2, …, N} to denote the set of first N integers.
3 Proof of Theorem 1.1
As explained in the introduction, the crucial idea is to consider the singular value decomposition of the trimmed matrix Ñ^E instead of the original matrix N^E. Apart from a trivial rescaling, these singular values are close to the ones of the original matrix M.
Lemma 3.1. There exists a numerical constant C such that, with probability greater than 1 − 1/n³,
$$\left|\frac{\sigma_q}{\epsilon} - \Sigma_q\right| \;\le\; C\,M_{\max}\sqrt{\frac{\alpha}{\epsilon}} \;+\; \frac{1}{\epsilon}\,\|\widetilde Z^E\|_2\,, \qquad (17)$$
where it is understood that Σ_q = 0 for q > r.
Proof. For any matrix A, let σ_q(A) denote the qth singular value of A. Then σ_q(A + B) ≤ σ_q(A) + σ₁(B), whence
$$\left|\frac{\sigma_q}{\epsilon} - \Sigma_q\right| \;\le\; \left|\frac{\sigma_q(\widetilde M^E)}{\epsilon} - \Sigma_q\right| + \frac{\sigma_1(\widetilde Z^E)}{\epsilon} \;\le\; C\,M_{\max}\sqrt{\frac{\alpha}{\epsilon}} + \frac{1}{\epsilon}\,\|\widetilde Z^E\|_2\,,$$
where the second inequality follows from the following lemma as shown in [1].
Lemma 3.2 (Keshavan, Montanari, Oh, 2009 [1]). There exists a numerical constant C such that, with probability larger than 1 − 1/n³,
$$\frac{1}{\sqrt{mn}}\left\|M - \frac{\sqrt{mn}}{|E|}\,\widetilde M^E\right\|_2 \;\le\; C\,M_{\max}\sqrt{\frac{\alpha}{\epsilon}}\,. \qquad (18)$$
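The proof above rests only on the singular-value perturbation inequality σ_q(A + B) ≤ σ_q(A) + σ₁(B) (Weyl's inequality); a quick randomized sanity check:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(30, 20))
B = rng.normal(size=(30, 20))

sA = np.linalg.svd(A, compute_uv=False)      # sorted in decreasing order
sB = np.linalg.svd(B, compute_uv=False)
sAB = np.linalg.svd(A + B, compute_uv=False)

# sigma_q(A + B) <= sigma_q(A) + sigma_1(B), for every q simultaneously.
ok = np.all(sAB <= sA + sB[0] + 1e-9)
print(ok)
```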
We will now prove Theorem 1.1.
Proof. (Theorem 1.1) For any matrix A of rank at most 2r, ||A||_F ≤ √(2r) ||A||₂, whence
$$\frac{1}{\sqrt{mn}}\,\big\|M - \mathsf{T}_r(\widetilde N^E)\big\|_F \;\le\; \sqrt{\frac{2r}{mn}}\,\Big\|M - \frac{\sqrt{mn}}{|E|}\Big(\widetilde N^E - \sum_{i\ge r+1}\sigma_i x_i y_i^T\Big)\Big\|_2$$
$$\le\; \sqrt{\frac{2r}{mn}}\,\left(\Big\|M - \frac{\sqrt{mn}}{|E|}\,\widetilde M^E\Big\|_2 + \frac{\sqrt{mn}}{|E|}\,\|\widetilde Z^E\|_2 + \frac{\sqrt{mn}}{|E|}\,\sigma_{r+1}\right)$$
$$\le\; 2C\,M_{\max}\sqrt{\frac{2\alpha r}{\epsilon}} \;+\; \frac{2\sqrt{2r}}{\epsilon}\,\|\widetilde Z^E\|_2$$
$$\le\; C'\,M_{\max}\left(\frac{nr\alpha^{3/2}}{|E|}\right)^{1/2} \;+\; 2\sqrt{2}\,\frac{n\sqrt{r\alpha}}{|E|}\,\|\widetilde Z^E\|_2\,.$$
This proves our claim.
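The first step of the proof uses the elementary inequality ||A||_F ≤ √(2r) ||A||₂ for matrices of rank at most 2r, which holds because ||A||_F² is a sum of at most 2r squared singular values, each at most ||A||₂². A randomized check:

```python
import numpy as np

rng = np.random.default_rng(5)
m, n, r = 40, 30, 3
A = rng.normal(size=(m, 2 * r)) @ rng.normal(size=(2 * r, n))   # rank <= 2r

lhs = np.linalg.norm(A, "fro")
rhs = np.sqrt(2 * r) * np.linalg.norm(A, 2)
print(lhs <= rhs + 1e-9)
```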
4 Proof of Theorem 1.2
Recall that the cost function is defined over the Riemannian manifold M(m, n) ≡ G(m, r) × G(n, r). The proof of Theorem 1.2 consists in controlling the behavior of F in a neighborhood of u = (U, V) (the point corresponding to the matrix M to be reconstructed). Throughout the proof we let K(μ) be the set of matrix couples (X, Y) ∈ R^{m×r} × R^{n×r} such that ||X^{(i)}||² ≤ μr and ||Y^{(j)}||² ≤ μr for all i, j.
4.1 Preliminary remarks and definitions
Given x₁ = (X₁, Y₁) and x₂ = (X₂, Y₂) ∈ M(m, n), two points on this manifold, their distance is defined as d(x₁, x₂) = √(d(X₁, X₂)² + d(Y₁, Y₂)²), where, letting (cos θ₁, …, cos θ_r) be the singular values of X₁^T X₂/m,
$$d(X_1, X_2) = \|\theta\|_2\,. \qquad (19)$$
Given S achieving the minimum in Eq. (4), it is also convenient to introduce the notations
$$d_-(x, u) \;\equiv\; \sqrt{\Sigma_{\min}^2\, d(x, u)^2 + \|S - \Sigma\|_F^2}\,, \qquad (20)$$
$$d_+(x, u) \;\equiv\; \sqrt{\Sigma_{\max}^2\, d(x, u)^2 + \|S - \Sigma\|_F^2}\,. \qquad (21)$$
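The Grassmann distance (19) is computed directly from principal angles. A small sketch (hypothetical sizes, not the paper's code), which also confirms the distance is invariant under rotations of a basis spanning the same subspace:

```python
import numpy as np

def grassmann_distance(X1, X2, m):
    # Singular values of X1^T X2 / m are cos(theta_i) for the principal angles.
    s = np.linalg.svd(X1.T @ X2 / m, compute_uv=False)
    theta = np.arccos(np.clip(s, -1.0, 1.0))   # clip guards against roundoff
    return np.linalg.norm(theta)

rng = np.random.default_rng(6)
m, r = 100, 2
X, _ = np.linalg.qr(rng.normal(size=(m, r)))
X *= np.sqrt(m)                                # normalization X^T X = m I

# Distance to itself is 0; so is the distance to a rotated basis of the
# same subspace, matching the invariance F(X, Y) = F(XA, YB) noted earlier.
Q, _ = np.linalg.qr(rng.normal(size=(r, r)))
print(grassmann_distance(X, X, m), grassmann_distance(X, X @ Q, m))
```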
4.2 Auxiliary lemmas and proof of Theorem 1.2
The proof is based on the following two lemmas that generalize and sharpen analogous bounds in
[1] (for proofs we refer to the journal version of this paper).
Lemma 4.1. There exist numerical constants C₀, C₁, C₂ such that the following happens. Assume ε ≥ C₀ μ₀ r √α max{log n; μ₀ r √α (Σmin/Σmax)⁴} and δ ≤ Σmin/(C₀ Σmax). Then
$$F(x) - F(u) \;\ge\; C_1\, n\epsilon\sqrt{\alpha}\; d_-(x, u)^2 \;-\; C_1\, n\sqrt{r\alpha}\,\|Z^E\|_2\, d_+(x, u)\,, \qquad (22)$$
$$F(x) - F(u) \;\le\; C_2\, n\epsilon\sqrt{\alpha}\,\Sigma_{\max}^2\, d(x, u)^2 \;+\; C_2\, n\sqrt{r\alpha}\,\|Z^E\|_2\, d_+(x, u)\,, \qquad (23)$$
for all x ∈ M(m, n) ∩ K(4μ₀) such that d(x, u) ≤ δ, with probability at least 1 − 1/n⁴. Here S ∈ R^{r×r} is the matrix realizing the minimum in Eq. (4).
Corollary 4.2. There exists a constant C such that, under the hypotheses of Lemma 4.1,
$$\|S - \Sigma\|_F \;\le\; C\,\Sigma_{\max}\, d(x, u) \;+\; C\,\frac{\sqrt{r}}{\epsilon}\,\|Z^E\|_2\,. \qquad (24)$$
Further, for an appropriate choice of the constants in Lemma 4.1, we have
$$\sigma_{\max}(S) \;\le\; 2\,\Sigma_{\max} \;+\; C\,\frac{\sqrt{r}}{\epsilon}\,\|Z^E\|_2\,, \qquad (25)$$
$$\sigma_{\min}(S) \;\ge\; \frac{1}{2}\,\Sigma_{\min} \;-\; C\,\frac{\sqrt{r}}{\epsilon}\,\|Z^E\|_2\,. \qquad (26)$$
Lemma 4.3. There exist numerical constants C₀, C₁, C₂ such that the following happens. Assume ε ≥ C₀ μ₀ r √α (Σmax/Σmin)² max{log n; μ₀ r √α (Σmax/Σmin)⁴} and δ ≤ Σmin/(C₀ Σmax). Then
$$\|\mathrm{grad}\,\widetilde F(x)\|^2 \;\ge\; C_1\, n\,\epsilon^2\,\Sigma_{\min}^4 \left[\, d(x, u) \;-\; C_2\,\frac{\sqrt{r}\,\Sigma_{\max}\,\|Z^E\|_2}{\epsilon\,\Sigma_{\min}^2} \,\right]_+^2\,, \qquad (27)$$
for all x ∈ M(m, n) ∩ K(4μ₀) such that d(x, u) ≤ δ, with probability at least 1 − 1/n⁴. (Here [a]₊ ≡ max(a, 0).)
We can now turn to the proof of our main theorem.
Proof. (Theorem 1.2) Let δ = Σmin/(C₀ Σmax) with C₀ large enough so that the hypotheses of Lemmas 4.1 and 4.3 are verified.
Call {x_k}_{k≥0} the sequence of pairs (X_k, Y_k) ∈ M(m, n) generated by gradient descent. By assumption, the following is true with a large enough constant C:
$$\|Z^E\|_2 \;\le\; \frac{\epsilon}{C\sqrt{r}}\,\frac{\Sigma_{\min}^2}{\Sigma_{\max}}\,. \qquad (28)$$
Further, by using Corollary 4.2 in Eqs. (22) and (23) we get
$$F(x) - F(u) \;\ge\; C_1\, n\epsilon\sqrt{\alpha}\left[\Sigma_{\min}^2\, d(x, u)^2 - \delta_{0,-}^2\right], \qquad (29)$$
$$F(x) - F(u) \;\le\; C_2\, n\epsilon\sqrt{\alpha}\left[\Sigma_{\max}^2\, d(x, u)^2 + \delta_{0,+}^2\right], \qquad (30)$$
where
$$\delta_{0,-} \;\equiv\; C\,\frac{\sqrt{r}\,\Sigma_{\max}\,\|Z^E\|_2}{\epsilon\,\Sigma_{\min}^2}\,, \qquad \delta_{0,+} \;\equiv\; C\,\frac{\sqrt{r}\,\Sigma_{\max}\,\|Z^E\|_2}{\epsilon\,\Sigma_{\min}\Sigma_{\max}}\,. \qquad (31)$$
By Eq. (28), we can assume δ₀,₊ ≤ δ₀,₋ ≤ δ/10. For ε ≥ C α μ₁² r² (Σmax/Σmin)⁴ as per our assumptions, using Eq. (28) in Theorem 1.1, together with the bound d(u, x₀) ≤ ||M − X₀SY₀^T||_F/(n√α Σmin), we get
$$d(u, x_0) \;\le\; \frac{\delta}{10}\,. \qquad (32)$$
We make the following claims:

1. x_k ∈ K(4μ₀) for all k. Indeed without loss of generality we can assume x₀ ∈ K(3μ₀) (because otherwise we can rescale those lines of X₀, Y₀ that violate the constraint). Therefore F̃(x₀) = F(x₀) ≤ 4C₂ nε√α Σmax² δ²/100. On the other hand, F̃(x) ≥ ρ(e^{1/9} − 1) for x ∉ K(4μ₀). Since F̃(x_k) is a non-increasing sequence, the thesis follows provided we take ρ ≥ C₂ nε√α Σmin².

2. d(x_k, u) ≤ δ/10 for all k. Assuming ε ≥ C α μ₁² r² (Σmax/Σmin)⁶, we have d(x₀, u)² ≤ (Σmin²/C′Σmax²)(δ/10)². Also assuming Eq. (28) with large enough C we can show the following: for all x_k such that d(x_k, u) ∈ [δ/10, δ], we have F̃(x) ≥ F(x) > F(x₀). This contradicts the monotonicity of F̃(x), and thus proves the claim.
Since the cost function is twice differentiable, and because of the above, the sequence {x_k} converges to
$$\Omega \;=\; \big\{\, x \in K(4\mu_0) \cap M(m, n) \,:\; d(x, u) \le \delta,\; \mathrm{grad}\,\widetilde F(x) = 0 \,\big\}\,. \qquad (33)$$
By Lemma 4.3, for any x ∈ Ω,
$$d(x, u) \;\le\; C\,\frac{\sqrt{r}\,\Sigma_{\max}\,\|Z^E\|_2}{\epsilon\,\Sigma_{\min}^2}\,, \qquad (34)$$
which implies the thesis using Corollary 4.2.
Acknowledgements
This work was partially supported by a Terman fellowship, the NSF CAREER award CCF-0743978
and the NSF grant DMS-0806211.
References
[1] R. H. Keshavan, A. Montanari, and S. Oh. Matrix completion from a few entries. arXiv:0901.3150, January 2009.
[2] A. Frieze, R. Kannan, and S. Vempala. Fast Monte-Carlo algorithms for finding low-rank approximations. J. ACM, 51(6):1025-1041, 2004.
[3] E. J. Candès and B. Recht. Exact matrix completion via convex optimization. arXiv:0805.4471, 2008.
[4] M. Fazel. Matrix Rank Minimization with Applications. PhD thesis, Stanford University, 2002.
[5] E. J. Candès and T. Tao. The power of convex relaxation: Near-optimal matrix completion. arXiv:0903.1476, 2009.
[6] J.-F. Cai, E. J. Candès, and Z. Shen. A singular value thresholding algorithm for matrix completion. arXiv:0810.3286, 2008.
[7] S. Ma, D. Goldfarb, and L. Chen. Fixed point and Bregman iterative methods for matrix rank minimization. arXiv:0905.1643, 2009.
[8] K. Toh and S. Yun. An accelerated proximal gradient algorithm for nuclear norm regularized least squares problems. http://www.math.nus.edu.sg/~matys, 2009.
[9] J. Wright, A. Ganesh, S. Rao, and Y. Ma. Robust principal component analysis: Exact recovery of corrupted low-rank matrices. arXiv:0905.0233, 2009.
[10] K. Lee and Y. Bresler. ADMiRA: Atomic decomposition for minimum rank approximation. arXiv:0905.0044, 2009.
[11] E. J. Candès and Y. Plan. Matrix completion with noise. arXiv:0903.3131, 2009.
[12] A. Edelman, T. A. Arias, and S. T. Smith. The geometry of algorithms with orthogonality constraints. SIAM J. Matrix Anal. Appl., 20:303-353, 1999.
[13] P.-A. Absil, R. Mahony, and R. Sepulchre. Optimization Algorithms on Matrix Manifolds. Princeton University Press, 2008.
[14] D. Achlioptas and F. McSherry. Fast computation of low-rank matrix approximations. J. ACM, 54(2):9, 2007.
Probabilistic Relational PCA
Wu-Jun Li and Dit-Yan Yeung
Dept. of Comp. Sci. and Eng.
Hong Kong University of Science and Technology
Hong Kong, China
{liwujun,dyyeung}@cse.ust.hk

Zhihua Zhang
School of Comp. Sci. and Tech.
Zhejiang University
Zhejiang 310027, China
[email protected]
Abstract
One crucial assumption made by both principal component analysis (PCA) and
probabilistic PCA (PPCA) is that the instances are independent and identically
distributed (i.i.d.). However, this common i.i.d. assumption is unreasonable for
relational data. In this paper, by explicitly modeling covariance between instances
as derived from the relational information, we propose a novel probabilistic dimensionality reduction method, called probabilistic relational PCA (PRPCA), for
relational data analysis. Although the i.i.d. assumption is no longer adopted in
PRPCA, the learning algorithms for PRPCA can still be devised easily like those
for PPCA which makes explicit use of the i.i.d. assumption. Experiments on real-world data sets show that PRPCA can effectively utilize the relational information
to dramatically outperform PCA and achieve state-of-the-art performance.
1 Introduction
Using a low-dimensional embedding to summarize a high-dimensional data set has been widely
used for exploring the structure in the data. The methods for discovering such low-dimensional
embedding are often referred to as dimensionality reduction (DR) methods. Principal component
analysis (PCA) [13] is one of the most popular DR methods with great success in many applications.
As a more recent development, probabilistic PCA (PPCA) [21] provides a probabilistic formulation of PCA [13] based on a Gaussian latent variable model [1]. Compared with the original nonprobabilistic derivation of PCA in [12], PPCA possesses a number of practical advantages. For example, PPCA can naturally deal with missing values in the data; the expectation-maximization (EM)
algorithm [9] used to learn the parameters in PPCA may be more efficient for high-dimensional data;
it is easy to generalize the single model in PPCA to the mixture model case; furthermore, PPCA as
a probabilistic model can naturally exploit Bayesian methods [2].
Like many existing DR methods, both PCA and PPCA are based on some assumptions about the
data. One assumption is that the data should be represented as feature vectors all of the same
dimensionality. Data represented in this form are sometimes referred to as flat data [10]. Another
one is the so-called i.i.d. assumption, which means that the instances are assumed to be independent
and identically distributed (i.i.d.).
However, the data in many real-world applications, such as web pages and research papers, contain
relations or links between (some) instances in the data in addition to the textual content information which is represented in the form of feature vectors. Data of this sort, referred to as relational
data1 [10, 20], can be found in such diverse application areas as web mining [3, 17, 23, 24], bioinformatics [22], social network analysis [4], and so on. On one hand, the link structure among instances
1
In this paper, we use document classification as a running example for relational data analysis. Hence, for
convenience of illustration, the specific term ?textual content information? is used in the paper to refer to the
feature vectors describing the instances. However, the algorithms derived in this paper can be applied to any
relational data in which the instance feature vectors can represent any attribute information.
1
cannot be exploited easily when traditional DR methods such as PCA are applied to relational data.
Very often, the useful relational information is simply discarded. For example, a citation/reference
relation between two papers provides very strong evidence for them to belong to the same topic even
though they may bear low similarity in their content due to the sparse nature of the bag-of-words
representation, but the relational information is not exploited at all when applying PCA or PPCA.
One possible use of the relational information in PCA or PPCA is to first convert the link structure
into the format of flat data by extracting some additional features from the links. However, as argued in [10], this approach fails to capture some important structural information in the data. On the
other hand, the i.i.d. assumption underlying PCA and PPCA is unreasonable for relational data. In
relational data, the attributes of the connected (linked) instances are often correlated and the class
label of one instance may have an influence on that of a linked instance. For example, in biology,
interacting proteins are more likely to have the same biological functions than those without interaction. Therefore, PCA and PPCA, or more generally most existing DR methods based on the i.i.d.
assumption, are not suitable for relational data analysis.
In this paper, a novel probabilistic DR method called probabilistic relational PCA (PRPCA) is proposed for relational data analysis. By explicitly modeling the covariance between instances as derived from the relational information, PRPCA seamlessly integrates relational information and textual content information into a unified probabilistic framework. Two learning algorithms, one based
on a closed-form solution and the other based on an EM algorithm [9], are proposed to learn the
parameters of PRPCA. Although the i.i.d. assumption is no longer adopted in PRPCA, the learning
algorithms for PRPCA can still be devised easily like those for PPCA which makes explicit use of
the i.i.d. assumption. Extensive experiments on real-world data sets show that PRPCA can effectively utilize the relational information to dramatically outperform PCA and achieve state-of-the-art
performance.
2 Notation
We use boldface uppercase letters, such as K, to denote matrices, and boldface lowercase letters, such as z, to denote vectors. The ith row and the jth column of a matrix K are denoted by K_{i·} and K_{·j}, respectively. K_{ij} denotes the element at the ith row and jth column of K. z_i denotes the ith element of z. K^T is the transpose of K, and K^{−1} is the inverse of K. K ⪰ 0 means that K is positive semi-definite (psd) and K ≻ 0 means that K is positive definite (pd). tr(·) denotes the trace of a matrix and etr(·) ≜ exp(tr(·)). P ⊗ Q denotes the Kronecker product [11] of P and Q. |·| denotes the determinant of a matrix. I_n is the identity matrix of size n × n. e is a vector of 1s, the dimensionality of which depends on the context. We overload N(·) for both multivariate normal distributions and matrix variate normal distributions [11]. ⟨·⟩ denotes the expectation operation and cov(·) denotes the covariance operation.
Note that in relational data, there exist both content and link observations. As in [21], $\{t_n\}_{n=1}^N$ denotes a set of observed d-dimensional data (content) vectors, the d × q matrix W denotes the q principal axes (or called factor loadings), μ denotes the data sample mean, and x_n = W^T(t_n − μ) denotes the corresponding q principal components (or called latent variables) of t_n. We further use the d × N matrix T to denote the content matrix with T_{·n} = t_n, and the q × N matrix X to denote the latent variables of T with X_{·n} = W^T(t_n − μ). For relational data, the N × N matrix A
denotes the adjacency (link) matrix of the N instances. In this paper, we assume that the links are
undirected. For those data with directed links, we will convert the directed links into undirected
links which can keep the original physical meaning of the links. This will be described in detail in
Section 4.1.1, and an example will be given in Section 5. Hence, Aij = 1 if there exists a relation
between instances i and j, and otherwise Aij = 0. Moreover, we always assume that there exist no
self-links, i.e., Aii = 0.
3
Probabilistic PCA
To set the stage for the next section which introduces our PRPCA model, we first briefly present
the derivation for PPCA [21], which was originally based on (vector-based) multivariate normal
distributions, from the perspective of matrix variate normal distributions [11].
2
If we use Υ to denote the Gaussian noise process and assume that Υ and the latent variable matrix X follow these distributions:
$$\Upsilon \sim \mathcal{N}_{d,N}(\mathbf{0},\; \sigma^2 I_d \otimes I_N)\,, \qquad X \sim \mathcal{N}_{q,N}(\mathbf{0},\; I_q \otimes I_N)\,, \qquad (1)$$
we can express a generative model as follows: T = WX + μe^T + Υ.
Based on some properties of matrix variate normal distributions in [11], we get the following results:
$$T \,|\, X \sim \mathcal{N}_{d,N}(WX + \mu e^T,\; \sigma^2 I_d \otimes I_N)\,, \qquad T \sim \mathcal{N}_{d,N}\big(\mu e^T,\; (WW^T + \sigma^2 I_d) \otimes I_N\big)\,. \qquad (2)$$
Let C = WW^T + σ²I_d. The corresponding log-likelihood of the observation matrix T is then
$$\mathcal{L} = \ln p(T) = -\frac{N}{2}\Big[\, d\ln(2\pi) + \ln|C| + \mathrm{tr}(C^{-1}S) \,\Big]\,, \qquad (3)$$
where $S = \frac{(T-\mu e^T)(T-\mu e^T)^T}{N} = \frac{\sum_{n=1}^{N}(T_{\cdot n}-\mu)(T_{\cdot n}-\mu)^T}{N}$. We can see that S is just the sample covariance matrix of the content observations. It is easy to see that this log-likelihood form is the same as that in [21]. Using matrix notations, the graphical model of PPCA based on matrix variate normal distributions is shown in Figure 1(a).
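The log-likelihood (3) is straightforward to evaluate. The sketch below (made-up dimensions, not the authors' code) generates data from the PPCA model and computes L, estimating μ by the sample mean as is standard:

```python
import numpy as np

rng = np.random.default_rng(7)
d, q, N = 5, 2, 100
W = rng.normal(size=(d, q))
sigma2 = 0.1
mu = rng.normal(size=d)

# Generate T from the model: T = W X + mu e^T + noise.
X = rng.normal(size=(q, N))
T = W @ X + mu[:, None] + np.sqrt(sigma2) * rng.normal(size=(d, N))

C = W @ W.T + sigma2 * np.eye(d)
Tc = T - T.mean(axis=1, keepdims=True)        # center with the sample mean
S = Tc @ Tc.T / N                             # sample covariance of the content
sign, logdet = np.linalg.slogdet(C)
L = -N / 2 * (d * np.log(2 * np.pi) + logdet + np.trace(np.linalg.solve(C, S)))
print(L)
```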
[Figure 1 appears here: (a) the graphical model of PPCA; (b) the graphical model of PRPCA.]
Figure 1: Graphical models of PPCA and PRPCA, in which T is the observation matrix, X is the latent variable matrix, μ, W and σ² are the parameters to learn, and the other quantities are kept constant.
4 Probabilistic Relational PCA
PPCA assumes that all the observations are independent and identically distributed. Although this
i.i.d. assumption can make the modeling process much simpler and has achieved great success in
many traditional applications, this assumption is however very unreasonable for relational data [10].
In relational data, the attributes of connected (linked) instances are often correlated.
In this section, a probabilistic relational PCA model, called PRPCA, is proposed to integrate both
the relational information and the content information seamlessly into a unified framework by eliminating the i.i.d. assumption. Based on our reformulation of PPCA using matrix variate notations as
presented in the previous section, we can obtain PRPCA just by introducing some relatively simple
(but very effective) modifications. A promising property is that the computation needed for PRPCA
is as simple as that for PPCA even though we have eliminated the restrictive i.i.d. assumption.
4.1 Model Formulation
Assume that the latent variable matrix X has the following distribution:
$$X \sim \mathcal{N}_{q,N}(\mathbf{0},\; I_q \otimes \Phi)\,. \qquad (4)$$
According to Corollary 2.3.3.1 in [11], we can get cov(X_{i·}) = Φ (i ∈ {1, …, q}), which means that Φ actually reflects the covariance between the instances. From (1), we can see that cov(X_{i·}) = I_N for PPCA, which also coincides with the i.i.d. assumption of PPCA.
Hence, to eliminate the i.i.d. assumption for relational data, one direct way is to use a non-identity covariance matrix Φ for the distribution of X in (4). This Φ should reflect the physical meaning (semantics) of the relations between instances, which will be discussed in detail later. Similarly, we can also change the I_N in (1) to Φ for Υ to eliminate the i.i.d. assumption for the noise process.
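Concretely, X ∼ N_{q,N}(0, I_q ⊗ Φ) means the rows of X are i.i.d. with covariance Φ. A sampling sketch (with a small hypothetical Φ, not from the paper) draws X = ZL^T with Φ = LL^T and checks the empirical row covariance:

```python
import numpy as np

rng = np.random.default_rng(8)
q, N = 3, 4
# A small positive-definite "relational" covariance Phi over N instances.
Phi = np.array([[2.0, 0.5, 0.0, 0.0],
                [0.5, 2.0, 0.5, 0.0],
                [0.0, 0.5, 2.0, 0.5],
                [0.0, 0.0, 0.5, 2.0]])
L = np.linalg.cholesky(Phi)

# Each row of each sampled X gets covariance L L^T = Phi.
samples = rng.normal(size=(20000, q, N)) @ L.T
rows = samples.reshape(-1, N)                 # pool all rows across samples
emp = rows.T @ rows / rows.shape[0]           # empirical row covariance
print(np.max(np.abs(emp - Phi)))              # small sampling error
```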
4.1.1 Relational Covariance Construction
Because the covariance matrix ? in PRPCA is constructed from the relational information in the
data, we refer to it as relational covariance here.
The goal of PCA and PPCA is to find those principal axes onto which the retained variance under projection is maximal [13, 21]. For one specific X, the retained variance is tr[XX^T]. If we rewrite p(X) in (1) as
$$p(X) = \frac{\exp\{-\frac{1}{2}\mathrm{tr}[XX^T]\}}{(2\pi)^{qN/2}} = \frac{\exp\{\mathrm{tr}[-\frac{1}{2}XX^T]\}}{(2\pi)^{qN/2}}\,,$$
we have the following observation:
Observation 1 For PPCA, the larger the retained variance of X, i.e., the more X approaches the
destination point, the lower is the probability density at X given by the prior.
Here, the destination point refers to the point where the goal of PPCA is achieved, i.e., the retained
variance is maximal. Moreover, we use the retained variance as a measure to define the gap between
two different points. The smaller is the gap between the retained variance of two points, the more
they approach each other.
Because the design principle of PRPCA is similar to that of PPCA, our working hypothesis here is
that Observation 1 can also guide us to design the relational covariance of PRPCA. Its effectiveness
will be empirically verified in Section 5.
In PRPCA, we assume that the attributes of two linked instances are positively correlated.2 Under
this assumption, the ideal goal of PRPCA should be to make the latent representations of two instances as close as possible if there exists a relation (link) between them. Hence, the measure to
define the gap between two points refers to the closeness of the linked instances, i.e., the summation
of the Euclidean distances between the linked instances. Based on Observation 1, the more X approaches the destination point, the lower should be the probability density at X given by the prior.
Hence, under the latent space representation X, the closer the linked instances are, the lower should
be the probability density at X given by the prior. We will prove that if we set Φ = Δ^{−1} where Δ ≜ γI_N + (I_N + A)^T(I_N + A) with γ being typically a very small positive number to make Δ ≻ 0, we can get an appropriate prior for PRPCA. Note that A_ij = 1 if there exists a relation between instances i and j, and otherwise A_ij = 0. Because A^T = A, we can also express Δ as Δ = γI_N + (I_N + A)(I_N + A).
Let D̃ denote a diagonal matrix whose diagonal elements are D̃_ii = ∑_j A_ij. It is easy to prove that (AA)_ii = D̃_ii. Let B = AA − D̃, which means that B_ij = (AA)_ij if i ≠ j and B_ii = 0. We can get Δ = (1+γ)I_N + 2A + AA = (1+γ)I_N + D̃ + (2A + B). Because B_ij = ∑_{k=1}^N A_ik A_kj for i ≠ j, we can see that B_ij is the number of paths, each with path length 2, from instance i to instance j
in the original adjacency graph A. Because the attributes of two linked instances are positively
correlated, B_ij actually reflects the degree of correlation between instance i and instance j. Let us
take the paper citation graph as an example to illustrate this. The existence of a citation relation
between two papers often implies that they are about the same topic. If paper i cites paper k and
paper k cites paper j, it is highly likely that paper i and paper j are about the same topic. If there
exists another paper a ≠ k linking both paper i and paper j as well, the confidence that paper i and
paper j are about the same topic will increase. Hence, the larger B_ij is, the stronger is the correlation between instance i and instance j. Because B_ij = ∑_{k=1}^N A_ik A_kj = Aᵀ_{*i} A_{*j}, B_ij can also be seen
as the similarity between the link vectors of instance i and instance j. Therefore, B can be seen as a
weight matrix (corresponding to a weight graph) derived from the original adjacency matrix A, and
B is also consistent with the physical meaning underlying A.
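As a quick sanity check of this construction (a minimal sketch; the 3-node path graph below is invented for illustration and is not from the paper), the following builds Δ = γI_N + (I_N + A)ᵀ(I_N + A) and B = AA − D̃, and confirms that B_ij counts the length-2 paths between instances i and j:

```python
import numpy as np

gamma = 1e-6
# Hypothetical adjacency matrix: a 3-node path graph 0 - 1 - 2.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
N = A.shape[0]
I = np.eye(N)

Delta = gamma * I + (I + A).T @ (I + A)   # relational "precision" matrix
D_tilde = np.diag(A.sum(axis=1))          # D~_ii = sum_j A_ij
B = A @ A - D_tilde                       # off-diagonal part of AA

# B_ij is the number of length-2 paths from i to j (i != j): nodes 0 and 2
# are connected only through node 1, so B[0, 2] == 1, and diag(B) == 0.
print(B[0, 2], np.allclose(Delta, (1 + gamma) * I + 2 * A + A @ A))
# prints: 1.0 True
```

Since A is symmetric, (I + A)ᵀ(I + A) expands to I + 2A + AA, which the last check confirms numerically.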
Letting G = 2A + B,³ we can find that G actually combines the original graph reflected by A and the derived graph reflected by B to get a new graph, and puts a weight 2A_ij + B_ij on the edge
between instance i and instance j in the new graph. The new weight graph reflected by G is also
consistent with the physical meaning underlying A. Letting L ≜ D − G, where D is a diagonal matrix whose diagonal elements are D_ii = ∑_j G_ij and L is called the Laplacian matrix [6] of G, we can get Δ = (1+γ)I_N + D̃ + D − L. If we define another diagonal matrix D̂ ≜ (1+γ)I_N + D̃ + D, we can get Δ = D̂ − L. Then we have

tr[XΔXᵀ] = ∑_{i=1}^N D̂_ii ‖X_{*i}‖² − (1/2) ∑_{i=1}^N ∑_{j=1}^N G_ij ‖X_{*i} − X_{*j}‖².  (5)
² Links with other physical meanings, such as the directed links in web graphs [25], can be transformed into links satisfying the assumption in PRPCA via some preprocessing strategies. One such strategy to preprocess the WebKB data set [8] will be given as an example in Section 5.
³ This means that we put a 2:1 ratio between A and B. Other ratios can be obtained by setting Δ = γI_N + (βI_N + A)(βI_N + A) = γI_N + β²I_N + 2βA + B. Preliminary results show that PRPCA is not sensitive to β as long as β is not too large, but we omit the detailed results here because they are out of the scope of this paper.
Letting Φ = Δ⁻¹, we can get p(X) = exp{tr[−(1/2)XΔXᵀ]} / ((2π)^{qN/2} |Φ|^{q/2}) = exp{−(1/2)tr[XΔXᵀ]} / ((2π)^{qN/2} |Φ|^{q/2}).
The first term ∑_{i=1}^N D̂_ii ‖X_{*i}‖² in (5) can be treated as a measure of the weighted variance of all the instances in the latent space. We can see that the larger D̂_ii is, the more weight will be put on instance i, which is reasonable because D̂_ii mainly reflects the degree of instance i in the graph. It is easy to see that, for those latent representations having a fixed value of the weighted variance ∑_{i=1}^N D̂_ii ‖X_{*i}‖², the closer the latent representations of two linked entities are, the larger is their contribution to tr[XΔXᵀ], and subsequently the less is their contribution to p(X). This means that under the latent space representation X, the closer the linked instances are, the lower is the probability density at X given by the prior. Hence, we can get an appropriate prior for X by setting Φ = Δ⁻¹ in (4).
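The decomposition in (5) can be verified numerically. The sketch below uses an invented 3-node adjacency matrix and random latent representations X, and checks that tr[XΔXᵀ] equals the weighted-variance term minus the Laplacian smoothness term:

```python
import numpy as np

rng = np.random.default_rng(0)
gamma = 1e-6
# Hypothetical 3-node path graph and random latent representations.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]], dtype=float)
N = A.shape[0]
I = np.eye(N)
X = rng.standard_normal((2, N))        # q = 2; one column per instance

D_tilde = np.diag(A.sum(axis=1))
B = A @ A - D_tilde
G = 2 * A + B                          # combined weight graph
D = np.diag(G.sum(axis=1))             # degree matrix of G
D_hat = (1 + gamma) * I + D_tilde + D
Delta = D_hat - (D - G)                # Delta = D_hat - L, with L = D - G

lhs = np.trace(X @ Delta @ X.T)
rhs = sum(D_hat[i, i] * X[:, i] @ X[:, i] for i in range(N)) \
    - 0.5 * sum(G[i, j] * np.sum((X[:, i] - X[:, j]) ** 2)
                for i in range(N) for j in range(N))
print(np.isclose(lhs, rhs))  # prints: True
```

The identity holds for any symmetric weight matrix G; it is the standard quadratic-form expansion of a graph Laplacian.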
4.1.2 Model
With the constructed relational covariance Φ, the generative model of PRPCA is defined as follows:

ε ~ N_{d,N}(0, σ²I_d ⊗ Φ),  X ~ N_{q,N}(0, I_q ⊗ Φ),  T = WX + μeᵀ + ε,

where Φ = Δ⁻¹. We can further obtain the following results:

T | X ~ N_{d,N}(WX + μeᵀ, σ²I_d ⊗ Φ),  T ~ N_{d,N}(μeᵀ, (WWᵀ + σ²I_d) ⊗ Φ).  (6)
The graphical model of PRPCA is illustrated in Figure 1(b), from which we can see that the difference between PRPCA and PPCA lies solely in the difference between Φ and I_N. Comparing (6) to (2), we can find that the observations of PPCA are sampled independently while those of PRPCA are sampled with correlation. In fact, PPCA may be seen as a degenerate case of PRPCA, as detailed below in Remark 1:
Remark 1 When the i.i.d. assumption holds, i.e., all A_ij = 0, PRPCA degenerates to PPCA by setting γ = 0. Note that the only role that γ plays is to make Δ ≻ 0. Hence, in our implementation, we always set γ to a very small positive value, such as 10⁻⁶. Actually, we may even set γ to 0, because Δ does not have to be pd. When Δ is not pd, we say that T follows a singular matrix variate normal distribution [11], and all the derivations for PRPCA are still correct. In our experiment, we find that the performance under γ = 0 is almost the same as that under γ = 10⁻⁶. Further deliberation is out of the scope of this paper.
As in PPCA, we set C = WWᵀ + σ²I_d. Then the log-likelihood of the observation matrix T in PRPCA is

L₁ = ln p(T) = −(N/2)[d ln(2π) + ln |C| + tr(C⁻¹H)] + c,  (7)

where c = −(d/2) ln |Φ| can be seen as a constant independent of the parameters μ, W and σ², and H = (T − μeᵀ)Δ(T − μeᵀ)ᵀ / N.
It is interesting to compare (7) with (3). We can find that to learn the parameters W and σ², the only difference between PRPCA and PPCA lies in the difference between H and S. Hence, all the learning techniques derived previously for PPCA are also potentially applicable to PRPCA simply by substituting S with H.
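This correspondence can be made concrete with a small sketch on synthetic data: when Δ = I_N (the i.i.d. case), the estimator μ = TΔe/(eᵀΔe) reduces to the sample mean and H = (T − μeᵀ)Δ(T − μeᵀ)ᵀ/N reduces to the sample covariance S:

```python
import numpy as np

rng = np.random.default_rng(1)
d, N = 4, 10
T = rng.standard_normal((d, N))   # observation matrix, one column per instance
e = np.ones((N, 1))
Delta = np.eye(N)                 # i.i.d. case: Delta = I_N

mu = (T @ Delta @ e) / (e.T @ Delta @ e)            # MLE of the mean
H = (T - mu @ e.T) @ Delta @ (T - mu @ e.T).T / N   # plays the role of S

S = np.cov(T, bias=True)          # sample covariance with 1/N normalization
print(np.allclose(mu.ravel(), T.mean(axis=1)), np.allclose(H, S))
# prints: True True
```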
4.2 Learning
By setting the gradient of L₁ with respect to μ to 0, we can get the maximum-likelihood estimator (MLE) for μ as follows: μ = TΔe / (eᵀΔe).

As in PPCA [21], we devise two methods to learn W and σ² in PRPCA, one based on a closed-form solution and the other based on EM.
4.2.1 Closed-Form Solution
Theorem 1 The log-likelihood in (7) is maximized when

W_ML = U_q(Λ_q − σ²_ML I_q)^{1/2} R,  σ²_ML = (∑_{i=q+1}^d λ_i) / (d − q),

where λ₁ ≥ λ₂ ≥ ··· ≥ λ_d are the eigenvalues of H, Λ_q is a q × q diagonal matrix containing the first q largest eigenvalues, U_q is a d × q matrix in which the q column vectors are the principal eigenvectors of H corresponding to Λ_q, and R is an arbitrary q × q orthogonal rotation matrix.
The proof of Theorem 1 makes use of techniques similar to those in Appendix A of [21] and is
omitted here.
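Theorem 1 translates directly into code. The sketch below fixes R to the identity and uses a random PSD matrix as a stand-in for H; a useful check is that C = W_ML W_MLᵀ + σ²_ML I_d shares its top-q eigenvalues with H:

```python
import numpy as np

def prpca_closed_form(H, q):
    """Closed-form ML estimates of Theorem 1, with R fixed to the identity."""
    lam, U = np.linalg.eigh(H)                # ascending eigenvalues
    lam, U = lam[::-1], U[:, ::-1]            # reorder to descending
    d = H.shape[0]
    sigma2 = lam[q:].sum() / (d - q)          # average of discarded eigenvalues
    W = U[:, :q] * np.sqrt(lam[:q] - sigma2)  # U_q (Lambda_q - sigma2 I_q)^{1/2}
    return W, sigma2

rng = np.random.default_rng(2)
d, q = 5, 2
M = rng.standard_normal((d, d))
H = M @ M.T                                   # random PSD stand-in for H
W, sigma2 = prpca_closed_form(H, q)

C = W @ W.T + sigma2 * np.eye(d)              # C = W W^T + sigma2 I_d
top = lambda S_, k: np.sort(np.linalg.eigvalsh(S_))[::-1][:k]
print(np.allclose(top(C, q), top(H, q)))      # prints: True
```

The remaining d − q eigenvalues of C all equal σ²_ML, which is exactly the average of the discarded eigenvalues of H.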
4.2.2 EM Algorithm
During the EM learning process, we treat {W, σ²} as parameters, X as missing data and {T, X} as complete data. The EM algorithm operates by alternating between the E-step and the M-step. Here we
only briefly describe the updating rules and their derivation can be found in a longer version which
can be downloaded from http://www.cse.ust.hk/~liwujun.
In the E-step, the expectation of the complete-data log-likelihood with respect to the distribution of
the missing data X is computed. To compute the expectation of the complete-data log-likelihood,
we only need to compute the following sufficient statistics:
⟨X⟩ = M⁻¹Wᵀ(T − μeᵀ),  ⟨XΔXᵀ⟩ = Nσ²M⁻¹ + ⟨X⟩Δ⟨X⟩ᵀ,  (8)

where M = WᵀW + σ²I_q. Note that all these statistics are computed based on the parameter
values obtained from the previous iteration.
In the M-step, to maximize the expectation of the complete-data log-likelihood, the parameters
{W, σ²} are updated as follows:

W̃ = HW(σ²I_q + M⁻¹WᵀHW)⁻¹,  σ̃² = tr(H − HWM⁻¹W̃ᵀ) / d.  (9)

Note that we use W here to denote the old value and W̃ for the updated new value.
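Because the updates depend on the data only through H, one EM iteration can be sketched as a single function of (W, σ²) given H (a sketch with a synthetic PSD matrix standing in for H, not one computed from real data). By the standard EM argument, the log-likelihood (7), up to the constant c, should be non-decreasing over iterations, which the sketch checks:

```python
import numpy as np

def em_step(H, W, sigma2):
    """One EM update of (W, sigma2): the M-step (9), with M from the E-step (8)."""
    d, q = W.shape
    M = W.T @ W + sigma2 * np.eye(q)
    Minv = np.linalg.inv(M)
    W_new = H @ W @ np.linalg.inv(sigma2 * np.eye(q) + Minv @ W.T @ H @ W)
    sigma2_new = np.trace(H - H @ W @ Minv @ W_new.T) / d
    return W_new, sigma2_new

def loglik(H, W, sigma2):
    """L1 of (7) per observation (N = 1), up to the constant c."""
    d = H.shape[0]
    C = W @ W.T + sigma2 * np.eye(d)
    return -0.5 * (d * np.log(2 * np.pi) + np.linalg.slogdet(C)[1]
                   + np.trace(np.linalg.solve(C, H)))

rng = np.random.default_rng(3)
d, q = 5, 2
R = rng.standard_normal((d, d))
H = R @ R.T / d                         # synthetic PSD stand-in for H
W, sigma2 = rng.standard_normal((d, q)), 1.0
lls = [loglik(H, W, sigma2)]
for _ in range(20):
    W, sigma2 = em_step(H, W, sigma2)
    lls.append(loglik(H, W, sigma2))
print(all(b >= a - 1e-9 for a, b in zip(lls, lls[1:])))  # prints: True
```

These are exactly the PPCA EM updates with the sample covariance S replaced by H, as the text notes.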
4.3 Complexity Analysis
Suppose there are κ nonzero elements in Δ. We can see that the computation cost for H is O(dN + dκ). In many applications κ is typically a constant multiple of N. Hence, we can say that the time complexity for computing H is O(dN). For the closed-form solution, we have to invert a d × d matrix. Hence, the computation cost is O(dN + d³). For EM, because d is typically larger than q, we can see that the computation cost is O(dN + d²qT), where T is the number of EM iterations. If the data are of very high dimensionality, EM will be more efficient than the closed-form solution.
5 Experiments
Although PPCA possesses additional advantages when compared with the original non-probabilistic
formulation of PCA, they will get similar DR results when there exist no missing values in the data.
If the task is to classify instances in the low-dimensional embedding, the classifiers based on the
embedding results of PCA and PPCA are expected to achieve comparable results. Hence, in this
paper, we only adopt PCA as the baseline to study the performance of PRPCA. For the EM algorithm
of PRPCA, we use PCA to initialize W, σ² is initialized to 10⁻⁶, and γ = 10⁻⁶. Because the EM
algorithm and the closed-form solution achieve similar results, we only report the results of the EM
algorithm of PRPCA in the following experiments.
5.1 Data Sets and Evaluation Scheme
Here, we only briefly describe the data sets and evaluation scheme for space saving. More detailed
information about them can be found in the longer version.
We use three data sets to evaluate PRPCA. The first two data sets are Cora [16] and WebKB [8]. We
adopt the same strategy as that in [26] to preprocess these two data sets. The third data set is the
PoliticalBook data set used in [19]. For WebKB, according to the semantics of authoritative pages
and hub pages [25], we first preprocess the link structure of this data set as follows: if two web
pages are co-linked by or link to another common web page, we add a link between these two pages.
Then all the original links are removed. After preprocessing, all the directed links are converted into
undirected links.
The Cora data set contains four subsets: DS, HA, ML and PL. The WebKB data set also contains
four subsets: Cornell, Texas, Washington and Wisconsin. We adopt the same strategy as that in [26]
to evaluate PRPCA on the Cora and WebKB data sets. For the PoliticalBook data set, we use the
testing procedure of the latent Wishart process (LWP) model [15] for evaluation.
5.2 Convergence Speed of EM
We use the DS and Cornell data sets to illustrate the convergence speed of the EM learning procedure
of PRPCA. The performance on other data sets has similar characteristics, which is omitted here.
With q = 50, the average classification accuracy based on 5-fold cross validation against the number
of EM iterations T is shown in Figure 2. We can see that PRPCA achieves very promising and stable
performance after a very small number of iterations. We set T = 5 in all our following experiments.
5.3 Visualization
We use the PoliticalBook data set to visualize the DR results of PCA and PRPCA. For the sake of
visualization, q is set to 2. The results are depicted in Figure 3. We can see that it is not easy to
separate the two classes in the latent space of PCA. However, the two classes are better separated
from each other in the latent space of PRPCA. Hence, better clustering or classification performance
can be expected when the examples are clustered or classified in the latent space of PRPCA.
[Figure 2: Convergence speed of the EM learning procedure of PRPCA: classification accuracy on the DS and Cornell data sets plotted against the number of EM iterations T.]

[Figure 3: Visualization of data points in the latent spaces of PCA and PRPCA for the PoliticalBook data set. The positive and negative examples are shown as red crosses and blue circles, respectively.]

5.4 Performance
The dimensionality of Cora and WebKB is moderately high, but the dimensionality of PoliticalBook
is very high. We evaluate PRPCA on these two different kinds of data to verify its effectiveness in
general settings.
Performance on Cora and WebKB The average classification accuracy with its standard deviation
based on 5-fold cross validation against the dimensionality of the latent space q is shown in Figure 4.
We can find that PRPCA can dramatically outperform PCA on all the data sets under any dimensionality, which confirms that the relational information is very informative and PRPCA can utilize
it very effectively.
We also perform comparison between PRPCA and those methods evaluated in [26]. The methods
include: SVM on content, which ignores the link structure in the data and applies SVM only on
the content information in the original bag-of-words representation; SVM on links, which ignores
the content information and treats the links as features, i.e., the i-th feature is link-to-page_i; SVM on
link-content, in which the content features and link features of the two methods above are combined
to give the feature representation; directed graph regularization (DGR), which is introduced in [25];
PLSI+PHITS, which is described in [7]; link-content MF, which is the joint link-content matrix
factorization (MF) method in [26]. Note that Link-content sup. MF in [26] is not adopted here
for comparison. Because during the DR procedure link-content sup. MF employs additional label
[Figure 4: Comparison between PRPCA and PCA on Cora and WebKB: classification accuracy against the latent dimensionality q on the DS, HA, ML, PL, Cornell, Texas, Washington and Wisconsin subsets.]
information which is not employed by other DR methods, it is unfair to directly compare it with
other methods. As in the link-content MF method, we set q = 50 for PRPCA. The results are shown
in Figure 5. We can see that PRPCA and link-content MF achieve the best performance among all
the evaluated methods. Compared with link-content MF, PRPCA performs slightly better on DS and
HA while performing slightly worse on ML and Texas, and achieves comparable performance on the
other data sets. We can conclude that the overall performance of PRPCA is comparable with that of
link-content MF. Unlike link-content MF which is transductive in nature, PRPCA naturally supports
inductive inference. More specifically, we can apply the learned transformation matrix of PRPCA
to perform DR for the unseen test data, while link-content MF can only perform DR for those data
available during the training phase. Very recently, another method proposed by us, called relation
regularized matrix factorization (RRMF) [14], has achieved better performance than PRPCA on the
Cora data set. However, similar to link-content MF, RRMF cannot be used for inductive inference either.

[Figure 5: Comparison between PRPCA and other methods (SVM on content, SVM on links, SVM on link-content, DGR, PLSI+PHITS, link-content MF) on the DS, HA, ML, PL, Cornell, Texas, Washington and Wisconsin subsets.]
Performance on PoliticalBook As in mixed graph Gaussian process (XGP) [19] and LWP [15], we
randomly choose half of the whole data for training and the rest for testing. This subsampling process is repeated for 100 rounds and the average area under the ROC curve (AUC) with its standard
deviation is reported in Table 1, where GPC is a Gaussian process classifier [18] trained on the original feature representation, and relational Gaussian process (RGP) is the method in [5]. For PCA and
PRPCA, we first use them to perform DR, and then a Gaussian process classifier is trained based on
the low-dimensional representation. Here, we set q = 5 for both PCA and PRPCA. We can see that
on this data set, PRPCA also dramatically outperforms PCA and achieves performance comparable
with the state of the art. Note that RGP and XGP cannot learn a low-dimensional embedding for
the instances. Although LWP can also learn a low-dimensional embedding for the instances, the
computation cost to obtain a low-dimensional embedding for a test instance is O(N³) because it has
to invert the kernel matrix defined on the training data.
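The inductive inference mentioned above can be sketched by applying the E-step projection of (8) to a single unseen instance t, i.e., x = M⁻¹Wᵀ(t − μ). This per-instance formula is our reading of "applying the learned transformation matrix" and is not spelled out in the text; all parameter values below are invented:

```python
import numpy as np

rng = np.random.default_rng(4)
d, q = 6, 2
W = rng.standard_normal((d, q))   # hypothetical learned loading matrix
sigma2 = 0.1                      # hypothetical learned noise variance
mu = rng.standard_normal((d, 1))  # hypothetical learned mean

def project(t, W, sigma2, mu):
    """Project one unseen instance t (d x 1) into the latent space."""
    q = W.shape[1]
    M = W.T @ W + sigma2 * np.eye(q)
    return np.linalg.solve(M, W.T @ (t - mu))   # x = M^{-1} W^T (t - mu)

t_new = rng.standard_normal((d, 1))
x_new = project(t_new, W, sigma2, mu)
print(x_new.shape)  # prints: (2, 1)
```

The cost per test instance is O(dq + q³), in contrast to the O(N³) kernel inversion needed by LWP.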
Table 1: Performance on the PoliticalBook data set. Results for GPC, RGP and XGP are taken from [19], where the standard deviation is not reported.

GPC: 0.92    RGP: 0.98    XGP: 0.98    LWP: 0.98 ± 0.02    PCA: 0.92 ± 0.03    PRPCA: 0.98 ± 0.02
Acknowledgments
Li and Yeung are supported by General Research Fund 621407 from the Research Grants Council
of Hong Kong. Zhang is supported in part by 973 Program (Project No. 2010CB327903). We thank
Yu Zhang for some useful comments.
References
[1] D. J. Bartholomew and M. Knott. Latent Variable Models and Factor Analysis. Kendall's Library of Statistics, 7, second edition, 1999.
[2] C. M. Bishop. Bayesian PCA. In NIPS 11, 1998.
[3] J. Chang and D. M. Blei. Relational topic models for document networks. In AISTATS, 2009.
[4] J. Chang, J. L. Boyd-Graber, and D. M. Blei. Connections between the lines: augmenting social networks with text. In KDD, pages 169–178, 2009.
[5] W. Chu, V. Sindhwani, Z. Ghahramani, and S. S. Keerthi. Relational learning with Gaussian processes. In NIPS 19, 2007.
[6] F. Chung. Spectral Graph Theory. Number 92 in Regional Conference Series in Mathematics. American Mathematical Society, 1997.
[7] D. A. Cohn and T. Hofmann. The missing link - a probabilistic model of document content and hypertext connectivity. In NIPS 13, 2000.
[8] M. Craven, D. DiPasquo, D. Freitag, A. McCallum, T. M. Mitchell, K. Nigam, and S. Slattery. Learning to extract symbolic knowledge from the world wide web. In AAAI/IAAI, pages 509–516, 1998.
[9] A. Dempster, N. Laird, and D. Rubin. Maximum likelihood from incomplete data via the EM algorithm. Journal of the Royal Statistical Society, 39(1):1–38, 1977.
[10] L. Getoor and B. Taskar. Introduction to Statistical Relational Learning. The MIT Press, 2007.
[11] A. K. Gupta and D. K. Nagar. Matrix Variate Distributions. Chapman & Hall/CRC, 2000.
[12] H. Howard. Analysis of a complex of statistical variables into principal components. Journal of Educational Psychology, 27:417–441, 1933.
[13] I. T. Jolliffe. Principal Component Analysis. Springer, second edition, 2002.
[14] W.-J. Li and D.-Y. Yeung. Relation regularized matrix factorization. In IJCAI, 2009.
[15] W.-J. Li, Z. Zhang, and D.-Y. Yeung. Latent Wishart processes for relational kernel learning. In AISTATS, pages 336–343, 2009.
[16] A. McCallum, K. Nigam, J. Rennie, and K. Seymore. Automating the construction of internet portals with machine learning. Information Retrieval, 3(2):127–163, 2000.
[17] R. Nallapati, A. Ahmed, E. P. Xing, and W. W. Cohen. Joint latent topic models for text and citations. In KDD, pages 542–550, 2008.
[18] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. The MIT Press, 2006.
[19] R. Silva, W. Chu, and Z. Ghahramani. Hidden common cause relations in relational learning. In NIPS 20, 2008.
[20] B. Taskar, P. Abbeel, and D. Koller. Discriminative probabilistic models for relational data. In UAI, pages 485–492, 2002.
[21] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society Series B, 61(3):611–622, 1999.
[22] J.-P. Vert. Reconstruction of biological networks by supervised machine learning approaches. In Elements of Computational Systems Biology, 2009.
[23] T. Yang, R. Jin, Y. Chi, and S. Zhu. A Bayesian framework for community detection integrating content and link. In UAI, 2009.
[24] T. Yang, R. Jin, Y. Chi, and S. Zhu. Combining link and content for community detection: a discriminative approach. In KDD, pages 927–936, 2009.
[25] D. Zhou, B. Schölkopf, and T. Hofmann. Semi-supervised learning on directed graphs. In NIPS 17, 2004.
[26] S. Zhu, K. Yu, Y. Chi, and Y. Gong. Combining content and link for classification using matrix factorization. In SIGIR, 2007.
Graph Zeta Function in the Bethe Free Energy and
Loopy Belief Propagation
Yusuke Watanabe
The Institute of Statistical Mathematics
10-3 Midori-cho, Tachikawa
Tokyo 190-8562, Japan
[email protected]
Kenji Fukumizu
The Institute of Statistical Mathematics
10-3 Midori-cho, Tachikawa
Tokyo 190-8562, Japan
[email protected]
Abstract
We propose a new approach to the analysis of Loopy Belief Propagation (LBP) by
establishing a formula that connects the Hessian of the Bethe free energy with the
edge zeta function. The formula has a number of theoretical implications on LBP.
It is applied to give a sufficient condition that the Hessian of the Bethe free energy
is positive definite, which shows non-convexity for graphs with multiple cycles.
The formula clarifies the relation between the local stability of a fixed point of
LBP and local minima of the Bethe free energy. We also propose a new approach
to the uniqueness of LBP fixed point, and show various conditions of uniqueness.
1 Introduction
Pearl's belief propagation [1] provides an efficient method for exact computation in the inference
with probabilistic models associated to trees. As an extension to general graphs allowing cycles,
Loopy Belief Propagation (LBP) algorithm [2] has been proposed, showing successful performance
in various problems such as computer vision and error correcting codes.
One of the interesting theoretical aspects of LBP is its connection with the Bethe free energy [3]. It
is known, for example, the fixed points of LBP correspond to the stationary points of the Bethe free
energy. Nonetheless, many of the properties of LBP such as exactness, convergence and stability are
still unclear, and further theoretical understanding is needed.
This paper theoretically analyzes LBP by establishing a formula asserting that the determinant of
the Hessian of the Bethe free energy equals the reciprocal of the edge zeta function up to a positive
factor. This formula derives a variety of results on the properties of LBP such as stability and
uniqueness, since the zeta function has a direct link with the dynamics of LBP as we show.
The first application of the formula is the condition for the positive definiteness of the Hessian of
the Bethe free energy. The Bethe free energy is not necessarily convex, which causes unfavorable
behaviors of LBP such as oscillation and multiple fixed points. Thus, clarifying the region where
the Hessian is positive definite is an importance problem. Unlike the previous approaches which
consider the global structure of the Bethe free energy such as [4, 5], we focus the local structure.
Namely, we provide a simple sufficient condition that determines the positive definite region: if all
the correlation coefficients of the pseudomarginals are smaller than a value given by a characteristic
of the graph, the Hessian is positive definite. Additionally, we show that the Hessian always has a
negative eigenvalue around the boundary of the domain if the graph has at least two cycles.
Second, we clarify a relation between the local stability of a LBP fixed point and the local structure
of the Bethe free energy. Such a relation is not necessarily obvious, since LBP is not the gradient
descent of the Bethe free energy. In this line of studies, Heskes [6] shows that a locally stable fixed
point of LBP is a local minimum of the Bethe free energy. It is thus interesting to ask which local
minima of the Bethe free energy are stable or unstable fixed points of LBP. We answer this question
by elucidating the conditions of the local stability of LBP and the positive definiteness of the Bethe
free energy in terms of the eigenvalues of a matrix, which appears in the graph zeta function.
Finally, we discuss the uniqueness of LBP fixed point by developing a differential topological result
on the Bethe free energy. The result shows that the determinant of the Hessian at the fixed points,
which appears in the formula of zeta function, must satisfy a strong constraint. As a consequence,
in addition to the known result on the one-cycle case, we show that the LBP fixed point is unique
for any unattractive connected graph with two cycles without restricting the strength of interactions.
2 Loopy belief propagation algorithm and the Bethe free energy
Throughout this paper, G = (V, E) is a connected undirected graph with V the vertices and E the
undirected edges. The cardinality of V and E are denoted by N and M respectively.
In this article we focus on binary variables, i.e., x_i ∈ {±1}. Suppose that the probability distribution over the set of variables x = (x_i)_{i∈V} is given by the following factorization form with respect to G:

p(x) = (1/Z) ∏_{ij∈E} ψ_{ij}(x_i, x_j) ∏_{i∈V} ψ_i(x_i),  (1)

where Z is a normalization constant and ψ_{ij} and ψ_i are positive functions given by ψ_{ij}(x_i, x_j) = exp(J_ij x_i x_j) and ψ_i(x_i) = exp(h_i x_i) without loss of generality.
In various applications, the computation of the marginal distributions p_i(x_i) := ∑_{x\x_i} p(x) and p_{ij}(x_i, x_j) := ∑_{x\{x_i,x_j}} p(x) is required, though the exact computation is intractable for large graphs. If the graph is a tree, they are efficiently computed by Pearl's belief propagation algorithm [1]. Even if the graph has cycles, it is empirically known that the direct application of this algorithm, called Loopy Belief Propagation (LBP), often gives good approximation.
LBP is a message passing algorithm. For each directed edge, a message vector $\mu_{i\to j}(x_j)$ is assigned and initialized arbitrarily. The update rule of the messages is given by
$$\mu^{\mathrm{new}}_{i\to j}(x_j) \propto \sum_{x_i} \psi_{ji}(x_j,x_i)\,\psi_i(x_i)\prod_{k\in N_i\setminus j}\mu_{k\to i}(x_i), \qquad (2)$$
where $N_i$ is the neighborhood of $i\in V$. The order of edges in the update is arbitrary. In this paper we consider the parallel update, that is, all edges are updated simultaneously. If the messages converge to a fixed point $\{\mu^*_{i\to j}\}$, the approximations of $p_i(x_i)$ and $p_{ij}(x_i,x_j)$ are calculated by the beliefs,
$$b_i(x_i) \propto \psi_i(x_i)\prod_{k\in N_i}\mu^*_{k\to i}(x_i), \qquad b_{ij}(x_i,x_j) \propto \psi_{ij}(x_i,x_j)\,\psi_i(x_i)\,\psi_j(x_j)\prod_{k\in N_i\setminus j}\mu^*_{k\to i}(x_i)\prod_{k\in N_j\setminus i}\mu^*_{k\to j}(x_j), \qquad (3)$$
with the normalizations $\sum_{x_i} b_i(x_i) = 1$ and $\sum_{x_i,x_j} b_{ij}(x_i,x_j) = 1$. From (2) and (3), the constraints $b_{ij}(x_i,x_j) > 0$ and $\sum_{x_j} b_{ij}(x_i,x_j) = b_i(x_i)$ are automatically satisfied.
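To make the update (2) and the beliefs (3) concrete, here is a minimal NumPy sketch of parallel LBP for binary pairwise models. The graph, couplings, and convergence threshold are arbitrary test choices, and the function names (`lbp`, `exact_marginals`) are ours, not from the paper; on a tree the node beliefs must coincide with the exact marginals.

```python
import itertools
import numpy as np

def lbp(J, h, iters=200, tol=1e-10):
    """Parallel LBP for p(x) ∝ exp(Σ J_ij x_i x_j + Σ h_i x_i), x_i ∈ {-1, +1}.
    J: dict {(i, j): J_ij} with i < j; h: sequence of local fields.
    mu[(i, j)] is the message i -> j, a length-2 vector over x_j ∈ (-1, +1)."""
    n = len(h)
    nbr = {i: [] for i in range(n)}
    for i, j in J:
        nbr[i].append(j)
        nbr[j].append(i)
    x = np.array([-1.0, 1.0])
    mu = {(i, j): np.ones(2) / 2 for i, j in J} | {(j, i): np.ones(2) / 2 for i, j in J}
    for _ in range(iters):
        new = {}
        for (i, j) in mu:
            Jij = J.get((i, j), J.get((j, i)))
            # psi_i(x_i) times the product of incoming messages from N_i \ j
            pre = np.exp(h[i] * x)
            for k in nbr[i]:
                if k != j:
                    pre = pre * mu[(k, i)]
            msg = np.exp(Jij * np.outer(x, x)).T @ pre  # sum over x_i; vector over x_j
            new[(i, j)] = msg / msg.sum()
        delta = max(np.abs(new[e] - mu[e]).max() for e in mu)
        mu = new
        if delta < tol:
            break
    beliefs = []
    for i in range(n):
        b = np.exp(h[i] * x)
        for k in nbr[i]:
            b = b * mu[(k, i)]
        beliefs.append(b / b.sum())
    return mu, np.array(beliefs)

def exact_marginals(J, h):
    """Brute-force node marginals by enumeration; columns ordered (x=-1, x=+1)."""
    n = len(h)
    p, Z = np.zeros((n, 2)), 0.0
    for xs in itertools.product([-1, 1], repeat=n):
        w = np.exp(sum(Jij * xs[i] * xs[j] for (i, j), Jij in J.items())
                   + sum(hi * xi for hi, xi in zip(h, xs)))
        Z += w
        for i, xi in enumerate(xs):
            p[i, (xi + 1) // 2] += w
    return p / Z

# On a tree (here a 3-node chain) LBP is exact.
J = {(0, 1): 0.7, (1, 2): -0.4}
h = [0.3, -0.2, 0.5]
_, b = lbp(J, h)
p = exact_marginals(J, h)
```

On graphs with cycles the same code runs unchanged, but the beliefs are only approximations of the marginals, which is the regime studied in the rest of the paper.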
We introduce the Bethe free energy as a tractable approximation of the Gibbs free energy. The exact distribution (1) is characterized by the variational problem $p(x) = \mathrm{argmin}_{\hat p}\, F_{\mathrm{Gibbs}}(\hat p)$, where the minimum is taken over all probability distributions $\hat p$ on $(x_i)_{i\in V}$ and $F_{\mathrm{Gibbs}}(\hat p)$ is the Gibbs free energy defined by $F_{\mathrm{Gibbs}}(\hat p) = KL(\hat p\,\|\,p) - \log Z$. Here $KL(\hat p\,\|\,p) = \sum \hat p \log(\hat p/p)$ is the Kullback-Leibler divergence from $\hat p$ to $p$. Note that $F_{\mathrm{Gibbs}}(\hat p)$ is a convex function of $\hat p$.
In the Bethe approximation, we confine the above minimization to distributions of the form $b(x) \propto \prod_{ij\in E} b_{ij}(x_i,x_j)\prod_{i\in V} b_i(x_i)^{1-d_i}$, where $d_i := |N_i|$ is the degree and the constraints $b_{ij}(x_i,x_j) > 0$, $\sum_{x_i,x_j} b_{ij}(x_i,x_j) = 1$ and $\sum_{x_j} b_{ij}(x_i,x_j) = b_i(x_i)$ are satisfied. A set $\{b_i(x_i), b_{ij}(x_i,x_j)\}$ satisfying these constraints is called pseudomarginals. For computational tractability, we modify the Gibbs free energy to the objective function called the Bethe free energy:
$$F(b) := -\sum_{ij\in E}\sum_{x_i x_j} b_{ij}(x_i,x_j)\log\psi_{ij}(x_i,x_j) - \sum_{i\in V}\sum_{x_i} b_i(x_i)\log\psi_i(x_i) + \sum_{ij\in E}\sum_{x_i x_j} b_{ij}(x_i,x_j)\log b_{ij}(x_i,x_j) + \sum_{i\in V}(1-d_i)\sum_{x_i} b_i(x_i)\log b_i(x_i). \qquad (4)$$
The domain of the objective function $F$ is the set of pseudomarginals. The function $F$ does not necessarily have a unique minimum. The outcome of this modified variational problem is the same as that of LBP [3]. More precisely, there is a one-to-one correspondence between the set of stationary points of the Bethe free energy and the set of fixed points of LBP.
It is more convenient to work with the minimal parameters, the means $m_i = \mathbb{E}_{b_i}[x_i]$ and the correlations $\chi_{ij} = \mathbb{E}_{b_{ij}}[x_i x_j]$. Then we have an effective parametrization of the pseudomarginals:
$$b_{ij}(x_i,x_j) = \frac{1}{4}(1 + m_i x_i + m_j x_j + \chi_{ij} x_i x_j), \qquad b_i(x_i) = \frac{1}{2}(1 + m_i x_i). \qquad (5)$$
The Bethe free energy (4) is rewritten as
$$F(\{m_i,\chi_{ij}\}) = -\sum_{ij\in E} J_{ij}\chi_{ij} - \sum_{i\in V} h_i m_i + \sum_{ij\in E}\sum_{x_i,x_j}\eta\Big(\frac{1+m_i x_i+m_j x_j+\chi_{ij}x_i x_j}{4}\Big) + \sum_{i\in V}(1-d_i)\sum_{x_i}\eta\Big(\frac{1+m_i x_i}{2}\Big), \qquad (6)$$
where $\eta(x) := x\log x$. The domain of $F$ is written as
$$L(G) := \big\{\{m_i,\chi_{ij}\}\in\mathbb{R}^{N+M} \,\big|\, 1+m_i x_i+m_j x_j+\chi_{ij}x_i x_j > 0 \text{ for all } ij\in E \text{ and } x_i,x_j=\pm 1\big\}.$$
The Hessian of $F$, which consists of the second derivatives with respect to $\{m_i,\chi_{ij}\}$, is a square matrix of size $N+M$ and is denoted by $\nabla^2 F$. It is considered to be a matrix-valued function on $L(G)$. Note that, from (6), $\nabla^2 F$ does not depend on $J_{ij}$ and $h_i$.
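As a consistency check on the parametrization (5), the sketch below evaluates the Bethe free energy both in the belief form (4) and in the $(m,\chi)$ form (6) at an interior point of $L(G)$; the two must agree exactly. The triangle graph, couplings, and parameter values are arbitrary choices of ours.

```python
import numpy as np

rng = np.random.default_rng(0)
edges = [(0, 1), (1, 2), (0, 2)]          # the 1-cycle graph C_3
N = 3
J = {e: rng.normal(scale=0.5) for e in edges}
h = rng.normal(scale=0.5, size=N)
deg = np.array([sum(i in e for e in edges) for i in range(N)])

# a point of L(G): small means/correlations keep every b_ij entry positive
m = np.array([0.1, -0.2, 0.05])
chi = {(0, 1): 0.15, (1, 2): -0.1, (0, 2): 0.05}

def b_edge(i, j, xi, xj):
    return (1 + m[i]*xi + m[j]*xj + chi[(i, j)]*xi*xj) / 4.0

def b_node(i, xi):
    return (1 + m[i]*xi) / 2.0

S = (-1, 1)
eta = lambda v: v * np.log(v)

# Bethe free energy in the belief form (4), with log psi_ij = J_ij x_i x_j, log psi_i = h_i x_i
F4 = 0.0
for (i, j) in edges:
    for xi in S:
        for xj in S:
            bij = b_edge(i, j, xi, xj)
            assert bij > 0                      # the point lies in L(G)
            F4 += -bij * J[(i, j)] * xi * xj + eta(bij)
for i in range(N):
    for xi in S:
        F4 += -b_node(i, xi) * h[i] * xi + (1 - deg[i]) * eta(b_node(i, xi))

# the same quantity in the (m, chi) form (6)
F6 = -sum(J[e] * chi[e] for e in edges) - h @ m
for (i, j) in edges:
    for xi in S:
        for xj in S:
            F6 += eta(b_edge(i, j, xi, xj))
for i in range(N):
    F6 += (1 - deg[i]) * sum(eta(b_node(i, xi)) for xi in S)
```

The agreement follows because $\sum_{x_i,x_j} b_{ij}\,x_i x_j = \chi_{ij}$ and $\sum_{x_i} b_i\,x_i = m_i$, so the linear terms of (4) collapse to $-\sum J_{ij}\chi_{ij} - \sum h_i m_i$.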
3 Zeta function and Hessian of Bethe free energy

3.1 Zeta function and Ihara's formula
For each undirected edge of $G$, we make a pair of oppositely directed edges, which form a set of directed edges $\vec{E}$. Thus $|\vec{E}| = 2M$. For each directed edge $e\in\vec{E}$, $o(e)\in V$ is the origin of $e$ and $t(e)\in V$ is the terminus of $e$. For $e\in\vec{E}$, the inverse edge is denoted by $\bar e$, and the corresponding undirected edge by $[e] = [\bar e]\in E$.
A closed geodesic in $G$ is a sequence $(e_1,\ldots,e_k)$ of directed edges such that $t(e_i) = o(e_{i+1})$ and $e_i \neq \bar e_{i+1}$ for $i\in\mathbb{Z}/k\mathbb{Z}$. Two closed geodesics are said to be equivalent if one is obtained by cyclic permutation of the other. An equivalence class of closed geodesics is called a prime cycle if it is not a repeated concatenation of a shorter closed geodesic. Let $P$ be the set of prime cycles of $G$. For given weights $u = (u_e)_{e\in\vec{E}}$, the edge zeta function [7, 8] is defined by
$$\zeta_G(u) := \prod_{p\in P}(1 - g(p))^{-1}, \qquad g(p) := u_{e_1}\cdots u_{e_k} \text{ for } p = (e_1,\ldots,e_k),$$
where $u_e\in\mathbb{C}$ is assumed to be sufficiently small for convergence. This is an analogue of the Riemann zeta function, which is represented by the product over all the prime numbers.
Example 1. If $G$ is a tree, which has no prime cycles, $\zeta_G(u) = 1$. For the 1-cycle graph $C_N$ of length $N$, the prime cycles are $(e_1,e_2,\ldots,e_N)$ and $(\bar e_N,\bar e_{N-1},\ldots,\bar e_1)$, and thus $\zeta_{C_N}(u) = \big(1 - \prod_{l=1}^N u_{e_l}\big)^{-1}\big(1 - \prod_{l=1}^N u_{\bar e_l}\big)^{-1}$. Except for these two types of graphs, the number of prime cycles is infinite.
It is known that the edge zeta function has the following simple determinant formula, which gives an analytical continuation to the whole $\mathbb{C}^{2M}$. Let $C(\vec{E})$ be the set of functions on the directed edges. We define a matrix on $C(\vec{E})$, which is determined by the graph $G$, by
$$\mathcal{M}_{e,e'} := \begin{cases} 1 & \text{if } e\neq\bar e' \text{ and } o(e) = t(e'), \\ 0 & \text{otherwise}. \end{cases} \qquad (7)$$
Theorem 1 ([8], Theorem 3).
$$\zeta_G(u) = \det(I - U\mathcal{M})^{-1}, \qquad (8)$$
where $U$ is a diagonal matrix defined by $U_{e,e'} := u_e\delta_{e,e'}$.
We need to show another determinant formula for the edge zeta function, which is used in the proof of Theorem 3. We leave the proof of Theorem 2 to the supplementary material.
Theorem 2 (Multivariable version of Ihara's formula). Let $C(V)$ be the set of functions on $V$. We define two linear operators on $C(V)$ by
$$(\hat D f)(i) := \Big(\sum_{e\in\vec{E},\,t(e)=i}\frac{u_e u_{\bar e}}{1-u_e u_{\bar e}}\Big) f(i), \qquad (\hat A f)(i) := \sum_{e\in\vec{E},\,t(e)=i}\frac{u_e}{1-u_e u_{\bar e}}\, f(o(e)), \qquad \text{where } f\in C(V). \qquad (9)$$
Then we have
$$\zeta_G(u)^{-1} = \det(I - U\mathcal{M}) = \det(I + \hat D - \hat A)\prod_{[e]\in E}(1 - u_e u_{\bar e}). \qquad (10)$$
If we set $u_e = u$ for all $e\in\vec{E}$, the edge zeta function is called the Ihara zeta function [9] and denoted by $\zeta_G(u)$. In this single variable case, Theorem 2 reduces to Ihara's formula [10]:
$$\zeta_G(u)^{-1} = \det(I - u\mathcal{M}) = (1-u^2)^M\det\Big(I + \frac{u^2}{1-u^2}D - \frac{u}{1-u^2}A\Big), \qquad (11)$$
where $D$ is the degree matrix and $A$ is the adjacency matrix defined by
$$(Df)(i) := d_i f(i), \qquad (Af)(i) := \sum_{e\in\vec{E},\,t(e)=i} f(o(e)), \qquad f\in C(V).$$
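Theorem 2 can likewise be verified numerically. The sketch below draws random edge weights $u_e$ on the complete graph $K_4$, builds $\hat D$ and $\hat A$ from (9), and checks formula (10) against the direct evaluation of $\det(I-U\mathcal{M})$. The graph, the weight range, and the helper names are arbitrary choices of ours.

```python
import numpy as np

def edge_matrix(E):
    # the matrix M of (7) on an explicit list of directed edges
    n = len(E)
    M = np.zeros((n, n))
    for a, (oa, ta) in enumerate(E):
        for b, (ob, tb) in enumerate(E):
            if oa == tb and (ob, tb) != (ta, oa):
                M[a, b] = 1.0
    return M

rng = np.random.default_rng(1)
V = 4
und = [(i, j) for i in range(V) for j in range(i + 1, V)]   # complete graph K_4
E = und + [(j, i) for (i, j) in und]                        # inverses occupy the second half
m_und = len(und)
inv = lambda e: e + m_und if e < m_und else e - m_und       # index of the inverse edge

u = rng.uniform(-0.3, 0.3, size=len(E))
lhs = np.linalg.det(np.eye(len(E)) - np.diag(u) @ edge_matrix(E))

# right hand side of (10): the operators D-hat and A-hat of (9), acting on C(V)
Dhat = np.zeros((V, V))
Ahat = np.zeros((V, V))
for e, (o, t) in enumerate(E):
    w = u[e] * u[inv(e)]
    Dhat[t, t] += w / (1 - w)
    Ahat[t, o] += u[e] / (1 - w)
rhs = np.linalg.det(np.eye(V) + Dhat - Ahat) * np.prod([1 - u[e] * u[inv(e)] for e in range(m_und)])
```

Setting all $u_e$ equal reduces $\hat D$ and $\hat A$ to $\frac{u^2}{1-u^2}D$ and $\frac{u}{1-u^2}A$, recovering (11).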
3.2 Main formula

Theorem 3 (Main Formula). The following equality holds at any point of $L(G)$:
$$\zeta_G(u)^{-1} = \det(I - U\mathcal{M}) = \det(\nabla^2 F)\prod_{ij\in E}\prod_{x_i,x_j=\pm 1} b_{ij}(x_i,x_j)\prod_{i\in V}\prod_{x_i=\pm 1} b_i(x_i)^{1-d_i}\; 2^{2N+4M}, \qquad (12)$$
where $b_{ij}$ and $b_i$ are given by (5) and
$$u_{i\to j} := \frac{\chi_{ij} - m_i m_j}{1 - m_j^2}. \qquad (13)$$
Proof. (The details of the computation are given in the supplementary material.)
From (6), it is easy to see that the (E,E)-block of the Hessian is a diagonal matrix given by
$$\frac{\partial^2 F}{\partial\chi_{ij}\partial\chi_{kl}} = \delta_{ij,kl}\,\frac{1}{4}\Big(\frac{1}{1+m_i+m_j+\chi_{ij}} + \frac{1}{1-m_i+m_j-\chi_{ij}} + \frac{1}{1+m_i-m_j-\chi_{ij}} + \frac{1}{1-m_i-m_j+\chi_{ij}}\Big).$$
Using this diagonal block, we erase the (V,E)- and (E,V)-blocks of the Hessian. In other words, we choose a square matrix $X$ such that $\det X = 1$ and
$$X^T(\nabla^2 F)X = \begin{bmatrix} Y & 0 \\ 0 & \dfrac{\partial^2 F}{\partial\chi_{ij}\partial\chi_{kl}} \end{bmatrix}.$$
After the computation given in the supplementary material, we see that
$$(Y)_{i,j} = \begin{cases} \dfrac{1}{1-m_i^2} + \displaystyle\sum_{k\in N_i}\dfrac{(\chi_{ik}-m_i m_k)^2}{(1-m_i^2)(1-m_i^2-m_k^2+2m_i m_k\chi_{ik}-\chi_{ik}^2)} & \text{if } i = j, \\[2mm] -A_{i,j}\,\dfrac{\chi_{ij}-m_i m_j}{1-m_i^2-m_j^2+2m_i m_j\chi_{ij}-\chi_{ij}^2} & \text{otherwise}. \end{cases} \qquad (14)$$
From $u_{j\to i} = \frac{\chi_{ij}-m_i m_j}{1-m_i^2}$, it is easy to check that $I_N + \hat D - \hat A = YW$, where $\hat A$ and $\hat D$ are defined in (9) and $W$ is a diagonal matrix defined by $W_{i,j} := \delta_{i,j}(1-m_i^2)$. Therefore,
$$\det(I - U\mathcal{M}) = \det(Y)\prod_{i\in V}(1-m_i^2)\prod_{[e]\in E}(1 - u_e u_{\bar e}) = \text{R.H.S. of (12)}.$$
For the left equality, Theorem 2 is used.
Theorem 3 shows that the determinant of the Hessian of the Bethe free energy is essentially equal to $\det(I - U\mathcal{M})$, the reciprocal of the edge zeta function. Since the matrix $U\mathcal{M}$ has a direct connection with LBP, as seen in Section 5, the above formula derives many of the consequences shown in the rest of the paper.
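Formula (12) can be checked numerically without relying on any step of the proof: evaluate the Hessian of (6) by finite differences at a point of $L(G)$, build $U$ from (13), and compare the two sides. The triangle graph, the chosen point, and the step size below are arbitrary; since the $J$, $h$ terms of (6) are linear, they are omitted from $F$.

```python
import numpy as np

edges = [(0, 1), (1, 2), (0, 2)]   # C_3: N = 3 vertices, M = 3 edges
N, Mn = 3, 3
deg = [2, 2, 2]
S = (-1, 1)

def eta(v):
    return v * np.log(v)

def F(theta):
    # entropy part of (6); theta = (m_0, m_1, m_2, chi_01, chi_12, chi_02)
    m, chi = theta[:N], theta[N:]
    val = 0.0
    for k, (i, j) in enumerate(edges):
        for xi in S:
            for xj in S:
                val += eta((1 + m[i]*xi + m[j]*xj + chi[k]*xi*xj) / 4.0)
    for i in range(N):
        val += (1 - deg[i]) * sum(eta((1 + m[i]*xi) / 2.0) for xi in S)
    return val

theta0 = np.array([0.2, -0.3, 0.1, 0.4, -0.35, 0.3])   # an interior point of L(G)
n, hs = len(theta0), 1e-4
Iden = np.eye(n)
H = np.zeros((n, n))
for a in range(n):
    for b in range(n):
        H[a, b] = (F(theta0 + hs*(Iden[a] + Iden[b])) - F(theta0 + hs*(Iden[a] - Iden[b]))
                   - F(theta0 + hs*(Iden[b] - Iden[a])) + F(theta0 - hs*(Iden[a] + Iden[b]))) / (4*hs*hs)

m = theta0[:N]
chi = {e: theta0[N + k] for k, e in enumerate(edges)}
E = edges + [(j, i) for (i, j) in edges]
u = np.array([((chi[(i, j)] if (i, j) in chi else chi[(j, i)]) - m[i]*m[j]) / (1 - m[j]**2)
              for (i, j) in E])                        # u_{i->j} of (13)
Mat = np.zeros((6, 6))
for a, (oa, ta) in enumerate(E):
    for b, (ob, tb) in enumerate(E):
        if oa == tb and (ob, tb) != (ta, oa):
            Mat[a, b] = 1.0
lhs = np.linalg.det(np.eye(6) - np.diag(u) @ Mat)

prod_b = 1.0
for k, (i, j) in enumerate(edges):
    for xi in S:
        for xj in S:
            prod_b *= (1 + m[i]*xi + m[j]*xj + theta0[N + k]*xi*xj) / 4.0
for i in range(N):
    for xi in S:
        prod_b *= ((1 + m[i]*xi) / 2.0) ** (1 - deg[i])
rhs = np.linalg.det(H) * prod_b * 2.0 ** (2*N + 4*Mn)
```

At the uniform point $m_i = 0$, $\chi_{ij} = 0$ both sides equal one, which also fixes the constant $2^{2N+4M}$.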
4 Application to positive definiteness conditions

The convexity of the Bethe free energy is an important issue, as it guarantees the uniqueness of the fixed point. Pakzad et al. [11] and Heskes [5] derive sufficient conditions for convexity and show that the Bethe free energy is convex for trees and graphs with one cycle. In this section, instead of such global structure, we focus on the local structure of the Bethe free energy as an application of the main formula.
For a given square matrix $X$, $\mathrm{Spec}(X)\subset\mathbb{C}$ denotes the set of eigenvalues (spectrum), and $\rho(X)$ the spectral radius of $X$, i.e., the maximum of the moduli of the eigenvalues.
Theorem 4. Let $\mathcal{M}$ be the matrix given by (7). For given $\{m_i,\chi_{ij}\}\in L(G)$, $U$ is defined by (13). Then, $\mathrm{Spec}(U\mathcal{M})\subset\mathbb{C}\setminus\mathbb{R}_{\ge 1}$ implies that $\nabla^2 F$ is a positive definite matrix at $\{m_i,\chi_{ij}\}$.
Proof. We define $m_i(t) := m_i$ and $\chi_{ij}(t) := t\chi_{ij} + (1-t)m_i m_j$. Then $\{m_i(t),\chi_{ij}(t)\}\in L(G)$ and $\{m_i(1),\chi_{ij}(1)\} = \{m_i,\chi_{ij}\}$. For $t\in[0,1]$, we define $U(t)$ and $\nabla^2 F(t)$ in the same way from $\{m_i(t),\chi_{ij}(t)\}$. We see that $U(t) = tU$. Since $\mathrm{Spec}(U\mathcal{M})\subset\mathbb{C}\setminus\mathbb{R}_{\ge 1}$, we have $\det(I - tU\mathcal{M})\neq 0$ for all $t\in[0,1]$. From Theorem 3, $\det(\nabla^2 F(t))\neq 0$ holds on this interval. Using (14) and $\chi_{ij}(0) = m_i(0)m_j(0)$, we can check that $\nabla^2 F(0)$ is positive definite. Since the eigenvalues of $\nabla^2 F(t)$ are real and continuous with respect to $t$, the eigenvalues of $\nabla^2 F(1)$ must be positive reals.
We define the symmetrization of $u_{i\to j}$ and $u_{j\to i}$ by
$$\beta_{i\to j} = \beta_{j\to i} := \frac{\chi_{ij} - m_i m_j}{\{(1-m_i^2)(1-m_j^2)\}^{1/2}} = \frac{\mathrm{Cov}_{b_{ij}}[x_i,x_j]}{\{\mathrm{Var}_{b_i}[x_i]\,\mathrm{Var}_{b_j}[x_j]\}^{1/2}}. \qquad (15)$$
Thus, $u_{i\to j}u_{j\to i} = \beta_{i\to j}\beta_{j\to i}$. Since $\beta_{i\to j} = \beta_{j\to i}$, we sometimes abbreviate $\beta_{i\to j}$ as $\beta_{ij}$. From the final expression, we see that $|\beta_{ij}| < 1$. Define diagonal matrices $Z$ and $B$ by $(Z)_{e,e'} := \delta_{e,e'}(1-m_{t(e)}^2)^{1/2}$ and $(B)_{e,e'} := \delta_{e,e'}\beta_e$, respectively. Then we have $ZU\mathcal{M}Z^{-1} = B\mathcal{M}$, because
$$(ZU\mathcal{M}Z^{-1})_{e,e'} = (1-m_{t(e)}^2)^{1/2}\,u_e\,(\mathcal{M})_{e,e'}\,(1-m_{o(e)}^2)^{-1/2} = \beta_e(\mathcal{M})_{e,e'}.$$
Therefore $\mathrm{Spec}(U\mathcal{M}) = \mathrm{Spec}(B\mathcal{M})$.
The following corollary gives a more explicit condition for the region where the Hessian is positive definite, in terms of the correlation coefficients of the pseudomarginals.
Corollary 1. Let $\alpha$ be the Perron-Frobenius eigenvalue of $\mathcal{M}$ and define $L_{\alpha^{-1}}(G) := \{\{m_i,\chi_{ij}\}\in L(G) \mid |\beta_e| < \alpha^{-1} \text{ for all } e\in\vec{E}\}$. Then, the Hessian $\nabla^2 F$ is positive definite on $L_{\alpha^{-1}}(G)$.
Proof. Since $|\beta_e| < \alpha^{-1}$, we have $\rho(B\mathcal{M}) < \rho(\alpha^{-1}\mathcal{M}) = 1$ ([12], Theorem 8.1.18). Therefore $\mathrm{Spec}(B\mathcal{M})\cap\mathbb{R}_{\ge 1} = \emptyset$.
As is seen from (11), $\alpha^{-1}$ is the distance from the origin to the nearest pole of the Ihara zeta function $\zeta_G(u)$. From Example 1, we see that $\zeta_G(u) = 1$ for a tree $G$ and $\zeta_{C_N}(u) = (1-u^N)^{-2}$ for the 1-cycle graph $C_N$; therefore $\alpha^{-1}$ is $\infty$ and $1$, respectively. In these cases, $L_{\alpha^{-1}}(G) = L(G)$ and $F$ is a strictly convex function on $L(G)$, because $|\beta_e| < 1$ always holds. This reproduces the results shown in [11]. In general, using Theorem 8.1.22 of [12], we have $\min_{i\in V} d_i - 1 \le \alpha \le \max_{i\in V} d_i - 1$.
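The degree bound on $\alpha$ is easy to check numerically; a sketch, on an arbitrary small graph with two independent cycles (degrees 3, 2, 3, 2, so the bound is $1\le\alpha\le 2$):

```python
import numpy as np

def edge_matrix(und):
    # the matrix M of (7), built from a list of undirected edges
    E = und + [(j, i) for (i, j) in und]
    n = len(E)
    M = np.zeros((n, n))
    for a, (oa, ta) in enumerate(E):
        for b, (ob, tb) in enumerate(E):
            if oa == tb and (ob, tb) != (ta, oa):
                M[a, b] = 1.0
    return M

und = [(0, 1), (0, 2), (0, 3), (1, 2), (2, 3)]          # two-cycle graph, arbitrary choice
alpha = max(abs(np.linalg.eigvals(edge_matrix(und))))    # Perron-Frobenius eigenvalue of M
deg = [sum(v in e for e in und) for v in range(4)]
```
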
Theorem 3 is also useful for showing non-convexity.
Corollary 2. Let $\{m_i(t) := 0, \chi_{ij}(t) := t\}\in L(G)$ for $t < 1$. Then we have
$$\lim_{t\to 1}\det(\nabla^2 F(t))(1-t)^{M+N-1} = -2^{-M-N+1}(M-N)\kappa(G), \qquad (16)$$
where $\kappa(G)$ is the number of spanning trees in $G$. In particular, $F$ is never convex on $L(G)$ for any connected graph with at least two linearly independent cycles, i.e., $M - N \ge 1$.
Proof. Equation (16) is obtained by Hashimoto's theorem [13], which gives the $u\to 1$ limit of the Ihara zeta function. (See the supplementary material for the details.) If $M - N \ge 1$, the right hand side of (16) is negative. As $\{m_i(t),\chi_{ij}(t)\}$ approaches the point $\{m_i = 0, \chi_{ij} = 1\}$, the determinant of the Hessian diverges to $-\infty$. Therefore the Hessian is not positive definite near that point.
Summarizing the results in this section, we conclude that $F$ is convex on $L(G)$ if and only if $G$ is a tree or a graph with one cycle. To the best of our knowledge, this is the first proof of this fact.
5 Application to stability analysis

In this section we discuss the local stability of LBP and the local structure of the Bethe free energy around an LBP fixed point. Heskes [6] shows that a locally stable fixed point of sufficiently damped LBP is a local minimum of the Bethe free energy. The converse is not necessarily true in general, and we will elucidate the gap between these two properties.
First, we regard the LBP update as a dynamical system. Since the model is binary, each message $\mu_{i\to j}(x_j)$ is parametrized by one parameter, say $\mu_{i\to j}$. The state of the LBP algorithm is expressed by $\mu = (\mu_e)_{e\in\vec{E}}\in C(\vec{E})$, and the update rule (2) is identified with a transform $T$ on $C(\vec{E})$, $\mu^{\mathrm{new}} = T(\mu)$. Then, the set of fixed points of LBP is $\{\mu^*\in C(\vec{E}) \mid T(\mu^*) = \mu^*\}$.
A fixed point $\mu^*$ is called locally stable if LBP starting at a point sufficiently close to $\mu^*$ converges to $\mu^*$. Local stability is determined by the linearization $T'$ at the fixed point. As discussed in [14], $\mu^*$ is locally stable if and only if $\mathrm{Spec}(T'(\mu^*))\subset\{\lambda\in\mathbb{C} \mid |\lambda| < 1\}$.
To suppress oscillatory behaviors of LBP, damping of the update, $T_\epsilon := (1-\epsilon)T + \epsilon I$, is sometimes useful, where $0\le\epsilon<1$ is the damping strength and $I$ is the identity. A fixed point is locally stable under some damping if and only if $\mathrm{Spec}(T'(\mu^*))\subset\{\lambda\in\mathbb{C} \mid \mathrm{Re}\,\lambda < 1\}$.
There are many representations of the linearization (derivative) of the LBP update (see [14, 15]); we choose a good coordinate system following Furtlehner et al. [16]. In Section 4 of [16], they transform the messages as $\mu_{i\to j}\to\mu_{i\to j}/\mu^*_{i\to j}$ and the functions as $\psi_{ij}\to b_{ij}/(b_i b_j)$ and $\psi_i\to b_i$, where $\mu^*_{i\to j}$ is the message of the fixed point. This changes only the representations of the messages and functions, and does not affect LBP essentially. This transformation causes $T'(\mu^*)\to PT'(\mu^*)P^{-1}$ with an invertible matrix $P$. Using this transformation, we see that the following fact holds. (See the supplementary material for the details.)
Theorem 5 ([16], Proposition 4.5). Let $u_{i\to j}$ be given by (3), (5) and (13) at an LBP fixed point $\mu^*$. The derivative $T'(\mu^*)$ is similar to $U\mathcal{M}$, i.e., $U\mathcal{M} = PT'(\mu^*)P^{-1}$ with an invertible matrix $P$.
Since $\det(I - T'(\mu^*)) = \det(I - U\mathcal{M})$, the formula in Theorem 3 implies a direct link between the linearization $T'(\mu^*)$ and the local structure of the Bethe free energy. From Theorem 4, we have that a fixed point of LBP is a local minimum of the Bethe free energy if $\mathrm{Spec}(T'(\mu^*))\subset\mathbb{C}\setminus\mathbb{R}_{\ge 1}$.
It is now clear that the conditions for positive definiteness, local stability of damped LBP, and local stability of undamped LBP are given in terms of the sets of eigenvalues $\mathbb{C}\setminus\mathbb{R}_{\ge 1}$, $\{\lambda\in\mathbb{C} \mid \mathrm{Re}\,\lambda < 1\}$ and $\{\lambda\in\mathbb{C} \mid |\lambda| < 1\}$, respectively. A locally stable fixed point of sufficiently damped LBP is a local minimum of the Bethe free energy, because $\{\lambda\in\mathbb{C} \mid \mathrm{Re}\,\lambda < 1\}$ is included in $\mathbb{C}\setminus\mathbb{R}_{\ge 1}$. This reproduces Heskes's result [6]. Moreover, we see the gap between the locally stable fixed points under some damping and the local minima of the Bethe free energy: if $\mathrm{Spec}(T'(\mu^*))$ is included in $\mathbb{C}\setminus\mathbb{R}_{\ge 1}$ but not in $\{\lambda\in\mathbb{C} \mid \mathrm{Re}\,\lambda < 1\}$, the fixed point is a local minimum of the Bethe free energy though it is not a locally stable fixed point of LBP with any damping.
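Theorem 5 suggests a simple empirical check: run LBP to convergence on a weakly coupled model, form $U$ from the beliefs via (13), and confirm that the spectral radius of $U\mathcal{M}$ is below one, as local stability requires. The model, couplings, and thresholds below are arbitrary test choices of ours.

```python
import numpy as np

x = np.array([-1.0, 1.0])
und = [(0, 1), (1, 2), (0, 2)]                 # triangle with weak couplings (arbitrary)
J = {e: 0.2 for e in und}
h = [0.1, -0.1, 0.05]
E = und + [(j, i) for (i, j) in und]           # directed edges
nbr = {v: [] for v in range(3)}
for (i, j) in und:
    nbr[i].append(j)
    nbr[j].append(i)

mu = {e: np.ones(2) / 2 for e in E}
converged = False
for _ in range(500):
    new = {}
    for (i, j) in E:
        Jij = J[(i, j)] if (i, j) in J else J[(j, i)]
        pre = np.exp(h[i] * x)
        for k in nbr[i]:
            if k != j:
                pre = pre * mu[(k, i)]
        msg = np.exp(Jij * np.outer(x, x)).T @ pre   # vector over x_j
        new[(i, j)] = msg / msg.sum()
    delta = max(np.abs(new[e] - mu[e]).max() for e in E)
    mu = new
    if delta < 1e-13:
        converged = True
        break

def belief_node(i):
    b = np.exp(h[i] * x)
    for k in nbr[i]:
        b = b * mu[(k, i)]
    return b / b.sum()

def belief_edge(i, j):
    # b_ij of (3): psi_ij psi_i psi_j times the messages entering the edge from outside
    Jij = J[(i, j)] if (i, j) in J else J[(j, i)]
    b = np.exp(Jij * np.outer(x, x) + (h[i] * x)[:, None] + (h[j] * x)[None, :])
    for k in nbr[i]:
        if k != j:
            b = b * mu[(k, i)][:, None]
    for k in nbr[j]:
        if k != i:
            b = b * mu[(k, j)][None, :]
    return b / b.sum()

m = np.array([belief_node(i) @ x for i in range(3)])
chi = {(i, j): x @ belief_edge(i, j) @ x for (i, j) in und}
u = np.array([((chi[(i, j)] if (i, j) in chi else chi[(j, i)]) - m[i] * m[j]) / (1 - m[j] ** 2)
              for (i, j) in E])
Mmat = np.zeros((6, 6))
for a, (oa, ta) in enumerate(E):
    for c, (ob, tb) in enumerate(E):
        if oa == tb and (ob, tb) != (ta, oa):
            Mmat[a, c] = 1.0
rho = max(abs(np.linalg.eigvals(np.diag(u) @ Mmat)))
```
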
It is interesting to ask under which conditions a local minimum of the Bethe free energy is a stable fixed point of (damped) LBP. While we do not know a complete answer, for an attractive model, which is defined by $J_{ij}\ge 0$, the following theorem implies that if a stable fixed point becomes unstable by changing $J_{ij}$ and $h_i$, the corresponding local minimum also disappears.
Theorem 6. Consider continuously parametrized attractive models $\{\psi_{ij}(t),\psi_i(t)\}$, e.g., $t$ is a temperature: $\psi_{ij}(t) = \exp(t^{-1}J_{ij}x_i x_j)$ and $\psi_i(t) = \exp(t^{-1}h_i x_i)$. For a given $t$, run the LBP algorithm and find a (stable) fixed point. If we continuously change $t$ and the LBP fixed point becomes unstable across $t = t_0$, then the corresponding local minimum of the Bethe free energy becomes a saddle point across $t = t_0$.
Proof. From (3), we see that $b_{ij}(x_i,x_j)\propto\exp(J_{ij}x_i x_j + \theta_i x_i + \theta_j x_j)$ for some $\theta_i$ and $\theta_j$. From $J_{ij}\ge 0$, we have $\mathrm{Cov}_{b_{ij}}[x_i,x_j] = \chi_{ij} - m_i m_j \ge 0$, and thus $u_{i\to j}\ge 0$. When the LBP fixed point becomes unstable, the Perron-Frobenius eigenvalue of $U\mathcal{M}$ goes over $1$, which means $\det(I - U\mathcal{M})$ crosses $0$. From Theorem 3 we see that $\det(\nabla^2 F)$ changes from positive to negative at $t = t_0$.
Theorem 6 extends Theorem 2 of [14], which discusses only the case of vanishing local fields $h_i = 0$ and the trivial fixed point (i.e., $m_i = 0$).
6 Application to uniqueness of LBP fixed point

The uniqueness of the LBP fixed point is a concern of many studies, because this property guarantees that LBP finds the global minimum of the Bethe free energy if it converges. The major approaches to uniqueness are to consider an equivalent minimax problem [5], contraction properties of the LBP dynamics [17, 18], and the theory of Gibbs measures [19]. We propose a different, differential topological approach to this problem.
In our approach, in combination with Theorem 3, the following theorem is the basic apparatus.
Theorem 7. If $\det\nabla^2 F(q)\neq 0$ for all $q\in(\nabla F)^{-1}(0)$, then
$$\sum_{q:\,\nabla F(q)=0}\mathrm{sgn}\big(\det\nabla^2 F(q)\big) = 1, \qquad \text{where } \mathrm{sgn}(x) := \begin{cases} 1 & \text{if } x > 0, \\ -1 & \text{if } x < 0. \end{cases}$$
We call each summand, which is $+1$ or $-1$, the index of $F$ at $q$.
Note that the set $(\nabla F)^{-1}(0)$, which is the set of stationary points of the Bethe free energy, coincides with the set of fixed points of LBP. The above theorem asserts that the sum of the indexes over all the fixed points must be one. As a consequence, the number of fixed points of LBP is always odd. Note also that the index is a local quantity, while the assertion expresses the global structure of the function $F$.
the index is a local quantity, while the assertion expresses the global structure of the function F .
For the proof of theorem 7, we prepare two lemmas. The proof of lemma 1 is shown in the supplementary material. Lemma 2 is a standard result in differential topology, and we refer [20] theorem
13.1.2 and comments in p.104 for the proof.
Lemma 1. If a sequence {qn } ? L(G) converges to a point q? ? ?L(G), then ??F (qn )? ? ?,
where ?L(G) is the boundary of L(G) ? RN +M .
Lemma 2. Let M1 and M2 be compact, connected and orientable manifolds with boundaries.
Assume that the dimensions of M1 and M2 are the same. Let f : M1 ? M2 be a smooth map
satisfying f (?M1 ) ? ?M2 . For a regular value of?
p ? M2 , i.e. det(?f (q)) ?= 0 for all q ? f ?1 (p),
we define the degree of the map f by deg f := q?f ?1 (p) sgn(det ?f (q)). Then deg f does not
depend on the choice of a regular value p ? M2 .
Sketch of proof. Define a map $\Phi: L(G)\to\mathbb{R}^{N+M}$ by $\Phi := \nabla F + \binom{h}{J}$. Note that $\Phi$ does not depend on $h$ and $J$, as seen from (6). Then it is enough to prove
$$\sum_{q\in\Phi^{-1}\left(\binom{h}{J}\right)}\mathrm{sgn}(\det\nabla\Phi(q)) = \sum_{q\in\Phi^{-1}(0)}\mathrm{sgn}(\det\nabla\Phi(q)), \qquad (17)$$
because $\Phi^{-1}(0)$ has a unique element $\{m_i = 0, \chi_{ij} = 0\}$, at which $\nabla^2 F$ is positive definite, and hence the right hand side of (17) is equal to one. Define a sequence of manifolds $\{C_n\}$ by $C_n := \{q\in L(G) \mid \sum_{ij\in E}\sum_{x_i,x_j}\log b_{ij} \ge -n\}$, which increasingly converges to $L(G)$. Take $K > 0$ and $\epsilon > 0$ satisfying $K - \epsilon > \|\binom{h}{J}\|$. From Lemma 1, for sufficiently large $n_0$, we have $\Phi^{-1}(0), \Phi^{-1}\left(\binom{h}{J}\right)\subset C_{n_0}$ and $\Phi(\partial C_{n_0})\cap B_0(K) = \emptyset$, where $B_0(K)$ is the closed ball of radius $K$ at the origin. Let $\varphi_\epsilon: \mathbb{R}^{N+M}\to B_0(K)$ be a smooth map that is the identity on $B_0(K-\epsilon)$, monotonically increasing in $\|x\|$, and $\varphi_\epsilon(x) = \frac{K}{\|x\|}x$ for $\|x\|\ge K$. We obtain a map $\tilde\Phi := \varphi_\epsilon\circ\Phi: C_{n_0}\to B_0(K)$ such that $\tilde\Phi(\partial C_{n_0})\subset\partial B_0(K)$. Applying Lemma 2 yields (17).
If we can guarantee, in advance of running LBP, that the index of every fixed point is $+1$, we conclude that the fixed point of LBP is unique. We have the following a priori information at the fixed points.
Lemma 3. Let $\beta_{ij}$ be given by (15) at any fixed point of LBP. Then $|\beta_{ij}| \le \tanh(|J_{ij}|)$ and $\mathrm{sgn}(\beta_{ij}) = \mathrm{sgn}(J_{ij})$ hold.
Proof. From (3), we see that $b_{ij}(x_i,x_j)\propto\exp(J_{ij}x_i x_j + \theta_i x_i + \theta_j x_j)$ for some $\theta_i$ and $\theta_j$. With (15) and a straightforward computation, we obtain $\beta_{ij} = \sinh(2J_{ij})\,(\cosh(2\theta_i)+\cosh(2J_{ij}))^{-1/2}(\cosh(2\theta_j)+\cosh(2J_{ij}))^{-1/2}$. The bound is attained when $\theta_i = 0$ and $\theta_j = 0$.
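Lemma 3 can be checked directly on a single-edge model, where the beliefs coincide with the exact distribution: enumerate $p(x,y)\propto\exp(J xy + \theta_1 x + \theta_2 y)$, compute $\beta$ from (15), and verify the bound, the sign, and the closed form from the proof. The parameter values and the function name are arbitrary choices of ours.

```python
import math

def beta_exact(Jc, t1, t2):
    # exact pairwise statistics of p(x, y) ∝ exp(Jc*x*y + t1*x + t2*y), x, y in {-1, +1}
    S = (-1, 1)
    w = {(a, c): math.exp(Jc*a*c + t1*a + t2*c) for a in S for c in S}
    Z = sum(w.values())
    m1 = sum(a * w[(a, c)] for a in S for c in S) / Z
    m2 = sum(c * w[(a, c)] for a in S for c in S) / Z
    chi = sum(a * c * w[(a, c)] for a in S for c in S) / Z
    return (chi - m1*m2) / math.sqrt((1 - m1**2) * (1 - m2**2))

Jc, t1, t2 = -0.8, 0.4, -0.3
beta = beta_exact(Jc, t1, t2)
closed = math.sinh(2*Jc) / math.sqrt((math.cosh(2*t1) + math.cosh(2*Jc))
                                     * (math.cosh(2*t2) + math.cosh(2*Jc)))
```

At $\theta_1 = \theta_2 = 0$ the closed form reduces to $\sinh(2J)/(1+\cosh(2J)) = \tanh J$, which is where the bound is attained.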
From Theorem 7 and Lemma 3, we can immediately obtain the uniqueness condition of [18], though the stronger contraction property is proved under the same condition in [18].
Figure 1: Graph of Example 2.
Figure 2: Graph $\tilde G$.
Figure 3: Two other types.
Corollary 3 ([18]). If $\rho(\mathcal{J}\mathcal{M}) < 1$, then the fixed point of LBP is unique, where $\mathcal{J}$ is a diagonal matrix defined by $\mathcal{J}_{e,e'} := \tanh(|J_e|)\,\delta_{e,e'}$.
Proof. Since $|\beta_{ij}| \le \tanh(|J_{ij}|)$, we have $\rho(B\mathcal{M}) \le \rho(\mathcal{J}\mathcal{M}) < 1$ ([12], Theorem 8.1.18). Then $\det(I - B\mathcal{M}) = \det(I - U\mathcal{M}) > 0$ implies that the index of any LBP fixed point must be $+1$.
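The condition of Corollary 3 is cheap to evaluate; a sketch with arbitrary couplings of ours: for a weakly coupled cycle $\rho(\mathcal{J}\mathcal{M}) < 1$ certifies uniqueness, while for strong couplings on $K_4$ the criterion is not satisfied (which by itself does not decide uniqueness either way).

```python
import numpy as np

def edge_matrix(und):
    E = und + [(j, i) for (i, j) in und]
    n = len(E)
    M = np.zeros((n, n))
    for a, (oa, ta) in enumerate(E):
        for b, (ob, tb) in enumerate(E):
            if oa == tb and (ob, tb) != (ta, oa):
                M[a, b] = 1.0
    return M

def spectral_radius_JM(und, Jvals):
    # rho(J M) with J = diag(tanh|J_e|); both directions of an edge share J_e
    M = edge_matrix(und)
    t = np.array([np.tanh(abs(Jvals[e])) for e in und] * 2)
    return max(abs(np.linalg.eigvals(np.diag(t) @ M)))

cycle = [(0, 1), (1, 2), (2, 0)]
rho_weak = spectral_radius_JM(cycle, {e: 0.2 for e in cycle})

k4 = [(i, j) for i in range(4) for j in range(i + 1, 4)]
rho_strong = spectral_radius_JM(k4, {e: 1.5 for e in k4})
```
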
In the proof of the above corollary, we only used the bound on the modulus. In the following case of Corollary 4, we can also utilize the information of the signs. To state the corollary, we need a piece of terminology. The interactions $\{J_{ij}, h_i\}$ and $\{J'_{ij}, h'_i\}$ are said to be equivalent if there exists $(s_i)\in\{\pm 1\}^V$ such that $J'_{ij} = J_{ij}s_i s_j$ and $h'_i = h_i s_i$. Since an equivalent model is obtained by the gauge transformation $x_i\to x_i s_i$, the uniqueness property of LBP for equivalent models is unchanged.
Corollary 4. If the number of linearly independent cycles of $G$ is two (i.e., $M - N + 1 = 2$), and the interaction is not equivalent to an attractive model, then the LBP fixed point is unique.
The proof is given in the supplementary material. We give an example to illustrate the outline.
Example 2. Let $V := \{1,2,3,4\}$ and $E := \{12, 13, 14, 23, 34\}$. The interactions are given by arbitrary $\{h_i\}$ and $\{-J_{12}, J_{13}, J_{14}, J_{23}, J_{34}\}$ with $J_{ij}\ge 0$; see Figure 1. It is enough to check that $\det(I - B\mathcal{M}) > 0$ for arbitrary $0\le\beta_{13},\beta_{23},\beta_{14},\beta_{34} < 1$ and $-1 < \beta_{12}\le 0$. Since the prime cycles of $G$ bijectively correspond to those of $\tilde G$ (in Figure 2), we have $\det(I - B\mathcal{M}) = \det(I - \tilde B\tilde{\mathcal{M}})$, where $\tilde\beta_{e_1} = \beta_{12}\beta_{23}$, $\tilde\beta_{e_2} = \beta_{13}$, and $\tilde\beta_{e_3} = \beta_{14}\beta_{34}$. We see that
$$\det(I - \tilde B\tilde{\mathcal{M}}) = \big(1 - \tilde\beta_{e_1}\tilde\beta_{e_2} - \tilde\beta_{e_1}\tilde\beta_{e_3} - \tilde\beta_{e_2}\tilde\beta_{e_3} - 2\tilde\beta_{e_1}\tilde\beta_{e_2}\tilde\beta_{e_3}\big)\big(1 - \tilde\beta_{e_1}\tilde\beta_{e_2} - \tilde\beta_{e_1}\tilde\beta_{e_3} - \tilde\beta_{e_2}\tilde\beta_{e_3} + 2\tilde\beta_{e_1}\tilde\beta_{e_2}\tilde\beta_{e_3}\big) > 0.$$
In the other cases, we can similarly reduce to the graph $\tilde G$ or the graphs in Figure 3 (see the supplementary material). For attractive models, the fixed point of LBP is not necessarily unique.
For graphs with multiple cycles, all the existing results on uniqueness make assumptions that essentially upper-bound $|J_{ij}|$. In contrast, Corollary 4 applies to arbitrary strengths of interactions if the graph has two cycles and the interactions are not attractive. It is noteworthy that, from Corollary 2, the Bethe free energy is non-convex in the situation of Corollary 4, while the fixed point is unique.
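The determinant factorization used in Example 2 can be verified mechanically: build $\tilde{\mathcal{M}}$ for the two-vertex, three-edge graph $\tilde G$, fill $\tilde B$ with the $\tilde\beta$ values (the same value on an edge and its inverse), and compare $\det(I - \tilde B\tilde{\mathcal{M}})$ with the stated product. The random values below are arbitrary, with $\tilde\beta_{e_1}\le 0$ as in Example 2.

```python
import numpy as np

rng = np.random.default_rng(2)
b1 = -rng.uniform(0.0, 1.0)             # beta of the negative-product cycle edge (b1 <= 0)
b2, b3 = rng.uniform(0.0, 1.0, size=2)

# G~: two vertices joined by three parallel edges; indices 0..2: 0 -> 1, 3..5: inverses
E = [(0, 1)] * 3 + [(1, 0)] * 3
beta = np.array([b1, b2, b3, b1, b2, b3])
M = np.zeros((6, 6))
for r in range(6):
    for c in range(6):
        inv_r = r + 3 if r < 3 else r - 3
        if E[r][0] == E[c][1] and c != inv_r:   # o(e) = t(e') and e' is not the inverse of e
            M[r, c] = 1.0

lhs = np.linalg.det(np.eye(6) - np.diag(beta) @ M)
s = b1*b2 + b1*b3 + b2*b3
rhs = (1 - s - 2*b1*b2*b3) * (1 - s + 2*b1*b2*b3)
```

Parallel edges make the exclusion $e'\neq\bar e$ depend on edge identity, not just on endpoints, which is why the code tracks edge indices rather than vertex pairs.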
7 Concluding remarks
For binary pairwise models, we show the connection between the edge zeta function and the Bethe free energy in Theorem 3, in the proof of which the multivariable version of Ihara's formula (Theorem 2) is essential. After the initial submission of this paper, we found that Theorem 3 extends to a more general class of models, including multinomial models and Gaussian models represented by arbitrary factor graphs. We will discuss the extended formula and its applications in a future paper.
Some recent research on LBP has suggested the importance of the zeta function. In the context of LDPC codes, which are an important application of LBP, Koetter et al. [21, 22] show the connection between pseudo-codewords and the edge zeta function. For LBP on Gaussian graphical models, Johnson et al. [23] give a zeta-like product formula for the partition function. While these are not directly related to our work, pursuing such connections is an interesting future research topic.
Acknowledgements
This work was supported in part by Grant-in-Aid for JSPS Fellows 20-993 and Grant-in-Aid for
Scientific Research (C) 19500249.
References
[1] J. Pearl. Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. Morgan Kaufmann Publishers, San Mateo, CA, 1988.
[2] K. Murphy, Y. Weiss, and M.I. Jordan. Loopy belief propagation for approximate inference: An empirical study. Proc. of Uncertainty in AI, 15:467-475, 1999.
[3] J.S. Yedidia, W.T. Freeman, and Y. Weiss. Generalized belief propagation. Adv. in Neural Information Processing Systems, 13:689-695, 2001.
[4] Y. Weiss. Correctness of local probability propagation in graphical models with loops. Neural Computation, 12(1):1-41, 2000.
[5] T. Heskes. On the uniqueness of loopy belief propagation fixed points. Neural Computation, 16(11):2379-2413, 2004.
[6] T. Heskes. Stable fixed points of loopy belief propagation are minima of the Bethe free energy. Adv. in Neural Information Processing Systems, 15, pages 343-350, 2002.
[7] K. Hashimoto. Zeta functions of finite graphs and representations of p-adic groups. Automorphic Forms and Geometry of Arithmetic Varieties, 15:211-280, 1989.
[8] H.M. Stark and A.A. Terras. Zeta functions of finite graphs and coverings. Advances in Mathematics, 121(1):124-165, 1996.
[9] Y. Ihara. On discrete subgroups of the two by two projective linear group over p-adic fields. Journal of the Mathematical Society of Japan, 18(3):219-235, 1966.
[10] H. Bass. The Ihara-Selberg zeta function of a tree lattice. Internat. J. Math., 3(6):717-797, 1992.
[11] P. Pakzad and V. Anantharam. Belief propagation and statistical physics. Conference on Information Sciences and Systems, (225), 2002.
[12] R.A. Horn and C.R. Johnson. Matrix Analysis. Cambridge University Press, 1990.
[13] K. Hashimoto. On zeta and L-functions of finite graphs. Internat. J. Math., 1(4):381-396, 1990.
[14] J.M. Mooij and H.J. Kappen. On the properties of the Bethe approximation and loopy belief propagation on binary networks. Journal of Statistical Mechanics: Theory and Experiment, (11):P11012, 2005.
[15] S. Ikeda, T. Tanaka, and S. Amari. Information geometry of turbo and low-density parity-check codes. IEEE Transactions on Information Theory, 50(6):1097-1114, 2004.
[16] C. Furtlehner, J.M. Lasgouttes, and A. De La Fortelle. Belief propagation and Bethe approximation for traffic prediction. INRIA RR-6144, arXiv preprint physics/0703159, 2007.
[17] A.T. Ihler, J.W. Fisher, and A.S. Willsky. Loopy belief propagation: Convergence and effects of message errors. Journal of Machine Learning Research, 6(1):905-936, 2006.
[18] J.M. Mooij and H.J. Kappen. Sufficient conditions for convergence of the sum-product algorithm. IEEE Transactions on Information Theory, 53(12):4422-4437, 2007.
[19] S. Tatikonda and M.I. Jordan. Loopy belief propagation and Gibbs measures. Uncertainty in AI, 18:493-500, 2002.
[20] B.A. Dubrovin, A.T. Fomenko, S.P. Novikov, and R.G. Burns. Modern Geometry: Methods and Applications, Part 2: The Geometry and Topology of Manifolds. Springer-Verlag, 1985.
[21] R. Koetter, W.C.W. Li, P.O. Vontobel, and J.L. Walker. Pseudo-codewords of cycle codes via zeta functions. IEEE Information Theory Workshop, pages 6-12, 2004.
[22] R. Koetter, W.C.W. Li, P.O. Vontobel, and J.L. Walker. Characterizations of pseudo-codewords of (low-density) parity-check codes. Advances in Mathematics, 213(1):205-229, 2007.
[23] J.K. Johnson, V.Y. Chernyak, and M. Chertkov. Orbit-product representation and correction of Gaussian belief propagation. Proceedings of the 26th International Conference on Machine Learning, (543), 2009.
A Neural Network Approach for
Three-Dimensional Object Recognition
Volker Tresp
Siemens AG, Central Research and Development
Otto-Hahn-Ring 6, D-8000 München 83
Germany
Abstract
The model-based neural vision system presented here determines the position and identity of three-dimensional objects. Two stereo images of a scene are described in terms of shape primitives (line segments derived from edges in the scenes) and their relational structure. A recurrent neural matching network solves the correspondence problem by assigning corresponding line segments in right and left stereo images. A 3-D relational scene description is then generated and matched by a second neural network against models in a model base. The quality of the solutions and the convergence speed were both improved by using mean field approximations.
1 INTRODUCTION
Many machine vision systems and, to a large extent, also the human visual system, are model based. The scenes are described in terms of shape primitives and their relational structure, and the vision system tries to find a match between the scene descriptions and 'familiar' objects in a model base. In many situations, such as robotics applications, the problem is intrinsically 3-D. Different approaches are possible. Poggio and Edelman (1990) describe a neural network that treats the 3-D object recognition problem as a multivariate approximation problem. A certain number of 2-D views of the object are used to train a neural network to produce the standard view of that object. After training, new perspective views can be recognized.
In the approach presented here, the vision system tries to capture the true 3-D structure of the scene. Two stereo views of a scene are used to generate a 3-D
Figure 1: Match of primitive P_α to P_i. (Panels show matched model and scene primitives with attributes: type: line segment, p(1) (length), angle.)
Figure 2: Definitions of r, q, and φ (left). The function μ(·) (right).
description of the scene which is then matched against models in a model base. The
stereo correspondence problem and the model matching problem are solved by two
recurrent neural networks with very similar architectures. A neuron is assigned to
every possible match between primitives in the left and right images or, respectively,
the scene and the model base. The networks are designed to find the best matches
by obeying certain uniqueness constraints.
The networks are robust against the uncertainties in the descriptions of both the
stereo images and the 3-D scene (shadow lines, missing lines). Since a partial match
is sufficient for a successful model identification, opaque and partially occluded
objects can be recognized.
2 THE NETWORK ARCHITECTURE
Here, a general model matching task is considered. The activity of a match neuron m_αi (Figure 1) represents the certainty of a match between a primitive P_α in the model base and P_i in the scene description. The interactions between neurons can be derived from the network's energy function, where the fixed points of the network
correspond to the minima of the energy function. The first term in the energy
function evaluates the match between the primitives:

    E_P = -1/2 Σ_{α,i} κ_{αi} m_{αi}                                  (1)

The function κ_{αi} is zero if the type of primitive P_α is not equal to the type of primitive P_i. If both types are identical, κ_{αi} evaluates the agreement between parameters p_α(k) and p_i(k), which describe properties of the primitives. Here, κ_{αi} = μ(Σ_k |p_α(k) - p_i(k)|/σ_k) is maximum if the parameters of P_α and P_i match (Figures 1 and 2).
The evaluation of the match between the relations of primitives in the scene and data base is performed by the energy term (Mjolsness, Gindi and Anandan, 1989)

    E_S = -1/2 Σ_{α,β,i,j} χ_{αβ,ij} m_{αi} m_{βj}                    (2)

The function χ_{αβ,ij} = μ(Σ_k |p_{αβ}(k) - p_{ij}(k)|/σ_k) is maximum if the relation between P_α and P_β matches the relation between P_i and P_j.
The constraint that a primitive in the scene should only match to one or no primitive in the model (column constraint) is implemented by the additional (penalty) energy term (Utans et al., 1989; Tresp and Gindi, 1990)

    E_C = Σ_i [((Σ_α m_{αi}) - 1)^2 Σ_α m_{αi}]                       (3)

E_C is equal to zero only if, in all columns, the sum over the activations of all neurons is equal to one or zero, and positive otherwise.
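As a concrete sketch of how the column-constraint penalty behaves (illustrative Python, not part of the original paper; the matrix layout and names are assumptions):

```python
def column_penalty(m):
    """Penalty term E_C = sum_i [((sum_a m[a][i]) - 1)^2 * sum_a m[a][i]].

    m is a matrix of match-neuron activities in [0, 1]; rows index model
    primitives (alpha), columns index scene primitives (i).  The penalty
    vanishes exactly when every column sums to 0 or 1, and is positive
    otherwise, as stated in the text."""
    total = 0.0
    for i in range(len(m[0])):
        col_sum = sum(row[i] for row in m)
        total += (col_sum - 1.0) ** 2 * col_sum
    return total

# A one-match-per-column assignment incurs no penalty; a doubly matched
# column does.
assert column_penalty([[1.0, 0.0], [0.0, 1.0]]) == 0.0
assert column_penalty([[1.0, 1.0], [0.0, 1.0]]) > 0.0
```

An empty column (sum 0) is also penalty-free, which is what allows unmatched scene primitives such as shadow lines.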
2.1 DYNAMIC EQUATIONS AND MEAN FIELD THEORY
2.1.1 MFA1
The neural network should make binary decisions, match or no match, but binary recurrent networks get easily stuck in local minima. Bad local minima can be avoided by using an annealing strategy, but annealing is time-consuming when simulated on a digital computer. Using a mean field approximation, one can obtain deterministic equations that retain some of the advantages of the annealing process (Peterson and Soderberg, 1989). The network is interpreted as a system of interacting units in thermal contact with a heat reservoir of temperature T. Such a system minimizes the free energy F = E - TS, where S is the entropy of the system. At T = 0 the energy E is minimized. The mean value v_{αi} = <m_{αi}> of a neuron becomes v_{αi} = 1/(1 + e^{-u_{αi}/T}) with u_{αi} = -∂E/∂v_{αi}. These equations can be updated synchronously, asynchronously, or solved iteratively by moving only a small distance from the old value of v_{αi} in the direction of the new mean field.
At high temperatures T, the system is in the trivial solution v_{αi} = 1/2 for all α, i, and the activations of all neurons are in the linear region of the sigmoid function. The system can be described by linearized equations. The magnitudes of all eigenvalues of the corresponding transfer matrix are less than 1. At a critical temperature T_c, the magnitude of at least one of the eigenvalues becomes greater than one and the trivial solution becomes unstable. T_c and favorable weights for the different terms in the energy function can be found by an eigenvalue analysis of the linearized equations (Peterson and Soderberg, 1989).
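The MFA1 iteration can be sketched as follows (illustrative Python, not from the paper; the toy energy, step size, and annealing schedule are assumptions):

```python
import math

def mfa1_step(u_of, v, T, step=0.1):
    """One damped MFA1 update: move each mean activity v_ai a small distance
    toward its new mean field 1/(1 + exp(-u_ai/T)), where u_ai = -dE/dv_ai.
    u_of(v) returns the list of mean fields for the current activities."""
    u = u_of(v)
    return [vi + step * (1.0 / (1.0 + math.exp(-ui / T)) - vi)
            for vi, ui in zip(v, u)]

# Toy energy E = -sum_i w_i v_i, so the mean fields u_i = w_i are constant.
w = [2.0, -2.0]
v = [0.5, 0.5]
for T in (4.0, 2.0, 1.0, 0.5):          # simple annealing schedule
    for _ in range(200):
        v = mfa1_step(lambda _v: w, v, T)
assert v[0] > 0.9 and v[1] < 0.1        # positive field on, negative field off
```

In the real matching network the mean fields couple the neurons through E_P, E_S and the constraint terms; here they are constants only to keep the sketch self-contained.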
2.1.2 MFA2
The column constraint is satisfied by states with exactly one neuron or no neuron 'on' in every column. If only these states are considered in the derivation of the mean field equations, one can obtain another set of mean field equations, v_{αi} = e^{u_{αi}/T}/(1 + Σ_β e^{u_{βi}/T}), with u_{αi} = -∂E/∂v_{αi}.
The column constraint term (Equation 3) drops out of the energy function and the energy surface is simplified. The high-temperature fixed point corresponds to v_{αi} = 1/(N + 1) for all α, i, where N is the number of rows.
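A sketch of the MFA2 update for a single column (illustrative Python with hypothetical mean-field values):

```python
import math

def mfa2_column(u_col, T):
    """Mean activities for one column under MFA2: at most one neuron 'on'.

    v_ai = exp(u_ai/T) / (1 + sum_b exp(u_bi/T)); the 1 in the denominator
    accounts for the all-off ('no match') state, so the column activities
    always sum to less than one."""
    e = [math.exp(u / T) for u in u_col]
    z = 1.0 + sum(e)
    return [x / z for x in e]

v = mfa2_column([3.0, 0.0, -1.0], T=1.0)
assert max(v) == v[0] and sum(v) < 1.0
# With vanishing mean fields (the high-temperature fixed point) every
# activity equals 1/(N + 1), as in the text.
assert mfa2_column([0.0, 0.0, 0.0], T=1.0) == [0.25, 0.25, 0.25]
```

Because the column constraint is built into the update itself, the penalty term of Equation 3 is no longer needed, which is why the energy surface is simpler.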
3
THE CORRESPONDENCE PROBLEM
To solve the correspondence problem, corresponding lines in left and right images have to be identified. A good assumption is that the appearance of an object in the left image is a distorted and shifted version of the appearance of the object in the right image with approximately the same scale and orientation. The machinery just developed can be applied if the left image is interpreted as the scene and the right image as the model.
Figure 3 shows two stereo images of a simple scene and the segmentation of left and right images into line segments, which are the only primitives in this application. Lines correspond to the edges, structure and contours of the objects and shadow lines. The length of a line segment p_i(1) = l_i is the descriptive parameter attached to each line segment P_i. Relations between line segments are only considered if they are in a local neighborhood: χ_{αβ,ij} is equal to zero if not both a) P_α is attached to line segment P_β and b) P_i is attached to line segment P_j. Otherwise, χ_{αβ,ij} = μ(|φ_{αβ} - φ_{ij}|/σ_φ + |r_{αβ} - r_{ij}|/σ_r + |q_{αβ} - q_{ij}|/σ_q), where p_{ij}(1) = φ_{ij} is the angle between line segments, p_{ij}(2) = r_{ij} the logarithm of the ratio of their lengths, and p_{ij}(3) = q_{ij} the attachment point (Shumaker et al., 1989) (Figure 2).
Here, we have two uniqueness constraints: at most one neuron should be active in each column or each row. The row constraint is enforced by an energy term equivalent to E_C: E_R = Σ_α [((Σ_i m_{αi}) - 1)^2 Σ_i m_{αi}].
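For concreteness, the first two relational parameters between two line segments, the angle φ and the log length ratio r, can be computed as below (a sketch; the segment representation is an assumption and the attachment point q is omitted):

```python
import math

def relational_features(seg_a, seg_b):
    """Angle and log length ratio between two line segments, the first two
    relational parameters used by the correspondence network.  Segments are
    ((x1, y1), (x2, y2)) endpoint pairs; the attachment point q is left out
    of this sketch."""
    ax, ay = seg_a[1][0] - seg_a[0][0], seg_a[1][1] - seg_a[0][1]
    bx, by = seg_b[1][0] - seg_b[0][0], seg_b[1][1] - seg_b[0][1]
    la, lb = math.hypot(ax, ay), math.hypot(bx, by)
    cos_phi = (ax * bx + ay * by) / (la * lb)
    phi = math.acos(max(-1.0, min(1.0, cos_phi)))   # clamp rounding noise
    return phi, math.log(la / lb)

# Perpendicular segments, one twice as long as the other.
phi, r = relational_features(((0, 0), (2, 0)), ((0, 0), (0, 1)))
assert abs(phi - math.pi / 2) < 1e-12
assert abs(r - math.log(2.0)) < 1e-12
```

Using the logarithm of the length ratio makes the feature symmetric up to sign, so swapping the two segments only flips r.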
4
DESCRIPTION OF THE 3-D OBJECT STRUCTURE
From the last section, we know which endpoints in the left image correspond to endpoints in the right image. If D is the separation of the two (parallel mounted) cameras, f the focal length of the cameras, and (x_l, y_l), (x_r, y_r) the coordinates of a particular point in the left and right images, the 3-D position of the point in camera coordinates (x, y, z) becomes

    z = Df/(x_l - x_r),   y = z y_r/f,   x = z x_r/f + D/2.

This information is used to generate the 3-D description of the visible portion of the objects in the scene.
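The triangulation step can be sketched as follows (illustrative Python; the sign convention follows the formulas in the text, and all names are assumptions):

```python
def triangulate(xl, yl, xr, yr, D, f):
    """3-D position (camera frame) of a point seen at (xl, yl) in the left
    image and (xr, yr) in the right image, for two parallel mounted cameras
    with separation D and focal length f."""
    z = D * f / (xl - xr)   # depth from disparity
    y = z * yr / f
    x = z * xr / f + D / 2.0
    return x, y, z

# Round trip: project a known point through two pinhole cameras (left
# camera at x = -D/2, right camera at x = +D/2), then triangulate.
D, f = 0.2, 1.0
px, py, pz = 0.3, 0.1, 2.0
xl, yl = f * (px + D / 2) / pz, f * py / pz
xr, yr = f * (px - D / 2) / pz, f * py / pz
rx, ry, rz = triangulate(xl, yl, xr, yr, D, f)
assert abs(rx - px) < 1e-9 and abs(ry - py) < 1e-9 and abs(rz - pz) < 1e-9
```

The disparity x_l - x_r shrinks with depth, so small matching errors on distant segments translate into large depth errors; the relational matching of Section 5 is therefore done on the reconstructed 3-D segments rather than on raw disparities.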
Knowing the true 3-D position of the endpoints of the line segments, the system
concludes that the chair and the wardrobe are two distinct and spatially separated
objects and that line segments 12 and 13 in the right image and 12 in the left image
are not connected to either the chair or the wardrobe. On the other hand, it is not
Figure 3: Stereo images of a scene and segmented images. The stereo matching network matched all line segments that are present in both images correctly.
obvious that the shadow lines under the wardrobe are not part of the wardrobe.
5
MATCHING OBJECTS AND MODELS
The scene description now must be matched with stored models describing the
complete 3-D structures of the models in the data base. The model description
might be constructed by either explicitly measuring the dimensions of the models
or by incrementally assembling the 3-D structure from several stereo views of the
models. Descriptive parameters are the (true 3-D) lengths of line segments l, the (true 3-D) angles φ between line segments, and the (true 3-D) attachment points q. The knowledge about the 3-D structure allows a segmentation of the scene into different objects, and the row constraint is only applied to neurons relating to the same object O in the scene: E_R' = Σ_O Σ_α [((Σ_{i∈O} m_{αi}) - 1)^2 Σ_{i∈O} m_{αi}].
Figure 4 shows the network after convergence. Except for the occluded leg, all line
segments belonging to the chair could be matched correctly. All not occluded line
segments of the wardrobe could be matched correctly except for its left front leg.
The shadow lines in the image did not find a match.
6
3-D POSITION
In many applications, one is also interested in determining the positions of the
recognized objects in camera coordinates. In general, the transformation between
Figure 4: 3-D matching network.
an object in a standard frame of reference X_O = (x_O, y_O, z_O) and the transformed frame of reference X_S = (x_S, y_S, z_S) can be described by X_S = R X_O, where R is a 4 x 4 matrix describing a rotation followed by a translation. R can be calculated if X_O and X_S are known for at least 4 points using, for example, the pseudo inverse or an ADALINE. Knowing the coefficients of R, the object position can be calculated. If an ADALINE is used, the error after convergence is a measure of the consistency of the transformation. A large error can be used as an indication that either a wrong model was matched, or certain primitives were misclassified.
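A sketch of the pseudo-inverse route for recovering the transformation (pure-Python normal equations; all names are illustrative, and only the top 3 x 4 block [R|t] of the homogeneous 4 x 4 matrix is fitted, the last row being (0, 0, 0, 1)):

```python
def solve(A, b):
    """Gaussian elimination with partial pivoting for a small square system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            k = M[r][c] / M[c][c]
            for j in range(c, n + 1):
                M[r][j] -= k * M[c][j]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][j] * x[j] for j in range(r + 1, n))) / M[r][r]
    return x

def fit_transform(Xo, Xs):
    """Least-squares fit (normal equations) of the 3x4 block [R|t] mapping
    homogeneous model points Xo to scene points Xs; needs at least 4
    non-coplanar point pairs."""
    H = [list(p) + [1.0] for p in Xo]             # homogeneous model points
    AtA = [[sum(h[i] * h[j] for h in H) for j in range(4)] for i in range(4)]
    return [solve(AtA, [sum(h[i] * q[d] for h, q in zip(H, Xs))
                        for i in range(4)])
            for d in range(3)]                    # one row of [R|t] per axis

# Recover a pure translation by (1, 2, 3) from four non-coplanar points.
Xo = [[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
Xs = [[p[0] + 1.0, p[1] + 2.0, p[2] + 3.0] for p in Xo]
Rt = fit_transform(Xo, Xs)
assert all(abs(Rt[d][3] - (1.0, 2.0, 3.0)[d]) < 1e-9 for d in range(3))
```

The residual of this least-squares fit plays the role of the consistency measure mentioned above: a rigid transform that explains all matched points has near-zero error, while a wrong model match leaves a large residual.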
7 DISCUSSION
Both MFA1 and MFA2 were used in the experiments. The same solutions were found in general, but due to the simpler energy surface, MFA2 allowed greater time steps and therefore converged 5 to 10 times faster.
For more complex scenes, a hierarchical system could be considered. In the first step, simple objects such as squares, rectangles, and circles would be identified. These would then form the primitives in a second stage which would then recognize complete objects. It might also be possible to combine these two matching nets into one hierarchical net similar to the networks described by Mjolsness, Gindi and Anandan (1989).
Acknowledgements
I would like to acknowledge the contributions of Gene Gindi, Eric Mjolsness and Joachim Utans of Yale University to the design of the matching network. I thank Christian Evers for helping me to acquire the images.
References
Eric Mjolsness, Gene Gindi, P. Anandan. Optimization in Model Matching and Perceptual Organization. Neural Computation 1, pp. 218-229, 1989.
Carsten Peterson, Bo Soderberg. A new method for mapping optimization problems onto neural networks. International Journal of Neural Systems, Vol. 1, No. 1, pp. 3-22, 1989.
T. Poggio, S. Edelman. A Network That Learns to Recognize Three-Dimensional Objects. Nature, No. 6255, pp. 263-266, January 1990.
Grant Shumaker, Gene Gindi, Eric Mjolsness, P. Anandan. Stickville: A Neural Net for Object Recognition via Graph Matching. Tech. Report No. 8908, Yale University, 1989.
Volker Tresp, Gene Gindi. Invariant Object Recognition by Inexact Subgraph Matching with Applications in Industrial Part Recognition. International Neural Network Conference, Paris, pp. 95-98, 1990.
Joachim Utans, Gene Gindi, Eric Mjolsness, P. Anandan. Neural Networks for Object Recognition within Compositional Hierarchies, Initial Experiments. Tech. Report No. 8903, Yale University, 1989.
The Infinite Partially Observable Markov Decision
Process
Finale Doshi-Velez
Cambridge University
Cambridge, CB21PZ, UK
[email protected]
Abstract
The Partially Observable Markov Decision Process (POMDP) framework has
proven useful in planning domains where agents must balance actions that provide knowledge and actions that provide reward. Unfortunately, most POMDPs
are complex structures with a large number of parameters. In many real-world
problems, both the structure and the parameters are difficult to specify from domain knowledge alone. Recent work in Bayesian reinforcement learning has made
headway in learning POMDP models; however, this work has largely focused on
learning the parameters of the POMDP model. We define an infinite POMDP
(iPOMDP) model that does not require knowledge of the size of the state space;
instead, it assumes that the number of visited states will grow as the agent explores
its world and only models visited states explicitly. We demonstrate the iPOMDP
on several standard problems.
1
Introduction
The Partially Observable Markov Decision Process (POMDP) model has proven attractive in domains where agents must reason in the face of uncertainty because it provides a framework for
agents to compare the values of actions that gather information and actions that provide immediate reward. Unfortunately, modelling real-world problems as POMDPs typically requires a domain
expert to specify both the structure of the problem and a large number of associated parameters,
both of which are often difficult tasks. Current methods in reinforcement learning (RL) focus
on learning the parameters online, that is, while the agent is acting in its environment. Bayesian
RL [1, 2, 3] has recently received attention because it allows the agent to reason both about uncertainty in its model of the environment and uncertainty within environment itself. However, these
methods also tend to focus on learning parameters of an environment rather than the structure.
In the context of POMDP learning, several algorithms [4, 5, 6, 7] have applied Bayesian methods
to reason about the unknown model parameters. All of these approaches provide the agent with the
size of the underlying state space and focus on learning the transition and observation1 dynamics for
each state. Even when the size of the state space is known, however, just making the agent reason
about a large number of unknown parameters at the beginning of the learning process is fraught with
difficulties. The agent has insufficient experience to fit a large number of parameters, and therefore
much of the model will be highly uncertain. Trying to plan under vast model uncertainty often
requires significant computational resources; moreover, the computations are often wasted effort
when the agent has very little data. Using a point estimate of the model instead?that is, ignoring
the model uncertainty?can be highly inaccurate if the expert?s prior assumptions are a poor match
for the true model.
1. [7] also learns rewards.
We propose a nonparametric approach to modelling the structure of the underlying space, specifically the number of states in the agent's world, which allows the agent to start with a simple model and grow it with experience. Building on the infinite hidden Markov model (iHMM) [8], the infinite POMDP (iPOMDP) model posits that the environment contains an unbounded number of
states. The agent is expected to stay in a local region; however, as time passes, it may explore states
that it has not visited before. Initially, the agent will infer simple, local models of the environment
corresponding to its limited experience (also conducive to fast planning). It will dynamically add
structure as it accumulates evidence for more complex models. Finally, a data-driven approach to
structure discovery allows the agent to agglomerate states with identical dynamics (see section 4 for
a toy example).
2
The Infinite POMDP Model
A POMDP consists of the n-tuple {S, A, O, T, Ω, R, γ}. S, A, and O are sets of states, actions, and observations. The transition function T(s'|s, a) defines the distribution over next states s' to which the agent may transition after taking action a from state s. The observation function Ω(o|s', a) is a distribution over observations o that may occur in state s' after taking action a. The reward function R(s, a) specifies the immediate reward for each state-action pair (see Figure 1 for a slice of the graphical model). The factor γ ∈ [0, 1) weighs the importance of current and future rewards.
We focus on discrete state and observation spaces (generalising to continuous observations is straightforward) and finite action spaces. The size of
the state space is unknown and potentially unbounded. The transitions,
observations, and rewards are modelled with an iHMM.
Figure 1: A time-slice of the POMDP model.
The Infinite Hidden Markov Model A standard hidden Markov model (HMM) consists of the n-tuple {S, O, T, Ω}, where the transition T(s'|s) and observation Ω(o|s') distributions only depend
on the hidden state. When the number of hidden states is finite and discrete, Dirichlet distributions
may be used as priors over the transition and observation distributions. The iHMM [9] uses a
hierarchical Dirichlet Process (HDP) to define a prior over HMMs where the number of underlying
states is unbounded.2 To generate a model from the iHMM prior, we:
1. Draw the mean transition distribution T̄ ∼ Stick(λ).
2. Draw observations Ω(·|s, a) ∼ H for each s, a.
3. Draw transitions T(·|s, a) ∼ DP(α, T̄) for each s, a.
where α is the DP concentration parameter and H is a prior over observation distributions. For
example, if the observations are discrete, then H could be a Dirichlet distribution.
Intuitively, the first two steps define the observation distribution and an overall popularity for each state. The second step uses these overall state popularities to define individual state transition distributions. More formally, the first two steps involve a draw G_0 ∼ DP(λ, H), where the atoms of G_0 are the Ω, and T̄ are the associated stick-lengths.3 Recall that in the stick-breaking procedure, the s-th stick-length, T̄_s, is given by v_s ∏_{i=1}^{s-1} (1 - v_i), where v_i ∼ Beta(1, λ). While the number of states is unbounded, T̄_s decreases exponentially with s, meaning that 'later' states are less popular. This construction of T̄_s also ensures that Σ_s T̄_s = 1. The top part of Figure 2 shows a cartoon of a few elements of T̄ and Ω.
The second step of the iHMM construction involves defining the transition distributions T(·|s) ∼ DP(α, T̄) for each state s, where α, the concentration parameter for the DP, determines how closely the sampled distribution T(·|s) matches the mean transition distribution T̄. Because T̄ puts higher probabilities on states with smaller indices, T(s'|s) will also generally put more mass on earlier s' (see lower rows of Figure 2). Thus, the generating process encodes a notion that the agent will spend most of its time in some local region. However, the longer the agent acts in this infinite space, the more likely it is to transition to somewhere new.
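The stick-breaking draw of the mean transition distribution can be sketched as follows (a truncated sample for illustration only; the truncation threshold is a computational choice, not part of the model):

```python
import random

def sample_mean_transition(lam, eps=1e-6, rng=random):
    """Truncated draw of the mean transition distribution T_bar ~ Stick(lam):
    T_bar_s = v_s * prod_{i<s} (1 - v_i) with v_i ~ Beta(1, lam).  We stop
    once the leftover stick mass falls below eps; the model itself is
    unbounded."""
    weights, remaining = [], 1.0
    while remaining > eps:
        v = rng.betavariate(1.0, lam)
        weights.append(v * remaining)
        remaining *= 1.0 - v
    return weights

random.seed(0)
T_bar = sample_mean_transition(lam=3.0)
assert abs(sum(T_bar) - 1.0) < 1e-5     # stick lengths sum to (almost) one
assert sum(T_bar[:40]) > 0.9            # early states carry most of the mass
```

Given T̄, each state would then receive its own transition distribution via a DP draw with base measure T̄, which reuses the same popular early states while allowing state-specific deviations.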
2. The iHMM models in [8] and [9] are formally equivalent [10].
3. A detailed description of DPs and HDPs is beyond the scope of this paper; please refer to [11] for background on Dirichlet processes and [9] for an overview of HDPs.
Infinite POMDPs To extend the iHMM framework to iPOMDPs, we must incorporate actions and rewards into the generative model. To incorporate actions, we draw an observation distribution Ω(·|s, a) ∼ H for each action a and each state s. Similarly, during the second step of the generative process, we draw a transition distribution T(s'|s, a) ∼ DP(α, T̄) for each state-action pair.4
HMMs have one output (observations), while POMDPs also output rewards. We treat rewards as a secondary set of observations. For this work, we assume that the set of possible reward values is given, and we use a multinomial distribution to describe the probability R(r|s, a) of observing reward r after taking action a in state s. As with the observations, the reward distributions R are drawn from a Dirichlet distribution H_R. We use multinomial distributions for convenience; however, other reward distributions (such as Gaussians) are easily incorporated in this framework.
Figure 2: iHMM: The first row shows each state's observation distribution Ω_s and the mean transition distribution T̄. Later rows show each state's transition distribution.
In summary, the iPOMDP prior requires that we specify
• a set of actions A and observations O,
• a generating distribution H for the observation distributions and H_R for the rewards (these generating distributions can have any form; the choice will depend on the application),
• a mean transition concentration factor λ and a state transition concentration factor α, and
• a discount factor γ.
To sample a model from the iPOMDP prior, we first sample the mean transition distribution T̄ ∼ Stick(λ). Next, for each state s and action a, we sample
• T(·|s, a) ∼ DP(α, T̄),
• Ω(·|s, a) ∼ H,
• R(·|s, a) ∼ H_R.
Samples from the iPOMDP prior have an infinite number of states, but fortunately not all of these states need to be explicitly represented. During a finite lifetime the agent can only visit a finite number of states, and thus the agent can only make inferences about a finite number of states. The remaining (infinite) states are equivalent from the agent's perspective, as, in expectation, these states
will exhibit the mean dynamics of the prior. Thus, the only parts of the infinite model that need to
be initialised are those corresponding to the states the agent has visited as well as a catch-all state
representing all other states. In reality, of course, the agent does not know the states it has visited:
we discuss joint inference over the unknown state history and the model in section 3.1.
3
Planning
As in the standard Bayesian RL framework, we recast the problem of POMDP learning as planning
in a larger 'model-uncertainty' POMDP in which both the true model and the true state are unknown. We outline below our procedure for planning in this joint space of POMDP models and unknown states and then detail each step (belief monitoring and action selection) in Sections 3.1 and 3.2.
Because the true state is hidden, the agent must choose its actions based only on past actions and
observations. Normally the best action to take at time t depends on the entire history of actions and
observations that the agent has taken so far. However, the probability distribution over current states,
known as the belief, is a sufficient statistic for a history of actions and observations. In discrete state
spaces, the belief at time t + 1 can be computed from the previous belief, bt , the last action a, and
observation o, by the following application of Bayes rule:
    b_{t+1}^{a,o}(s) = Ω(o|s, a) Σ_{s'∈S} T(s|s', a) b_t(s') / Pr(o|b, a),        (1)
4. We use the same base measure H to draw all observation distributions; however, a separate measure H_a could be used for each action if one had prior knowledge about the expected observation distribution for each action. Likewise, one could also draw a separate T̄_a for each action.
where Pr(o|b, a) = Σ_{s'∈S} Ω(o|s', a) Σ_{s∈S} T(s'|s, a) b_t(s). However, it is intractable to express the
joint belief b over models and states with a closed-form expression. We approximate the belief b with
a set of sampled models m = {T, Ω, R}, each with weight w(m). Each model sample m maintains
a belief over states bm (s). The states are discrete, and thus the belief bm (s) can be updated using
equation 1. Details for sampling the models m are described in section 3.1.
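Equation 1 translates directly into code; the sketch below uses nested-list transition and observation tables (an illustrative indexing convention, not the paper's implementation):

```python
def belief_update(b, a, o, T, Om):
    """Bayes-rule belief update of Equation 1:
    b'(s) is proportional to Om[a][s][o] * sum_s' T[a][s'][s] * b(s').

    b is a list over states; T[a][s_prev][s_next] and Om[a][s][o] are
    nested lists."""
    S = len(b)
    nb = [Om[a][s][o] * sum(T[a][sp][s] * b[sp] for sp in range(S))
          for s in range(S)]
    norm = sum(nb)                      # equals Pr(o | b, a)
    return [x / norm for x in nb], norm

# Two-state toy chain: mostly self-transitions, and the observation
# reveals the true state 90% of the time.
T  = [[[0.9, 0.1], [0.1, 0.9]]]
Om = [[[0.9, 0.1], [0.1, 0.9]]]
b, p_obs = belief_update([0.5, 0.5], a=0, o=0, T=T, Om=Om)
assert abs(b[0] - 0.9) < 1e-12          # belief concentrates on state 0
assert abs(p_obs - 0.5) < 1e-12
```

The returned normaliser Pr(o|b, a) is exactly the quantity reused by the value recursion and by the model reweighting later in this section.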
Given the belief, the agent must choose which action to take next. One approach is to solve the
planning problem offline, that is, determine a good action for every possible belief. If the goal is to
maximize the expected discounted reward, then the optimal policy is given by:
    V_t(b) = max_{a∈A} Q_t(b, a),                                                 (2)

    Q_t(b, a) = R(b, a) + γ Σ_{o∈O} Pr(o|b, a) V_t(b^{a,o}),                      (3)
where the value function V (b) is the expected discounted reward that an agent will receive if its
current belief is b and Q(b, a) is the value of taking action a in belief b. The exact solution to
equation 3 is only tractable for tiny problems, but many approximation methods [12, 13, 14] have
been developed to solve POMDPs offline.
While we might hope to solve equation 3 over the state space of a single model, it is intractable to
solve over the joint space of states and infinite models: the model space is so large that standard
point-based approximations will generally fail. Moreover, it makes little sense to find the optimal
policy for all models when only a few models are likely. Therefore, instead of solving 3 offline,
we build a forward-looking search tree at each time step (see [15] for a review of forward search in
POMDPs). The tree computes the value of action by investigating a number of steps into the future.
The details of the action selection are discussed in section 3.2.
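As a minimal sketch of action selection over the sampled models (depth-0 only; a full forward search would also recurse over Pr(o|b, a) and the updated beliefs; the model structure and names here are assumptions):

```python
def select_action(models, actions):
    """Greedy depth-0 action selection over sampled models: pick the action
    with the highest weighted one-step expected reward.  Each model is a
    tuple (w, b, R) of importance weight, state belief, and reward table
    R[a][s]."""
    def q(a):
        return sum(w * sum(bs * R[a][s] for s, bs in enumerate(b))
                   for w, b, R in models)
    return max(actions, key=q)

# Two sampled models that disagree about the best action; the importance
# weights arbitrate between them.
m1 = (0.8, [1.0, 0.0], [[1.0, 0.0], [0.0, 0.0]])   # action 0 pays in s0
m2 = (0.2, [1.0, 0.0], [[0.0, 0.0], [5.0, 0.0]])   # action 1 pays a lot
assert select_action([m1, m2], actions=[0, 1]) == 1
```

Even at depth 0 the averaging over models captures the key idea: only a few likely models need to be evaluated, rather than the whole infinite model space.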
3.1
Belief Monitoring
As outlined in section 3, we approximate the joint belief over states and models through a set of
samples. In this section, we describe a procedure for sampling a set of models m = {T, Ω, R} from
the true belief, or posterior, over models.5 These samples can then be used to approximate various
integrations over models that occur during planning; in the limit of infinite samples, the approximations will be guaranteed to converge to their true values. To simplify matters, we assume that given
a model m, it is tractable to maintain a closed-form belief bm (s) over states using equation 1. Thus,
models need to be sampled, but beliefs do not.
Suppose we have a set of models m that have been drawn from the belief at time t. To get a set of
models drawn from the belief at time t+1, we can either draw the models directly from the new belief
or adjust the weights on the model set at time t so that they now provide an accurate representation
of the belief at time t + 1. Adjusting the weights is computationally most straightforward: directly
following belief update equation 1, the importance weight w(m) on model m is given by:
    w_{t+1}^{a,o}(m) ∝ Ω(o|m, a) w_t(m),                                          (4)

where Ω(o|m, a) = Σ_{s∈S} Ω(o|s, m, a) b_m(s), and we have used T(m'|m, a) = δ_m(m') because the true model does not change.
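The reweighting step of Equation 4 can be sketched as follows (the dictionary-based model representation is an assumption for illustration; weights are normalised to sum to one):

```python
def reweight(models, a, o):
    """Importance-weight update of Equation 4:
    w'(m) is proportional to Om(o|m, a) * w(m), with
    Om(o|m, a) = sum_s Om(o|s, m, a) * b_m(s).
    Each model is a dict {'w': weight, 'b': belief, 'Om': Om[a][s][o]}."""
    likes = [sum(m['Om'][a][s][o] * bs for s, bs in enumerate(m['b'])) * m['w']
             for m in models]
    z = sum(likes)
    for m, lk in zip(models, likes):
        m['w'] = lk / z
    return models

m_good = {'w': 0.5, 'b': [1.0], 'Om': [[[0.9, 0.1]]]}   # predicts o=0 well
m_bad  = {'w': 0.5, 'b': [1.0], 'Om': [[[0.1, 0.9]]]}
reweight([m_good, m_bad], a=0, o=0)
assert abs(m_good['w'] - 0.9) < 1e-12 and abs(m_bad['w'] - 0.1) < 1e-12
```

When most of the weight concentrates on a few models, this cheap update stops representing the posterior well, which is exactly why the text resorts to periodic beam-sampling resampling.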
The advantage of simply reweighting the samples is that the belief update is extremely fast. However, new experience may quickly render all of the current model samples unlikely. Therefore, we
must periodically resample a new set of models directly from the current belief. The beam-sampling
approach of [16] is an efficient method for drawing samples from an iHMM posterior. We adapt this
approach to allow for observations with different temporal shifts (since the reward r_t depends on
the state s_t, whereas the observation o_t is conditioned on the state s_{t+1}) and for transitions indexed
by both the current state and the most recent action. The correctness of our sampler follows directly
from the correctness of the beam sampler [16].
The beam sampler is an auxiliary variable method that draws samples from the iPOMDP posterior.
A detailed description of beam sampling is beyond the scope of this paper; however, we outline the
general procedure below. The inference alternates between three phases:
5
We will use the words posterior and belief interchangeably; both refer to the probability distribution over
the hidden state given some initial belief (or prior) and the history of actions and observations.
• Sampling slice variables to limit trajectories to a finite number of hidden states.
Given a transition model T and a state trajectory {s_1, s_2, . . .}, an auxiliary variable
u_t ~ Uniform([0, min(T(· | s_t, a))]) is sampled for each time t. The final column k of the
transition matrix is extended via additional stick-breaking until max(T(s_k | s, a)) < u_t.
Only transitions with T(s' | s, a) > u_t are considered for inference at time t.6
• Sampling a hidden state trajectory. Now that we have a finite model, we apply forward
filtering-backward sampling (FFBS) [18] to sample the underlying state sequence.
• Sampling a model. Given a trajectory over hidden states, transition, observation, and
reward distributions are sampled for the visited states (it only makes sense to sample distributions for visited states, as we do not have information about unvisited states). In this
finite setting, we can resample the transitions T(· | s, a) using standard Dirichlet posteriors:

    T(\cdot \mid s, a) \sim \mathrm{Dirichlet}\Big(T_1^{sa} + n_1^{sa},\; T_2^{sa} + n_2^{sa},\; \ldots,\; T_k^{sa} + n_k^{sa},\; \sum_{i=k+1}^{\infty} T_i^{sa}\Big),        (5)

where k is the number of active or used states, T_i^{sa} is the prior probability of transitioning
to state i from state s after taking action a, and n_i^{sa} is the number of observed transitions
to state i from s after a. The observations and rewards are resampled in a similar manner;
for example, if the observations are discrete with Dirichlet priors:

    \Omega(\cdot \mid s, a) \sim \mathrm{Dirichlet}\big(H_1 + n_1^{o,sa},\; H_2 + n_2^{o,sa},\; \ldots,\; H_{|O|} + n_{|O|}^{o,sa}\big).        (6)
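The Dirichlet resampling of a single transition row can be sketched as follows. This is an illustrative reading of equation 5, not the paper's code: the k active states get their prior counts plus observed counts, and one aggregate column carries the remaining prior mass of all unvisited states.

```python
import numpy as np

def resample_transition_row(prior, counts, rng):
    """Draw T(.|s,a) from its Dirichlet posterior (cf. equation 5).

    prior  : length-(k+1) array; prior[:k] are per-state prior counts
             T_i^{sa}, prior[k] is the aggregated tail mass for the
             (infinitely many) unvisited states.
    counts : length-k array of observed transition counts n_i^{sa};
             the tail column receives no observed counts.
    """
    alpha = prior.astype(float).copy()
    alpha[: len(counts)] += counts
    return rng.dirichlet(alpha)
```

A draw returns a probability vector over the k active states plus the lumped "elsewhere" mass, which is exactly what the finite slice-truncated model needs.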
As with all MCMC methods, initial samples (from the burn-in period) are biased by the sampler's starting
position; only after the sampler has mixed will the samples be representative of the true posterior.
Finally, we emphasize that the approach outlined above is a sampling approach and not a maximum
likelihood estimator; thus the samples, drawn from the agent's belief, capture the variation over
possible models. The representation of the belief is necessarily approximate due to our use of
samples, but the samples are drawn from the true current belief; no other approximations have
been made. Specifically, we are not filtering: each run of the beam sampler produces samples from
the current belief. Because they are drawn from the true posterior, all samples have equal weight.
3.2  Action Selection
Given a set of models, we apply a stochastic forward search in the model space to choose an action.
The general idea behind forward search [15] is to use a forward-looking tree to compute action-values. Starting from the agent's current belief, the tree branches on each action the agent might
take and each observation the agent might see. At each action node, the agent computes its expected
immediate reward R(a) = E_m[E_{s|m}[R(\cdot \mid s, a)]].
From equation 3, the value of taking action a in belief b is

    Q(a, b) = R(a, b) + \gamma \sum_{o} \Omega(o \mid b, a) \max_{a'} Q(a', b^{ao}),        (7)

where b^{ao} is the agent's belief after taking action a and seeing observation o from belief b. Because
action selection must be completed online, we use equation 4 to update the belief over models via
the weights w(m). Equation 7 is evaluated recursively for each Q(a', b^{ao}) up to some depth D.
The number of evaluations grows with (|A||O|)^D, so doing a full expansion is feasible only for very
small problems. We approximate the true value stochastically by sampling only a few observations
from the distribution P(o \mid a) = \sum_m P(o \mid a, m) \, w(m). Equation 7 reduces to

    Q(a, b) = R(a, b) + \frac{\gamma}{N_O} \sum_{i} \max_{a'} Q(a', b^{a o_i}),        (8)

where N_O is the number of sampled observations and o_i is the ith sampled observation.
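The recursion in equation 8 can be sketched as below. The belief interface (`expected_reward`, `sample_obs`, `update`, `actions`) is hypothetical scaffolding for illustration; in the paper the leaves are valued with equation 9 rather than the depth-0 base case used here.

```python
def q_value(belief, a, depth, n_obs, gamma, rng):
    """Stochastic forward search (cf. equation 8): expected immediate
    reward plus a discounted average, over n_obs sampled observations,
    of the best next-action value one step deeper in the tree."""
    q = belief.expected_reward(a)
    if depth == 0:
        return q  # placeholder leaf value; the paper uses equation 9 here
    future = 0.0
    for _ in range(n_obs):
        o = belief.sample_obs(a, rng)          # o_i ~ P(o | a)
        child = belief.update(a, o)            # b^{a o_i}
        future += max(q_value(child, a2, depth - 1, n_obs, gamma, rng)
                      for a2 in child.actions)
    return q + gamma * future / n_obs

def select_action(belief, depth, n_obs, gamma, rng):
    return max(belief.actions,
               key=lambda a: q_value(belief, a, depth, n_obs, gamma, rng))
```

Sampling observations instead of enumerating them replaces the (|A||O|)^D blow-up with (|A| N_O)^D, which is the point of the approximation.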
Once we reach a prespecified depth in the tree, we must approximate the value of the leaves. For
each model m in the leaves, we can compute the value Q(a, b_m, m) of the action a by approximately
6
For an introduction to slice sampling, refer to [17].
solving offline the POMDP model that m represents. We approximate the value of action a as

    Q(a, b) \approx \sum_{m} w(m) \, Q(a, b_m, m).        (9)
This approximation is always an overestimate of the value, as it assumes that the uncertainty over
models (but not the uncertainty over states) will be resolved in the following time step (similar to
the QMDP [19] assumption).7 As the iPOMDP posterior becomes peaked and the uncertainty over
models decreases, the approximation becomes more exact.
The quality of the action selection largely follows from the bounds presented in [20] for planning
through forward search. The key difference is that now our belief representation is particle-based;
during the forward search we approximate an expected reward over all possible models with rewards from the particles in our set. Because we can guarantee that our models are drawn from the
true posterior over models, this approach is a standard Monte Carlo approximation of the expectation. Thus, we can apply the central limit theorem to state that the estimated expected rewards will
be distributed around the true expectation with approximately normal noise N(0, \sigma^2 / n), where n is
the number of POMDP samples and \sigma^2 is a problem-specific variance.
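The central-limit-theorem claim can be illustrated numerically: averaging n independent draws yields an estimator whose spread shrinks like \sigma/\sqrt{n}. The snippet below is a generic demonstration of this property, not tied to the paper's POMDP models.

```python
import random
import statistics

def mc_estimate(sample_reward, n, rng):
    """Average of n Monte Carlo draws; by the CLT its error around the
    true expectation is approximately N(0, sigma^2 / n)."""
    return sum(sample_reward(rng) for _ in range(n)) / n

def spread(n, trials, rng):
    """Empirical standard deviation of the n-sample estimator, using
    standard normal draws (so sigma = 1 and the spread ~ 1/sqrt(n))."""
    draw = lambda r: r.gauss(0.0, 1.0)
    return statistics.pstdev(mc_estimate(draw, n, rng) for _ in range(trials))
```

With n = 100 samples the estimator's spread is roughly a tenth of the single-sample spread, matching the 1/\sqrt{n} rate.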
4  Experiments
We begin with a series of illustrative examples demonstrating the properties of the iPOMDP. In
all experiments, the observations were given vague hyperparameters (1.0 Dirichlet counts per element), and rewards were given hyperparameters that encouraged peaked distributions (0.1 Dirichlet
counts per element). The small counts on the reward hyperparameters encoded the prior belief that
R(\cdot \mid s, a) is highly peaked, that is, that each state-action pair will likely have one associated reward value.
Beliefs were approximated with a sample set of 10 models. Models were resampled between episodes
and reweighted during episodes. A burn-in of 500 iterations was used for the beam sampler when
drawing these models directly from the belief. The forward search was expanded to a depth of 3.
[Figure 3 plots: number of states and total reward over 100 episodes for the Lineworld and Loopworld POMDPs, with learned performance compared against optimal.]
(a) Cartoon of models. (b) Evolution of state size. (c) Performance.
Figure 3: Various comparisons of the lineworld and loopworld models. Loopworld infers only
necessary states, ignoring the more complex (but irrelevant) structure.
Avoiding unnecessary structure: Lineworld and Loopworld. We designed a pair of simple environments to show how the iPOMDP infers states only as it can distinguish them. The first, lineworld,
was a length-six corridor in which the agent could travel either left or right. Loopworld consisted
of a corridor with a series of loops (see figure 3(a)); now the agent could travel through the upper or
lower branches. In both environments, only the two ends of the corridors had unique observations.
7
We also experimented with approximating Q(a, b) \approx 80th-percentile(\{w(m) Q(a, b_m, m)\}). Taking
a higher percentile ranking as the approximate value places a higher value on actions with larger uncertainty.
As the values of the actions become better known and the discrepancies between the models decrease, this
criterion reduces to the true value of the action.
Actions produced the desired effect with probability 0.95, and observations were correct with probability
0.85 (that is, 15% of the time the agent saw an incorrect observation). The agent started at the left
end of the corridor and received a reward of -1 until it reached the opposite end (reward 10).
The agent eventually infers that the lineworld environment consists of six states (based on the
number of steps it requires to reach the goal), although in the early stages of learning it infers
distinct states only for the ends of the corridor and groups the middle region as one state. The
loopworld agent also shows a growth in the number of states over time (see figure 3(b)), but it
never infers separate states for the identical upper and lower branches. By inferring states as they are
needed to explain its observations, instead of relying on a prespecified number of states, the agent
avoided the need to consider irrelevant structure in the environment. Figure 3(c) shows that the agent
(unsurprisingly) learns optimal performance in both environments.
Adapting to new situations: Tiger-3. The iPOMDP's flexibility also lets it adapt to new situations.
In the tiger-3 domain, a variant of the tiger problem [19], the agent had to choose one of three doors
to open. Two doors had tigers behind them (r = -100) and one door had a small reward (r = 10).
At each time step, the agent could either open a door or listen for the "quiet" door. It heard the
correct door with probability 0.85.
The reward was unlikely to be behind the third door (p = 0.2),
but during the first 100 episodes, we artificially ensured that
the reward was always behind doors 1 or 2. The improving
rewards in figure 4 show the agent steadily learning the dynamics of its world; it learned never to open door 3. The dip
in figure 4 following episode 100 occurs when we next allowed the
reward to be behind all three doors, but the agent quickly
adapts to the new possible state of its environment. The
iPOMDP enabled the agent to first adapt quickly to its simplified environment and then add complexity when it was needed.
Broader Evaluation. We next completed a set of experiments on POMDP problems from the literature. Tests had
200 episodes of learning, which interleaved acting and resampling models, and 100 episodes of testing with the models fixed.

Figure 4: Evolution of reward from tiger-3.

During learning, actions were chosen stochastically based on their values with probability 0.05 and completely randomly with probability 0.01. Otherwise, they were chosen greedily (we found this small amount of randomness was needed for exploration to overcome our very small sample set and search depths). We compared accrued rewards
and running times for the iPOMDP agent against (1) an agent that knew the state count and used
EM to train its model, (2) an agent that knew the state count and used the same forward-filtering
backward-sampling (FFBS) algorithm used in the beam-sampling inner loop to sample models, and
(3) an agent that used FFBS with ten times the true number of states. For situations where the number
of states is not known, the last case is particularly interesting: we show that simply overestimating
the number of states is not necessarily the most efficient solution.
Table 1 summarises the results. We see that the iPOMDP often infers a smaller number of states than
the true count, ignoring distinctions that the history does not support. The middle three columns
show the speeds of the three controls relative to the iPOMDP. Because the iPOMDP generally uses
smaller state spaces, we see that most of these values are greater than 1, indicating the iPOMDP is
faster. (In the largest problem, dialog, the oversized FFBS model did not complete running in several
days.) The latter four columns show accumulated rewards; we see that the iPOMDP is generally on
par with or better than the methods that have access to the true state space size. Finally, figure 5 plots the
learning curve for one of the problems, shuttle.
5  Discussion
Recent work in learning POMDP models includes [23], which uses a set of Gaussian approximations
to allow for analytic value function updates in the POMDP space, and [5], which jointly reasons
over the space of Dirichlet parameters and states when planning in discrete POMDPs. Sampling-based
approaches include Medusa [4], which learns using state queries, and [7], which learns using policy
[Figure 5 panels: evolution of total reward for shuttle over 200 training episodes (left) and boxplots of final reward over 100 test trials, learned vs. optimal (right).]
Figure 5: Evolution of reward for shuttle. During training (left), we see that the agent makes fewer
mistakes toward the end of the period. The boxplots on the right show rewards for 100 trials after
learning has stopped; we see that the iPOMDP agent's reward distribution over these 100 trials is almost
identical to that of an agent who had access to the correct model.
Table 1: Inferred states and performance for various problems. The iPOMDP agent (FFBS-Inf)
often performs nearly as well as the agents who had knowledge of the true number of states (EM-true, FFBS-true), learning the necessary number of states much faster than an agent for which we
overestimate the number of states (FFBS-big).
                        States           Relative training time         Performance
Problem                True  FFBS-Inf    EM-true  FFBS-true  FFBS-big   EM-true  FFBS-true  FFBS-big  FFBS-Inf
Tiger [19]              2     2.1         0.41     0.70       1.50       -277     0.49       4.24      4.06
Shuttle [21]            8     2.1         1.82     1.02       3.56        10      10         10        10
Network [19]            7     4.36        1.56     1.09       4.82       1857     7267       6843      6508
Gridworld [19]         26     7.36        3.57     2.48       59.1       -25      -51        -67       -13
  (adapted)
Dialog [22]            51     2           0.67     5.15       --        -3023    -1326       --       -1009
  (adapted)
queries. All of these approaches assume that the number of underlying states is known; all but [7]
focus on learning only the transition and observation models.
In many problems, however, the underlying number of states may not be known (or may require
significant prior knowledge to model) and, from the perspective of performance, is irrelevant. The
iPOMDP model allows the agent to adaptively choose the complexity of the model; any expert
knowledge is incorporated into the prior: for example, the Dirichlet counts on observation parameters can be used to give preference to certain observations as well as to encode whether we expect
observations to have low or high noise. As seen in the results, the iPOMDP allows the complexity of the model to scale gracefully with the agent's experience. Future work remains to tailor the
planning to unbounded spaces and to refine the inference for POMDP resampling.
Past work has attempted to take advantage of structure in POMDPs [24, 25], but learning that structure has remained an open problem. By giving the agent an unbounded state space, but strong
locality priors, the iPOMDP provides one principled framework for learning POMDP structure.
Moreover, the hierarchical Dirichlet process construction described in section 2 can be extended to
include more structure and deeper hierarchies in the transitions.
6  Conclusion
We presented the infinite POMDP, a new model for Bayesian RL in partially observable domains.
The iPOMDP provides a principled framework for an agent to posit more complex models of its
world as it gains more experience. By linking the complexity of the model to the agent's experience,
the agent is not forced to consider large uncertainties, which can be computationally prohibitive,
near the beginning of the planning process, but it can later come up with accurate models of the
world when it requires them. An interesting direction may also be to apply these methods to learning
large MDP models within the Bayes-Adaptive MDP framework [26].
References
[1] R. Dearden, N. Friedman, and D. Andre, "Model based Bayesian exploration," pp. 150-159, 1999.
[2] M. Strens, "A Bayesian framework for reinforcement learning," in ICML, 2000.
[3] P. Poupart, N. Vlassis, J. Hoey, and K. Regan, "An analytic solution to discrete Bayesian reinforcement learning," in ICML, (New York, NY, USA), pp. 697-704, ACM Press, 2006.
[4] R. Jaulmes, J. Pineau, and D. Precup, "Learning in non-stationary partially observable Markov decision processes," ECML Workshop, 2005.
[5] S. Ross, B. Chaib-draa, and J. Pineau, "Bayes-adaptive POMDPs," in Neural Information Processing Systems (NIPS), 2008.
[6] S. Ross, B. Chaib-draa, and J. Pineau, "Bayesian reinforcement learning in continuous POMDPs with application to robot navigation," in ICRA, 2008.
[7] F. Doshi, J. Pineau, and N. Roy, "Reinforcement learning with limited reinforcement: Using Bayes risk for active learning in POMDPs," in International Conference on Machine Learning, vol. 25, 2008.
[8] M. J. Beal, Z. Ghahramani, and C. E. Rasmussen, "The infinite hidden Markov model," in Machine Learning, pp. 29-245, MIT Press, 2002.
[9] Y. W. Teh, M. I. Jordan, M. J. Beal, and D. M. Blei, "Hierarchical Dirichlet processes," Journal of the American Statistical Association, vol. 101, no. 476, pp. 1566-1581, 2006.
[10] J. V. Gael and Z. Ghahramani, Inference and Learning in Dynamic Models, ch. Nonparametric Hidden Markov Models. Cambridge University Press, 2010.
[11] Y. W. Teh, "Dirichlet processes." Submitted to Encyclopedia of Machine Learning, 2007.
[12] J. Pineau, G. Gordon, and S. Thrun, "Point-based value iteration: An anytime algorithm for POMDPs," IJCAI, 2003.
[13] M. T. J. Spaan and N. Vlassis, "Perseus: Randomized point-based value iteration for POMDPs," Journal of Artificial Intelligence Research, vol. 24, pp. 195-220, 2005.
[14] T. Smith and R. Simmons, "Heuristic search value iteration for POMDPs," in Proc. of UAI 2004, (Banff, Alberta), 2004.
[15] S. Ross, J. Pineau, S. Paquet, and B. Chaib-Draa, "Online planning algorithms for POMDPs," Journal of Artificial Intelligence Research, vol. 32, pp. 663-704, July 2008.
[16] J. van Gael, Y. Saatci, Y. W. Teh, and Z. Ghahramani, "Beam sampling for the infinite hidden Markov model," in ICML, vol. 25, 2008.
[17] R. Neal, "Slice sampling," Annals of Statistics, vol. 31, pp. 705-767, 2000.
[18] C. K. Carter and R. Kohn, "On Gibbs sampling for state space models," Biometrika, vol. 81, pp. 541-553, September 1994.
[19] M. L. Littman, A. R. Cassandra, and L. P. Kaelbling, "Learning policies for partially observable environments: scaling up," ICML, 1995.
[20] D. McAllester and S. Singh, "Approximate planning for factored POMDPs using belief state simplification," in UAI 15, 1999.
[21] L. Chrisman, "Reinforcement learning with perceptual aliasing: The perceptual distinctions approach," in Proceedings of the Tenth National Conference on Artificial Intelligence, pp. 183-188, AAAI Press, 1992.
[22] F. Doshi and N. Roy, "Efficient model learning for dialog management," in Proceedings of Human-Robot Interaction (HRI 2007), (Washington, DC), March 2007.
[23] P. Poupart and N. Vlassis, "Model-based Bayesian reinforcement learning in partially observable domains," in ISAIM, 2008.
[24] J. H. Robert, R. St-Aubin, A. Hu, and C. Boutilier, "SPUDD: Stochastic planning using decision diagrams," in UAI, pp. 279-288, 1999.
[25] A. P. Wolfe, "POMDP homomorphisms," in NIPS RL Workshop, 2006.
[26] M. O. Duff, Optimal learning: computational procedures for Bayes-adaptive Markov decision processes. PhD thesis, 2002.
Adapting to the Shifting Intent of Search Queries*
Umar Syed†
Department of Computer
and Information Science
University of Pennsylvania
Philadelphia, PA 19104
[email protected]
Aleksandrs Slivkins
Microsoft Research
Mountain View, CA 94043
[email protected]
Nina Mishra
Microsoft Research
Mountain View, CA 94043
[email protected]
Abstract
Search engines today present results that are often oblivious to recent shifts in
intent. For example, the meaning of the query "independence day" shifts in early
July to a US holiday and to a movie around the time of the box office release.
While no studies exactly quantify the magnitude of intent-shifting traffic, studies
suggest that news events, seasonal topics, pop culture, etc account for 1/2 the
search queries. This paper shows that the signals a search engine receives can be
used to both determine that a shift in intent happened, as well as find a result that
is now more relevant. We present a meta-algorithm that marries a classifier with
a bandit algorithm to achieve regret that depends logarithmically on the number
of query impressions, under certain assumptions. We provide strong evidence that
this regret is close to the best achievable. Finally, via a series of experiments, we
demonstrate that our algorithm outperforms prior approaches, particularly as the
amount of intent-shifting traffic increases.
1  Introduction
Search engines typically use a ranking function to order results. The function scores a document by
the extent to which it matches the query, and documents are ordered according to this score. This
function is fixed in the sense that it does not change from one query to another and also does not
change over time. For queries such as "michael jackson", traditional ranking functions that value
features such as high page rank will not work, since documents new to the web will not have accrued
sufficient inlinks. Thus, a search engine?s ranking function should not be fixed; different results
should surface depending on the temporal context.
Intuitively, a query is "intent-shifting" if the most desired search result(s) change over time. More
concretely, a query's intent has shifted if the click distribution over search results at some time
differs from the click distribution at a later time. For the query "tomato" on the heels of a tomato
salmonella outbreak, the probability a user clicks on a news story describing the outbreak increases
while the probability a user clicks on the Wikipedia entry for tomatoes rapidly decreases. There
are studies that suggest that queries likely to be intent-shifting (such as pop culture, news events,
trends, and seasonal topics queries) constitute roughly half of the search queries that a search
engine receives [10].
The goal of this paper is to devise an algorithm that quickly adapts search results to shifts in user
intent. Ideally, for every query and every point in time, we would like to display the search result that
users are most likely to click. Since traditional ranking features like PageRank [4] change slowly
over time, and may be misleading if user intent has shifted very recently, we want to use just the
observed click behavior of users to decide which search results to display.
*Full version of this paper [20] is available on arxiv.org. In the present version, all proofs are omitted.
†This work was done while the author was an intern at Microsoft Research and a student in the Department
of Computer Science, Princeton University.
There are many signals a search engine can use to detect when the intent of a query shifts. Query
features such as volume, abandonment rate, reformulation rate, occurrence in news articles, and
the age of matching documents can all be used to build a classifier which, given a query, determines
whether the intent has shifted. We refer to these features as the context, and to an occasion when a
shift in intent occurs as an event.
One major challenge in building an event classifier is obtaining training data. For most query and
date combinations (e.g. "tomato, 06/09/2008"), it will be difficult even for a human labeler to recall
in hindsight whether an event related to the query occurred on that date. In this paper, we propose a
novel solution that learns from unlabeled contexts and user click activity.
Contributions. We describe a new algorithm that leverages the information contained in contexts.
Our algorithm is really a meta-algorithm that combines a bandit algorithm designed for the event-free setting with an online classification algorithm. The classifier uses the contexts to predict when
events occur, and the bandit algorithm "starts over" on positive predictions. The bandit algorithm
provides feedback to the classifier by checking, soon after each of the classifier's positive predictions, whether the optimal search result actually changed. The key technical hurdle in proving a
regret bound is handling events that happen during the "checking" phase.
For suitable choices of the bandit and classifier subroutines, the regret incurred by our meta-algorithm is (under certain mild assumptions) at most O((k + dF)(n/Δ) log T), where k is the number
of events, dF is a certain measure of the complexity of the concept class F used by the classifier,
n is the number of possible search results, Δ is the "minimum suboptimality" of any search result
(defined formally in Section 2), and T is the total number of impressions. This regret bound has a
very weak dependence on T, which is highly desirable for search engines that receive much traffic.
The context turns out to be crucial for achieving logarithmic dependence on T. Indeed, we show that
any bandit algorithm that ignores context suffers regret Ω(√T), even when there is only one event.
Unlike many lower bounds for bandit problems, our lower bound holds even when Δ is a constant
independent of T. We also show that, assuming a logarithmic dependence on T, the dependence on
k and dF is essentially optimal.
For empirical evaluation, we ideally need access to the traffic of a real search engine so that search
results can be adapted based on real-time click activity. Since we did not have access to live traffic, we instead conduct a series of synthetic experiments. The experiments show that if there are
no events then the well-studied UCB 1 algorithm [2] performs the best. However, when many different queries experience events, the performance of our algorithm significantly outperforms prior
techniques.
2 Problem Formulation and Preliminaries
We view the problem of deciding which search results to display in response to user click behavior
as a bandit problem, a well-known type of sequential decision problem. For a given query q, the
task is to determine, at each round t ∈ {1, ..., T} that q is issued by a user to our search engine, a
single result i_t ∈ {1, ..., n} to display.¹ This result is clicked by the user with probability p_t(i_t).
A bandit algorithm A chooses i_t using only observed information from previous rounds, i.e., all
previously displayed results and received clicks. The performance of an algorithm A is measured by
its regret:
R(A) := E[ Σ_{t=1}^{T} ( p_t(i*_t) − p_t(i_t) ) ],
where an optimal result i*_t = arg max_i p_t(i) is one
with maximum click probability, and the expectation is taken over the randomness in the clicks and
the internal randomization of the algorithm. Note our unusually strong definition of regret: we are
competing against the best result on every round.
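The per-round regret above can be computed directly once the click probabilities are known. The sketch below is a toy illustration with hypothetical probabilities; the function name and interface are ours, not the paper's:

```python
def empirical_regret(click_probs, chosen):
    """Expected regret of a sequence of displayed results against the
    per-round best result, given the true click probabilities.

    click_probs[t][i] = p_t(i); chosen[t] = i_t (the result shown at round t).
    """
    regret = 0.0
    for p_t, i_t in zip(click_probs, chosen):
        regret += max(p_t) - p_t[i_t]  # p_t(i*_t) - p_t(i_t)
    return regret

# Two rounds, three results; the optimal result shifts after an event,
# so always showing result 0 pays 0.5 - 0.1 = 0.4 regret on round 2.
probs = [[0.5, 0.2, 0.1], [0.1, 0.2, 0.5]]
```

Note that because the benchmark is the best result on every round (not a single fixed result), an algorithm that never adapts to the shift accumulates regret linearly after the event.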
We call an event any round t where p_{t−1} ≠ p_t. It is reasonable to assume that the number of
events k ≪ T, since we believe that abrupt shifts in user intent are relatively rare. Most existing
bandit algorithms make no attempt to predict when events will occur, and consequently suffer regret
Ω(√T). On the other hand, a typical search engine receives many signals that can be used to predict
events, such as bursts in query reformulation, average age of retrieved documents, etc.
¹ For simplicity, we focus on the task of returning a single result, and not a list of results. Techniques
from [19] may be adopted to find a good list of results.
We assume that our bandit algorithm receives a context x_t ∈ X at each round t, and that there exists
a function f ∈ F, in some known concept class F, such that f(x_t) = +1 if an event occurs at round
t, and f(x_t) = −1 otherwise.² In other words, f is an event oracle. At each round t, an eventful
bandit algorithm must choose a result i_t using only observed information from previous rounds, i.e.,
all previously displayed results and received clicks, plus all contexts up to round t.
In order to develop an efficient eventful bandit algorithm, we make an additional key assumption: at
least one optimal result before an event is significantly suboptimal after the event. More precisely,
we assume there exists a minimum shift Δ_S > 0 such that, whenever an event occurs at round t,
we have p_t(i*_{t−1}) < p_t(i*_t) − Δ_S for at least one previously optimal search result i*_{t−1}. For our
problem setting, this assumption is relatively mild: the events we are interested in tend to have a
rather dramatic effect on the optimal search results. Moreover, our bounds are parameterized by
Δ = min_t min_{i ≠ i*_t} ( p_t(i*_t) − p_t(i) ), the minimum suboptimality of any suboptimal result.
3 Related Work
While there has been a substantial amount of work on ranking algorithms [11, 5, 13, 8, 6], all of these
results assume that there is a fixed ranking function to learn, not one that shifts over time. Online
bandit algorithms (see [7] for background) have been considered in the context of ranking. For
instance, Radlinski et al [19] showed how to compose several instantiations of a bandit algorithm
to produce a ranked list of search results. Pandey et al [18] showed that bandit algorithms can
be effective in serving advertisements to search engine users. These approaches also assume a
stationary inference problem.
Even though existing bandit work does not address our problem, there are two key algorithms that
we do use in our work. The UCB1 algorithm [2] assumes fixed click probabilities and has regret at
most O((n/Δ) log T). The EXP3.S algorithm [3] assumes that click probabilities can change on every
round and has regret at most O(√(k nT log(nT))) for arbitrary p_t's. Note that the dependence of
EXP3.S on T is substantially stronger.
The "contextual bandits" problem setting [21, 17, 12, 16, 14] is similar to ours. A key difference
is that the context received in each round is assumed to contain information about the identity of
an optimal result i*_t, a considerably stronger assumption than we make. Our context includes only
side information such as the volume of the query, but we never actually receive information about the
identity of the optimal result.
A different approach is to build a statistical model of user click behavior. This approach has been
applied to the problem of serving news articles on the web. Diaz [9] used a regularized logistic
model to determine when to surface news results for a query. Agarwal et al [1] used several models,
including a dynamic linear growth curve model.
There has also been work on detecting bursts in data streams. For example, Kleinberg [15] describes
a state-based model for inferring stages of burstiness. The goal of our work is not to detect bursts,
but rather to predict shifts in intent.
In recent concurrent and independent work, Yu et al [22] studied bandit problems with "piecewise-stationary" distributions, a notion that closely resembles our definition of events. However, they
make different assumptions than we do about the information a bandit algorithm can observe. Expressed in the language of our problem setting, they assume that from time to time a bandit algorithm receives information about how users would have responded to search results that are never
actually displayed. For us, this assumption is clearly inappropriate.
4 Bandit with Classifier
Our algorithm is called BWC, or "Bandit with Classifier". The high-level idea is to use a bandit
algorithm such as UCB1, restart it every time the classifier predicts an event, and use subsequent
rounds to generate feedback for the classifier. We will present our algorithm in a modular way,
as a meta-algorithm which uses the following two components: classifier and bandit.
² In some of our analysis, we require that contexts be restricted to a strict subset of X; the value of f outside
this subset will technically be null.
In each round, classifier inputs a context x_t and outputs a "positive" or "negative" prediction
of whether an event has happened in this round. Also, it may input labeled samples of the form
(x, l), where x is a context and l is a boolean label, which it uses for training. Algorithm bandit
is a bandit algorithm that is tuned for event-free runs and provides the following additional
functionality: after each round t of execution, it outputs the t-th round guess: a pair (G+, G−),
where G+ and G− are subsets of arms that it estimates to be optimal and suboptimal, respectively.³
Since both classifier and bandit make predictions (about events and arms, respectively), for
clarity we use the term "guess" exclusively to refer to predictions made by bandit, and reserve the
term "prediction" for classifier.
The algorithm operates as follows. It runs in phases of two alternating types: odd phases are called
"testing" phases, and even phases are called "adapting" phases. The first round of phase j is denoted
t_j. In each phase we run a fresh instance of bandit. Each testing phase lasts for L rounds, where
L is a parameter. Each adapting phase j ends as soon as classifier predicts "positive"; the
round t when this happens is round t_{j+1}. Phase j is called full if it lasts at least L rounds. For a full
phase j, let (G+_j, G−_j) be the L-th round guess in this phase. After each testing phase j, we generate
a boolean prediction l of whether there was an event in the first round thereof. Specifically, letting
i be the most recent full phase before j, we set l_{t_j} = false if and only if G+_i ∩ G+_j ≠ ∅. If l_{t_j}
is false, the labeled sample (x_{t_j}, l_{t_j}) is fed back to the classifier. Note that classifier never
receives true-labeled samples. Pseudocode for BWC is given in Algorithm 1.
Disregarding the interleaved testing phases for the moment, BWC restarts bandit whenever
classifier predicts "positive", optimistically assuming that the prediction is correct. By our
assumption that events cause some optimal arm to become significantly suboptimal (see Section 2),
an incorrect prediction should result in G+_i ∩ G+_j ≠ ∅, where i is a phase before the putative event,
and j is a phase after it. However, to ensure that the estimates G_i and G_j are reliable, we require
that phases i and j are full. And to ensure that the full phases closest to a putative event are not too
far from it, we insert a full testing phase every other phase.
Algorithm 1 BWC Algorithm
1: Given: parameter L, an (L, Δ_S)-testable bandit, and a safe classifier.
2: for phase j = 1, 2, . . . do
3:    Initialize bandit. Let t_j be the current round.
4:    if j is odd then
5:       for round t = t_j, . . . , t_j + L do
6:          Select arm i_t according to bandit.
7:          Observe p_t(i_t) and update bandit.
8:       Let i be the most recent full phase before j.
9:       If G+_i ∩ G+_j ≠ ∅, let l_{t_j} = false and pass training example (x_{t_j}, l_{t_j}) to classifier.
10:   else
11:      for round t = t_j, t_j + 1, . . . do
12:         Select arm i_t according to bandit.
13:         Observe p_t(i_t) and update bandit; pass context x_t to classifier.
14:         if classifier predicts "positive" then
15:            Terminate inner for loop.
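The phase structure above can be sketched in code. This is a minimal illustrative skeleton under interface assumptions of our own (env, make_bandit, classifier are stand-ins for the environment, the (L, Δ_S)-testable bandit, and the safe classifier), not the analyzed implementation; in particular it tracks only testing-phase guesses, a simplification of the "most recent full phase" bookkeeping:

```python
def bwc(env, make_bandit, classifier, L, T):
    """Skeleton of BWC's phase structure (a sketch, not the paper's code).

    env(t, arm) -> (reward, context); make_bandit() -> fresh bandit with
    .select(), .update(arm, reward) and .guess() -> (G_plus, G_minus);
    classifier has .predict(x) and .train(x, label)."""
    t, j = 0, 0
    prev_G_plus = None                 # G+ from the most recent testing phase
    while t < T:
        j += 1
        bandit = make_bandit()         # restart the bandit each phase
        if j % 2 == 1:                 # odd phase: "testing", lasts L rounds
            x_first = None
            for step in range(min(L, T - t)):
                arm = bandit.select()
                reward, x = env(t, arm)
                if step == 0:
                    x_first = x        # context of the putative event round t_j
                bandit.update(arm, reward)
                t += 1
            G_plus, _G_minus = bandit.guess()
            # Feedback: if an arm judged optimal before the putative event is
            # still judged optimal after it, label the event prediction false.
            if prev_G_plus is not None and prev_G_plus & G_plus:
                classifier.train(x_first, False)
            prev_G_plus = G_plus
        else:                          # even phase: "adapting", runs until the
            while t < T:               # classifier predicts "positive"
                arm = bandit.select()
                reward, x = env(t, arm)
                bandit.update(arm, reward)
                t += 1
                if classifier.predict(x):
                    break

# Toy stand-ins, just to exercise the control flow:
class TrivialBandit:
    def select(self):            return 0
    def update(self, arm, r):    pass
    def guess(self):             return ({0}, {1})

class EagerClassifier:           # always predicts an event; records training data
    def __init__(self):          self.samples = []
    def predict(self, x):        return True
    def train(self, x, label):   self.samples.append((x, label))
```

With an always-positive classifier, the run alternates length-L testing phases with length-1 adapting phases, and every testing phase after the first feeds a false label back, illustrating how a bad classifier is gradually corrected.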
Let S be the set of all contexts which correspond to an event. When the classifier receives a context
x and predicts "positive", this prediction is called a true positive if x ∈ S, and a false positive
otherwise. Likewise, when the classifier predicts "negative", the prediction is called a true negative
if x ∉ S, and a false negative otherwise. The sample (x, l) is correctly labeled if l = (x ∈ S).
We make the following two assumptions. First, classifier is safe for a given concept class:
if it inputs only correctly labeled samples, it never outputs a false negative. Second, bandit is
(L, Δ)-testable, in the following sense. Consider an event-free run of bandit, and let (G+, G−)
be its L-th round guess. Then with probability at least 1 − T^{−2}, each optimal arm lies in G+ but
not in G−, and any arm that is at least Δ-suboptimal lies in G− but not in G+. So an (L, Δ)-testable
bandit algorithm is one that, after L rounds, has a good guess of which arms are optimal and which
are at least Δ-suboptimal.
³ Following established convention, we call the options available to a bandit algorithm "arms". In our setting,
each arm corresponds to a search result.
For correctness, we require bandit to be (L, Δ_S)-testable, where Δ_S is the minimum shift. The
performance of bandit is quantified via its event-free regret, i.e., regret on event-free runs.
Likewise, for correctness we need classifier to be safe; we quantify its performance via the
maximum possible number of false positives, in the precise sense defined below. We assume that the
state of classifier is updated only if it receives a labeled sample, and consider a game in which,
in each round t, classifier receives a context x_t ∉ S, outputs a (false) positive, and receives
a (correctly) labeled sample (x, false). For a given context set X and a given concept class F,
let the FP-complexity of the classifier be the maximal possible number of rounds in such a game,
where the maximum is taken over all event oracles f ∈ F and all possible sequences {x_t}. Put
simply, the FP-complexity of classifier is the maximum number of consecutive false positives
it can make when given correctly labeled examples.
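The FP-complexity game can be simulated directly. The sketch below is our own toy construction for a hypothetical one-dimensional threshold concept class (f_θ(x) = +1 iff x > θ), for which the safe set of a stored negative set N is simply {x : x ≤ max(N)}:

```python
def fp_complexity_game(contexts, safe_fn):
    """Simulates the FP-complexity game: every presented context is truly
    negative, the classifier predicts "positive" whenever the context is not
    provably negative (not in the safe set of the stored negatives), and each
    false positive is answered with a correct false label."""
    negatives, false_positives = [], 0
    for x in contexts:
        if not safe_fn(negatives, x):   # prediction is "positive" ...
            false_positives += 1        # ... and x is negative, so it is false
            negatives.append(x)         # correctly labeled sample (x, false)
    return false_positives

# For 1-d thresholds, S_F(N) = {x : x <= max(N)}:
threshold_safe = lambda N, x: bool(N) and x <= max(N)
```

An increasing sequence of contexts forces a false positive every round, while a decreasing one forces only a single false positive; the adversarial maximum over such sequences is exactly the FP-complexity.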
We will discuss efficient implementations of a safe classifier and an (L, Δ)-testable bandit
in Sections 5 and 6, respectively. We present provable guarantees for BWC in a modular
way, in terms of FP-complexity, event-free regret, and the number of events. The main technical
difficulty in the analysis is that the correct operation of the two components of BWC, classifier
and bandit, is interdependent. In particular, one challenge is to handle events that occur during
the first L rounds of a phase; these events may potentially "contaminate" the L-th round guesses and
cause incorrect feedback to classifier.
Theorem 1. Consider an instance of the eventful bandit problem with number of rounds T, n
arms, k events and minimum shift Δ_S. Consider algorithm BWC with parameter L and components classifier and bandit such that, for this problem instance, classifier is safe and
bandit is (L, Δ_S)-testable. If any two events are at least 2L rounds apart, then the regret of BWC
is
R(T) ≤ (2k + d) R0(T) + (k + d) R0(L) + kL,     (1)
where d is the FP-complexity of the classifier and R0(·) is the event-free regret of bandit.
Remarks. The proof is available in the full version [20]. In our implementations of bandit, L =
Θ((n/Δ_S²) log T) suffices. In the +kL term in (1), the k can be replaced by the number of testing phases
that contain both a false positive in round 1 of the phase and an actual event later in the phase; this
number can potentially be much smaller than k.
5 Safe Classifier
We seek a classifier that is safe for a given concept class F and has low FP-complexity. We present
a classifier whose FP-complexity is bounded in terms of the following property of F:
Definition 1. Define the safe function S_F : 2^X → 2^X of F as follows: x ∈ S_F(N) if and only
if there is no concept f ∈ F such that f(y) = −1 for all y ∈ N and f(x) = +1. The diameter
of F, denoted dF, is equal to the length of the longest sequence x_1, ..., x_m ∈ X such that x_t ∉
S_F({x_1, ..., x_{t−1}}) for all t = 1, ..., m.
So if N contains only true negatives, then S_F(N) contains only true negatives. This property suggests that S_F can be used to construct a safe classifier SafeCl, which operates as follows. It
maintains a set of false-labeled examples N, initially empty. When input an unlabeled context x,
SafeCl outputs a positive prediction if and only if x ∉ S_F(N). After making a positive prediction, SafeCl inputs a labeled example (x, l). If l = false, then x is added to N; otherwise x is
discarded. Clearly, SafeCl is a safe classifier.
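A minimal sketch of SafeCl, with an abstract membership test safe_fn standing in for S_F (class and helper names are ours):

```python
class SafeCl:
    """Sketch of SafeCl: maintains the false-labeled contexts N and predicts
    "positive" on x unless x is provably negative, i.e. unless x lies in the
    safe set S_F(N). safe_fn(N, x) is a stand-in for membership in S_F(N)."""

    def __init__(self, safe_fn):
        self.safe_fn = safe_fn
        self.N = []                          # stored false-labeled examples

    def predict(self, x):
        return not self.safe_fn(self.N, x)   # positive iff x not provably negative

    def train(self, x, label):
        if label is False:                   # only false labels are ever received
            self.N.append(x)

# Illustration with hypothetical 1-d threshold concepts (f_theta(x) = +1 iff
# x > theta), for which S_F(N) = {x : x <= max(N)}:
clf = SafeCl(lambda N, x: bool(N) and x <= max(N))
```

Safety follows directly from the definition of S_F: a context is only ever predicted negative when every concept consistent with the stored negatives also labels it negative.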
In the full version [20], we show that the FP-complexity of SafeCl is at most the diameter dF,
which is to be expected: FP-complexity is a property of a classifier, and diameter is the completely
analogous property for S_F. Moreover, we give examples of common concept classes with efficiently
computable safe functions. For example, if F is the space of hyperplanes with margin at least γ
(probably the most commonly used concept class in machine learning), then S_F(N) is the convex
hull of the examples in N, extended in all directions by γ.
By using SafeCl as our classifier, we introduce dF into the regret bound of BWC, and this quantity
can be large. However, in Section 7 we show that the regret of any algorithm must depend on dF,
unless it depends strongly on the number of rounds T.
6 Testable Bandit Algorithms
In this section we consider the stochastic n-armed bandit problem. We are looking for (L, Δ)-testable algorithms with low regret. The L will need to be sufficiently large, on the order of Ω(nΔ^{−2}).
A natural candidate would be algorithm UCB1 from [2], which does very well on regret. Unfortunately, it does not come with a guarantee of (L, Δ)-testability. One simple fix is to choose
at random between arms in the first L rounds, use these samples to form the best guess in
a straightforward way, and then run UCB1. However, in the first L rounds this algorithm incurs regret of Θ(L), which is very suboptimal. For instance, for UCB1 the regret would be
R(L) ≤ O(min( (n/Δ) log L, √(nL log L) )).
In this section, we develop an algorithm which has the same regret bound as UCB1 and is (L, Δ)-testable. We state this result more generally, in terms of estimating expected payoffs; we believe it
may be of independent interest. The (L, Δ)-testability is then an easy corollary.
Since our analysis in this section is for the event-free setting, we can drop the subscript t from much
of our notation. Let p(u) denote the (time-invariant) expected payoff of arm u. Let p* = max_u p(u),
and let Δ(u) = p* − p(u) be the "suboptimality" of arm u. For round t, let μ_t(u) be the sample
average of arm u, and let n_t(u) be the number of times arm u has been played.
We will use a slightly modified algorithm UCB1 from [2], with a significantly extended analysis.
Recall that in each round t algorithm UCB1 chooses an arm u with the highest index I_t(u) =
μ_t(u) + r_t(u), where r_t(u) = √(8 log(t)/n_t(u)) is a term that we'll call the confidence radius, whose
meaning is that |p(u) − μ_t(u)| ≤ r_t(u) with high probability. For our purposes here it is instructive
to re-write the index as I_t(u) = μ_t(u) + θ·r_t(u) for some parameter θ. Also, to better bound the
early failure probability, we re-define the confidence radius as r_t(u) = √(8 log(t0 + t)/n_t(u))
for some parameter t0. We will denote this parameterized version by UCB1(θ, t0). Essentially, the
original analysis of UCB1 in [2] carries over; we omit the details.
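The parameterized index can be sketched as follows. This is our own minimal implementation of the described rule, not the paper's code; symbol names (theta, t0) mirror the reconstruction above:

```python
import math

class ParamUCB1:
    """Sketch of UCB1(theta, t0): index I_t(u) = mu_t(u) + theta * r_t(u),
    with confidence radius r_t(u) = sqrt(8 * log(t0 + t) / n_t(u))."""

    def __init__(self, n_arms, theta=6.0, t0=0):
        self.theta, self.t0 = theta, t0
        self.t = 0
        self.sums = [0.0] * n_arms   # total reward per arm
        self.counts = [0] * n_arms   # n_t(u)

    def select(self):
        for u, c in enumerate(self.counts):
            if c == 0:
                return u             # play every arm once first
        def index(u):
            mu = self.sums[u] / self.counts[u]
            r = math.sqrt(8 * math.log(self.t0 + self.t) / self.counts[u])
            return mu + self.theta * r
        return max(range(len(self.counts)), key=index)

    def update(self, arm, reward):
        self.t += 1
        self.sums[arm] += reward
        self.counts[arm] += 1
```

Larger t0 shrinks the early failure probability at the cost of a slightly wider radius, which is exactly the role it plays in Theorem 2 below.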
Our contribution concerns estimating the Δ(u)'s. We estimate the maximal expected reward p* via
the sample average of an arm that has been played most often. More precisely, in order to bound the
failure probability, we consider an arm that has been played most often in the last t/2 rounds. For a
given round t, let v_t be one such arm (ties broken arbitrarily), and let Δ̂_t(u) = μ_t(v_t) − μ_t(u)
be our estimate of Δ(u). We express the "quality" of this estimate as follows:
Theorem 2. Consider the stochastic n-armed bandit problem. Suppose algorithm UCB1(6, t0) has
been played for t steps, and t + t0 ≥ 32. Then with probability at least 1 − (t0 + t)^{−2}, for any arm
u we have
|Δ(u) − Δ̂_t(u)| < (1/4) Δ(u) + ε(t),     (2)
where ε(t) = O(√((n/t) log(t + t0))).
Remark. Either we know that Δ(u) is small, or we can approximate it up to a constant factor.
Specifically, if ε(t) < (1/2) Δ̂_t(u) then Δ(u) ≤ 2 Δ̂_t(u) ≤ 5 Δ(u); else Δ(u) ≤ 4 ε(t).
Let us convert UCB1(6, T) into an (L, Δ)-testable algorithm, as long as L ≥ Θ((n/Δ²) log T). The t-th
round best guess (G+_t, G−_t) is defined as G+_t = {u : Δ̂_t(u) ≤ Δ/4} and G−_t = {u : Δ̂_t(u) >
Δ/2}. Then the resulting algorithm is (L, Δ)-testable assuming that ε(L) ≤ Δ/4, where ε(t) is from
Theorem 2. The proof is in the full version [20].
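The guess sets follow directly from the gap estimates; a small sketch in our notation (delta_hat for Δ̂_t, Delta for Δ):

```python
def round_guess(delta_hat, Delta):
    """Sketch of the t-th round best guess from estimated gaps delta_hat[u]:
    G+ collects arms that look near-optimal (estimated gap at most Delta/4),
    G- collects arms that look clearly suboptimal (estimated gap above Delta/2)."""
    G_plus = {u for u, d in enumerate(delta_hat) if d <= Delta / 4}
    G_minus = {u for u, d in enumerate(delta_hat) if d > Delta / 2}
    return G_plus, G_minus
```

The gap between the two thresholds (Δ/4 versus Δ/2) is what absorbs the estimation error ε(L) ≤ Δ/4: truly optimal arms cannot be pushed past Δ/2, and truly Δ-suboptimal arms cannot fall below Δ/4.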
7 Upper and Lower Bounds
Plugging the classifier from Section 5 and the bandit algorithm from Section 6 into the meta-algorithm from Section 4, we obtain the following numerical guarantee.
Theorem 3. Consider an instance S of the eventful bandit problem with number of rounds
T, n arms, k events, minimum shift Δ_S, minimum suboptimality Δ, and concept class diameter dF. Assume that any two events are at least 2L rounds apart, where L = Θ((n/Δ_S²) log T).
Consider the BWC algorithm with parameter L and components classifier and bandit
as presented, respectively, in Sections 5 and 6. Then the regret of BWC is
R(T) ≤ ( (3k + 2dF)(n/Δ) + k(n/Δ_S²) ) log T.
While the linear dependence on n in this bound may seem large, note that without additional assumptions, regret must be linear in n, since each arm must be pulled at least once. In an actual
search engine application, the arms can be restricted to, say, the top ten results that match the query.
We now state two lower bounds for eventful bandit problems; the proofs are in the full version [20]. Theorem 4 shows that, in order to achieve regret that is logarithmic in the number of
rounds, a context-aware algorithm is necessary, assuming there is at least one event. Incidentally,
this lower bound can be easily extended to prove that, in our model, no algorithm can achieve logarithmic regret when an event oracle f is not contained in the concept class F.
Theorem 4. Consider the eventful bandit problem with number of rounds T, two arms, minimum
shift Δ_S and minimum suboptimality Δ, where Δ_S = Δ = ε, for an arbitrary ε ∈ (0, 1/2). For any
context-ignoring bandit algorithm A, there exists a problem instance with a single event such that
the regret is R_A(T) ≥ Ω(ε√T).
Theorem 5 proves that the linear dependence on k + dF in Theorem 3 is essentially unavoidable: if
we desire a regret bound that has logarithmic dependence on the number of rounds, then a linear
dependence on k + dF is necessary.
Theorem 5. Consider the eventful bandit problem with number of rounds T and concept class
diameter dF. Let A be an eventful bandit algorithm. Then there exists a problem instance with n
arms, k events, minimum shift Δ_S and minimum suboptimality Δ, where Δ_S = Δ = ε, for any given
values of k ≥ 1, n ≥ 3, and ε ∈ (0, 1/4), such that R_A(T) ≥ Ω(kn/ε) log(T/k).
Moreover, there exists a problem instance with two arms, a single event, event threshold Θ(1) and
minimum suboptimality Θ(1) such that the regret is R_A(T) ≥ Ω(max(T^{1/3}, dF)) log T.
8 Experiments
To truly demonstrate the benefits of BWC requires real-time manipulation of search results. Since we
did not have the means to deploy a system that monitors click/skip activity and correspondingly alters search results for live users, we describe a collection of experiments on synthetically generated
data.
We begin with a head-to-head comparison of BWC against a baseline UCB1 algorithm and show
that BWC's performance improves substantially upon UCB1. Next, we compare the performance of
these algorithms as we vary the fraction of intent-shifting queries: as the fraction increases, BWC's
performance improves even further upon prior approaches. Finally, we compare the performance
as we vary the number of features. While our theoretical results suggest that regret grows with the
number of features in the context space, in our experiments we surprisingly find that BWC is robust
to higher-dimensional feature spaces.
Setup: We synthetically generate data as follows. We assume that there are 100 queries, where the
total number of times these queries are posed is 3M. Each query has five search results for a user
to select from. If a query does not experience any events, i.e., it is not "intent-shifting", then
the optimal search result is fixed over time; otherwise the optimal search result may change. Only
10% of the queries are intent-shifting, with at most 10 events per such query. Due to the random
nature with which data is generated, regret is reported as an average over 10 runs. The event oracle
is an axis-parallel rectangle anchored at the origin, where points inside the box are negative and
points outside the box are positive. Thus, if there are two features, say query volume and query
abandonment rate, an event occurs if and only if both the volume and abandonment rate exceed
certain thresholds.
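The experimental event oracle and the corresponding safe-set test can be sketched directly. We follow the box description above (inside the origin-anchored box is negative), so the oracle below fires whenever any feature exceeds its threshold; the safe-set computation is our own construction for this class, and all thresholds are hypothetical:

```python
def rect_oracle(thresholds):
    """Event oracle for an origin-anchored axis-parallel box whose inside is
    labeled negative: a context is positive (an event) when it falls outside
    the box, i.e. when some feature exceeds its threshold."""
    def f(x):
        return any(xi > bi for xi, bi in zip(x, thresholds))
    return f

def in_safe_set(negatives, x):
    """Safe-set membership for this class: x is provably negative given the
    stored negatives iff every origin-anchored box containing all of them
    also contains x, i.e. x is dominated coordinate-wise by their
    per-coordinate maxima (the minimal consistent box)."""
    if not negatives:
        return False
    maxima = [max(coord) for coord in zip(*negatives)]
    return all(xi <= mi for xi, mi in zip(x, maxima))

# Hypothetical thresholds for (volume, abandonment-rate) features:
is_event = rect_oracle([1.0, 2.0])
```

Plugging in_safe_set into a safe classifier makes its false-positive count for this class easy to reason about: each false positive must push some per-coordinate maximum strictly upward.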
Bandit with Classifier (BWC): Figure 1(a) shows the average cumulative regret over time of three
algorithms. Our baseline comparison is UCB1, which assumes that the best search result is fixed
throughout. In addition, we compare to an algorithm we call ORA, which uses the event oracle
to reset UCB1 whenever an event occurs. We also compared to EXP3.S, but its performance was
dramatically worse and thus we have not included it in the figure.
In the early stages of the experiment, before any intent-shifting event has happened, UCB1 performs
the best. BWC's safe classifier makes many mistakes in the beginning and consequently pays the
price of believing that each query is experiencing an event when in fact it is not. As time progresses,
BWC's classifier makes fewer mistakes, and consequently knows when to reset UCB1 more accurately. UCB1 alone ignores the context entirely and thus incurs substantially larger cumulative regret
by the end.

Figure 1: (a) (Left; plot of cumulative regret versus time in impressions, omitted here) BWC's cumulative regret compared to UCB1 and ORA (UCB1 with an oracle indicating the exact locations of the intent-shifting events). (b) (Right, top table) Final regret (in thousands) as the fraction of intent-shifting queries varies; with more intent-shifting queries, BWC's advantage over prior approaches improves. (c) (Right, bottom table) Final regret (in thousands) as the number of features grows.

Table (b): fraction of intent-shifting queries
fraction   ORA    BWC    UCB1    EXP3.S
0          17.2   17.8   17.2     78.4
1/8        22.8   24.6   34.1    123.7
1/4        30.4   39.9  114.9    180.2
3/8        33.8   46.7   84.2    197.6
1/2        39.5   99.4  140.0    243.1

Table (c): number of features
features   ORA    BWC    UCB1    EXP3.S
10         21.9   23.1   32.3    111.6
20         23.2   24.4   33.5    109.4
30         21.9   22.9   31.1    112.5
40         22.8   23.7   37.4    121.3
Fraction of Intent-Shifting Queries: In the next experiment, we varied the fraction of intent-shifting queries. Figure 1(b) shows the result of changing the distribution over 0, 1/8, 1/4, 3/8 and
1/2 intent-shifting queries. If there are no intent-shifting queries, then UCB1's regret is the best. We
expect this outcome since BWC's classifier, because it is safe, initially assumes that all queries are
intent-shifting and thus needs time to learn that in fact no queries are intent-shifting. On the other
hand, BWC's regret dominates the other approaches, especially as the fraction of intent-shifting
queries grows. EXP3.S's performance is quite poor in this experiment, even when all queries are
intent-shifting. The reason is that even when a query is intent-shifting, there are at most 10 intent-shifting events, i.e., each query's intent is not shifting all the time.
With more intent-shifting queries, the expectation is that regret monotonically increases. In general,
this seems to be true in our experiment. There is, however, a decrease in regret going from 1/4 to 3/8
intent-shifting queries. We believe that this is due to the fact that each query has at most 10 intent-shifting events spread uniformly, and it is possible that there were fewer events, with potentially
smaller shifts in intent, in those runs. In other words, the standard deviation of the regret is large.
Over the ten 3/8 intent-shifting runs for ORA, BWC, UCB1 and EXP3.S, the standard deviation was
roughly 1K, 10K, 12K and 6K, respectively.
Number of Features: Finally, we comment on the performance of our approach as the number of
features grows. Our theoretical results suggest that BWC's performance should deteriorate as the
number of features grows. Surprisingly, BWC's performance is consistently close to the oracle's.
In Figure 1(c), we show the cumulative regret after 3M impressions as the dimensionality of the
context vector grows from 10 to 40 features. BWC's regret is consistently close to ORA's as the
number of features grows. On the other hand, UCB1's regret, though competitive, is worse than BWC's,
while EXP3.S's performance is poor across the board. Note that both UCB1's and EXP3.S's regret is
completely independent of the number of features. The standard deviation of the regret over the 10
runs is substantially lower than in the previous experiment. For example, with 10 features, the standard
deviation was 355, 1K, 5K and 4K for ORA, BWC, UCB1 and EXP3.S, respectively.
9 Future Work
The main question left for future work is testing this approach in a real setting. Since gaining access
to live traffic is difficult, it would be interesting to find ways to rewind the search logs to simulate
live traffic.
Acknowledgements. We thank Rakesh Agrawal, Alan Halverson, Krishnaram Kenthapadi, Robert
Kleinberg, Robert Schapire and Yogi Sharma for their helpful comments and suggestions.
References
[1] Deepak Agarwal, Bee-Chung Chen, Pradheep Elango, Nitin Motgi, Seung-Taek Park, Raghu Ramakrishnan, Scott Roy, and Joe Zachariah. Online models for content optimization. In 22nd Advances in Neural
Information Processing Systems (NIPS), 2008.
[2] Peter Auer, Nicol`o Cesa-Bianchi, and Paul Fischer. Finite-time analysis of the multiarmed bandit problem.
Machine Learning, 47(2-3):235?256, 2002.
[3] Peter Auer, Nicol`o Cesa-Bianchi, Yoav Freund, and Robert E. Schapire. The nonstochastic multiarmed
bandit problem. SIAM J. Comput., 32(1):48?77, 2002.
[4] Sergey Brin and Lawrence Page. The anatomy of a large-scale hypertextual Web search engine. Computer
Networks and ISDN Systems, 30(1?7):107?117, 1998.
[5] Christopher J. C. Burges, Tal Shaked, Erin Renshaw, Ari Lazier, Matt Deeds, Nicole Hamilton, and
Gregory N. Hullender. Learning to rank using gradient descent. In 22nd Intl. Conf. on Machine Learning
(ICML), 2005.
[6] Zhe Cao, Tao Qin, Tie-Yan Liu, Ming-Feng Tsai, and Hang Li. Learning to rank: from pairwise approach
to listwise approach. In 24th Intl. Conf. on Machine Learning (ICML), 2007.
[7] Nicol`o Cesa-Bianchi and G?abor Lugosi. Prediction, learning, and games. Cambridge University Press,
2006.
[8] William W. Cohen, Robert E. Schapire, and Yoram Singer. Learning to order things. J. of Artificial Intelligence Research, 10:243–270, 1999.
[9] Fernando Diaz. Integration of news content into web results. In 2nd Intl. Conf. on Web Search and Data Mining, pages 182–191, 2009.
[10] D. Fallows. Search engine users. Pew Internet and American Life Project, 2005.
[11] Yoav Freund, Raj Iyer, Robert E. Schapire, and Yoram Singer. An efficient boosting algorithm for combining preferences. J. of Machine Learning Research, 4:933–969, 2003.
[12] Elad Hazan and Nimrod Megiddo. Online learning with prior knowledge. In 20th Conference on Learning Theory (COLT), pages 499–513, 2007.
[13] Thorsten Joachims. Optimizing search engines using clickthrough data. In 8th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining (KDD), 2002.
[14] Sham M. Kakade, Shai Shalev-Shwartz, and Ambuj Tewari. Efficient bandit algorithms for online multiclass prediction. In 25th Intl. Conf. on Machine Learning (ICML), 2008.
[15] Jon M. Kleinberg. Bursty and hierarchical structure in streams. In 8th ACM SIGKDD Intl. Conf. on Knowledge Discovery and Data Mining (KDD), 2002.
[16] John Langford and Tong Zhang. The epoch-greedy algorithm for multi-armed bandits with side information. In 21st Advances in Neural Information Processing Systems (NIPS), 2007.
[17] Sandeep Pandey, Deepak Agarwal, Deepayan Chakrabarti, and Vanja Josifovski. Bandits for taxonomies: A model-based approach. In SIAM Intl. Conf. on Data Mining (SDM), 2007.
[18] Sandeep Pandey, Deepayan Chakrabarti, and Deepak Agarwal. Multi-armed bandit problems with dependent arms. In 24th Intl. Conf. on Machine Learning (ICML), 2007.
[19] Filip Radlinski, Robert Kleinberg, and Thorsten Joachims. Learning diverse rankings with multi-armed bandits. In 25th Intl. Conf. on Machine Learning (ICML), 2008.
[20] Umar Syed, Aleksandrs Slivkins, and Nina Mishra. Adapting to the shifting intent of search queries. Technical report, available from arXiv.
[21] Chih-Chun Wang, Sanjeev R. Kulkarni, and H. Vincent Poor. Bandit problems with side observations. IEEE Trans. on Automatic Control, 50(3):338–355, 2005.
[22] Jia Yuan Yu and Shie Mannor. Piecewise-stationary bandit problems with side observations. In 26th Intl. Conf. on Machine Learning (ICML), 2009.
Neural Implementation of Hierarchical Bayesian Inference by Importance Sampling
Thomas L. Griffiths
Department of Psychology
University of California, Berkeley
Berkeley, CA 94720
tom [email protected]
Lei Shi
Helen Wills Neuroscience Institute
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Abstract
The goal of perception is to infer the hidden states in the hierarchical process by
which sensory data are generated. Human behavior is consistent with the optimal statistical solution to this problem in many tasks, including cue combination
and orientation detection. Understanding the neural mechanisms underlying this
behavior is of particular importance, since probabilistic computations are notoriously challenging. Here we propose a simple mechanism for Bayesian inference
which involves averaging over a few feature detection neurons which fire at a rate
determined by their similarity to a sensory stimulus. This mechanism is based
on a Monte Carlo method known as importance sampling, commonly used in
computer science and statistics. Moreover, a simple extension to recursive importance sampling can be used to perform hierarchical Bayesian inference. We
identify a scheme for implementing importance sampling with spiking neurons,
and show that this scheme can account for human behavior in cue combination
and the oblique effect.
1 Introduction
Living creatures occupy an environment full of uncertainty due to noisy sensory inputs, incomplete
observations, and hidden variables. One of the goals of the nervous system is to infer the states of the
world given these limited data and make decisions accordingly. This task involves combining prior
knowledge with current data [1], and integrating cues from multiple sensory modalities [2]. Studies
of human psychophysics and animal behavior suggest that the brain is capable of solving these
problems in a way that is consistent with optimal Bayesian statistical inference [1, 2, 3, 4]. Moreover,
complex brain functions such as visual information processing involves multiple brain areas [5].
Hierarchical Bayesian inference has been proposed as a computational framework for modeling such
processes [6]. Identifying neural mechanisms that could support hierarchical Bayesian inference is
important, since probabilistic computations can be extremely challenging. Just representing and
updating distributions over large numbers of hypotheses is computationally expensive.
Much effort has recently been devoted towards proposing possible mechanisms based on known
neuronal properties. One prominent approach to explaining how the brain uses population activities
for probabilistic computations has been done in the "Bayesian decoding" framework [7]. In this
framework, it is assumed that the firing rate of a population of neurons, r, can be converted to a
probability distribution over stimuli, p(s|r), by applying Bayesian inference, where the likelihood
p(r|s) reflects the probability of that firing pattern given the stimulus s. A firing pattern thus encodes
a distribution over stimuli, which can be recovered through Bayesian decoding. The problem of
performing probabilistic computations then reduces to identifying a set of operations on firing rates
r that result in probabilistically correct operations on the resulting distributions p(s|r). For example,
[8] showed that when the likelihood p(r|s) is an exponential family distribution with linear sufficient
statistics, adding two sets of firing rates is equivalent to multiplying probability distributions.
In this paper, we take a different approach, allowing a population of neurons to encode a probability
distribution directly. Rather than relying on a separate decoding operation, we assume that the activity of each neuron translates directly to the weight given to the optimal stimulus for that neuron in the
corresponding probability distribution. We show how this scheme can be used to perform Bayesian
inference, and how simple extensions of this basic idea make it possible to combine sources of information and to propagate uncertainty through multiple layers of random variables. In particular,
we focus on one Monte Carlo method, namely importance sampling with the prior as a surrogate,
and show how recursive importance sampling approximates hierarchical Bayesian inference.
2 Bayesian inference and importance sampling
Given a noisy observation x, we can recover the true stimulus x^* by using Bayes' rule to compute the posterior distribution

$$p(x^* \mid x) = \frac{p(x^*)\,p(x \mid x^*)}{\int_{x^*} p(x^*)\,p(x \mid x^*)\,dx^*} \qquad (1)$$
where p(x^*) is the prior distribution over stimulus values, and p(x | x^*) is the likelihood, indicating the probability of the observation x if the true stimulus value is x^*. A good guess for the value of x^* is the expectation of x^* given x. In general, we are often interested in the expectation of some function f(x^*) over the posterior distribution p(x^* | x), E[f(x^*) | x]. The choice of f(x^*) depends on the task; for example, in noise reduction, where x^* itself is of interest, we can take f(x^*) = x^*. However, evaluating expectations over the posterior distribution can be challenging: it requires computing a posterior distribution and often a multidimensional integration. The expectation E[f(x^*) | x] can be approximated using a Monte Carlo method known as importance sampling. In its general form, importance sampling approximates the expectation by using a set of samples from some surrogate distribution q(x^*) and assigning those samples weights proportional to the ratio p(x^* | x)/q(x^*):
$$E[f(x^*) \mid x] = \int f(x^*)\,\frac{p(x^* \mid x)}{q(x^*)}\,q(x^*)\,dx^* \;\approx\; \frac{1}{M}\sum_{i=1}^{M} f(x_i^*)\,\frac{p(x_i^* \mid x)}{q(x_i^*)}, \qquad x_i^* \sim q(x^*) \qquad (2)$$
If we choose q(x^*) to be the prior p(x^*), the weights reduce to the likelihood p(x | x^*), giving

$$E[f(x^*) \mid x] \;\approx\; \frac{1}{M}\sum_{i=1}^{M} f(x_i^*)\,\frac{p(x_i^* \mid x)}{p(x_i^*)} = \frac{1}{M}\sum_{i=1}^{M} f(x_i^*)\,\frac{p(x, x_i^*)}{p(x_i^*)\,p(x)} = \frac{1}{M}\sum_{i=1}^{M} f(x_i^*)\,\frac{p(x \mid x_i^*)}{p(x)} \;\approx\; \frac{\sum_{i=1}^{M} f(x_i^*)\,p(x \mid x_i^*)}{\sum_{i=1}^{M} p(x \mid x_i^*)}, \qquad x_i^* \sim p(x^*) \qquad (3)$$

where the final step uses the approximation $p(x) = \int p(x \mid x^*)\,p(x^*)\,dx^* \approx \frac{1}{M}\sum_i p(x \mid x_i^*)$.
Thus, importance sampling provides a simple and efficient way to perform Bayesian inference,
approximating the posterior distribution with samples from the prior weighted by the likelihood.
Recent work also has suggested that importance sampling might provide a psychological mechanism
for performing probabilistic inference, drawing on its connection to exemplar models [9].
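The prior-as-proposal scheme of Eq. 3 is easy to check numerically. The following is a minimal sketch (not from the paper; the function name and parameter values are illustrative) for a one-dimensional Gaussian prior and Gaussian likelihood, where the exact posterior mean is available in closed form for comparison:

```python
import math
import random

def importance_posterior_mean(x_obs, prior_mu, prior_sigma, lik_sigma, n=200_000, seed=0):
    """Approximate E[x* | x] as in Eq. 3: sample x*_i from the prior and
    weight each sample by the (unnormalized) likelihood p(x | x*_i)."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        xs = rng.gauss(prior_mu, prior_sigma)                      # x*_i ~ p(x*)
        w = math.exp(-(x_obs - xs) ** 2 / (2 * lik_sigma ** 2))    # proportional to p(x | x*_i)
        num += xs * w
        den += w
    return num / den

# Conjugate-Gaussian check: the exact posterior mean is
# (prior_mu/prior_sigma**2 + x_obs/lik_sigma**2) / (1/prior_sigma**2 + 1/lik_sigma**2).
```

With a N(0, 1) prior, unit likelihood noise, and observation x = 2, the exact posterior mean is 1, and the estimate converges to it as n grows.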
3 Possible neural implementations of importance sampling
The key components of an importance sampler can be realized in the brain if: 1) there are feature detection neurons with preferred-stimulus tuning curves proportional to the likelihood p(x | x_i^*); 2) the frequency of these feature detection neurons is determined by the prior p(x^*); and 3) divisive normalization can be realized by some biological mechanism. In this section, we first describe a radial basis function network implementing importance sampling, then discuss the feasibility of the three assumptions mentioned above. The model is then extended to networks of spiking neurons.
3.1 Radial basis function (RBF) networks
Radial basis function (RBF) networks are a multi-layer neural network architecture in which the
hidden units are parameterized by locations in a latent space x_i^*. On presentation of a stimulus x,
[Figure 1 graphic: a stimulus S_x drives RBF neurons with tuning curves p(S_x | x_i^*), whose preferred values x_i^* ~ p(x); an inhibitory neuron pools their activities for lateral normalization; output neurons compute the sums of f_1(x_i^*) p(S_x | x_i^*) and f_2(x_i^*) p(S_x | x_i^*), divided by Σ_i p(S_x | x_i^*), yielding E[f_1(x) | S_x] and E[f_2(x) | S_x].]
Figure 1: Importance sampler realized by radial basis function network. For details see Section 3.1.
these hidden units are activated according to a function that depends only on the distance ||x − x_i^*||, e.g., exp(−||x − x_i^*||² / 2σ²), similar to the tuning curve of a neuron. RBF networks are popular because they have a simple structure with a clear interpretation and are easy to train. Using RBF networks to model the brain is not a new idea: similar models have been proposed for pattern recognition [10] and as psychological accounts of human category learning [11].
Implementing importance sampling with RBF networks is straightforward. An RBF neuron is recruited for a stimulus value x_i^* drawn from the prior (Fig. 1). The neuron's synapses are organized so that its tuning curve is proportional to p(x | x_i^*). For a Gaussian likelihood, the peak firing rate is reached at the preferred stimulus x = x_i^* and diminishes as ||x − x_i^*|| increases. The ith RBF neuron makes a synaptic connection to output neuron j with strength f_j(x_i^*), where f_j is a function of interest. The output units also receive input from an inhibitory neuron that sums over all RBF neurons' activities. Such an RBF network produces output exactly in the form of Eq. 3, with the activation of the output units corresponding to E[f_j(x^*) | x].
Training RBF networks is practical for neural implementation. Unlike the multi-layer perceptron
that usually requires global training of the weights, RBF networks are typically trained in two stages.
First, the radial basis functions are determined using unsupervised learning, and then, weights to the
outputs are learned using supervised methods. The first stage is even easier in our formulation,
because RBF neurons simply represent samples from the prior, independent of the second stage
later in development. Moreover, the performance of RBF networks is relatively insensitive to the
precise form of the radial basis functions [12], providing some robustness to differences between the Bayesian likelihood p(x | x_i^*) and the activation function in the network. RBF networks also produce
sparse coding, because localized radial basis likelihood functions mean only a few units will be
significantly activated for a given input x.
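To make the correspondence with Eq. 3 concrete, here is a small sketch of the Fig. 1 architecture (illustrative code, not from the paper): hidden units are placed at prior samples, their activations follow Gaussian "tuning curves", and each output unit computes a normalized weighted sum for its own readout function f_j.

```python
import math
import random

class RBFImportanceSampler:
    """Hidden units at x*_i ~ p(x*); the activation of unit i is proportional
    to p(x | x*_i); output j computes sum_i f_j(x*_i) a_i / sum_i a_i (Eq. 3)."""

    def __init__(self, sample_prior, lik_sigma, n_units=20_000, seed=1):
        rng = random.Random(seed)
        self.centers = [sample_prior(rng) for _ in range(n_units)]
        self.sigma = lik_sigma

    def outputs(self, x, readouts):
        acts = [math.exp(-(x - c) ** 2 / (2 * self.sigma ** 2)) for c in self.centers]
        z = sum(acts)  # lateral (divisive) normalization by the inhibitory pool
        return [sum(f(c) * a for c, a in zip(self.centers, acts)) / z for f in readouts]
```

For a N(0, 1) prior and unit likelihood noise, presenting x = 2 with readouts f_1(x) = x and f_2(x) = x² gives outputs close to the exact posterior mean E[x* | x] = 1 and second moment E[x*² | x] = 1.5.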
3.2 Tuning curves, priors and divisive normalization
We now examine the neural correlates of the three components of the RBF model. First, responses of cortical neurons to stimuli are often characterized by receptive fields and tuning curves, where receptive fields specify the domain within a stimulus feature space that modulates a neuron's response, and tuning curves detail how a neuron's responses change with different feature values. A typical tuning curve (like orientation tuning in V1 simple cells) has a bell shape that peaks at the neuron's preferred stimulus parameter and diminishes as the parameter diverges. These neurons effectively measure the likelihood p(x | x_i^*), where x_i^* is the preferred stimulus.
Second, importance sampling requires neurons with preferred stimuli x_i^* to appear with frequency proportional to the prior distribution p(x^*). This can be realized if the number of neurons representing x^* is roughly proportional to p(x^*). While systematic study of the distribution of neurons over their preferred stimuli is technically challenging, there are cases where this assumption seems to hold. For example, research on the "oblique effect" supports the idea that the distribution of orientation
tuning curves in V1 is proportional to the prior. Electrophysiology [13], optical imaging [14] and
fMRI studies [15] have found that there are more V1 neurons tuned to cardinal orientations than
to oblique orientations. These findings are in agreement with the prior distribution of orientations
of lines in the visual environment. Other evidence comes from motor areas. Repetitive stimulation
of a finger expands its corresponding cortical representation in somatosensory area [16], suggesting
more neurons are recruited to represent this stimulus. Alternatively, recruiting neurons x_i^* according to the prior distribution can be implemented by modulating the feature detection neurons' firing rates.
This strategy also seems to be used by the brain: studies in parietal cortex [17] and superior colliculus [18] show that increased prior probability at a particular location results in stronger firing for
neurons with receptive fields at that location.
Third, divisive normalization is a critical component in many neural models, notably in the study of
attention modulation [19, 20]. It has been suggested that biophysical mechanisms such as shunting
inhibition and synaptic depression might account for normalization and gain control [10, 21, 22].
Moreover, local interneurons [23] act as modulator for pooled inhibitory inputs and are good candidates for performing normalization. Our study makes no specific claims about the underlying
biophysical processes, but gains support from the literature suggesting that there are plausible neural mechanisms for performing divisive normalization.
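The divisive normalization assumed here is the standard pooled computation a_i / (σ + Σ_j a_j); the following toy sketch (illustrative, with σ a hypothetical semi-saturation constant not taken from the paper) shows how, with σ = 0, it turns unnormalized likelihood activities into exactly the self-normalized importance weights of Eq. 3:

```python
def divisive_normalization(activities, sigma=0.0):
    """Normalize a pool of non-negative activities; with sigma = 0 the result
    sums to 1 and equals the self-normalized importance weights of Eq. 3."""
    pool = sigma + sum(activities)
    return [a / pool for a in activities]
```

A nonzero σ yields the saturating gain control commonly used in attention models; σ = 0 recovers exact normalization.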
3.3 Importance sampling by Poisson spiking neurons
Neurons communicate mostly by spikes rather than continuous membrane potential signals. Poisson
spiking neurons play an important role in other analyses of systems for representing probabilities [8].
Poisson spiking neurons can also be used to perform importance sampling if we have an ensemble of neurons with firing rates λ_i proportional to p(x | x_i^*), with the values of x_i^* drawn from the prior. To show this we need a property of Poisson distributions: if y_i ~ Poisson(λ_i) and Y = Σ_i y_i, then Y ~ Poisson(Σ_i λ_i) and (y_1, y_2, ..., y_m | Y = n) ~ Multinomial(n, λ_i / Σ_i λ_i). This further implies that E[y_i / Y | Y = n] = λ_i / Σ_i λ_i. Assume a neuron tuned to stimulus x_i^* emits spikes r_i ~ Poisson(c · p(x | x_i^*)), where c is any positive constant. An average of a function f(x_i^*) using the number of spikes produced by the corresponding neurons yields Σ_i f(x_i^*) r_i / Σ_i r_i, whose expectation is
"
#
"
#
P
X
X
X
f (x? )p(x|x?i )
r
r
i
i
? P
?
? Pc?i
P
E
f (xi )
(4)
=
f (xi )E
=
f (xi )
= iP i
?
j rj
j rj
j c?j
i p(x|xi )
i
i
i
which is thus an unbiased estimator of the importance sampling approximation to the posterior expectation. The variance of this estimator decreases as the population activity n = Σ_i r_i increases, because var[r_i / n] ∝ 1/n. Thus Poisson spiking neurons, if plugged into an RBF network, can perform importance sampling and give similar results to "neurons" with analog output, as we confirm later in the paper through simulations.
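A quick simulation (illustrative code, not from the paper; the constant c and unit count are assumed values) confirms that spike-count averaging matches the analog estimator: each unit fires r_i ~ Poisson(c · p(x | x_i^*)) and the readout is the spike-weighted average of Eq. 4.

```python
import math
import random

def poisson_sample(rng, lam):
    """Knuth's multiplication method; adequate for the small rates used here."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def spiking_posterior_mean(x_obs, c=20.0, n_units=5_000, seed=2):
    """Estimate E[x* | x] from spike counts, prior N(0, 1), unit likelihood noise."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n_units):
        xs = rng.gauss(0.0, 1.0)                          # x*_i ~ p(x*)
        rate = c * math.exp(-(x_obs - xs) ** 2 / 2.0)     # c * p(x | x*_i), constants absorbed by c
        r = poisson_sample(rng, rate)
        num += xs * r
        den += r
    return num / den
```

For x = 2 the exact posterior mean is 1; the spike-based estimate fluctuates around it, with the fluctuations shrinking as the total spike count grows.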
4 Hierarchical Bayesian inference and multi-layer importance sampling
Inference tasks solved by the brain often involve more than one random variable, with complex
dependency structures between those variables. For example, visual information process in primates involves dozens of subcortical areas that interconnect in a hierarchical structure containing
two major pathways [5]. Hierarchical Bayesian inference has been proposed as a solution to this
problem, with particle filtering and belief propagation as possible algorithms implemented by the
brain [6]. However, few studies have proposed neural models that are capable of performing hierarchical Bayesian inference (although see [24]). We show how a multi-layer neural network can
perform such computations using importance samplers (Fig. 1) as building blocks.
4.1 Generative models and hierarchical Bayesian inference
Generative models describe the causal process by which data are generated, assigning a probability
distribution to each step in that process. To understand brain function, it is often helpful to identify
the generative model that determines how stimuli to the brain Sx are generated. The brain then has
to reverse the generative model to recover the latent variables expressed in the data (see Fig. 2). The
direction of inference is thus the opposite of the direction in which the data are generated.
[Figure 2 diagram: a generative chain Z → Y → X → S_x with conditionals p(y_j | z_k), p(x_i | y_j), p(S_x | x_i); the inference process runs in the reverse direction via p(x_i | S_x), p(y_j | x_i), p(z_k | y_j).]
Figure 2: A hierarchical Bayesian model. The generative model specifies how each variable is
generated (in circles), while inference reverses this process (in boxes). Sx is the stimulus presented
to the nervous system, while X, Y , and Z are latent variables at increasing levels of abstraction.
In the case of a hierarchical Bayesian model, as shown in Fig. 2, the quantity of interest is the posterior expectation of some function f(z) of a high-level latent variable Z given the stimulus S_x, E[f(z) | S_x] = ∫ f(z) p(z | S_x) dz. After repeatedly using the importance sampling trick (see Eq. 5), this hierarchical Bayesian inference problem can be decomposed into three importance samplers with values x_i^*, y_j^* and z_k^* drawn from the priors.
$$
\begin{aligned}
E[f(z) \mid S_x] &= \int f(z)\,p(z \mid y)\left[\int p(y \mid x)\,p(x \mid S_x)\,dx\right] dy\,dz \\
&\approx \int\!\!\int f(z)\,p(z \mid y)\,\sum_i p(y \mid x_i^*)\,\frac{p(S_x \mid x_i^*)}{\sum_i p(S_x \mid x_i^*)}\,dy\,dz, && x_i^* \sim p(x) \\
&= \sum_i \left[\int\!\!\int f(z)\,p(z \mid y)\,p(y \mid x_i^*)\,dy\,dz\right] \frac{p(S_x \mid x_i^*)}{\sum_i p(S_x \mid x_i^*)} \\
&\approx \sum_j \left[\int f(z)\,p(z \mid y_j^*)\,dz\right] \sum_i \frac{p(x_i^* \mid y_j^*)}{\sum_j p(x_i^* \mid y_j^*)}\,\frac{p(S_x \mid x_i^*)}{\sum_i p(S_x \mid x_i^*)}, && y_j^* \sim p(y) \\
&\approx \sum_k f(z_k^*) \sum_j \frac{p(y_j^* \mid z_k^*)}{\sum_k p(y_j^* \mid z_k^*)}\,\sum_i \frac{p(x_i^* \mid y_j^*)}{\sum_j p(x_i^* \mid y_j^*)}\,\frac{p(S_x \mid x_i^*)}{\sum_i p(S_x \mid x_i^*)}, && z_k^* \sim p(z) \qquad (5)
\end{aligned}
$$
This result relies on recursively applying importance sampling to the integral, with each recursion
resulting in an approximation to the posterior distribution of another random variable. This recursive
importance sampling scheme can be used in a variety of graphical models. For example, tracking a
stimulus over time is a natural extension where an additional observation is added at each level of
the generative model. We evaluate this scheme in several generative models in Section 5.
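The recursion of Eq. 5 can be checked on a small linear-Gaussian chain where the exact posterior is available. The sketch below is illustrative code, not from the paper: the assumed chain is z ~ N(0,1), y|z ~ N(z,1), x|y ~ N(y,1), S_x|x ~ N(x,1), for which E[z | S_x] = S_x/4 in closed form. Each layer is sampled from its prior, and the per-sample normalization p(x_i^* | y_j^*) / Σ_j p(x_i^* | y_j^*) plays the role of the synaptic weights.

```python
import math
import random

def npdf(v, mu):
    # unit-variance Gaussian density; the conditional used at every level
    return math.exp(-(v - mu) ** 2 / 2.0) / math.sqrt(2 * math.pi)

def recursive_is_posterior_mean(s_x, n=800, seed=3):
    """Three stacked importance samplers (Eq. 5) estimating E[z | S_x]."""
    rng = random.Random(seed)
    xs = [rng.gauss(0.0, math.sqrt(3.0)) for _ in range(n)]  # x*_i ~ p(x) = N(0, 3)
    ys = [rng.gauss(0.0, math.sqrt(2.0)) for _ in range(n)]  # y*_j ~ p(y) = N(0, 2)
    zs = [rng.gauss(0.0, 1.0) for _ in range(n)]             # z*_k ~ p(z) = N(0, 1)

    wx = [npdf(s_x, x) for x in xs]
    ax = [w / sum(wx) for w in wx]                           # activity ~ p(x*_i | S_x)

    lx = [[npdf(x, y) for y in ys] for x in xs]              # lx[i][j] = p(x*_i | y*_j)
    cx = [ax[i] / sum(lx[i]) for i in range(n)]              # weight / per-i normalizer
    ay = [sum(c * row[j] for c, row in zip(cx, lx)) for j in range(n)]  # ~ p(y*_j | S_x)

    ly = [[npdf(y, z) for z in zs] for y in ys]              # ly[j][k] = p(y*_j | z*_k)
    cy = [ay[j] / sum(ly[j]) for j in range(n)]
    az = [sum(c * row[k] for c, row in zip(cy, ly)) for k in range(n)]  # ~ p(z*_k | S_x)

    return sum(z * a for z, a in zip(zs, az)) / sum(az)      # E[z | S_x]
```

For S_x = 2 the exact answer is 0.5; the nested estimator converges to it as n grows, with each level's activities summing to one by construction.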
4.2 Neural implementation of the multi-layer importance sampler
The decomposition of hierarchical inference into recursive importance sampling (Eq. 5) gives rise to a multi-layer neural network implementation (see Fig. 3a). The input layer X is similar to that in Fig. 1, composed of feature detection neurons with output proportional to the likelihood p(S_x | x_i^*). Their output, after presynaptic normalization, is fed into a layer corresponding to the Y variables, with synaptic weights p(x_i^* | y_j^*) / Σ_j p(x_i^* | y_j^*). The response of neuron y_j^*, summing over synaptic inputs, approximates p(y_j^* | S_x). Similarly, the response of z_k^* ≈ p(z_k^* | S_x), and the activities of these neurons are pooled to compute E[f(z) | S_x]. Note that, at each level, x_i^*, y_j^* and z_k^* are sampled from prior distributions. Posterior expectations involving any random variable can be computed because the neuron activities at each level approximate the posterior density. A single pool of neurons can also feed activation to multiple higher levels. Using the visual system as an example (Fig. 3b), such a multi-layer importance sampling scheme could be used to account for hierarchical inference in divergent pathways by projecting a set of V2 cells to both the MT and V4 areas with corresponding synaptic weights.
[Figure 3 graphic: (a) a three-layer network in which GRBF neurons x_i^* ~ p(x) with activities p(S_x | x_i^*) / Σ_i p(S_x | x_i^*) undergo lateral normalization and project to units y_j^* ~ p(y) through synaptic weights p(x_i^* | y_j^*) / Σ_j p(x_i^* | y_j^*), which in turn project to units z_k^* ~ p(z) through weights p(y_j^* | z_k^*) / Σ_k p(y_j^* | z_k^*); pooling f(z_k^*) over the top layer yields E[f(z) | S_x]. (b) V1 projecting to V2, which feeds both MT ("where" pathway) and V4 ("what" pathway) with analogous synaptic weights.]
Figure 3: a) Multi-layer importance sampler for hierarchical Bayesian inference. b) Possible implementation in the dorsal-ventral visual inference pathways, with multiple higher levels receiving input from one lower level. Note that the arrows in the figure indicate the direction of inference, which is opposite to that of the generative model.
5 Simulations
In this section we examine how well the mechanisms introduced in the previous sections account
for human behavioral data for two perceptual phenomena: cue combination and the oblique effect.
5.1 Haptic-visual cue combination
When sensory cues come from multiple modalities, the nervous system is able to combine those cues
optimally in the way dictated by Bayesian statistics [2]. Fig. 4a shows the setup of an experiment
where a subject measures the height of a bar through haptic and visual inputs. The object's visual input is manipulated so that the visual cues can be inconsistent with the haptic cues, and the visual noise can be adjusted to different levels; i.e., the visual cue follows x_V ~ N(S_V, σ_V²) and the haptic cue follows x_H ~ N(S_H, σ_H²), where S_V, S_H and σ_V² are controlled parameters. The upper panel of Fig. 4d shows the percentage of trials in which participants report the comparison stimulus (consistent visual/haptic cues from 45-65 mm) as larger than the standard stimulus (inconsistent visual/haptic cues, S_V = 60 mm and S_H = 50 mm). As the visual noise increases, the haptic input receives larger weight in decision making and the percentage curve shifts towards S_H, consistent with Bayesian statistics.
Several studies have suggested that this form of cue combination could be implemented by population coding [2, 8]. In particular, [8] made the interesting observation that, for Poisson-like spiking neurons, summing the firing activities of two populations is the optimal strategy. This model falls under the Bayesian decoding framework and requires the network to be constructed so that the two populations have exactly the same number of neurons and precise one-to-one connections between them, with each connected pair of neurons having exactly the same tuning curves. We present an alternative solution based on importance sampling that encodes the probability distribution directly in a population of neurons.
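For reference, the target of the computation can be stated compactly: with uniform priors and Gaussian likelihoods, the posterior over the combined height multiplies the two cue likelihoods, so its mean is the reliability-weighted average of the cues. The sketch below is a minimal illustration of that target using the same prior-as-proposal trick (assumed, illustrative parameter values; this is not the paper's spiking network, which is developed next):

```python
import math
import random

def combined_height_estimate(s_v, s_h, sigma_v, sigma_h, lo=40.0, hi=70.0,
                             n=100_000, seed=4):
    """Self-normalized importance sampling with a uniform prior over height and
    Gaussian cue likelihoods; approximates E[x_C | S_V, S_H]."""
    rng = random.Random(seed)
    num = den = 0.0
    for _ in range(n):
        xc = rng.uniform(lo, hi)                              # x_C ~ uniform prior
        w = (math.exp(-(s_v - xc) ** 2 / (2 * sigma_v ** 2))  # p(S_V | x_C)
             * math.exp(-(s_h - xc) ** 2 / (2 * sigma_h ** 2)))  # p(S_H | x_C)
        num += xc * w
        den += w
    return num / den

# Closed form for comparison:
# (s_v/sigma_v**2 + s_h/sigma_h**2) / (1/sigma_v**2 + 1/sigma_h**2)
```

With S_V = 60 mm, S_H = 50 mm, σ_V = 2 and σ_H = 1, the precision-weighted answer is 52 mm; increasing σ_V pushes the estimate toward the haptic cue, mirroring the shift in Fig. 4d.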
The importance sampling solution approximates the posterior expectation of the bar's height x_C^* given S_V and S_H. Sensory inputs are channeled in through x_V and x_H (Fig. 4b). Because the sensory input varies over a small range (45-65 mm in [2]), we assume the priors p(x_C), p(x_V) and p(x_H) are uniform. It is straightforward to approximate the posterior p(x_V | S_V) using importance sampling:

$$p(x_V = x_V^* \mid S_V) = E[\mathbf{1}(x_V = x_V^*) \mid S_V] \approx \frac{p(S_V \mid x_V^*)}{\sum_i p(S_V \mid x_{V,i}^*)} \approx \frac{r_V}{\sum_i r_{V,i}}, \qquad x_{V,i}^* \sim p(x_V) \qquad (6)$$

where r_{V,i} ~ Poisson[c · p(S_V | x_{V,i}^*)] is the number of spikes emitted by neuron x_{V,i}^*. A similar strategy applies to p(x_H | S_H). The posterior p(x_C | S_V, S_H), however, is not trivial, since multiplication
[Figure 4 graphic: (a) the experimental apparatus (stereo glasses, opaque mirror, CRT, force-feedback devices) adapted from M. O. Ernst and M. S. Banks, Nature (2002); (b) the generative model in which x_C generates x_V and x_H, observed through S_V and S_H; (c) importance sampling from visual-haptic examples {x_{V,i}^*}, {x_{H,j}^*} with candidate values {x_{C,k}^*} along the diagonal of the (x_V, x_H) plane; (d) psychometric curves at noise levels 0%, 67%, 133% and 200%, human data (upper) vs. simulation (lower).]
Figure 4: (a) Experimental setup [2]. (b) Generative model. S_V and S_H are the sensory stimuli, X_V and X_H the values along the visual and haptic dimensions, and X_C the combined estimate of object height. (c) Illustration of importance sampling using two sensory arrays {x*_{V,i}}, {x*_{H,j}}. The transparent ellipses indicate the tuning curves of high-level neurons centered on values x*_{C,k} over x_V and x_H. The big ellipse represents the manipulated input with inconsistent sensory input and a different variance structure. Bars at the center of the opaque ellipses indicate the relative firing rates of the x_C neurons, proportional to p(x*_{C,k}|S_V, S_H). (d) Human data and simulation results.
of spike trains is needed:

$$p(x_C = x^*_C \mid S_V, S_H) = \int \mathbf{1}(x_C = x^*_C)\, p(x_C \mid x_V, x_H)\, p(x_V \mid S_V)\, p(x_H \mid S_H)\, dx_V\, dx_H \approx \sum_i \sum_j \frac{r_{V,i}}{\sum_i r_{V,i}}\, \frac{r_{H,j}}{\sum_j r_{H,j}}\, p(x_C = x^*_C \mid x^*_{V,i}, x^*_{H,j}) \qquad (7)$$
Fortunately, the experiment gives an important constraint, namely that subjects were not aware of the manipulation of the visual input. Thus, the values x*_{C,k} employed in the computation are sampled from normal perceptual conditions, namely consistent visual and haptic inputs (x_V = x_H) and a normal variance structure (transparent ellipses in Fig. 4c, on the diagonal). Therefore, the random variables {x_V, x_H} effectively become one variable x_{V,H}, and the values x*_{V,H,i} are composed of samples drawn from x_V and x_H independently. Applying importance sampling,
$$p(x_C = x^*_C \mid S_V, S_H) \approx \frac{\sum_i p(x^*_{V,i} \mid x^*_C)\, r_{V,i} + \sum_j p(x^*_{H,j} \mid x^*_C)\, r_{H,j}}{\sum_i r_{V,i} + \sum_j r_{H,j}} \qquad (8)$$

$$\mathbb{E}[x_C \mid S_V, S_H] \approx \sum_k x^*_{C,k}\, r_{C,k} \Big/ \sum_k r_{C,k} \qquad (9)$$

where r_{C,k} ∼ Poisson(c · p(x*_{C,k} | S_V, S_H)) and x*_{C,k} ∼ p(x_C). Compared with Eq. 6, the inputs x*_{V,i} and x*_{H,j} are treated as coming from one population in Eq. 8; r_{V,i} and r_{H,j} are weighted differently only because of the different observation noise. Eq. 9 is applicable for manipulated sensory input (in Fig. 4c, the ellipse off the diagonal). The simulation results (for an average of 500 trials) are shown in the lower panel of Fig. 4d, compared with human data in the upper panel. Two parameters, the noise levels σ_V and σ_H, are optimized to fit within-modality discrimination data (see [2], Fig. 3a). {x*_{V,i}}, {x*_{H,j}} and {x*_{C,k}} consist of 20 independently drawn examples each, and the total firing rate of each set of neurons is limited to 30. The simulations produce a close match to human behavior.
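The computation in Eqs. 6-9 can be sketched in a few lines of NumPy. Everything numerical below (noise levels, gain, tuning width, sample counts) is an illustrative assumption, not the fitted values behind Fig. 4d:

```python
import numpy as np

rng = np.random.default_rng(0)

def gauss(x, mu, sigma):
    """Unnormalized Gaussian likelihood."""
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

sigma_V, sigma_H, sigma_C = 2.0, 4.0, 2.0  # assumed noise/tuning widths
S_V, S_H = 60.0, 50.0                      # conflicting stimuli (mm)
c = 100.0                                  # assumed Poisson gain

# Stored examples drawn from the uniform priors on [45, 65] mm.
x_vh = rng.uniform(45.0, 65.0, 20)  # pooled population {x*_{V,H,i}}
x_c = rng.uniform(45.0, 65.0, 20)   # candidate combined heights {x*_{C,k}}

# Spike counts with rates proportional to the sensory likelihoods (Eq. 6).
r_V = rng.poisson(c * gauss(S_V, x_vh, sigma_V))
r_H = rng.poisson(c * gauss(S_H, x_vh, sigma_H))

# Eq. 8: posterior over candidate heights, weighting each stored example
# by its spike count and its similarity to the candidate x*_{C,k}.
sim = gauss(x_vh[None, :], x_c[:, None], sigma_C)
den = max(r_V.sum() + r_H.sum(), 1)
p_c = (sim * (r_V + r_H)[None, :]).sum(axis=1) / den

# Eq. 9: posterior-mean estimate from spikes r_{C,k} ~ Poisson(c * p_c).
r_C = rng.poisson(c * p_c)
total = r_C.sum()
x_hat = (x_c * r_C).sum() / total if total > 0 else x_c.mean()
```

With these settings, the estimate x_hat should typically fall between S_H and S_V, pulled toward the more reliable (visual) cue, mirroring the qualitative pattern in Fig. 4d.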
5.2 The oblique effect
The oblique effect describes the phenomenon that people show greater sensitivity to bars with horizontal or vertical (0°/90°) orientations than to 'oblique' orientations. Fig. 5a shows an experimental setup where subjects exhibited higher sensitivity in detecting the direction of rotation of a bar when the reference bar to which it was compared was in one of these cardinal orientations. Fig. 5b shows the generative model for this detection problem. The top-level binary variable D randomly chooses a direction of rotation. Conditioning on D, the amplitude of rotation Δθ is generated from a truncated
[Figure 5 panels: (a) the orientation detection task ("Clockwise or counterclockwise?") with a reference bar and a test bar, and relative detection sensitivity as a function of orientation, adopted from Furmanski & Engel (2000); (b) the generative model: p(D = 1) = p(D = −1) = 0.5, Δθ | D ∼ N_T(D)(0, σ²_Δθ), θ | Δθ, r ∼ N(Δθ + r, σ²_θ), S_θ | θ ∼ N(θ, σ²_S); (c) relative detection sensitivity under a prior p(θ) that is either Uni([0, π]) or the mixture (1 − k)/2 [N(0, σ²) + N(π/2, σ²)] + k Uni([0, π]).]
Figure 5: (a) Orientation detection experiment. The oblique effect is shown in the lower panel: greater sensitivity to orientations near the cardinal directions. (b) Generative model. (c) The oblique effect emerges from our model, but depends on having the correct prior p(θ).
normal distribution (N_T(D), restricted to Δθ > 0 if D = 1 and Δθ < 0 otherwise). When combined with the angle of the reference bar r (shaded in the graphical model, since it is known), Δθ generates the orientation of a test bar θ, and θ further generates the observation S_θ, both with normal distributions with variances σ²_θ and σ²_{S_θ}, respectively.
The oblique effect has been shown to be closely related to the number of V1 neurons that are tuned to
different orientations [25]. Many studies have found more V1 neurons tuned to cardinal orientations
than other orientations [13, 14, 15]. Moreover, the uneven distribution of feature detection neurons
is consistent with the idea that these neurons might be sampled proportional to the prior: more
horizontal and vertical segments exist in the natural visual environment of humans.
Importance sampling provides a direct test of the hypothesis that the preferential distribution of V1 neurons around 0°/90° can cause the oblique effect, which becomes a question of whether the oblique effect depends on the use of a prior p(θ) with this distribution. The quantity of interest is:
$$p(D = 1 \mid S_\theta, r) \approx \sum_i \sum_{j_0} \frac{p(\theta^*_i \mid \Delta\theta^*_{j_0}, r)}{\sum_j p(\theta^*_i \mid \Delta\theta^*_j, r)} \cdot \frac{p(S_\theta \mid \theta^*_i)}{\sum_i p(S_\theta \mid \theta^*_i)} \qquad (10)$$

where j_0 indexes all Δθ* > 0. If p(D = 1 | S_θ, r) > 0.5, then we should assign D = 1. Fig. 5c
shows that detection sensitivity is uncorrelated with orientation if we take a uniform prior p(θ), but exhibits the oblique effect under a prior that prefers the cardinal directions. In both cases, 40 neurons are used to represent each of Δθ*_i and θ*_i, and results are averaged over 100 trials. Sensitivity is measured by the percentage correct in inference. Due to the qualitative nature of this simulation, model parameters are not tuned to fit the experimental data.
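A minimal sketch of the decision rule in Eq. 10, under a uniform prior p(θ); the angles, noise levels, and sample counts are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

sigma_dtheta, sigma_theta, sigma_S = 4.0, 1.0, 2.0
r_ref = 90.0            # reference-bar orientation (degrees)
S_theta = r_ref + 2.0   # observed test bar, rotated clockwise

# Samples of the rotation amplitude: first half positive (D = 1, the
# set indexed by j0 in Eq. 10), second half negative (D = -1).
amp = np.abs(rng.normal(0.0, sigma_dtheta, 40))
dtheta = np.concatenate([amp, -amp])

# Samples {theta*_i} from an assumed uniform prior p(theta).
theta = rng.uniform(0.0, 180.0, 200)

lik = gauss(S_theta, theta, sigma_S)
lik = lik / (lik.sum() + 1e-300)  # p(S|theta*_i) / sum_i p(S|theta*_i)

trans = gauss(theta[:, None], r_ref + dtheta[None, :], sigma_theta)
trans = trans / (trans.sum(axis=1, keepdims=True) + 1e-300)  # over j

# Eq. 10: sum over theta samples i and positive-rotation samples j0.
p_clockwise = (lik[:, None] * trans[:, :40]).sum()
decision = 1 if p_clockwise > 0.5 else -1
```

Replacing the uniform draw of theta with samples from the cardinal-preferring mixture prior is what produces the oblique effect in Fig. 5c.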
6 Conclusion
Understanding how the brain solves the problem of Bayesian inference is a significant challenge for
computational neuroscience. In this paper, we have explored the potential of a class of solutions
that draw on ideas from computer science, statistics, and psychology. We have shown that a small
number of feature detection neurons whose tuning curves represent a small set of typical examples
from sensory experience is sufficient to perform some basic forms of Bayesian inference. Moreover,
our theoretical analysis shows that this mechanism corresponds to a Monte Carlo sampling method,
i.e., importance sampling. The basic idea behind this approach, storing examples and activating them based on similarity, is at the heart of a variety of psychological models, and is straightforward
to implement either in traditional neural network architectures like radial basis function networks,
circuits of Poisson spiking neurons, or associative memory models. The nervous system is constantly reorganizing to capture the ever-changing structure of our environment. Components of the
importance sampler, such as the tuning curves and their synaptic strengths, need to be updated to
match the distributions in the environment. Understanding how the brain might solve this daunting
problem is a key question for future research.
Acknowledgments. Supported by the Air Force Office of Scientific Research (grant FA9550-07-1-0351).
References
[1] K. Körding and D. M. Wolpert. Bayesian integration in sensorimotor learning. Nature, 427:244–247, 2004.
[2] M. O. Ernst and M. S. Banks. Humans integrate visual and haptic information in a statistically optimal fashion. Nature, 415(6870):429–433, 2002.
[3] A. Stocker and E. Simoncelli. A bayesian model of conditioned perception. In J. C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information Processing Systems 20, pages 1409–1416. MIT Press, Cambridge, MA, 2008.
[4] A. P. Blaisdell, K. Sawa, K. J. Leising, and M. R. Waldmann. Causal reasoning in rats. Science, 311(5763):1020–1022, 2006.
[5] D. C. Van Essen, C. H. Anderson, and D. J. Felleman. Information processing in the primate visual system: an integrated systems perspective. Science, 255(5043):419–423, 1992.
[6] T. S. Lee and D. Mumford. Hierarchical bayesian inference in the visual cortex. J. Opt. Soc. Am. A Opt. Image Sci. Vis., 20(7):1434–1448, 2003.
[7] R. S. Zemel, P. Dayan, and A. Pouget. Probabilistic interpretation of population codes. Neural Comput, 10(2):403–430, 1998.
[8] W. J. Ma, J. M. Beck, P. E. Latham, and A. Pouget. Bayesian inference with probabilistic population codes. Nat. Neurosci., 9(11):1432–1438, 2006.
[9] L. Shi, N. H. Feldman, and T. L. Griffiths. Performing bayesian inference with exemplar models. In Proceedings of the 30th Annual Conference of the Cognitive Science Society, 2008.
[10] M. Kouh and T. Poggio. A canonical neural circuit for cortical nonlinear operations. Neural Comput, 20(6):1427–1451, 2008.
[11] J. K. Kruschke. Alcove: An exemplar-based connectionist model of category learning. Psychological Review, 99:22–44, 1992.
[12] M. J. D. Powell. Radial basis functions for multivariable interpolation: a review. Clarendon Press, New York, NY, USA, 1987.
[13] R. L. De Valois, E. W. Yund, and N. Hepler. The orientation and direction selectivity of cells in macaque visual cortex. Vision Res, 22(5):531–544, 1982.
[14] D. M. Coppola, L. E. White, D. Fitzpatrick, and D. Purves. Unequal representation of cardinal and oblique contours in ferret visual cortex. Proc Natl Acad Sci U S A, 95(5):2621–2623, 1998.
[15] C. S. Furmanski and S. A. Engel. An oblique effect in human primary visual cortex. Nat Neurosci, 3(6):535–536, 2000.
[16] A. Hodzic, R. Veit, A. A. Karim, M. Erb, and B. Godde. Improvement and decline in tactile discrimination behavior after cortical plasticity induced by passive tactile coactivation. J Neurosci, 24(2):442–446, 2004.
[17] M. L. Platt and P. W. Glimcher. Neural correlates of decision variables in parietal cortex. Nature, 400:233–238, 1999.
[18] M. A. Basso and R. H. Wurtz. Modulation of neuronal activity by target uncertainty. Nature, 389(6646):66–69, 1997.
[19] J. H. Reynolds and D. J. Heeger. The normalization model of attention. Neuron, 61(2):168–185, 2009.
[20] J. Lee and J. H. R. Maunsell. A normalization model of attentional modulation of single unit responses. PLoS ONE, 4(2):e4651, 2009.
[21] S. J. Mitchell and R. A. Silver. Shunting inhibition modulates neuronal gain during synaptic excitation. Neuron, 38(3):433–445, 2003.
[22] J. S. Rothman, L. Cathala, V. Steuber, and R. A. Silver. Synaptic depression enables neuronal gain control. Nature, 457(7232):1015–1018, 2009.
[23] H. Markram, M. Toledo-Rodriguez, Y. Wang, A. Gupta, G. Silberberg, and C. Wu. Interneurons of the neocortical inhibitory system. Nat Rev Neurosci, 5(10):793–807, 2004.
[24] K. Friston. Hierarchical models in the brain. PLoS Comput Biol, 4(11):e1000211, 2008.
[25] G. A. Orban, E. Vandenbussche, and R. Vogels. Human orientation discrimination tested with long stimuli. Vision Res, 24(2):121–128, 1984.
Linearly constrained Bayesian matrix factorization
for blind source separation
Mikkel N. Schmidt
Department of Engineering
University of Cambridge
[email protected]
Abstract
We present a general Bayesian approach to probabilistic matrix factorization subject to linear constraints. The approach is based on a Gaussian observation model
and Gaussian priors with bilinear equality and inequality constraints. We present
an efficient Markov chain Monte Carlo inference procedure based on Gibbs sampling. Special cases of the proposed model are Bayesian formulations of nonnegative matrix factorization and factor analysis. The method is evaluated on a
blind source separation problem. We demonstrate that our algorithm can be used
to extract meaningful and interpretable features that are remarkably different from
features extracted using existing related matrix factorization techniques.
1 Introduction
Source separation problems arise when a number of signals are mixed together, and the objective
is to estimate the underlying sources based on the observed mixture. In the supervised, modelbased approach to source separation, examples of isolated sources are used to train source models,
which are then combined in order to separate a mixture. Conversely, in unsupervised, blind source
separation, only very general information about the sources is available. Instead of estimating models of the sources, blind source separation is based on relatively weak criteria such as minimizing
correlations, maximizing statistical independence, or fitting data subject to constraints.
Under the assumptions of linear mixing and additive noise, blind source separation can be expressed
as a matrix factorization problem,
$$\underset{I \times J}{X} = \underset{I \times K}{A}\;\underset{K \times J}{B} + \underset{I \times J}{N}, \quad \text{or equivalently,} \quad x_{ij} = \sum_{k=1}^{K} a_{ik} b_{kj} + n_{ij}, \qquad (1)$$
where the subscripts below the matrices denote their dimensions. The columns of A represent K
unknown sources, and the elements of B are the mixing coefficients. Each of the J columns of
X contains an observation that is a mixture of the sources plus additive noise represented by the
columns of N . The objective is to estimate the sources, A, as well as the mixing coefficients, B,
when only the data matrix, X, is observed. In a Bayesian formulation, the aim is not to compute a
single value for A and B, but to infer their posterior distribution under a set of model assumptions.
These assumptions are specified in the likelihood function, p(X|A, B), which expresses the probability of the data given the factorizing matrices, and in the prior, p(A, B), which describes available
knowledge before observing the data. Depending on the specific choice of likelihood and priors,
matrix factorizations with different characteristics can be devised.
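In code, the generative model of Eq. (1) is one line of linear algebra; the sizes and noise scale below are arbitrary illustrations:

```python
import numpy as np

rng = np.random.default_rng(0)
I, J, K = 8, 20, 3                      # illustrative dimensions

A = rng.standard_normal((I, K))         # K unknown sources (columns of A)
B = rng.standard_normal((K, J))         # mixing coefficients
N = 0.1 * rng.standard_normal((I, J))   # additive Gaussian noise

X = A @ B + N                           # observed data matrix, Eq. (1)
```

Blind source separation is the inverse problem: recover A and B (up to the usual permutation and scaling ambiguities) from X alone.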
Non-negative matrix factorization (NMF), which is distinguished from other matrix factorization
techniques by its non-negativity constraints, has been shown to decompose data into meaningful,
interpretable parts [3]; however, a parts-based decomposition is not necessarily useful, unless it
1
[Figure 1 panels, with the constraints and the resulting model space: (a) no constraints: linear subspace; (b) Σ_k b_kj = 1: affine subspace; (c) b_kj ≥ 0: simplicial cone; (d) a_ik ≥ 0, b_kj ≥ 0: simplicial cone in the non-negative orthant; (e) b_kj ≥ 0, Σ_k b_kj = 1: polytope; (f) a_ik ≥ 0, b_kj ≥ 0, Σ_k b_kj = 1: polytope in the non-negative orthant; (g) a_ik ≥ 0, Σ_i a_ik = 1, b_kj ≥ 0, Σ_k b_kj = 1: polytope on the unit simplex; (h) 0 ≤ a_ik ≤ 1, b_kj ≥ 0, Σ_k b_kj = 1: polytope in the unit hypercube.]

Figure 1: Examples of model spaces that can be attained using matrix factorization with different linear constraints in A and B. The red hatched area indicates the feasible region for the source vectors (columns of A). Dots are examples of specific positions of source vectors, and the black hatched area is the corresponding feasible region for the data vectors. Special cases include (a) factor analysis and (d) non-negative matrix factorization.
finds the 'correct' parts. The main contribution in this paper is that specifying relevant constraints
other than non-negativity substantially changes the qualities of the results obtained using matrix
factorization. Some intuition about how the incorporation of different constraints affects the matrix
factorization can be gained by considering their geometric implications. Figure 1 shows how different linear constraints on A and B constrain the model space. For example, if the mixing coefficients
are constrained to be non-negative, data is modelled as the convex hull of a simplicial cone, and if
the mixing coefficients are further constrained to sum to unity, data is modelled as the hull of a
convex polytope.
In this paper, we develop a general and flexible framework for Bayesian matrix factorization, in
which the unknown sources and the mixing coefficients are treated as hidden variables. Furthermore,
we allow any number of linear equality or inequality constraints to be specified. On an unsupervised
image separation problem, we demonstrate, that when relevant constraints are specified, the method
finds interpretable features that are remarkably different from features computed using other matrix
factorization techniques.
The proposed method is related to recently proposed Bayesian matrix factorization techniques:
Bayesian matrix factorization based on Gibbs sampling has been demonstrated [7, 8] to scale up
to very large datasets and to avoid the problem of overfitting associated with non-Bayesian techniques. Bayesian methods for non-negative matrix factorization have also been proposed, either
based on variational inference [1] or Gibbs sampling [4, 9]. The latter can be seen as special cases
of the algorithm proposed here.
The paper is structured as follows: In section 2, the linearly constrained Bayesian matrix factorization model is described. Section 3 presents an inference procedure based on Gibbs sampling. In
Section 4, the method is applied to an unsupervised source separation problem and compared to
other existing matrix factorization methods. We discuss our results and conclude in Section 5.
2 The linearly constrained Bayesian matrix factorization model
In the following, we describe the linearly constrained Bayesian matrix factorization model. We
make specific choices for the likelihood and priors that keep the formulation general while allowing
for efficient inference based on Gibbs sampling.
2.1 Noise model
We choose an i.i.d. zero-mean Gaussian noise model,

$$p(n_{ij}) = \mathcal{N}(n_{ij} \mid 0, v_{ij}) = \frac{1}{\sqrt{2\pi v_{ij}}} \exp\!\left(-\frac{n_{ij}^2}{2 v_{ij}}\right), \qquad (2)$$
where, in the most general formulation, each matrix element has its own variance, vij ; however, the
variance parameters can easily be joined, e.g., to have a single noise variance per row or just one
overall variance, which corresponds to an isotropic noise model. The noise model gives rise to the
likelihood, i.e., the probability of the observations given the parameters of the model. The likelihood
is given by

$$p(x \mid \theta) = \prod_{i=1}^{I} \prod_{j=1}^{J} \mathcal{N}\!\Big(x_{ij} \;\Big|\; \sum_{k=1}^{K} a_{ik} b_{kj},\, v_{ij}\Big) = \prod_{i=1}^{I} \prod_{j=1}^{J} \frac{1}{\sqrt{2\pi v_{ij}}} \exp\!\left(-\frac{\big(x_{ij} - \sum_{k=1}^{K} a_{ik} b_{kj}\big)^{2}}{2 v_{ij}}\right), \qquad (3)$$

where θ = {A, B, {v_ij}} denotes all parameters in the model. For the noise variance parameters we choose conjugate inverse-gamma priors,

$$p(v_{ij}) = \mathcal{IG}(v_{ij} \mid \alpha, \beta) = \frac{\beta^{\alpha}}{\Gamma(\alpha)}\, v_{ij}^{-(\alpha+1)} \exp\!\left(-\frac{\beta}{v_{ij}}\right). \qquad (4)$$
2.2 Priors for sources and mixing coefficients
We now define the prior distribution for the factorizing matrices, A and B. To simplify the notation, we specify the matrices by the vectors a = vec(Aᵀ) = [a_11, a_12, …, a_IK]ᵀ and b = vec(B) = [b_11, b_21, …, b_KJ]ᵀ. We choose a Gaussian prior over a and b subject to inequality constraints, Q,
and equality constraints, R,

$$p(a, b) \propto \begin{cases} \mathcal{N}\!\left( \begin{bmatrix} a \\ b \end{bmatrix} \;\middle|\; \underbrace{\begin{bmatrix} \mu_a \\ \mu_b \end{bmatrix}}_{\mu},\; \underbrace{\begin{bmatrix} \Sigma_a & \Sigma_{ab} \\ \Sigma_{ab}^\top & \Sigma_b \end{bmatrix}}_{\Sigma} \right), & \text{if } Q(a, b) \le 0,\; R(a, b) = 0, \\[1ex] 0, & \text{otherwise.} \end{cases} \qquad (5)$$
In slight abuse of notation, we refer to μ and Σ as the mean and covariance matrix, although the actual mean and covariance of a and b depend on the constraints.

In the most general formulation, the constraints, Q : ℝ^{IK} × ℝ^{KJ} → ℝ^{N_Q} and R : ℝ^{IK} × ℝ^{KJ} → ℝ^{N_R}, are biaffine maps that define N_Q inequality and N_R equality constraints jointly in a and b. Specifically, each inequality constraint has the form
$$Q_m(a, b) = q_m + a^\top q_m^{(a)} + b^\top q_m^{(b)} + a^\top Q_m^{(ab)} b \le 0. \qquad (6)$$
By rearranging terms and combining the N_Q constraints in matrix notation, we may write

$$\underbrace{\left[\, q_1^{(a)} + Q_1^{(ab)} b \;\; \cdots \;\; q_{N_Q}^{(a)} + Q_{N_Q}^{(ab)} b \,\right]^{\top}}_{\tilde{Q}_a^\top} a \;\le\; \underbrace{\begin{bmatrix} -q_1 - b^\top q_1^{(b)} \\ \vdots \\ -q_{N_Q} - b^\top q_{N_Q}^{(b)} \end{bmatrix}}_{\tilde{q}_a}, \quad \text{i.e.,} \quad \tilde{Q}_a^\top a \le \tilde{q}_a, \qquad (7)$$
from which it is clear that the constraints are linear in a. Likewise, the constraints can be rearranged
to a linear form in b. The equality constraints, R, are defined analogously.
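The rearrangement in Eq. (7) is easy to verify numerically. The following sketch uses random, illustrative constraint coefficients:

```python
import numpy as np

rng = np.random.default_rng(0)

IK, KJ, NQ = 4, 3, 2                     # illustrative sizes
a = rng.standard_normal(IK)
b = rng.standard_normal(KJ)

q0 = rng.standard_normal(NQ)             # constants q_m
qa = rng.standard_normal((NQ, IK))       # vectors q_m^(a)
qb = rng.standard_normal((NQ, KJ))       # vectors q_m^(b)
Qab = rng.standard_normal((NQ, IK, KJ))  # matrices Q_m^(ab)

# Direct evaluation of the biaffine constraints Q_m(a, b), Eq. (6).
Qm = q0 + qa @ a + qb @ b + np.einsum('i,mij,j->m', a, Qab, b)

# Eq. (7): the same constraints rearranged into a form linear in a.
Q_tilde = (qa + Qab @ b).T               # columns q_m^(a) + Q_m^(ab) b
q_tilde = -q0 - qb @ b
# Q_m(a, b) <= 0 for all m  <=>  Q_tilde^T a <= q_tilde
```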
This general formulation of the priors allows all elements of a and b to have prior dependencies, both through their covariance matrix, Σ_ab, and through the joint constraints; however, in some
Figure 2: Graphical model for linearly constrained Bayesian matrix factorization, when A and B are independent in the prior. White and grey nodes represent latent and observed variables, respectively, and arrows indicate stochastic dependencies. The colored plates denote repeated variables over the indicated indices.
applications it is not relevant or practical to specify all of these dependencies in advance. We may restrict the model such that a and b are independent a priori by setting Σ_ab, Q_m^{(ab)}, and R_m^{(ab)} to zero, and restricting q_m^{(a)} = 0 for all m where q_m^{(b)} ≠ 0, and vice versa. Furthermore, we can decouple the elements of A, or groups of elements such as rows or columns, by choosing Σ_a, Q_a, and R_a to have an appropriate block structure. Similarly, we can decouple elements of B.
2.3 Posterior distribution
Having specified the model and the prior densities, we can now write the posterior, which is the
distribution of the parameters conditioned on the observed data and hyper-parameters. The posterior
is given by

$$p(\theta \mid x, \psi) \propto p(a, b)\, p(x \mid \theta) \prod_{i=1}^{I} \prod_{j=1}^{J} p(v_{ij}), \qquad (8)$$

where ψ = {μ, Σ, α, β, Q, R} denotes all hyper-parameters in the model. A graphical representation of the model is given in Figure 2.
3 Inference
In a Bayesian framework, we are interested in computing the posterior distribution over the parameters, p(θ|x, ψ). The posterior, given in Eq. (8), is only known up to a multiplicative constant, and
direct computation of this normalizing constant involves integrating over the unnormalized posterior, which is not analytically tractable. Instead, we approximate the posterior distribution using
Markov chain Monte Carlo (MCMC).
3.1 Gibbs sampling
We propose an inference procedure based on Gibbs sampling. Gibbs sampling is applicable when
the joint density of the parameters is not known, but the parameters can be partitioned into groups,
such that their posterior conditional densities are known. We iteratively sweep through the groups of
parameters and generate a random sample for each, conditioned on the current value of the others.
This procedure forms a homogenous Markov chain and its stationary distribution is exactly the joint
posterior.
In the following, we derive the posterior conditional densities required in the Gibbs sampler. First,
we consider the noise variances, vij . Due to the choice of conjugate prior, the posterior density is an
inverse-gamma,

$$p(v_{ij} \mid \theta \setminus v_{ij}) = \mathcal{IG}(v_{ij} \mid \bar\alpha, \bar\beta), \qquad (9)$$

$$\bar\alpha = \alpha + \tfrac{1}{2}, \qquad \bar\beta = \beta + \tfrac{1}{2}\Big(x_{ij} - \sum_{k=1}^{K} a_{ik} b_{kj}\Big)^{2}, \qquad (10)$$
from which samples can be generated using standard acceptance-rejection methods.
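In NumPy, this conditional can also be drawn directly via the reciprocal of a gamma variate (if g ∼ Gamma(ᾱ, 1/β̄), then 1/g ∼ IG(ᾱ, β̄)); the sizes and hyper-parameters below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

I, J, K = 6, 10, 2
A = rng.standard_normal((I, K))
B = rng.standard_normal((K, J))
X = A @ B + 0.1 * rng.standard_normal((I, J))

alpha, beta = 1.0, 1.0                    # illustrative IG hyper-parameters
alpha_bar = alpha + 0.5                   # Eq. (10)
beta_bar = beta + 0.5 * (X - A @ B) ** 2  # Eq. (10), elementwise

# v_ij | rest ~ IG(alpha_bar, beta_bar_ij), via the gamma reciprocal.
v = 1.0 / rng.gamma(shape=alpha_bar, scale=1.0 / beta_bar)
```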
Next, we consider the factorizing matrices, represented by the vectors a and b. We only discuss
generating samples from a, since the sampling procedure for b is identical due to the symmetry of
the model. Conditioned on b, the prior density of a is a constrained Gaussian,
$$p(a \mid b) \propto \begin{cases} \mathcal{N}(a \mid \bar\mu_a, \bar\Sigma_a), & \text{if } \tilde{Q}_a^\top a \le \tilde{q}_a,\; \tilde{R}_a^\top a = \tilde{r}_a, \\ 0, & \text{otherwise,} \end{cases} \qquad (11)$$

$$\bar\mu_a = \mu_a + \Sigma_{ab} \Sigma_b^{-1} (b - \mu_b), \qquad \bar\Sigma_a = \Sigma_a - \Sigma_{ab} \Sigma_b^{-1} \Sigma_{ab}^\top, \qquad (12)$$
where we have used Eq. (7) and the standard result for a conditional Gaussian density. In the special case when a and b are independent in the prior, we simply have μ̄_a = μ_a and Σ̄_a = Σ_a. Further,
conditioning on the data leads to the final expression for the posterior conditional density of a,
$$p(a \mid x, \theta \setminus a) \propto \begin{cases} \mathcal{N}(a \mid \hat\mu_a, \hat\Sigma_a), & \text{if } \tilde{Q}_a^\top a \le \tilde{q}_a,\; \tilde{R}_a^\top a = \tilde{r}_a, \\ 0, & \text{otherwise,} \end{cases} \qquad (13)$$

$$\hat\mu_a = \hat\Sigma_a \big( \bar\Sigma_a^{-1} \bar\mu_a + \bar{B} V^{-1} x \big), \qquad \hat\Sigma_a = \big( \bar\Sigma_a^{-1} + \bar{B} V^{-1} \bar{B}^\top \big)^{-1}, \qquad (14)$$

where V = diag(v_11, v_12, …, v_IJ) and B̄ = diag(B, …, B) is a diagonal block matrix with I repetitions of B.
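As a sanity check, Eq. (14) can be evaluated directly in the simple case where a and b are independent in the prior (so μ̄_a = μ_a, Σ̄_a = Σ_a) and no linear constraints are active; all sizes, the prior, and the noise level below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

I, J, K = 4, 6, 2
B = rng.standard_normal((K, J))
A_true = rng.standard_normal((I, K))
X = A_true @ B + 0.1 * rng.standard_normal((I, J))

x = X.reshape(-1)              # x = [x_11, ..., x_1J, x_21, ...]
v = np.full(I * J, 0.01)       # diagonal of V
B_bar = np.kron(np.eye(I), B)  # diag(B, ..., B), I repetitions

mu_a = np.zeros(I * K)         # prior mean of a = vec(A^T)
Sigma_a_inv = np.eye(I * K)    # unit prior precision

# Eq. (14): posterior covariance and mean of a given x, B, V.
Sigma_hat = np.linalg.inv(Sigma_a_inv + B_bar @ np.diag(1.0 / v) @ B_bar.T)
mu_hat = Sigma_hat @ (Sigma_a_inv @ mu_a + B_bar @ (x / v))

# Without constraints, a is an ordinary multivariate Gaussian draw.
L = np.linalg.cholesky(Sigma_hat)
a_sample = mu_hat + L @ rng.standard_normal(I * K)
```

With the constraints of Eq. (13) active, this final draw would instead use the constrained-Gaussian sampler of Section 3.2.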
The Gibbs sampler proceeds iteratively: first, the noise variances are generated from the inverse-gamma density in Eq. (9); second, a is generated from the constrained Gaussian density in Eq. (13); and third, b is generated from a constrained Gaussian analogous to Eq. (13).
3.2 Sampling from a constrained Gaussian
An essential component in the proposed matrix factorization method is an algorithm for generating random samples from a multivariate Gaussian density subject to linear equality and inequality
constraints. With a slight change of notation, we consider generating x ∈ ℝ^N from the density

p(x) ∝ { N(x | μ_x, Σ_x), if Q_x^⊤ x ≤ q_x, R_x^⊤ x = r_x;  0, otherwise }.    (15)
A similar problem has previously been treated by Geweke [2], who proposes a Gibbs sampling procedure that does not handle equality constraints and allows no more than N inequality constraints. Rodriguez-Yam et al. [6] extend the method in [2] to an arbitrary number of inequality constraints, but do not provide an algorithm for handling equality constraints. Here, we present a general Gibbs sampling procedure that handles any number of equality and inequality constraints.
The equality constraints restrict the distribution to an affine subspace of dimensionality N ? R,
where R is the number of linearly independent constraints. The conditional distribution on that
subspace is a Gaussian subject to inequality constraints. To handle the equality constraints, we map
the distribution onto this subspace. Using the singular value decomposition (SVD), we can robustly
compute an orthonormal basis, T, for the constraints, as well as its orthogonal complement, T⊥,

R_x = U S V^⊤ = [T; T⊥]^⊤ [S_T, 0; 0, 0] V^⊤,    (16)
where S_T = diag(s_1, …, s_R) holds the R non-zero singular values. We now define a transformed variable, y, that is related to x by

y = T⊥ (x − x_0),    y ∈ ℝ^{N−R},    (17)

where x_0 is some vector that satisfies the equality constraints, e.g., computed using the pseudoinverse, x_0 = (R_x^⊤)† r_x. This transformation ensures that, for any value of y, the corresponding x satisfies the equality constraints. We can now compute the distribution of y conditioned on the equality constraints, which is Gaussian subject to inequality constraints,
p(y | R_x^⊤ x = r_x) ∝ { N(y | μ_y, Σ_y), if Q_y^⊤ y ≤ q_y;  0, otherwise },    (18)

μ_y = Ω(μ_x − x_0),    Σ_y = Ω Σ_x T⊥^⊤,    Q_y = T⊥ Q_x,    q_y = q_x − Q_x^⊤ x_0,    (19)

where Ω = T⊥ (I − Σ_x T^⊤ (T Σ_x T^⊤)^{−1} T).
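As a concrete illustration, the subspace mapping of Eqs. (16)–(17) can be sketched in a few lines of NumPy. This is our own minimal sketch, not the authors' code; the function name and tolerance are assumptions.

```python
import numpy as np

def equality_subspace(Rx, rx, tol=1e-10):
    """Compute an orthonormal basis T for the equality constraints
    Rx.T @ x = rx, its orthogonal complement T_perp, and a particular
    solution x0, following Eqs. (16)-(17). A sketch, not the paper's code."""
    U, s, _ = np.linalg.svd(Rx)
    R = int(np.sum(s > tol))            # number of independent constraints
    T, T_perp = U[:, :R].T, U[:, R:].T  # rows span the constraint space / complement
    x0 = np.linalg.pinv(Rx.T) @ rx      # any point satisfying the constraints
    return T, T_perp, x0
```

For any y ∈ ℝ^{N−R}, the point x = T_perp.T @ y + x0 then satisfies the equality constraints exactly.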
We introduce a second transformation with the purpose of reducing the correlations between the
variables. This may potentially improve the sampling procedure, because Gibbs sampling can suffer
from slow mixing when the distribution is highly correlated. Correlations between the elements
of y are due to both the Gaussian covariance structure and the inequality constraints; however,
for simplicity we only decorrelate with respect to the covariance of the underlying unconstrained
Gaussian. To this end, we define the transformed variable, z, given by
z = L^{−1} (y − μ_y),    (20)

where L is the Cholesky factor of the covariance matrix, L L^⊤ = Σ_y. The distribution of z is then a standard Gaussian subject to inequality constraints,

p(z | R_x^⊤ x = r_x) ∝ { N(z | 0, I), if Q_z^⊤ z ≤ q_z;  0, otherwise },    (21)

Q_z = L^⊤ Q_y,    q_z = q_y − Q_y^⊤ μ_y.    (22)
We can now sample from z using a Gibbs sampling procedure by sweeping over the elements zi
and generating samples from their conditional distributions, which are univariate truncated standard
Gaussian,
p(z_i | z∖z_i) = { √(2/π) exp(−z_i²/2) / [erf(u_i/√2) − erf(ℓ_i/√2)], if ℓ_i ≤ z_i ≤ u_i;  0, otherwise }.    (23)
Samples from this density can be generated using standard methods such as inverse transform sampling (transforming a uniform random variable by the inverse cumulative distribution function); the efficient mixed rejection sampling algorithm proposed by Geweke [2]; or slice sampling [5].
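For instance, the inverse-transform option can be implemented directly with the standard-normal CDF and its inverse. This is a minimal sketch (the function name is ours); as noted above, Geweke's mixed rejection sampler is preferable when the truncation interval lies far out in a tail.

```python
import random
from statistics import NormalDist

def sample_truncated_std_normal(lo, hi, rng=random):
    """Draw one sample from N(0, 1) truncated to [lo, hi] by mapping a
    uniform draw through the inverse CDF (inverse transform sampling)."""
    nd = NormalDist()                        # standard normal
    u = rng.uniform(nd.cdf(lo), nd.cdf(hi))  # uniform over the interval's CDF mass
    return nd.inv_cdf(u)
```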
The upper and lower points of truncation can be computed as
ℓ_i = max(−∞, max_{k: d_k<0} n_k/d_k)  ≤  z_i  ≤  min(∞, min_{k: d_k>0} n_k/d_k) = u_i,    (24)–(26)

where d^⊤ = [Q_z]_{i:} collects the coefficients of z_i in each constraint and n = q_z − [Q_z]_{∖i:}^⊤ z_{∖i} is the slack contributed by the remaining variables; here [Q_z]_{i:} denotes the ith row of Q_z, [Q_z]_{∖i:} denotes all rows except the ith, and z_{∖i} denotes the vector of all elements of z except the ith.
Finally, when a sample of z has been generated after a number of Gibbs sweeps, it can be transformed into a sample of the original variable, x, using

x = T⊥^⊤ (L z + μ_y) + x_0.    (27)
The sampling procedure is illustrated in Figure 3.
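Putting the pieces together, one Gibbs sweep over the coordinates of z can be sketched as follows. This is a simplified illustration under our own conventions (not the authors' implementation): `Qz` holds one column per inequality constraint, as in Q_z^⊤ z ≤ q_z.

```python
import numpy as np
from statistics import NormalDist

def gibbs_sweep(z, Qz, qz, rng):
    """One sweep of the sampler of Section 3.2: for each coordinate z_i,
    compute the truncation bounds of Eqs. (24)-(26) and draw z_i from the
    truncated standard normal of Eq. (23) by inverse transform sampling."""
    nd = NormalDist()
    N = z.size
    for i in range(N):
        mask = np.arange(N) != i
        d = Qz[i, :]                          # coefficients of z_i in each constraint
        n = qz - Qz[mask, :].T @ z[mask]      # slack left by the other coordinates
        lo = max([-np.inf] + [n[k] / d[k] for k in range(len(d)) if d[k] < 0])
        hi = min([np.inf] + [n[k] / d[k] for k in range(len(d)) if d[k] > 0])
        u = rng.uniform(nd.cdf(lo), nd.cdf(hi))
        z[i] = nd.inv_cdf(u)
    return z
```

Repeating the sweep yields a chain whose samples respect all inequality constraints at every step.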
4 Experiments
We demonstrate the proposed linearly constrained Bayesian matrix factorization method on a blind
image separation problem, and compare it to two other matrix factorization techniques: independent
component analysis (ICA) and non-negative matrix factorization (NMF).
Data We used a subset from the MNIST dataset, which consists of 28 × 28 pixel grayscale images of handwritten digits (see Figure 4.a). We selected the first 800 images of each digit, 0–9, which gave us 8,000 unique images. From these images we created 4,000 image mixtures by adding the grayscale intensities of the images two and two, such that the different digits were combined in equal proportion. We rescaled the mixed images so that their pixel intensities were in the 0–1 interval, and arranged the vectorized images as the columns of the matrix X ∈ ℝ^{I×J}, where I = 784 and J = 4,000. Examples of the image mixtures are shown in Figure 4.b.
Figure 3: Gibbs sampling from a multivariate Gaussian density subject to linear constraints. a) Two-dimensional Gaussian subject to three inequality constraints. b) The conditional distribution of x_1 given x_2 = x* is a truncated Gaussian. c) Gibbs sampling proceeds iteratively by sweeping over the dimensions and sampling from the conditional distribution in each dimension conditioned on the current value in the other dimensions.
Task The objective is to factorize the data matrix in order to find a number of source images that
explain the data. Ideally, the sources should correspond to the original digits. We cannot hope to
find exactly 10 sources that each corresponds to a digit, because there are large variations as to how
each digit is written. For that reason, we used 40 hidden sources in our experiments, which allowed
4 exemplars on average for each digit.
Method For comparison, we factorized the mixed image data using two standard matrix factorization techniques: ICA, where we used the FastICA algorithm, and NMF, where we used Lee and Seung's multiplicative update algorithm [3]. The sources determined using these methods are shown in Figure 4.c–d.
For the linearly constrained Bayesian matrix factorization, we used an isotropic noise model. We chose a decoupled prior for A and B with zero mean, μ = 0, and unit diagonal covariance matrix, Σ = I. The hidden sources were constrained to have the same range of pixel intensities as the image mixtures, 0 ≤ a_ik ≤ 1. This constraint allows the sources to be interpreted as images, since pixel intensities outside this interval are not meaningful. The mixing coefficients were constrained to be non-negative, b_kj ≥ 0, and to sum to unity, Σ_{k=1}^K b_kj = 1; thus, the observed data is modeled as a convex combination of the sources. The constraints ensure that only additive combinations of the sources are allowed, and introduce a negative correlation between the mixing coefficients. This negative correlation implies that if one source contributes more to a mixture, the other sources must correspondingly contribute less. The idea behind this constraint is that it will lead the sources to compete, as opposed to collaborate, to explain the data. A geometric interpretation of the constraints is illustrated in Figure 1.h: The data vectors are modeled by a convex polytope in the non-negative unit hypercube, and the hidden sources are the vertices of this polytope. We computed 10,000 Gibbs samples, which appeared sufficient for the sampler to converge. The results of the matrix factorization are shown in Figure 4.e, which displays a single sample of A at the last iteration.
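To make the constraint specification concrete, the simplex constraint on each column of B (non-negativity plus sum-to-one) can be written in the paper's (Q, q, R, r) form as follows; the helper name is ours, not the paper's.

```python
import numpy as np

def simplex_constraints(K):
    """Encode b_k >= 0 and sum_k b_k = 1 for b in R^K as inequality
    constraints Q.T @ b <= q and equality constraints R.T @ b = r."""
    Q = -np.eye(K)            # -b_k <= 0  is equivalent to  b_k >= 0
    q = np.zeros(K)
    R = np.ones((K, 1))       # single equality: sum_k b_k = 1
    r = np.array([1.0])
    return Q, q, R, r
```

A point such as b = (0.2, 0.5, 0.3) then satisfies both constraint sets, while any b with a negative entry violates the inequalities.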
Results In ICA (see Figure 4.c), the sources are not constrained to be non-negative, and therefore do not have a direct interpretation as images. Most of the computed sources are complex patterns that appear to be dominated by a single digit. While ICA certainly does find structure in the data, the estimated sources lack a clear interpretation.
The sources computed using NMF (see Figure 4.d) have the property which Lee and Seung [3]
refer to as a parts-based representation. Spatially, the sources are local as opposed to global. The
decomposition has an intuitive interpretation: Each source is a short line segment or a dot, and the
different digits can be constructed by combining these parts.
Linearly constrained Bayesian matrix factorization (see Figure 4.e) computes sources with a very clear and intuitive interpretation: Almost all of the 40 computed sources visually resemble handwritten digits, and are thus well aligned with the sources that were used to generate the mixtures. Compared to the original data, the computed sources are a bit bolder and have slightly smeared
(a) Original dataset: MNIST digits
(b) Training data: Mixture of digits
(c) Independent component analysis
(d) Non-negative matrix factorization
(e) Linearly constrained Bayesian matrix factorization
Figure 4: Data and results of the analyses of an image separation problem. a) The MNIST digits data (20 examples shown) used to generate the mixture data. b) The mixture data consists of 4,000 images of two mixed digits (20 examples shown). c) Sources computed using independent component analysis (color indicates sign). d) Sources computed using non-negative matrix factorization. e) Sources computed using linearly constrained Bayesian matrix factorization (details explained in the text).
edges. Two sources stand out: One is a black blob of approximately the same size as the digits, and another is an all-white feature; these are useful for adjusting the brightness.
5 Conclusions
We presented a linearly constrained Bayesian matrix factorization method as well as an inference procedure for this model. On an unsupervised image separation problem, we have demonstrated that the method finds sources that have a clear and interpretable meaning. As opposed to ICA and NMF, our method finds sources that visually resemble handwritten digits.
We formulated the model in general terms, which allows specific prior information to be incorporated in the factorization. The Gaussian priors over the sources can be used if knowledge is available about the covariance structure of the sources, e.g., if the sources are known to be smooth. The constraints we used in our experiments were separate for A and B, but the framework allows bilinearly dependent constraints to be specified, which can be used, for example, to specify constraints in the data domain, i.e., on the product AB.
As a general framework for constrained Bayesian matrix factorization, the proposed method has
applications in many other areas than blind source separation. Interesting applications include blind
deconvolution, music transcription, spectral unmixing, and collaborative filtering. The method can
also be used in a supervised source separation setting, where the distributions over sources and
mixing coefficients are learned from a training set of isolated sources. It is an interesting challenge
to develop methods for learning relevant constraints from data.
References
[1] A. T. Cemgil. Bayesian inference for nonnegative matrix factorisation models. Computational Intelligence and Neuroscience, 2009. doi: 10.1155/2009/785152.
[2] J. Geweke. Efficient simulation from the multivariate normal and Student-t distributions subject to linear constraints and the evaluation of constraint probabilities. In Computing Science and Statistics: Proceedings of the 23rd Symposium on the Interface, pages 571–578, 1991.
[3] D. D. Lee and H. S. Seung. Learning the parts of objects by non-negative matrix factorization. Nature, pages 788–791, October 1999. doi: 10.1038/44565.
[4] S. Moussaoui, D. Brie, A. Mohammad-Djafari, and C. Carteret. Separation of non-negative mixture of non-negative sources using a Bayesian approach and MCMC sampling. IEEE Transactions on Signal Processing, 54(11):4133–4145, Nov 2006. doi: 10.1109/TSP.2006.880310.
[5] R. M. Neal. Slice sampling. Annals of Statistics, 31(3):705–767, 2003.
[6] G. Rodriguez-Yam, R. Davis, and L. Scharf. Efficient Gibbs sampling of truncated multivariate normal with application to constrained linear regression. Technical report, Colorado State University, Fort Collins, 2004.
[7] R. Salakhutdinov and A. Mnih. Probabilistic matrix factorization. In Advances in Neural Information Processing Systems (NIPS), pages 1257–1264, 2008.
[8] R. Salakhutdinov and A. Mnih. Bayesian probabilistic matrix factorization using Markov chain Monte Carlo. In International Conference on Machine Learning (ICML), pages 880–887, 2008.
[9] M. N. Schmidt, O. Winther, and L. K. Hansen. Bayesian non-negative matrix factorization. In Independent Component Analysis and Signal Separation, International Conference on, volume 5441 of Lecture Notes in Computer Science (LNCS), pages 540–547. Springer, 2009. doi: 10.1007/978-3-642-00599-2_68.
Noisy Measurements: A New Analysis*
Sundeep Rangan
Qualcomm Technologies
Bedminster, NJ
[email protected]
Alyson K. Fletcher
University of California, Berkeley
Berkeley, CA
[email protected]
Abstract
A well-known analysis of Tropp and Gilbert shows that orthogonal matching pursuit (OMP) can recover a k-sparse n-dimensional real vector from m = 4k log(n) noise-free linear measurements obtained through a random Gaussian measurement matrix with a probability that approaches one as n → ∞. This work strengthens this result by showing that a lower number of measurements, m = 2k log(n − k), is in fact sufficient for asymptotic recovery. More generally, when the sparsity level satisfies k_min ≤ k ≤ k_max but is unknown, m = 2k_max log(n − k_min) measurements is sufficient. Furthermore, this number of measurements is also sufficient for detection of the sparsity pattern (support) of the vector with measurement errors, provided the signal-to-noise ratio (SNR) scales to infinity. The scaling m = 2k log(n − k) exactly matches the number of measurements required by the more complex lasso method for signal recovery in a similar SNR scaling.
1 Introduction
Suppose x ∈ ℝ^n is a sparse vector, meaning its number of nonzero components k is smaller than n. The support of x is the locations of the nonzero entries and is sometimes called its sparsity pattern. A common sparse estimation problem is to infer the sparsity pattern of x from linear measurements of the form

y = Ax + w,    (1)

where A ∈ ℝ^{m×n} is a known measurement matrix, y ∈ ℝ^m represents a vector of measurements and w ∈ ℝ^m is a vector of measurement errors (noise).
Sparsity pattern detection and related sparse estimation problems are classical problems in nonlinear
signal processing and arise in a variety of applications including wavelet-based image processing [1]
and statistical model selection in linear regression [2]. There has also been considerable recent
interest in sparsity pattern detection in the context of compressed sensing, which focuses on large
random measurement matrices A [3?5]. It is this scenario with random measurements that will be
analyzed here.
Optimal subset recovery is NP-hard [6] and usually involves searches over all the (n choose k) possible support sets of x. Thus, most attention has focused on approximate methods for reconstruction. One simple and popular approximate algorithm is orthogonal matching pursuit (OMP), developed in [7–9]. OMP is a simple greedy method that identifies the location of one nonzero component of x at a time. A version of the algorithm will be described in detail below in Section 2. The best known
*This work was supported in part by a University of California President's Postdoctoral Fellowship and the Centre Bernoulli at École Polytechnique Fédérale de Lausanne.
analysis of the performance of OMP for large random matrices is due to Tropp and Gilbert [10, 11].
Among other results, Tropp and Gilbert show that when the number of measurements scales as

m ≥ (1 + δ) 4k log(n)    (2)

for some δ > 0, A has i.i.d. Gaussian entries, and the measurements are noise-free (w = 0), the OMP method will recover the correct sparse pattern of x with a probability that approaches one as n and k → ∞. Deterministic conditions on the matrix A that guarantee recovery of x by OMP are given in [12].
However, numerical experiments reported in [10] suggest that a smaller number of measurements
than (2) may be sufficient for asymptotic recovery with OMP. Specifically, the experiments suggest
that the constant 4 can be reduced to 2.
Our main result, Theorem 1 below, proves this conjecture. Specifically, we show that the scaling in measurements

m ≥ (1 + δ) 2k log(n − k)    (3)

is also sufficient for asymptotic reliable recovery with OMP provided both n − k → ∞ and k → ∞. The result goes further by allowing uncertainty in the sparsity level k.
We also improve upon the Tropp?Gilbert analysis by accounting for the effect of the noise w. While
the Tropp?Gilbert analysis requires that the measurements are noise-free, we show that the scaling
(3) is also sufficient when there is noise w, provided the signal-to-noise ratio (SNR) goes to infinity.
The main significance of the new scaling (3) is that it exactly matches the conditions for sparsity
pattern recovery using the well-known lasso method. The lasso method, which will be described
in detail in Section 4, is based on a convex relaxation of the optimal detection problem. The best
analysis of the sparsity pattern recovery with lasso is due to Wainwright [13, 14]. He showed in
[13] that under a similar high SNR assumption, the scaling (3) in number of measurements is both
necessary and sufficient for asymptotic reliable sparsity pattern detection.1 Now, although the lasso
method is often more complex than OMP, it is widely believed that lasso has superior performance
[10]. Our results show that at least for sparsity pattern recovery with large Gaussian measurement
matrices in high SNR, lasso and OMP have identical performance. Hence, the additional complexity
of lasso for these problems is not warranted.
Of course, neither lasso nor OMP is the best known approximate algorithm, and our intention is not
to claim that OMP is optimal in any sense. For example, where there is no noise in the measurements,
the lasso minimization (14) can be replaced by
x̂ = arg min_{v ∈ ℝ^n} ‖v‖₁,  s.t.  y = Av.

A well-known analysis due to Donoho and Tanner [15] shows that, for i.i.d. Gaussian measurement matrices, this minimization will recover the correct vector with

m ≥ 2k log(n/m)    (4)

when k ≪ n. This scaling is fundamentally better than the scaling (3) achieved by OMP and lasso.
There are also several variants of OMP that have shown improved performance. The CoSaMP algorithm of Needell and Tropp [16] and subspace pursuit algorithm of Dai and Milenkovic [17] achieve
a scaling similar to (4). Other variants of OMP include the stagewise OMP [18] and regularized
OMP [19]. Indeed with the recent interest in compressed sensing, there is now a wide range of
promising algorithms available. We do not claim that OMP achieves the best performance in any
sense. Rather, we simply intend to show that both OMP and lasso have similar performance in
certain scenarios.
Our proof of (3) follows along the same lines as Tropp and Gilbert?s proof of (2), but with two key
differences. First, we account for the effect of the noise by separately considering its effect in the
?true? subspace and its orthogonal complement. Second and more importantly, we provide a tighter
bound on the maximum correlation of the incorrect vectors. Specifically, in each iteration of the
¹Sufficient conditions under weaker conditions on the SNR are more subtle [14]: the scaling of SNR with n determines the sequences of regularization parameters for which asymptotic almost sure success is achieved, and the regularization parameter sequence affects the sufficient number of measurements.
OMP algorithm, there are n − k possible incorrect vectors that the algorithm can choose. Since the algorithm runs for k iterations, there are a total of k(n − k) possible error events. The Tropp and Gilbert proof bounds the probability of these error events with a union bound, essentially treating them as statistically independent. However, here we show that the energies on any one of the incorrect vectors across the k iterations are correlated. In fact, they are precisely described by samples on a certain normalized Brownian motion. Exploiting this correlation, we show that the tail bound on error probability grows as n − k, not k(n − k), independent events.
The outline of the remainder of this paper is as follows. Section 2 describes the OMP algorithm. Our
main result, Theorem 1, is stated in Section 3. A comparison to lasso is provided in Section 4, and
we suggest some future problems in Section 6. The proof of the main result is sketched in Section 7.
2 Orthogonal Matching Pursuit
To describe the algorithm, suppose we wish to determine the vector x from a vector y of the form (1). Let

I_true = { j : x_j ≠ 0 },    (5)

which is the support of the vector x. The set I_true will also be called the sparsity pattern. Let k = |I_true|, which is the number of nonzero components of x. The OMP algorithm produces a sequence of estimates Î(t), t = 0, 1, 2, …, of the sparsity pattern I_true, adding one index at a time. In the description below, let a_j denote the jth column of A.

Algorithm 1 (Orthogonal Matching Pursuit) Given a vector y ∈ ℝ^m, a measurement matrix A ∈ ℝ^{m×n} and threshold level μ > 0, compute an estimate Î_OMP of the sparsity pattern of x as follows:

1. Initialize t = 0 and Î(t) = ∅.

2. Compute P(t), the projection operator onto the orthogonal complement of the span of {a_i, i ∈ Î(t)}.

3. For each j, compute

ρ(t, j) = |a_j^⊤ P(t) y|² / ‖P(t) y‖²,

and let

[ρ*(t), i*(t)] = max_{j=1,…,n} ρ(t, j),    (6)

where ρ*(t) is the value of the maximum and i*(t) is an index which achieves the maximum.

4. If ρ*(t) > μ, set Î(t + 1) = Î(t) ∪ {i*(t)}. Also, increment t = t + 1 and return to step 2.

5. Otherwise stop. The final estimate of the sparsity pattern is Î_OMP = Î(t).

Note that since P(t) is the projection onto the orthogonal complement of a_j for all j ∈ Î(t), we have P(t) a_j = 0 for all j ∈ Î(t). Hence, ρ(t, j) = 0 for all j ∈ Î(t), and therefore the algorithm will not select the same vector twice.
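A direct NumPy transcription of Algorithm 1 can be sketched as follows. This is a readability-first sketch with explicit projections (the function name is ours); production implementations instead update a QR factorization across iterations.

```python
import numpy as np

def omp_support(y, A, mu):
    """Estimate the sparsity pattern by Algorithm 1: greedily add the index
    maximizing rho(t, j) until the maximum falls below the threshold mu."""
    m, _ = A.shape
    I_hat = []
    for _ in range(m):                       # at most m informative iterations
        if I_hat:
            As = A[:, I_hat]
            # projection onto the orthogonal complement of span{a_i : i in I_hat}
            P = np.eye(m) - As @ np.linalg.pinv(As)
        else:
            P = np.eye(m)
        r = P @ y
        energy = r @ r
        if energy <= 1e-12:                  # residual exhausted (noise-free case)
            break
        rho = (A.T @ r) ** 2 / energy        # rho(t, j) for every column j
        j = int(np.argmax(rho))
        if rho[j] <= mu:                     # stopping condition of step 4
            break
        I_hat.append(j)
    return sorted(I_hat)
```

In the noiseless orthonormal case, the method recovers the support exactly; with noise, the threshold mu plays the role analyzed in Section 5.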
The algorithm above only provides an estimate, Î_OMP, of the sparsity pattern I_true. Using Î_OMP, one can estimate the vector x in a number of ways. For example, one can take the least-squares estimate,

x̂ = arg min_v ‖y − Av‖²,    (7)

where the minimization is over all vectors v such that v_j = 0 for all j ∉ Î_OMP. The estimate x̂ is the projection of the noisy vector y onto the space spanned by the vectors a_i with i in the sparsity pattern estimate Î_OMP. However, this paper only analyzes the sparsity pattern estimate Î_OMP itself, and not the vector estimate x̂.
3 Asymptotic Analysis
We analyze the OMP algorithm in the previous section under the following assumptions.
Assumption 1 Consider a sequence of sparse recovery problems, indexed by the vector dimension n. For each n, let x ∈ ℝ^n be a deterministic vector and let k = k(n) be the number of nonzero components in x. Also assume:

(a) The sparsity level, k = k(n), satisfies

k(n) ∈ [k_min(n), k_max(n)],    (8)

for some deterministic sequences k_min(n) and k_max(n) with k_min(n) → ∞ as n → ∞ and k_max(n) < n/2 for all n.

(b) The number of measurements m = m(n) is a deterministic sequence satisfying

m ≥ (1 + δ) 2k_max log(n − k_min),    (9)

for some δ > 0.

(c) The minimum component power x_min² satisfies

lim_{n→∞} k x_min² = ∞,    (10)

where

x_min = min_{j ∈ I_true} |x_j|    (11)

is the magnitude of the smallest nonzero component of x.

(d) The powers of the vectors, ‖x‖², satisfy

lim_{n→∞} (1/(n − k)^δ) log(1 + ‖x‖²) = 0  for all δ > 0.    (12)

(e) The vector y is a random vector generated by (1), where A and w have i.i.d. Gaussian components with zero mean and variance 1/m.
on this range are necessary for proper selection of the threshold level ? > 0.
Assumption 1(b) is our the main scaling law on the number of measurements that we will show is
sufficient for asymptotic reliable recovery. In the special case when k is known so that kmax =
kmin = k, we obtain the simpler scaling law
m ? (1 + ?)2k log(n ? k).
(13)
We have contrasted this scaling law with the Tropp?Gilbert scaling law (2) in Section 1. We will
also compare it to the scaling law for lasso in Section 4.
Assumption 1(c) is critical and places constraints on the smallest component magnitude. The importance of the smallest component magnitude in the detection of the sparsity pattern was first
recognized by Wainwright [13,14,20]. Also, as discussed in [21], the condition requires that signalto-noise ratio (SNR) goes to infinity. Specifically, if we define the SNR as
SNR =
EkAxk2
,
kwk2
then under Assumption 1(e), it can be easily checked that
SNR = kxk2 .
Since x has k nonzero components, kxk2 ? kx2min , and therefore condition (10) requires that
SNR ? ?. For this reason, we will call our analysis of OMP a high-SNR analysis. The analysis of
OMP with SNR that remains bounded above is an interesting open problem.
4
Assumption 1(d) is technical and simply requires that the SNR does not grow too quickly with n. Note that even if SNR = O(k^α) for any α > 0, Assumption 1(d) will be satisfied.
Assumption 1(e) states that our analysis concerns large Gaussian measurement matrices A and
Gaussian noise w.
Theorem 1 Under Assumption 1, there exists a sequence of threshold levels μ = μ(n) such that the OMP method in Algorithm 1 will asymptotically detect the correct sparsity pattern, in that

lim_{n→∞} Pr( Î_OMP ≠ I_true ) = 0.

Moreover, the threshold levels μ can be selected simply as a function of k_min, k_max, n, m and δ.
Theorem 1 provides our main result and shows that the scaling law (9) is sufficient for asymptotic
recovery.
4 Comparison to Lasso Performance
It is useful to compare the scaling law (13) to the number of measurements required by the widelyused lasso method described for example in [22]. The lasso method finds an estimate for the vector
x in (1) by solving the quadratic program
x̂ = arg min_{v∈ℝⁿ} ‖y − Av‖² + λ‖v‖₁,    (14)

where λ > 0 is an algorithm parameter that trades off the prediction error with the sparsity of the
solution. Lasso is sometimes referred to as basis pursuit denoising [23]. While the optimization (14)
is convex, the running time of lasso is significantly longer than OMP unless A has some particular
structure [10]. However, it is generally believed that lasso has superior performance.
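For concreteness, the quadratic program (14) can be solved with iterative soft thresholding (ISTA), one standard first-order lasso solver; this sketch is purely illustrative and is not the solver analyzed in [13, 14] or used in [22]:

```python
import numpy as np

def ista(A, y, lam, n_iter=500):
    """Minimize ||y - A v||^2 + lam * ||v||_1 by iterative soft
    thresholding with step size 1/L, where L is the Lipschitz constant
    of the gradient of the quadratic term."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2
    v = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = v - (2.0 / L) * (A.T @ (A @ v - y))                # gradient step
        v = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return v

rng = np.random.default_rng(0)
n, m, k = 64, 32, 3
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))
x = np.zeros(n)
x[:k] = 1.0
y = A @ x                                            # noise-free measurements
v = ista(A, y, lam=0.05)
support = sorted(int(i) for i in np.argsort(np.abs(v))[-k:])
```

On this noise-free, incoherent instance the k largest coefficients of the lasso solution should sit on the true support.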
The best analysis of lasso for sparsity pattern recovery with large random matrices is due to Wainwright [13, 14]. There, it is shown that with an i.i.d. Gaussian measurement matrix and white Gaussian noise, the condition (13) is necessary for asymptotic reliable detection of the sparsity pattern.
In addition, under the condition (10) on the minimum component magnitude, the scaling (13) is also
sufficient. We thus conclude that OMP requires an identical scaling in the number of measurements
to lasso. Therefore, at least for sparsity pattern recovery from measurements with large random
Gaussian measurement matrices and high SNR, there is no additional performance improvement
with the more complex lasso method over OMP.
5 Threshold Selection and Stopping Conditions
In many problems, the sparsity level k is not known a priori and must be detected as part of the estimation process. In OMP, the sparsity level of the estimated vector is precisely the number of iterations
conducted before the algorithm terminates. Thus, reliable sparsity level estimation requires a good
stopping condition.
When the measurements are noise-free and one is concerned only with exact signal recovery, the
optimal stopping condition is simple: the algorithm should simply stop whenever there is no more
error. That is, ρ*(t) = 0 in (6). However, with noise, selecting the correct stopping condition requires
some care. The OMP method as described in Algorithm 1 uses a stopping condition based on testing
whether ρ*(t) > μ for some threshold μ.
One of the appealing features of Theorem 1 is that it provides a simple sufficient condition under
which this threshold mechanism will detect the correct sparsity level. Specifically, Theorem 1 provides a range k ∈ [k_min, k_max] under which there exists a threshold such that the OMP algorithm will
terminate in the correct number of iterations. The larger the number of measurements, m, the greater
one can make the range [kmin , kmax ]. The formula for the threshold level is given in (20).
Of course, in practice, one may deliberately want to stop the OMP algorithm with fewer iterations
than the "true" sparsity level. As the OMP method proceeds, the detection becomes less reliable and
it is sometimes useful to stop the algorithm whenever there is a high chance of error. Stopping early
may miss some small components, but may result in an overall better estimate by not introducing
too many erroneous components or components with too much noise. However, since our analysis
is only concerned with exact sparsity pattern recovery, we do not consider this type of stopping
condition.
6 Conclusions and Future Work
We have provided an improved scaling law on the number of measurements for asymptotic reliable sparsity pattern detection with OMP. This scaling law exactly matches the scaling needed by
lasso under similar conditions. However, much about the performance of OMP is still not fully understood. Most importantly, our analysis is limited to high SNR. It would be interesting to see if
reasonable sufficient conditions can be derived for finite SNR as well. Also, our analysis has been
restricted to exact sparsity pattern recovery. However, in many problems, especially with noise, it is
not necessary to detect every component in the sparsity pattern. It would be useful if partial support
recovery results such as [24–27] can be obtained for OMP.
Finally, our main scaling law (9) is only sufficient. While numerical experiments in [10, 28] suggest
that this scaling is also necessary for vectors with equal magnitude, it is possible that OMP can
perform better than the scaling law (9) when the component magnitudes have some variation; this is
demonstrated numerically in [28]. The benefit of dynamic range in an OMP-like algorithm has also
been observed in [29] and sparse Bayesian learning methods in [30, 31].
7 Proof Sketch for Theorem 1
7.1 Proof Outline
Due to space considerations, we only sketch the proof; additional details are given in [28].
The main difficulty in analyzing OMP lies in the statistical dependencies between iterations of the
OMP algorithm. Following along the lines of the Tropp–Gilbert proof in [10], we avoid these difficulties
by considering the following "genie" algorithm. A similar alternate algorithm is analyzed in [29].
1. Initialize t = 0 and I_true(t) = ∅.
2. Compute P_true(t), the projection operator onto the orthogonal complement of the span of
   {a_i, i ∈ I_true(t)}.
3. For all j = 1, …, n, compute

   ρ_true(t, j) = |a_j^* P_true(t) y|² / ‖P_true(t) y‖²,    (15)

   and let

   [ρ*_true(t), i*(t)] = max_{j∈I_true} ρ_true(t, j).    (16)

4. If t < k, set I_true(t + 1) = I_true(t) ∪ {i*(t)}. Increment t = t + 1 and return to step 2.
5. Otherwise stop. The final estimate of the sparsity pattern is I_true(k).
This "genie" algorithm is identical to the regular OMP method in Algorithm 1, except that it runs
for precisely k iterations as opposed to using a threshold μ for the stopping condition. Also, in
the maximization in (16), the genie algorithm searches over only the correct indices j ∈ I_true.
Hence, this genie algorithm can never select an incorrect index j ∉ I_true. Also, as in the regular
OMP algorithm, the genie algorithm will never select the same vector twice for almost all vectors
y. Therefore, after k iterations, the genie algorithm will have selected all the k indices in I_true and
terminate with the correct sparsity pattern estimate I_true(k) = I_true with probability one. So, we need
to show that the true OMP algorithm behaves identically to the "genie" algorithm with high probability.
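The full OMP iteration with the threshold stopping rule can be sketched as follows (a reconstruction from the correlation statistic (15) and the stopping test ρ*(t) > μ; the projection is computed with a QR factorization, which is an implementation choice of this sketch, not necessarily the authors' code):

```python
import numpy as np

def omp_threshold(A, y, mu):
    """Greedy OMP: repeatedly add the index maximizing the normalized
    correlation rho(t, j) of (15); stop once the maximum falls below mu."""
    m, _ = A.shape
    I = []
    while len(I) < m:
        if I:
            Q, _ = np.linalg.qr(A[:, I])      # orthonormal basis of span{a_i : i in I}
            r = y - Q @ (Q.T @ y)             # P(t) y: residual after projection
        else:
            r = y
        rho = (A.T @ r) ** 2 / (r @ r)        # rho(t, j) for every column j
        rho[I] = -np.inf                      # never reselect an index
        j = int(np.argmax(rho))
        if rho[j] <= mu:
            break
        I.append(j)
    return sorted(I)

rng = np.random.default_rng(1)
n, m, k = 128, 64, 4
A = rng.normal(0.0, 1.0 / np.sqrt(m), (m, n))
x = np.zeros(n)
x[:k] = 1.0
y = A @ x + rng.normal(0.0, 0.01, m)          # high-SNR measurements
mu = 2.0 * 1.2 * np.log(n - k) / m            # threshold (20), epsilon = 0.2, k_min = k
I_hat = omp_threshold(A, y, mu)
```

In this high-SNR regime the recovered index set should contain the true support {0, 1, 2, 3} and the iteration should terminate after about k steps.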
To this end, define the following two probabilities:

p_MD = Pr[ min_{t=0,…,k−1} max_{j∈I_true} ρ_true(t, j) ≤ μ ],    (17)

p_FA = Pr[ max_{t=0,…,k} max_{j∉I_true} ρ_true(t, j) ≥ μ ].    (18)
Both probabilities are implicitly functions of n. The first term, p_MD, can be interpreted as a
"missed detection" probability, since it corresponds to the event that the maximum correlation energy ρ_true(t, j) on the correct vectors j ∈ I_true falls below the threshold. We call the second term
p_FA the "false alarm" probability since it corresponds to the maximum energy on one of the "incorrect" indices j ∉ I_true exceeding the threshold. A simple induction argument shows that if there
are no missed detections or false alarms, the true OMP algorithm will select the same vectors as the
"genie" algorithm, and therefore recover the sparsity pattern. This shows that
Pr[ Î_OMP ≠ I_true ] ≤ p_MD + p_FA.
So we need to show that there exists a sequence of thresholds μ = μ(n) > 0 such that p_MD → 0 and
p_FA → 0 as n → ∞. To set this threshold, we select an ε > 0 such that

(1 + δ)/(1 + ε) ≥ 1 + ε,    (19)

where δ is from (9). Then, define the threshold level

μ = μ(n) = (2(1 + ε)/m) log(n − k_min).    (20)
7.2 Probability of Missed Detection
The proof that p_MD → 0 is similar to Tropp and Gilbert's proof in [10]. The key modification
is to use (10) to show that the effect of the noise is asymptotically negligible, so that for large n,

y ≈ Ax.    (21)
This is done by separately considering the components of w in the span of the vectors a_j for j ∈
I_true and its orthogonal complement.
One then follows the Tropp–Gilbert proof for the noise-free case to show that

max_{j∈I_true} ρ_true(t, j) ≥ 1/k
for large k. Hence, using (9) and (20), one can then show

liminf_{n→∞} max_{j∈I_true} (1/μ) ρ_true(t, j) ≥ 1 + ε,

which shows that p_MD → 0.
7.3 Probability of False Alarm
This part is harder. Define

z(t, j) = a_j^* P_true(t) y / ‖P_true(t) y‖,

so that ρ_true(t, j) = |z(t, j)|². Now, P_true(t) and y are functions of w and a_j for j ∈ I_true.
Therefore, they are independent of a_j for any j ∉ I_true. Also, since the vectors a_j have i.i.d.
Gaussian components with variance 1/m, conditional on P_true(t) and y, z(t, j) is normal with
variance 1/m. Hence, m·ρ_true(t, j) is a chi-squared random variable with one degree of freedom.
Now, there are k(n − k) values of ρ_true(t, j) for t = 1, …, k and j ∉ I_true. The Tropp–Gilbert
proof bounds the maximum of these k(n − k) values by the standard tail bound

max_{j∉I_true} max_{t=1,…,k} ρ_true(t, j) ≤ (2/m) log(k(n − k)) ≤ (2/m) log(n²) = (4/m) log(n).
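A quick numerical illustration of this tail bound in the independent case (the actual values in the proof are correlated across t, which is exactly what the Brownian-motion argument exploits; the specific constants below are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
m, k, n = 100, 10, 1000
N = k * (n - k)
z = rng.normal(size=N) ** 2 / m        # N independent chi-squared(1)/m samples
bound = 2.0 * np.log(N) / m            # the (2/m) log(k(n-k)) tail bound
ratio = float(z.max() / bound)         # typically a bit below 1
```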
To improve on this bound, we exploit the fact that for any j, the values of z(t, j) are
correlated. In fact, we show that the values z(t, j), t = 1, …, k, are distributed identically to points
on a normalized Brownian motion. Specifically, let B(s) be a standard linear Brownian motion and
let S(s) be the normalized Brownian motion

S(s) = (1/√s) B(s),  s > 0.    (22)
We then show that, for every j, there exist times s₁, …, s_k with

1 ≤ s₁ < ⋯ < s_k ≤ 1 + ‖x‖²

such that the vector z(j) = [z(1, j), …, z(k, j)] is identically distributed to [S(s₁), …, S(s_k)].
Hence,

max_{t=1,…,k} |z(t, j)|² = max_{t=1,…,k} |S(s_t)|² ≤ sup_{s∈[1, 1+‖x‖²]} |S(s)|².
The supremum of the sample path on the right-hand side can then be bounded by the reflection
principle [32]. This yields the improved bound

max_{j∉I_true} max_{t=1,…,k} ρ_true(t, j) ≤ (2/m) log(n − k).
Combining this with (20) shows

limsup_{n→∞} max_{j∉I_true} (1/μ) ρ_true(t, j) ≤ 1/(1 + ε),

which shows that p_FA → 0.
References
[1] S. Mallat. A Wavelet Tour of Signal Processing. Academic Press, second edition, 1999.
[2] A. Miller. Subset Selection in Regression. Number 95 in Monographs on Statistics and Applied Probability. Chapman & Hall/CRC, New York, second edition, 2002.
[3] E. J. Candès, J. Romberg, and T. Tao. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inform. Theory, 52(2):489–509, February 2006.
[4] D. L. Donoho. Compressed sensing. IEEE Trans. Inform. Theory, 52(4):1289–1306, April 2006.
[5] E. J. Candès and T. Tao. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inform. Theory, 52(12):5406–5425, December 2006.
[6] B. K. Natarajan. Sparse approximate solutions to linear systems. SIAM J. Computing, 24(2):227–234, April 1995.
[7] S. Chen, S. A. Billings, and W. Luo. Orthogonal least squares methods and their application to non-linear system identification. Int. J. Control, 50(5):1873–1896, November 1989.
[8] Y. C. Pati, R. Rezaiifar, and P. S. Krishnaprasad. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Conf. Rec. 27th Asilomar Conf. Sig., Sys., & Comput., volume 1, pages 40–44, Pacific Grove, CA, November 1993.
[9] G. Davis, S. Mallat, and Z. Zhang. Adaptive time-frequency decomposition. Optical Eng., 37(7):2183–2191, July 1994.
[10] J. A. Tropp and A. C. Gilbert. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inform. Theory, 53(12):4655–4666, December 2007.
[11] J. A. Tropp and A. C. Gilbert. Signal recovery from random measurements via orthogonal matching pursuit: The Gaussian case. Appl. Comput. Math. 2007-01, California Inst. of Tech., August 2007.
[12] J. A. Tropp. Greed is good: Algorithmic results for sparse approximation. IEEE Trans. Inform. Theory, 50(10):2231–2242, October 2004.
[13] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy recovery of sparsity. Technical report, Univ. of California, Berkeley, Dept. of Statistics, May 2006. arXiv:math.ST/0605740 v1 30 May 2006.
[14] M. J. Wainwright. Sharp thresholds for high-dimensional and noisy sparsity recovery using ℓ1-constrained quadratic programming (lasso). IEEE Trans. Inform. Theory, 55(5):2183–2202, May 2009.
[15] D. L. Donoho and J. Tanner. Counting faces of randomly-projected polytopes when the projection radically lowers dimension. J. Amer. Math. Soc., 22(1):1–53, January 2009.
[16] D. Needell and J. A. Tropp. CoSaMP: Iterative signal recovery from incomplete and inaccurate samples. Appl. Comput. Harm. Anal., 26(3):301–321, July 2008.
[17] W. Dai and O. Milenkovic. Subspace pursuit for compressive sensing: Closing the gap between performance and complexity. arXiv:0803.0811v1 [cs.NA], March 2008.
[18] D. L. Donoho, Y. Tsaig, I. Drori, and J. L. Starck. Sparse solution of underdetermined linear equations by stagewise orthogonal matching pursuit. Preprint, March 2006.
[19] D. Needell and R. Vershynin. Uniform uncertainty principle and signal recovery via regularized orthogonal matching pursuit. Found. Comput. Math., 9(3):317–334, June 2008.
[20] M. J. Wainwright. Information-theoretic limits on sparsity recovery in the high-dimensional and noisy setting. Technical Report 725, Univ. of California, Berkeley, Dept. of Statistics, January 2007.
[21] A. K. Fletcher, S. Rangan, and V. K. Goyal. Necessary and sufficient conditions for sparsity pattern recovery. IEEE Trans. Inform. Theory, 55(12), December 2009. To appear. Original submission available online [33].
[22] R. Tibshirani. Regression shrinkage and selection via the lasso. J. Royal Stat. Soc., Ser. B, 58(1):267–288, 1996.
[23] S. S. Chen, D. L. Donoho, and M. A. Saunders. Atomic decomposition by basis pursuit. SIAM J. Sci. Comp., 20(1):33–61, 1999.
[24] M. Akçakaya and V. Tarokh. Noisy compressive sampling limits in linear and sublinear regimes. In Proc. Conf. on Inform. Sci. & Sys., pages 1–4, Princeton, NJ, March 2008.
[25] M. Akçakaya and V. Tarokh. Shannon theoretic limits on noisy compressive sampling. arXiv:0711.0366v1 [cs.IT], November 2007.
[26] G. Reeves. Sparse signal sampling using noisy linear projections. Technical Report UCB/EECS-2008-3, Univ. of California, Berkeley, Dept. of Elec. Eng. and Comp. Sci., January 2008.
[27] S. Aeron, M. Zhao, and V. Saligrama. On sensing capacity of sensor networks for the class of linear observation, fixed SNR models. arXiv:0704.3434v3 [cs.IT], June 2007.
[28] A. K. Fletcher and S. Rangan. Sparse support recovery from random measurements with orthogonal matching pursuit. Manuscript, October 2009. Available online at http://www.eecs.berkeley.edu/?alyson/Publications/FletcherRangan OMP.pdf.
[29] A. K. Fletcher, S. Rangan, and V. K. Goyal. On-off random access channels: A compressed sensing framework. arXiv:0903.1022v1 [cs.IT], March 2009.
[30] Y. Jin and B. Rao. Performance limits of matching pursuit algorithms. In Proc. IEEE Int. Symp. Inform. Th., pages 2444–2448, Toronto, Canada, June 2008.
[31] D. Wipf and B. Rao. Comparing the effects of different weight distributions on finding sparse representations. In Proc. Neural Information Process. Syst., Vancouver, Canada, December 2006.
[32] I. Karatzas and S. E. Shreve. Brownian Motion and Stochastic Calculus. Springer-Verlag, New York, NY, 2nd edition, 1991.
[33] A. K. Fletcher, S. Rangan, and V. K. Goyal. Necessary and sufficient conditions on sparsity pattern recovery. arXiv:0804.1839v1 [cs.IT], April 2008.
A Biologically Plausible Model for Rapid Natural
Image Identification
S. Ghebreab, A. W.M. Smeulders
Intelligent Sensory Information Systems Group
University of Amsterdam, The Netherlands
[email protected]
H. S. Scholte, V. A. F. Lamme
Cognitive Neuroscience Group
University of Amsterdam, The Netherlands
[email protected]
Abstract
Contrast statistics of the majority of natural images conform to a Weibull distribution. This property of natural images may facilitate efficient and very rapid
extraction of a scene's visual gist. Here we investigated whether a neural response
model based on the Weibull contrast distribution captures visual information that
humans use to rapidly identify natural scenes. In a learning phase, we measured
EEG activity of 32 subjects viewing brief flashes of 700 natural scenes. From
these neural measurements and the contrast statistics of the natural image stimuli,
we derived an across-subject Weibull response model. We used this model to predict the EEG responses to 100 new natural scenes and estimated which scene the
subject viewed by finding the best match between the model predictions and the
observed EEG responses. In almost 90 percent of the cases our model accurately
predicted the observed scene. Moreover, in most failed cases, the scene mistaken
for the observed scene was visually similar to the observed scene itself. Similar results were obtained in a separate experiment in which 16 other subjects were presented with artificial occlusion models of natural images. Together, these results
suggest that Weibull contrast statistics of natural images contain a considerable
amount of visual gist information to warrant rapid image identification.
1 Introduction
Natural images, although apparently diverse, have a surprisingly regular statistical regularity. There
is a strong correlation between adjacent image points in terms of local features such as luminance
[1]. These second-order correlations decrease with distance between image points, giving rise to the
typical 1/f² power spectra of natural images. On account of this power-law characteristic, natural
images comprise a very small and distinguishable subset of the space of all possible images, with
specific scene categories occupying different parts of this subspace. For example, white noise images
can be distinguished from natural images because of their deviation from the power law statistics,
while street scenes and beach scenes can be separated from each other on the basis of differences
in ensemble power spectra [2]. Thus, the power spectra of natural images contain an indeterminate
amount of the visual gist of these images.
The similarity structure among nearby image points, however, represents only part of the statistical
structure in natural images. There are also higher-order correlations, which introduce structure in
the phase spectra of natural images. This structure is assumed to carry perceptually important image
features such as edges and has been measured in terms of kurtosis in the contrast distribution of
natural images [3, 4, 5]. Geusebroek and Smeulders [6] showed that the two-parameter Weibull
distribution adequately captures the variance and kurtosis in the contrast distribution of the majority
of natural images. In fact, the two parameters of the Weibull contrast distribution turn out to organize
the space of all possible natural scenes in a perceptually meaningful manner [7] and thus are likely
to provide additional information about a scene's visual gist.
Scholte et al. [7] have further shown that the two parameters of the Weibull contrast distribution
match biologically realistic computations of Lateral Geniculate Nucleus (LGN) cells. Specifically,
they simulated X-cell responses by filtering images with a difference of Gaussians (DoG), rectifying
the filtered images and transforming the pixel values of the resulting images with a contrast gain
function adequate for P-cells. To simulate Y-cell responses, the rectified images were passed through
a Gaussian smoothing function and resulting pixel values were subsequently transformed with a
contrast gain function adequate for M-cells. The sum of the resulting X-cell responses turned out
to correlate highly with one Weibull parameter (r=0.95), whereas the sum of the resulting Y-cell
responses correlated highly with the other Weibull parameter (r=0.70). Moreover, the two Weibull
parameters correlated highly with EEG activity (r²=0.5) at the occipital part of the brain. The
findings of Scholte et al. [7] show that our brain is capable of approximating the Weibull contrast
distribution of an image on the basis of filters that are biologically realistic in shape, sensitivity, and
size.
Here we hypothesized that if Weibull contrast distributions of natural images carry perceptually important information, a neural response model based on the Weibull contrast distribution will predict
brain responses to brief flashes of natural images. We tested this hypothesis with two experiments in
which we rapidly presented a large set of natural or artificial images to multiple subjects while measuring EEG activity across the entire cortex. In each experiment, we constructed a neural response
model from the Weibull statistics of the presented images and corresponding EEG data, which we
then applied to predict EEG responses to a new collection of natural or artificial images. To validate the constructed neural response models, we used the approach of Kay et al. [8]: predicted and
measured EEG responses were compared to determine whether the observed image was correctly
identified.
2
Methods
We first describe how we filter images locally with a set of biologically-realistic filters. Then we
address a local contrast response selection mechanism with which we construct a contrast magnitude
map for a given input image (a detailed description is in submission [12]). Subsequently, Weibull
contrast statistics are estimated from such maps and the relation between image statistics and neural
activity modeled. The section ends with an explanation of a performance measure for EEG-based
image identification.
2.1
Local contrast values in natural images
As in [7], we use contrast filters that have spatial characteristics and contrast response properties
closely mirroring well-known characteristic receptive-fields of LGN neurons [9]. Specifically, we
use a bank of second-order Gaussian derivative filters that span multiple octaves in spatial scale,
that have peak sensitivity approximately inverse to filter size and that have contrast gain properties
independent of size. We represent contrast gain using an established non-linear response model that
divides input contrast by the sum of the input and a semi-saturation constant [10]. In this model a low
value of the semi-saturation parameter indicates high non-linear contrast gain whereas higher values
result in a linear mapping and thus will not lead to saturation. Given an image, we process each
image location with a bank of 5 contrast filters covering 5 octaves in spatial scale and , subsequently,
subject the output of each scale-tuned filter to 5 different gain controls (5 semi-saturation values).
This results, for each image location, in 25 contrast response values.
We applied each of the 5 scale-specific filters, combined with each of the 5 contrast gain controls,
to 800 natural images. Figure 1 shows average responses over all image locations. Contrast is high
at small scale and low semi-saturation. It decreases exponentially with scale owing to the peak
sensitivity of the filters, which is inversely related to spatial scale. That contrast also decreases with
semi-saturation is explained by the fact that the amount of contrast suppression is proportional to
the semi-saturation value. From these summary statistics it follows that, although natural image
contrast varies considerably within and across scale and contrast gain, the vast majority of natural
image contrasts fall above a lower threshold. It is reasonable to assume that the LGN considers
contrast below this statistical threshold as noise and only processes contrasts above it, i.e. only
processes reliable contrast outputs.
Figure 1: Approximation of the typical range of contrasts generated by LGN neurons tuned to different spatial
frequencies (5 octave scales) and with different contrast gain properties (5 semi-saturation constants). Shown
are the average of local contrast (dark gray), plus and minus two standard deviations, in the gray level (left),
blue-yellow (middle) and red-green (right) color components of 800 natural images.
2.2
Natural image statistics-based selection of unique local image contrast values
What spatial scale and contrast gain does the LGN use to process local image contrast? It is unlikely
that the LGN (linearly) integrates the output of a population of spatially overlapping filters to determine local image contrast, as this would make it sensitive to receptive field clutter [11]. Here we
depart from the view that the LGN aims to minimize receptive field clutter by selecting a single output from a
population of scale and gain specific contrast filters [12]. Specifically, in order to determine contrast
at an image location, we apply the smallest filter with boosted contrast output above what can be
expected to be noise for that particular filter. We define local contrast as the amount of contrast exceeding the noise threshold, which for a given scale and gain is set here to half a standard deviation of
contrasts in 800 natural images (see figure 1). This contrast response selection mechanism produces
a contrast magnitude map in ways similar to the scale selection model in [13].
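The selection rule described above can be sketched as follows (a simplified stand-in: scales are assumed ordered from smallest to largest, and the per-scale noise thresholds are given rather than estimated from the 800-image statistics):

```python
import numpy as np

def select_local_contrast(responses, thresholds):
    """At each pixel, pick the smallest scale whose response exceeds its
    noise threshold and report the excess; pixels where no scale exceeds
    its threshold are assigned zero contrast."""
    excess = responses - np.asarray(thresholds)[:, None, None]
    out = np.zeros(responses.shape[1:])
    chosen = np.zeros(responses.shape[1:], dtype=bool)
    for s in range(responses.shape[0]):          # smallest scale first
        pick = (excess[s] > 0) & ~chosen
        out[pick] = excess[s][pick]
        chosen |= pick
    return out

# Toy example: 2 scales, a 1x2 image.
responses = np.array([[[0.1, 0.9]],
                      [[0.5, 0.2]]])
out = select_local_contrast(responses, thresholds=[0.3, 0.3])
# pixel 0: scale 0 is sub-threshold, scale 1 gives 0.5 - 0.3 = 0.2
# pixel 1: scale 0 already exceeds its threshold: 0.9 - 0.3 = 0.6
```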
We apply the local contrast selection mechanism separately to the individual color components of
an image. From a single color image, the three color components are extracted using the Gaussian
color model [14], resulting in gray-scale, blue-yellow and red-green image representations. Each
of these representations is convolved with the 25 scale and gain specific contrast filters and subsequently subjected to our local contrast selection mechanism. For each color component a dedicated
scale- and gain-dependent noise threshold is used (see figure 1). As a result, for each color image we
get three contrast magnitude maps, which we linearly sum to arrive at a single contrast magnitude
map.
2.3
Weibull statistics of local image contrast
The contrast magnitude map of an image is summarized in a histogram, representing the distribution
of local contrast values of that image. Note that the histogram does not preserve information about
spatial structure in the contrast magnitude map: scrambling the contrast magnitude map will not
affect the histogram. We subsequently fit a three-parameter Weibull distribution to the contrast
histogram. The three-parameter Wei bull distribution is given by
f(x)
= cexpc';tY
(I)
The parameters of this distribution are indicative of the spatial structure in a natural scene (see figure 2) and can be put in a biologically plausible framework [7]. The scale parameter β describes the width of the histogram; hence, it varies roughly with the variation in local image contrasts. The shape parameter γ describes the shape of the histogram; it varies with the amount of scene clutter. The μ parameter represents the origin of the distribution; its position is influenced by uneven illumination. The three Weibull parameters are estimated using a maximum likelihood estimator (MLE). To achieve illumination invariance, the μ parameter is normalized out.
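To illustrate the MLE step, here is a numpy-only sketch that fits β and γ by grid search; it is our own stand-in for a full MLE routine, with μ fixed at 0, mirroring the normalization above.

```python
import numpy as np

def weibull_mle(x, gammas=np.arange(0.2, 5.0, 0.01)):
    """Grid-search maximum likelihood fit of a Weibull distribution to
    contrast values x, with the origin mu fixed at 0.

    For a fixed shape gamma, the ML scale has the closed form
    beta = mean(x**gamma)**(1/gamma); we scan gamma and keep the
    (beta, gamma) pair with the highest log-likelihood.
    """
    x = np.asarray(x, dtype=float)
    best_ll, best = -np.inf, None
    for g in gammas:
        b = np.mean(x ** g) ** (1.0 / g)
        # Weibull log-likelihood of the sample for shape g and scale b.
        ll = np.sum(np.log(g) - g * np.log(b) + (g - 1.0) * np.log(x)
                    - (x / b) ** g)
        if ll > best_ll:
            best_ll, best = ll, (b, g)
    return best  # (beta, gamma)
```

In practice any off-the-shelf Weibull MLE routine could replace the grid search; the point is only that (β, γ) summarize one image's contrast histogram in two numbers.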
Edge Strength, β = 5.0, γ = 1.6 (upper image); Edge Strength, β = 0.9, γ = 0.8 (lower image).
Figure 2: Two arbitrary natural images from the Corel Photo Library with varying degrees of details and
varying degrees of scene clutter. The details in the upper image are chaotic. They range from large for the bird
to small for partially occluded tree branches. In contrast, the second picture depicts a single coherent object,
the eagle, against a highly uniform background. The image gradient at each image location shows the contrast
strength. All gradients accumulated in a histogram reveal the distribution of local contrasts. The scale and
shape parameters of the Weibull distribution are estimated from the fit to the histogram by maximum likelihood
estimation.
2.4 Model Estimation
We use EEG response signals from C channels (electrodes) covering the entire cortex, to develop
a Weibull response model that predicts neuronal responses to natural images. EEG signals are
measured for S subjects watching N natural images. We average these signals across subjects to
obtain a more robust response signal per channel and per image. This results in an N x C matrix
F(t) of response signals f_nc(t). We construct a linear Weibull response model for each channel
separately. Our rationale for combining the two Weibull parameters in a linear fashion is that these
two parameters can be suitably extracted from the X and Y units in the LGN model (as shown in
Scholte et al [7]) and as such the linear combination reflects linear pooling at the LGN level.
Functional data analysis [15] provides a natural framework for modeling continuous stochastic brain
processes. We use a point-wise multivariate functional linear model to establish the relation between
Weibull parameters X = [β_1, ..., β_N; γ_1, ..., γ_N]^T and the EEG response f_c(t) = [f_1c, ..., f_Nc]^T. The values β_n, γ_n are the Weibull parameters of image n and f_nc is the across-subject average response to that image at channel c. Weibull response model estimation for channel c then reduces to solving
f_c(t) = X w_c(t) + ε(t)    (2)
where w_c(t) is a 2 × 1 vector of regression functions and ε(t) = [ε_1(t), ..., ε_N(t)]^T is the vector of residual functions. Under the assumption that the residual functions ε(t) are independent and normally distributed with zero mean, the regression function is estimated by least squares minimization such that

ŵ_c(t) = argmin over w*(t) of ∫_t ‖ f_c(t) − X w*(t) ‖² dt.    (3)

A roughness penalty, based on the second derivative of w_c(t), regularizes the estimate ŵ_c(t). The estimated regression function provides the best estimate of f_c(t) in the least squares sense:

f̂_c(t) = X ŵ_c(t).    (4)
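Without the roughness penalty, this estimation step is ordinary least squares solved independently at every time sample, and the prediction step is a matrix product. A minimal sketch (function names and shapes are illustrative, and the regularization is omitted for brevity):

```python
import numpy as np

def fit_weibull_response_model(X, F):
    """X: (N, 2) matrix of per-image Weibull parameters [beta_n, gamma_n].
    F: (N, T) matrix of across-subject average responses at one channel,
    sampled at T time points.  Returns W of shape (2, T), the point-wise
    least squares estimate of the regression functions w_c(t)."""
    W, *_ = np.linalg.lstsq(X, F, rcond=None)
    return W

def predict_responses(Y, W):
    """Predict channel responses for M new images with parameters Y (M, 2)."""
    return Y @ W  # (M, T)
```

One such model is fitted per channel; the second function implements the prediction used below for image identification.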
We use ŵ_c(t) to predict the EEG responses to a new set of M images represented by their Weibull distribution. The EEG responses to these new images are predicted using the Weibull response model:

ĝ_c(t) = Y ŵ_c(t)    (5)

where the M × 2 data matrix Y contains the two Weibull parameters for each of the new images and the M-vector of functions ĝ_c(t) denotes the predicted neural responses for channel c.
2.5 Image Identification
How well does the Weibull response model predict EEG responses to natural images? We answer this question in terms of EEG-based identification of individual images. Given a set of M new images and their Weibull parameters Y, the Weibull response model provides the EEG prediction ĝ_c(t). The match between the prediction ĝ_c(t) and the true, measured EEG activity g_c(t) = [g_1, ..., g_M]^T provides a means for image identification. More specifically, an M × M similarity matrix S is constructed, where each element contains the Pearson correlation coefficient r between the measured g_cm(t) and predicted ĝ_cm(t) responses. The similarity matrix shows, for each individual image, the amount of EEG correlation with the other images. The image whose predicted activity pattern is most correlated with the measured activity pattern is selected. A similarity matrix is constructed separately for each of the C channels. These similarity matrices are squared in order to allow averaging of similarity matrices across channels. Hence, the square of the correlation coefficient r² rather than r itself is used as a measure of similarity between true and predicted response.
3 Experiments and Results
3.1 Stimulus and EEG Data
In our experiments we used 800 color images with a resolution of 345 × 217 pixels and a bit-depth
of 24. Of these, 400 were pictures of animals in their natural habitat and 400 pictures of natural
landscapes, city scenes, indoor scenes and man-made objects. These images were taken from a
larger set of images used in Fabre-Thorpe [16]. This subset of images was reasonably balanced in
terms of Michelson contrast, spatial frequency and orientation properties. The Weibull properties of
these images nevertheless covered a wide range of real-world images. The data set did not contain
near duplicates.
The images were presented to 32 subjects on a 19" Iiyama monitor with a resolution of 1024 × 768
pixels and a frame-rate of 100 Hz. Subjects were seated 90 cm from the monitor. During EEG
acquisition a stimulus was presented, on average every 1500 ms (range 1000-2000 ms) for 100
ms. Each stimulus was presented 2 times for a total of 1600 presentations. Recordings were made
with a Biosemi 64-channel ActiveTwo EEG system (Biosemi Instrumentation BV, Amsterdam, The
Netherlands). Data was sampled at 256 Hz. Data analysis was identical to [17] with the exception
that the high-pass filter was placed at 0.1 Hz (12 db/octave) and the pre-stimulus baseline activity
was taken between -100 and 0 ms with regard to stimulus onset. Trials were averaged over subject
per individual stimulus resulting in 800 averages of 20 to 32 averages per individual image.
3.2 Experiments
The experiments were carried out with the following parameters settings. Two banks of Gaussian
second-order derivative filters were used to determine image contrast for each image location. The
first set consisted of filters with octave spatial scales 1.5, 3, 6, 12, 24 (std. in pixels). This set was
used to determine the Weibull scale parameter β. The other filter bank, with scales 3, 6, 12, 24, 48, was used for the estimation of the Weibull shape parameter γ. The spatial properties of the two
sets were determined experimentally and roughly correspond to receptive field sizes of small X and
large Y Ganglion cells in the early visual system of the human brain [18]. We used 5 semi-saturation
constants between 0.15 and 1.6 to cover the spectrum from linear to non-linear contrast gain control
in the LGN.
A cross validation study was performed to obtain reliable performance measurements. We repeated
the same experiment 50 times, each time randomly selecting 700 images for model estimation and
100 images for image identification. Performance was measured in terms of the percentage of correctly identified images for each of the 50 experiments.
Figure 3: Total explained variance in ERP signals by the two Weibull parameters. The peak of the total explained variance is highest (75 percent) for the Iz electrode overlying the early visual cortex and gradually decays at higher brain areas. The time course of explained variance for the Iz electrode reveals that the peak occurs at 113 ms after stimulus onset. (Plots: percentage of explained variance per electrode, sorted, and over time from stimulus onset in ms.)
The 50 measures were then averaged to arrive at a single performance outcome. Hence, accuracy was defined as the fraction of images
for which the predicted activity pattern and measured activity pattern produced the highest r2. As
accuracy does not reflect how close the correct image was to being selected, we also ranked the
correlation coefficients and determined within which percentage of the ranked M images the correct
one was.
3.3 Results
We first present correlations between ERP signals from across the entire brain and the two parameters of the Weibull fit to the sum of selected local contrast values in the gray-level, blue-yellow and
red-green components of each image. Correlations are strikingly high at electrode Iz overlying the
early visual cortex. The peak r2 (square of the correlation coefficient) over time for that electrode is
75 percent (r = 0.8691; p = 0). The peak r2 over time slowly decays away from the occipital part
of the head, as can be seen from the topographic plots in figure 3. The Weibull parameters explain most variance in the ERP signal very early in visual processing, at 113 ms after stimulus onset (figure 3), and continue to explain variance up to about 200 ms. This suggests that the two Weibull parameters are probably only relevant to the brain in the early phases of visual processing.
Accuracy results are shown in figure 4. The topographic plots show image identification accuracy
for single channels (electrodes). Channel IZ produces the highest accuracy with 5 percent. This
means that based on ERP signal at the IZ electrode, 5 out of 100 images are on average correctly
identified from the similarity matrix. Then follow channel Oz with 4.3 percent, O2 with 4.1 and
so on. Image identification based on multiple channels strikingly improves performance as shown
in figure 4. When the similarity matrices from the 20 most contributive channels are averaged,
accuracy of almost 90 percent is obtained. This means that, with a Weibull response model of only
two parameters, almost every image can be correctly identified from the neural activity that this
image triggers. As an aside we note that this implies that the different parts of the early visual
system process different types of images (in terms of the two Weibull parameters) in different ways.
To test the individual contribution of the Weibull parameters, we performed principal component
analysis on the beta and gamma parameters and used the principal component scores separately for
image identification. A Weibull response model based only on one of the two principal component
scores performs significantly worse, as can be seen in figure 4. Moreover, there is a large difference in
accuracy performance between the two projected Weibull parameters.
Figure 4: Accuracy performance for the full (two-parameter) and partial (orthogonal projection of one of the two parameters) Weibull response model. Accuracy is based on the accumulation of image identification at multiple channels; the x-axis ranks electrodes according to individual performance. Curves: full model on natural images, partial model 1, partial model 2, and the full model on artificial images. The topographic plots show the accuracy performance for the individual channels.
These results demonstrate
that the two Weibull parameters indeed capture two different perceptual aspects of natural scenes,
which together constitute an important part of early neural processing of natural images.
Accuracy results in figure 4 only show how often the correct image is ranked first, not where it is
ranked. We therefore analyzed the image rankings (data not shown). For the first most contributive channel (Iz), the correct image is always ranked within the top 13 percent of the images. The ranking slightly worsens (top 15 percent) for the second most contributive channel (Oz) and for the third (O2, top 16 percent). From the fourth channel and beyond there is a clear but steady drop
in ranking. The ranking data show an overall pattern similar to the one seen in the accuracy data
and indicate that, even in cases where an image is not correctly identified, the misidentification is
limited.
When does identification fail? We extracted frequently confused image pairs from all similarity matrices of all 50 cross validation steps for all 64 channels. These image pairs reveal that identification
errors tend to occur when the selected image is visually similar to the correct image. The upper row
of figure 5 shows 4 images from our data set and the images with which these have been confused
frequently. The first set of 2 images, containing grazing cows and a packed donkey, has been confused 6 times across the 50 cross-validation experiments, the second and third sets 5 times, and the
fourth set 4 times. The overall similarity between the confused images is evident and remarkable
considering the variety of images we have used. These findings suggest that the Weibull model captures aspects of a scene's visual gist that the brain possibly uses for perception at a glance.
We further scrutinized image identification performance on occlusion models of natural images.
Following [19], we created 24 types of dead leaves images containing disks of various sizes (large, medium and small), size distributions (power law and exponential), intensities (equal intensity versus decaying intensity) and opacities (occluding versus transparent). For each image type, 16 instances were composed, resulting in a total of 384 dead leaves images. We presented 16 subjects with the 384 dead leaves images while recording their EEG activity. As with our natural images, the beta and gamma parameter values of the Weibull contrast distributions underlying the 384 dead leaves images correlated highly with EEG activity (r² = 0.83). A cross validation experiment in which we used 284 dead leaves images for building a Weibull response model and 100 for image identification resulted in an average image identification performance of 94 percent (see figure 4). Confusion
analysis revealed that dead leaves images with clear disks were well identified, whereas dead leaves images composed of transparent and thus indistinguishable disks were confused frequently (figure 5).
Figure 5: Most confused image pairs during cross-validation. Note the global similarity in spatial configuration between the natural image pairs. Similarity between most confused dead leaves image pairs is also apparent: except for the fourth pair, they are all images with transparent disks (but with different disk sizes and disk intensity patterns). Dead leaves images with small, opaque and equal intensity disks (as in the lower right example) were least confused.
Apparently, the information in the EEG signal that facilitates image identification is related to
clear object-background differences.
4 Discussion and Conclusion
To determine local image contrasts, we have applied a bank of biologically-motivated contrast filters
to each image location and selected a single filter output based on receptive field size and response
reliability. The statistics of locally selected image contrasts, appropriately captured by the Weibull
distribution, explain up to 75 percent of occipital EEG activity for natural images and almost 83 percent for artificial dead leaves images. We have used Weibull contrast statistics of these images and corresponding EEG activity to construct a Weibull response model for EEG-based rapid image identification. Using this model, we have obtained image identification performance of 90 percent for natural images and 94 percent for dead leaves images, which is remarkable considering the simplicity of the
two-parameter Weibull image model and the limited spatial resolution of EEG data. We attribute
this success to the ability of the Weibull parameters to structure the space of natural images in a
highly meaningful and compact way, invariant to a large class of accidental or trivial scene features.
Both the scale and shape parameters contribute to the meaningful organization of natural images and
appear to play an important role in the early neural processing of natural images.
Kay et al. [8] report similar image identification performance using another biologically plausible
model. In this model, a natural image is represented by a large set of Gabor wavelets differing in
size, position, orientation, spatial frequency and phase. Haemodynamic responses in the visual
cortex are integrally modeled as a linear function of the contrast energy contained in quadrature
wavelet pairs. In a repeated trial experiment involving 1700 training images, 120 test images, and
fMRI data of 2 subjects, 92 percent of the test images were correctly identified for one subject and 72 percent for a second subject. In a single trial experiment, the reported performances are 52 and 31 percent
respectively. We note that in contrast to [8], our neural response model is based on (summary) statistics of filter outputs, rather than on the filter outputs themselves. This may explain our model's ability to compactly describe a scene's visual gist.
In conclusion, we embrace the view that common factors of natural images imprinted in the brain
daily, underlie rapid image identification by humans. Departing from this view, we establish a
relationship between natural image statistics and neural processing through the Weibull response
model. Results with EEG-based image identification using the Weibull response model, together
with the biological plausibility of the Weibull response model, supports the idea that the human
visual system evolved, among others, to estimate the Weibull statistics of natural images for rapid
extraction of their visual gist [7].
References
[1] E. P. Simoncelli and B. A. Olshausen. Natural image statistics and neural representation. Annu. Rev. Neurosci., 24:1193–1216, 2001.
[2] A. Oliva and A. Torralba. Building the gist of a scene: The role of global image features in recognition. Visual Perception, Progress in Brain Research, 155, 2006.
[3] S. G. Mallat. A theory for multiresolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell., 11(7):674–693, 1989.
[4] M. A. Thomson. Higher-order structure in natural scenes. J. Opt. Soc. Am. A, 16:1549–1553, 1999.
[5] E. P. Simoncelli, A. Srivastava, A. B. Lee, and S.-C. Zhu. On advances in statistical modeling of natural images. Journal of Mathematical Imaging and Vision, 18(1), 2003.
[6] J. M. Geusebroek and A. W. M. Smeulders. A six-stimulus theory for stochastic texture. International Journal of Computer Vision, 62(1/2):7–16, 2005.
[7] H. S. Scholte, S. Ghebreab, A. Smeulders, and V. Lamme. Brain responses strongly correlate with Weibull image statistics when processing natural images. Journal of Vision, 9(4):1–15, 2009.
[8] K. N. Kay, T. Naselaris, R. J. Prenger, and J. L. Gallant. Identifying natural images from human brain activity. Nature, 452:352–355, 2008.
[9] L. J. Croner and E. Kaplan. Receptive fields of P and M ganglion cells across the primate retina. Vision Research, 35(11):7–24, 1995.
[10] V. Bonin, V. Mante, and M. Carandini. The suppressive field of neurons in lateral geniculate nucleus. Journal of Neuroscience, 25:10844–10856, 2005.
[11] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience, 2(11):1019–1025, November 1999.
[12] S. Ghebreab, H. S. Scholte, V. A. F. Lamme, and A. W. M. Smeulders. Neural adaption to the spatial distribution of multi-scale contrast in natural images. Submitted.
[13] J. H. Elder and S. W. Zucker. Local scale control for edge detection and blur estimation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 20:699–716, 1998.
[14] J. M. Geusebroek, R. van den Boomgaard, A. W. M. Smeulders, and H. Geerts. Color invariance. IEEE Transactions on Pattern Analysis and Machine Intelligence, 23(12):1338–1350, 2001.
[15] J. Ramsay and B. Silverman. Functional Data Analysis. Springer-Verlag, 1997.
[16] M. Fabre-Thorpe, A. Delorme, C. Marlot, and S. J. Thorpe. A limit to the speed of processing in ultra-rapid visual categorisation of novel natural scenes. Journal of Cognitive Neuroscience, 13:171–180, 2001.
[17] J. J. Fahrenfort, H. S. Scholte, and V. A. F. Lamme. The spatiotemporal profile of cortical processing leading up to visual perception. Journal of Vision, 8(1):1–12, 2008.
[18] L. Watanabe and R. W. Rodieck. Parasol and midget ganglion cells of the primate retina. Journal of Comparative Neurology, 289:434–454, 1989.
[19] W. H. Hsiao and R. P. Millane. Effects of occlusion, edges, and scaling on the power spectra of natural images. J. Opt. Soc. Am. A, 22:1789–1797, 2005.
Regression with Multi-task Gaussian Processes
Kian Ming A. Chai
School of Informatics, University of Edinburgh,
10 Crichton Street, Edinburgh EH8 9AB, UK
[email protected]
Abstract
We provide some insights into how task correlations in multi-task Gaussian process (GP) regression affect the generalization error and the learning curve. We
analyze the asymmetric two-tasks case, where a secondary task is to help the learning of a primary task. Within this setting, we give bounds on the generalization
error and the learning curve of the primary task. Our approach admits intuitive
understandings of the multi-task GP by relating it to single-task GPs. For the
case of one-dimensional input-space under optimal sampling with data only for
the secondary task, the limitations of multi-task GP can be quantified explicitly.
1 Introduction
Gaussian processes (GPs) (see e.g., [1]) have been applied to many practical problems. In recent
years, a number of models for multi-task learning with GPs have been proposed to allow different
tasks to leverage on one another [2?5]. While it is generally assumed that learning multiple tasks
together is beneficial, we are not aware of any work that quantifies such benefits, other than PACbased theoretical analysis for multi-task learning [6?8]. Following the tradition of the theoretical
works on GPs in machine learning, our goal is to quantify the benefits using average-case analysis.
We concentrate on the asymmetric two-tasks case, where the secondary task is to help the learning
of the primary task. Within this setting, the main parameters are (1) the degree of ?relatedness? ?
between the two tasks, and (2) the ratio ?S of total training data for the secondary task. While higher
|?| and lower ?S is clearly more beneficial to the primary task, the extent and manner that this is
so has not been clear. To address this, we measure the benefits using generalization error, learning
curve and optimal error, and investigate the influence of ? and ?S on these quantities.
We will give non-trivial lower and upper bounds on the generalization error and the learning curve.
Both types of bounds are important in providing assurance on the quality of predictions: an upper
bound provides an estimate of the amount of training data needed to attain a minimum performance
level, while a lower bound provides an understanding of the limitations of the model [9]. Our
approach relates multi-task GPs to single-task GPs and admits intuitive understandings of multi-task
GPs. For one-dimensional input-space under optimal sampling with data only for the secondary task,
we show the limit to which the error for the primary task can be reduced. This dispels any misconception that abundant data for the secondary task can compensate for the lack of data for the primary task.
2 Preliminaries and problem statement
2.1 Multi-task GP regression model and setup
The multi-task Gaussian process regression model in [5] learns M related functions {f_m}, m = 1, ..., M, by placing a zero-mean GP prior which directly induces correlations between tasks. Let y_m be an observation of the mth function at x. Then the model is given by
observation of the mth function at x. Then the model is given by
x
0
f
hfm (x)fm0 (x0 )i def
= Kmm0 k (x, x )
2
ym ? N (fm (x), ?m
),
(1)
where k^x is a covariance function over inputs, K^f is a positive semi-definite matrix of inter-task similarities, and σ_m² is the noise variance for the mth task.
The current focus is on the two-tasks case, where the secondary task S is to help improve the performance of the primary task T; this is the asymmetric multi-task learning as coined in [10]. We fix K^f to be a correlation matrix, and let the variance be explained fully by k^x (the converse has been done in [5]). Thus K^f is fully specified by the correlation ρ ∈ [−1, 1] between the two tasks. We further fix the noise variances of the two tasks to be the same, say σ_n². For the training data, there are n_T (resp. n_S) observations at locations X_T (resp. X_S) for task T (resp. S). We use n def= n_T + n_S for the total number of observations, π_S def= n_S/n for the proportion of observations for task S, and also X def= X_T ∪ X_S. The aim is to infer the noise-free response f_T(x∗) for task T at x∗. See Figure 1.
The covariance matrix of the noisy training data is K(ρ) + σ_n² I, where

    K(ρ) := [ K^x_TT, ρ K^x_TS ; ρ K^x_ST, K^x_SS ],    (2)

and K^x_TT (resp. K^x_SS) is the matrix of covariances (due to k^x) between locations in X_T (resp. X_S);
K^x_TS is the matrix of cross-covariances from locations in X_T to locations in X_S; and K^x_ST is K^x_TS
transposed.
transposed. The posterior variance at x? for task T is
x T
x T
2 ?1
def
?T2 (x? , ?, ?n2 , XT , XS ) = k?? ? kT
k? , where kT
; (3)
? (K(?) + ?n I)
? = (kT ? ) ?(kS? )
and k?? is the prior variance at x? , and kxT ? (resp. kxS? ) is the vector of covariances (due to k x )
between locations in XT (resp. XS ) and x? . Where appropriate and clear from context, we will
suppress some of the parameters in ?T2 (x? , ?, ?n2 , XT , XS ), or use X for (XT , XS ). Note that
?T2 (?) = ?T2 (??), so that ?T2 (1) is the same as ?T2 (?1); for brevity, we only write the former.
If the GP prior is correctly specified, then the posterior variance (3) is also the generalization error at
x_* [1, §7.3]. The latter is defined as ⟨(f*_T(x_*) − f̂_T(x_*))²⟩_{f*_T}, where f̂_T(x_*) is the posterior mean
at x_* for task T, and the expectation is taken over the distribution from which the true function f*_T
is drawn. In this paper, in order to distinguish succinctly from the generalization error introduced
in the next section, we use posterior variance to mean the generalization error at x_*. Note that the
actual y-values observed at X do not affect the posterior variance at any test location.
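The posterior variance (3) is straightforward to compute numerically. The sketch below is our own minimal implementation, assuming a unit-variance squared-exponential kernel for k^x (so k_** = 1) and reusing the training locations of Figure 2; the function and variable names are ours, not the paper's.

```python
import numpy as np

def se_kernel(a, b, ell=0.11):
    """Unit-variance squared-exponential covariance k^x(x, x')."""
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2.0 * ell ** 2))

def posterior_var_T(x_star, rho, sigma_n2, XT, XS, ell=0.11):
    """Posterior variance sigma_T^2(x_*, rho) of eq. (3).

    K(rho) couples the T and S blocks through the inter-task
    correlation rho, and k_* scales the S-covariances by rho.
    """
    KTS = se_kernel(XT, XS, ell)
    K = np.block([[se_kernel(XT, XT, ell), rho * KTS],
                  [rho * KTS.T, se_kernel(XS, XS, ell)]])
    x = np.array([x_star])
    k_star = np.concatenate([se_kernel(XT, x, ell).ravel(),
                             rho * se_kernel(XS, x, ell).ravel()])
    n = len(XT) + len(XS)
    k_ss = 1.0  # prior variance k_** of the unit-variance kernel
    return k_ss - k_star @ np.linalg.solve(K + sigma_n2 * np.eye(n), k_star)

XT = np.array([1.0 / 3, 2.0 / 3])   # task-T inputs, as in Figure 2
XS = np.array([0.2, 0.5, 0.8])      # task-S inputs, as in Figure 2
for rho in (0.0, 0.5, 1.0):
    print(rho, posterior_var_T(0.4, rho, 0.05, XT, XS))
```

Note that σ_T²(ρ) = σ_T²(−ρ) holds here to rounding error, since ρ enters K(ρ) and k_* in matched pairs.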
Problem statement. Given the above setting, the aim is to investigate how training observations
for task S can benefit the predictions for task T. We measure the benefits using the generalization error,
the learning curve and the optimal error, and investigate how these quantities vary with ρ and π_S.
2.2 Generalization errors, learning curves and optimal errors
We outline the general approach to obtaining the generalization error and the learning curve [1, §7.3]
under our setting, where we have two tasks and are concerned with the primary task T. Let p(x)
be the probability density, common to both tasks, from which test and training locations are drawn,
and assume that the GP prior is correctly specified. The generalization error for task T is obtained
by averaging the posterior variance for task T over x_*, and the learning curve for task T is obtained
by averaging the generalization error over training sets X:

    generalization error:  ε_T(ρ, σ_n², X_T, X_S) := ∫ σ_T²(x_*, ρ, σ_n², X_T, X_S) p(x_*) dx_*,    (4)
    learning curve:        ε_T^avg(ρ, σ_n², π_S, n) := ∫ ε_T(ρ, σ_n², X_T, X_S) p(X) dX,    (5)

where the training locations in X are drawn i.i.d., that is, p(X) factorizes completely into a product of
p(x)s. Besides averaging ε_T to obtain the learning curve, one may also use the optimal experimental
design methodology and minimize ε_T over X to find the optimal generalization error [11, chap. II]:

    optimal error:  ε_T^opt(ρ, σ_n², π_S, n) := min_X ε_T(ρ, σ_n², X_T, X_S).    (6)

Both ε_T(0, σ_n², X_T, X_S) and ε_T(1, σ_n², X_T, X_S) reduce to single-task GP cases; the former discards
the training observations at X_S, while the latter includes them. Similar analogues to the single-task GP
cases for ε_T^avg(0, σ_n², π_S, n) and ε_T^avg(1, σ_n², π_S, n), and for ε_T^opt(0, σ_n², π_S, n) and ε_T^opt(1, σ_n², π_S, n), can be
obtained. Note that ε_T^avg and ε_T^opt are well-defined since π_S n = n_S ∈ ℕ₀ by the definition of π_S.
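Definitions (4) and (5) can be approximated by Monte Carlo: average the posterior variance over test draws from p(x) for (4), and additionally over random training sets for (5). A self-contained sketch under assumed settings of our own choosing (uniform p(x) on [0, 1], SE kernel, length-scale 0.2):

```python
import numpy as np

rng = np.random.default_rng(0)
ell, sigma_n2, rho = 0.2, 0.05, 0.7

def se(a, b):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def post_var(x_test, XT, XS):
    """sigma_T^2 at each x in x_test, eq. (3), vectorized over test points."""
    KTS = se(XT, XS)
    K = np.block([[se(XT, XT), rho * KTS], [rho * KTS.T, se(XS, XS)]])
    Kstar = np.vstack([se(XT, x_test), rho * se(XS, x_test)])  # shape (n, m)
    A = np.linalg.solve(K + sigma_n2 * np.eye(K.shape[0]), Kstar)
    return 1.0 - np.sum(Kstar * A, axis=0)

def gen_error(XT, XS, n_mc=2000):
    """Monte Carlo estimate of eq. (4), with p(x) uniform on [0, 1]."""
    return post_var(rng.uniform(size=n_mc), XT, XS).mean()

def learning_curve(n, pi_S, n_sets=50):
    """Monte Carlo estimate of eq. (5): average eq. (4) over training sets X."""
    nS = int(round(pi_S * n))
    errs = [gen_error(rng.uniform(size=n - nS), rng.uniform(size=nS))
            for _ in range(n_sets)]
    return float(np.mean(errs))

print(learning_curve(10, 0.5), learning_curve(40, 0.5))
```

As expected of a learning curve, the estimate decreases as n grows.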
Figure 1: The two tasks S and T have task correlation ρ. The data set X_T (|X_T| = n_T) for task T
and the data set X_S (|X_S| = n_S) for task S are each shown with their own marker in the input space
(with covariance k^x(x, x′)); the test location x_* for task T is also marked.

Figure 2: The posterior variances σ_T²(x_*) at each test location within [0, 1], given task-T data at
1/3 and 2/3, and task-S data at 1/5, 1/2 and 4/5; the curves for ρ = 0 and ρ = 1 are labelled.

2.3 Eigen-analysis
We now state known results of eigen-analysis used in this paper. Let λ̃ := λ₁ > λ₂ > … and
φ₁(·), φ₂(·), … be the eigenvalues and eigenfunctions of the covariance function k^x under the
measure p(x)dx: they satisfy the integral equation ∫ k^x(x, x′) φ_i(x) p(x) dx = λ_i φ_i(x′). Let
μ̃ := μ₁ > μ₂ > … > μ_{n_S} =: μ be the eigenvalues of K^x_SS. If the locations in X_S are sampled
from p(x), then λ_i = lim_{n_S→∞} μ_i/n_S, i = 1 … n_S; see e.g. [1, §4.3.2] and [12, Theorem 3.4].
However, for the finite n_S used in practice, the estimate μ_i/n_S for λ_i is better for the larger eigenvalues
than for the smaller ones. Additionally, in one dimension with uniform p(x) on the unit interval,
if k^x satisfies the Sacks-Ylvisaker conditions of order r, then λ_i ∝ (πi)^{−2r−2} in the limit i → ∞
[11, Proposition IV.10, Remark IV.2]. Broadly speaking, an order-r process is exactly r times mean
square differentiable. For example, the stationary Ornstein-Uhlenbeck process is of order r = 0.
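The relation λ_i = lim_{n_S→∞} μ_i/n_S is easy to check on a kernel whose eigen-decomposition is known by construction. The toy rank-3 kernel below, built from cosine eigenfunctions that are orthonormal under the uniform measure on [0, 1], is our own illustration, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
lams = np.array([1.0, 0.5, 0.25])   # chosen process eigenvalues

def phi(x):
    """Orthonormal eigenfunctions under uniform p(x) on [0, 1]."""
    return np.stack([np.ones_like(x),
                     np.sqrt(2) * np.cos(np.pi * x),
                     np.sqrt(2) * np.cos(2 * np.pi * x)], axis=1)

def kernel(a, b):
    """k^x(x, x') = sum_i lam_i phi_i(x) phi_i(x'), a rank-3 kernel."""
    return phi(a) @ np.diag(lams) @ phi(b).T

nS = 2000
XS = rng.uniform(size=nS)
mu = np.sort(np.linalg.eigvalsh(kernel(XS, XS)))[::-1]
print(mu[:3] / nS)   # approaches lams as nS grows
```

The remaining eigenvalues of K^x_SS are numerically zero, as the kernel has rank 3.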
3 Generalization error
In this section, we derive expressions for the generalization error (and bounds thereon) for the
two-tasks case in terms of the single-task one. To illustrate and further motivate the problem, Figure 2 plots the posterior variance σ_T²(x_*, ρ) as a function of x_*, given two observations for task T
and three observations for task S. We roughly follow [13, Fig. 2], and use the squared exponential covariance function with length-scale 0.11 and noise variance σ_n² = 0.05. Six solid curves are plotted,
corresponding, from top to bottom, to ρ² = 0, 1/8, 1/4, 1/2, 3/4 and 1. The two dashed curves enveloping each solid curve are the lower and upper bounds derived in this section; the dashed curves
are hardly visible because the bounds are rather tight. The dotted line is the prior noise variance.

Similar to the case of single-task learning, each training point creates a depression on the σ_T²(x_*, ρ)
surface [9, 13]. However, while each training point for task T creates a "full" depression that reaches
the prior noise variance (horizontal dotted line at 0.05), the depression created by each training
point for task S depends on ρ, with "deeper" depressions for larger ρ². From the figure, and also from
the definition, it is clear that the following trivial bounds on σ_T²(x_*, ρ) hold:

Proposition 1. For all x_*, σ_T²(x_*, 1) ≤ σ_T²(x_*, ρ) ≤ σ_T²(x_*, 0).

Integrating with respect to x_* then gives the following corollary:

Corollary 2. ε_T(1, σ_n², X_T, X_S) ≤ ε_T(ρ, σ_n², X_T, X_S) ≤ ε_T(0, σ_n², X_T, X_S).

Sections 3.2 and 3.3 derive lower and upper bounds that are tighter than the above trivial bounds.
Prior to the bounds, we consider a degenerate case to illustrate the limitations of multi-task learning.
3.1 The degenerate case of no training data for the primary task
It is clear that if there is no training data for the secondary task, that is, if X_S = ∅, then σ_T²(x_*, 1) =
σ_T²(x_*, ρ) = σ_T²(x_*, 0) for all x_* and ρ. In the converse case where there is no training data for the
primary task, that is, X_T = ∅, we instead have the following proposition:

Proposition 3. For all x_*, σ_T²(x_*, ρ, ∅, X_S) = ρ² σ_T²(x_*, 1, ∅, X_S) + (1 − ρ²) k_**.

Proof.
    σ_T²(x_*, ρ, ∅, X_S) = k_** − ρ² (k^x_{S*})^T (K^x_SS + σ_n² I)^{−1} k^x_{S*}
        = (1 − ρ²) k_** + ρ² [k_** − (k^x_{S*})^T (K^x_SS + σ_n² I)^{−1} k^x_{S*}]
        = (1 − ρ²) k_** + ρ² σ_T²(x_*, 1, ∅, X_S).

Hence the posterior variance is a weighted average of the prior variance k_** and the posterior variance at perfect correlation. When the cardinality of X_S increases under infill asymptotics [14, §3.3],

    lim_{n_S→∞} σ_T²(x_*, 1, ∅, X_S) = 0  ⟹  lim_{n_S→∞} σ_T²(x_*, ρ, ∅, X_S) = (1 − ρ²) k_**.    (7)

This is the limit for the posterior variance at any test location for task T, if one has training data only
for the secondary task S. This is because a correlation of ρ between the tasks prevents any training
location for task S from having correlation higher than ρ with a test location for task T. Suppose
correlations in the input space are given by an isotropic covariance function k^x(|x − x′|). If we
translate correlations into distances between data locations, then any training location from task S
is beyond a certain radius from any test location for task T. In contrast, a training location from task
T may lie arbitrarily close to a test location for task T, subject to the constraints of noise.

We obtain the generalization error in this degenerate case by integrating Proposition 3 with respect to p(x_*)dx_*
and using the fact that the mean prior variance is given by the sum of the process eigenvalues.

Corollary 4. ε_T(ρ, σ_n², ∅, X_S) = ρ² ε_T(1, σ_n², ∅, X_S) + (1 − ρ²) ∑_{i=1}^∞ λ_i.
3.2 A lower bound
When X_T ≠ ∅, the correlations between locations in X_T and locations in X_S complicate the situation. However, since σ_T²(ρ) is a continuous and monotonically decreasing function of ρ², there exists
an α ∈ [0, 1], which depends on ρ, x_* and X, such that σ_T²(ρ) = α σ_T²(1) + (1 − α) σ_T²(0). That
α depends on x_* obstructs further analysis. The next proposition gives a lower bound σ̲_T²(ρ) of the
same form satisfying σ_T²(1) ≤ σ̲_T²(ρ) ≤ σ_T²(ρ), where the mixing proportion is independent of x_*.

Proposition 5. Let σ̲_T²(x_*, ρ) := ρ² σ_T²(x_*, 1) + (1 − ρ²) σ_T²(x_*, 0). Then for all x_*:
(a) σ̲_T²(x_*, ρ) ≤ σ_T²(x_*, ρ);
(b) σ_T²(x_*, ρ) − σ̲_T²(x_*, ρ) ≤ ρ² (σ_T²(x_*, 0) − σ_T²(x_*, 1));
(c) arg max_{ρ²} [σ_T²(x_*, ρ) − σ̲_T²(x_*, ρ)] ≥ 1/2.

The proofs are in supplementary material §S.2. The lower bound σ̲_T²(ρ) depends explicitly on ρ².
It depends implicitly on π_S, the proportion of observations for task S, through the gap
between σ_T²(1) and σ_T²(0). If there is no training data for the primary task, i.e., if π_S = 1, the
bound reduces to Proposition 3, and becomes exact for all values of ρ. If π_S = 0, the bound is also
exact. For π_S ∉ {0, 1}, the bound is exact when ρ ∈ {−1, 0, 1}. As seen from Figure 2, and later from
our simulation results in section 5.3, this bound is rather tight. Part (b) of the proposition states the
tightness of the bound: the gap is no more than a factor ρ² of the gap between the trivial bounds σ_T²(0) and
σ_T²(1). Part (c) of the proposition says that the bound is least tight for a value of ρ² greater than 1/2.

We provide an intuition for Proposition 5a. Let f̂₁ (resp. f̂₀) be the posterior mean of the single-task
GP when ρ = 1 (resp. ρ = 0). Contrasted with the multi-task predictor f̂_T, f̂₁ directly involves the
noisy observations for task T at X_S, so it has more information on task T. Hence, predicting f̂₁(x_*)
gives the trivial lower bound σ_T²(1) on σ_T²(ρ). The tighter bound σ̲_T²(ρ) is obtained by "throwing
away" information and predicting f̂₁(x_*) with probability ρ² and f̂₀(x_*) with probability (1 − ρ²).

Finally, the next corollary is readily obtained from Proposition 5a by integrating with respect to p(x_*)dx_*. This
is possible because the mixing proportion is independent of x_*.

Corollary 6. Let ε̲_T(ρ, σ_n², X_T, X_S) := ρ² ε_T(1, σ_n², X_T, X_S) + (1 − ρ²) ε_T(0, σ_n², X_T, X_S). Then
ε̲_T(ρ, σ_n², X_T, X_S) ≤ ε_T(ρ, σ_n², X_T, X_S).
3.3 An upper bound via equivalent isotropic noise at X_S
The following question motivates our upper bound: if the training locations in X_S had been observed
for task T rather than for task S, what is the variance σ̃_n² of the equivalent isotropic noise at X_S such
that the posterior variance remains the same? To answer this question, we first refine the definition
of σ_T²(·) to include a different noise variance parameter s² for the X_S observations:

    σ_T²(x_*, ρ, σ_n², s², X_T, X_S) := k_** − k_*^T [K(ρ) + diag(σ_n² I, s² I)]^{−1} k_*;    (8)

cf. (3). We may suppress the parameters x_*, X_T and X_S when writing σ_T²(·). The variance σ̃_n² of
the equivalent isotropic noise is a function of x_* defined by the equation

    σ_T²(x_*, 1, σ_n², σ̃_n²) = σ_T²(x_*, ρ, σ_n², σ_n²).    (9)

For any x_* there is always a σ̃_n² that satisfies the equation, because the difference

    Δ(ρ, σ_n², s²) := σ_T²(ρ, σ_n², σ_n²) − σ_T²(1, σ_n², s²)    (10)

is a continuous and monotonically decreasing function of s². To make progress, we seek an upper
bound σ̄_n² for σ̃_n² that is independent of the choice of x_*: Δ(ρ, σ_n², σ̄_n²) ≤ 0 for all test locations. Of
interest is the tight upper bound σ̄_n², which is the minimum possible such bound, given in the next proposition.

Proposition 7. Let μ̃ be the maximum eigenvalue of K^x_SS, ω := ρ^{−2} − 1 and σ̄_n² := ω(μ̃ + σ_n²) + σ_n².
Then for all x_*, σ_T²(x_*, ρ, σ_n², σ_n²) ≤ σ_T²(x_*, 1, σ_n², σ̄_n²). The bound is tight in this sense: for any
σ̆_n², if ∀x_* σ_T²(x_*, ρ, σ_n², σ_n²) ≤ σ_T²(x_*, 1, σ_n², σ̆_n²), then ∀x_* σ_T²(x_*, 1, σ_n², σ̄_n²) ≤ σ_T²(x_*, 1, σ_n², σ̆_n²).
Proof sketch. The matrix K(ρ) may be factorized as

    K(ρ) = [ I, 0 ; 0, ρI ] [ K^x_TT, K^x_TS ; K^x_ST, ρ^{−2} K^x_SS ] [ I, 0 ; 0, ρI ].    (11)

Using this factorization in the posterior variance (8) and taking out the [ I, 0 ; 0, ρI ] factors, we obtain

    σ_T²(ρ, σ_n², s²) = k_** − (k^x_*)^T [Γ(ρ, σ_n², s²)]^{−1} k^x_*,    (12)

where (k^x_*)^T := ((k^x_{T*})^T, (k^x_{S*})^T) and

    Γ(ρ, σ_n², s²) := [ K^x_TT, K^x_TS ; K^x_ST, ρ^{−2} K^x_SS ] + [ σ_n² I, 0 ; 0, ρ^{−2} s² I ]
                    = Γ(1, σ_n², s²) + ω [ 0, 0 ; 0, K^x_SS + s² I ].

The second expression for Γ makes clear that, in terms of σ_T²(ρ, σ_n², σ_n²), having data X_S for task
S is equivalent to an additional correlated noise at these observations for task T. This expression
motivates the question that began this section. Note that ρ^{−2} ≥ 1, and hence ω ≥ 0.

The increase in posterior variance due to having X_S at task S with noise variance σ_n², rather than
having these locations at task T with noise variance s², is given by Δ(ρ, σ_n², s²), which we may now write as

    Δ(ρ, σ_n², s²) = (k^x_*)^T [(Γ(1, σ_n², s²))^{−1} − (Γ(ρ, σ_n², σ_n²))^{−1}] k^x_*.    (13)
Recall that we seek an upper bound σ̄_n² for σ̃_n² such that Δ(ρ, σ_n², σ̄_n²) ≤ 0 for all test locations. In
general, this requires σ̄_n² ≥ ω(μ̃ + σ_n²) + σ_n², which yields the definition σ̄_n² := ω(μ̃ + σ_n²) + σ_n²;
details can be found in supplementary material §S.3. The tightness of σ̄_n² is evident from the construction.

Intuitively, σ_T²(x_*, 1, σ_n², σ̄_n²) is the tight upper bound because it inflates the noise (co)variance at X_S
just sufficiently, from (ω K^x_SS + σ_n² I/ρ²) to σ̄_n² I. Analogously, the tight lower bound on σ̃_n² is given
by σ̲_n² := ω(μ + σ_n²) + σ_n². In summary, ρ^{−2} σ_n² ≤ σ̲_n² ≤ σ̃_n² ≤ σ̄_n², where the first inequality
is obtained by substituting zero for μ in σ̲_n². Hence observing X_S at S is at most as "noisy" as
an additional ω(μ̃ + σ_n²) noise variance, and at least as "noisy" as an additional ω(μ + σ_n²) noise
variance. Since ω decreases with |ρ|, the additional noise variances are smaller when |ρ| is larger,
i.e., when task S is more correlated with task T.

We now describe how the above bounds scale with n_S, using the results stated
in section 2.3. For large enough n_S, we may write μ̃ ≈ n_S λ̃ and μ ≈ n_S λ_{n_S}. Furthermore, for uniformly distributed inputs on the one-dimensional unit interval, if the covariance
function satisfies Sacks-Ylvisaker conditions of order r, then λ_{n_S} = Θ((π n_S)^{−2r−2}), so that
μ = Θ(n_S^{−2r−1}). Since σ̄_n² and σ̲_n² are linear in μ̃ and μ, we have σ̄_n² = ρ^{−2} σ_n² + Θ(n_S)
and σ̲_n² = ρ^{−2} σ_n² + Θ(n_S^{−2r−1}). For the upper bound σ̄_n², note that although it scales linearly
with n_S, the eigenvalues of K(1) scale with n, thus σ_T²(1, σ_n², σ̄_n²) depends on π_S := n_S/n. In
contrast, the lower bound σ̲_n² is dominated by ρ^{−2} σ_n², so that σ_T²(1, σ_n², σ̲_n²) does not depend on π_S
even for moderate sizes n_S. Therefore, the lower bound is not as useful as the upper bound.
Finally, if we refine ε_T as we have done for σ_T² in (8), we obtain the following corollary:

Corollary 8. Let ε̄_T(ρ, σ_n², σ_n², X_T, X_S) := ε_T(1, σ_n², σ̄_n², X_T, X_S). Then
ε̄_T(ρ, σ_n², σ_n², X_T, X_S) ≥ ε_T(ρ, σ_n², σ_n², X_T, X_S).
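Proposition 7 and Corollary 8 can be exercised directly: compute σ̄_n² from the top eigenvalue of K^x_SS, and compare the multi-task posterior variance against the single-task variance with inflated noise at X_S. A self-contained sketch (our own code, assumed SE kernel):

```python
import numpy as np

ell, sigma_n2 = 0.11, 0.05

def se(a, b):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * ell ** 2))

def post_var(x_star, rho, s2, XT, XS):
    """sigma_T^2(x_*, rho, sigma_n^2, s^2) of eq. (8): noise s^2 on the X_S block."""
    nT, nS = len(XT), len(XS)
    KTS = se(XT, XS)
    K = np.block([[se(XT, XT), rho * KTS], [rho * KTS.T, se(XS, XS)]])
    noise = np.diag(np.r_[np.full(nT, sigma_n2), np.full(nS, s2)])
    x = np.array([x_star])
    k = np.concatenate([se(XT, x).ravel(), rho * se(XS, x).ravel()])
    return 1.0 - k @ np.linalg.solve(K + noise, k)

XT = np.array([1 / 3, 2 / 3])
XS = np.array([0.2, 0.5, 0.8])
rho = 0.6
omega = rho ** -2 - 1
mu_max = np.linalg.eigvalsh(se(XS, XS)).max()
s2_bar = omega * (mu_max + sigma_n2) + sigma_n2   # Proposition 7

for xs in np.linspace(0, 1, 11):
    # multi-task variance vs. single-task variance with inflated noise at X_S
    assert post_var(xs, rho, sigma_n2, XT, XS) \
           <= post_var(xs, 1.0, s2_bar, XT, XS) + 1e-12
print("upper bound of Proposition 7 holds at all test points")
```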
3.4 Exact computation of the generalization error
The factorization of σ_T² expressed by (12) allows the generalization error to be computed exactly in
certain cases. We replace the quadratic form in (12) by a matrix trace and then integrate out x_* to give

    ε_T(ρ, σ_n², X_T, X_S) = ⟨k_**⟩ − tr(Γ^{−1} ⟨k^x_* (k^x_*)^T⟩) = ∑_{i=1}^∞ λ_i − tr(Γ^{−1} M),

where Γ denotes Γ(ρ, σ_n², σ_n²), the expectations are taken over x_*, and M is an n-by-n matrix with

    M_pq := ∫ k^x(x_p, x_*) k^x(x_q, x_*) p(x_*) dx_* = ∑_{i=1}^∞ λ_i² φ_i(x_p) φ_i(x_q),  where x_p, x_q ∈ X.

When the eigenfunctions φ_i(·) are not bounded, the infinite-summation expression for M_pq is often
difficult to use. Nevertheless, analytical results for M_pq are still possible in some cases using the
integral expression. An example is the case of the squared exponential covariance function with
normally distributed x, when the integrand is a product of three Gaussians.
4 Optimal error for the degenerate case of no training data for the primary task
If training examples are provided only for task S, then task T has the following optimal performance.

Proposition 9. Under optimal sampling on a 1-d space, if the covariance function satisfies Sacks-Ylvisaker conditions of order r, then ε_T^opt(ρ, σ², 1, n) = Θ(n_S^{−(2r+1)/(2r+2)}) + (1 − ρ²) ∑_{i=1}^∞ λ_i.

Proof. We obtain ε_T^opt(ρ, σ², 1, n) = ρ² ε_T^opt(1, σ_n², 1, n) + (1 − ρ²) ∑_{i=1}^∞ λ_i by minimizing Corollary 4 with respect to X_S. Under the same conditions as the proposition, the optimal generalization error using
the single-task GP decays with training set size n as Θ(n^{−(2r+1)/(2r+2)}) [11, Proposition V.3]. Thus
ρ² ε_T^opt(1, σ_n², 1, n) = ρ² Θ(n_S^{−(2r+1)/(2r+2)}) = Θ(n_S^{−(2r+1)/(2r+2)}).

A direct corollary of the above result is that one cannot expect to do better than (1 − ρ²) ∑_i λ_i on
average. As this is a lower bound, the same can be said for incorrectly specified GP priors.
5 Theoretical bounds on the learning curve
Using the results from section 3, lower and upper bounds on the learning curve may be computed by
averaging over the choice of X using Monte Carlo approximation.¹ For example, using Corollary 2
and integrating with respect to p(X)dX gives the following trivial bounds on the learning curve:

Corollary 10. ε_T^avg(1, σ_n², π_S, n) ≤ ε_T^avg(ρ, σ_n², π_S, n) ≤ ε_T^avg(0, σ_n², π_S, n).

The gap between the trivial bounds can be analyzed as follows. Recall that π_S n ∈ ℕ₀ by definition,
so that ε_T^avg(1, σ_n², π_S, (1 − π_S)n) = ε_T^avg(0, σ_n², π_S, n). Therefore ε_T^avg(1, σ_n², π_S, n) is equivalent to
ε_T^avg(0, σ_n², π_S, n) scaled along the n-axis by the factor (1 − π_S) ∈ [0, 1], and hence the gap between
the trivial bounds becomes wider with π_S.

In the rest of this section, we derive non-trivial theoretical bounds on the learning curve before
providing simulation results. Theoretical bounds are particularly attractive for high-dimensional
input spaces, on which Monte Carlo approximation is harder.
5.1 Lower bound
For the single-task GP, a lower bound on its learning curve is σ_n² ∑_{i=1}^∞ λ_i/(σ_n² + nλ_i) [15]. We
shall call this the single-task OV bound. This lower bound can be combined with Corollary 6.

Proposition 11.

    ε_T^avg(ρ, σ_n², π_S, n) ≥ ρ² σ_n² ∑_{i=1}^∞ λ_i/(σ_n² + nλ_i) + (1 − ρ²) σ_n² ∑_{i=1}^∞ λ_i/(σ_n² + (1 − π_S)nλ_i),

or equivalently, ε_T^avg(ρ, σ_n², π_S, n) ≥ σ_n² ∑_{i=1}^∞ b_i¹ λ_i/(σ_n² + nλ_i),
    with b_i¹ := (σ_n² + (1 − ρ²π_S)nλ_i) / (σ_n² + (1 − π_S)nλ_i),

or equivalently, ε_T^avg(ρ, σ_n², π_S, n) ≥ σ_n² ∑_{i=1}^∞ b_i⁰ λ_i/(σ_n² + (1 − π_S)nλ_i),
    with b_i⁰ := (σ_n² + (1 − ρ²π_S)nλ_i) / (σ_n² + nλ_i).
¹Approximate lower bounds are also possible, by combining Corollary 6 and approximations in, e.g., [13].
Proof sketch. To obtain the first inequality, we integrate Corollary 6 with respect to p(X)dX and apply the
single-task OV bound twice. For the second inequality, its ith summand is obtained by combining
the corresponding pair of ith summands in the first inequality. The third inequality is obtained from
the second by swapping the denominator of b_i¹ with that of λ_i/(σ_n² + nλ_i) for every i.

For fixed σ_n², π_S and n, denote the above bound by OV_ρ. Then OV₀ and OV₁ are both single-task
bounds. In particular, from Corollary 10, we have that OV₁ is a lower bound on ε_T^avg(ρ, σ_n², π_S, n).
From the first expression of the above proposition, it is clear from the "mixture" nature of the bound
that the two-tasks bound OV_ρ is always better than OV₁. As ρ² decreases, the two-tasks bound moves
towards OV₀; and as π_S increases, the gap between OV₀ and OV₁ increases. In addition, the gap
is also larger for rougher processes, which are harder to learn. Therefore, the relative tightness of
OV_ρ over OV₁ is more noticeable for lower ρ², higher π_S and rougher processes.

The second expression in Proposition 11 is useful for comparing with OV₁. Each summand
for the two-tasks case is a factor b_i¹ of the corresponding summand for the single-task case. Since
b_i¹ ∈ [1, (1 − ρ²π_S)/(1 − π_S)), OV_ρ is more than OV₁ by at most (1 − ρ²)π_S/(1 − π_S) times.
Similarly, the third expression of the proposition is useful for comparing with OV₀: each summand
for the two-tasks case is a factor b_i⁰ ∈ ((1 − ρ²π_S), 1] of the corresponding single-task one.
Hence, OV_ρ is less than OV₀ by up to ρ²π_S times. In terms of the lower bound, this is the limit to
which multi-task learning can outperform the single-task learning that ignores the secondary task.
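The OV_ρ bound is cheap to evaluate once the process eigenvalues are known. The sketch below (our own code, with an assumed geometric spectrum) computes the first expression of Proposition 11 and confirms numerically that the second, b_i¹-based expression agrees:

```python
import numpy as np

def ov_bound(rho, pi_S, n, lams, sigma_n2=0.05):
    """OV_rho lower bound of Proposition 11 (first expression)."""
    single = lambda m: sigma_n2 * np.sum(lams / (sigma_n2 + m * lams))
    return rho ** 2 * single(n) + (1 - rho ** 2) * single((1 - pi_S) * n)

# assumed spectrum: geometrically decaying process eigenvalues
lams = 0.7 ** np.arange(30)
n, pi_S = 100, 0.5
ov0 = ov_bound(0.0, pi_S, n, lams)   # single-task bound on (1 - pi_S) n points
ov1 = ov_bound(1.0, pi_S, n, lams)   # single-task bound on all n points
ovr = ov_bound(0.7, pi_S, n, lams)
print(ov1, ovr, ov0)                 # OV_1 <= OV_rho <= OV_0

# the equivalent b_i^1 form of Proposition 11
sigma_n2 = 0.05
b1 = (sigma_n2 + (1 - 0.7 ** 2 * pi_S) * n * lams) / (sigma_n2 + (1 - pi_S) * n * lams)
alt = sigma_n2 * np.sum(b1 * lams / (sigma_n2 + n * lams))
print(abs(alt - ovr))                # the two expressions agree
```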
5.2 Upper bound using equivalent noise
An upper bound on the learning curve of a single-task GP is given in [16]. We shall refer to this
as the single-task FWO bound, and combine it with the approach in section 3.3 to obtain an upper
bound on the learning curve of task T. Although the single-task FWO bound was derived for observations
with isotropic noise, with some modifications (see supplementary material §S.4) the derivations are
still valid for observations with heteroscedastic and correlated noise. Below is a version of the FWO
bound that has yet to assume isotropic noise:

Theorem 12. ([16], modified second part of Theorem 6) Consider a zero-mean GP with covariance function k^x(·, ·), and eigenvalues λ_i and eigenfunctions φ_i(·) under the measure p(x)dx;
and suppose that the noise (co)variances of the observations are given by σ²(·, ·). For n observations {x_i}_{i=1}^n, let H and Φ be matrices such that H_ij := k^x(x_i, x_j) + σ²(x_i, x_j) and
Φ_ij := φ_j(x_i). Then the learning curve at n is upper-bounded by ∑_{i=1}^∞ λ_i − n ∑_{i=1}^∞ λ_i²/c_i, where
c_i := ⟨(Φ^T H Φ)_ii⟩/n, and the expectation in c_i is taken over the set of n input locations drawn
independently from p(x).

Unlike [16], we do not assume that the noise variance σ²(x_i, x_j) is of the form σ_n² δ_ij. Instead
of proceeding from the upper bound σ_T²(1, σ_n², σ̄_n²), we proceed directly from the exact posterior
variance given by (12). Thus we set the observation noise (co)variance σ²(x_i, x_j) to

    δ(x_i ∈ X_T) δ(x_j ∈ X_T) δ_ij σ_n² + δ(x_i ∈ X_S) δ(x_j ∈ X_S) [ω k^x(x_i, x_j) + ρ^{−2} δ_ij σ_n²],    (14)

so that, through the definition of c_i in Theorem 12, we obtain

    c_i = (1 + ωπ_S) { [(1 + ωπ_S²)n/(1 + ωπ_S) − 1] λ_i + ∫ k^x(x, x) [φ_i(x)]² p(x)dx } + σ_n²;    (15)

details are in the supplementary material §S.5. This leads to the following proposition:

Proposition 13. Let ω := ρ^{−2} − 1. Then, using the c_i defined in (15), we have
ε_T^avg(ρ, σ_n², π_S, n) ≤ ∑_{i=1}^∞ λ_i − n ∑_{i=1}^∞ λ_i²/c_i.

Denote the above upper bound by FWO_ρ. When ρ = ±1 or π_S = 0, the single-task FWO upper
bound is recovered. However, FWO_ρ with ρ → 0 gives the prior variance ∑_i λ_i instead. A trivial
upper bound can be obtained using Corollary 10, by replacing n with (1 − π_S)n in the single-task
FWO bound. The FWO_ρ bound is better than this trivial single-task bound for small n and high |ρ|.
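FWO_ρ is likewise a closed-form function of the eigenvalues once ∫ k^x(x, x) [φ_i(x)]² p(x)dx is known; for a unit-variance kernel with orthonormal eigenfunctions that integral is 1. A sketch (our own code, assumed spectrum; ρ = 0 is avoided since ω = ρ^{−2} − 1 diverges there):

```python
import numpy as np

def fwo_bound(rho, pi_S, n, lams, sigma_n2=0.05, k_diag_avg=1.0):
    """FWO_rho upper bound of Proposition 13, with c_i from eq. (15).

    k_diag_avg stands in for int k^x(x,x) phi_i(x)^2 p(x) dx, which equals 1
    for a unit-variance kernel with orthonormal eigenfunctions.
    """
    omega = rho ** -2 - 1
    c = (1 + omega * pi_S) * (
        ((1 + omega * pi_S ** 2) * n / (1 + omega * pi_S) - 1) * lams
        + k_diag_avg) + sigma_n2
    return np.sum(lams) - n * np.sum(lams ** 2 / c)

lams = 0.7 ** np.arange(30)    # assumed geometric spectrum
n, pi_S = 100, 0.5
print(fwo_bound(0.999, pi_S, n, lams), fwo_bound(0.5, pi_S, n, lams))
```

At ρ = 1 (ω = 0) the c_i collapse to the single-task form (n − 1)λ_i + 1 + σ_n², recovering the single-task FWO bound, and the bound grows as |ρ| shrinks.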
5.3 Comparing bounds by simulations of the learning curve
We compare our bounds with simulated learning curves. We follow the third scenario in [13]: the input space is one-dimensional with Gaussian distribution N(0, 1/12), the covariance function is the
unit-variance squared exponential k^x(x, x′) = exp[−(x − x′)²/(2l²)] with length-scale l = 0.01,
the observation noise variance is σ_n² = 0.05, and the learning curves are computed for up to n = 300
training data points. When required, the average over x_* is computed analytically (see section 3.4).
The empirical average over X := X_T ∪ X_S, denoted by ⟨⟨·⟩⟩, is computed over 100 randomly sampled training sets. The process eigenvalues λ_i needed to compute the theoretical bounds are given
in [17]. Supplementary material §S.6 gives further details.

Figure 3: Comparison of various bounds for two settings of (ρ, π_S): (a) ρ² = 1/2, π_S = 1/2; (b) ρ² = 3/4, π_S = 3/4. Each graph plots ε_T^avg against n
and consists of the "true" multi-task learning curve (middle), the theoretical lower/upper bounds
of Propositions 11/13, the empirical trivial lower/upper bounds using Corollary
10, and the empirical lower/upper bounds using Corollaries 6/8. The
thickness of the "true" multi-task learning curve reflects the 95% confidence interval.
Learning curves for pairwise combinations of ρ² ∈ {1/8, 1/4, 1/2, 3/4} and π_S ∈ {1/4, 1/2, 3/4}
are computed. We compare the following: (a) the "true" multi-task learning curve ⟨⟨ε_T(ρ)⟩⟩, obtained
by averaging σ_T²(ρ) over x_* and X; (b) the theoretical bounds OV_ρ and FWO_ρ of Propositions 11
and 13; (c) the trivial upper and lower bounds, which are the single-task learning curves ⟨⟨ε_T(0)⟩⟩ and
⟨⟨ε_T(1)⟩⟩ obtained by averaging σ_T²(0) and σ_T²(1); and (d) the empirical lower bound ⟨⟨ε̲_T(ρ)⟩⟩ and
upper bound ⟨⟨ε̄_T(ρ)⟩⟩ using Corollaries 6 and 8. Figure 3 gives some indicative plots of the curves.

We summarize with the following observations: (a) The gap between the trivial bounds ⟨⟨ε_T(0)⟩⟩
and ⟨⟨ε_T(1)⟩⟩ increases with π_S, as described at the start of section 5. (b) We find the lower bound
⟨⟨ε̲_T(ρ)⟩⟩ a rather close approximation to the multi-task learning curve ⟨⟨ε_T(ρ)⟩⟩, as evidenced by
the large overlap between the corresponding lines in Figure 3. (c) The curve for the
empirical upper bound ⟨⟨ε̄_T(ρ)⟩⟩ using the equivalent-noise method has jumps, because the equivalent noise variance σ̄_n² increases whenever a datum for X_S is sampled.
(d) For small n, ⟨⟨ε_T(ρ)⟩⟩ is closer to FWO_ρ, but becomes closer to OV_ρ as n increases. This is because the
theoretical lower bound OV_ρ is based on the asymptotically exact single-task OV bound and on the ε̲_T(ρ) bound, which is observed to approximate
the multi-task learning curve rather closely (point (b)).
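A lightweight variant of this simulation (our own construction, not the paper's SE setup) uses a finite-rank kernel whose process eigenvalues are known exactly, so the simulated curve can be compared against OV_ρ without the eigenvalues of [17]:

```python
import numpy as np

rng = np.random.default_rng(2)
sigma_n2, rho, pi_S = 0.05, 0.7, 0.5
lams = np.array([1.0, 0.5, 0.25, 0.125])   # exact process eigenvalues

def phi(x):
    """Eigenfunctions orthonormal under uniform p(x) on [0, 1]."""
    cols = [np.ones_like(x)] + [np.sqrt(2) * np.cos(k * np.pi * x) for k in (1, 2, 3)]
    return np.stack(cols, axis=1)

def kern(a, b):
    return phi(a) @ (lams[:, None] * phi(b).T)

def gen_error(XT, XS, n_mc=1000):
    """Eq. (4) by Monte Carlo over x_*, using eq. (3)."""
    KTS = kern(XT, XS)
    K = np.block([[kern(XT, XT), rho * KTS], [rho * KTS.T, kern(XS, XS)]])
    xs = rng.uniform(size=n_mc)
    Kstar = np.vstack([kern(XT, xs), rho * kern(XS, xs)])
    A = np.linalg.solve(K + sigma_n2 * np.eye(K.shape[0]), Kstar)
    prior = np.einsum('ij,j,ij->i', phi(xs), lams, phi(xs))   # k^x(x, x)
    return float(np.mean(prior - np.sum(Kstar * A, axis=0)))

def curve(n, n_sets=60):
    """Eq. (5): average eq. (4) over random training sets."""
    nS = int(round(pi_S * n))
    return np.mean([gen_error(rng.uniform(size=n - nS), rng.uniform(size=nS))
                    for _ in range(n_sets)])

def ov(n):
    """OV_rho lower bound of Proposition 11 (first expression)."""
    single = lambda m: sigma_n2 * np.sum(lams / (sigma_n2 + m * lams))
    return rho ** 2 * single(n) + (1 - rho ** 2) * single((1 - pi_S) * n)

for n in (8, 16, 32):
    print(n, curve(n), ov(n))   # simulated curve should sit above OV_rho
```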
Conclusions. We have measured the influence of the secondary task on the primary task using the
generalization error and the learning curve, parameterizing these with the correlation ρ between the
two tasks and the proportion π_S of observations for the secondary task. We have provided bounds
on the generalization error and the learning curve, and these bounds highlight the effects of ρ and π_S.
This is a step towards understanding the role of the matrix K^f of inter-task similarities in multi-task
GPs with more than two tasks. Analysis of the degenerate case of no training data for the primary
task has uncovered an intrinsic limitation of multi-task GPs. Our work contributes to an understanding
of multi-task learning that is orthogonal to the existing PAC-based results in the literature.
Acknowledgments
I thank E Bonilla for motivating this problem, CKI Williams for helpful discussions and for proposing the equivalent isotropic noise approach, and DSO National Laboratories, Singapore, for financial
support. This work is supported in part by the EU through the PASCAL2 Network of Excellence.
References
[1] Carl E. Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine Learning.
MIT Press, Cambridge, Massachusetts, 2006.
[2] Yee Whye Teh, Matthias Seeger, and Michael I. Jordan. Semiparametric latent factor models.
In Robert G. Cowell and Zoubin Ghahramani, editors, Proceedings of the 10th International
Workshop on Artificial Intelligence and Statistics, pages 333–340. Society for Artificial Intelligence and Statistics, January 2005.
[3] Edwin V. Bonilla, Felix V. Agakov, and Christopher K. I. Williams. Kernel Multi-task Learning
using Task-specific Features. In Marina Meila and Xiaotong Shen, editors, Proceedings of
the 11th International Conference on Artificial Intelligence and Statistics. Omni Press, March
2007.
[4] Kai Yu, Wei Chu, Shipeng Yu, Volker Tresp, and Zhao Xu. Stochastic Relational Models for
Discriminative Link Prediction. In B. Schölkopf, J. Platt, and T. Hofmann, editors, Advances
in Neural Information Processing Systems 19, Cambridge, MA, 2007. MIT Press.
[5] Edwin V. Bonilla, Kian Ming A. Chai, and Christopher K.I. Williams. Multi-task Gaussian
process prediction. In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in
Neural Information Processing Systems 20. MIT Press, Cambridge, MA, 2008.
[6] Jonathan Baxter. A Model of Inductive Bias Learning. Journal of Artificial Intelligence Research, 12:149–198, March 2000.
[7] Andreas Maurer. Bounds for linear multi-task learning. Journal of Machine Learning Research, 7:117–139, January 2006.
[8] Shai Ben-David and Reba Schuller Borbely. A notion of task relatedness yielding provable
multiple-task learning guarantees. Machine Learning, 73(3):273–287, 2008.
[9] Christopher K. I. Williams and Francesco Vivarelli. Upper and lower bounds on the learning
curve for Gaussian processes. Machine Learning, 40(1):77–102, 2000.
[10] Ya Xue, Xuejun Liao, Lawrence Carin, and Balaji Krishnapuram. Multi-task learning for
classification with Dirichlet process prior. Journal of Machine Learning Research, 8:35–63,
January 2007.
[11] Klaus Ritter. Average-Case Analysis of Numerical Problems, volume 1733 of Lecture Notes in
Mathematics. Springer, 2000.
[12] Christopher T. H. Baker. The Numerical Treatment of Integral Equations. Clarendon Press,
1977.
[13] Peter Sollich and Anason Halees. Learning curves for Gaussian process regression: Approximations and bounds. Neural Computation, 14(6):1393–1428, 2002.
[14] Noel A. Cressie. Statistics for Spatial Data. Wiley, New York, 1993.
[15] Manfred Opper and Francesco Vivarelli. General bounds on Bayes errors for regression with
Gaussian processes. In Kearns et al. [18], pages 302–308.
[16] Giancarlo Ferrari Trecate, Christopher K. I. Williams, and Manfred Opper. Finite-dimensional
approximation of Gaussian processes. In Kearns et al. [18], pages 218–224.
[17] Huaiyu Zhu, Christopher K. I. Williams, Richard Rohwer, and Michal Morciniec. Gaussian
regression and optimal finite dimensional linear models. In Christopher M. Bishop, editor,
Neural Networks and Machine Learning, volume 168 of NATO ASI Series F: Computer and
Systems Sciences, pages 167–184. Springer-Verlag, Berlin, 1998.
[18] Michael J. Kearns, Sara A. Solla, and David A. Cohn, editors. Advances in Neural Information
Processing Systems 11, 1999. The MIT Press.
third:3 learns:1 theorem:4 misconception:1 xt:37 specific:1 pac:1 bishop:1 x:58 admits:2 decay:1 exists:1 intrinsic:1 workshop:1 ci:7 kx:7 gap:7 prevents:1 expressed:1 halees:1 cowell:1 springer:2 satisfies:4 b1i:5 ma:2 goal:1 noel:1 towards:2 replace:1 infinite:1 contrasted:1 uniformly:1 averaging:6 kearns:3 total:2 secondary:12 kxs:6 experimental:1 ya:1 support:1 latter:2 jonathan:1 brevity:1 correlated:3 |
DUOL: A Double Updating Approach for
Online Learning
Peilin Zhao
School of Comp. Eng.
Nanyang Tech. University
Singapore 639798
[email protected]

Steven C.H. Hoi
School of Comp. Eng.
Nanyang Tech. University
Singapore 639798
[email protected]

Rong Jin
Dept. of Comp. Sci. & Eng.
Michigan State University
East Lansing, MI, 48824
[email protected]
Abstract
In most online learning algorithms, the weights assigned to the misclassified examples (or support vectors) remain unchanged during the entire learning process.
This is clearly insufficient since when a new misclassified example is added to
the pool of support vectors, we generally expect it to affect the weights for the
existing support vectors. In this paper, we propose a new online learning method,
termed Double Updating Online Learning, or DUOL for short. Instead of only
assigning a fixed weight to the misclassified example received in current trial, the
proposed online learning algorithm also tries to update the weight for one of the
existing support vectors. We show that the mistake bound can be significantly improved by the proposed online learning method. Encouraging experimental results
show that the proposed technique is in general considerably more effective than
the state-of-the-art online learning algorithms.
1 Introduction
Online learning has been extensively studied in the machine learning community (Rosenblatt, 1958;
Freund & Schapire, 1999; Kivinen et al., 2001a; Crammer et al., 2006). Most online learning
algorithms work by assigning a fixed weight to a new example when it is misclassified. As a result,
the weights assigned to the misclassified examples, or support vectors, remain unchanged during the
entire process of learning. This is clearly insufficient because when a new example is added to the
pool of support vectors, we expect it to affect the weights assigned to the existing support vectors
received in previous trials.
Although several online algorithms are capable of updating the example weights as the learning process goes, most of them are designed for purposes other than improving the classification accuracy and reducing the mistake bound. For instance, in (Orabona et al., 2008; Crammer et al.,
2003; Dekel et al., 2005), online learning algorithms are proposed to adjust the example weights
in order to fit in the constraint of fixed number of support vectors; in (Cesa-Bianchi & Gentile,
2006), example weights are adjusted to track the drifting concepts. In this paper, we propose a new
formulation for online learning that aims to dynamically update the example weights in order to
improve the classification accuracy as well as the mistake bound. Instead of only assigning a weight
to the misclassified example that is received in current trial, the proposed online learning algorithm
also updates the weight for one of the existing support vectors. As a result, the example weights
are dynamically updated as learning goes. We refer to the proposed approach as Double Updating
Online Learning, or DUOL for short.
The key question in the proposed online learning approach is which one of the existing support vectors should be selected for weight updating. To this end, we employ an analysis for double updating
online learning that is based on the recent work of online convex programming by incremental dual
ascent (Shalev-Shwartz & Singer, 2006). Our analysis shows that under certain conditions, the proposed online learning algorithm can significantly reduce the mistake bound of the existing online
algorithms. This result is further verified empirically by extensive experiments and comparison to
the state-of-the-art algorithms for online learning.
The rest of this paper is organized as follows. Section 2 reviews the related work for online learning.
Section 3 presents the proposed ?double updating? approach to online learning. Section 4 gives our
experimental results. Section 5 sets out the conclusion and addresses some future work.
2 Related Work
Online learning has been extensively studied in machine learning (Rosenblatt, 1958; Crammer &
Singer, 2003; Cesa-Bianchi et al., 2004; Crammer et al., 2006; Fink et al., 2006; Yang et al., 2009).
One of the most well-known online approaches is the Perceptron algorithm (Rosenblatt, 1958; Freund & Schapire, 1999), which updates the learning function by adding a new example with a constant
weight into the current set of support vectors when it is misclassified. Recently a number of online
learning algorithms have been developed based on the criterion of maximum margin (Crammer &
Singer, 2003; Gentile, 2001; Kivinen et al., 2001b; Crammer et al., 2006; Li & Long, 1999). One
example is the Relaxed Online Maximum Margin algorithm (ROMMA) (Li & Long, 1999), which
repeatedly chooses the hyper-planes that correctly classify the existing training examples with the
maximum margin. Another representative example is the Passive-Aggressive (PA) method (Crammer et al., 2006). It updates the classification function when a new example is misclassified or its
classification score does not exceed some predefined margin. Empirical studies showed that the
maximum margin based online learning algorithms are generally more effective than the Perceptron
algorithm. However, despite the difference, most online learning algorithms only update the weight
of the newly added support vector, and keep the weights of the existing support vectors unchanged.
This constraint could significantly limit the effect of online learning.
Besides the studies for regular online learning, several algorithms are proposed for online learning
with fixed budget. In these studies, the total number of support vectors is required to be bounded
either by a theoretical bound or by a manually fixed budget. Example algorithms for fixed budget
online learning include (Weston & Bordes, 2005; Crammer et al., 2003; Cavallanti et al., 2007;
Dekel et al., 2008). The key idea of these algorithms is to dynamically update the weights of the
existing support vectors as a new support vector is added, and the support vector with the least weight
will be discarded when the number of support vectors exceeds the budget. The idea of discarding
support vectors is also used in studies (Kivinen et al., 2001b) and (Cheng et al., 2006). In a very
recently proposed method (Orabona et al., 2008), a new ?projection? approach is proposed for online
learning that ensures the number of support vectors is bounded. Besides, in (Cesa-Bianchi & Gentile,
2006), an online learning algorithm is proposed to handle the drifting concept, in which the weights
of the existing support vectors are reduced whenever a new support vector is added. Although these
online learning algorithms are capable of dynamically adjusting the weights of support vectors, they
are designed to either fit in the budget of the number of support vectors or to handle drifting concepts,
not to improve the classification accuracy and the mistake bound.
The proposed online learning algorithm is closely related to the recent work of online convex programming by incremental dual ascent (Shalev-Shwartz & Singer, 2006). Although the idea of simultaneously updating the weights of multiple support vectors was mentioned in (Shalev-Shwartz
& Singer, 2006), no efficient updating algorithm was explicitly proposed. As will be shown later, the
online algorithm proposed in this work shares the same computational cost as that of conventional
online learning algorithms, despite the need of updating weights of two support vectors.
3 Double Updating to Online Learning
3.1 Motivation
We consider an online learning trial t with an incoming example that is misclassified. Let κ(·, ·) : R^d × R^d → R be the kernel function used in our classifier. Let D = {(x_i, y_i), i = 1, . . . , n} be the collection of n misclassified examples received before trial t, where x_i ∈ R^d and y_i ∈ {−1, +1}. We also refer to these misclassified training examples as "support vectors". We denote by γ = (γ_1, . . . , γ_n) ∈ [0, C]^n the weights assigned to the support vectors in D, where C is a predefined constant. The resulting classifier, denoted by f(x), is expressed as

    f(x) = Σ_{i=1}^{n} γ_i y_i κ(x, x_i)    (1)
Let (x_a, y_a) be the misclassified example received in trial t, i.e., y_a f(x_a) ≤ 0. In the conventional approach to online learning, we simply assign a constant weight, denoted by γ, to (x_a, y_a),
and the resulting classifier becomes
    f′(x) = γ y_a κ(x, x_a) + Σ_{i=1}^{n} γ_i y_i κ(x, x_i) = γ y_a κ(x, x_a) + f(x)    (2)
The shortcoming with the conventional online learning approach is that the introduction of the new
support vector (xa , ya ) may harm the classification of existing support vectors in D, which is revealed by the following proposition.
Proposition 1. Let (x_a, y_a) be an example misclassified by the current classifier f(x) = Σ_{i=1}^{n} γ_i y_i κ(x, x_i), i.e., y_a f(x_a) < 0. Let f′(x) = γ y_a κ(x, x_a) + f(x) be the updated classifier with γ > 0. Then there exists at least one support vector x_i ∈ D such that y_i f(x_i) > y_i f′(x_i).

Proof. It follows from the fact that there exists x_i ∈ D with y_i y_a κ(x_i, x_a) < 0 whenever y_a f(x_a) < 0.
As indicated by the above proposition, when a new misclassified example is added to the classifier, the classification confidence of at least one support vector will be reduced. In the case when y_a f(x_a) ≤ −ε, it is easy to verify that there exists some support vector (x_b, y_b) that satisfies γ_b y_a y_b κ(x_a, x_b) ≤ −ε/n; meanwhile, it can be shown that when the classification confidence of (x_b, y_b) is less than ε/n, i.e., y_b f(x_b) ≤ ε/n, this support vector will be misclassified after the classifier is updated with the example (x_a, y_a). In order to alleviate this problem, we propose to update the weight of the existing support vector whose classification confidence is significantly affected by the new misclassified example. In particular, we consider a support vector (x_b, y_b) ∈ D for weight updating if it satisfies the following two conditions:

- y_b f(x_b) ≤ 0, i.e., support vector (x_b, y_b) is misclassified by the current classifier f(x);
- κ(x_b, x_a) y_a y_b ≤ −ρ, where ρ ≥ 0 is a predefined threshold, i.e., support vector (x_b, y_b) "conflicts" with the new misclassified example (x_a, y_a).

We refer to a support vector satisfying the above conditions as an auxiliary example. It is clear that by adding the misclassified example (x_a, y_a) to the classifier f(x) with weight γ, the classification score of (x_b, y_b) will be reduced by at least γρ, which could lead to the misclassification of the auxiliary example (x_b, y_b). To avoid such a mistake, we propose to update the weights for both (x_a, y_a) and (x_b, y_b) simultaneously. In the next section, we present the details of the double updating algorithm for online learning, together with the analysis of its mistake bound.
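These two conditions translate directly into a scan over the current support set. The sketch below (a hedged illustration in Python; the function name, the `kernel` argument, and the cached scores `sv_score[i] = y_i f(x_i)` are our own choices, not from the paper) returns the most conflicting auxiliary example, mirroring the selection rule used later in Algorithm 1:

```python
def find_auxiliary(sv_x, sv_y, sv_score, x_a, y_a, rho, kernel):
    """Return the index of the auxiliary example for (x_a, y_a), or None.

    A candidate must be misclassified, i.e. sv_score[i] = y_i * f(x_i) <= 0
    (condition I), and the strongest conflict found must satisfy
    y_a * y_i * kernel(x_a, x_i) <= -rho (condition II).
    """
    best, w_min = None, 0.0
    for i in range(len(sv_x)):
        if sv_score[i] <= 0:                      # condition (I)
            w = y_a * sv_y[i] * kernel(x_a, sv_x[i])
            if w < w_min:                         # keep the strongest conflict
                w_min, best = w, i
    return best if w_min <= -rho else None        # condition (II)
```

With ρ = 0 every misclassified, negatively correlated support vector qualifies; a larger ρ makes the double update rarer.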
Our analysis closely follows the previous work on the relationship between online learning and the dual formulation of SVM (Shalev-Shwartz & Singer, 2006), in which online learning is interpreted as an efficient updating rule for maximizing the objective function in the dual form of SVM. We denote by Δ_t the improvement in the objective function of the dual SVM when adding a new misclassified example to the classification function in the t-th trial. If an online learning algorithm A is designed to ensure that every Δ_t is bounded from below by a positive constant Δ, then the number of mistakes made by A when trained over a sequence of trials (x_1, y_1), . . . , (x_T, y_T), denoted by M, is upper bounded by:
    M ≤ (1/Δ) [ min_{f ∈ H_κ} (1/2) ‖f‖²_{H_κ} + C Σ_{i=1}^{T} ℓ(y_i f(x_i)) ]    (3)

where ℓ(y_i f(x_i)) = max(0, 1 − y_i f(x_i)) is the hinge loss function. In our analysis, we will show
that Δ, which is referred to as the bounding constant for the improvement in the objective function, can be significantly improved when updating the weights for both the newly misclassified example and the auxiliary example.

For the remainder of this paper, we denote by (x_b, y_b) an auxiliary example that satisfies the two conditions specified above. We slightly abuse notation by using γ = (γ_1, . . . , γ_{n−1}) ∈ R^{n−1} to denote the weights assigned to all the support vectors in D except (x_b, y_b). Similarly, we denote by y = (y_1, . . . , y_{n−1}) ∈ {−1, +1}^{n−1} the class labels assigned to all the examples in D except (x_b, y_b). We define

    s_a = κ(x_a, x_a),  s_b = κ(x_b, x_b),  s_ab = κ(x_a, x_b),  w_ab = y_a y_b s_ab.    (4)

According to the assumption on the auxiliary example, we have w_ab = s_ab y_a y_b ≤ −ρ. Finally, we denote by γ̂_b the weight of the auxiliary example (x_b, y_b) in the current classifier f(x), and by γ_a and γ_b the updated weights for (x_a, y_a) and (x_b, y_b), respectively. Throughout the analysis, we assume κ(x, x) ≤ 1 for any example x.
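For concreteness, the quantities of Eq. (4) can be computed directly from the kernel; a minimal sketch (the function and argument names are illustrative, not from the paper):

```python
def pairwise_quantities(x_a, y_a, x_b, y_b, kernel):
    """Compute s_a, s_b, s_ab and w_ab of Eq. (4) for a pair of examples."""
    s_a = kernel(x_a, x_a)
    s_b = kernel(x_b, x_b)
    s_ab = kernel(x_a, x_b)
    w_ab = y_a * y_b * s_ab
    return s_a, s_b, s_ab, w_ab
```

With a normalized kernel, s_a = s_b = 1, matching the assumption κ(x, x) ≤ 1 above.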
3.2 Double Updating Online Learning
Recall that an auxiliary example (x_b, y_b) should satisfy two conditions: (I) y_b f(x_b) ≤ 0, and (II) w_ab ≤ −ρ. In addition, the new example (x_a, y_a) received in the current iteration t is misclassified, i.e., y_a f(x_a) ≤ 0. Following the framework of the dual formulation for online learning, the following lemma shows how to compute Δ_t, i.e., the improvement in the objective function of the dual SVM obtained by adjusting the weights for (x_a, y_a) and (x_b, y_b).

Lemma 1. The maximal improvement in the objective function of the dual SVM obtained by adjusting the weights for (x_a, y_a) and (x_b, y_b), denoted by Δ_t, is computed by solving the following optimization problem:

    Δ_t = max_{γ_a, Δγ_b} { h(γ_a, Δγ_b) : 0 ≤ γ_a ≤ C, 0 ≤ Δγ_b ≤ C − γ̂_b }    (5)

where

    h(γ_a, Δγ_b) = γ_a (1 − y_a f(x_a)) + Δγ_b (1 − y_b f(x_b)) − (s_a/2) γ_a² − (s_b/2) Δγ_b² − w_ab γ_a Δγ_b    (6)
Proof. It is straightforward to verify that the dual of min_{f_t ∈ H_κ} (1/2) ‖f_t‖²_{H_κ} + C Σ_{i=1}^{t} ℓ(y_i f_t(x_i)), denoted by D_t(γ_1, . . . , γ_t), is computed as follows:

    D_t(γ_1, . . . , γ_t) = Σ_{i=1}^{t} γ_i − Σ_{i=1}^{t} γ_i y_i f_t(x_i) + (1/2) ‖f_t‖²_{H_κ}    (7)

where 0 ≤ γ_i ≤ C, i = 1, . . . , t, and f_t(·) = Σ_{i=1}^{t} γ_i y_i κ(·, x_i) is the current classifier. Thus,
    h(γ_a, Δγ_b) = D_t(γ_1, . . . , γ̂_b + Δγ_b, . . . , γ_{t−1}, γ_a) − D_{t−1}(γ_1, . . . , γ̂_b, . . . , γ_{t−1})
                 = [ Σ_{i=1}^{t−1} γ_i + Δγ_b + γ_a − ( Σ_{i=1}^{t−1} γ_i y_i f_t(x_i) + Δγ_b y_b f_t(x_b) + γ_a y_a f_t(x_a) ) + (1/2) ‖f_t‖²_{H_κ} ]
                 − [ Σ_{i=1}^{t−1} γ_i − Σ_{i=1}^{t−1} γ_i y_i f_{t−1}(x_i) + (1/2) ‖f_{t−1}‖²_{H_κ} ]
Using the relation f_t(x) = f_{t−1}(x) + Δγ_b y_b κ(x, x_b) + γ_a y_a κ(x, x_a), we have

    h(γ_a, Δγ_b) = γ_a (1 − y_a f_{t−1}(x_a)) + Δγ_b (1 − y_b f_{t−1}(x_b)) − (s_a/2) γ_a² − (s_b/2) Δγ_b² − w_ab γ_a Δγ_b

Finally, we need to show Δγ_b ≥ 0. Note that this constraint does not come directly from the box constraint that the weight for example (x_b, y_b) lies in [0, C], i.e., γ̂_b + Δγ_b ∈ [0, C]. To this end, we consider the part of h(γ_a, Δγ_b) that depends on Δγ_b, i.e.,

    g(Δγ_b) = Δγ_b (1 − y_b f_{t−1}(x_b) − w_ab γ_a) − (s_b/2) Δγ_b²

Since w_ab ≤ −ρ and y_b f_{t−1}(x_b) ≤ 0, it is clear that Δγ_b ≥ 0 when maximizing g(Δγ_b), which yields the constraint Δγ_b ≥ 0.
The following theorem gives the bound for Δ when C is sufficiently large.

Theorem 1. Assume C > γ̂_b + 1/(1 − ρ) for the selected auxiliary example (x_b, y_b). We have the following bound for Δ:

    Δ ≥ 1/(1 − ρ)    (8)
Proof. Using the facts s_a, s_b ≤ 1, γ_a, Δγ_b ≥ 0, y_a f(x_a) ≤ 0, y_b f(x_b) ≤ 0, and w_ab ≤ −ρ, we have

    h(γ_a, Δγ_b) ≥ γ_a + Δγ_b − (1/2) γ_a² − (1/2) Δγ_b² + ρ γ_a Δγ_b

Thus, Δ is bounded as

    Δ ≥ max_{γ_a ∈ [0, C], Δγ_b ∈ [0, C − γ̂_b]} [ γ_a + Δγ_b − (1/2)(γ_a² + Δγ_b²) + ρ γ_a Δγ_b ]

Under the condition that C > γ̂_b + 1/(1 − ρ), it is easy to verify that the optimal solution of the above problem is γ_a = Δγ_b = 1/(1 − ρ), which leads to the result in the theorem.
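The claim of Theorem 1 is easy to verify numerically (our own sanity check, not part of the paper): taking the proof's lower bound on h with s_a = s_b = 1, both margins zero, w_ab = −ρ, and γ̂_b = 0, a grid search over [0, C]² lands on γ_a = Δγ_b = 1/(1 − ρ) with value 1/(1 − ρ):

```python
def h_lower(ga, db, rho):
    # Lower bound on h used in the proof of Theorem 1.
    return ga + db - 0.5 * ga ** 2 - 0.5 * db ** 2 + rho * ga * db

rho, C = 0.2, 5.0
g = 1.0 / (1.0 - rho)                     # predicted maximizer (1.25 for rho = 0.2)
grid = [i * C / 500 for i in range(501)]  # step 0.01 over the box [0, C]^2
best = max(h_lower(a, b, rho) for a in grid for b in grid)
```

Here `best` agrees with g = 1/(1 − ρ) up to grid resolution, i.e., the bound Δ ≥ 1/(1 − ρ) is attained.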
We now consider the general case, where we only assume C ≥ 1. The following theorem gives the bound for Δ in the general case.

Theorem 2. Assume C ≥ 1. We have the following bound for Δ when updating the weights for the new example (x_a, y_a) and the auxiliary example (x_b, y_b):

    Δ ≥ 1/2 + (1/2) min{ (1 + ρ)², (C − γ̂_b)² }

Proof. By setting γ_a = 1, we have h(γ_a, Δγ_b) computed as

    h(γ_a = 1, Δγ_b) ≥ 1/2 + (1 + ρ) Δγ_b − (1/2) Δγ_b²

Hence, Δ is lower bounded by

    Δ ≥ 1/2 + max_{Δγ_b ∈ [0, C − γ̂_b]} [ (1 + ρ) Δγ_b − (1/2) Δγ_b² ] ≥ 1/2 + (1/2) min{ (1 + ρ)², (C − γ̂_b)² }
Since we only have Δ ≥ 1/2 if we only update the weight for the new misclassified example (x_a, y_a), the result in Theorem 2 indicates an increase in Δ when updating the weights for both (x_a, y_a) and the auxiliary example. Furthermore, when C is sufficiently large, as indicated by Theorem 1, the improvement in Δ can be very significant.
The final remaining question is how to identify the auxiliary example (x_b, y_b) efficiently, which requires efficiently updating the classification scores y_i f(x_i) of all the support vectors. To this end, we introduce a variable for each support vector, denoted by f_t^i, to keep track of its classification score. When a new support vector (x_a, y_a) with weight γ_a is added to the classifier, we update the classification score by f_t^i ← f_{t−1}^i + y_i γ_a y_a κ(x_i, x_a); and when the weight of an auxiliary example (x_b, y_b) is updated from γ̂_b to γ_b, we update the classification score by f_t^i ← f_{t−1}^i + y_i (γ_b − γ̂_b) y_b κ(x_i, x_b). This updating procedure ensures that the computational cost of double updating online learning is O(n), where n is the number of support vectors, similar to that of a standard kernel online learning algorithm. Figure 1 shows the details of the DUOL algorithm.
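The two O(n) cache updates described above can be sketched as follows (Python; the `cache` list holds f_t^i = y_i f(x_i) for every support vector, and all names are our own illustrative choices):

```python
def add_new_sv(cache, sv_x, sv_y, x_a, y_a, gamma_a, kernel):
    """f_t^i <- f_{t-1}^i + y_i * gamma_a * y_a * k(x_i, x_a) for every cached SV."""
    for i in range(len(cache)):
        cache[i] += sv_y[i] * gamma_a * y_a * kernel(sv_x[i], x_a)

def adjust_aux_weight(cache, sv_x, sv_y, b, old_w, new_w, kernel):
    """f_t^i <- f_{t-1}^i + y_i * (new_w - old_w) * y_b * k(x_i, x_b)."""
    for i in range(len(cache)):
        cache[i] += sv_y[i] * (new_w - old_w) * sv_y[b] * kernel(sv_x[i], sv_x[b])
```

Appending the new example's own cache entry (step 17 of Algorithm 1) is done separately, so each round costs O(n) kernel evaluations in total.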
Finally, we show a bound on the number of mistakes by assuming C is sufficiently large.

Theorem 3. Let (x_1, y_1), . . . , (x_T, y_T) be a sequence of examples, where x_t ∈ R^n, y_t ∈ {−1, +1} and κ(x_t, x_t) ≤ 1 for all t, and assume C is sufficiently large. Then for any function f in H_κ, the number of prediction mistakes M made by DUOL on this sequence of examples is bounded by

    M ≤ 2 [ min_{f ∈ H_κ} (1/2) ‖f‖²_{H_κ} + C Σ_{i=1}^{T} ℓ(y_i f(x_i)) ] − ((1 + ρ)/(1 − ρ)) M_d(ρ)    (9)

where M_d(ρ) is the number of mistakes made in rounds where an auxiliary example exists; it depends on the threshold ρ and the dataset (M_d(ρ) is in fact a decreasing function of ρ).
Proof. We denote by M_s the number of mistakes for which we made a single update without finding an appropriate auxiliary example. Using Theorem 1, we have the following inequality:

    (1/2) M_s + (1/(1 − ρ)) M_d(ρ) ≤ min_{f ∈ H_κ} (1/2) ‖f‖²_{H_κ} + C Σ_{i=1}^{T} ℓ(y_i f(x_i))    (10)

Plugging M = M_s + M_d into the inequality above, we get

    M ≤ 2 [ min_{f ∈ H_κ} (1/2) ‖f‖²_{H_κ} + C Σ_{i=1}^{T} ℓ(y_i f(x_i)) ] − ((1 + ρ)/(1 − ρ)) M_d(ρ)    (11)
It is worth pointing out that although, according to Theorem 3, it seems that the larger the value of ρ, the smaller the mistake bound, this is not necessarily true, since M_d(ρ) is in general a monotonically decreasing function of ρ. As a result, it is unclear whether M_d(ρ) (1 + ρ)/(1 − ρ) will increase when ρ is increased.
Algorithm 1 The DUOL Algorithm (DUOL)
 1: Initialize S_0 = ∅, f_0 = 0;
 2: for t = 1, 2, . . . , T do
 3:   Receive new instance x_t;
 4:   Predict ŷ_t = sign(f_{t−1}(x_t));
 5:   Receive label y_t;
 6:   l_t = max{0, 1 − y_t f_{t−1}(x_t)};
 7:   if l_t > 0 then
 8:     w_min = 0;
 9:     for ∀i ∈ S_{t−1} do
10:       if f^i_{t−1} ≤ 0 then
11:         if y_i y_t k(x_i, x_t) < w_min then
12:           w_min = y_i y_t k(x_i, x_t);
13:           (x_b, y_b) = (x_i, y_i);  /* auxiliary example */
14:         end if
15:       end if
16:     end for
17:     f^t_{t−1} = y_t f_{t−1}(x_t);
18:     S_t = S_{t−1} ∪ {t};
19:     if w_min ≤ −ρ then
20:       γ_t = min(C, 1/(1 − ρ));
21:       γ_b = min(C, γ̂_b + 1/(1 − ρ));
22:       for ∀i ∈ S_t do
23:         f^i_t ← f^i_{t−1} + y_i γ_t y_t k(x_i, x_t) + y_i (γ_b − γ̂_b) y_b k(x_i, x_b);
24:       end for
25:       f_t = f_{t−1} + γ_t y_t k(x_t, ·) + (γ_b − γ̂_b) y_b k(x_b, ·);
26:     else  /* no auxiliary example found */
27:       γ_t = min(C, 1);
28:       for ∀i ∈ S_t do
29:         f^i_t ← f^i_{t−1} + y_i γ_t y_t k(x_i, x_t);
30:       end for
31:       f_t = f_{t−1} + γ_t y_t k(x_t, ·);
32:     end if
33:   else
34:     f_t = f_{t−1}; S_t = S_{t−1};
35:     for ∀i ∈ S_t do
36:       f^i_t ← f^i_{t−1};
37:     end for
38:   end if
39: end for
Figure 1: The Algorithm of Double Updating Online Learning (DUOL).
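To make Figure 1 concrete, the following self-contained Python sketch renders Algorithm 1 with the clipped updates γ_t = min(C, 1/(1 − ρ)) and γ_b = min(C, γ̂_b + 1/(1 − ρ)); the kernel and any data it is run on are illustrative assumptions, not part of the paper:

```python
# A compact, runnable rendering of Algorithm 1 (our own illustration).
class DUOL:
    def __init__(self, kernel, C=5.0, rho=0.2):
        self.kernel, self.C, self.rho = kernel, C, rho
        self.sv_x, self.sv_y, self.sv_w = [], [], []   # support set S_t
        self.score = []                                # cached f_t^i = y_i * f(x_i)

    def f(self, x):
        return sum(w * y * self.kernel(x, xi)
                   for xi, y, w in zip(self.sv_x, self.sv_y, self.sv_w))

    def fit_one(self, x_t, y_t):
        f_x = self.f(x_t)
        y_hat = 1 if f_x >= 0 else -1                  # step 4: predict
        if max(0.0, 1.0 - y_t * f_x) <= 0:             # step 6: hinge loss
            return y_hat                               # steps 33-37: no update
        # steps 8-16: scan for the most conflicting auxiliary example
        w_min, b = 0.0, -1
        for i in range(len(self.sv_x)):
            if self.score[i] <= 0:
                w = self.sv_y[i] * y_t * self.kernel(self.sv_x[i], x_t)
                if w < w_min:
                    w_min, b = w, i
        # steps 17-18: admit (x_t, y_t) as a support vector (weight set below)
        self.score.append(y_t * f_x)
        self.sv_x.append(x_t); self.sv_y.append(y_t); self.sv_w.append(0.0)
        if w_min <= -self.rho:                         # steps 19-25: double update
            g_t = min(self.C, 1.0 / (1.0 - self.rho))
            g_b = min(self.C, self.sv_w[b] + 1.0 / (1.0 - self.rho))
            d_b = g_b - self.sv_w[b]
            for i in range(len(self.sv_x)):
                self.score[i] += (self.sv_y[i] * g_t * y_t
                                  * self.kernel(self.sv_x[i], x_t)
                                  + self.sv_y[i] * d_b * self.sv_y[b]
                                  * self.kernel(self.sv_x[i], self.sv_x[b]))
            self.sv_w[-1], self.sv_w[b] = g_t, g_b
        else:                                          # steps 27-31: single update
            g_t = min(self.C, 1.0)
            for i in range(len(self.sv_x)):
                self.score[i] += (self.sv_y[i] * g_t * y_t
                                  * self.kernel(self.sv_x[i], x_t))
            self.sv_w[-1] = g_t
        return y_hat
```

Running it with a linear kernel on a small synthetic stream exercises both the single- and double-update branches; a real deployment would plug in the Gaussian kernel used in Section 4.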
4 Experimental Results
4.1 Experimental Testbed and Setup
We now evaluate the empirical performance of the proposed double updating online learning
(DUOL) algorithm. We compare DUOL with a number of state-of-the-art techniques, including Perceptron (Rosenblatt, 1958; Freund & Schapire, 1999), the ROMMA algorithm and its aggressive version agg-ROMMA (Li & Long, 1999), the ALMA_p(α) algorithm (Gentile, 2001), and the Passive-Aggressive (PA) algorithms (Crammer et al., 2006). The original Perceptron algorithm was proposed for learning linear models; in our experiments, we follow (Kivinen et al., 2001b) in adapting it to the kernel case. Two versions of the PA algorithm (PA-I and PA-II) were implemented as described in (Crammer et al., 2006). Finally, as an ideal yardstick, we also implemented a full online SVM algorithm (Online-SVM) (Shalev-Shwartz & Singer, 2006), which updates all the support vectors in each trial and is thus computationally extremely intensive, as will be revealed in our study.
To extensively examine the performance, we test all the algorithms on a number of benchmark datasets from web machine learning repositories. All of the datasets can be downloaded from the LIBSVM website^1, the UCI machine learning repository^2, and the MIT CBCL face datasets^3. Due to space limitations, we randomly chose six of them for our discussion: "german", "splice", "spambase", "MITFace", "a7a", and "w7a".
To make a fair comparison, all algorithms adopt the same experimental setup. In particular, for all the compared algorithms, we set the penalty parameter C = 5 and employ the same Gaussian kernel with σ = 8. For the ALMA_p(α) algorithm, the parameters p and α are set to 2 and 0.9, respectively, based on our experience. For the proposed DUOL algorithm, we fix ρ to 0.2 in all cases.
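The text gives only the kernel parameter (σ = 8), not the exact parameterization; one common form, offered here as a hedged sketch rather than the paper's definition, is:

```python
import math

def gaussian_kernel(u, v, sigma=8.0):
    # k(u, v) = exp(-||u - v||^2 / (2 * sigma^2)); the exact form used in the
    # paper is not spelled out, only sigma = 8.
    sq_dist = sum((a - b) ** 2 for a, b in zip(u, v))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))
```

Note that k(x, x) = 1 for any x, consistent with the assumption κ(x, x) ≤ 1 made in Section 3.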
All the experiments were conducted over 20 random permutations of each dataset, and all results are reported as averages over these 20 runs. We evaluate online learning performance by measuring the mistake rate, i.e., the ratio of the number of mistakes made by the online learning algorithm to the total number of examples received for prediction. In addition, to examine the sparsity of the resulting classifiers, we also report the number of support vectors produced by each online learning algorithm. Finally, we evaluate the computational efficiency of all the algorithms by their running time (in seconds). All experiments were run in Matlab on a machine with a 2.3GHz CPU.
4.2 Performance Evaluation
Tables 1 to 6 summarize the performance of all the compared algorithms over the six datasets^4, respectively. Figures 2 to 6 show the mistake rates of all the compared online learning algorithms over trials. We observe that Online-SVM yields considerably better performance than the other online learning algorithms on the datasets "german", "splice", "spambase", and "MITFace", however at the price of extremely high computational cost. In most cases, the running time of Online-SVM is two orders, sometimes three orders, of magnitude higher than that of the other online learning algorithms, making it
1 http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
2 http://www.ics.uci.edu/~mlearn/MLRepository.html
3 http://cbcl.mit.edu/software-datasets
4 Due to huge computational cost, we are unable to obtain the results of Online-SVM on two large datasets.
unsuitable for online learning. For the remaining part of this section, we restrict our discussion to
the other six baseline online learning algorithms.
First, among the six baseline algorithms in comparison, we observe that the agg-ROMMA and two
PA algorithms (PA-I and PA-II) perform considerably better than the other three algorithms (i.e.,
Perceptron, ROMMA, and ALMA) in most cases. We also notice that the agg-ROMMA and the
two PA algorithms consume considerably larger numbers of support vectors than the other three
algorithms. We believe this is because the agg-ROMMA and the two PA algorithms adopt more aggressive strategies than the other three algorithms, resulting in more updates and better classification performance. For the convenience of discussion, we refer to agg-ROMMA and the two PA algorithms as aggressive algorithms, and the other three algorithms as non-aggressive ones.
Second, comparing with all six competing algorithms, we observe that DUOL achieves significantly
smaller mistake rates than the other single-updating algorithms in all cases. This shows that the
proposed double updating approach is effective in improving the online prediction performance.
By examining the sparsity of resulting classifiers, we observed that DUOL results in sparser classifiers than the three aggressive online learning algorithms, and denser classifiers than the three
non-aggressive algorithms.
Third, according to the running-time results, we observe that DUOL is overall efficient compared to the state-of-the-art online learning algorithms. Among all the compared algorithms, Perceptron, for its simplicity, is clearly the most efficient, and the agg-ROMMA algorithm is significantly slower than the others (except for Online-SVM). Although DUOL requires double updating, its efficiency is comparable to that of the PA and ROMMA algorithms.
Table 1: Evaluation on german (n=1000, d=24).

Algorithm     Mistake (%)        Support Vectors (#)    Time (s)
Perceptron    35.305 ± 1.510     353.05 ± 15.10         0.018
ROMMA         35.105 ± 1.189     351.05 ± 11.89         0.154
agg-ROMMA     33.350 ± 1.287     643.25 ± 12.31         1.068
ALMA_2(0.9)   34.025 ± 0.910     402.00 ± 7.33          0.225
PA-I          33.670 ± 1.278     732.60 ± 9.74          0.029
PA-II         33.175 ± 1.229     757.00 ± 10.02         0.030
Online-SVM    28.860 ± 0.651     646.10 ± 5.00          16.097
DUOL          29.990 ± 1.033     682.50 ± 12.87         0.089

Table 2: Evaluation on splice (n=1000, d=6).

Algorithm     Mistake (%)        Support Vectors (#)    Time (s)
Perceptron    27.120 ± 0.975     271.20 ± 9.75          0.016
ROMMA         25.560 ± 0.814     255.60 ± 8.14          0.055
agg-ROMMA     22.980 ± 0.780     602.95 ± 7.43          0.803
ALMA_2(0.9)   26.040 ± 0.965     314.95 ± 9.41          0.075
PA-I          23.815 ± 1.042     665.60 ± 5.60          0.028
PA-II         23.515 ± 1.005     689.00 ± 7.85          0.028
Online-SVM    17.455 ± 0.518     614.90 ± 2.92          12.243
DUOL          20.560 ± 0.566     577.85 ± 8.93          0.076

Table 3: Evaluation on spambase (n=4601, d=57).

Algorithm     Mistake (%)        Support Vectors (#)    Time (s)
Perceptron    24.987 ± 0.525     1149.65 ± 24.17        0.204
ROMMA         23.953 ± 0.510     1102.10 ± 23.44        10.128
agg-ROMMA     21.242 ± 0.384     2550.60 ± 27.32        95.028
ALMA_2(0.9)   23.579 ± 0.411     1550.15 ± 15.65        25.294
PA-I          22.112 ± 0.374     2861.50 ± 24.36        0.490
PA-II         21.907 ± 0.340     3029.10 ± 24.69        0.505
Online-SVM    17.138 ± 0.321     2396.95 ± 10.57        2521.665
DUOL          19.438 ± 0.432     2528.55 ± 20.57        0.985

Table 4: Evaluation on MITFace (n=6977, d=361).

Algorithm     Mistake (%)        Support Vectors (#)    Time (s)
Perceptron    4.665 ± 0.192      325.50 ± 13.37         0.164
ROMMA         4.114 ± 0.155      287.05 ± 10.84         0.362
agg-ROMMA     3.137 ± 0.093      1121.15 ± 24.18        11.074
ALMA_2(0.9)   4.467 ± 0.169      400.10 ± 10.53         0.675
PA-I          3.190 ± 0.128      1155.45 ± 14.53        0.356
PA-II         3.108 ± 0.112      1222.05 ± 13.73        0.370
Online-SVM    1.142 ± 0.073      520.05 ± 4.55          7238.105
DUOL          2.409 ± 0.161      768.65 ± 16.18         0.384

Table 5: Evaluation on a7a (n=16100, d=123).

Algorithm     Mistake (%)        Support Vectors (#)    Time (s)
Perceptron    22.022 ± 0.202     3545.50 ± 32.49        2.043
ROMMA         21.297 ± 0.272     3428.85 ± 43.77        306.793
agg-ROMMA     20.832 ± 0.234     4541.30 ± 109.39       661.632
ALMA_2(0.9)   20.096 ± 0.214     3571.05 ± 40.38        338.609
PA-I          21.826 ± 0.239     6760.70 ± 47.89        4.296
PA-II         21.478 ± 0.237     7068.40 ± 51.32        4.536
DUOL          19.389 ± 0.227     7089.85 ± 38.93        10.122

Table 6: Results on w7a (n=24292, d=300).

Algorithm     Mistake (%)        Support Vectors (#)    Time (s)
Perceptron    4.027 ± 0.095      994.40 ± 23.57         1.233
ROMMA         4.158 ± 0.087      1026.75 ± 21.51        13.860
agg-ROMMA     3.500 ± 0.061      2317.70 ± 58.92        137.975
ALMA_2(0.9)   3.518 ± 0.071      1031.05 ± 15.33        13.245
PA-I          3.701 ± 0.057      2839.60 ± 41.57        3.732
PA-II         3.571 ± 0.053      3391.50 ± 51.94        4.719
DUOL          2.771 ± 0.041      1699.80 ± 22.78        2.677
Conclusions
This paper presented a novel "double updating" approach to online learning, named DUOL, which not only updates the weight of the newly added support vector but also adjusts the weight of one existing support vector that seriously conflicts with the new support vector. We showed that the mistake bound for an online classification task can be significantly reduced by the proposed DUOL algorithms. We conducted an extensive set of experiments comparing DUOL with a number of competing algorithms, and the promising empirical results validate the effectiveness of our technique. Future work will address issues of multi-class double updating online learning.
Acknowledgements
This work was supported in part by MOE tier-1 Grant (RG67/07), NRF IDM Grant (NRF2008IDM-IDM-004018), National Science Foundation (IIS-0643494), and US Navy Research Office (N00014-09-1-0663).
[Figure 2 plots (not recoverable from this extraction): (a) online average rate of mistakes, (b) online average number of support vectors, and (c) average time cost (log10 t), each versus the number of samples, for Perceptron, ROMMA, agg-ROMMA, ALMA2(0.9), PA-I, PA-II, Online-SVM, and DUOL.]
Figure 2: Evaluation on the german dataset. The data size is 1000 and the dimensionality is 24.
[Figure 3 plots (not recoverable from this extraction): (a) online average rate of mistakes, (b) online average number of support vectors, and (c) average time cost (log10 t), each versus the number of samples, for Perceptron, ROMMA, agg-ROMMA, ALMA2(0.9), PA-I, PA-II, Online-SVM, and DUOL.]
Figure 3: Evaluation on the splice dataset. The data size is 1000 and the dimensionality is 60.
[Figure 4 plots (not recoverable from this extraction): (a) online average rate of mistakes, (b) online average number of support vectors, and (c) average time cost (log10 t), each versus the number of samples, for Perceptron, ROMMA, agg-ROMMA, ALMA2(0.9), PA-I, PA-II, Online-SVM, and DUOL.]
Figure 4: Evaluation on the spambase dataset. The data size is 4601 and the dimensionality is 57.
[Figure 5 plots (not recoverable from this extraction): (a) online average rate of mistakes, (b) online average number of support vectors, and (c) average time cost (log10 t), each versus the number of samples, for Perceptron, ROMMA, agg-ROMMA, ALMA2(0.9), PA-I, PA-II, and DUOL.]
Figure 5: Evaluation on the a7a dataset. The data size is 16100 and the dimensionality is 123.
[Figure 6 plots (not recoverable from this extraction): (a) online average rate of mistakes, (b) online average number of support vectors, and (c) average time cost (log10 t), each versus the number of samples, for Perceptron, ROMMA, agg-ROMMA, ALMA2(0.9), PA-I, PA-II, and DUOL.]
Figure 6: Evaluation on the w7a dataset. The data size is 24292 and the dimensionality is 300.
| 3787 |@word trial:10 repository:2 version:2 seems:1 dekel:5 eng:3 score:6 seriously:1 spambase:4 existing:13 current:8 comparing:2 assigning:3 kft:4 designed:3 update:16 selected:2 website:1 plane:1 short:2 cse:1 introduce:1 lansing:1 examine:2 multi:1 brain:1 decreasing:2 encouraging:1 cpu:1 becomes:1 fti:5 bounded:8 notation:1 interpreted:1 developed:1 finding:1 fink:2 classifier:16 grant:2 yn:1 before:2 positive:1 mistake:34 limit:1 despite:2 mach:1 meet:1 abuse:1 studied:2 dynamically:4 range:1 nanyang:2 implement:1 caelli:1 projectron:1 procedure:1 empirical:3 significantly:8 adapting:1 projection:1 confidence:3 regular:1 get:1 convenience:1 storage:1 a7a:3 www:2 conventional:3 yt:12 maximizing:2 go:2 straightforward:1 convex:2 simplicity:1 rule:1 adjusts:1 handle:2 updated:5 pt:2 programming:2 hypothesis:1 pa:54 satisfying:1 updating:28 observed:1 steven:1 ft:33 csie:1 wang:1 ensures:2 mentioned:1 trained:1 solving:1 efficiency:2 effective:3 shortcoming:1 hyper:1 shalev:10 navy:1 whose:1 larger:2 denser:1 consume:1 ability:1 final:1 online:107 sequence:3 propose:4 maximal:2 almap:2 uci:2 kh:4 validate:1 double:15 incremental:2 school:2 received:7 sa:4 auxiliary:16 implemented:1 come:1 closely:2 libsvmtools:1 hoi:1 assign:1 fix:1 generalization:1 ntu:3 alleviate:1 proposition:3 tighter:1 adjusted:1 rong:1 sufficiently:4 ic:1 cbcl:2 k2h:4 datasets4:1 predict:1 pointing:1 achieves:1 adopt:2 a2:3 purpose:1 label:2 mit:2 clearly:3 gaussian:1 aim:1 pn:1 avoid:1 sab:3 office:1 improvement:5 indicates:1 tech:2 baseline:2 sb:5 entire:2 relation:1 misclassified:23 overall:1 classification:20 dual:10 html:1 denoted:6 among:2 issue:1 colt:2 art:4 initialize:1 manually:1 nrf:1 icml:3 future:2 others:1 employ:2 randomly:1 simultaneously:2 national:1 kandola:1 organization:1 huge:1 evaluation:11 adjust:1 xb:41 predefined:3 capable:2 experience:1 theoretical:1 psychological:1 instance:2 classify:1 increased:1 measuring:1 cost:14 examining:1 conducted:2 reported:1 
considerably:4 chooses:1 st:8 siam:1 probabilistic:1 pool:2 cesa:6 choose:1 zhao:1 ullman:1 li:4 aggressive:8 b2:6 satisfy:1 explicitly:1 depends:1 later:1 try:1 accuracy:3 who:1 efficiently:2 yield:1 identify:1 produced:1 comp:3 wab:7 mlearn:1 whenever:1 sharing:1 pp:9 proof:5 mi:1 newly:3 dataset:8 adjusting:3 recall:1 dimensionality:5 organized:1 actually:1 higher:1 dt:4 forgetron:2 follow:1 improved:2 yb:40 formulation:3 box:1 furthermore:1 xa:36 implicit:1 smola:2 web:1 alma:15 indicated:2 believe:1 effect:1 ye:1 concept:3 verify:3 true:1 hence:1 assigned:6 during:2 mlrepository:1 criterion:1 m:3 passive:2 novel:1 recently:2 empirically:1 refer:4 significant:1 rd:3 similarly:1 f0:1 recent:2 showed:1 inf:1 termed:1 certain:1 n00014:1 inequality:1 yi:28 gentile:8 relaxed:2 monotonically:1 ii:25 multiple:1 full:1 exceeds:1 long:4 dept:1 plugging:1 prediction:3 iteration:1 kernel:10 sometimes:1 receive:2 addition:2 else:2 rest:1 romma:52 ascent:2 effectiveness:1 yang:2 ideal:1 exceed:1 revealed:2 easy:2 affect:2 fit:2 restrict:1 competing:2 reduce:1 idea:3 alma2:7 multiclass:2 intensive:1 six:5 penalty:1 repeatedly:1 matlab:1 generally:2 clear:2 extensively:3 reduced:4 schapire:4 http:3 singapore:2 notice:1 sign:1 track:2 correctly:1 rosenblatt:5 affected:1 key:2 threshold:2 libsvm:1 verified:1 run:2 named:1 throughout:1 peilin:1 comparable:1 bound:14 cheng:2 constraint:5 vishwanathan:1 software:1 min:10 extremely:2 according:3 remain:2 slightly:1 smaller:2 tw:1 making:1 tier:1 computationally:1 equation:1 german:4 cjlin:1 singer:14 end:12 observe:4 worthwhile:1 appropriate:1 slower:1 drifting:3 original:1 remaining:3 include:1 ensure:1 running:3 hinge:1 log10:10 unsuitable:1 unchanged:3 objective:5 added:8 question:2 strategy:1 md:8 unclear:1 unable:1 sci:1 idm:2 assuming:1 besides:2 passiveaggressive:1 relationship:1 insufficient:2 ratio:1 ellipsoid:1 setup:2 perform:1 bianchi:6 upper:1 datasets:6 discarded:1 benchmark:1 jin:2 y1:3 rn:2 interclass:1 community:1 
required:1 specified:1 extensive:2 moe:1 conflict:2 testbed:1 nip:6 trans:1 address:2 below:1 sparsity:2 summarize:1 cavallanti:2 max:5 including:2 misclassification:1 meantime:1 kivinen:6 improve:2 review:2 sg:2 acknowledgement:1 kf:4 freund:4 loss:1 expect:2 permutation:1 limitation:1 foundation:1 downloaded:1 s0:1 share:1 bordes:2 supported:1 offline:1 perceptron:34 face:1 wmin:4 ghz:1 collection:1 made:4 bb:6 approximate:1 keep:2 incoming:1 harm:1 xi:30 shwartz:10 msu:1 ultraconservative:1 table:7 promising:1 learn:1 rongjin:1 improving:2 schuurmans:1 caputo:1 williamson:2 aistats:1 motivation:1 bounding:1 fair:1 x1:2 representative:1 referred:1 comput:1 jmlr:3 third:1 splice:4 theorem:10 discarding:1 xt:15 svm:24 exists:2 adding:3 keshet:2 budget:11 margin:8 sparser:1 michigan:1 lt:2 simply:1 expressed:1 conconi:1 tracking:2 agg:27 satisfies:3 weston:2 orabona:3 price:1 except:3 reducing:1 averaging:1 hyperplane:2 lemma:2 total:2 experimental:5 ya:35 east:1 support:66 crammer:13 yardstick:1 evaluate:4 |
Parallel Inference for Latent Dirichlet Allocation on
Graphics Processing Units
Ningyi Xu
Microsoft Research Asia
No. 49 Zhichun Road
Beijing, P.R. China
Feng Yan
Department of CS
Purdue University
West Lafayette, IN 47907
Yuan (Alan) Qi
Departments of CS and Statistics
Purdue University
West Lafayette, IN 47907
Abstract
The recent emergence of Graphics Processing Units (GPUs) as general-purpose
parallel computing devices provides us with new opportunities to develop scalable learning methods for massive data. In this work, we consider the problem of parallelizing two inference methods on GPUs for latent Dirichlet Allocation (LDA) models, collapsed Gibbs sampling (CGS) and collapsed variational
Bayesian (CVB). To address limited memory constraints on GPUs, we propose
a novel data partitioning scheme that effectively reduces the memory cost. This
partitioning scheme also balances the computational cost on each multiprocessor
and enables us to easily avoid memory access conflicts. We use data streaming to
handle extremely large datasets. Extensive experiments showed that our parallel
inference methods consistently produced LDA models with the same predictive
power as sequential training methods did but with 26x speedup for CGS and 196x
speedup for CVB on a GPU with 30 multiprocessors. The proposed partitioning
scheme and data streaming make our approach scalable with more multiprocessors. Furthermore, they can be used as general techniques to parallelize other
machine learning models.
1 Introduction
Learning from massive datasets, such as text, images, and high throughput biological data, has
applications in various scientific and engineering disciplines. The scale of these datasets, however,
often demands high, sometimes prohibitive, computational cost. To address this issue, an obvious
approach is to parallelize learning methods with multiple processors. While large CPU clusters
are commonly used for parallel computing, Graphics Processing Units (GPUs) provide us with a
powerful alternative platform for developing parallel machine learning methods.
A GPU has massively built-in parallel thread processors and high-speed memory, therefore providing potentially one or two magnitudes of peak flops and memory throughput greater than its CPU
counterpart. Although GPUs are not good at complex logical computation, they can significantly reduce the running time of numerical, computation-centric applications. Also, GPUs are more cost effective
and energy efficient. The current high-end GPU has over 50x more peak flops than CPUs at the
same price. Given a similar power consumption, GPUs perform more flops per watt than CPUs. For
large-scale industrial applications, such as web search engines, efficient learning methods on GPUs
can make a big difference in energy consumption and equipment cost. However, parallel computing
on GPUs can be a challenging task because of several limitations, such as relatively small memory
size.
In this paper, we demonstrate how to overcome these limitations to parallel computing on GPUs with
an exemplary data-intensive application, training Latent Dirichlet Allocation (LDA) models. LDA
models have been successfully applied to text analysis. For large corpora, however, it takes days,
even months, to train them. Our parallel approaches take the advantage of parallel computing power
of GPUs and explore the algorithmic structures of LDA learning methods, therefore significantly
reducing the computational cost. Furthermore, our parallel inference approaches, based on a new
data partition scheme and data streaming, can be applied to not only GPUs but also any shared
memory machine. Specifically, the main contributions of this paper include:
• We introduce parallel collapsed Gibbs sampling (CGS) and parallel collapsed variational Bayesian (CVB) for LDA models on GPUs. We also analyze the convergence property of the parallel variational inference and show that, with mild convexity assumptions, the parallel inference monotonically increases the variational lower bound until convergence.
• We propose a fast data partition scheme that efficiently balances the workloads across processors, fully utilizing the massive parallel mechanisms of GPUs.
• Based on this partitioning scheme, our method is also independent of specific memory consistency models: with partitioned data and parameters in exclusive memory sections, we avoid access conflicts and do not sacrifice speedup to the extra cost of a memory consistency mechanism.
• We propose a data streaming scheme, which allows our methods to handle very large corpora that cannot be stored in a single GPU.
• Extensive experiments show that both parallel inference algorithms on GPUs achieve the same predictive power as their sequential inference counterparts on CPUs, but significantly faster. The speedup is near linear in the number of multiprocessors in the GPU card.
2 Latent Dirichlet Allocation
We briefly review the LDA model and two inference algorithms for LDA.¹ LDA models each of the D documents as a mixture over K latent topics, and each topic k is a multinomial distribution over a word vocabulary having W distinct words, denoted by φ_k = {φ_kw}, where φ_k is drawn from a symmetric Dirichlet prior with parameter β. In order to generate a document j, the document's mixture over topics, θ_j = {θ_jk}, is first drawn from a symmetric Dirichlet prior with parameter α. For the ith token in the document, a topic assignment z_ij is drawn with topic k chosen with probability θ_jk. Then word x_ij is drawn from the z_ij th topic, with x_ij taking on value w with probability φ_{z_ij w}. Given the training data with N words x = {x_ij}, we need to compute the
posterior distribution over the latent variables.
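To make the generative process concrete, here is a toy NumPy sketch of it (an illustrative sketch, not the authors' code; all function and variable names are ours):

```python
import numpy as np

def generate_corpus(D, W, K, alpha, beta, doc_len, rng):
    """Toy sketch of the LDA generative process described above."""
    # phi[k] ~ Dirichlet(beta): topic k's multinomial over the W words
    phi = rng.dirichlet(np.full(W, beta), size=K)
    # theta[j] ~ Dirichlet(alpha): document j's mixture over the K topics
    theta = rng.dirichlet(np.full(K, alpha), size=D)
    docs = []
    for j in range(D):
        z = rng.choice(K, size=doc_len, p=theta[j])         # topic assignments
        x = np.array([rng.choice(W, p=phi[k]) for k in z])  # observed words
        docs.append((x, z))
    return phi, theta, docs
```

Inference then reverses this process: given only the words x, recover the posterior over z, θ, and φ.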
Collapsed Gibbs sampling [4] is an efficient procedure to sample the posterior distribution of topic
assignment z = {z_ij} by integrating out all θ_jk and φ_kw. Given the current state of all but one variable z_ij, the conditional distribution of z_ij is
\[
P(z_{ij} = k \mid \mathbf{z}^{-ij}, \mathbf{x}, \alpha, \beta) \;\propto\; \frac{n^{-ij}_{x_{ij}k} + \beta}{n^{-ij}_{k} + W\beta}\,\bigl(n^{-ij}_{jk} + \alpha\bigr) \qquad (1)
\]
where n_wk denotes the number of tokens with word w assigned to topic k, n_jk denotes the number of tokens in document j assigned to topic k, and n_k = Σ_w n_wk. The superscript −ij denotes that the variable is calculated as if token x_ij is removed from the training data.
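A single collapsed Gibbs step implementing Equation (1) can be sketched as follows (an illustrative NumPy sketch with our own variable names, not the paper's GPU code):

```python
import numpy as np

def sample_topic(w, j, k_old, njk, nwk, nk, alpha, beta, rng):
    """One collapsed Gibbs step for a token with word w in document j,
    currently assigned topic k_old (Equation (1))."""
    W = nwk.shape[0]
    # remove the token's current assignment to obtain the n^{-ij} counts
    njk[j, k_old] -= 1; nwk[w, k_old] -= 1; nk[k_old] -= 1
    # unnormalized conditional over the K topics, Equation (1)
    p = (nwk[w] + beta) / (nk + W * beta) * (njk[j] + alpha)
    k_new = rng.choice(len(p), p=p / p.sum())
    # add the token back with its new assignment
    njk[j, k_new] += 1; nwk[w, k_new] += 1; nk[k_new] += 1
    return k_new
```

Because the counts are decremented before sampling and incremented after, the counts stay consistent (n_k = Σ_w n_wk) across sweeps.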
CGS is very efficient because the variance is greatly reduced by sampling in a collapsed state
space. Teh et al. [9] applied the same state space to variational Bayesian and proposed the collapsed variational Bayesian inference algorithm. It has been shown that CVB has a theoretically
tighter variational bound than standard VB. In CVB, the posterior of z is approximated by a factorized posterior \(q(\mathbf{z}) = \prod_{ij} q(z_{ij} \mid \gamma_{ij})\), where \(q(z_{ij} \mid \gamma_{ij})\) is multinomial with variational parameter \(\gamma_{ij} = \{\gamma_{ijk}\}\). The inference task is to find variational parameters maximizing the variational lower bound
\[
L(q) = \sum_{\mathbf{z}} q(\mathbf{z}) \log \frac{p(\mathbf{z}, \mathbf{x} \mid \alpha, \beta)}{q(\mathbf{z})}.
\]
The authors used a computationally efficient Gaussian approximation. The updating formula for γ_ij is similar to the CGS updates:

¹ We use indices to represent topics, documents and vocabulary words.
\[
\gamma_{ijk} \;\propto\; \frac{\bigl(E_q[n^{-ij}_{x_{ij}k}] + \beta\bigr)\bigl(E_q[n^{-ij}_{jk}] + \alpha\bigr)}{E_q[n^{-ij}_{k}] + W\beta}\,
\exp\!\left(
- \frac{\operatorname{Var}_q[n^{-ij}_{x_{ij}k}]}{2\bigl(E_q[n^{-ij}_{x_{ij}k}] + \beta\bigr)^2}
- \frac{\operatorname{Var}_q[n^{-ij}_{jk}]}{2\bigl(E_q[n^{-ij}_{jk}] + \alpha\bigr)^2}
+ \frac{\operatorname{Var}_q[n^{-ij}_{k}]}{2\bigl(E_q[n^{-ij}_{k}] + W\beta\bigr)^2}
\right) \qquad (2)
\]
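For concreteness, the Gaussian-approximated CVB update in Equation (2) can be sketched as below, assuming the expected counts E[n] and variances Var[n] are maintained incrementally as sums of γ and γ(1−γ) over tokens (our own sketch and naming, not the paper's implementation):

```python
import numpy as np

def cvb_update(g, j, w, alpha, beta, Enjk, Enwk, Enk, Vnjk, Vnwk, Vnk, W):
    """Update one token's variational parameter gamma_ij (Equation (2)).
    g is the token's current K-vector gamma_ij; the E*/V* arrays hold the
    expected counts and variances including this token's contribution."""
    # subtract the token's contribution to get the -ij statistics
    Enjk[j] -= g; Enwk[w] -= g; Enk -= g
    v = g * (1.0 - g)
    Vnjk[j] -= v; Vnwk[w] -= v; Vnk -= v
    new = (Enwk[w] + beta) * (Enjk[j] + alpha) / (Enk + W * beta)
    new = new * np.exp(-Vnwk[w] / (2 * (Enwk[w] + beta) ** 2)
                       - Vnjk[j] / (2 * (Enjk[j] + alpha) ** 2)
                       + Vnk / (2 * (Enk + W * beta) ** 2))
    new /= new.sum()
    # add the token back with its updated parameter
    Enjk[j] += new; Enwk[w] += new; Enk += new
    nv = new * (1.0 - new)
    Vnjk[j] += nv; Vnwk[w] += nv; Vnk += nv
    return new
```

The structure mirrors the Gibbs step: remove the token's own contribution, evaluate the update, then add the token back.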
3 Parallel Algorithms for LDA Training
3.1 Parallel Collapsed Gibbs Sampling
A natural way to parallelize LDA training is to distribute documents across P processors. Based on this idea, Newman et al. [8] introduced a parallel implementation of CGS on distributed machines, called AD-LDA. In AD-LDA, the D documents and document-specific counts n_jk are distributed over P processors, with D/P documents on each processor. In each iteration, every processor p independently runs local Gibbs sampling with its own copy of the topic-word counts n^p_kw and topic counts n^p_k = Σ_w n^p_kw in parallel. Then a global synchronization aggregates the local counts n^p_kw to produce global counts n_kw and n_k. AD-LDA achieved substantial speedup compared with single-processor CGS training without sacrificing prediction accuracy. However, it needs to store P copies of the topic-word counts n_kw for all processors, which is unrealistic for GPUs with large P and large datasets due to the device memory space limitation. For example, a dataset having 100,000 vocabulary words needs at least 1.4 GBytes to store the 256-topic n_wk for 60 processors, exceeding the device memory capacity of current high-end GPUs. In order to address this issue, we develop a parallel CGS algorithm that only requires one copy of n_kw.
Our parallel CGS algorithm is motivated by the following observation: for word token w1 in document j1 and word token w2 in document j2, if w1 ≠ w2 and j1 ≠ j2, simultaneous updates of topic assignment by (1) have no memory read/write conflicts on the document-topic counts n_jk and topic-word counts n_wk. The algorithmic flow is summarized in Algorithm 1.

Algorithm 1: Parallel Collapsed Gibbs Sampling
  Input: Word tokens x, document partition J_1, ..., J_P, and vocabulary partition V_1, ..., V_P
  Output: n_jk, n_wk, z_ij
  1  Initialize the topic assignment of each word token, set n^p_k ← n_k
  2  repeat
  3    for l = 0 to P−1 do
         /* Sampling step */
  4      for each processor p in parallel do
  5        Sample z_ij for j ∈ J_p and x_ij ∈ V_{p⊕l} (Equation (1)) with global counts n_wk, global counts n_jk and local counts n^p_k
  6      end
         /* Synchronization step */
  7      Update n^p_k according to Equation (3)
  8    end
  9  until convergence

In addition to dividing all documents J = {1, ..., D} into P disjoint sets of documents J_1, ..., J_P and distributing them to P processors, we further divide the vocabulary words V = {1, ..., W} into P disjoint subsets V_1, ..., V_P, and each processor p (p = 0, ..., P−1) stores a local copy of the topic counts n^p_k. Every parallel CGS training iteration consists of P epochs, and each epoch consists of a sampling step and a synchronization step. In the sampling step of the lth epoch (l = 0, ..., P−1), processor p samples topic assignments z_ij whose document index is j ∈ J_p and word index is x_ij ∈ V_{p⊕l}. The ⊕ is the modulus-P
addition operation defined by
a ⊕ b = (a + b) mod P,
and all processors run the sampling simultaneously without memory read/write conflicts on the
global counts njk and nwk . Then the synchronization step uses (3) to aggregate npk to global counts
nk , which are used as local counts in the next epoch.
\[
n_k \leftarrow n_k + \sum_p \bigl(n^p_k - n_k\bigr), \qquad n^p_k \leftarrow n_k \qquad (3)
\]
Our parallel CGS can be regarded as an extension to AD-LDA by using the data partition in local
sampling and inserting P−1 more synchronization steps within an iteration. Since our data partition guarantees that any two processors access neither the same document nor the same word in an epoch, the synchronization of n_wk in AD-LDA is equivalent to keeping n_wk unchanged after the sampling step of the epoch. Because P processors concurrently sample new topic assignments in parallel CGS, we do not necessarily sample from the correct posterior distribution. However, we can view it as a stochastic optimization method that maximizes p(z|x, α, β). A justification of this viewpoint
can be found in [8].
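A minimal sketch of the modulus-P epoch schedule and of the synchronization in Equation (3) (our own illustrative code, not the GPU implementation):

```python
def epoch_schedule(P):
    """In epoch l, processor p works on document block J_p and vocabulary
    block V_{p (+) l}, where (+) is addition mod P; within an epoch no two
    processors share a document block or a word block."""
    return [[(p, (p + l) % P) for p in range(P)] for l in range(P)]

def synchronize(nk, nk_local):
    """Equation (3): n_k <- n_k + sum_p (n_k^p - n_k), then n_k^p <- n_k."""
    K, P = len(nk), len(nk_local)
    new = [nk[k] + sum(nk_local[p][k] - nk[k] for p in range(P)) for k in range(K)]
    return new, [list(new) for _ in range(P)]
```

Across the P epochs every (document block, word block) pair is visited exactly once, so the whole co-occurrence matrix is covered while the global counts n_jk and n_wk are never written concurrently.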
3.2 Parallel Collapsed Variational Bayesian
The collapsed Gibbs sampling and the collapsed variational Bayesian inference [9] are similar in
their algorithmic structures. As pointed out by Asuncion et al. [2], there are striking similarities
between CGS and CVB. A single iteration of our parallel CVB also consists of P epochs, and each
epoch has an updating step and a synchronization step. The updating step updates variational parameters in a similar manner as the sampling step of parallel CGS. Counts in CGS are replaced
by expectations and variances, and new variational parameters are computed by (2). The synchronization step involves an affine combination of the variational parameters in the natural parameter
space.
Since the multinomial distribution belongs to the exponential family, we can represent the multinomial distribution over K topics defined by the mean parameter γ_ij in the natural parameter η_ij = (η_ijk) by
\[
\eta_{ijk} = \log\!\Bigl(\frac{\gamma_{ijk}}{1 - \sum_{k' \neq K} \gamma_{ijk'}}\Bigr) \quad \text{for } k = 1, 2, \ldots, K-1,
\]
and the domain of η_ij is unconstrained. Thus maximizing L(q(η)) becomes an unconstrained optimization problem. Denote η_m = (η_ij)_{j∈J_m} and η = (η_0, ..., η_{P−1}), and let η^new and η^old be the variational parameters immediately after and before the updating step, respectively. Let η^(p) = (η^old_0, ..., η^new_p, ..., η^old_{P−1}). We pick η^sync, the updated η, from a one-parameter class of variational parameters η(λ) that combines the contributions from all processors:
\[
\eta(\lambda) = \eta^{\text{old}} + \lambda \sum_{i=0}^{P-1} \bigl(\eta^{(i)} - \eta^{\text{old}}\bigr), \qquad \lambda \ge 0.
\]
Two special cases are of interest: 1) η^sync = η(1/P) is a convex combination of {η^(p)}; and 2) η^sync = η(1) = η^new. If (quasi)concavity [3] holds in sufficiently large neighborhoods of the sequence of η(λ), say near a local maximum having a negative definite Hessian, then L(q(η(λ))) ≥ min_p L(q(η^(p))) ≥ L(q(η^old)) and L(q) converges locally. For the second case, we keep η^new and only update E_q[n_k] and Var_q[n_k], similarly to (3), in the synchronization step. The formulas are
\[
E[n_k] \leftarrow E[n_k] + \sum_p \bigl(E[n^p_k] - E[n_k]\bigr), \qquad E[n^p_k] \leftarrow E[n_k]
\]
\[
\operatorname{Var}[n_k] \leftarrow \operatorname{Var}[n_k] + \sum_p \bigl(\operatorname{Var}[n^p_k] - \operatorname{Var}[n_k]\bigr), \qquad \operatorname{Var}[n^p_k] \leftarrow \operatorname{Var}[n_k] \qquad (4)
\]
Also, η(1) assigns a larger step size to the direction Σ_{i=0}^{P−1} (η^(i) − η^old). Thus we can achieve a faster convergence rate if it is an ascent direction. It should be noted that our choice of η^sync does not guarantee global convergence, but we shall see that η(1) can produce models that have almost the same predictive power and variational lower bounds as the single-processor CVB.
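Since each per-processor update touches only its own block of the natural parameters, the affine combination collapses to a per-block interpolation; a small sketch (our naming, not the paper's data layout):

```python
import numpy as np

def combine(eta_old_blocks, eta_new_blocks, lam):
    """eta(lambda) = eta_old + lambda * sum_i (eta^(i) - eta_old).
    Because eta^(i) replaces only block i of eta_old, the sum reduces to
    a per-block difference: block m becomes old_m + lambda*(new_m - old_m).
    lam = 1/P gives the convex combination; lam = 1 returns eta_new."""
    return [np.asarray(o, float) + lam * (np.asarray(n, float) - np.asarray(o, float))
            for o, n in zip(eta_old_blocks, eta_new_blocks)]
```

With lam = 1 this degenerates to simply keeping the per-processor updates, which is the choice used in the experiments.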
3.3 Data Partition
In order to achieve maximal speedup, we need the partitions producing balanced workloads across
processors, and we also hope that generating the data partition consumes a small fraction of time in
the whole training process.
In order to present in a unified way, we define the co-occurrence matrix R = (rjw ) as: For parallel
CGS, rjw is the number of occurrences of word w in document j; for parallel CVB, rjw = 1 if w
occurs at least once in j, and otherwise r_jw = 0. We define the submatrix R_mn = (r_jw), ∀ j ∈ J_m, w ∈ V_n. The optimal data partition is equivalent to minimizing the following cost function:
\[
C = \sum_{l=0}^{P-1} \max_{(m,n):\, m \oplus l = n} \{C_{mn}\}, \qquad C_{mn} = \sum_{r_{jw} \in R_{mn}} r_{jw} \qquad (5)
\]
The basic operation in the proposed algorithms is either sampling topic assignments (in CGS) or
updating variational parameters (in CVB). Each value of l in the first summation term in (5) is
associated with one epoch. All Rmn satisfying m ? l = n are the P submatrices of R whose entries
are used to perform basic operations in epoch l. The number of these two types of basic operations
on each unique document/word pair (j, w) are all rjw . So the total number of basic operations in
Rm,n is Cmn for a single processor. Since all processors have to wait for the slowest processor to
complete its job before a synchronization step, the maximal Cmn is the number of basic operations
for the slowest processor. Thus the total number of basic operations is C. We define data partition
efficiency, ε, for given row and column partitions by
\[
\varepsilon = \frac{C_{\text{opt}}}{C}, \qquad C_{\text{opt}} = \sum_{j \in J,\, w \in V} r_{jw} / P \qquad (6)
\]
where C_opt is the theoretically minimal number of basic operations. ε is defined to be less than or equal to 1; the higher the ε, the better the partitions. Exact optimization of (5) can be achieved
through solving an equivalent integer programming problem. Since integer programming is NP-hard in general, and the large number of free variables for real-world datasets makes it intractable to solve, we use a simple approximate algorithm to perform data partitioning. In our experience, it works well empirically.
Here we use the convention of initial values j_0 = w_0 = 0. Our data partition algorithm divides the row index set J into disjoint subsets J_m = {j_{(m−1)}+1, ..., j_m}, where
\[
j_m = \arg\min_{j'} \Bigl|\, m\,C_{\text{opt}} - \sum_{j \le j'} \sum_w r_{jw} \Bigr|.
\]
Similarly, we divide the column index set V into disjoint subsets V_n = {w_{(n−1)}+1, ..., w_n} by
\[
w_n = \arg\min_{w'} \Bigl|\, n\,C_{\text{opt}} - \sum_{w \le w'} \sum_j r_{jw} \Bigr|.
\]
This algorithm is fast, since it needs only one full sweep over all word tokens or unique document/word pairs to calculate j_m and w_n. In practice, we can run this algorithm for several random permutations of J or V, and take the partitions with the highest ε.
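The one-sweep cut-point selection and the efficiency measure from (5)-(6) can be sketched as follows (illustrative code; function names are ours):

```python
import numpy as np

def greedy_cuts(weights, P):
    """Cut points j_m = argmin_j' | m * C_opt - sum_{j <= j'} weight_j |."""
    csum = np.cumsum(weights)
    c_opt = csum[-1] / P
    return [int(np.argmin(np.abs(m * c_opt - csum))) for m in range(1, P)]

def efficiency(R, row_cuts, col_cuts, P):
    """epsilon = C_opt / C, with C the sum over epochs of the maximum
    per-processor workload (Equation (5))."""
    rows = np.split(np.arange(R.shape[0]), [c + 1 for c in row_cuts])
    cols = np.split(np.arange(R.shape[1]), [c + 1 for c in col_cuts])
    c_opt = R.sum() / P
    C = sum(max(R[np.ix_(rows[m], cols[(m + l) % P])].sum() for m in range(P))
            for l in range(P))
    return c_opt / C
```

On a perfectly uniform co-occurrence matrix the greedy cuts are exactly balanced and ε = 1.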
We empirically obtained high ε on large datasets with the approximate algorithm. For a word token x in the corpus, the probability that x is the word w is P(x = w), and the probability that x is in document j is P(x in j). If we assume these two distributions are independent and x is i.i.d., then for a fixed P, the law of large numbers asserts P(x in J_m) ≈ (j_m − j_{(m−1)})/D ≈ 1/P and P(x ∈ V_n) ≈ (w_n − w_{(n−1)})/W ≈ 1/P. Independence gives E[C_mn] ≈ C_opt/P, where C_mn = Σ_x 1{x in J_m, x ∈ V_n}. Furthermore, the law of large numbers and the central limit theorem also give C_mn → C_opt/P, and the distribution of C_mn is approximately a normal distribution. Although the independence and i.i.d. assumptions are not true for real data, the above analysis holds in an approximate way. Indeed, when P = 10, the C_mn of the NIPS and NY Times datasets (see Section 4) accepted the null hypothesis of Lilliefors' normality test at a 0.05 significance level.
3.4 GPU Implementation and Data Streaming
We used a Leadtek GeForce GTX 280 GPU (G280) in our experiments. The G280 has 30 on-chip
multiprocessors running at 1296 MHz, and each multiprocessor has 8 thread processors that are
responsible for executing all threads deployed on the multiprocessor in parallel. The G280 has
1 GByte of on-board device memory, and the memory bandwidth is 141.7 GB/s. We adopted NVIDIA's
Compute Unified Device Architecture (CUDA) as our GPU programming environment. CUDA
programs run in a Single Program Multiple Threads (SPMT) fashion. All threads are divided into
equal-sized thread blocks. Threads in the same thread block are executed on a multiprocessor, and
a multiprocessor can execute a number of thread blocks. We map a "processor" in the previous
algorithmic description to a thread block. For a word token, fine parallel calculations, such as (1)
and (2), are realized by parallel threads inside a thread block.
Given the limited amount of device memory on GPUs, we cannot load all training data and model
parameters into the device memory for large-scale datasets. However, the sequential nature of Gibbs
sampling and variational Bayesian inference allows us to implement a data streaming [5] scheme which effectively reduces GPU device memory space requirements. Temporary data and variables, x_ij, z_ij and γ_ij, are sent to a working space in GPU device memory on the fly. Computation and
data transfer are carried out simultaneously, i.e. data transfer latency is hidden by computation.
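A CPU-side sketch of this double-buffered streaming idea (Python threads standing in for asynchronous copies and kernel launches; all names are ours):

```python
import queue
import threading

def stream_training(chunks, transfer, compute, depth=2):
    """Pipeline chunk transfer and computation: while one chunk is being
    processed, the next one is staged into a buffer in the background, so
    transfer latency is hidden by computation."""
    q = queue.Queue(maxsize=depth)  # bounded: at most `depth` staged buffers

    def producer():
        for c in chunks:
            q.put(transfer(c))  # stage the next chunk into a working buffer
        q.put(None)             # sentinel: no more chunks

    t = threading.Thread(target=producer)
    t.start()
    results = []
    while (buf := q.get()) is not None:
        results.append(compute(buf))  # overlaps with the next transfer
    t.join()
    return results
```

In the CUDA setting, `transfer` would correspond to asynchronous host-to-device copies and `compute` to kernel launches on the staged buffer.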
Table 1: Datasets used in the experiments.
                                         KOS        NIPS        NYT
Number of documents, D                   3,430      1,500       300,000
Number of words, W                       6,906      12,419      102,660
Number of word tokens, N                 467,714    1,932,365   99,542,125
Number of unique document/word pairs, M  353,160    746,316     69,679,427
[Figure 1 plots (not recoverable from this extraction): test set perplexity of CVB and CGS with K = 64, 128, 256, at P = 1, 10, 30, 60.]
Figure 1: Test set perplexity versus number of processors P for KOS (left) and NIPS (right).
4 Experiments
We used three text datasets retrieved from the UCI Machine Learning Repository² for evaluation. Statistical information about these datasets is shown in Table 1. For each dataset, we randomly extracted 90% of all word tokens as the training set, and the remaining 10% of word tokens form the test set. We set α = 50/K and β = 0.1 in all experiments [4]. We use η^sync = η(1) in the parallel CVB, and this setting works well in all of our experiments.
4.1 Perplexity
We measure the performance of the parallel algorithms using test set perplexity. Test set perplexity
is defined as exp(−(1/N_test) log p(x_test)). For CVB, the test set likelihood p(x_test) is computed as

    log p(x_test) = Σ_{ij} log Σ_k θ̂_jk φ̂_{x_ij k},
    θ̂_jk = (α + E[n_jk]) / (Kα + Σ_k E[n_jk]),
    φ̂_wk = (β + E[n_wk]) / (Wβ + E[n_k]).    (7)
We report the average perplexity and the standard deviation of 10 randomly initialized runs for the
parallel CVB. The typical burn-in period of CGS is about 200 iterations. We compute the likelihood
p(x_test) for CGS by averaging S = 10 samples at the end of 1000 iterations from different chains:

    log p(x_test) = Σ_{ij} log (1/S) Σ_s Σ_k θ̂^s_jk φ̂^s_{x_ij k},
    θ̂^s_jk = (α + n^s_jk) / (Kα + Σ_k n^s_jk),
    φ̂^s_wk = (β + n^s_wk) / (Wβ + n^s_k).    (8)
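A minimal NumPy sketch of the perplexity computation follows, assuming point estimates θ̂ (D×K) and φ̂ (W×K) have already been formed from the expected counts as in (7); for the CGS case (8), one would instead average θ̂^s φ̂^s over the S samples inside the log. The toy check at the bottom uses a uniform model, for which perplexity equals the vocabulary size.

```python
import numpy as np

def perplexity(test_tokens, theta, phi):
    """Test-set perplexity exp(-(1/N_test) log p(x_test)).

    test_tokens: list of (doc_id j, word_id w) pairs, one per word token.
    theta: D x K document-topic estimates (rows sum to 1).
    phi:   W x K topic-word estimates; phi[w, k] = p(w | topic k)."""
    log_p = 0.0
    for j, w in test_tokens:
        log_p += np.log(theta[j] @ phi[w])   # sum_k theta_jk * phi_wk
    return np.exp(-log_p / len(test_tokens))

# Toy check: a uniform model over W words has perplexity exactly W.
D, W, K = 2, 4, 3
theta = np.full((D, K), 1.0 / K)
phi = np.full((W, K), 1.0 / W)          # each topic uniform over words
tokens = [(0, 1), (1, 3), (0, 2)]
print(round(perplexity(tokens, theta, phi), 6))  # 4.0
```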
Two small datasets, KOS and NIPS, are used in the perplexity experiment. We computed test perplexity for different values of K and P. Figure 1 shows the test set perplexity on KOS (left) and NIPS
(right). We used the CPU to compute perplexity for P = 1 and the GPU for P = 10, 30, 60. For a
fixed K, there is no significant difference between the parallel and the single-processor
algorithms. This suggests that our parallel algorithms converge to models having the same predictive power
in terms of perplexity as single-processor LDA algorithms.
Perplexity as a function of iteration number for parallel CGS and parallel CVB on NIPS is shown
in Figure 2 (a) and (b) respectively. Since CVB actually maximizes the variational lower bound L(q)
on the training set, we also investigated the convergence rate of the variational lower bound. The
variational lower bound is computed using an exact method suggested in [9]. Figure 2 (c) shows the
per-word-token variational lower bound as a function of iteration for P = 1, 10, 30 on a sampled
subset of KOS (K = 64). Both parallel algorithms converge as rapidly as the single-processor
LDA algorithms. Therefore, when P gets larger, the convergence rate does not curtail the speedup. We
surmise that these results in Figure 2 may be due to frequent synchronization and relatively big step
sizes in our algorithms. In fact, as we decreased the number of synchronizations in the parallel CVB,
the result became significantly worse. The curve "λ = 1/P, P = 10" in Figure 2 (right) was obtained by
setting λ_sync = Θ(1/P). It converged considerably slower than the other curves because of its small
step size.

² http://archive.ics.uci.edu/ml/datasets/Bag+of+Words
[Figure 2: three panels plotting (a) test set perplexity vs. iteration for CGS and parallel CGS (P = 10, 30), (b) test set perplexity vs. iteration for CVB and parallel CVB (P = 10, 30), including the curve with λ = 1/P at P = 10, and (c) the variational lower bound vs. iteration for CVB at P = 1, 10, 30.]
Figure 2: (a) Test set perplexity as a function of iteration number for the parallel CGS on NIPS,
K = 256. (b) Test set perplexity as a function of iteration number for the parallel CVB on NIPS,
K = 128. (c) Variational lower bound on a dataset sampled from KOS, K = 64.
4.2 Speedup
Speedup is measured against a PC equipped with an Intel quad-core 2.4 GHz CPU and 4 GBytes of
memory. Only one core of the CPU is used. All CPU implementations are compiled with the Microsoft
C++ compiler 8.0 with -O2 optimization. We did our best to optimize the code through experiments,
such as using a better data layout and reducing redundant computation. The final CPU code is almost
twice as fast as the initial code.
Our speedup experiments are conducted on the NIPS dataset for both parallel algorithms and on the
large NYT dataset for only the parallel CGS, because γ_ij for the NYT dataset requires too much
memory to fit into our PC's host memory. We measure the speedup over a range of P with and
without data streaming. As the baseline, average running times on the CPU are: 4.24 seconds on
NIPS (K = 256) and 22.1 seconds on NYT (K = 128) for the parallel CGS, and 11.1 seconds
(K = 128) on NIPS for the parallel CVB. Figure 3 shows the speedup of the parallel CGS (left)
and of the parallel CVB (right), with the data partition efficiency η shown beneath each speedup plot.
We note that when P > 30, more threads are deployed on each multiprocessor. Therefore data transfer
between the device memory and the multiprocessor is better hidden by computation on the threads.
As a result, we obtain extra speedup when the number of "processors" (thread blocks) is larger than
the number of multiprocessors on the GPU.
[Figure 3: speedup vs. P ∈ {10, 30, 60} for the parallel CGS on NYT and NIPS, with and without data streaming (left), and for the parallel CVB on NIPS, with and without data streaming (right); beneath each plot, the data partition efficiency η (between 0.5 and 1.0) for the same settings.]
Figure 3: Speedup of parallel CGS (left) on NIPS and NYT, and speedup of parallel CVB (right)
on NIPS. Average running times on the CPU are 4.24 seconds on NIPS and 22.1 seconds on NYT
for the parallel CGS, and 11.1 seconds on NIPS for the parallel CVB, respectively. Although using
data streaming reduces the speedup of parallel CVB due to the low bandwidth between the PC host
memory and the GPU device memory, it enables us to use a GPU card to process large-volume data.
The synchronization overhead is very small since P ≪ N, and the speedup is largely determined by the
maximal number of nonzero elements in all partitioned submatrices. As a result, the speedup (when
not using data streaming) is proportional to ηP. The bandwidth between the PC host memory and the
GPU device memory is about 1.0 GB/s, which is higher than the computation bandwidth (size of data
processed by the GPU per second) of the parallel CGS. Therefore, the speedup with or without data
streaming is almost the same for the parallel CGS. But the speedup with or without data streaming
differs dramatically for the parallel CVB, because its computation bandwidth is roughly 7.2 GB/s for
K = 128 due to the large memory usage of γ_ij, higher than the maximal bandwidth that data streaming
can provide. The high speedup of the parallel CVB without data streaming is due to a hardware-supported
exponential function and a high-performance implementation of parallel reduction that is used to
normalize γ_ij calculated from (2). Figure 3 (right) shows that the larger the P, the smaller the
speedup for the parallel CVB with data streaming. The reason is that when P becomes large, the data
streaming management becomes more complicated and introduces more latencies in data transfer.

[Figure 4: bar chart of the data partition efficiency η (between 0.0 and 1.0) of the "current", "even",
and "random" partition algorithms at P = 10, 30, 60.]
Figure 4: data partition efficiency η of various data partition algorithms for P = 10, 30, 60. Due to
the negligible overheads of the synchronization steps, the speedup is proportional to η in practice.

Figure 4 shows the data partition efficiency η of various data partition algorithms for P = 10, 30, 60 on
NIPS. "current" is the data partition algorithm proposed in section 3.3; "even" partitions documents
and the word vocabulary into roughly equal-sized subsets by setting j_m = ⌊mD/P⌋ and w_n = ⌊nW/P⌋;
"random" is a data partition obtained by randomly partitioning documents and words. We see that
the proposed data partition algorithm outperforms the other algorithms.
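The "even" split and the efficiency measure can be sketched as follows. The efficiency definition below is our reconstruction, chosen so that speedup = ηP under a diagonal epoch schedule in which processor p handles submatrix (p, (p + l) mod P) in epoch l and each epoch lasts as long as its largest submatrix; the paper's own partitioner (section 3.3) additionally balances nonzero counts.

```python
import numpy as np

def even_boundaries(D, W, P):
    """'Even' split: j_m = floor(mD/P) and w_n = floor(nW/P)."""
    return ([m * D // P for m in range(P + 1)],
            [n * W // P for n in range(P + 1)])

def partition_efficiency(counts, jb, wb, P):
    """eta such that speedup ~ eta * P under the diagonal schedule."""
    C = np.zeros((P, P))
    for p in range(P):
        for q in range(P):
            C[p, q] = counts[jb[p]:jb[p + 1], wb[q]:wb[q + 1]].sum()
    total = counts.sum()
    # Each of the P epochs is as slow as its largest submatrix.
    epoch_cost = sum(max(C[p, (p + l) % P] for p in range(P))
                     for l in range(P))
    return total / (P * epoch_cost)

# Toy example: a perfectly uniform count matrix gives eta = 1.
counts = np.ones((6, 6))
jb, wb = even_boundaries(6, 6, P=3)
print(partition_efficiency(counts, jb, wb, P=3))  # 1.0
```

Skewed count matrices give η < 1, which is exactly why the balanced "current" partitioner outperforms the "even" and "random" splits in Figure 4.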
More than 20x speedup is achieved for both parallel algorithms with data streaming. The speedup
of the parallel CGS enables us to run 1000 iterations of Gibbs sampling (K = 128) on the large NYT
dataset within 1.5 hours, and it yields the same perplexity, 3639 (S = 5), as the result obtained from
30 hours of training on a CPU.
5 Related Works and Discussion
Our work is closely related to several previous works, including the distributed LDA of Newman
et al. [8], the asynchronous distributed LDA of Asuncion et al. [1], and the parallelized variational
EM algorithm for LDA of Nallapati et al. [7]. In these works, LDA training was parallelized on
distributed CPU clusters and achieved impressive speedups. Unlike their works, ours shows how
to use GPUs to achieve significant, scalable speedup for LDA training while maintaining correct,
accurate predictions.
Masada et al. recently proposed a GPU implementation of CVB [6]. Masada et al. keep one copy
of n_wk while simply maintaining the same algorithmic structure for their GPU implementation as
Newman et al. did on a CPU cluster. However, with the limited memory size of a GPU, compared
to that of a CPU cluster, this can lead to memory access conflicts. This issue becomes severe when
one performs many parallel jobs (thread blocks), and leads to wrong inference results and operation
failure, as reported by Masada et al. Therefore, their method is not easily scalable due to memory
access conflicts. Different from their approach, ours is scalable to more multiprocessors thanks to
the proposed partitioning scheme and data streaming. These can also be used as general techniques
to parallelize other machine learning models that involve sequential operations on matrices, such as
online training of matrix factorization.
Acknowledgements
We thank Max Welling and David Newman for providing us with the link to the experimental data.
We also thank the anonymous reviewers, Dong Zhang and Xianxing Zhang for their invaluable
inputs. F. Yan conducted this research at Microsoft Research Asia. F. Yan and Y. Qi were supported
by NSF IIS-0916443 and Microsoft Research.
References
[1] A. Asuncion, P. Smyth, and M. Welling. Asynchronous distributed learning of topic models. In
D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, NIPS, pages 81-88. MIT Press, 2008.
[2] A. Asuncion, M. Welling, P. Smyth, and Y. W. Teh. On smoothing and inference for topic
models. In Proceedings of the International Conference on Uncertainty in Artificial Intelligence, 2009.
[3] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, March 2004.
[4] T. L. Griffiths and M. Steyvers. Finding scientific topics. Proceedings of the National Academy
of Sciences, 101 (suppl. 1):5228-5235, April 2004.
[5] F. Labonte, P. Mattson, W. Thies, I. Buck, C. Kozyrakis, and M. Horowitz. The stream virtual
machine. In PACT '04: Proceedings of the 13th International Conference on Parallel Architectures
and Compilation Techniques, pages 267-277, Washington, DC, USA, 2004. IEEE Computer Society.
[6] T. Masada, T. Hamada, Y. Shibata, and K. Oguri. Accelerating collapsed variational Bayesian
inference for latent Dirichlet allocation with Nvidia CUDA compatible devices. In IEA-AIE, 2009.
[7] R. Nallapati, W. Cohen, and J. Lafferty. Parallelized variational EM for latent Dirichlet allocation:
An experimental evaluation of speed and scalability. 2007.
[8] D. Newman, A. Asuncion, P. Smyth, and M. Welling. Distributed inference for latent Dirichlet
allocation. In NIPS, 2007.
[9] Y. W. Teh, D. Newman, and M. Welling. A collapsed variational Bayesian inference algorithm
for latent Dirichlet allocation. In B. Schölkopf, J. C. Platt, and T. Hoffman, editors, NIPS, pages
1353-1360. MIT Press, 2006.
Bilinear classifiers for visual recognition
Hamed Pirsiavash
Deva Ramanan
Charless Fowlkes
Department of Computer Science
University of California at Irvine
{hpirsiav,dramanan,fowlkes}@ics.uci.edu
Abstract
We describe an algorithm for learning bilinear SVMs. Bilinear classifiers are a
discriminative variant of bilinear models, which capture the dependence of data
on multiple factors. Such models are particularly appropriate for visual data that
is better represented as a matrix or tensor, rather than a vector. Matrix encodings allow for more natural regularization through rank restriction. For example,
a rank-one scanning-window classifier yields a separable filter. Low-rank models have fewer parameters and so are easier to regularize and faster to score at
run-time. We learn low-rank models with bilinear classifiers. We also use bilinear classifiers for transfer learning by sharing linear factors between different
classification tasks. Bilinear classifiers are trained with biconvex programs. Such
programs are optimized with coordinate descent, where each coordinate step requires solving a convex program - in our case, we use a standard off-the-shelf
SVM solver. We demonstrate bilinear SVMs on difficult problems of people detection in video sequences and action classification of video sequences, achieving
state-of-the-art results in both.
1 Introduction
Linear classifiers (i.e., w^T x > 0) are the basic building block of statistical prediction. Though quite
standard, they produce many competitive approaches for various prediction tasks. We focus here
on the task of visual recognition in video - "does this spatiotemporal window contain an object?"
In this domain, scanning-window templates trained with linear classification yield state of the art
performance on many benchmark datasets [6, 10, 7].
Bilinear models, introduced into the vision community by [23], provide an interesting generalization
of linear models. Here, data points are modelled as the confluence of a pair of factors. Typical examples include digits affected by style and content factors or faces affected by pose and illumination
factors. Conditioned on one factor, the model is linear in the other. More generally, one can define
multilinear models [25] that are linear in one factor conditioned on the others.
Inspired by the success of bilinear models in data modeling, we introduce discriminative bilinear
models for classification. We describe a method for training bilinear (multilinear) SVMs with biconvex (multiconvex) programs. A function f : X × Y → R is called biconvex if f(x, y) is convex
in y for fixed x ∈ X and is convex in x for fixed y ∈ Y. Such functions are well-studied in
the optimization literature [1, 14]. While not convex, they admit efficient coordinate descent algorithms that solve a convex program at each step. We show bilinear SVM classifiers can be optimized
with an off-the-shelf linear SVM solver. This is advantageous because we can leverage large-scale,
highly-tuned solvers (we use [13]) to learn bilinear classifiers with tens of thousands of features with
hundreds of millions of examples.
While bilinear models are often motivated from the perspective of increasing the flexibility of a
linear model, our motivation is reversed - we use them to reduce the number of parameters of a
Figure 1: Many approaches for visual recognition employ linear classifiers on scanned windows.
Here we illustrate windows processed into gradient-based features [6, 12]. We show an image
window (left) and a visualization of the extracted HOG descriptor (middle), which itself is better
represented as gradient features extracted from different orientation channels (right). Most learning
formulations ignore this natural representation of visual data as matrices or tensors. Wolf et al. [26]
show that one can produce more meaningful schemes for regularization and parameter reduction
through low-rank approximations of a tensor model. Our contribution involves casting the resulting
learning problem as a biconvex optimization. Such formulations can leverage off-the-shelf solvers
in an efficient two-stage optimization. We also demonstrate that bilinear models have additional
advantages for transfer learning and run-time efficiency.
weight vector that is naturally represented as a matrix or tensor W . We reduce parameters by
factorizing W into a product of low-rank factors. This parameter reduction can reduce over-fitting
and improve run-time efficiency because fewer operations are needed to score an example. These are
important considerations when training large-scale spatial or spatiotemporal template-classifiers. In
our case, the state-of-the-art features we use to detect pedestrians are based on histograms of gradient
(HOG) features [6] or spatio-temporal generalizations [7] as shown in Fig.1. The extracted feature
set of both gradient and optical flow histogram is quite large, motivating the need for dimensionality
reduction.
Finally, by sharing factors across different classification problems, we introduce a novel formulation
of transfer learning. We believe that transfer through shared factors is an important benefit of
multilinear classifiers which can help ameliorate overfitting.
We begin with a discussion of related work in Sec.2. We then explicitly define our bilinear classifier
in Sec. 3. We illustrate several applications and motivations for the bilinear framework in Sec. 4.
In Sec. 5, we describe extensions to our model for the multilinear and multiclass case. We provide
several experiments on visual recognition in the video domain in Sec. 6, significantly improving on
the state-of-the-art system for finding people in video sequences [7] both in performance and speed.
We also illustrate our approach on the task of action recognition, showing that transfer learning can
ameliorate the small-sample problem that plagues current benchmark datasets [18, 19].
2 Related Work
Tenenbaum and Freeman [23] introduced bilinear models into the vision community to model data
generated from multiple linear factors. Such methods have been extended to the multilinear setting, e.g. by [25], but such models were generally used as a factor analysis or density estimation
technique. Recent work has explored extensions of tensor models to discriminant analysis [22, 27],
while our work focuses on an efficient max-margin formulation of multilinear models.
There is also a body of related work on learning low-rank matrices from the collaborative filtering literature [21, 17, 16]. Such approaches typically define a convex objective
by replacing the Tr(W^T W) regularization term in our objective (6) with the trace norm Tr(√(W^T W)). This can be
seen as an alternate ?soft? rank restriction on W that retains convexity. This is because the trace
norm of a matrix is equivalent to the sum of its singular values rather than the number of nonzero
eigenvalues (the rank) [3]. Such a formulation would be interesting to pursue in our scenario, but as
[17, 16] note, the resulting SDP is difficult to solve. Our approach, though non-convex, leverages
existing SVM solvers in the inner loop of a coordinate descent optimization that enforces a hard
low-rank condition.
Our bilinear-SVM formulation is closely related to the low-rank SVM formulation of [26]. Wolf
et al. convincingly argue that many forms of visual data are better modeled as matrices rather than
vectors - an important motivation for our work (see Fig.1). They analyze the VC dimension of rank
constrained linear classifiers and demonstrate an iterative weighting algorithm for approximately
solving an SVM problem in which the rank of W acts as a regularizer. They also outline an algorithm similar to the one we propose here which has a hard constraint on the rank, but they include an
additional orthogonality constraint on the columns of the factors that compose W . This requires cycling through each column separately during the optimization which is presumably slower and may
introduce additional local minima. This in turn may explain why they did not present experimental
results for their hard-rank formulation.
Our work also stands apart from Wolf et al. in our focus on multi-task learning, which dates
back at least to the work of Caruana [4]. Our formulation is most similar to that of Ando and Zhang
[2]. They describe a procedure for learning linear prediction models for multiple tasks with the
assumption that all models share a component living in a common low-dimensional subspace. While
this formulation allows for sharing, it does not reduce the number of model parameters as does our
approach of sharing factors.
3 Model definition
Linear predictors are of the form
    f_w(x) = w^T x.    (1)
Existing formulations of linear classification typically treat x as a vector. We argue that for many problems, particularly in visual recognition, x is more naturally represented as a matrix or tensor. For
example, many state-of-the-art window-scanning approaches train a classifier defined over local
feature vectors extracted over a spatial neighborhood. The Dalal and Triggs detector [6] is a particularly popular pedestrian detector where x is naturally represented as a concatenation of histogram
of gradient (HOG) feature vectors extracted from a spatial grid of size n_y × n_x, where each local HOG
descriptor is itself composed of n_f features. In this case, it is natural to represent an example x
as a tensor X ∈ R^{n_y×n_x×n_f}. For ease of exposition, we develop the mathematics for a simpler
matrix representation, fixing n_f = 1. This holds, for example, when learning templates defined on
grayscale pixel values.
We generalize (1) for a matrix X with
    f_W(X) = Tr(W^T X),    (2)
where both X and W are n_y × n_x matrices. One advantage of the matrix representation is that it
is more natural to regularize W and restrict the number of parameters. For example, one natural
mechanism for reducing the degrees of freedom in a matrix is to reduce its rank. We show that one
can obtain a biconvex objective function by enforcing a hard restriction on the rank. Specifically,
we enforce the rank of W to be at most d ≤ min(n_y, n_x). This restriction can be implemented by
writing
    W = W_y W_x^T  where  W_y ∈ R^{n_y×d}, W_x ∈ R^{n_x×d}.    (3)
This allows us to write the final predictor explicitly as the following bilinear function:
    f_{W_y,W_x}(X) = Tr(W_x W_y^T X) = Tr(W_y^T X W_x).    (4)
3.1 Learning
Assume we are given a set of training data and label pairs {x_n, y_n}. We would like to learn a model
with low error on the training data. One successful approach is a support vector machine (SVM).
We can rewrite the linear SVM formulation for w and x_n with matrices W and X_n using the trace
operator:
    L(w) = (1/2) w^T w + C Σ_n max(0, 1 − y_n w^T x_n).    (5)
    L(W) = (1/2) Tr(W^T W) + C Σ_n max(0, 1 − y_n Tr(W^T X_n)).    (6)
The above formulations are identical when w and x_n are the vectorized elements of matrices W and
X_n. Note that (6) is convex. We wish to restrict the rank of W to be d. Plugging in W = W_y W_x^T,
we obtain our biconvex objective function:
    L(W_y, W_x) = (1/2) Tr(W_x W_y^T W_y W_x^T) + C Σ_n max(0, 1 − y_n Tr(W_x W_y^T X_n)).    (7)
In the next section, we show that optimizing (7) over one matrix holding the other fixed is a convex
program - specifically, a QP equivalent to a standard SVM. This makes (7) biconvex.
3.2 Coordinate descent
We can optimize (7) with a coordinate descent algorithm that solves for one set of parameters holding
the other fixed. Each step in this descent is a convex optimization that can be solved with a standard
SVM solver. Specifically, consider
    min_{W_y} L(W_y, W_x) = (1/2) Tr(W_y A W_y^T) + C Σ_n max(0, 1 − y_n Tr(W_y^T X_n W_x)).    (8)
The above optimization is convex in W_y but does not directly translate into the trace-based SVM
formulation from (6). To do so, let us reparametrize W_y as W̃_y:
    min_{W̃_y} L(W̃_y, W_x) = (1/2) Tr(W̃_y^T W̃_y) + C Σ_n max(0, 1 − y_n Tr(W̃_y^T X̃_n)),    (9)
where
    W̃_y = W_y A^{1/2}  and  X̃_n = X_n W_x A^{−1/2}  and  A = W_x^T W_x.
One can see that (9) is structurally equivalent to (6) and hence (5). Hence it can be solved with
a standard off-the-shelf SVM solver. Given a solution, we can recover the original parameters by
W_y = W̃_y A^{−1/2}. Recall that A = W_x^T W_x is a matrix of size d × d that is in general invertible for
small d. Using a similar derivation, one can show that min_{W_x} L(W_y, W_x) is also equivalent to a
standard convex SVM formulation.
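The full coordinate descent can be sketched end-to-end. This is an illustrative reconstruction, not the paper's implementation: a crude subgradient routine stands in for the off-the-shelf SVM solver (the paper uses [13]), and the W_x step reuses the W_y machinery on transposed examples, since Tr(W_y^T X W_x) = Tr(W_x^T X^T W_y).

```python
import numpy as np

def sqrtm_psd(A):
    """Symmetric square root of a PSD matrix via eigendecomposition."""
    vals, vecs = np.linalg.eigh(A)
    return (vecs * np.sqrt(np.clip(vals, 0, None))) @ vecs.T

def linear_svm(Phi, y, C=1.0, steps=2000, lr=1e-2):
    """Toy subgradient stand-in for an off-the-shelf SVM solver:
    minimizes (1/2)||w||^2 + C * sum_n max(0, 1 - y_n w.phi_n)."""
    w = np.zeros(Phi.shape[1])
    for _ in range(steps):
        viol = y * (Phi @ w) < 1
        w -= lr * (w - C * (y[viol, None] * Phi[viol]).sum(axis=0))
    return w

def bilinear_svm(Xs, y, d=1, C=1.0, outer=5, seed=0):
    """Biconvex coordinate descent of (7): alternate convex solves for
    W_y (via the reparametrization in (9)) and, by symmetry, W_x."""
    Xs, y = np.asarray(Xs, float), np.asarray(y, float)
    ny, nx = Xs[0].shape
    Wx = np.random.default_rng(seed).standard_normal((nx, d))
    Wy = np.zeros((ny, d))
    for _ in range(outer):
        # W_y step: transformed examples Xtilde_n = X_n Wx A^(-1/2).
        A_inv_half = np.linalg.inv(sqrtm_psd(Wx.T @ Wx))
        Xt = np.stack([X @ Wx @ A_inv_half for X in Xs])
        w = linear_svm(Xt.reshape(len(Xs), -1), y, C)
        Wy = w.reshape(ny, d) @ A_inv_half      # W_y = Wtilde_y A^(-1/2)
        # W_x step: transpose every X_n and swap the roles of Wy, Wx.
        A_inv_half = np.linalg.inv(sqrtm_psd(Wy.T @ Wy))
        Xt = np.stack([X.T @ Wy @ A_inv_half for X in Xs])
        w = linear_svm(Xt.reshape(len(Xs), -1), y, C)
        Wx = w.reshape(nx, d) @ A_inv_half
    return Wy, Wx

# Toy check: separate matrices by the sign of their top-left entry.
Xs = [np.array([[2., 0.], [0., 0.]]), np.array([[-2., 0.], [0., 0.]])]
y = [1., -1.]
Wy, Wx = bilinear_svm(Xs, y, d=1)
scores = [np.trace(Wy.T @ X @ Wx) for X in Xs]
```

Each half-step is a standard linear SVM in the vectorized W̃_y (or W̃_x), so in practice either factor's update can be handed to a large-scale solver unchanged.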
4 Motivation
We outline here a number of motivations for the biconvex objective function defined above.
4.1 Regularization
Bilinear models allow a natural way of restricting the number of parameters in a linear model. From this perspective, they are similar to approaches that apply PCA for dimensionality reduction prior to learning. Felzenszwalb et al. [11] find that PCA can reduce the size of HOG features by a factor of 4 without a loss in performance. Image windows are naturally represented as a 3D tensor X ∈ R^{ny×nx×nf}, where nf is the dimensionality of a HOG feature. Let us "reshape" X into a 2D matrix X ∈ R^{nxy×nf}, where nxy = nx·ny. We can restrict the rank of the corresponding model to d by defining W = Wxy Wf^T. Wxy ∈ R^{nxy×d} is equivalent to a vectorized spatial template defined over d features at each spatial location, while Wf ∈ R^{nf×d} defines a set of d basis vectors spanning R^{nf}. This basis can be loosely interpreted as the PCA basis estimated in [11]. In our biconvex formulation, the basis vectors are not constrained to be orthogonal, but they are learned discriminatively and jointly with the template Wxy. We show in Sec. 6 that this often significantly outperforms PCA-based dimensionality reduction.
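To make the parameter savings concrete, the sketch below builds a rank-restricted template with the typical sizes reported later in Sec. 6 (ny = 14, nx = 6, nf = 84, d = 5) and counts parameters; it is purely illustrative.

```python
import numpy as np

ny, nx, nf, d = 14, 6, 84, 5        # typical sizes from Sec. 6
nxy = ny * nx

Wxy = np.random.randn(nxy, d)       # spatial template over d abstract features
Wf = np.random.randn(nf, d)         # d basis vectors spanning R^nf

W = Wxy @ Wf.T                      # induced template, shape (nxy, nf)
assert np.linalg.matrix_rank(W) <= d

full_params = nxy * nf              # unrestricted linear template: 7056
bilinear_params = nxy * d + nf * d  # factored template: 840 (about 8x fewer)
```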
4.2 Efficiency
Scanning window classifiers are often implemented using convolutions [6, 12]. For example, the product Tr(W^T X) can be computed for all image windows X with nf convolutions. By restricting W to be Wxy Wf^T, we project features into a d-dimensional subspace spanned by Wf, and compute the final score with d convolutions. One can further improve efficiency by using the same
d-dimensional feature space for a large number of different object templates - this is precisely the
basis of our transfer approach in Sec.4.3. This can result in significant savings in computation. For
example, spatio-temporal templates for finding objects in video tend to have large nf since multiple
features are extracted from each time-slice.
Consider a rank-1 restriction of Wx and Wy . This corresponds to a separable filter Wxy . Hence, our
formulation can be used to learn separable scanning-window classifiers. Separable filters can be
evaluated efficiently with two one-dimensional convolutions. This can result in significant savings
because computing the score at the window is now O(nx + ny ) rather than O(nx ny ).
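The rank-1 separable case can be sketched as follows; this is an illustrative NumPy implementation (not the paper's code) checking that two one-dimensional passes reproduce the scores of a full 2-D window scan.

```python
import numpy as np

def xcorr2_valid(img, tmpl):
    """Brute-force 2-D valid cross-correlation (explicit window scanning)."""
    H, W = img.shape
    h, w = tmpl.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i+h, j:j+w] * tmpl)
    return out

def xcorr2_separable(img, wy, wx):
    """Same scores for a rank-1 template wy wx^T via two 1-D passes:
    O(nx + ny) work per window instead of O(nx * ny)."""
    # correlate each column with wy, then each row of the result with wx
    cols = np.apply_along_axis(lambda c: np.correlate(c, wy, mode='valid'), 0, img)
    return np.apply_along_axis(lambda r: np.correlate(r, wx, mode='valid'), 1, cols)
```

Both functions return the score of every window position; only the cost per window differs.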
4.3 Transfer
Assume we wish to train M predictors and are given {x_n^m, y_n^m} training data pairs for each prediction problem 1 ≤ m ≤ M. One can write all M learning problems with a single optimization:

    L(W^1, ..., W^M) = (1/2) Σ_m Tr(W^m W^{mT}) + Σ_m C_m Σ_n max(0, 1 − y_n^m Tr(W^{mT} X_n^m)).    (10)
As written, the problem above can be optimized over each W^m independently. We can introduce a rank constraint on W^m that induces a low-dimensional subspace projection of X_n^m. To transfer knowledge between the classification tasks, we require all tasks to use the same low-dimensional subspace projection by sharing the same feature matrix:

    W^m = W_xy^m Wf^T

Note that the leading dimension of W_xy^m can depend on m. This fact allows for X_n^m from different tasks to be of varying sizes. In our motivating application, we can learn a family of HOG templates
of varying spatial dimension that share a common HOG feature subspace. The coordinate descent algorithm from Sec. 3.2 naturally applies to the multi-task setting. Given a fixed Wf, it is straightforward to independently optimize W_xy^m by defining A = Wf^T Wf. Given a fixed set of W_xy^m, a single matrix Wf is learned for all classes by computing:
    min_{W̃f} L(W̃f, W_xy^1, ..., W_xy^M) = (1/2) Tr(W̃f^T W̃f) + Σ_m C_m Σ_n max(0, 1 − y_n^m Tr(W̃f^T X̃_n^m))

    where   W̃f = Wf A^{1/2}   and   X̃_n^m = X_n^{mT} W_xy^m A^{−1/2}   and   A = Σ_m W_xy^{mT} W_xy^m.
If all problems share the same slack penalty (Cm = C), the above can be optimized with an off-the-shelf SVM solver. In the general case, a minor modification is needed to allow for slack-rescaling [24].
In practice, nf can be large for spatio-temporal features extracted from multiple temporal windows.
The above formulation is convenient in that we can use data examples from many classification tasks
to learn a good subspace for spatiotemporal features.
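The assembly of this single shared-basis SVM problem can be sketched as below; the helper name and shapes are our own, and an off-the-shelf solver would then be run on the returned stack of transformed examples.

```python
import numpy as np

def shared_basis_problem(Wxy_list, X_lists, y_lists):
    """Assemble the single SVM problem over Wf_tilde for the multi-task step.

    Wxy_list : per-task spatial templates Wxy^m, shapes (nxy_m, d)
    X_lists  : per-task lists of training matrices X_n^m, shapes (nxy_m, nf)
    Returns transformed examples Xt (so Tr(Wf_tilde^T Xt[k]) scores them),
    stacked labels, and A^{-1/2} for recovering Wf = Wf_tilde A^{-1/2}.
    """
    A = sum(W.T @ W for W in Wxy_list)          # d x d shared Gram matrix
    evals, evecs = np.linalg.eigh(A)
    A_neg_half = evecs @ np.diag(1.0 / np.sqrt(evals)) @ evecs.T
    Xt, ys = [], []
    for Wxy, Xs, y in zip(Wxy_list, X_lists, y_lists):
        for Xn, yn in zip(Xs, y):
            Xt.append(Xn.T @ Wxy @ A_neg_half)  # shape (nf, d)
            ys.append(yn)
    return np.stack(Xt), np.array(ys), A_neg_half
```

The key identity is that Tr((Wf A^{1/2})^T X̃_n^m) equals Tr(Wf^T X_n^{mT} W_xy^m), so solving the transformed problem and mapping back is exact.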
5 Extensions
5.1 Multilinear
In many cases, a data point x is more naturally represented as a multidimensional matrix or a high-order tensor. For example, spatio-temporal templates are naturally represented as a 4th-order tensor capturing the width, height, temporal extent, and the feature dimension of a spatio-temporal window. For ease of exposition let us assume the feature dimension is 1, and so we write a feature vector x as X ∈ R^{nx×ny×nt}. We denote an element of a tensor X as x_ijk. Following [15], we define a scalar product of two tensors W and X as the sum of their element-wise products:
    ⟨W, X⟩ = Σ_{ijk} w_ijk x_ijk.    (11)
With the above definition, we can generalize our trace-based objective function (6) to higher-order tensors:

    L(W) = (1/2) ⟨W, W⟩ + C Σ_n max(0, 1 − yn ⟨W, Xn⟩).    (12)
We wish to impose a rank restriction on the tensor W. The notion of rank for tensors of order greater than two is subtle - for example, there are alternate approaches for defining a high-order SVD [25, 15]. For our purposes, we follow [20] and define W as a rank-d tensor by writing it as a product of matrices W^y ∈ R^{ny×d}, W^x ∈ R^{nx×d}, W^t ∈ R^{nt×d}:

    w_ijk = Σ_{s=1}^{d} w_is^y w_js^x w_ks^t.    (13)
Combining (11)-(13), it is straightforward to show that L(W^y, W^x, W^t) is convex in one matrix given the others. This means our coordinate descent algorithm from Sec. 3.2 still applies. As an example, consider the case when d = 1. This rank restriction forces the spatio-temporal template W to be separable along the x, y, and t axes, allowing for window-scan scoring by three one-dimensional convolutions. This greatly increases run-time efficiency for spatio-temporal templates.
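Scoring a single window with the rank-d tensor of (13) never requires materializing W explicitly; a sketch in our own notation:

```python
import numpy as np

def tensor_score(X, Wy, Wx, Wt):
    """<W, X> for the rank-d tensor W of (13), without forming W.

    X : (ny, nx, nt) data tensor; Wy, Wx, Wt : factor matrices, d columns each.
    """
    return np.einsum('ijk,is,js,ks->', X, Wy, Wx, Wt)

def tensor_full(Wy, Wx, Wt):
    """Materialize w_ijk = sum_s Wy[i,s] Wx[j,s] Wt[k,s] (only for checking)."""
    return np.einsum('is,js,ks->ijk', Wy, Wx, Wt)
```

`tensor_full` is included only to verify the contraction against the explicit w_ijk; in practice one would keep the factored form.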
5.2 Bilinear structural SVMs
We outline here an extension of our formalism to structural SVMs [24]. Structural SVMs learn
models that predict a structured label yn given a data point xn . Given training data of the form
{xn , yn }, the learning problem is:
    L(w) = (1/2) w^T w + C Σ_n max_y ( l(yn, y) − w^T ΔΦ(xn, yn, y) )    (14)

where

    ΔΦ(xn, yn, y) = Φ(xn, yn) − Φ(xn, y),

and where l(yn, y) is the loss of assigning example n the label y given that its true label is yn. The
above optimization problem is convex in w. As a concrete example, consider the task of learning a multiclass SVM for nc classes using the formalism of Crammer and Singer [5]. Here,

    w = [w_1^T ... w_nc^T]^T,

where each w_i ∈ R^{nx} can be interpreted as a classifier for class i. The corresponding Φ(x, y) will be a sparse vector with nx nonzero values at those indices associated with the y-th class. It is natural to model the relevant vectors as matrices W, Xn, ΔΦ that lie in R^{nc×nx}. We can enforce W to be of rank d < min(nc, nx) by defining W = Wc Wx^T, where Wc ∈ R^{nc×d} and Wx ∈ R^{nx×d}. For example, one may expect template classifiers that classify nc different human actions to reside in a d-dimensional subspace. The resulting biconvex objective function is
    L(Wc, Wx) = (1/2) Tr(Wx Wc^T Wc Wx^T) + C Σ_n max_y ( l(yn, y) − Tr(Wx Wc^T ΔΦ(Xn, yn, y)) ).    (15)
Using our previous arguments, it is straightforward to show that the above objective is biconvex and
that each step of the coordinate descent algorithm reduces to a standard structural SVM problem.
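For the Crammer-Singer multiclass instantiation just described, objective (15) can be evaluated directly; the sketch below assumes the 0/1 label loss l(yn, y) = 1 if y ≠ yn (and 0 otherwise) and uses our own function names.

```python
import numpy as np

def multiclass_structural_loss(Wc, Wx, X, y, C=1.0):
    """Evaluate objective (15) for Crammer-Singer multiclass with W = Wc Wx^T.

    X : (n, nx) data rows; y : (n,) integer labels; l(yn, y) = 1[y != yn].
    """
    W = Wc @ Wx.T                        # (nc, nx) stacked class templates
    scores = X @ W.T                     # (n, nc) per-class scores
    n = X.shape[0]
    correct = scores[np.arange(n), y]
    margin = scores - correct[:, None] + 1.0   # loss-augmented scores
    margin[np.arange(n), y] = 0.0              # the y = yn term contributes 0
    hinge = np.maximum(margin, 0).max(axis=1)  # max over labels y
    reg = 0.5 * np.trace(Wx @ Wc.T @ Wc @ Wx.T)
    return reg + C * hinge.sum()
```

Loss-augmented inference here is just a row-wise max; a structural SVM solver would use the same quantity to pick the most violated label at each step.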
6 Experiments
We focus our experiments on the task of visual recognition using spatio-temporal templates. This problem domain has large feature sets obtained from histograms of gradients and histograms of optical flow computed from a frame pair. We illustrate our method on two challenging tasks using two benchmark datasets - detecting pedestrians in video sequences from the INRIA-Motion database [7] and classifying human actions in the UCF-Sports dataset [18].
We model features computed from frame pairs x as matrices X ∈ R^{nxy×nf}, where nxy = nx·ny is the size of the vectorized spatial template and nf is the dimensionality of our combined gradient and flow feature space. We use the histogram of gradient and flow feature set from [7]. Our bilinear model learns a classifier of the form Wxy Wf^T, where Wxy ∈ R^{nxy×d} and Wf ∈ R^{nf×d}. Typical values include ny = 14, nx = 6, nf = 84, and d = 5 or 10.
6.1 Spatiotemporal pedestrian detection
Scoring a detector: Template classifiers are often scored using missed detections versus false-positives-per-window statistics. However, recent analysis suggests such measurements can be misleading [9]. We opt for the scoring criteria outlined by the widely-acknowledged PASCAL competition [10], which looks at average precision (AP) results obtained after running the detector on cluttered video sequences and suppressing overlapping detections.
Baseline: We compare with the linear spatiotemporal-template classifier from [7]. The static-image detector counterpart is a well-known state-of-the-art system for finding pedestrians [6]. Surprisingly, when scoring AP for person detection in the INRIA-Motion dataset, we find the spatiotemporal model performed worse than the static-image model. This is corroborated by personal communication with the authors as well as Dalal's thesis [8]. We found that aggressive SVM cutting-plane optimization algorithms [13] were needed for the spatiotemporal model to outperform the spatial model. This suggests our linear baseline is the true state-of-the-art system for finding people in video sequences. We also compare results with an additional rank-reduced baseline obtained by setting Wf to the basis returned by a PCA projection of the feature space from nf to d dimensions. We use this PCA basis to initialize our coordinate descent algorithm when training our bilinear models.
We show precision-recall curves in Fig.2. We refer the reader to the caption for a detailed analysis,
but our bilinear optimization seems to produce the state-of-the-art system for finding people in video
sequences, while being an order-of-magnitude faster than previous approaches.
6.2 Human action classification
Action classification requires labeling a video sequence with one of nc action labels. We do this
by training nc 1-vs-all action templates. Template detections from a video sequence are pooled
together to output a final action label. We experimented with different voting schemes and found
that a second-layer SVM classifier defined over the maximum score (over the entire video) for each
template performed well. Our future plan is to integrate the video class directly into the training
procedure using our bilinear structural SVM formulation.
Action recognition datasets tend to be quite small and limited. For example, up until recently, the
norm consisted of scripted activities on controlled, simplistic backgrounds. We focus our results
on the relatively new UCF Sports Action dataset, consisting of non-scripted sequences of cluttered
sports videos. Unfortunately, there have been few published results on this dataset, and the initial
work [18] uses a slightly different set of classes than those which are available online. The published
average class confusion is 69.2%, obtained with leave-one-out cross validation. Using 2-fold cross
validation (and hence significantly less training data), our bilinear template achieves a score of
64.8% (Fig. 3). Again, we see a large improvement over linear and PCA-based approaches. While
not directly comparable, these results suggest our model is competitive with the state of the art.
Transfer: We use the UCF dataset to evaluate transfer learning in Fig. 4. We consider a small-sample scenario in which one has only two example video sequences of each action class. Under this
scenario, we train one bilinear model in which the feature basis Wf is optimized independently for
each action class, and another where the basis is shared across all classes. The independently-trained
model tends to overfit to the training data for multiple values of C, the slack penalty from (6). The
joint model clearly outperforms the independently-trained models.
7 Conclusion
We have introduced a generic framework for multilinear classifiers that are efficient to train with
existing linear solvers. Multilinear classifiers exploit the natural matrix and/or tensor representation
of spatiotemporal data. For example, this allows one to learn separable spatio-temporal templates
for finding objects in video. Multilinear classifiers also allow for factors to be shared across classification tasks, providing a novel form of transfer learning. In our future experiments, we wish to
demonstrate transfer between domains such as pedestrian detection and action classification.
[Figure 2 plot: precision-recall curves. Bilinear AP = 0.795, Baseline AP = 0.765, PCA AP = 0.698.]
Figure 2: Our results on the INRIA-motion database [7]. We evaluate results using average precision, using the well-established protocol outlined in [10]. The baseline curve is our implementation
of the HOG+flow template from [7]. The size of the feature vector is over 7,000 dimensions. Using
PCA to reduce the dimensionality by 10X results in a significant performance hit. Using our bilinear formulation with the same low-dimensional restriction, we obtain better performance than the
original detector while being 10X faster. We show example detections on video clips on the right.
[Figure 3 matrices: class confusion for the Linear (.518), PCA (.444), and Bilinear (.648) models over the classes Dive-Side, Golf-Back, Golf-Front, Golf-Side, Kick-Front, Kick-Side, Ride-Horse, Run-Side, Skate-Front, Swing-Bench, Swing-Side, Walk-Front.]
Figure 3: Our results on the UCF Sports Action dataset [18]. We show classification results obtained
from 2-fold cross validation. Our bilinear model provides a strong improvement over both the linear
and PCA baselines. We show class confusion matrices, where light values correspond to correct
classification. We label each matrix with the average classification rate over all classes.
[Figure 4 table - UCF Sports Action dataset, 2 training videos per class:
                 Iter1   Iter2
  Ind (C=.01)    .222    .289
  Joint (C=.1)   .267    .356
Right panels: close-ups of the Walk model at Iter1 and Iter2.]
Figure 4: We show results for transfer learning on the UCF action recognition dataset with limited training data - 2 training videos for each of 12 action classes. In the top table row, we show results for independently learning a subspace for each action class. In the bottom table row, we show results for jointly learning a single subspace that is transferred across classes. In both cases, the regularization parameter C was set on held-out data. The jointly-trained model is able to leverage training data from across all classes to learn the feature space Wf, resulting in overall better performance. On the right, we show low-rank models W = Wxy Wf^T during iterations of the coordinate descent. Note that the head and shoulders of the model are blurred out in iteration 1, which uses PCA, but after the biconvex training procedure discriminatively updates the basis, the final model is sharper at the head and shoulders.
References
[1] F.A. Al-Khayyal and J.E. Falk. Jointly constrained biconvex programming. Mathematics of Operations Research, pages 273-286, 1983.
[2] R.K. Ando and T. Zhang. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817-1853, 2005.
[3] S.P. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2004.
[4] R. Caruana. Multitask learning. Machine Learning, 28(1):41-75, 1997.
[5] K. Crammer and Y. Singer. On the algorithmic implementation of multiclass kernel-based vector machines. The Journal of Machine Learning Research, 2:265-292, 2002.
[6] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), volume 1, 2005.
[7] N. Dalal, B. Triggs, and C. Schmid. Human detection using oriented histograms of flow and appearance. Lecture Notes in Computer Science, 3952:428, 2006.
[8] Navneet Dalal. Finding People in Images and Video. PhD thesis, Institut National Polytechnique de Grenoble / INRIA Grenoble, July 2006.
[9] P. Dollár, C. Wojek, B. Schiele, and P. Perona. Pedestrian detection: A benchmark. In CVPR, June 2009.
[10] M. Everingham, L. Van Gool, C.K.I. Williams, J. Winn, and A. Zisserman. The PASCAL Visual Object Classes Challenge 2008 (VOC2008) Results. http://www.pascal-network.org/challenges/VOC/voc2008/workshop/index.html.
[11] P. Felzenszwalb, R. Girshick, D. McAllester, and D. Ramanan. Object detection with discriminatively trained part based models. PAMI, in submission.
[12] P. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable part model. In Computer Vision and Pattern Recognition, Anchorage, USA, June 2008.
[13] V. Franc and S. Sonnenburg. Optimized cutting plane algorithm for support vector machines. In Proceedings of the 25th International Conference on Machine Learning, pages 320-327. ACM, New York, NY, USA, 2008.
[14] J. Gorski, F. Pfeuffer, and K. Klamroth. Biconvex sets and optimization with biconvex functions: a survey and extensions. Mathematical Methods of Operations Research, 66(3):373-407, 2007.
[15] L.D. Lathauwer, B.D. Moor, and J. Vandewalle. A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl., 1995.
[16] N. Loeff and A. Farhadi. Scene discovery by matrix factorization. In Proceedings of the 10th European Conference on Computer Vision: Part IV, pages 451-464. Springer-Verlag, Berlin, Heidelberg, 2008.
[17] J.D.M. Rennie and N. Srebro. Fast maximum margin matrix factorization for collaborative prediction. In International Conference on Machine Learning, volume 22, page 713, 2005.
[18] M.D. Rodriguez, J. Ahmed, and M. Shah. Action MACH: a spatio-temporal Maximum Average Correlation Height filter for action recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-8, 2008.
[19] C. Schuldt, I. Laptev, and B. Caputo. Recognizing human actions: A local SVM approach. In Proceedings of the 17th International Conference on Pattern Recognition (ICPR), volume 3, 2004.
[20] A. Shashua and T. Hazan. Non-negative tensor factorization with applications to statistics and computer vision. In International Conference on Machine Learning, volume 22, page 793, 2005.
[21] N. Srebro, J.D.M. Rennie, and T.S. Jaakkola. Maximum-margin matrix factorization. Advances in Neural Information Processing Systems, 17:1329-1336, 2005.
[22] D. Tao, X. Li, X. Wu, and S.J. Maybank. General tensor discriminant analysis and Gabor features for gait recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence, 29(10):1700, 2007.
[23] J.B. Tenenbaum and W.T. Freeman. Separating style and content with bilinear models. Neural Computation, 12(6):1247-1283, 2000.
[24] I. Tsochantaridis, T. Joachims, T. Hofmann, and Y. Altun. Large margin methods for structured and interdependent output variables. Journal of Machine Learning Research, 6(2):1453, 2006.
[25] M.A.O. Vasilescu and D. Terzopoulos. Multilinear analysis of image ensembles: TensorFaces. Lecture Notes in Computer Science, pages 447-460, 2002.
[26] L. Wolf, H. Jhuang, and T. Hazan. Modeling appearances with low-rank SVM. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pages 1-6, 2007.
[27] S. Yan, D. Xu, Q. Yang, L. Zhang, X. Tang, and H.J. Zhang. Discriminant analysis with tensor representation. In IEEE Computer Society Conference on Computer Vision and Pattern Recognition, volume 1, page 526, 2005.
FEEDBACK SYNAPSE TO CONE AND LIGHT ADAPTATION
Josef Skrzypek
Machine Perception Laboratory
UCLA - Los Angeles, California 90024
INTERNET: [email protected]
Abstract
Light adaptation (LA) allows cone vision to remain functional between twilight and the brightest time of day even though, at any one time, their intensity-response (I-R) characteristic is limited to 3 log units of the stimulating light. One mechanism underlying LA was localized in the outer segment of an isolated cone (1,2). We found that by adding annular illumination, the I-R characteristic of a cone can be shifted along the intensity domain. A neural network involving a feedback synapse from horizontal cells to cones appears to keep cone sensitivity in register with the ambient light level of the periphery. An equivalent electrical circuit with three different transmembrane channels - leakage, photocurrent and feedback - was used to model the static behavior of a cone. SPICE simulation showed that interactions between the feedback synapse and the light-sensitive conductance in the outer segment can shift the I-R curves along the intensity domain, provided that the phototransduction mechanism is not saturated during the maximally hyperpolarized light response.
1 INTRODUCTION
1.1 Light response in cones
In the vertebrate retina, cones respond to a small spot of light with sustained hyperpolarization which is graded with the stimulus over three log units of intensity [5]. Mechanisms underlying this I-R relation were suggested to result from statistical superposition of invariant single-photon, hyperpolarizing responses involving sodium conductance changes that are gated by cyclic nucleotides (see 6). The shape of the response measured
in cones depends on the size of the stimulating spot of light, presumably because of peripheral signals mediated by a negative feedback synapse from horizontal cells [7,8]; the
hyperpolarizing response to the spot illumination in the central portion of the cone receptive field is antagonized by light in the surrounding periphery [11,12,13]. Thus the cone
membrane is influenced by two antagonistic effects: 1) feedback, driven by peripheral illumination, and 2) the light-sensitive conductance in the cone outer segment. Although it has been shown that key aspects of adaptation can be observed in isolated cones [1,2,3], the effects of peripheral illumination on adaptation as related to feedback input from horizontal cells have not been examined. It was reported that under appropriate stimulus conditions the resting membrane potential for a cone can be reached at two drastically different intensities for spot/annulus combinations [8,14].
We present here experimental data and modeling results which suggest that the effects of feedback from horizontal cells to cones resemble the effect of the neural component of light adaptation in cones. Specifically, peripheral signals mediated via the feedback synapse reset the cone sensitivity by instantaneously shifting the I-R curves to a new intensity domain. The full range of light response potentials is preserved without noticeable compression.
2 RESULTS
2.1 Identification of cones
Preparation and the general experimental procedure, as well as criteria for identification of cones, have been detailed in [15,8]. Several criteria were used to distinguish cones from other cells in the OPL, such as: 1) the depth of recording in the retina [11, 13], 2) the sequence of penetrations concomitant with characteristic light responses, 3) spectral response curves [18], 4) receptive field diameter [8], 5) the fastest time from dark potential to the peak of the light response [8, 15], 6) domain of I-R curves, and 7) staining with Lucifer Yellow [8, 11, 13]. These values represent averages derived from all intracellular recordings in 37 cones, 84 bipolar cells, more than 1000 horizontal cells, and more than 100 rods.
2.2 Experimental procedure
After identifying a cone, its I-R curve was recorded. Then, in the presence of center illumination (diameter = 100 um) which elicited maximal hyperpolarization from the cone, the periphery of the receptive field was stimulated with an annulus of inner diameter (ID) = 750 um and outer diameter (OD) = 1500 um. The annular intensity was adjusted to elicit depolarization of the membrane back to the dark potential level. Finally, the center intensity was increased again in a stepwise manner to antagonize the effect of peripheral illumination, and this new I-R curve was recorded.
2.3 Peripheral illumination shifts the I-R curve in cones
Sustained illumination of a cone with a small spot of light evokes a hyperpolarizing response, which after a transient peak gradually repolarizes to some steady level (Fig. 1a). When the periphery of the retina is illuminated with a ring of light in the presence of the center spot, the antagonistic component of the response can be recorded in the form of a sustained depolarization. It has been argued previously that in tiger salamander cones, this type of response is mediated via synaptic input from horizontal cells [11, 12].
Feedback Synapse to Cone and Light Adaptation
The significance of this result is that the resting membrane potential for this cone can be
reached at two drastically different intensities for spot/annulus combinations. The action of annular illumination is a fast depolarization of the membrane; the whole process is completed in a fraction of a second, unlike previous reports where the course
of light adaptation lasted for seconds or even minutes.
The response due to a spot of light, measured at the peak of hyperpolarization, increased in
magnitude with increasing intensity over three log units (Fig. 1a). The same data are plotted as open circles in Fig. 1b. Initially, an annulus presented during the central illumination
did not produce a noticeable response. Its amplitude reached a maximum when the center
spot intensity was increased to 3 log units. Further increase of the center intensity resulted in
disappearance of the annulus-elicited depolarization. Feedback action is graded with annular intensity, and it depends on the balance between the amount of light falling on the
center and the surround of the cone receptive field. The change in the cone's membrane potential, due to the combined effects of central and annular illumination, is plotted as filled circles in Fig. 1b. This new intensity-response curve is shifted along the intensity axis by
approximately two log units. Both I-R curves span approximately three log units of intensity. The I-R curve due to combined center and surround illumination can be described
by the function V = Vm I/(I+k) [16], where Vm is the peak hyperpolarization and k is a
constant intensity generating the half-maximal response. This relationship [x/(x+k)] has been suggested to be an indication of light adaptation [2]. The I-R curve plotted using peak
response values (open circles) fits a continuous line drawn according to the equation 1 - exp(-kx). This has been argued previously to indicate absence of light adaptation [2, 1].
There is little if any compression or change in gain after the shift of the cone operating
point to some new domain of intensity. The results suggest that peripheral illumination
can shift the center-spot elicited I-R curve of the cone, thus resetting the response-generating mechanism in cones.
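The contrast between the two fitted forms can be checked numerically. Below is a minimal sketch (all parameter values are illustrative, not fitted to the recordings): the Michaelis-Menten form V = Vmax·I/(I+k) shifts laterally on a log-intensity axis when k changes, with no compression of the operating range, which is the behavior described above for the shifted center-plus-surround curve.

```python
import math

def michaelis_menten(i, v_max=1.0, k=1.0):
    """Peak response V = Vmax * I / (I + k); k is the half-maximal intensity."""
    return v_max * i / (i + k)

def saturating_exp(i, v_max=1.0, k=1.0):
    """Alternative form V = Vmax * (1 - exp(-k*I)), argued in the text
    to indicate absence of light adaptation."""
    return v_max * (1.0 - math.exp(-k * i))

def half_max_log_intensity(f, lo=-6.0, hi=6.0, tol=1e-9):
    """Log10 intensity at which f reaches half of its maximum (bisection,
    assuming f is monotone increasing in intensity)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(10.0 ** mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# A 100-fold increase in k (center plus surround illumination) moves the
# half-maximal point by exactly two log units -- a pure lateral shift.
shift = (half_max_log_intensity(lambda i: michaelis_menten(i, k=100.0))
         - half_max_log_intensity(lambda i: michaelis_menten(i, k=1.0)))
```

The two-log-unit shift mirrors the shift reported for the measured I-R curves, while each curve still spans about three log units.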
2.4
Simulation of a cone model
The results presented in the previous sections imply that the maximal hyperpolarization of
the cone membrane is not limited by saturation in the phototransduction process
alone. It seems reasonable to assume that such a limit may be in part determined by the
batteries of the involved ions. Furthermore, it appears that shifting I-R curves along the intensity domain does not depend solely on the light adaptation mechanism localized to the
outer segment of a cone. To test these propositions, we developed a simplified compartmental model of a cone (Fig. 2) and exercised it using SPICE (Vladimirescu et al.,
1981).
All interactions can be modeled using Kirchhoff's current law; the membrane current is
Cm(dV/dt) + I_ionic. The leakage current is I_leak = G_leak(Vm - E_leak), the light-sensitive current is
I_light = G_light(Vm - E_light), and the feedback current is I_fb = G_fb(Vm - E_fb). The left branch
represents ohmic leakage channels (G_leak), which are associated with a constant battery
E_leak (-70 mV). The middle branch represents the light-sensitive conductance (G_light) in
series with a +1 mV ionic battery (E_light) [18]. Light adaptation effects could be incorporated here by making G_light time-varying and dependent on the internal concentration of calcium ions. In our preliminary studies we were only interested in examining whether the
shift of the I-R curve is possible and whether it would explain the disappearance of the depolarizing feedback response with hyperpolarization by the center light. This can be done with passive measurements of membrane potential amplitude. The right-most branch represents ionic channels
that are controlled by the feedback synapse. With E_fb = -65 mV [11], G_fb is a time- and
voltage-independent feedback conductance.
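At steady state (dV/dt = 0) this three-branch circuit reduces to a conductance-weighted average of the branch batteries. A short sketch follows; the conductance values are illustrative assumptions, and only the battery values (-70 mV, +1 mV, -65 mV) come from the text:

```python
def v_steady(g_leak, g_light, g_fb,
             e_leak=-70.0, e_light=1.0, e_fb=-65.0):
    """Steady-state membrane potential (mV): setting Cm*dV/dt = 0 and
    summing the branch currents gives V = (sum G_i*E_i) / (sum G_i)."""
    return ((g_leak * e_leak + g_light * e_light + g_fb * e_fb)
            / (g_leak + g_light + g_fb))

# Annular illumination hyperpolarizes horizontal cells and reduces the
# feedback conductance G_fb; because E_fb (-65 mV) lies below the model's
# resting level here, removing that drive depolarizes the cone.
v_strong_fb = v_steady(g_leak=1.0, g_light=0.5, g_fb=1.0)   # dark periphery
v_weak_fb = v_steady(g_leak=1.0, g_light=0.5, g_fb=0.01)    # bright annulus
```

With these assumed conductances, weakening the feedback branch depolarizes the model cone by several millivolts, consistent with the annulus-elicited depolarization described above.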
Skrzypek
The input resistance of an isolated cone is taken to be near 500 MOhm (270 MOhm,
Attwell et al., 1982). Assuming a specific membrane resistance of 5000 Ohm*cm^2 and
that a cone is 40 microns long and has an 8 micron diameter at the base, we get the leakage
conductance G_leak = 1/(1 GOhm). In our studies we assume G_leak to be linear, although
there is evidence that the cone membrane rectifies (Skrzypek, 1979). G_light and G_fb are assumed to be equal and to add up to 1/(1 GOhm). G_light varies with light intensity in a proportion of two to three log units of intensity for a tenfold change in conductance. This relation was derived empirically by comparing intensity-response data obtained from a
cone {Vm = f(Log I)} to {Vm = f(Log G_light)} generated by the model. The changes in G_fb
have not been calibrated to changes in light intensity of the annulus. However, we assume that G_fb cannot undergo variation larger than G_light.
Figure 3 shows the membrane potential changes generated by the model, plotted as a
function of R_light at different settings of the "feedback" resistance R_fb. With increasing
R_fb, there is a parallel shift along the abscissa without any changes in the shape of the
curve. An increase in R_light corresponds to an increase in light intensity and to an increasing magnitude of the light response, from 0 mV (E_light) all the way down to -65 mV (E_fb). An increase in R_fb is associated with increasing intensity of the annular illumination, which
causes additional hyperpolarization of the horizontal cell and consequently a decrease in
"feedback" transmitter released from HC to cones. Since we assume E_fb = -65 mV, a
more negative level than the normal resting membrane potential, a decrease in G_fb would
cause a depolarizing response in the cone. This can be observed here as a shift of the
curve along the abscissa. In our model, a hundred-fold change in feedback resistance
from 0.01 GOhm to 1 GOhm resulted in a shift of the "response-intensity" curve by approximately two log units along the abscissa. The relationship between changes in R_fb and the
shift of the "response-intensity" curve is nonlinear, and additional increases in R_fb from
1 GOhm to 100 GOhm result in decreasing shifts.
The membrane current undergoes a similar parallel shift with changes in feedback conductance. However, the photocurrent (I_light) and the feedback current (I_fb) show only saturation with increasing G_light (not shown). The limits of both I_light and I_fb are defined
by the batteries of the model. Since these currents are associated with batteries of opposite polarities, the difference between them at various settings of the feedback conductance G_fb determines the amount of shift for I_leak along the abscissa. The compression in
the shift of "response-intensity" curves at smaller values of G_fb results from smaller and
smaller current flowing through the feedback branch of the circuit. Consequently, smaller
changes in G_fb are required to get a response in the dark than in the light.
The shifting of the "response-intensity" curves generated by our model is not due to light
adaptation as described by [1,2] although it is possible that feedback effects could be involved in modulating light-sensitive channels. Our model suggests that in order to generate additional light response after the membrane of a cone was fully hyperpolarized by
light, it is insufficient to have a feedback effect alone that would depolarize the cone
membrane. Light sensitive channels that were not previously closed [18] must also be
available.
3 DISCUSSION
The results presented here suggest that synaptic feedback from horizontal cells to cones
could contribute to the process of light adaptation at the photoreceptor level. A complete
explanation of the underlying mechanism requires further studies, but the results seem to
suggest that depolarization of the cone membrane by peripheral illumination resets the
response-generating process in the cone. This result can be explained within the framework of the current hypothesis of light adaptation, recently summarized by [6].
It is conceivable that feedback transmitter released from horizontal cells in the dark
opens channels to ions with a reversal potential near -65 mV [11]. Hence, hyperpolarizing the
cone membrane by increasing center spot intensity would reduce the depolarizing feedback response as the cone nears the battery of the involved ions. An additional increase in annular
illumination further reduces the feedback transmitter and the associated feedback conductance, thus pushing the cone's membrane potential away from the "feedback" battery.
Eventually, at some values of the center intensity, the cone membrane is so close to -65 mV
that no change in feedback conductance can produce a depolarizing response.
ACKNOWLEDGEMENTS
Special gratitude to Prof. Werblin for providing a superb research environment and generous support during the early part of this project. We acknowledge partial support by NSF
grant ECS-8307553, ARCO-UCLA Grant #1, UCLA-SEASNET Grant KF-21, MICRO-Hughes grant #541122-57442, ONR grant #NOOOI4-86-K-0395, ARO grant DAAL0388-K-00S2.
REFERENCES
1. Nakatani, K., & Yau, K.W. (1988). Calcium and light adaptation in retinal rods and
cones. Nature, 334, 69-71.
2. Matthews, H.R., Murphy, R.L.W., Fain, G.L., & Lamb, T.D. (1988). Photoreceptor
light adaptation is mediated by cytoplasmic calcium concentration. Nature, 334, 67-69.
3. Normann, R.A., & Werblin, F.S. (1974). Control of retinal sensitivity. I. Light and
dark-adaptation of vertebrate rods and cones. J. Physiol., 63, 37-61.
4. Werblin, F.S., & Dowling, J.E. (1969). Organization of the retina of the mudpuppy,
Necturus maculosus. II. Intracellular recording. J. Neurophysiol., 32, 315-338.
5. Pugh, E.N., & Altman, J. (1988). Role for calcium in adaptation. Nature, 334, 16-17.
6. O'Bryan, P.M. (1973). Properties of the depolarizing synaptic potential evoked by peripheral
illumination in cones of the turtle retina. J. Physiol. Lond., 253, 207-223.
7. Skrzypek, J. (1979). Ph.D. Thesis, University of California at Berkeley.
8. Skrzypek, J., & Werblin, F.S. (1983). Lateral interactions in absence of feedback to
cones. J. Neurophysiol., 49, 1007-1016.
9. Skrzypek, J., & Werblin, F.S. (1978). All horizontal cells have center-surround antagonistic receptive fields. ARVO Abstr.
10. Lasansky, A. (1981). Synaptic action mediating cone responses to annular illumination in the
retina of the larval tiger salamander. J. Physiol. Lond., 310, 206-214.
11. Skrzypek, J. (1984). Electrical coupling between horizontal cell bodies in the tiger
salamander retina. Vision Res., 24, 701-711.
12. Naka, K.I., & Rushton, W.A.H. (1967). The generation and spread of S-potentials in
fish (Cyprinidae). J. Physiol., 192, 437-461.
13. Attwell, D., Werblin, F.S., & Wilson, M. (1982a). The properties of single cones isolated from the tiger salamander retina. J. Physiol., 328, 259-283.
Fig. 2 Equivalent circuit model of a cone based on three different transmembrane channels. The ohmic leakage channel consists of a constant conductance G_leak in series with a
constant battery E_leak. Light-sensitive channels are represented in the middle branch by
G_light. The battery E_light represents the reversal potential for the light response, at approximately
0 mV. The feedback synapse is shown in the right-most branch as a series combination of
G_fb and the battery E_fb = -65 mV, representing the reversal potential for the annulus-elicited
depolarizing response measured in a cone.
[Figure 3 plot omitted: V_m versus Log R_light for several settings of R_fb, from 0.01 GOhm to 10 GOhm; see caption below.]
Fig. 3 Plot of the membrane potential versus the logarithm of the light-sensitive resistance.
The data were synthesized with the cone model simulated in SPICE. Both current and
voltage curves can be fitted by an x/(x+k) relation (not shown) at all the different settings of G_fb
(R_fb) indicated in the legend. The shift of the curves, measured at the 1/2-maximal value
(k = x), spans about two log units. With increasing settings of R_fb (10 GOhm), the curves begin to cross (V_m at -65 mV), signifying a decreasing contribution of the "feedback" synapse.
Fig. 1 (a) Series of responses to a combination of center spot and annulus. Surround illumination (S) was fixed at -3.2 l.u.
throughout the experiment. Center spot intensity (C) was increased in 0.5 l.u. steps as indicated by the numbers near each trace.
In the dark (upper-most trace) surround illumination had no measurable effect on the cone membrane potential. The annulus-elicited
depolarizing response increased with intensity in the center up to about -3 l.u. Further increase of the spot intensity diminished
the surround response. A plot of the peak hyperpolarizing response versus center spot intensity in log units is shown in (b) as open
circles. It fits the dashed curve drawn according to the equation 1 - exp(-kx). The curve indicated by filled circles represents the
membrane potential measurements taken in the middle of the depolarizing response. These data can be approximated by a continuous curve derived from x/(x+k). All membrane potential measurements are made with respect to the resting level in the dark.
This result shows that in the presence of peripheral illumination, when the feedback is activated, the membrane potential follows the
intensity-response curve which is shifted along the Log I axis.
Measuring Invariances in Deep Networks
Ian J. Goodfellow, Quoc V. Le, Andrew M. Saxe, Honglak Lee, Andrew Y. Ng
Computer Science Department
Stanford University
Stanford, CA 94305
{ia3n,quocle,asaxe,hllee,ang}@cs.stanford.edu
Abstract
For many pattern recognition tasks, the ideal input feature would be invariant to
multiple confounding properties (such as illumination and viewing angle, in computer vision applications). Recently, deep architectures trained in an unsupervised
manner have been proposed as an automatic method for extracting useful features.
However, it is difficult to evaluate the learned features by any means other than
using them in a classifier. In this paper, we propose a number of empirical tests
that directly measure the degree to which these learned features are invariant to
different input transformations. We find that stacked autoencoders learn modestly
increasingly invariant features with depth when trained on natural images. We find
that convolutional deep belief networks learn substantially more invariant features
in each layer. These results further justify the use of "deep" vs. "shallower" representations, but suggest that mechanisms beyond merely stacking one autoencoder
on top of another may be important for achieving invariance. Our evaluation metrics can also be used to evaluate future work in deep learning, and thus help the
development of future algorithms.
1
Introduction
Invariance to abstract input variables is a highly desirable property of features for many detection
and classification tasks, such as object recognition. The concept of invariance implies a selectivity
for complex, high level features of the input and yet a robustness to irrelevant input transformations.
This tension between selectivity and robustness makes learning invariant features nontrivial. In the
case of object recognition, an invariant feature should respond only to one stimulus despite changes
in translation, rotation, complex illumination, scale, perspective, and other properties. In this paper,
we propose to use a suite of "invariance tests" that directly measure the invariance properties of
features; this gives us a measure of the quality of features learned in an unsupervised manner by a
deep learning algorithm.
Our work also seeks to address the question: why are deep learning algorithms useful? Bengio and
LeCun gave a theoretical answer to this question, in which they showed that a deep architecture is
necessary to represent many functions compactly [1]. A second answer can also be found in such
work as [2, 3, 4, 5], which shows that such architectures lead to useful representations for classification. In this paper, we give another, empirical, answer to this question: namely, we show that
with increasing depth, the representations learned can also enjoy an increased degree of invariance.
Our observations lend credence to the common view of invariances to minor shifts, rotations and
deformations being learned in the lower layers, and being combined in the higher layers to form
progressively more invariant features.
In computer vision, one can view object recognition performance as a measure of the invariance of
the underlying features. While such an end-to-end system performance measure has many benefits,
it can also be expensive to compute and does not give much insight into how to directly improve
representations in each layer of deep architectures. Moreover, it cannot identify specific invariances
1
that a feature may possess. The test suite presented in this paper provides an alternative that can
identify the robustness of deep architectures to specific types of variations. For example, using
videos of natural scenes, our invariance tests measure the degree to which the learned representations
are invariant to 2-D (in-plane) rotations, 3-D (out-of-plane) rotations, and translations. Additionally,
such video tests have the potential to examine changes in other variables such as illumination. We
demonstrate that using videos gives similar results to the more traditional method of measuring
responses to sinusoidal gratings; however, the natural video approach enables us to test invariance
to a wide range of transformations while the grating test only allows changes in stimulus position,
orientation, and frequency.
Our proposed invariance measure is broadly applicable to evaluating many deep learning algorithms
for many tasks, but the present paper will focus on two different algorithms applied to computer
vision. First, we examine the invariances of stacked autoencoder networks [2]. These networks
were shown by Larochelle et al. [3] to learn useful features for a range of vision tasks; this suggests
that their learned features are significantly invariant to the transformations present in those tasks.
Unlike the artificial data used in [3], however, our work uses natural images and natural video
sequences, and examines more complex variations such as out-of-plane changes in viewing angle.
We find that when trained under these conditions, stacked autoencoders learn increasingly invariant
features with depth, but the effect of depth is small compared to other factors such as regularization.
Next, we show that convolutional deep belief networks (CDBNs) [5], which are hand-designed to be
invariant to certain local image translations, do enjoy dramatically increasing invariance with depth.
This suggests that there is a benefit to using deep architectures, but that mechanisms besides simple
stacking of autoencoders are important for gaining increasing invariance.
2
Related work
Deep architectures have shown significant promise as a technique for automatically learning features for recognition systems. Deep architectures consist of multiple layers of simple computational
elements. By combining the output of lower layers in higher layers, deep networks can represent
progressively more complex features of the input. Hinton et al. introduced the deep belief network,
in which each layer consists of a restricted Boltzmann machine [4]. Bengio et al. built a deep network using an autoencoder neural network in each layer [2, 3, 6]. Ranzato et al. and Lee et al.
explored the use of sparsity regularization in autoencoding energy-based models [7, 8] and sparse
convolutional DBNs with probabilistic max-pooling [5] respectively. These networks, when trained
subsequently in a discriminative fashion, have achieved excellent performance on handwritten digit
recognition tasks. Further, Lee et al. and Raina et al. show that deep networks are able to learn
good features for classification tasks even when trained on data that does not include examples of
the classes to be recognized [5, 9].
Some work in deep architectures draws inspiration from the biology of sensory systems. The human
visual system follows a similar hierarchical structure, with higher levels representing more complex
features [10]. Lee et al., for example, compared the response properties of the second layer of a
sparse deep belief network to V2, the second stage of the visual hierarchy [11]. One important property of the visual system is a progressive increase in the invariance of neural responses in higher
layers. For example, in V1, complex cells are invariant to small translations of their inputs. Higher
in the hierarchy in the medial temporal lobe, Quiroga et al. have identified neurons that respond with
high selectivity to, for instance, images of the actress Halle Berry [12]. These neurons are remarkably invariant to transformations of the image, responding equally well to images from different
perspectives, at different scales, and even responding to the text "Halle Berry." While we do not
know exactly the class of all stimuli such neurons respond to (if tested on a larger set of images, they
may well turn out to respond also to other stimuli than Halle Berry related ones), they nonetheless
show impressive selectivity and robustness to input transformations.
Computational models such as the neocognitron [13], HMAX model [14], and Convolutional Network [15] achieve invariance by alternating layers of feature detectors with local pooling and subsampling of the feature maps. This approach has been used to endow deep networks with some
degree of translation invariance [8, 5]. However, it is not clear how to explicitly imbue models with
more complicated invariances using this fixed architecture. Additionally, while deep architectures
provide a task-independent method of learning features, convolutional and max-pooling techniques
are somewhat specialized to visual and audio processing.
2
3
Network architecture and optimization
We train all of our networks on natural images collected separately (and in geographically different
areas) from the videos used in the invariance tests. Specifically, the training set comprises a set of
still images taken in outdoor environments free from artificial objects, and was not designed to relate
in any way to the invariance tests.
3.1 Stacked autoencoder
The majority of our tests focus on the stacked autoencoder of Bengio et al. [2], which is a deep
network consisting of an autoencoding neural network in each layer. In the single-layer case, in
response to an input pattern x ∈ R^n, the activation of each neuron h_i, i = 1, ..., m, is computed as
h(x) = tanh(W1 x + b1),
where h(x) ∈ R^m is the vector of neuron activations, W1 ∈ R^{m×n} is a weight matrix, b1 ∈ R^m is a
bias vector, and tanh is the hyperbolic tangent applied componentwise. The network output is then
computed as
x̂ = tanh(W2 h(x) + b2),
where x̂ ∈ R^n is a vector of output values, W2 ∈ R^{n×m} is a weight matrix, and b2 ∈ R^n is a bias
vector. Given a set of p input patterns x^(i), i = 1, ..., p, the weight matrices W1 and W2 are adapted
using backpropagation [16, 17, 18] to minimize the reconstruction error Σ_{i=1}^p ||x^(i) − x̂^(i)||^2.
Following [2], we successively train up layers of the network in a greedy layerwise fashion. The
first layer receives a 14 × 14 patch of an image as input. After it achieves acceptable levels of
reconstruction error, a second layer is added, then a third, and so on.
In some of our experiments, we use the method of [11], and constrain the expected activation of the
hidden units to be sparse. We never constrain W1 = W2^T, although we found this to approximately
hold in practice.
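The single-layer computation above can be sketched directly in NumPy. Everything below is illustrative: the weights are random rather than trained, random vectors stand in for image patches, and the backpropagation update and sparsity constraint are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m, p = 196, 50, 32          # input dim (14x14 patch), hidden units, batch size
W1 = 0.1 * rng.standard_normal((m, n))
b1 = np.zeros(m)
W2 = 0.1 * rng.standard_normal((n, m))
b2 = np.zeros(n)

def encode(x):
    """Hidden activations h(x) = tanh(W1 x + b1)."""
    return np.tanh(W1 @ x + b1)

def decode(h):
    """Reconstruction x_hat = tanh(W2 h + b2)."""
    return np.tanh(W2 @ h + b2)

X = rng.standard_normal((p, n))        # stand-in for image patches
# Reconstruction error that backpropagation would minimize.
recon_err = sum(np.sum((x - decode(encode(x))) ** 2) for x in X)
```

In the greedy layerwise scheme, once this layer reconstructs well, its hidden activations `encode(x)` become the inputs to the next autoencoder in the stack.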
3.2 Convolutional Deep Belief Network
We also test a CDBN [5] that was trained using two hidden layers. Each layer includes a collection
of ?convolution? units as well as a collection of ?max-pooling? units. Each convolution unit has
a receptive field size of 10x10 pixels, and each max-pooling unit implements a probabilistic max-like operation over four (i.e., 2x2) neighboring convolution units, giving each max-pooling unit an
overall receptive field size of 11x11 pixels in the first layer and 31x31 pixels in the second layer.
The model is regularized in a way that the average hidden unit activation is sparse. We also use a
small amount of L2 weight decay.
Because the convolution units share weights and because their outputs are combined in the max-pooling units, the CDBN is explicitly designed to be invariant to small amounts of image translation.
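The translation invariance conferred by pooling can be seen in a toy example. The sketch below uses a plain deterministic max over 2x2 blocks as a stand-in for the CDBN's probabilistic max-pooling; weight sharing and the probabilistic formulation are not modeled.

```python
import numpy as np

def max_pool_2x2(fmap):
    """Max over non-overlapping 2x2 blocks of a 2-D feature map."""
    h, w = fmap.shape
    trimmed = fmap[:h - h % 2, :w - w % 2]          # drop odd remainder rows/cols
    return trimmed.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# A one-pixel shift of an isolated activation within a pooling block
# leaves the pooled output unchanged -- the source of the built-in
# translation invariance.
a = np.zeros((4, 4)); a[0, 0] = 1.0
b = np.zeros((4, 4)); b[1, 1] = 1.0   # same 2x2 block, shifted by one pixel
```

Shifts that cross a block boundary do change the pooled map, which is why stacking several pooling layers is needed for invariance to larger translations.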
4
Invariance measure
An ideal feature for pattern recognition should be both robust and selective. We interpret the hidden
units as feature detectors that should respond strongly when the feature they represent is present in
the input, and otherwise respond weakly when it is absent. An invariant neuron, then, is one that
maintains a high response to its feature despite certain transformations of its input. For example,
a face selective neuron might respond strongly whenever a face is present in the image; if it is
invariant, it might continue to respond strongly even as the image rotates.
Building on this intuition, we consider hidden unit responses above a certain threshold to be firing,
that is, to indicate the presence of some feature in the input. We adjust this threshold to ensure that
the neuron is selective, and not simply always active. In particular we choose a separate threshold
for each hidden unit such that all units fire at the same rate when presented with random stimuli.
After identifying an input that causes the neuron to fire, we can test the robustness of the unit by
calculating its firing rate in response to a set of transformed versions of that input.
More formally, a hidden unit i is said to fire when s_i h_i(x) > t_i, where t_i is a threshold chosen
by our test for that hidden unit and s_i ∈ {−1, 1} gives the sign of that hidden unit's values. The
sign term s_i is necessary because, in general, hidden units are as likely to use low values as to
use high values to indicate the presence of the feature that they detect. We therefore choose s_i to
maximize the invariance score. For hidden units that are regularized to be sparse, we assume that
s_i = 1, since their mean activity has been regularized to be low. We define the indicator function
f_i(x) = 1{s_i h_i(x) > t_i}, i.e., it is equal to one if the neuron fires in response to input x, and zero
otherwise.
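As a concrete illustration of this firing rule, the indicator and the per-unit thresholds might be computed as below (a minimal NumPy sketch; the array shapes and function names are our own, and only the rule f_i(x) = 1{s_i h_i(x) > t_i} and the equal-firing-rate threshold choice come from the text):

```python
import numpy as np

def choose_thresholds(signed_responses, target_rate=0.01):
    """Pick a threshold t_i per hidden unit so that each unit fires on
    roughly a target_rate fraction of random stimuli.

    signed_responses: array of shape (num_stimuli, num_units) holding
    s_i * h_i(x) evaluated on random inputs x drawn from P(x).
    """
    # The (1 - target_rate) empirical quantile of each unit's responses
    # makes that unit fire on about target_rate of the random stimuli,
    # so all units end up firing at the same rate.
    return np.quantile(signed_responses, 1.0 - target_rate, axis=0)

def firing_indicator(h, s, t):
    """f_i(x) = 1{ s_i * h_i(x) > t_i }, evaluated elementwise per unit."""
    return (s * h > t).astype(float)
```

For units that repeat the same activation value, an empirical quantile can only guarantee a firing rate of at least the target, which matches the caveat about choosing the smallest t_i with G(i) > 0.01.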
A transformation function τ(x, γ) transforms a stimulus x into a new, related stimulus, where the
degree of transformation is parametrized by γ ∈ R. (One could also imagine a more complex
transformation parametrized by γ ∈ R^n.) In order for a function τ to be useful with our invariance
measure, |γ| should relate to the semantic dissimilarity between x and τ(x, γ). For example, γ might
be the number of degrees by which x is rotated.
A local trajectory T(x) is a set of stimuli that are semantically similar to some reference stimulus
x, that is

T(x) = {τ(x, γ) | γ ∈ Γ}

where Γ is a set of transformation amounts of limited size, for example, all rotations of less than 15
degrees.
The global firing rate is the firing rate of a hidden unit when applied to stimuli drawn randomly
from a distribution P(x):

G(i) = E[f_i(x)],

where P(x) is a distribution over the possible inputs x defined for each implementation of the test.
Using these definitions, we can measure the robustness of a hidden unit as follows. We define the
set Z as a set of inputs that activate h_i near maximally. The local firing rate is the firing rate of a
hidden unit when it is applied to local trajectories surrounding inputs z ∈ Z that maximally activate
the hidden unit,

L(i) = (1/|Z|) Σ_{z∈Z} (1/|T(z)|) Σ_{x∈T(z)} f_i(x),

i.e., L(i) is the proportion of transformed inputs that the neuron fires in response to, and hence is a
measure of the robustness of the neuron's response to the transformation τ.
Our invariance score for a hidden unit h_i is given by

S(i) = L(i) / G(i).

The numerator is a measure of the hidden unit's robustness to transformation τ near the unit's optimal inputs, and the denominator ensures that the neuron is selective and not simply always active.
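Given binary firing indicators, the local rate L(i), global rate G(i), and score S(i) can be assembled as in this sketch (array shapes and names are assumed; trajectories of equal length are also assumed, so the double average over z and T(z) reduces to a mean over two axes):

```python
import numpy as np

def invariance_scores(F_random, F_traj):
    """Per-unit invariance score S(i) = L(i) / G(i).

    F_random: (num_random_stimuli, num_units) firing indicators f_i(x)
              on stimuli drawn from P(x).
    F_traj:   (num_ref_inputs, traj_len, num_units) firing indicators on
              the local trajectories T(z) around each z in Z.
    """
    G = F_random.mean(axis=0)        # global firing rate G(i)
    L = F_traj.mean(axis=(0, 1))     # local firing rate L(i)
    return L / G
```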
In our tests, we tried to select the threshold t_i for each hidden unit so that it fires one percent of the
time in response to random inputs, that is, G(i) = 0.01. For hidden units that frequently repeat the
same activation value (up to machine precision), it is sometimes not possible to choose t_i such that
G(i) = 0.01 exactly. In such cases, we choose the smallest value of t_i such that G(i) > 0.01.
Each of the tests presented in the paper is implemented by providing a different definition of P(x),
τ(x, γ), and Γ.
S(i) gives the invariance score for a single hidden unit. The invariance score Inv_p(N) of a network
N is given by the mean of S(i) over the top-scoring proportion p of hidden units in the deepest layer
of N. We discard the (1 − p) worst hidden units because different subpopulations of units may be
invariant to different transformations. Reporting the mean of all unit scores would strongly penalize
networks that discover several hidden units that are invariant to transformation τ but do not devote
more than proportion p of their hidden units to such a task.
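The network-level score, the mean of S(i) over the top-scoring proportion p of units, is then a near one-liner (a sketch; the rounding of p times the number of units to an integer count is our own choice):

```python
import numpy as np

def network_score(unit_scores, p=0.2):
    """Inv_p(N): mean of S(i) over the top-scoring proportion p of the
    hidden units in the deepest layer, discarding the (1 - p) worst."""
    unit_scores = np.asarray(unit_scores, dtype=float)
    k = max(1, int(round(p * unit_scores.size)))
    return np.sort(unit_scores)[-k:].mean()
```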
Finally, note that while we use this metric to measure invariances in the visual features learned
by deep networks, it could be applied to virtually any kind of feature in virtually any application
domain.
5 Grating test
Our first invariance test is based on the response of neurons to synthetic images. Following such authors as Berkes et al.[19], we systematically vary the parameters used to generate images of gratings.
We use as input an image I of a grating, with image pixel intensities given by
I(x, y) = b + a sin(ω(x cos(θ) + y sin(θ) − φ)),

where ω is the spatial frequency, θ is the orientation of the grating, and φ is the phase. To implement our invariance measure, we define P(x) as a distribution over grating images. We measure
invariance to translation by defining τ(x, γ) to change φ by γ. We measure invariance to rotation by
defining τ(x, γ) to change θ by γ.1
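A sketch of the grating stimulus and the two trajectory types follows (function names, the output size, and the pixel coordinate convention are our own; the image formula is the one given above):

```python
import numpy as np

def grating(shape, omega, theta, phi, a=1.0, b=0.0):
    """I(x, y) = b + a*sin(omega*(x*cos(theta) + y*sin(theta) - phi))."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    return b + a * np.sin(omega * (xs * np.cos(theta) + ys * np.sin(theta) - phi))

def translate_grating(params, gamma, shape=(32, 32)):
    """tau(x, gamma) for the translation test: shift the phase phi by gamma."""
    omega, theta, phi = params
    return grating(shape, omega, theta, phi + gamma)

def rotate_grating(params, gamma, shape=(32, 32)):
    """tau(x, gamma) for the rotation test: shift the orientation theta by gamma."""
    omega, theta, phi = params
    return grating(shape, omega, theta + gamma, phi)
```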
6 Natural video test
While the grating-based invariance test allows us to systematically vary the parameters used to
generate the images, it shares the difficulty faced by a number of other methods for quantifying
invariance that are based on synthetic (or nearly synthetic) data [19, 20, 21]: it is difficult to generate
data that systematically varies a large variety of image parameters.
Our second suite of invariance tests uses natural video data. Using this method, we will measure
the degree to which various learned features are invariant to a wide range of more complex image
parameters. This will allow us to perform quantitative comparisons of representations at each layer
of a deep network. We also verify that the results using this technique align closely with those
obtained with the grating-based invariance tests.
6.1 Data collection
Our dataset consists of natural videos containing common image transformations such as translations, 2-D (in-plane) rotations, and 3-D (out-of-plane) rotations. In contrast to labeled datasets like
the NORB dataset [21] where the viewpoint changes in large increments between successive images,
our videos are taken at sixty frames per second, and thus are suitable for measuring more modest
invariances, as would be expected in lower layers of a deep architecture. After collection, the images
are reduced in size to 320 by 180 pixels and whitened by applying a band pass filter. Finally, we
adjust the contrast of the whitened images with a scaling constant that varies smoothly over time
and attempts to make each image use as much of the dynamic range of the image format as possible.
Each video sequence contains at least one hundred frames. Some video sequences contain motion
that is only represented well near the center of the image; for example, 3-D (out-of-plane) rotation
about an object in the center of the field of view. In these cases we cropped the videos tightly in
order to focus on the relevant transformation.
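The contrast adjustment step can be illustrated per frame as below (a simplified sketch: the text describes a scaling constant that varies smoothly over time, which this per-frame version ignores; the function and parameter names are our own):

```python
import numpy as np

def rescale_contrast(frame, lo=-1.0, hi=1.0):
    """Linearly map a whitened frame onto [lo, hi] so that each image
    uses as much of the available dynamic range as possible."""
    fmin, fmax = float(frame.min()), float(frame.max())
    if fmax == fmin:  # flat frame: place it at the midpoint of the range
        return np.full(frame.shape, (lo + hi) / 2.0)
    return lo + (hi - lo) * (frame - fmin) / (fmax - fmin)
```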
6.2 Invariance calculation
To implement our invariance measure using natural images, we define P(x) as a uniform distribution
over image patches contained in the test videos, and τ(x, γ) to be the image patch at the same
image location as x but occurring γ video frames later in time. We define Γ = {−5, . . . , 5}. To
measure invariance to different types of transformation, we simply use videos that involve each type
of transformation. This obviates the need to define a complex τ capable of synthetically performing
operations such as 3-D rotation.
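Under these definitions, extracting a local trajectory from a video is just temporal indexing (a sketch; the tensor layout and names are assumed):

```python
import numpy as np

def local_trajectory(video, t, y, x, patch=8, offsets=range(-5, 6)):
    """T(z) for a reference patch z at frame t and location (y, x): the
    patches at the same image location, gamma frames away in time, for
    each gamma in Gamma = {-5, ..., 5}.

    video: array of shape (num_frames, height, width).
    """
    return [video[t + g, y:y + patch, x:x + patch] for g in offsets]
```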
7 Results
7.1 Stacked autoencoders
7.1.1 Relationship between grating test and natural video test
Sinusoidal gratings are already used as a common reference stimulus. To validate our approach
of using natural videos, we show that videos involving translation give similar test results to the
phase variation grating test. Fig. 1 plots the invariance score for each of 378 one layer autoencoders
regularized with a range of sparsity and weight decay parameters (shown in Fig. 3). We were not able
to find as close of a correspondence between the grating orientation test and natural videos involving
2-D (in-plane) rotation. Our 2-D rotations were captured by hand-rotating a video camera in natural
environments, which introduces small amounts of other types of transformations. To verify that
the problem is not that rotation when viewed far from the image center resembles translation, we
compare the invariance test scores for translation and for rotation in Fig. 2. The lack of any clear
1 Details: We define P(x) as a uniform distribution over patches produced by varying ω ∈ {2, 4, 6, 8},
θ ∈ {0, . . . , π} in steps of π/20, and φ ∈ {0, . . . , π} in steps of π/20. After identifying a grating that
strongly activates the neuron, further local gratings T(x) are generated by varying one parameter while holding
all other optimal parameters fixed. For the translation test, local trajectories T(x) are generated by modifying
φ from the optimal value φ_opt to φ = φ_opt ± {0, . . . , π} in steps of π/20, where φ_opt is the optimal grating
phase shift. For the rotation test, local trajectories T(x) are generated by modifying θ from the optimal value
θ_opt to θ = θ_opt ± {0, . . . , π} in steps of π/40, where θ_opt is the optimal grating orientation.
Figure 1 (Grating and natural video test comparison; axes: grating phase test score vs. natural translation test score): Videos involving translation give similar test results to synthetic videos of gratings with varying phase.

Figure 2 (Natural 2-D rotation and translation test; axes: natural translation test score vs. natural 2-D rotation test score): We verify that our translation and 2-D rotation videos do indeed capture different transformations.
Figure 3 (Layer 1 natural video test; invariance score plotted against log10 weight decay and log10 target mean activation): Our invariance measure selects networks that learn edge detectors resembling Gabor functions as the maximally invariant single-layer networks. Unregularized networks that learn high-frequency weights also receive high scores, but are not able to match the scores of good edge detectors. Degenerate networks in which every hidden unit learns essentially the same function tend to
receive very low scores.
trend makes it obvious that while our 2-D rotation videos do not correspond exactly to rotation, they
are certainly not well-approximated by translation.
7.1.2 Pronounced effect of sparsity and weight decay
We trained several single-layer autoencoders using sparsity regularization with various target mean
activations and amounts of weight decay. For these experiments, we averaged the invariance scores
of all the hidden units to form the network score, i.e., we used p = 1. Due to the presence of the
sparsity regularization, we assume si = 1 for all hidden units. We found that sparsity and weight
decay have a large effect on the invariance of a single-layer network. In particular, there is a semicircular ridge trading sparsity and weight decay where invariance scores are high. We interpret this
to be the region where the problem is constrained enough that the autoencoder must throw away
some information, but is still able to extract meaningful patterns from its input. These results are
visualized in Fig. 3. We find that a network with no regularization obtains a score of 25.88, and the
best-scoring network receives a score of 32.41.
7.1.3 Modest improvements with depth
To investigate the effect of depth on invariance, we chose to extensively cross-validate several depths
of autoencoders using only weight decay. The majority of successful image classification results in
Figure 4: Left to right: weight visualizations from layer 1, layer 2, and layer 3 of the autoencoders;
layer 1 and layer 2 of the CDBN. Autoencoder weight images are taken from the best autoencoder at
each depth. All weight images are contrast normalized independently but plotted on the same spatial
scale. Weight images in deeper layers are formed by making linear combinations of weight images
in shallower layers. This approximates the function computed by each unit as a linear function.
the literature do not use sparsity, and cross-validating only a single parameter frees us to sample the
search space more densely. We trained a total of 73 networks with weight decay at each layer set to
a value from {10, 1, 10^{−1}, 10^{−2}, 10^{−3}, 10^{−5}, 0}. For these experiments, we averaged the invariance
scores of the top 20% of the hidden units to form the network score, i.e., we used p = .2, and chose
si for each hidden unit to maximize the invariance score, since there was no sparsity regularization
to impose a sign on the hidden unit values.
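Without a sparsity prior to pin down the sign, s_i can be chosen per unit by simply trying both signs and keeping the better score (a sketch; `score_fn` is an assumed callable mapping a candidate sign to the per-unit invariance scores):

```python
import numpy as np

def pick_signs(score_fn):
    """Choose s_i in {-1, +1} per hidden unit to maximize the invariance
    score, as done when no sign is imposed by regularization."""
    s_pos = np.asarray(score_fn(+1), dtype=float)
    s_neg = np.asarray(score_fn(-1), dtype=float)
    signs = np.where(s_pos >= s_neg, 1, -1)
    return signs, np.maximum(s_pos, s_neg)
```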
After performing this grid search, we trained 100 additional copies of the network with the best
mean invariance score at each depth, holding the weight decay parameters constant and varying
only the random weights used to initialize training. We found that the improvement with depth was
highly significant statistically (see Fig. 5). However, the magnitude of the increase in invariance is
limited compared to the increase that can be gained with the correct sparsity and weight decay.
7.2 Convolutional Deep Belief Networks

We also ran our invariance tests on a two layer CDBN. This provides a measure of the effectiveness
of hard-wired techniques for achieving invariance, including convolution and max-pooling. The
results are summarized in Table 1. These results cannot be compared directly to the results for
autoencoders, because of the different receptive field sizes. The receptive field sizes in the CDBN
are smaller than those in the autoencoder for the lower layers, but larger than those in the autoencoder
for the higher layers due to the pooling effect. Note that the greatest relative improvement comes in
the natural image tests, which presumably require greater sophistication than the grating tests. The
single test with the greatest relative improvement is the 3-D (out-of-plane) rotation test. This is the
most complex transformation included in our tests, and it is where depth provides the greatest
percentagewise increase.

Figure 5 (panels: mean invariance, translation, 2-D rotation, 3-D rotation; invariance score vs. layer 1-3): To verify that the improvement in invariance score of the best network at each layer is an effect of the network architecture rather than the random initialization of the weights, we retrained the best network of each depth 100 times. We find that the increase in the mean is statistically significant with p < 10^{−60}. Looking at the scores for individual invariances, we see that the deeper networks trade a small amount of translation invariance for a larger amount of 2-D (in-plane) rotation and 3-D (out-of-plane) rotation invariance. All plots are on the same scale but with different baselines so that the worst invariance score appears at the same height in each plot.

8 Discussion and conclusion

In this paper, we presented a set of tests for measuring invariances in deep networks. We defined a
general formula for a test metric, and demonstrated how to implement it using synthetic grating
images as well as natural videos which reveal more types of invariances than just 2-D (in-plane)
rotation, translation and frequency.

At the level of a single hidden unit, our firing rate invariance measure requires learned features to balance high local firing rates with low global firing rates. This concept resembles the
trade-off between precision and recall in a detection problem. As learning algorithms become more
Test                   Layer 1   Layer 2   % change
Grating phase          68.7      95.3      38.2
Grating orientation    52.3      77.8      48.7
Natural translation    15.2      23.0      51.0
Natural 3-D rotation   10.7      19.3      79.5

Table 1: Results of the CDBN invariance tests.
advanced, another appropriate measure of invariance may be a hidden unit's invariance to object
identity. As an initial step in this direction, we attempted to score hidden units by their mutual
information with categories in the Caltech 101 dataset [22]. We found that none of our networks
gave good results. We suspect that current learning algorithms are not yet sophisticated enough to
learn, from only natural images, individual features that are highly selective for specific Caltech 101
categories, but this ability will become measurable in the future.
At the network level, our measure requires networks to have at least some subpopulation of hidden
units that are invariant to each type of transformation. This is accomplished by using only the
top-scoring proportion p of hidden units when calculating the network score. Such a qualification
is necessary to give high scores to networks that decompose the input into separate variables. For
example, one very useful way of representing a stimulus would be to use some subset of hidden units
to represent its orientation, another subset to represent its position, and another subset to represent
its identity. Even though this would be an extremely powerful feature representation, a value of p
set too high would result in penalizing some of these subsets for not being invariant.
We also illustrated extensive findings made by applying the invariance test on computer vision tasks.
However, the definition of our metric is sufficiently general that it could easily be used to test, for
example, invariance of auditory features to rate of speech, or invariance of textual features to author
identity.
A surprising finding in our experiments with visual data is that stacked autoencoders yield only
modest improvements in invariance as depth increases. This suggests that while depth is valuable,
mere stacking of shallow architectures may not be sufficient to exploit the full potential of deep
architectures to learn invariant features.
Another interesting finding is that by incorporating sparsity, networks can become more invariant.
This suggests that, in the future, a variety of mechanisms should be explored in order to learn better
features. For example, one promising approach that we are currently investigating is the idea of
learning slow features [19] from temporal data.
We also document that explicit approaches to achieving invariance such as max-pooling and weight-sharing in CDBNs are currently successful strategies for achieving invariance. This is not surprising
given the fact that invariance is hard-wired into the network, but it validates the fact that our metric
faithfully measures invariances. It is not obvious how to extend these explicit strategies to become
invariant to more intricate transformations like large-angle out-of-plane rotations and complex illumination changes, and we expect that our metrics will be useful in guiding efforts to develop learning
algorithms that automatically discover much more invariant features without relying on hard-wired
strategies.
Acknowledgments This work was supported in part by the National Science Foundation under
grant EFRI-0835878, and in part by the Office of Naval Research under MURI N000140710747.
Andrew Saxe is supported by a Scott A. and Geraldine D. Macomber Stanford Graduate Fellowship.
We would also like to thank the anonymous reviewers for their helpful comments.
References
[1] Y. Bengio and Y. LeCun. Scaling learning algorithms towards AI. In L. Bottou, O. Chapelle,
D. DeCoste, and J. Weston, editors, Large-Scale Kernel Machines. MIT Press, 2007.
[2] Y. Bengio, P. Lamblin, D. Popovici, and H. Larochelle. Greedy layer-wise training of deep
networks. In NIPS, 2007.
[3] H. Larochelle, D. Erhan, A. Courville, J. Bergstra, and Y. Bengio. An empirical evaluation of
deep architectures on problems with many factors of variation. In ICML, pages 473–480, 2007.
[4] G.E. Hinton, S. Osindero, and Y.-W. Teh. A fast learning algorithm for deep belief nets. Neural
Computation, 18(7):1527–1554, 2006.
[5] H. Lee, R. Grosse, R. Ranganath, and A.Y. Ng. Convolutional deep belief networks for scalable
unsupervised learning of hierarchical representations. In ICML, 2009.
[6] H. Larochelle, Y. Bengio, J. Louradour, and P. Lamblin. Exploring strategies for training deep
neural networks. The Journal of Machine Learning Research, pages 1–40, 2009.
[7] M. Ranzato, Y-L. Boureau, and Y. LeCun. Sparse feature learning for deep belief networks. In
NIPS, 2007.
[8] M. Ranzato, F.-J. Huang, Y-L. Boureau, and Y. LeCun. Unsupervised learning of invariant
feature hierarchies with applications to object recognition. In CVPR. IEEE Press, 2007.
[9] Rajat Raina, Alexis Battle, Honglak Lee, Benjamin Packer, and Andrew Y. Ng. Self-taught
learning: Transfer learning from unlabeled data. In ICML '07: Proceedings of the 24th International Conference on Machine Learning, 2007.
[10] D.J. Felleman and D.C. Van Essen. Distributed hierarchical processing in the primate cerebral
cortex. Cerebral Cortex, 1(1):1–47, 1991.
[11] H. Lee, C. Ekanadham, and A.Y. Ng. Sparse deep belief network model for visual area V2. In
NIPS, 2008.
[12] R. Quian Quiroga, L. Reddy, G. Kreiman, C. Koch, and I. Fried. Invariant visual representation
by single neurons in the human brain. Nature, 435:1102–1107, 2005.
[13] K. Fukushima and S. Miyake. Neocognitron: A new algorithm for pattern recognition tolerant
of deformations and shifts in position. Pattern Recognition, 1982.
[14] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature
Neuroscience, 2(11):1019–1025, 1999.
[15] Y. LeCun, B. Boser, J.S. Denker, D. Henderson, R.E. Howard, W. Hubbard, and L.D. Jackel.
Backpropagation applied to handwritten zip code recognition. Neural Computation, 1:541–551, 1989.
[16] P. Werbos. Beyond regression: New tools for prediction and analysis in the behavioral sciences. PhD thesis, Harvard University, 1974.
[17] Y. LeCun. Une procédure d'apprentissage pour réseau à seuil asymétrique (a learning scheme
for asymmetric threshold networks). In Proceedings of Cognitiva 85, pages 599–604, Paris,
France, 1985.
[18] D.E. Rumelhart, G.E. Hinton, and R.J. Williams. Learning representations by back-propagating errors. Nature, 323:533–536, 1986.
[19] P. Berkes and L. Wiskott. Slow feature analysis yields a rich repertoire of complex cell properties. Journal of Vision, 5(6):579–602, 2005.
[20] L. Wiskott and T. Sejnowski. Slow feature analysis: Unsupervised learning of invariances.
Neural Computation, 14(4):715–770, 2002.
[21] Y. LeCun, F.J. Huang, and L. Bottou. Learning methods for generic object recognition with
invariance to pose and lighting. In CVPR, 2004.
[22] Li Fei-Fei, Rod Fergus, and Pietro Perona. Learning generative visual models from few training examples: An incremental Bayesian approach tested on 101 object categories. Page 178,
2004.
Learning transport operators for image manifolds
Bruno A. Olshausen
Helen Wills Neuroscience Institute
& School of Optometry
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Benjamin J. Culpepper
Department of EECS
Computer Science Division
University of California, Berkeley
Berkeley, CA 94720
[email protected]
Abstract
We describe an unsupervised manifold learning algorithm that represents a surface
through a compact description of operators that traverse it. The operators are
based on matrix exponentials, which are the solution to a system of first-order
linear differential equations. The matrix exponents are represented by a basis
that is adapted to the statistics of the data so that the infinitesimal generator for a
trajectory along the underlying manifold can be produced by linearly composing
a few elements. The method is applied to recover topological structure from low
dimensional synthetic data, and to model local structure in how natural images
change over time and scale.
1 Introduction
It is well known that natural images occupy a small fraction of the space of all possible images.
Moreover, as images change over time in response to observer motion or changes in the environment they trace out particular trajectories along manifolds in this space. It is reasonable to expect
that perceptual systems have evolved ways to efficiently model these manifolds, and thus mathematical models that capture their structure in operators that transport along them may be of use for
understanding perceptual systems, as well as for engineering artificial vision systems. In this paper,
we derive methods for learning these transport operators from data.
Rather than simply learning a mapping of individual data points to a low-dimensional space, we seek
a compact representation of the entire manifold via the operators that traverse it. We investigate a
direct application of the Lie approach to invariance [1] utilizing a matrix exponential generative
model for transforming images. This is in contrast to previous methods that rely mainly upon a first-order Taylor series approximation of the matrix exponential [2,3], and bilinear models, in which the
transformation variables interact multiplicatively with the input [4,5,6]. It is also distinct from the
class of methods that learn embeddings of manifolds from point cloud data [7,8,9,10]. The spirit
of this work is similar to [11], which also uses a spectral decomposition to make learning tractable
in extremely high dimensional Lie groups, such as those over images. We share the goal of [12] of
learning a model of the manifold which can then be generalized to new data.
Here we show how a particular class of transport operators for moving along manifolds may be
learned from data. The model is first applied to synthetic datasets to demonstrate interesting cases
where it can recover topology, and that for more difficult cases it neatly approximates the local
structure. Subsequently, we apply it to time-varying natural images and extrapolate along inferred
trajectories to demonstrate super-resolution and temporal filling-in of missing video frames.
2 Problem formulation
Let us consider an image of the visual world at time t as a point x ∈ R^N, where the elements of x correspond to image pixels. We describe the evolution of x as

ẋ = A x ,   (1)
where the matrix A is a linear operator capturing some action in the environment that transforms the image. Such an action belongs to a family that occupies a subspace of R^{N×N} given by

A = Σ_{m=1}^{M} ψ_m c_m   (2)
for some M ≤ N² (usually M << N²), with ψ_m ∈ R^{N×N}. The amount of a particular action from the dictionary ψ_m that occurs is controlled by the corresponding c_m. At t = 0, a vision system takes an image x0, and then makes repeated observations at intervals Δt. Given x0, the solution to (1) traces out a continuously differentiable manifold of images given by xt = exp(At) x0, which we observe periodically. Our goal is to learn an appropriate set of bases, ψ, that allow for a compact description of this set of transformations by training on many pairs of related observations.
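As a concrete illustration, the generative model of eqs. (1)-(2) can be simulated directly with a matrix exponential. The dimensions, random dictionary, and coefficients below are toy values chosen for illustration, not learned parameters.

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, M = 4, 3                                  # toy image dimension and dictionary size
psi = rng.normal(scale=0.1, size=(M, N, N))  # dictionary of generators psi_m
c = rng.normal(size=M)                       # transform coefficients c_m

A = np.tensordot(c, psi, axes=1)             # A = sum_m psi_m c_m   (eq. 2)
x0 = rng.normal(size=N)                      # initial image, flattened to a vector

# Sample the trajectory x_t = exp(A t) x0, the solution of eq. (1), at intervals.
ts = np.linspace(0.0, 1.0, 5)
traj = np.stack([expm(A * t) @ x0 for t in ts])
print(traj.shape)  # (5, 4); traj[0] equals x0 because exp(0) = I
```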
This generative model for transformed images has a number of attractive properties. First, it factors
apart the time-varying image into an invariant part (the initial image, x0 ) and variant part (the transformation, parameterized by the coefficient vector c), thus making explicit the underlying causes.
Second, the learned exponential operators are quite powerful in terms of modeling capacity, compared to their linear counterparts. Lastly, the partial derivatives of the objective function have a
simple form that may be computed efficiently.
3 Algorithm
The model parameters are learned by maximizing the log-likelihood of the model. Consider two
'close' states of the system in isolation. Let x0 be our initial condition, and x1 be a second observation. These points are related through an exponentiated matrix that itself is composed of a few basis elements, plus zero-mean white i.i.d. Gaussian noise, n:

x1 = T(c) x0 + n   (3)
T(c) = exp(Σ_m ψ_m c_m) .   (4)
We assume a factorial sparse prior over the transform variables c of the form P(c_m) ∝ exp(−ζ |c_m|). The negative log of the posterior probability of the data under the model is given by

E = (1/2) ||x1 − T(c) x0||₂² + (γ/2) Σ_m ||ψ_m||²_F + ζ ||c||₁ ,   (5)
where || · ||_F is the Frobenius norm, which acts to regularize the dictionary element lengths. The 1-norm encourages sparsity. Given two data points, the solution of the c variables which relate them through ψ is found by a fast minimization of E with respect to c.
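For reference, the objective of eq. (5) is cheap to evaluate once T(c) is formed. The sketch below assumes the weight names γ (Frobenius decay) and ζ (sparsity); the default values are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.linalg import expm

def energy(x0, x1, c, psi, gamma=0.01, zeta=1e-4):
    """Negative log posterior E of eq. (5); gamma and zeta are illustrative."""
    T = expm(np.tensordot(c, psi, axes=1))    # T(c) = exp(sum_m psi_m c_m)
    recon = 0.5 * np.sum((x1 - T @ x0) ** 2)  # reconstruction term
    decay = 0.5 * gamma * np.sum(psi ** 2)    # Frobenius-norm weight decay
    sparse = zeta * np.sum(np.abs(c))         # L1 sparsity penalty on c
    return recon + decay + sparse
```

Minimizing this function over c (e.g. with a generic optimizer such as `scipy.optimize.minimize`) is the inference step; gradients with respect to ψ drive learning.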
Learning of the basis ψ proceeds by gradient descent with respect to E. (Note that this constitutes a variational approximation to the log-likelihood, similar to [13].) The ψ variables are initialized randomly, and adjusted according to Δψ = −η ∂E/∂ψ, using the solution, c, for a pair of observations x0, x1. Figure 1 outlines the steps of the algorithm.
The partial derivatives of E w.r.t. c and ψ can be cast in a simple form using the spectral decomposition of A, given by Σ_α λ_α u_α v_αᵀ, with right eigenvectors u_α, left eigenvectors v_α, and eigenvalues λ_α [14]. Let U = [u_1 u_2 ... u_N], V = [v_1 v_2 ... v_N] and D be a diagonal matrix of the eigenvalues λ_α. Then

∂ exp(A)_{ij} / ∂A_{kl} = Σ_{αβ} F_{αβ} U_{iα} V_{kα} U_{lβ} V_{jβ} ,   (6)
1  choose M ≤ N²
2  initialize ψ
3  while stopping criterion is not met,
4    pick x0, x1
5    initialize c to zeros
6    c ← arg min_c E
7    Δψ = −η ∂E/∂ψ
8    sort ψ_m by ||ψ_m||_F
9    M ← max m s.t. ||ψ_m||_F >

Figure 1: Pseudo-code for the learning algorithm. Steps 1-2 initialize. A typical stopping criterion in step 3 is that the reconstruction error or sparsity on some held-out data falls below a threshold. Steps 4-6 compute an E-step on some pair of data points. Step 7 computes a 'partial' M-step. Steps 8-9 shrink the subspace spanned by the dictionary if one or more of the elements have shrunk sufficiently in norm.
where the matrix F is given by:

F_{αβ} = (exp(λ_α) − exp(λ_β)) / (λ_α − λ_β)   if λ_α ≠ λ_β
F_{αβ} = exp(λ_α)   otherwise   (7)
Application of the chain rule and a re-arrangement of terms yields simplified forms for the partials of E w.r.t. c and ψ. After computing two intermediate terms P and Q,

P = Uᵀ (x1 x0ᵀ + x0 x0ᵀ Tᵀ) V   (8)
Q_{kl} = Σ_{αβ} V_{kα} U_{lβ} F_{αβ} P_{αβ} ,   (9)
the two partial derivatives for inference and learning are:

∂E/∂c_m = Σ_{kl} Q_{kl} ψ_{klm} + ζ sgn(c_m)   (10)
∂E/∂ψ_{klm} = Q_{kl} c_m + γ ψ_{klm} .   (11)
The order of complexity for both derivatives is determined by the computation of Q, which requires
an eigen-decomposition and a few matrix multiplications, giving O(N^p) with 2 < p < 3.
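The spectral form of the derivative (eqs. 6-7) can be checked numerically. The sketch below implements the equivalent Daleckii-Krein directional derivative of the matrix exponential and compares it against central finite differences; it assumes A has distinct eigenvalues, which holds generically for random matrices.

```python
import numpy as np
from scipy.linalg import expm

def expm_deriv(A, B):
    """Directional derivative of expm at A in direction B via the spectral
    decomposition A = U diag(lam) U^{-1}, using the F matrix of eq. (7)."""
    lam, U = np.linalg.eig(A)
    Vt = np.linalg.inv(U)                    # rows span the left eigenvectors
    den = lam[:, None] - lam[None, :]
    num = np.exp(lam)[:, None] - np.exp(lam)[None, :]
    safe = np.where(np.abs(den) > 1e-12, den, 1.0)
    F = np.where(np.abs(den) > 1e-12, num / safe, np.exp(lam)[:, None] + 0 * den)
    return (U @ (F * (Vt @ B @ U)) @ Vt).real

rng = np.random.default_rng(1)
A, B = rng.normal(size=(4, 4)), rng.normal(size=(4, 4))
eps = 1e-6
numeric = (expm(A + eps * B) - expm(A - eps * B)) / (2 * eps)
print(np.max(np.abs(expm_deriv(A, B) - numeric)))  # agreement up to numerical error
```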
4 Experiments on point sets
We first test the model by applying it to simple datasets where the solutions are known: learning
the topology of a sphere and a torus. Second, we apply the model to learn the manifold of time-varying responses to a natural movie from complex oriented filters. These demonstrations illustrate the algorithm's capability for learning significant non-linear structure.
We have also applied the model to the Klein bottle. Though closely related to the torus, it is an
example of a low-dimensional surface whose topology can not be captured by a first-order Lie
operator, though our model is able to interpolate between points on the surface using a piecewise
approximation (see the supplementary material accompanying this paper for further discussion of
this point).
Related pairs of points on a torus are generated by choosing two angles θ0, φ0 uniformly at random from [0, 2π]; two related angles θ1, φ1 are produced by sampling from two von Mises distributions with means θ0 and φ0, and concentration κ = 5 using the circular statistics toolbox of [15]. For the sphere, we generate the first pair of angles using the normal-deviate method, to avoid concentration of samples near the poles. Though parameterized by two angles, the coordinates of points on these surfaces are 3- and 4-dimensional; pairs of points xt for t = 0, 1 on the unit sphere are given by xt = (sin θt cos φt, sin θt sin φt, cos θt), and points on a torus by xt = (cos θt, sin θt, cos φt, sin φt).
Figure 2: Orbits of learned sphere operators. (a) Three ψ_m basis elements applied to points at the six poles of the sphere, (1, 0, 0), (0, 1, 0), (0, 0, 1), (−1, 0, 0), (0, −1, 0), and (0, 0, −1). The orbits are generated by setting x0 to a pole, then plotting xt = exp(ψ_m t) x0 for t = [−100, 100]. (b) When superimposed on top of each other, the three sets of orbits clearly define the surface of a sphere.
Figure 3: Orbits of learned torus operators. Each row shows three projections of a ψ_m basis element applied to a point on the surface of the torus. The orbits shown are generated by setting x0 = (0, 1, 0, 1) then plotting xt = exp(ψ_m t) x0 for t = [−1000, 1000] in projections constructed from each triplet of the four coordinates. In each plot, two coordinates always obey a circular relationship, while the third varies more freely.
Figure 4: Learning transformations of oriented filter pairs across time. The orbits of three complex filter outputs in response to a natural movie. The blue points denote the complex
output for each frame in the movie sequence and are linked to their neighbors via the blue line. The
points circled in red were observed by the model, and the red curve shows an extrapolation along
the estimated trajectory.
For the sphere, N = 3, thus setting M = 9 gives the model the freedom to generate the full space of A operators. The ψ are initialized to mean-zero white Gaussian noise with variance 0.01, and 10,000 learning updates are computed by generating a pair of related points, minimizing E w.r.t. c, then updating ψ according to Δψ = −η ∂E/∂ψ. In all of the point set experiments, ζ = 0.0001 and γ = 0.01. For cases where topology can be recovered, the solution is robust to the settings of ζ and γ; changing either variable by an order of magnitude does not change the solution, though it may increase the number of learning steps required to get to it. In cases where the topology can not be recovered, the influence on the solution of the settings of ζ and γ is more subtle, as their relative values effectively trade off the importance of data reconstruction and the sparsity of the vector c. We adjust η during learning as follows: when Δψ causes E to decrease, we multiply η by 1.01; otherwise, we multiply η by 0.99. When the model has more parameters than it needs to fully capture the topology of the sphere this fact is evident from the solution it learns: six of the dictionary elements ψ_m drop out (they have norm less than 10⁻⁶), since the F-norm 'weight decay' term kills off dictionary elements that are used rarely. Figure 2 shows orbits produced by applying each of the remaining ψ_m operators to points on the sphere. Similar experiments are successful for the torus; Figure 3 shows trajectories of the operators learned for the torus.
As an intermediate step towards modeling time-varying natural images, we investigate the model's
ability to learn the response surface for a single complex oriented filter to a moving image. A
complex pyramid is built from each frame in a movie, and pairs of filter responses 1 to 4 frames
apart are observed by the model. Four 2x2 basis functions are learned in the manner described
above. Figure 4 shows three representative examples that illustrate how well the model is able to
extrapolate from the solution estimated using the learned basis ψ, and complex responses from the
same filter within a 4 frame time interval. In most cases, this trajectory follows the data closely for
several frames.
5 Experiments on movies
In the image domain, our model has potential applications in temporal interpolation/filling in of
video, super-resolution, compression, and geodesic distance estimation. We apply the model to
moving natural images and investigate the first three applications; the third will be the subject of
future work. Here we report on the ability of the model to learn transformations across time, as
well as across scales of the Laplacian pyramid. Our data consists of many short grayscale video sequences
of Africa from the BBC.
5.1 Time
We apply the model to natural movies by presenting it with patches of adjacent frame pairs. Using an
analytically generated infinitesimal shift operator, we first run a series of experiments to determine
the effect of local minima on the recovery of a known displacement through the minimization of E
Figure 5: Shift operator learned from synthetically transformed natural images. The operator ψ_1, displayed as an array of weights that, for each output pixel, shows the strength of its connection to each input pixel. Each of the 15x15 arrays represents one output pixel's connections. Because of the 1/f² falloff in the power spectrum of natural images, synthetic images with a wildly different distribution of spatial frequency content, such as uncorrelated noise, will not be properly shifted by this operator.
Figure 6: Interpolating between shifted images to temporally fill in missing video frames. Two
images x0 and x1 are generated by convolving an image of a diagonal line and a shifted diagonal
line by a 3x3 Gaussian kernel with σ = 0.8, and the operator A is inferred. The top row shows the
sequence of images xt = exp(At) x0 for t = 0.25, 0.50, 0.75, 1.00. The middle row shows linear
interpolation between x0 and x1 . The bottom row shows the sequence of images xt = (I + At) x0 ,
that is, the first-order Taylor expansion of the matrix exponential, which performs poorly for shifts
greater than one pixel.
w.r.t. c. When initialized to zero, the c vector often converges to the wrong displacement, but this
problem can be avoided with high probability using a coarse-to-fine technique [16,17]. Doing so
requires a slight alteration to our inference algorithm: now we must solve a sequence of optimization problems on frame pairs convolved with a Gaussian kernel whose variance is progressively
decreased. At each step in the sequence, both frames are convolved by the kernel before a patch is
selected. For the first step, the c variables are initialized to zero; for subsequent steps they are initialized to the solution of the previous step. For our analytical shifting operator, two blurring filters
(first a 5x5 kernel with variance 10, then a 3x3 kernel with variance 5) reliably give a proper
initialization for the final minimization that runs on the unaltered data.
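The coarse-to-fine schedule can be sketched as a warm-started loop over progressively sharper versions of the frame pair. Here the two stated kernels are approximated with `scipy.ndimage.gaussian_filter` at the corresponding standard deviations, and `infer_c` stands in for any minimizer of E with respect to c; both are illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def coarse_to_fine(x0_img, x1_img, infer_c,
                   sigmas=(np.sqrt(10.0), np.sqrt(5.0), 0.0)):
    """Solve for c on heavily blurred frames first, reusing each solution to
    initialize the next, sharper stage; sigma = 0 runs on the unaltered data."""
    c = None  # the first stage starts from zeros inside infer_c
    for sigma in sigmas:
        b0 = gaussian_filter(x0_img, sigma) if sigma > 0 else x0_img
        b1 = gaussian_filter(x1_img, sigma) if sigma > 0 else x1_img
        c = infer_c(b0.ravel(), b1.ravel(), c)
    return c
```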
For control purposes, the video for this experiment comes from a camera fly over; thus, most of
the motion in the scene is due to camera motion. We apply the model to pairs of 11x11 patches,
selected from random locations in the video, but discarding patches near the horizon where there is
little or no motion. We initialize M = 16; after learning, the basis function with the longest norm
has the structure of a shift operator in the primary direction of motion taking place in the video
sequence. Using these 16 operators, we run inference on 1,000 randomly selected pairs of patches
from a second video, not used during learning, and measure the quality of the reconstruction as the
trajectory is used to predict into the future. At 5 frames into the future, our model is able to maintain
an average SNR of 7, compared to SNR 5 when a first-order Taylor approximation is used in place
of the matrix exponential; for comparison, the average SNR for the identity transformation model
on this data is 1.
Since the primary form of motion going on in these small patches is translation, we also train a
single operator using artificially translated natural data to make clear that the model can learn this
case completely. For this last experiment we take a 360x360 pixel frame of our natural movie, and
continuously translate the entire frame in the Fourier domain by a displacement chosen uniformly
at random from [0, 3] pixels. We then randomly select a 15x15 region on the interior of the pair of
frames and use the two 225 pixel vectors as our x0 and x1 . We modify the objective function to be
E = (1/2) ||W [x1 − T(c) x0]||₂² + (γ/2) Σ_m ||ψ_m||²_F + ζ ||c||₁ ,   (12)
where W is a binary windowing function that selects the central 9x9 region from a 15x15 patch;
thus, the residual errors that come from new content translating into the patch are ignored. After
learning, the basis function ψ_1 (shown in figure 5) is capable of translating natural images up to 3
pixels while maintaining an average SNR of 16 in the 9x9 center region. Figure 6 shows how this
operator is able to correctly interpolate between two measurements of a shifted image in order to
temporally up-sample a movie.
5.2 Scale
The model can also learn to transform between successive scales in the Laplacian pyramid built
from a single frame of a video sequence. Figure 7 depicts the system transforming an image patch
from scale 2 to 3 of a 256x256 pixel image. We initialize M = 100, but many basis elements shrink
during learning; we use only the 16 ψ_m with non-negligible norm to encode a scale change. The basis ψ is initialized to mean-zero white Gaussian noise with variance 0.01; the same inference and
learning procedures as described for the point sets are then run on pairs x0 , x1 selected at random
from the corpus of image sequences in the following way. First, we choose a random frame from
a random sequence, then up-sample and blur scale 3 of its Laplacian pyramid. Second, we select
an 8x8 patch from scale 2 (x0 ) of the corresponding up-blurred image patch (x1 ). Were it not for
the highly structured manifold on which natural images live, the proposition of finding an operator
that maps a blurred, subsampled image to its high-resolution original state would seem untenable.
However, our results show that in many cases, a reduced representation of such two-way mappings
can be found, even for small patches.
6 Discussion and conclusions
We have shown that it is possible to learn low-dimensional parameterizations of operators that transport along non-linear manifolds formed by natural images, both across time and scale. Our focus
thus far has been primarily on understanding the model and how to properly optimize its parameters,
Figure 7: Learning transformations across scale. (a) Scale 3 of the Laplacian pyramid for a
natural scene we wish to code, by describing how it transforms across scale, in terms of our learned
dictionary. (b) The estimated scale 2, computed by transforming 8x8 regions of the up-sampled and
blurred scale 3. The estimated scale 2 has SNR 9.60; (c) shows the actual scale 2 and (d) shows the
errors made by our estimation. For reconstruction we use only 16 dictionary elements.
as little work has previously been done on learning such high dimensional Lie groups. A promising
direction for future work is to explore higher-order models capable of capturing non-commutative
operators, such as
xt = exp(ψ_1 c_1) exp(ψ_2 c_2) · · · exp(ψ_K c_K) x0 ,   (13)
as this formulation may be more parsimonious for factoring apart transformations which are prevalent in natural movies, such as combinations of translation and rotation.
Early attempts to model the manifold structure of images train on densely sampled point clouds
and find an embedding into a small number of coordinates along the manifold. However such an
approach does not actually constitute a model, since there is no function for mapping arbitrary
points, or moving along the manifold. One must always refer back to original data points on which
the model was trained ? i.e., it works as a lookup table rather than being an abstraction of the data.
Here, by learning operators that transport along the manifold we have been able to learn a compact
description of its structure.
This model-based representation can be leveraged to compute geodesics using a numerical approximation to the arc length integral:

S = ∫₀¹ ||A exp(A t) x0||₂ dt = lim_{T→∞} Σ_{t=1}^{T} || exp(A t/T) x0 − exp(A (t−1)/T) x0 ||₂ ,   (14)
where T is the number of segments chosen to use in the piecewise linear approximation of the curve,
and each term in the summation gives the length of a segment. We believe that this aspect of our
model will be of use in difficult classification problems, such as face identification, where Euclidean
distances measured in pixel-space give poor results.
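The chord-sum approximation in eq. (14) is easy to verify on a case with a known answer: a 2x2 rotation generator scaled by an angle θ moves a unit vector along a circular arc of length exactly θ. The generator and values below are chosen for this check, not taken from the paper.

```python
import numpy as np
from scipy.linalg import expm

def arc_length(A, x0, T=200):
    """Piecewise-linear approximation of eq. (14) with T segments."""
    pts = [expm(A * t / T) @ x0 for t in range(T + 1)]
    return sum(np.linalg.norm(pts[t] - pts[t - 1]) for t in range(1, T + 1))

theta = 1.0
A = theta * np.array([[0.0, -1.0], [1.0, 0.0]])  # rotation generator
print(arc_length(A, np.array([1.0, 0.0])))       # close to theta = 1.0
```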
Previous attempts to learn Lie group operators have focused on linear approximations. Here we show
that utilizing the full Lie operator/matrix exponential in learning, while computationally intensive,
is tractable, even in the extremely high dimensional cases required by models of natural movies.
Our spectral decomposition is the key component that enables this, and, in combination with careful
mitigation of local minima in the objective function using a coarse-to-fine technique, gives us the
power to factor out large transformations from data.
One shortcoming of the approach described here is that transformations are modeled in the original
pixel domain. Potentially these transformations may be described more economically by working in
a feature space, such as a sparse decomposition of the image. This is a direction of ongoing work.
Acknowledgments
The authors gratefully acknowledge many useful discussions with Jascha Sohl-Dickstein, Jimmy
Wang, Kilian Koepsell, Charles Cadieu, and Amir Khosrowshahi, and the insightful comments from
our anonymous reviewers.
8
References
[1] VanGool, L., Moons, T., Pauwels, E. & Oosterlinck, A. (1995) Vision and Lie's approach to invariance. Image and Vision Computing, 13(4): 259-277.
[2] Miao, X. & Rao, R.P.N. (2007) Learning the Lie groups of visual invariance. Neural Computation, 19(10):
2665-2693.
[3] Rao, R.P.N & Ruderman D.L. (1999) Learning Lie Groups for Invariant Visual Perception. Advances in
Neural Information Processing Systems, 11:810-816. Cambridge, MA: MIT Press.
[4] Grimes, D.B., & Rao, R.P.N. (2002). A Bilinear Model for Sparse Coding. Advances in Neural Information
Processing Systems, 15. Cambridge, MA: MIT Press.
[5] Olshausen, B.A., Cadieu, C., Culpepper, B.J. & Warland, D. (2007) Bilinear Models of Natural Images.
SPIE Proceedings vol. 6492: Human Vision Electronic Imaging XII (B.E. Rogowitz, T.N. Pappas, S.J. Daly,
Eds.), Jan 28 - Feb 1, 2007, San Jose, California.
[6] Tenenbaum, J. B. & Freeman, W. T. (2000) Separating style and content with bilinear models. Neural
Computation, 12(6):1247-1283.
[7] Roweis, S. & Saul, L. (2000) Nonlinear dimensionality reduction by locally linear embedding. Science,
290(5500): 2323-2326.
[8] Weinberger, K. Q. & Saul, L. K. (2004) Unsupervised learning of image manifolds by semidefinite programming. Computer Vision and Pattern Recognition.
[9] Tenenbaum, J. B., de Silva, V. & Langford, J. C. (2000) A Global Geometric Framework for Nonlinear
Dimensionality Reduction. Science, 22 December 2000: 2319-2323.
[10] Belkin, M., & Niyogi, P. (2002). Laplacian eigenmaps and spectral techniques for embedding and clustering. Advances in Neural Information Processing Systems, 14. Cambridge, MA: MIT Press.
[11] Wang, C. M., Sohl-Dickstein, J., & Olshausen, B. A. (2009) Unsupervised Learning of Lie Group Operators from Natural Movies. Redwood Center for Theoretical Neuroscience, Technical Report; RCTR 01-09.
[12] Dollar, P., Rabaud, V., & Belongie, S. (2007) Non-isometric Manifold Learning: Analysis and an Algorithm. Int. Conf. on Machine Learning, 241-248.
[13] Olshausen, B.A. & Field, D.J. (1997) Sparse Coding with an Overcomplete Basis Set: A Strategy Employed by V1? Vision Research, 37: 3311-3325.
[14] Ortiz, M., Radovitzky, R.A. & Repetto, E.A (2001) The computation of the exponential and logarithmic
mappings and their first and second linearizations. International Journal For Numerical Methods In Engineering. 52: 1431-1441.
[15] Berens, P. & Velasco, M. J. (2009) The circular statistics toolbox for Matlab. MPI Technical Report, 184.
[16] Anandan, P. (1989) A computational framework and an algorithm for the measurement of visual motion.
Int. J. Comput. Vision, 2(3): 283-310.
[17] Glazer, F. (1987) Hierarchical Motion Detection. Ph.D. thesis, Univ. of Massachusetts, Amherst, MA; COINS TR 87-02.
Localizing Bugs in Program Executions
with Graphical Models
Valentin Dallmeier
Saarland University
Saarbruecken, Germany
[email protected]
Laura Dietz
Max-Planck Institute for Computer Science
Saarbruecken, Germany
[email protected]
Andreas Zeller
Saarland University
Saarbruecken, Germany
[email protected]
Tobias Scheffer
Potsdam University
Potsdam, Germany
[email protected]
Abstract
We devise a graphical model that supports the process of debugging software by
guiding developers to code that is likely to contain defects. The model is trained
using execution traces of passing test runs; it reflects the distribution over transitional patterns of code positions. Given a failing test case, the model determines the least likely transitional pattern in the execution trace. The model is
designed such that Bayesian inference has a closed-form solution. We evaluate
the Bernoulli graph model on data of the software projects AspectJ and Rhino.
1 Introduction
In today's software projects, two types of source code are developed: product and test code. Product
code, also referred to as the program, contains all functionality and will be shipped to the customer.
The program and its subroutines are supposed to behave according to a specification. The example
program in Figure 1 (left), is supposed to always return the value 10. It contains a defect in line
number 20, which lets it return a wrong value if the input variable equals five.
In addition to product code, developers write test code that consists of small test programs, each
testing a single procedure or module for compliance with the specification. For instance, Figure 1
(right) shows three test cases, the second of which reveals the defect. Development environments
provide support for running test cases automatically and would report failure of the second test case.
Localizing defects in complex programs is a difficult problem because the failure of a test case
confirms only the existence of a defect, not its location.
When a program is executed, its trace through the source code can be recorded. An executed line of
source code is identified by a code position s ∈ S. The stream of code positions forms the trace t
of a test case execution. The data that our model analyses consists of a set T of passing test cases
t. In addition to the passing tests we are given a single trace t̂ of a failing test case. The passing
test traces and the trace of the failing case refer to the same code revision; hence, the semantics of
each code position remain constant. For the failing test case, the developer is to be provided with a
ranking of code positions according to their likelihood of being defective.
The semantics of code positions may change across revisions, and modifications of code may impact
the distribution of execution patterns in the modified as well as other locations of the code. We focus
on the problem of localizing defects within a current code revision. After each defect is localized,
the code is typically revised and the semantics of code positions changes. Hence, in this setting, we
10 /**
11  * A procedure containing a defect.
12  *
13  * @param param an arbitrary parameter.
14  * @return 10
15  */
16 public static int defect(int param) {
17     int i = 0;
18     while (i < 10) {
19         if (param == 5) {
20             return 100;
21         }
22         i++;
23     }
24     return i;
25 }

public static class TestDefect extends TestCase {

    public void testParam1() {
        assertEquals(10, defect(1));
    }

    /** Failing test case. */
    public void testParam5() {
        assertEquals(10, defect(5));
    }

    public void testParam10() {
        assertEquals(10, defect(10));
    }
}
Figure 1: Example with product code (left) and test code (right).
cannot assume that any negative training data, that is, previous failing test cases of the same code
revision, are available. For that reason, discriminative models do not lend themselves to our task.
Instead of representing the results as a ranked list of positions, we envision a tight integration in
development environments. For instance, on failure of a test case, the developer could navigate
between predicted locations of the defect, starting with top ranked positions.
So far, Tarantula [1] is the standard reference model for localizing defects in execution traces. The
authors propose an interface widget for test case results in which a pixel represents a code position.
The hue value of the pixel is determined by the number of failing and passing traces that execute this
position and correlates with the likelihood that s is faulty [1]. Another approach [2] includes return
values and flags for executed code blocks and builds on sensitivity and increase of failure probability.
This approach was continued in project Holmes [3] to include information about executed control
flow paths. Andrzejewski et al. [4] extend latent Dirichlet allocation (LDA) [5] to find bug patterns in
recorded execution events. Their probabilistic model captures low-signal bug patterns by explaining
passing executions from a set of usage topics and failing executions from a mix of usage and bug
topics. Since a vast amount of data is to be processed, our approach is designed to not require
estimating latent variables during prediction as is necessary with LDA-based approaches [4].
Outline. Section 2 presents the Bernoulli graph model, a graphical, generative model that explains
program executions. This section's main result is the closed-form solution for Bayesian inference of
the likelihood of a transitional pattern in a test trace given example execution traces. Furthermore,
we discuss how to learn hyperparameters and smoothing coefficients from other revisions, despite
the fragile semantics of code positions. In Section 3, reference methods and simpler probabilistic
models are detailed. Section 4 reports on the prediction performance of the studied models for the
AspectJ and Rhino development projects. Section 5 concludes.
2 Bernoulli Graph Model
The Bernoulli graph model is a probabilistic model that generates program execution graphs. In
contrast to an execution trace, the graph is a representation of an execution that abstracts from the
number of iterations over code fragments. The model allows for Bayesian inference of the likelihood
of a transition between code positions within an execution, given previously seen executions.
The n-gram execution graph Gt = (Vt, Et, Lt) of an execution t connects vertices Vt by edges
Et ⊆ Vt × Vt. The labeling function Lt : Vt → S^(n−1) injectively maps vertices to (n−1)-grams of
code positions, where S is the alphabet of code positions.
In the bigram execution graph, each vertex v represents a code position Lt(v); each arc (u, v)
indicates that code position Lt(v) has been executed directly after code position Lt(u) at least once
during the program execution. In n-gram execution graphs, each vertex v represents a fragment
Lt(v) = s1 … sn−1 of consecutively executed statements. Vertices u and v can only be connected
by an arc if the fragments overlap in all but the first code position of u and the last code
position of v; that is, Lt(u) = s1 … sn−1 and Lt(v) = s2 … sn. Such vertices u and v are
[Figure 2 residue omitted: the panel shows vertex "22 18" being expanded; coin tosses for candidate
successors yield outcome 1 for "18 19" and "18 24" (arcs added) and outcome 0 for "18 17", "18 18",
"18 20", and "18 23"; the underlying trace is 17 | 18 | 19 | 22 | 18 | 19 | 22 | … | 22 | 18 | 24.]
Figure 2: Expanding vertex "22 18" in the generation of a tri-gram execution graph corresponding
to the trace at the bottom. The graph before expansion is drawn in black; new parts are drawn in
solid red.
connected by an arc if code positions s1 . . . sn are executed consecutively at least once during the
execution. For the example program in Figure 1 the tri-gram execution graph is given in Figure 2.
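The graph construction described above can be sketched in a few lines. This is our own illustration, not the authors' implementation; the function name and the encoding of a trace as a list of line numbers are conventions we chose:

```python
# Sketch (not the authors' code): build the n-gram execution graph of a
# single trace. Vertices are the (n-1)-grams occurring in the trace; an
# arc (u, v) is added whenever the n code positions u[0], ..., u[-1], v[-1]
# are executed consecutively at least once.
def execution_graph(trace, n):
    """Return (vertices, edges) of the n-gram execution graph of `trace`."""
    vertices, edges = set(), set()
    for i in range(len(trace) - n + 1):
        u = tuple(trace[i:i + n - 1])    # s_1 ... s_{n-1}
        v = tuple(trace[i + 1:i + n])    # s_2 ... s_n
        vertices.update((u, v))
        edges.add((u, v))
    return vertices, edges
```

For the trace of Figure 2 truncated to 17, 18, 19, 22, 18, 24 and n = 3, this yields vertices such as (22, 18) and the arc ((22, 18), (18, 24)).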
Generative process. The Bernoulli graph model generates one graph G_{m,t} = (V_{m,t}, E_{m,t}, L_{m,t})
per execution t and procedure m. The model starts the graph generation with an initial vertex
representing a fragment of virtual code positions ∅.
In each step, it expands a vertex u labeled L_{m,t}(u) = s1 … sn−1 that has not yet been expanded;
e.g., vertex "22 18" in Figure 2. Expansion proceeds by tossing a coin with parameter θ_{m,s1…sn}
for each appended code position sn ∈ S. If the coin toss outcome is positive, an edge to vertex v
labeled L_{m,t}(v) = s2 … sn is introduced. If V_{m,t} does not yet include a vertex with this labeling,
it is added at this point. Each vertex is expanded only once. The process terminates when no vertex is
left that has been introduced but not yet expanded. The parameters θ_{m,s1…sn} are governed by a Beta
distribution with fixed hyperparameters α and β. In the following we focus on the generation
of edges, treating the vertices as observed. Figure 3a) shows a factor graph representation of the
generative process, and Algorithm 1 defines the generative process in detail.
Inference. Given a collection G_m of previously seen execution graphs for method m and a
new execution graph (V_m, E_m, L_m), Bayesian inference determines the likelihood p((u, v) ∈
E_m | V_m, G_m, α, β) of each of the edges (u, v), thus indicating unlikely transitions in the new
execution of m. Since we employ independent models for all
Algorithm 1 Generative process of the Bernoulli graph model.
for all procedures m do
    for all s1 … sn ∈ (S_m)^n do
        draw θ_{m,s1…sn} ~ Beta(α, β).
for all executions t do
    create a new graph G_{m,t}.
    add a vertex u labeled ∅ … ∅.
    initialize queue Q = {u}.
    while queue Q is not empty do
        dequeue u from Q, with L(u) = s1 … sn−1.
        for all sn ∈ S_m do
            let v be a vertex with L(v) = s2 … sn.
            draw b ~ Bernoulli(θ_{m,s1…sn}).
            if b = 1 then
                if v ∉ V_{m,t} then
                    add v to V_{m,t}.
                    enqueue v into Q.
                add arc (u, v) to E_{m,t}.
[Figure 3 residue omitted: the panels show directed factor graphs of a) the Bernoulli graph model
(Beta priors α, β over coin parameters θ generating edges (u, v) per vertex u and graph G), b) the
Bernoulli fragment model, and c) the multinomial n-gram model (symmetric Dirichlet priors over the
procedure distribution π and the fragment-conditional distributions φ).]
Figure 3: Generative models in directed factor graph notation with dashed rectangles indicating
gates [6].
methods m, inference can be carried out for each method separately. Since the vertices V_m are
observed, the coin parameters θ are d-separated from each other (cf. Figure 3a). We obtain independent
Beta-Bernoulli models conditioned on the presence of the start vertices u. Thus, predictive distributions
for the presence of edges in future graphs can be derived in closed form (Equation 1), where #G_u denotes
the number of training graphs containing a vertex labeled L(u) and #G_(u,v) denotes the number of
training graphs containing an edge between vertices labeled L(u) and L(v). See the appendix for a
detailed derivation of Equation 1.

    p((u, v) ∈ E_m | V_m, G_m, α, β) = (#G_(u,v) + α) / (#G_u + α + β)    (1)
By definition, an execution graph G for an execution contains a vertex if its label is a substring of
the execution's trace t. Likewise, an edge is contained if an aggregation of the vertex labels is a
substring of t. It follows¹ that the predictive distribution can be reformulated as in Equation 2 to
predict the probability of seeing the code position ŝ = sn after a fragment of preceding statements
f̂ = s1 … sn−1, using the trace representation of an execution. Thus, it is not necessary to represent
execution graphs G explicitly.

    p(ŝ | f̂, T, α, β) = (#{t ∈ T | f̂ŝ ⊑ t} + α) / (#{t ∈ T | f̂ ⊑ t} + α + β)    (2)

where f̂ŝ ⊑ t denotes that f̂ŝ occurs as a contiguous substring of t.
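Equation 2 can be evaluated directly on raw traces by counting substring occurrences. The sketch below is ours (the helper names are not from the paper); the default hyperparameters are merely the values reported later in the paper as typically learned, and any positive values are valid:

```python
# Sketch (helper names are ours): the closed-form predictive of Eq. 2,
# computed directly on traces. alpha and beta default to the hyperparameter
# values the paper reports as typically learned (1.25 and 1.03).
def contains(trace, pattern):
    """True iff `pattern` occurs as a contiguous substring of `trace`."""
    k = len(pattern)
    return any(trace[i:i + k] == pattern for i in range(len(trace) - k + 1))

def p_next(s, frag, traces, alpha=1.25, beta=1.03):
    """P(position s follows fragment frag | traces), as in Equation 2."""
    hits = sum(contains(t, frag + [s]) for t in traces)
    support = sum(contains(t, frag) for t in traces)
    return (hits + alpha) / (support + alpha + beta)
```

Note that both counts are over traces, not occurrences: a fragment appearing twice in the same trace is counted once, matching the execution-graph abstraction.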
Estimating interpolation coefficients and hyperparameters. For given hyperparameters and a
fixed context length n, Equation 2 predicts the likelihood of ŝ_i following a fragment f̂ =
ŝ_{i−1} … ŝ_{i−n+1}. To avoid sparsity issues while maintaining good expressiveness, we smooth over
various context lengths up to N by interpolation:

    p(ŝ_i | ŝ_{i−1} … ŝ_{i−N+1}, T, α, β, λ) = Σ_{n=1}^{N} p(n | λ) · p(ŝ_i | ŝ_{i−1} … ŝ_{i−n+1}, T, α, β)
We can learn from different revisions by integrating multiple Bernoulli graph models in a generative
process in which the coin parameters are not shared across revisions and context lengths n. This process
generates a stream of statements with defect flags. We learn the hyperparameters α and β jointly with
λ using an automatically derived Gibbs sampling algorithm [7].
Predicting defective code positions. Having learned point estimates α̂, β̂, and λ̂ from other
revisions in a leave-one-out fashion, statements ŝ are scored by the complementary event of being
normal for any preceding fragment f̂:

    score(ŝ) = max_{f̂ preceding ŝ} ( 1 − p(ŝ | f̂, T, α̂, β̂, λ̂) )    (3)
The maximum is justified because an erroneous code line may show its defective behavior only
in combination with some preceding code fragments, and even a single erroneous combination is
enough to lead to defective behavior of the software.
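Putting the interpolation and the scoring rule together, a scorer might look as follows. This is a hypothetical sketch: `p_next` is assumed to be any function returning the Equation 2 probability, `lam` holds the mixture weights p(n | λ), and we read "any preceding fragment" as a maximum over the occurrences of the statement in the failing trace:

```python
# Sketch of the scoring rule (Eq. 3) with interpolation over context
# lengths 1..N. `p_next(s, frag, traces)` is assumed to return the
# Equation 2 probability; `lam[n-1]` plays the role of p(n | lambda).
def score(s, failing, traces, lam, p_next):
    """Max over occurrences of s in the failing trace of 1 - interpolated p."""
    best = 0.0
    for i, pos in enumerate(failing):
        if pos != s:
            continue
        p = 0.0
        for n, w in enumerate(lam, start=1):
            frag = failing[max(0, i - n + 1):i]  # up to n-1 preceding positions
            p += w * p_next(pos, frag, traces)
        best = max(best, 1.0 - p)
    return best
```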
¹ For a set A we denote its cardinality by #A rather than |A| to avoid confusion with conditioning signs.
3 Reference Methods
The Tarantula model is a popular scoring heuristic for defect localization in software engineering.
We will prove a connection between Tarantula and the unigram variant of a Bernoulli graph model.
Furthermore, we will discuss other reference models which we will consider in the experiments.
3.1 Tarantula
Tarantula [1] scores the likelihood of a code position ŝ being defective according to the proportions
of failing traces F and passing traces T that execute this position (Equation 4).

    score_Tarantula(ŝ) = ( #{t̂ ∈ F | ŝ ∈ t̂} / #F ) / ( #{t̂ ∈ F | ŝ ∈ t̂} / #F + #{t ∈ T | ŝ ∈ t} / #T )    (4)
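A minimal implementation of Equation 4 (ours, not the original tool) is:

```python
# Sketch (not the original tool): Tarantula suspiciousness, Equation 4.
# F and T are lists of failing and passing traces; a trace is any
# container supporting membership tests on code positions. Positions
# executed in no trace get score 0 here by convention (the original
# definition leaves that case undefined).
def tarantula(s, F, T):
    f = sum(s in t for t in F) / len(F)
    p = sum(s in t for t in T) / len(T)
    return f / (f + p) if f + p > 0 else 0.0
```

A position executed only in failing traces scores 1, one executed equally often in both kinds scores 0.5, and one executed only in passing traces scores 0.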
For the case that only one test case fails, we can show an interesting relationship between Tarantula,
the unigram Bernoulli graph model, and multivariate Bernoulli models (referred to in [8]). In the
unigram case, the Bernoulli graph model generates a graph in which all statements in an execution
are directly linked to an empty start vertex. In this case, the Bernoulli graph model is equal to a
multivariate Bernoulli model generating a set of statements for each execution.
Using an improper prior α = β = 0, the unigram Bernoulli graph model scores a statement by
score_Graph(ŝ) = 1 − #{t ∈ T | ŝ ∈ t} / #T. Letting g(s) = #{t ∈ T | s ∈ t} / #T, the rank order of any
two code positions s1, s2 is determined by 1 − g(s1) > 1 − g(s2), or equivalently
1/(1 + g(s1)) > 1/(1 + g(s2)), which is Tarantula's ranking criterion if #F is 1.
3.2 Bernoulli Fragment Model
Inspired by this equivalence, we study a naive n-gram extension of multivariate Bernoulli models,
which we call the Bernoulli fragment model. Instead of generating a set of statements, the model
generates a set of fragments for each execution.
Given a fixed order n, the Bernoulli fragment model draws a coin parameter for each possible
fragment f = s1 … sn over the alphabet S_m. For each execution, the fragment set is generated by
tossing each fragment's coin and including all fragments with outcome b = 1 (cf. Figure 3b). The
probability of an unseen fragment f̂ is given by p(f̂ | T, α, β) = (#{t ∈ T | f̂ ⊑ t} + α) / (#{t ∈ T} + α + β).
The model deviates from reality in that it may generate sets of fragments that cannot be aggregated
into a consistent sequence of code positions. Thus, non-zero probability mass is assigned to impossible
events, which is a potential source of inaccuracy.
3.3 Multinomial Models
The multinomial model is popular in the text domain, e.g., [8]. In contrast to the Bernoulli graph
model, the multinomial model takes the number of occurrences of a pattern within an execution into
account. It consists of a hierarchical process in which first a procedure m is drawn from a multinomial
distribution π, and then a code position s is drawn from the multinomial distribution φ_m ranging over
all code positions S_m of that procedure.
The n-gram model is a well-known extension of the unigram multinomial model, in which the
distributions φ are conditioned on the preceding fragment of code positions f = s1 … sn−1 to draw
a follow-up statement sn ~ φ_{m,f}. Using fixed symmetric Dirichlet distributions with parameters
α_π and α_φ as priors for the multinomial distributions, the probability of an unseen code position ŝ
following fragment f̂ is given in Equation 5. The shorthand #T_m denotes how often statements in
procedure m are executed (summing over all traces t ∈ T in the training set), and #T_{m,s1…sn} denotes
the number of times the statements s1 … sn are executed consecutively by procedure m.

    p(ŝ, m̂ | f̂, T, α_π, α_φ) ∝ [ (#T_{m̂} + α_π) / ( Σ_{m'∈M} #T_{m'} + α_π · #M ) ] · [ (#T_{m̂,f̂ŝ} + α_φ) / ( #T_{m̂,f̂} + α_φ · #S_m ) ]    (5)

The first bracketed factor is π(m̂); the second is φ_{m̂,f̂}(ŝ).
3.4 Holmes
Chilimbi et al. [3] propose an approach that relies on a stream of sampled boolean predicates P ,
each corresponding to an executed control flow branch starting at code position s. The approach
evaluates whether P being true increases the probability of failure in contrast to reaching the code
position by chance. Each code position is scored according to the importance of its predicate P,
which is the harmonic mean of sensitivity and increase in failure probability. The shorthands Fe(P)
and Se(P) refer to the failing/passing traces that executed the path P, while Fo(P) and So(P) refer to
the failing/passing traces that executed the start point of P.
    Importance(P) = 2 / ( log #F / log Fe(P) + ( Fe(P)/(Se(P)+Fe(P)) − Fo(P)/(So(P)+Fo(P)) )^(−1) )
This scoring procedure is not applicable to cases where a path is executed in only one failing trace, as
a division by zero occurs in the first term when Fe (P ) = 1. This issue renders Holmes inapplicable
to our case study where typically only one test case fails.
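For completeness, the Importance score can be written as follows (our sketch). With Fe(P) = 1 the sensitivity term log Fe(P)/log #F is zero and the harmonic mean is undefined, which reproduces the inapplicability just noted:

```python
import math

# Sketch of the Holmes Importance score reproduced above: the harmonic
# mean of sensitivity log(Fe(P))/log(#F) and the increase in failure
# probability. With fe == 1 the sensitivity is 0, so the harmonic mean
# raises ZeroDivisionError, matching the caveat in the text.
def importance(fe, se, fo, so, total_failing):
    sensitivity = math.log(fe) / math.log(total_failing)
    increase = fe / (se + fe) - fo / (so + fo)
    return 2.0 / (1.0 / sensitivity + 1.0 / increase)
```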
3.5 Delta LDA
Andrzejewski et al. [4] use a variant of latent Dirichlet allocation (LDA) [5] to identify topics
of co-occurring statements. Most topics may be used to explain passing and failing traces, while
some topics are reserved to explain statements in the failing traces only. This is achieved by running
LDA with different Dirichlet priors on passing and failing traces. After inference, the topic-specific
statement distributions φ = p(s|z) are converted to p(z|s) via Bayes' rule. Then statements j are
ranked according to the confidence S_ij = p(z = i | s = j) − max_{k≠i} p(z = k | s = j) of being about
a bug topic i rather than any other topic k.
4 Experimental Evaluation
In this section we study empirically how accurately the Bernoulli graph model and the reference
models discussed in Section 3 localize defects that occurred in two large-scale development projects.
We find that data used for previous studies is not appropriate for our investigation. The SIR repository [9] provides traces of small programs into which defects have been injected. However, as
pointed out in [10], there is no strong argument as to why results obtained on specifically designed
programs with artificial defects should necessarily transfer to realistic software development projects
with actual defects. The Cooperative Bug Isolation project [11], on the other hand, collects execution
data from real applications, but records only a random sample of 1% of the executed code positions;
complete execution traces cannot be reconstructed. Therefore, we use the development history of
two large-scale open source development projects, AspectJ and Rhino, as gathered in [12].
Data set. From Rhino's and AspectJ's bug databases, we select defects that are reproducible by
a test case and identify corresponding revisions in the source code repository. For such revisions,
the test code contains a test case that fails in one revision, but passes in the following revision. We
use the code positions that were modified between the two revisions as ground truth for the defective
code positions D. For AspectJ, these are one or two lines of code; the Rhino project contains larger
code changes. For each such revision, traces T of passing test cases are recorded on a line number
basis. In the same manner, the failing trace t (in which the defective code is to be identified) is
recorded.
The AspectJ data set consists of 41 defective revisions and a total of 45 failing traces. Each failing
trace has a length of up to 2,000,000 executed statements covering approx. 10,000 different code
positions (of the 75,000 lines in the project), spread across 300 to 600 files and 1,000 to 4,000
procedures. For each revision, we recorded 100 randomly selected valid test cases (drawn out of
approx. 1000).
Rhino consists of 15 defective revisions with one failing trace per bug. Failing traces
have an average length of 3,500,000 executed statements, covering approx. 2,000 of 38,000
[Figure 4 residue omitted: six recall-versus-top-k% panels (AspectJ top row, Rhino bottom row;
h = 0, h = 1, h = 10) comparing n-gram Bernoulli Graph, n-gram Bernoulli Fragment, n-gram
Multinomial, Unigram Multinomial, Tarantula, and Delta LDA.]
Figure 4: Recall of defective code positions within the 1% highest scored statements for AspectJ
(top) and Rhino (bottom), for windows of h = 0, h = 1, and h = 10 code lines.
code positions, spread across 70 files and 650 procedures. We randomly selected 100 of
the 1500 valid traces for each revision as training data. Both data sets are available at
http://www.mpi-inf.mpg.de/~dietz/debugging.html.
Evaluation criterion. Following the evaluation in [1], we evaluate how well the models are able
to guide the user into the vicinity of a defective code position. The models return a ranked list of
code positions. Envisioning that the developer can navigate from the ranking into the source code
to inspect a code line within its context, we evaluate the rank k at which a line of code occurs that
lies within a window of ±h lines of code of a defective line. We plot relative ranks; that is, absolute
ranks divided by the number of covered code lines, corresponding to the fraction of code that the
developer has to walk through in order to find the defect. We examine the recall@k%, that is the
fraction of successfully localized defects over the fraction of code the user has to inspect. We expect
a typical developer to inspect the top 0.25% of the ranking, corresponding to approximately 25 ranks
for AspectJ.
Neither the AUC nor the Normalized Discounted Cumulative Gain (NDCG) appropriately measures
performance in our application. AUC does not allow for a cut-off rank; NDCG will inappropriately
reward cases in which many statements in a defect's vicinity are ranked highly.
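The evaluation criterion can be made concrete with a small sketch (ours; the function names are not from the paper):

```python
# Sketch of the evaluation criterion: the relative rank at which the
# ranking first hits a line within +/-h of a defective line, and the
# recall over a set of revisions given a relative-rank cutoff k.
def relative_rank(ranking, defects, h):
    """Fraction of the ranking a developer must inspect to reach a defect."""
    for i, s in enumerate(ranking):
        if any(abs(s - d) <= h for d in defects):
            return (i + 1) / len(ranking)
    return 1.0

def recall_at(relative_ranks, k):
    """Fraction of revisions localized within the top k of the ranking."""
    return sum(r <= k for r in relative_ranks) / len(relative_ranks)
```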
Reference methods. In order to study the helpfulness of each generative model, we evaluate
smoothed models with maximum context length N = 5 for each of the multinomial, Bernoulli fragment,
and Bernoulli graph models. We compare these to the unigram multinomial model and Tarantula. Tuning
and prediction for the reference methods follow Section 2. In addition, we compare to the latent
variable model Delta LDA with nine usage topics and one bug topic, α = 0.5, β = 0.1, and 50
sampling iterations.
Results. The results are presented in Figure 4. The Bernoulli graph model is always ahead of the
reference methods that have a closed form solution in the top 0.25% and top 0.5% of the ranking.
This improvement is significant at the 0.05 level in comparison to Tarantula for h = 1 and h = 10. It
is significantly better than the n-gram multinomial model for h = 1. Although increasing h makes
the prediction problem generally easier, only the Bernoulli graph and the multinomial n-gram models
play to their strengths.
A comparison by Area under the Curve in top 0.25% and top 0.5% indicates that the Bernoulli
graph is more than twice as effective as Tarantula for the data sets for h = 1 and h = 10. Using the
Bernoulli graph model, a developer finds nearly every second bug in the top 1% in both data sets,
where ranking a failing trace takes between 10 and 20 seconds.
According to a paired t-test at the 0.05 level, the Bernoulli graph model's prediction performance is
significantly better than Delta LDA for the Rhino data set. No significant difference is found for the
AspectJ data set, but Delta LDA takes much longer to compute (approx. one hour versus 20 seconds)
since its parameters cannot be obtained in closed form but require iterative sampling.
Analysis. Most revisions in our data sets had bugs that were equally difficult for most of the
models. From revisions where one model drastically outperformed the others we identified different
categories of suspicious code areas. In some cases, the defective procedures were executed in very
few or no passing traces; we refer to such code as insufficiently covered. Another category refers
to defective code lines in the vicinity of branching points such as if-statements. If code before the
branch point is executed in many passing traces, but code in one of the branches only rarely, we call
this a suspicious branch point.
The Bernoulli fragment model treats both kinds of suspicious code areas in a similar way. They
have a different effect on the predictive Beta posteriors in the Bernoulli graph model: insufficient
coverage decreases the confidence, while suspicious branch points decrease the mean. The Beta priors
α and β play a crucial role in weighting these two types of potential bugs in the ranking and encode
prior beliefs about expecting one or the other. Our hyperparameter estimation procedure usually
selects α = 1.25 and β = 1.03 for all context lengths.
Revisions in which Bernoulli fragment outperformed Bernoulli graph contained defects in insufficiently covered areas. Presumably, Bernoulli graph identified many suspicious branching points, and
assigned them a higher score. Revisions in which Bernoulli graph outperformed Bernoulli fragment
contained bugs at suspicious branching points.
In contrast to the Bernoulli-style models, the multinomial models take the number of occurrences of
a code position within a trace into account. Presumably, multiple occurrences of code lines within a
trace do not indicate their defectiveness.
5 Conclusions
We introduced the Bernoulli graph model, a generative model that implements a distribution over
program executions. The Bernoulli graph model generates n-gram execution graphs. Compared
to execution traces, execution graphs abstract from the number of iterations that sequences of code
positions have been executed for. The model allows for Bayesian inference of the likelihood of
transitional patterns in a new trace, given execution traces of passing test cases. We evaluated the
model and several less complex reference methods with respect to their ability to localize defects
that occurred in the development history of AspectJ and Rhino. Our evaluation does not rely on
artificially injected defects.
We find that the Bernoulli graph model outperforms Delta LDA on Rhino and performs as well as
Delta LDA on the AspectJ project, but in substantially less time. Delta LDA is based on a
multinomial unigram model, which performs worst in our study. This gives rise to the conjecture
that Delta LDA might benefit from replacing the multinomial model with a Bernoulli graph model.
This conjecture would need to be studied empirically.
The Bernoulli graph model outperforms the reference models with closed-form solution with respect
to giving a high rank to code positions that lie in close vicinity of the actual defect. In order to find
every second defect in the release history of Rhino, the Bernoulli graph model walks the developer
through approximately 0.5% of the code positions and 1% in the AspectJ project.
Acknowledgements
Laura Dietz is supported by a scholarship of Microsoft Research Cambridge. Andreas Zeller and
Tobias Scheffer are supported by a Jazz Faculty Grant.
References
[1] James A. Jones and Mary J. Harrold. Empirical evaluation of the Tarantula automatic fault-localization technique. In Proceedings of the International Conference on Automated Software
Engineering, 2005.
[2] Ben Liblit, Mayur Naik, Alice X. Zheng, Alex Aiken, and Michael I. Jordan. Scalable statistical bug isolation. In Proceedings of the Conference on Programming Language Design and
Implementation, 2005.
[3] Trishul Chilimbi, Ben Liblit, Krishna Mehra, Aditya Nori, and Kapil Vaswani. Holmes: Effective statistical debugging via efficient path profiling. In Proceedings of the International
Conference on Software Engineering, 2009.
[4] David Andrzejewski, Anne Mulhern, Ben Liblit, and Xiaojin Zhu. Statistical debugging using
latent topic models. In Proceedings of the European Conference on Machine Learning, 2007.
[5] David M. Blei, Andrew Y. Ng, and Michael I. Jordan. Latent Dirichlet allocation. Journal of
Machine Learning Research, 3:993?1022, 2003.
[6] Tom Minka and John Winn. Gates. In Advances in Neural Information Processing Systems,
2008.
[7] Hal Daume III. Hbc: Hierarchical Bayes Compiler. http://hal3.name/HBC, 2007.
[8] Andrew McCallum and Kamal Nigam. A comparison of event models for Naive Bayes text
classification. In Proceedings of the AAAI Workshop on Learning for Text Categorization,
1998.
[9] Hyunsook Do, Sebastian Elbaum, and Gregg Rothermel. Supporting controlled experimentation with testing techniques: An infrastructure and its potential impact. Empirical Software
Engineering, 10(4):405-435, October 2005.
[10] Lionel C. Briand. A critical analysis of empirical research in software testing. In Proceedings
of the Symposium on Empirical Software Engineering and Measurement, 2007.
[11] Ben Liblit, Mayur Naik, Alice X. Zheng, Alex Aiken, and Michael I. Jordan. Public deployment of cooperative bug isolation. In Proceedings of the Workshop on Remote Analysis and
Measurement of Software Systems, 2004.
[12] Valentin Dallmeier and Thomas Zimmermann. Extraction of bug localization benchmarks from
history. In Proceedings of the International Conference on Automated Software Engineering,
2007.
text:3 acknowledgement:1 relative:1 sir:1 expect:1 generation:3 interesting:1 allocation:3 chilimbi:2 localized:2 versus:1 consistent:1 trishul:1 supported:2 last:1 bern:1 drastically:1 guide:1 allow:1 institute:1 explaining:1 absolute:1 benefit:1 curve:1 transition:2 gram:14 valid:2 author:1 collection:1 far:1 correlate:1 reconstructed:1 uni:3 reveals:1 summing:1 discriminative:1 latent:6 iterative:1 why:1 reality:1 learn:3 maxk6:1 transfer:1 expanding:1 nigam:1 expansion:2 complex:2 necessarily:1 artificially:1 domain:1 inappropriately:1 european:1 enqueue:1 main:1 spread:2 s2:5 hyperparameters:5 scored:3 daume:1 complementary:1 defective:14 referred:2 scheffer:3 fashion:1 fails:3 position:50 guiding:1 lie:2 governed:1 weighting:1 erroneous:2 unigram:8 navigate:2 specific:1 list:2 workshop:2 false:2 importance:2 execution:47 conditioned:3 occurring:1 easier:1 lt:8 likely:2 aditya:1 contained:3 truth:1 determines:2 relies:1 chance:1 toss:1 shared:1 change:3 determined:2 specifically:1 typical:1 rothermel:1 flag:2 total:1 experimental:1 indicating:2 select:1 rarely:1 support:2 evaluate:4 |
Efficient Learning using Forward-Backward Splitting
John Duchi
University of California Berkeley
[email protected]
Yoram Singer
Google
[email protected]
Abstract
We describe, analyze, and experiment with a new framework for empirical loss
minimization with regularization. Our algorithmic framework alternates between
two phases. On each iteration we first perform an unconstrained gradient descent
step. We then cast and solve an instantaneous optimization problem that trades off
minimization of a regularization term while keeping close proximity to the result
of the first phase. This yields a simple yet effective algorithm for both batch penalized risk minimization and online learning. Furthermore, the two phase approach
enables sparse solutions when used in conjunction with regularization functions
that promote sparsity, such as ?1 . We derive concrete and very simple algorithms
for minimization of loss functions with ?1 , ?2 , ?22 , and ?? regularization. We
also show how to construct efficient algorithms for mixed-norm ?1 /?q regularization. We further extend the algorithms and give efficient implementations for very
high-dimensional data with sparsity. We demonstrate the potential of the proposed
framework in experiments with synthetic and natural datasets.
1 Introduction
Before we begin, we establish notation for this paper. We denote scalars by lower case letters and
vectors by lower case bold letters, e.g. w. The inner product of vectors u and v is denoted $\langle u, v\rangle$. We use $\|x\|_p$ to denote the p-norm of the vector x and $\|x\|$ as a shorthand for $\|x\|_2$.
The focus of this paper is an algorithmic framework for regularized convex programming to minimize the following sum of two functions:
$$f(w) + r(w), \qquad (1)$$

where both f and r are convex functions bounded below (so without loss of generality we assume they map into $\mathbb{R}_+$). Often the function f is an empirical loss and takes the form $\sum_{i \in S} \ell_i(w)$ for a sequence of loss functions $\ell_i : \mathbb{R}^n \to \mathbb{R}_+$, and r(w) is a regularization term that penalizes excessively complex vectors, for instance $r(w) = \lambda \|w\|_p$. This task is prevalent in machine
learning, in which a learning problem for decision and prediction problems is cast as a convex
optimization problem. To that end, we propose a general and intuitive algorithm to minimize Eq. (1),
focusing especially on derivations for and the use of non-differentiable regularization functions.
Many methods have been proposed to minimize general convex functions such as that in Eq. (1).
One of the most general is the subgradient method [1], which is elegant and very simple. Let $\partial f(w)$ denote the subgradient set of f at w, namely, $\partial f(w) = \{g \mid \forall v : f(v) \ge f(w) + \langle g, v - w\rangle\}$. Subgradient procedures then minimize the function f(w) by iteratively updating the parameter vector w according to the update rule $w_{t+1} = w_t - \eta_t g_t^f$, where $\eta_t$ is a constant or diminishing step size and $g_t^f \in \partial f(w_t)$ is an arbitrary vector from the subgradient set of f evaluated at $w_t$. A slightly more general method than the above is the projected gradient method, which iterates

$$w_{t+1} = \Pi_\Omega\bigl(w_t - \eta_t g_t^f\bigr) = \operatorname*{argmin}_{w \in \Omega} \; \tfrac{1}{2}\bigl\|w - (w_t - \eta_t g_t^f)\bigr\|^2,$$
where $\Pi_\Omega(w)$ is the Euclidean projection of w onto the set $\Omega$. Standard results [1] show that the (projected) subgradient method converges at a rate of $O(1/\epsilon^2)$, or equivalently that the error $f(w) - f(w^\star) = O(1/\sqrt{T})$, given some simple assumptions on the boundedness of the subdifferential set and $\Omega$ (we have omitted constants dependent on $\|\partial f\|$ or $\dim(\Omega)$). Using the subgradient method to minimize Eq. (1) gives simple iterates of the form $w_{t+1} = w_t - \eta_t g_t^f - \eta_t g_t^r$, where $g_t^r \in \partial r(w_t)$.
A common problem in subgradient methods is that if r or f is non-differentiable, the iterates of the
subgradient method are very rarely at the points of non-differentiability. In the case of regularization
functions such as $r(w) = \|w\|_1$, however, these points (zeros in the case of the $\ell_1$-norm) are often the true minima of the function. Furthermore, with $\ell_1$ and similar penalties, zeros are desirable
solutions as they tend to convey information about the structure of the problem being solved [2, 3].
There has been a significant amount of work related to minimizing Eq. (1), especially when the
function r is a sparsity-promoting regularizer. We can hardly do justice to the body of prior work,
and we provide a few references here to the research we believe is most directly related. The approach we pursue below is known as "forward-backward splitting" or a composite gradient method
in the optimization literature and has been independently suggested by [4] in the context of sparse
signal reconstruction, where $f(w) = \|y - Aw\|^2$, though they note that the method can apply to
general convex f . [5] give proofs of convergence for forward-backward splitting in Hilbert spaces,
though without establishing strong rates of convergence. The motivation of their paper is signal
reconstruction as well. Similar projected-gradient methods, when the regularization function r is no
longer part of the objective function but rather cast as a constraint so that $r(w) \le \lambda$, are also well known [1]. [6] give a general and efficient projected gradient method for $\ell_1$-constrained problems.
There is also a body of literature on regret analysis for online learning and online convex programming with convex constraints upon which we build [7, 8]. Learning sparse models generally is of
great interest in the statistics literature, specifically in the context of consistency and recovery of
sparsity patterns through $\ell_1$ or mixed-norm regularization across multiple tasks [2, 3, 9].
In this paper, we describe a general gradient-based framework, which we call FOBOS, and analyze
it in batch and online learning settings. The paper is organized as follows. In the next section, we
begin by introducing and formally defining the method, giving some simple preliminary analysis.
We follow the introduction by giving in Sec. 3 rates of convergence for batch (offline) optimization.
We then provide bounds for online convex programming and give a convergence rate for stochastic
gradient descent. To demonstrate the simplicity and usefulness of the framework, we derive in Sec. 4
algorithms for several different choices of the regularizing function r. We extend these methods to
be efficient in very high dimensional settings where the input data is sparse in Sec. 5. Finally,
we conclude in Sec. 6 with experiments examining various aspects of the proposed framework, in
particular the runtime and sparsity selection performance of the derived algorithms.
2 Forward-Looking Subgradients and Forward-Backward Splitting
In this section we introduce our algorithm, laying the framework for its strategy for online or batch
convex programming. We originally named the algorithm Folos as an abbreviation for FOrwardLOoking Subgradient. Our algorithm is a distillation of known approaches for convex programming, in particular the Forward-Backward Splitting method. In order not to confuse readers of the
early draft, we attempt to stay close to the original name and use the acronym FOBOS rather than Fobas. FOBOS is motivated by the desire to have the iterates $w_t$ attain points of non-differentiability
of the function r. The method alleviates the problems of non-differentiability in cases such as
?1 -regularization by taking analytical minimization steps interleaved with subgradient steps. Put
informally, F OBOS is analogous to the projected subgradient method, but replaces or augments the
projection step with an instantaneous minimization problem for which it is possible to derive a
closed form solution. FOBOS is succinct as each iteration consists of the following two steps:
$$w_{t+\frac{1}{2}} = w_t - \eta_t g_t^f \qquad (2)$$
$$w_{t+1} = \operatorname*{argmin}_w \; \tfrac{1}{2}\bigl\|w - w_{t+\frac{1}{2}}\bigr\|^2 + \eta_{t+\frac{1}{2}}\, r(w). \qquad (3)$$
In the above, $g_t^f$ is a vector in $\partial f(w_t)$ and $\eta_t$ is the step size at time step t of the algorithm. The actual value of $\eta_t$ depends on the specific setting and analysis. The first step thus simply amounts to an unconstrained subgradient step with respect to the function f. In the second step we find a
new vector that interpolates between two goals: (i) stay close to the interim vector $w_{t+\frac{1}{2}}$, and (ii) attain a low complexity value as expressed by r. Note that the regularization function is scaled by an interim step size, denoted $\eta_{t+\frac{1}{2}}$. The analyses we describe in the sequel determine the specific value of $\eta_{t+\frac{1}{2}}$, which is either $\eta_t$ or $\eta_{t+1}$. A key property of the solution of Eq. (3) is the necessary condition for optimality, which gives the reason behind the name FOBOS. Namely, the zero vector must belong to the subgradient set of the objective at the optimum $w_{t+1}$, that is,
$$0 \in \partial\left\{ \tfrac{1}{2}\bigl\|w - w_{t+\frac{1}{2}}\bigr\|^2 + \eta_{t+\frac{1}{2}}\, r(w) \right\}\Big|_{w = w_{t+1}}.$$
Since $w_{t+\frac{1}{2}} = w_t - \eta_t g_t^f$, the above property amounts to $0 \in w_{t+1} - w_t + \eta_t g_t^f + \eta_{t+\frac{1}{2}}\, \partial r(w_{t+1})$. This property implies that so long as we choose $w_{t+1}$ to be the minimizer of Eq. (3), we are guaranteed to obtain a vector $g_{t+1}^r \in \partial r(w_{t+1})$ such that $0 = w_{t+1} - w_t + \eta_t g_t^f + \eta_{t+\frac{1}{2}}\, g_{t+1}^r$. We can understand this as an update scheme where the new weight vector $w_{t+1}$ is a linear combination of the previous weight vector $w_t$, a vector from the subgradient set of f at $w_t$, and a vector from the subgradient of r evaluated at the yet to be determined $w_{t+1}$. To recap, we can write $w_{t+1}$ as

$$w_{t+1} = w_t - \eta_t g_t^f - \eta_{t+\frac{1}{2}}\, g_{t+1}^r, \qquad (4)$$

where $g_t^f \in \partial f(w_t)$ and $g_{t+1}^r \in \partial r(w_{t+1})$. Solving Eq. (3) with r above has two main benefits. First, from an algorithmic standpoint, it enables sparse solutions at virtually no additional
computational cost. Second, the forward-looking gradient allows us to build on existing analyses
and show that the resulting framework enjoys the formal convergence properties of many existing
gradient-based and online convex programming algorithms.
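The two-step iteration in Eqs. (2) and (3) is short enough to sketch directly. The following is an illustrative sketch, not the authors' code; the names `fobos_step` and `prox_l2_squared` are ours, and the proximal step is instantiated with the closed-form $\ell_2^2$ shrinkage that Sec. 4 derives as Eq. (7).

```python
def fobos_step(w, grad, eta, prox):
    """One FOBOS iteration: unconstrained subgradient step (Eq. 2)
    followed by a closed-form proximal step for the regularizer (Eq. 3)."""
    w_half = [wj - eta * gj for wj, gj in zip(w, grad)]  # w_{t+1/2}
    return prox(w_half, eta)

def prox_l2_squared(lam):
    """Returns the minimizer of 0.5*||w - v||^2 + (eta*lam/2)*||w||^2,
    i.e. the shrinkage w = v / (1 + eta*lam) of Eq. (7)."""
    def prox(v, eta):
        return [vj / (1.0 + eta * lam) for vj in v]
    return prox

# One step on f(w) = 0.5*||w||^2, whose gradient at w is w itself.
w = [1.0, -2.0]
w_next = fobos_step(w, grad=w, eta=0.5, prox=prox_l2_squared(lam=1.0))
```

Any of the regularizers derived in Sec. 4 plugs in as `prox`; only the closed-form second step changes.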
3 Convergence and Regret Analysis of FOBOS
In this section we build on known results while using the forward-looking property of FOBOS to provide convergence rate and regret analysis. To derive convergence rates we set $\eta_{t+\frac{1}{2}}$ properly. As we show in the sequel, it is sufficient to set $\eta_{t+\frac{1}{2}}$ to $\eta_t$ or $\eta_{t+1}$, depending on whether we are doing online or batch optimization, in order to obtain convergence and low regret bounds. We provide
proofs of all theorems in this paper, as well as a few useful technical lemmas, in the appendices,
as the main foci of the paper are the simplicity of the method and derived algorithms and their
experimental usefulness. The overall proof techniques all rely on the forward-looking property in
Eq. (4) and moderately straightforward arguments with convexity and subgradient calculus.
Throughout the section we denote by $w^\star$ the minimizer of $f(w) + r(w)$. The first bounds we present rely only on the assumption that $\|w^\star\| \le D$, though they are not as tight as those in the sequel. In what follows, define $\|\partial f(w)\| \triangleq \sup_{g \in \partial f(w)} \|g\|$. We begin by deriving convergence results under the fairly general assumption [10, 11] that the subgradients are bounded as follows:

$$\|\partial f(w)\|^2 \le A f(w) + G^2, \qquad \|\partial r(w)\|^2 \le A r(w) + G^2. \qquad (5)$$
For example, any Lipschitz loss (such as the logistic or hinge/SVM) satisfies the above with A = 0
and G equal to the Lipschitz constant; least squares satisfies Eq. (5) with G = 0 and A = 4.
Theorem 1. Assume the following hold: (i) the norm of any subgradient from $\partial f$ and the norm of any subgradient from $\partial r$ are bounded as in Eq. (5), (ii) the norm of $w^\star$ is less than or equal to D, (iii) $r(0) = 0$, and (iv) $\frac{1}{2}\eta_t \le \eta_{t+1} \le \eta_t$. Then for a constant $c \le 4$, with $w_1 = 0$ and $\eta_{t+\frac{1}{2}} = \eta_{t+1}$,

$$\sum_{t=1}^{T} \bigl[ \eta_t \bigl( (1 - cA\eta_t) f(w_t) - f(w^\star) \bigr) + \eta_t \bigl( (1 - cA\eta_t) r(w_t) - r(w^\star) \bigr) \bigr] \;\le\; D^2 + 7G^2 \sum_{t=1}^{T} \eta_t^2.$$
The proof of the theorem is in Appendix A. We also provide in the appendix a few useful corollaries. We provide one corollary below as it underscores that the rate of convergence is $O(1/\sqrt{T})$.
Corollary 2 (Fixed step rate). Assume that the conditions of Thm. 1 hold and that we run FOBOS for a predefined T iterations with $\eta_t = \frac{D}{G\sqrt{7T}}$ and that $1 - \frac{cAD}{G\sqrt{7T}} > 0$. Then

$$\min_{t \in \{1,\dots,T\}} f(w_t) + r(w_t) \;\le\; \frac{1}{T}\sum_{t=1}^{T} f(w_t) + r(w_t) \;\le\; \frac{f(w^\star) + r(w^\star)}{1 - \frac{cAD}{G\sqrt{7T}}} + \frac{3DG}{\sqrt{T}\left(1 - \frac{cAD}{G\sqrt{7T}}\right)}.$$
Bounds of the form we present above, where the point minimizing f (wt ) + r(wt ) converges rather
than the last point wT , are standard in subgradient optimization. This occurs since there is no way
to guarantee a descent direction when using arbitrary subgradients (see, e.g., [12, Theorem 3.2.2]).
We next derive regret bounds for FOBOS in online settings in which we are given a sequence of functions $f_t : \mathbb{R}^n \to \mathbb{R}$. The goal is for the sequence of predictions $w_t$ to attain low regret when compared to a single optimal predictor $w^\star$. Formally, let $f_t(w)$ denote the loss suffered on the tth input loss function when using a predictor w. The regret of an online algorithm which uses $w_1, \dots, w_t, \dots$ as its predictors w.r.t. a fixed predictor $w^\star$ while using a regularization function r is
$$R_{f+r}(T) = \sum_{t=1}^{T} \bigl[ f_t(w_t) + r(w_t) - \bigl( f_t(w^\star) + r(w^\star) \bigr) \bigr].$$
Ideally, we would like to achieve 0 regret to a stationary w? for arbitrary length sequences.
To achieve an online bound for a sequence of convex functions $f_t$, we modify arguments of [7]. We begin with a slightly different assignment for $\eta_{t+\frac{1}{2}}$: specifically, we set $\eta_{t+\frac{1}{2}} = \eta_t$. We have the following theorem, whose proof we provide in Appendix B.

Theorem 3. Assume that $\|w_t - w^\star\| \le D$ for all iterations and that the norms of the subgradient sets $\partial f_t$ and $\partial r$ are bounded above by G. Let $c > 0$ be an arbitrary scalar. Then the regret bound of FOBOS with $\eta_t = c/\sqrt{t}$ satisfies

$$R_{f+r}(T) \le GD + \left( \frac{D^2}{2c} + 7G^2 c \right) \sqrt{T}.$$
For slightly technical reasons, the assumption on the boundedness of wt and the subgradients is not
actually restrictive (see Appendix A for details). It is possible to obtain an O(log T ) regret bound
for FOBOS when the sequence of loss functions $f_t(\cdot)$ or the function $r(\cdot)$ is strongly convex, similar to [8], by using the curvature of $f_t$ or r. While we can extend these results to FOBOS, we omit the extension for lack of space (though we do perform some experiments with such functions). Using the regret analysis for online learning, we can also give convergence rates for stochastic FOBOS, which are $O(1/\sqrt{T})$. Further details are given in Appendix B and the long version of this paper [13].
4 Derived Algorithms
We now give a few variants of FOBOS by considering different regularization functions. The emphasis of the section is on non-differentiable regularization functions that lead to sparse solutions. We also give simple extensions to apply FOBOS to mixed-norm regularization [9] that build on the first part of this section. For lack of space, we mostly give the resulting updates, skipping technical derivations. We would like to note that some of the following results were tacitly given in [4].
First, we make a few changes to notation. To simplify our derivations, we denote by v the vector $w_{t+\frac{1}{2}} = w_t - \eta_t g_t^f$ and let $\tilde\lambda$ denote $\eta_{t+\frac{1}{2}} \lambda$. Using this notation the problem given in Eq. (3) can be rewritten as $\min_w \frac{1}{2}\|w - v\|^2 + \tilde\lambda\, r(w)$. Lastly, we let $[z]_+$ denote $\max\{0, z\}$.
FOBOS with $\ell_1$ regularization: The update obtained by choosing $r(w) = \lambda \|w\|_1$ is simple and intuitive. The objective is decomposable into a sum of 1-dimensional convex problems of the form $\min_w \frac{1}{2}(w - v)^2 + \tilde\lambda |w|$. As a result, the components of the optimal solution $w^\star = w_{t+1}$ are computed from $w_{t+\frac{1}{2}}$ as

$$w_{t+1,j} = \operatorname{sign}\bigl(w_{t+\frac{1}{2},j}\bigr)\Bigl[\bigl|w_{t+\frac{1}{2},j}\bigr| - \tilde\lambda\Bigr]_+ = \operatorname{sign}\bigl(w_{t,j} - \eta_t g_{t,j}^f\bigr)\Bigl[\bigl|w_{t,j} - \eta_t g_{t,j}^f\bigr| - \eta_{t+\frac{1}{2}}\lambda\Bigr]_+ \qquad (6)$$

Note that this update leads to sparse solutions: whenever the absolute value of a component of $w_{t+\frac{1}{2}}$ is smaller than $\tilde\lambda$, the corresponding component in $w_{t+1}$ is set to zero. Eq. (6) gives a simple online
and offline method for minimizing a convex f with $\ell_1$ regularization. [10] recently proposed and analyzed the same update, terming it the "truncated gradient," though the analysis presented here stems from a more general framework. This update can also be implemented very efficiently when the support of $g_t^f$ is small [10], but we defer details to Sec. 5, where we describe a unified view that facilitates an efficient implementation for all the regularization functions discussed in this paper.
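In code, Eq. (6) is coordinate-wise soft-thresholding; a minimal sketch (the function name is ours):

```python
def prox_l1(v, lam_tilde):
    """Componentwise minimizer of 0.5*(w_j - v_j)^2 + lam_tilde*|w_j| (Eq. 6):
    sign(v_j) * [|v_j| - lam_tilde]_+ ."""
    out = []
    for vj in v:
        mag = abs(vj) - lam_tilde
        out.append((1.0 if vj >= 0 else -1.0) * mag if mag > 0 else 0.0)
    return out

# Components of w_{t+1/2} with magnitude below lam_tilde are zeroed.
w_next = prox_l1([0.3, -1.5, 0.05], lam_tilde=0.1)  # third entry becomes 0.0
```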
FOBOS with $\ell_2^2$ regularization: When $r(w) = \frac{\lambda}{2}\|w\|^2$, we obtain a very simple optimization problem, $\min_w \frac{1}{2}\|w - v\|^2 + \frac{\tilde\lambda}{2}\|w\|^2$. Differentiating the objective and setting the result equal to zero, we have $w^\star - v + \tilde\lambda w^\star = 0$, which, using the original notation, yields the update

$$w_{t+1} = \frac{w_t - \eta_t g_t^f}{1 + \tilde\lambda}. \qquad (7)$$
Informally, the update simply shrinks wt+1 back toward the origin after each gradient-descent step.
FOBOS with $\ell_2$ regularization: A lesser used regularization function is the $\ell_2$ norm of the weight vector. By setting $r(w) = \lambda\|w\|$ we obtain the following problem: $\min_w \frac{1}{2}\|w - v\|^2 + \tilde\lambda\|w\|$. The solution of the above problem must be in the direction of v and takes the form $w^\star = s v$ where $s \ge 0$. The resulting second step of the FOBOS update with $\ell_2$ regularization amounts to

$$w_{t+1} = \left[1 - \frac{\tilde\lambda}{\|w_{t+\frac{1}{2}}\|}\right]_+ w_{t+\frac{1}{2}} = \left[1 - \frac{\tilde\lambda}{\|w_t - \eta_t g_t^f\|}\right]_+ \bigl(w_t - \eta_t g_t^f\bigr).$$

This $\ell_2$-regularization results in a zero weight vector under the condition that $\|w_t - \eta_t g_t^f\| \le \tilde\lambda$. This condition is rather more stringent for sparsity than the condition for $\ell_1$, so it is unlikely to hold in high dimensions. However, it does constitute a very important building block when using a mixed $\ell_1/\ell_2$-norm as the regularization, as we show in the sequel.
FOBOS with $\ell_\infty$ regularization: We now turn to a less explored regularization function, the $\ell_\infty$ norm of w. Our interest stems from the recognition that there are settings in which it is desirable to consider blocks of variables as a group (see below). We wish to obtain an efficient solution to

$$\min_w \; \tfrac{1}{2}\|w - v\|^2 + \tilde\lambda \|w\|_\infty. \qquad (8)$$

A solution to the dual form of Eq. (8) is well established. Recalling that the conjugate of the quadratic function is a quadratic function and the conjugate of the $\ell_\infty$ norm is the $\ell_1$ barrier function, we immediately obtain that the dual of the problem in Eq. (8) is $\max_\alpha -\frac{1}{2}\|\alpha - v\|^2$ s.t. $\|\alpha\|_1 \le \tilde\lambda$. Moreover, the vector of dual variables $\alpha$ satisfies the relation $\alpha = v - w$. [6] describes a linear time algorithm for finding the optimal $\alpha$ for this $\ell_1$-constrained projection, and the analysis there shows the optimal solution to Eq. (8) is $w_{t+1,j} = \operatorname{sign}(w_{t+\frac{1}{2},j}) \min\{|w_{t+\frac{1}{2},j}|, \theta\}$. The optimal solution satisfies $\theta = 0$ iff $\|w_{t+\frac{1}{2}}\|_1 \le \tilde\lambda$, and otherwise $\theta > 0$ and can be found in O(n) steps.
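Using the relation $\alpha = v - w$, the $\ell_\infty$ proximal step can be sketched as an $\ell_1$-ball projection followed by a subtraction. The sketch below uses a simple $O(n \log n)$ sort-based projection rather than the linear-time algorithm of [6]; the function names are ours.

```python
def project_l1_ball(v, radius):
    """Euclidean projection of v onto {a : ||a||_1 <= radius} (sort-based)."""
    if sum(abs(x) for x in v) <= radius:
        return list(v)
    u = sorted((abs(x) for x in v), reverse=True)
    cum, theta = 0.0, 0.0
    for k, uk in enumerate(u, start=1):
        cum += uk
        t = (cum - radius) / k
        if uk - t <= 0:  # remaining coordinates would be thresholded to zero
            break
        theta = t
    return [(1.0 if x >= 0 else -1.0) * max(abs(x) - theta, 0.0) for x in v]

def prox_linf(v, lam_tilde):
    """Minimizer of 0.5*||w - v||^2 + lam_tilde*||w||_inf, via w = v - alpha
    with alpha the projection of v onto the l1 ball of radius lam_tilde."""
    alpha = project_l1_ball(v, lam_tilde)
    return [vj - aj for vj, aj in zip(v, alpha)]

# Large components of v are clipped to a common threshold theta.
w_next = prox_linf([3.0, 1.0], lam_tilde=2.0)  # both entries clipped to 1.0
```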
Mixed norms: We saw above that when using either the $\ell_2$ or the $\ell_\infty$ norm as the regularizer we obtain an all-zeros vector if $\|w_{t+\frac{1}{2}}\|_2 \le \tilde\lambda$ or $\|w_{t+\frac{1}{2}}\|_1 \le \tilde\lambda$, respectively. This phenomenon can be useful. For example, in multiclass categorization problems each class s may be associated with a different weight vector $w^s$. The prediction for an instance x is a vector $\bigl[\langle w^1, x\rangle, \dots, \langle w^k, x\rangle\bigr]$, where k is the number of classes, and the predicted class is $\operatorname{argmax}_j \langle w^j, x\rangle$. Since all the weight vectors operate over the same instance space, it may be beneficial to tie the weights corresponding to the same input feature: we would like to zero the row of weights $w_j^1, \dots, w_j^k$ simultaneously.
Formally, let W represent an $n \times k$ matrix where the $j$th column of the matrix is the weight vector $w^j$ associated with class j. Then the $i$th row contains the weight of the $i$th feature for each class. The mixed $\ell_r/\ell_s$-norm [9] of W is obtained by computing the $\ell_s$-norm of each row of W and then applying the $\ell_r$-norm to the resulting n-dimensional vector, for instance, $\|W\|_{\ell_1/\ell_\infty} = \sum_{i=1}^n \max_j |W_{i,j}|$. In a mixed-norm regularized optimization problem, we seek the minimizer of $f(W) + \lambda \|W\|_{\ell_r/\ell_s}$. Given the specific variants of norms described above, the FOBOS update for the $\ell_1/\ell_\infty$ and the $\ell_1/\ell_2$ mixed-norms is readily available. Let $\bar w^i$ be the $i$th row of W. Analogously to standard norm-based regularization, we use the shorthand $V = W_{t+\frac{1}{2}}$. For the $\ell_1/\ell_p$ mixed-norm, we need to solve
$$\min_W \; \tfrac{1}{2}\|W - V\|_{\mathrm{Fr}}^2 + \tilde\lambda \|W\|_{\ell_1/\ell_p} \;=\; \sum_{i=1}^n \left[ \min_{\bar w^i} \; \tfrac{1}{2}\bigl\|\bar w^i - \bar v^i\bigr\|^2 + \tilde\lambda \bigl\|\bar w^i\bigr\|_p \right] \qquad (9)$$
where $\bar v^i$ is the $i$th row of V. It is immediate to see that the problem given in Eq. (9) is decomposable into n separate problems of dimension k, each of which can be solved by the procedures described in the prequel. The end result of solving these types of mixed-norm problems is a sparse matrix with numerous zero rows. We demonstrate the merits of FOBOS with mixed-norms in Sec. 6.
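For p = 2, each row subproblem of Eq. (9) is exactly the $\ell_2$ shrinkage derived earlier, applied row by row; rows whose $\ell_2$ norm falls below $\tilde\lambda$ are zeroed as a group. A sketch (representing W as a list of rows; the function name is ours):

```python
import math

def prox_l1_l2_rows(V, lam_tilde):
    """Row-wise solution of Eq. (9) with p = 2: each row v_i of V is mapped
    to [1 - lam_tilde/||v_i||]_+ * v_i, so small-norm rows vanish entirely."""
    W = []
    for row in V:
        norm = math.sqrt(sum(x * x for x in row))
        scale = max(1.0 - lam_tilde / norm, 0.0) if norm > 0 else 0.0
        W.append([scale * x for x in row])
    return W

# The second row has l2 norm 0.5 < lam_tilde and is zeroed as a group.
W_next = prox_l1_l2_rows([[3.0, 4.0], [0.3, 0.4]], lam_tilde=1.0)
```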
5 Efficient implementation in high dimensions
In many settings, especially online learning, the weight vector $w_t$ and the gradients $g_t^f$ reside in a very high-dimensional space, but only a relatively small number of the components of $g_t^f$ are non-zero. Such settings are prevalent, for instance, in text-based applications: in text categorization, the full dimension corresponds to the dictionary or set of tokens that is being employed, while each gradient is typically computed from a single or a few documents, each of which contains words and bigrams constituting only a small subset of the full dictionary. The need to cope with gradient sparsity becomes further pronounced in mixed-norm problems, as a single component of the gradient may correspond to an entire row of W. Updating the entire matrix because a few entries of $g_t^f$ are non-zero is clearly undesirable. Thus, we would like to extend our methods to cope efficiently with gradient sparsity. For concreteness, we focus in this section on the efficient implementation of $\ell_1$, $\ell_2$, and $\ell_\infty$ regularization, since the extension to mixed-norms (as in the previous section) is straightforward. We postpone the proof of the following proposition to Appendix C.
Proposition 4. Let $w_T$ be the end result of solving a succession of T self-similar optimization problems for $t = 1, \dots, T$,

$$\text{P.1}: \quad w_t = \operatorname*{argmin}_w \; \tfrac{1}{2}\|w - w_{t-1}\|^2 + \lambda_t \|w\|_q. \qquad (10)$$

Let $w^\star$ be the optimal solution of the following optimization problem,

$$\text{P.2}: \quad w^\star = \operatorname*{argmin}_w \; \tfrac{1}{2}\|w - w_0\|^2 + \left( \sum_{t=1}^T \lambda_t \right) \|w\|_q. \qquad (11)$$

For $q \in \{1, 2, \infty\}$ the vectors $w_T$ and $w^\star$ are identical.
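For q = 1 the proposition can be checked numerically: T successive soft-thresholding steps with multipliers $\lambda_t$ land on the same point as a single step with $\sum_t \lambda_t$. An illustrative check:

```python
def soft_threshold(v, lam):
    """Prox of lam*||.||_1 at v (componentwise soft-thresholding)."""
    return [(1.0 if x >= 0 else -1.0) * max(abs(x) - lam, 0.0) for x in v]

w0 = [2.0, -0.6, 0.1]
lams = [0.2, 0.3, 0.1]

# P.1: a succession of prox steps, each starting from the previous result.
w = list(w0)
for lam in lams:
    w = soft_threshold(w, lam)

# P.2: a single prox step with the summed multiplier.
w_star = soft_threshold(w0, sum(lams))

assert all(abs(a - b) < 1e-12 for a, b in zip(w, w_star))
```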
The algorithmic consequence of Proposition 4 is that it is possible to perform a lazy update on each iteration by omitting the terms of $w_t$ (or whole rows of the matrix $W_t$ when using mixed-norms) that are outside the support of $g_t^f$, the gradient of the loss at iteration t. We do need to maintain the step sizes used on each iteration and have them readily available on future rounds when we newly update coordinates of w or W. Let $\Lambda_t$ denote the sum of the step sizes times regularization multipliers $\lambda\eta_\tau$ used from round 1 through t. Then a simple algebraic manipulation yields that instead of solving $w_{t+1} = \operatorname{argmin}_w \bigl\{ \frac{1}{2}\|w - w_t\|^2 + \lambda\eta_t \|w\|_q \bigr\}$ repeatedly when $w_t$ is not changing, we can simply cache the last time $t_0$ that w (or a coordinate in w or a row from W) was updated and, when it is needed, solve $w_{t+1} = \operatorname{argmin}_w \bigl\{ \frac{1}{2}\|w - w_t\|^2 + (\Lambda_t - \Lambda_{t_0})\|w\|_q \bigr\}$. The advantage of the lazy evaluation is pronounced when using mixed-norm regularization as it lets us avoid updating entire rows so long as the row index corresponds to a zero entry of the gradient $g_t^f$. In sum, at the expense of keeping a time stamp t for each entry of w or row of W and maintaining the cumulative sums $\Lambda_1, \Lambda_2, \dots$, we get O(k) updates of w when the gradient $g_t^f$ has only k non-zero components.
6 Experiments
In this section we compare FOBOS to state-of-the-art optimizers to demonstrate its relative merits and weaknesses. We perform more substantial experiments in the full version of the paper [13].
$\ell_2^2$ and $\ell_1$-regularized experiments: We performed experiments using FOBOS to solve both $\ell_1$ and $\ell_2^2$-regularized learning problems. For the $\ell_2^2$-regularized experiments, we compared FOBOS to Pegasos [14], a fast projected gradient solver for SVM. Pegasos was originally implemented and evaluated on SVM-like problems by using the hinge loss as the empirical loss function along with an $\ell_2^2$ regularization term, but it can be straightforwardly extended to the binary logistic loss function. We thus experimented with both
$$f(w) = \sum_{i=1}^m \bigl[1 - y_i \langle x_i, w\rangle\bigr]_+ \;\text{(hinge)} \qquad \text{and} \qquad f(w) = \sum_{i=1}^m \log\bigl(1 + e^{-y_i \langle x_i, w\rangle}\bigr) \;\text{(logistic)}$$
as loss functions. To generate data for our experiments, we chose a vector w with entries distributed
normally with 0 mean and unit variance, while randomly zeroing 50% of the entries in the vector.
[Figure 1 panels: objective suboptimality $f(w_t) + r(w_t) - f(w^\star) - r(w^\star)$ on a log scale vs. (approximate) number of operations; curves labeled "L2 Folos" and "Pegasos".]
Figure 1: Comparison of FOBOS with Pegasos on the problems of logistic regression (left and right) and SVM (middle). The rightmost plot shows the performance of the algorithms without projection.
The examples $x_i \in \mathbb{R}^n$ were also chosen at random with entries normally distributed. To generate target values, we set $y_i = \operatorname{sign}(\langle x_i, w\rangle)$, and flipped the sign of 10% of the examples to add label noise. In all experiments, we used 1000 training examples of dimension 400.
The graphs of Fig. 1 show (on a log scale) the regularized empirical loss of the algorithms minus the optimal value of the objective function. These results were averaged over 20 independent runs of the algorithms. In all experiments with the regularizer $\frac{\lambda}{2}\|w\|^2$, we used a step size $\eta_t \propto 1/t$ to achieve logarithmic regret. The two left graphs of Fig. 1 show that FOBOS performs comparably to Pegasos on the logistic loss (left figure) and hinge (SVM) loss (middle figure). Both algorithms quickly approach the optimal value. In these experiments we let both Pegasos and FOBOS employ a projection after each gradient step into a 2-norm ball containing $w^\star$ (see [14]). However, in the experiment corresponding to the rightmost plot of Fig. 1, we eliminated this additional projection step and ran the algorithms with the logistic loss. In this case, FOBOS slightly outperforms Pegasos. We hypothesize that the slightly faster rate of FOBOS is due to the explicit shrinkage that FOBOS performs in the $\ell_2^2$ update (see Eq. (7)).
In the next experiment, whose results are given in Fig. 2, we solved $\ell_1$-regularized logistic regression problems. We compared FOBOS to a simple subgradient method (where the subgradient of the $\lambda\|w\|_1$ term is simply $\lambda\,\operatorname{sign}(w)$), and a fast interior point (IP) method which was designed specifically for solving $\ell_1$-regularized logistic regression [15]. On the left side of Fig. 2 we show the objective function (empirical loss plus the $\ell_1$ regularization term) obtained by each of the algorithms minus the optimal objective value. We again used 1000 training examples of dimension 400. The learning rate was set to $\eta_t \propto 1/\sqrt{t}$. The standard subgradient method is clearly much slower than the other two methods even though we chose the initial step size for which the subgradient method converged the fastest. Furthermore, the subgradient method does not achieve any sparsity along its entire run. FOBOS quickly gets close to the optimal value of the objective function, but eventually the specialized IP method's asymptotically faster convergence causes it to surpass FOBOS. In order to obtain a weight vector $w_t$ such that $f(w_t) - f(w^\star) \le 10^{-2}$, FOBOS works very well, though the IP method enjoys a faster convergence rate when the weight vector is very close to the optimal solution. However, the IP algorithm was specifically designed to minimize empirical logistic loss with $\ell_1$ regularization, whereas FOBOS enjoys a broad range of applicable settings.
The middle plot in Fig. 2 shows the sparsity levels (fraction of non-zero weights) achieved by FOBOS as a function of the number of iterations of the algorithm. Each line represents a different synthetic experiment as $\lambda$ is modified to give more or less sparsity to the solution vector $w^\star$. The results show that FOBOS quickly selects the sparsity pattern of $w^\star$, and the level of sparsity persists throughout its execution. We found this sparsity pattern common to non-stochastic versions of FOBOS we tested.
Mixed-norm experiments: Our experiments with mixed-norm regularization ($\ell_1/\ell_2$ and $\ell_1/\ell_\infty$) focus mostly on sparsity rather than on the speed of minimizing the objective. Our restricted focus is a consequence of the relative paucity of benchmark methods for learning problems with mixed-norm regularization. Our methods, however, as described in Sec. 4, are quite simple to implement, and we believe could serve as benchmarks for other methods to solve mixed-norm problems.
Our experiments compared multiclass classification with $\ell_1$, $\ell_1/\ell_2$, and $\ell_1/\ell_\infty$ regularization on the MNIST handwritten digit database and the StatLog Landsat Satellite dataset [16]. The MNIST database consists of 60,000 training examples and a 10,000 example test set with 10 classes. Each digit is a $28 \times 28$ gray scale image represented as a 784-dimensional vector. Linear classifiers
Figure 2: Left: performance of FOBOS, a subgradient method, and an interior point method on
ℓ1-regularized logistic regression. Middle: sparsity level achieved by FOBOS along its run.
[Figure 2 plot data omitted. Left panel: f(wt) − f(w*) versus number of operations; legend: L1 Folos, L1 IP, L1 Subgrad. Middle panel: sparsity proportion versus Folos steps.]
Figure 3: Left: FOBOS sparsity and test error for the LandSat dataset with ℓ1-regularization.
Right: FOBOS sparsity and test error for the MNIST dataset with ℓ1/ℓ2-regularization.
[Figure 3 plot data omitted. Legend: Test Error 10%, NNZ 10%, Test Error 20%, NNZ 20%, Test Error 100%, NNZ 100%.]
do not perform well on MNIST. Thus, rather than learning weights for the original features, we
learn the weights for a classifier with Gaussian kernels, where the value of the j-th feature for the i-th
example is xij = K(zi, zj) = exp(−½‖zi − zj‖²). For the LandSat dataset we attempt to classify 3 × 3
neighborhoods of pixels in a satellite image as a particular type of ground, and we expanded the
input 36 features into 1296 features by taking the product of all features.
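The kernel-feature construction just described can be stated directly in code. This is an illustrative sketch (the function name is ours), computing the j-th Gaussian-kernel feature of example i from the raw vectors:

```python
import math

def gauss_kernel_feature(z_i, z_j):
    """x_ij = K(z_i, z_j) = exp(-0.5 * ||z_i - z_j||^2): the j-th kernel
    feature of example i, used in place of the raw pixel features."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(z_i, z_j))
    return math.exp(-0.5 * sq_dist)

print(gauss_kernel_feature([1.0, 2.0], [1.0, 2.0]))  # identical points -> 1.0
```

With n training examples this yields an n-dimensional feature vector per example, which is why the resulting linear classifier can separate MNIST digits that raw pixels cannot.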
In the left plot of Fig. 3, we show the test-set error and row sparsity in W as a function of training
time (number of single-example gradient calculations) for the ℓ1-regularized multiclass logistic loss
with 720 training examples. The green lines show results when using all 720 examples to calculate
the gradient, black when using 20% of the examples, and blue when using 10% of the examples to
perform stochastic gradient. Each used the same learning rate ηt, and the reported results are averaged
over 5 independent runs with different training data. The right-hand figure shows a similar plot,
but for MNIST with 10,000 training examples and ℓ1/ℓ2-regularization. The objective value in
training has a similar contour to the test loss. It is interesting to note that FOBOS with stochastic
gradient descent very quickly reaches its minimum test classification error, and this behavior is
consistent as the training set size increases. However, the deterministic version increases the level
of sparsity throughout its run, while the stochastic-gradient version has highly variable sparsity
levels and does not give solutions as sparse as its deterministic counterpart. The slowness of
non-stochastic gradient mitigates this effect for the larger sample size on MNIST in the right figure,
but for longer training times we do indeed see similar behavior.
For comparison of the different regularization approaches, we report in Table 1 the test error as a
function of the row sparsity of the learned matrix W. For the LandSat data, we see that using the block
ℓ1/ℓ2 regularizer yields better performance for a given level of structural sparsity. However, on
the MNIST data the ℓ1 and ℓ1/ℓ2 regularizers achieve comparable performance for each level
of structural sparsity. Moreover, for a given level of structural sparsity, the ℓ1-regularized solution
matrix W attains significantly higher overall sparsity: roughly 90% of the entries of each non-zero
row are zero. The performance on the different datasets might indicate that structural sparsity is
effective only when the set of parameters indeed exhibits natural grouping.
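The two notions of sparsity compared in Table 1, row-level (structural) sparsity and overall entry-wise sparsity, can be measured as follows (a small helper of our own, not code from the paper):

```python
def sparsity_stats(W, tol=0.0):
    """Return (fraction of all-zero rows, fraction of zero entries) of a
    weight matrix W given as a list of rows. All-zero rows correspond to
    structurally removed feature groups; entry-wise zeros measure overall
    sparsity, including sparsity inside surviving rows."""
    zero_rows = sum(1 for row in W if all(abs(x) <= tol for x in row))
    zero_entries = sum(1 for row in W for x in row if abs(x) <= tol)
    n_entries = sum(len(row) for row in W)
    return zero_rows / len(W), zero_entries / n_entries

W = [[0.0, 0.0, 0.0],   # structurally zero row (group removed, l1/l2-like)
     [0.5, 0.0, 0.0]]   # surviving row that is itself mostly zero (l1-like)
print(sparsity_stats(W))  # -> (0.5, 0.8333333333333334)
```

The toy matrix illustrates the observation above: ℓ1 can drive overall sparsity far above structural sparsity, since surviving rows may themselves be mostly zero.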
                LandSat                       MNIST
% Non-zero   ℓ1     ℓ1/ℓ2   ℓ1/ℓ∞      ℓ1     ℓ1/ℓ2   ℓ1/ℓ∞
 5           .43    .29     .40        .37    .36     .47
10           .30    .25     .30        .26    .26     .31
20           .26    .22     .26        .15    .15     .24
40           .22    .19     .22        .08    .08     .16

Table 1: LandSat (left) and MNIST (right) classification error versus sparsity.
References
[1] D.P. Bertsekas. Nonlinear Programming. Athena Scientific, 1999.
[2] P. Zhao and B. Yu. On model selection consistency of Lasso. Journal of Machine Learning Research, 7:2541–2567, 2006.
[3] N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with the Lasso. Annals of Statistics, 34:1436–1462, 2006.
[4] S. Wright, R. Nowak, and M. Figueiredo. Sparse reconstruction by separable approximation. In IEEE International Conference on Acoustics, Speech, and Signal Processing, pages 3373–3376, 2008.
[5] P. Combettes and V. Wajs. Signal recovery by proximal forward-backward splitting. Multiscale Modeling and Simulation, 4(4):1168–1200, 2005.
[6] J. Duchi, S. Shalev-Shwartz, Y. Singer, and T. Chandra. Efficient projections onto the ℓ1-ball for learning in high dimensions. In Proceedings of the 25th International Conference on Machine Learning, 2008.
[7] M. Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the Twentieth International Conference on Machine Learning, 2003.
[8] E. Hazan, A. Kalai, S. Kale, and A. Agarwal. Logarithmic regret algorithms for online convex optimization. In Proceedings of the Nineteenth Annual Conference on Computational Learning Theory, 2006.
[9] G. Obozinski, M. Wainwright, and M. Jordan. High-dimensional union support recovery in multivariate regression. In Advances in Neural Information Processing Systems 22, 2008.
[10] J. Langford, L. Li, and T. Zhang. Sparse online learning via truncated gradient. In Advances in Neural Information Processing Systems 22, 2008.
[11] S. Shalev-Shwartz and A. Tewari. Stochastic methods for ℓ1-regularized loss minimization. In Proceedings of the 26th International Conference on Machine Learning, 2009.
[12] Y. Nesterov. Introductory Lectures on Convex Optimization. Kluwer Academic Publishers, 2004.
[13] J. Duchi and Y. Singer. Efficient online and batch learning using forward-backward splitting. Journal of Machine Learning Research, 10, in press, 2009.
[14] S. Shalev-Shwartz, Y. Singer, and N. Srebro. Pegasos: Primal estimated sub-gradient solver for SVM. In Proceedings of the 24th International Conference on Machine Learning, 2007.
[15] K. Koh, S.J. Kim, and S. Boyd. An interior-point method for large-scale ℓ1-regularized logistic regression. Journal of Machine Learning Research, 8:1519–1555, 2007.
[16] D. Spiegelhalter and C. Taylor. Machine Learning, Neural and Statistical Classification. Ellis Horwood, 1994.
[17] R.T. Rockafellar. Convex Analysis. Princeton University Press, 1970.
Contextual Interactions in Early Visual Processing
Ruben Coen?Cagli
AECOM
Bronx, NY 10461
[email protected]
Peter Dayan
GCNU, UCL
17 Queen Square, LONDON
[email protected]
Odelia Schwartz
AECOM
Bronx, NY 10461
[email protected]
Abstract
A central hypothesis about early visual processing is that it represents inputs in a
coordinate system matched to the statistics of natural scenes. Simple versions of
this lead to Gabor-like receptive fields and divisive gain modulation from local
surrounds; these have led to influential neural and psychological models of visual
processing. However, these accounts are based on an incomplete view of the visual
context surrounding each point. Here, we consider an approximate model of linear
and non-linear correlations between the responses of spatially distributed Gabor-like
receptive fields, which, when trained on an ensemble of natural scenes, unifies
a range of spatial context effects. The full model accounts for neural surround
data in primary visual cortex (V1), provides a statistical foundation for perceptual
phenomena associated with Li's (2002) hypothesis that V1 builds a saliency map,
and fits data on the tilt illusion.
1 Introduction
That visual input at a given point is greatly influenced by its spatial context is manifest in a host
of neural and perceptual effects (see, e.g., [1, 2]). For instance, stimuli surrounding the so-called
classical receptive field (RF) lead to striking nonlinearities in the responses of visual neurons [3, 4];
spatial context results in intriguing perceptual illusions, such as the misjudgment of a center stimulus
attribute in the presence of a surrounding stimulus [5–7]; it also plays a critical role in determining
the salience of points in visual space, for instance controlling pop-out, contour integration, texture
segmentation [8–10] and, more generally, locations where statistical homogeneity of the input breaks
down [1]. Contextual effects are widespread across sensory systems, neural areas, and stimulus
attributes, an attractive target for computational modeling.
There are various mechanistic treatments of extra-classical RF effects (e.g., [11–13]) and contour
integration [14], and V1's suggested role in computing salience has been realized in a large-scale
dynamical model [1, 15]. There are also normative approaches to salience (e.g., [16–19]) with links
to V1. However, these have not substantially encompassed neurophysiological data or indeed made
connections with the perceptual literature on contour integration and the tilt illusion. Our aim is
to build a principled model based on scene statistics that can ultimately account for, and therefore
unify, the whole set of contextual effects above.
Much seminal work has been done in the last two decades on learning linear filters from first
principles based on the statistics of natural images (see, e.g., [20]). However, contextual effects
emerge from the interactions among multiple filters; therefore, here we address the much less well
studied issue of the learned, statistical basis of the coordination of the group of filters: the
scene-dependent, linear and non-linear interactions among them. We focus on recent advances in
models of scene statistics, using a Gaussian Scale Mixture generative model (GSM; [21–23]) that
captures the joint dependencies (e.g., [24–31]) between the activations of Gabor-like filters to
natural scenes. The
GSM captures the dependencies via two components: (i) covariance in the underlying Gaussian
variables, which accounts for linear correlations in the activations of filters; and (ii) a shared mixer
variable, which accounts for the non-linear correlations in the magnitudes of the filter activations.
As yet, the GSM has not been applied to the wide range of contextual phenomena discussed above.
This is partly because linear correlations, which appear important for capturing phenomena such as
contour integration, have largely been ignored outside image processing (e.g., [23]). In addition,
although the mixer variable of the GSM is closely related to bottom-up models of divisive
normalization in cortex [32, 33], the assignment problem of grouping filters that share a common mixer
for a given scene has yet to receive a computationally and neurobiologically realistic solution. Recent
work has shown that incorporating a simple, predetermined solution to the assignment problem in a
GSM could capture the tilt illusion [34]. Nevertheless, the approach has not been studied in a more
realistic model with Gabor-like filters and learning of assignments from natural scenes. Further, the
implications of assignment for cortical V1 data and salience have not been explored.
In this paper we extend the GSM model to learn both assignments and linear covariance (Section 2).
We then apply the model to contextual neural V1 data, noting its link to the tilt illusion (Section 3),
and then to perceptual salience examples (Section 4). In the discussion (Section 5), we also describe
the relationship between our GSM model and other recent scene-statistics approaches (e.g., [31, 35]).
2 Methods
A recent focus in natural image statistics has been the joint conditional histograms of the activations
of pairs of oriented linear filters (throughout the paper, filters come from the first level of a steerable
pyramid with 4 orientations [36]). When filter pairs are proximal in space, these histograms have a
characteristic bowtie shape: the variance of one filter depends on the magnitude of activation of the
other. It has been shown [22] that this form of dependency can be captured by a class of generative
model known as a Gaussian Scale Mixture (GSM), which assumes that the linear filter activations
x = vg are random variables defined as the product of two other random variables: a multivariate
Gaussian g, and a (positive) scalar v which scales the variance of all the Gaussian components.
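As a sketch of why a shared mixer produces the bowtie dependency, one can sample from a two-filter GSM and compare raw versus magnitude correlations. This is illustrative code of our own, not the model implementation used in the paper; the Rayleigh mixer matches the prior assumed later in this section:

```python
import math
import random

random.seed(0)

def corr(a, b):
    """Pearson correlation of two equal-length samples."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# GSM sample: x_i = v * g_i with independent standard-normal g's and one
# shared Rayleigh mixer v (sqrt of an exponential with mean 2 is Rayleigh(1)).
x1, x2 = [], []
for _ in range(20000):
    v = math.sqrt(random.expovariate(0.5))
    x1.append(v * random.gauss(0.0, 1.0))
    x2.append(v * random.gauss(0.0, 1.0))

print(corr(x1, x2))                                       # near 0: no linear correlation
print(corr([abs(x) for x in x1], [abs(x) for x in x2]))   # clearly positive: bowtie
```

The raw activations are uncorrelated, yet their magnitudes co-vary because the shared v scales both, which is exactly the variance dependency visible in the conditional histograms.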
Here, we address two additional properties of natural scenes. First, in addition to the variance
dependency, filters which are close enough in space and feature space are linearly dependent, as
shown by the tilt of the bowtie in Fig. 1b. In order for the GSM to capture this effect, the multivariate
Gaussian must be endowed with a non-diagonal covariance matrix. This matrix can be approximated
by the sample covariance matrix of the filter activations or learned directly [23]; here, we learn it
by maximizing the likelihood of the observed data. The second issue is that filter dependencies
differ across image patches, implying that there is no fixed relationship between mixers and filters
[28]. The general issue of learning multiple pools of filters, each assigned to a different mixer on
a patch-dependent basis, has been addressed in recent work [30], but using a computationally and
biologically impracticable scheme [37] which allowed for arbitrary pooling.
We consider an approximation to the assignment problem, by allowing a group of surround filters
either to share or not to share a mixer with a target filter. While this is clearly an oversimplified
model of natural images, here we aimed for a reasonable balance between the complexity of the
model and the biological plausibility of the computations involved.
2.1 The generative model
The basic repeating unit of our simplified model involves center and surround groups of filters: we
use n_c to denote the number of center filters, and x_c their activations; similarly, we use n_s and x_s
for the surround. Finally, we define n_cs = n_c + n_s and x = (x_c^1, …, x_c^{n_c}, x_s^1, …, x_s^{n_s})^⊤. We consider
a single assignment choice as to whether the center group's mixer variable v_c is (case λ1), or is
not (case λ2), shared with the surround; in the latter case the surround has its own mixer variable
v_s. Thus, there are 2 configurations, or competing models, which are themselves combined (i.e., a
mixture of GSMs; see also [35]). The graphical models of the two configurations are shown in Fig.
1a. We show this from the perspective of the center group, since in the implementation we will be
reporting model neuron responses in the center location given the contextual surround.
Defining the Gaussian components as g = (g_c^1, …, g_c^{n_c}, g_s^1, …, g_s^{n_s})^⊤, and assuming that the mixers are
independent and that the pools are independent given the mixers, the mixture distribution is:

p(x) = p(λ1) p(x | λ1) + p(λ2) p(x | λ2)    (1)

p(x | λ1) = ∫ dv_c p(v_c) p(x | v_c, λ1)    (2)

p(x | λ2) = ∫ dv_c p(v_c) p(x_c | v_c, λ2) ∫ dv_s p(v_s) p(x_s | v_s, λ2)    (3)
We assume a Rayleigh prior distribution on the mixers, and covariance matrix Σ_cs for the Gaussian
components under λ1, and Σ_c and Σ_s for the center and surround, respectively, under λ2. The integrals in Eqs.
(2, 3) can then be solved analytically:

p(x | λ1) = det(Σ_cs^{-1})^{1/2} B(1 − n_cs/2; κ_cs) / [ (2π)^{n_cs/2} κ_cs^{n_cs/2 − 1} ]    (4)

p(x | λ2) = [ det(Σ_c^{-1})^{1/2} B(1 − n_c/2; κ_c) / ( (2π)^{n_c/2} κ_c^{n_c/2 − 1} ) ] · [ det(Σ_s^{-1})^{1/2} B(1 − n_s/2; κ_s) / ( (2π)^{n_s/2} κ_s^{n_s/2 − 1} ) ]    (5)

where B is the modified Bessel function of the second kind, κ_cs = √(x^⊤ Σ_cs^{-1} x), and κ_c, κ_s are
defined analogously from x_c, Σ_c and x_s, Σ_s.

2.2 Learning
The parameters to be estimated are the covariance matrices (Σ_cs, Σ_c, Σ_s) and the prior probability
k that center and surround share the same pool; we use a Generalized Expectation-Maximization
algorithm, specifically Multi-Cycle EM [38], in which a full EM cycle is divided into three subcycles,
each involving a full E-step and a partial M-step performed only on one covariance matrix.
E-step: In the E-step we compute an estimate, Q, of the posterior distribution over the assignment
variable, given the filter activations and the previous estimates of the parameters, namely k^old and
θ^old = {Σ_c, Σ_s, Σ_cs}. This is obtained via Bayes' rule:

Q(λ1) = p(λ1 | x, θ^old) ∝ k^old p(x | λ1, θ^old)    (6)

Q(λ2) = p(λ2 | x, θ^old) ∝ (1 − k^old) p(x | λ2, θ^old)    (7)
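Eqs. (6–7) amount to a standard two-component responsibility computation. A minimal sketch of our own (working in log space, since the GSM likelihoods of high-dimensional patches can be numerically tiny) is:

```python
import math

def e_step(log_p_x_given_l1, log_p_x_given_l2, k):
    """Responsibilities over the two assignment configurations:
    Q(l1) proportional to k * p(x|l1), Q(l2) proportional to (1-k) * p(x|l2),
    normalized to sum to 1. Inputs are log-likelihoods for stability."""
    a = math.log(k) + log_p_x_given_l1
    b = math.log(1.0 - k) + log_p_x_given_l2
    m = max(a, b)                       # log-sum-exp trick
    za, zb = math.exp(a - m), math.exp(b - m)
    return za / (za + zb), zb / (za + zb)

# example: the shared-mixer configuration has higher likelihood, so Q(l1) > Q(l2)
q1, q2 = e_step(-10.0, -12.0, k=0.5)
print(q1, q2)
```

The same two posterior weights reappear at inference time (Sec. 2.3), where they gate how strongly the surround participates in divisive gain control.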
M-step: In the M-step we increase the complete-data log-likelihood, namely:

f = Q(λ1) log [k p(x | λ1, θ)] + Q(λ2) log [(1 − k) p(x | λ2, θ)]    (8)

Solving ∂f/∂k = 0, we obtain k* = argmax_k f = Q(λ1). The other terms cannot be solved analytically,
and a numerical procedure must be adopted to maximize f w.r.t. the covariance matrices.
This requires an explicit form for the gradient:

∂f/∂Σ_cs^{-1} = Q(λ1) [ (1/2) Σ_cs − (1/(2 κ_cs)) (B(−n_cs/2; κ_cs) / B(1 − n_cs/2; κ_cs)) x x^⊤ ]    (9)

Similar expressions hold for the other partial derivatives. In practice, we add the constraint that the
covariances of the surround filters are spatially symmetric.
2.3 Inference: patch-by-patch assignment and model neural unit

Upon convergence of EM, the covariance matrices and the prior k over the assignment are found. Then,
for a new image patch, the probability p(λ1 | x) that the surround shares a common mixer with the
center is inferred. The output of the center group is taken to be the estimate (for the present, we
consider just the mean) of the Gaussian component E[g_c | x], which we take to be our model neural
unit response. To estimate the normalized response of the center filter, we need to compute the
following expected value under the full model:

E[g_c | x] = ∫ dg_c g_c p(g_c | x) = p(λ1 | x) E[g_c | x, λ1] + p(λ2 | x) E[g_c | x_c, λ2]    (10)

The right-hand side, obtained by a straightforward calculation applying Bayes' rule and the conditional
independence of x_s from x_c, g_c given λ2, is the sum of the expected values of g_c in the two configurations,
Figure 1: (a) Graphical models for the two components of the mixture of GSMs, in which the center
filter is (λ1; left) or is not (λ2; right) normalized by the surround filters; (b) joint conditional histogram
of the activations of two linear filters, showing the typical bowtie shape due to the variance dependency,
as well as a tilt due to linear dependencies between the two filters; (c) marginal distribution of the
linear activations in black, estimated Gaussian component in blue, and ideal Gaussian in red. The
estimated distribution is closer to a Gaussian than that of the original filter.
weighted by their posterior probabilities. The explicit form for the estimate of the i-th component
(corresponding in the implementation to a given orientation and phase) of g_c under λ1 is:

E[g_c^i | x, λ1] = sign(x_c^i) |x_c^i| √( B(1/2 − n_cs/2; κ_cs) / (κ_cs B(1 − n_cs/2; κ_cs)) )    (11)

and a similar expression holds under λ2, replacing the subscript cs by c. Note that in either
configuration, the mixer variable's effect is a form of divisive normalization or gain control,
through κ (including, for stability, as in [30], an additive constant set to 1 for the κ values; we omit
the formulae to save space). Under λ1, but not λ2, this division is influenced by the surround.¹ Note
also that, due to the presence of the inverse covariance matrix in κ_cs = √(x^⊤ Σ_cs^{-1} x), the gain-control
signal is reduced when there is strong covariance, which in turn enhances the neural unit response.
3 Cortical neurophysiology simulations
To simulate neurophysiological experiments, we consider the following filter configuration: 3 × 3
spatial positions separated by 6 pixels, 2 phases (a quadrature pair), and one orientation (vertical),
plus 3 additional orientations in the central position to allow for cross-orientation gain control. We
first learn the parameters of the model on 25,000 patches from an ensemble of 5 standard scenes
(Einstein, Goldhill, and so on). We take as our model neuron the absolute value of the complex
activation composed of the non-linear responses (Eq. (11)) of the two phases of the central vertical
filter.
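The quadrature-pair readout just described can be written in one line; this is a sketch (the 3-4-5 example is purely illustrative, not values from the model):

```python
import math

def model_neuron_response(r_phase0, r_phase90):
    """Modulus of the complex activation formed by the quadrature pair of
    (normalized) responses: |r0 + i*r90|. This energy-style combination
    makes the output invariant to the spatial phase of a grating."""
    return math.hypot(r_phase0, r_phase90)

print(model_neuron_response(3.0, 4.0))  # -> 5.0
```

Phase invariance follows directly: responses of the form (r·cos φ, r·sin φ) yield r for any phase φ, which is the standard complex-cell energy property.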
We characterize the neuron's basic properties with a procedure that is common in physiology
experiments focusing on contextual non-linear modulation. First, we measure the so-called area
summation curve, namely the response to gratings that are optimal in orientation and spatial frequency,
as a function of size. Cat and monkey experiments have shown striking non-linearities, with the
peak response at low contrasts occurring at significantly larger diameters than at high contrasts (Figure
2a). We obtain the same behavior in the model (Figure 2b; see also [33]). This behavior is due to the
assignment: for small grating sizes, center and surround have a higher posterior probability of sharing
a mixer at high contrast than at low contrast, and therefore the surround exerts stronger gain control.
In a reduced model with no assignment, we obtain a much weaker effect (Figure 2c).
We then assess the modulatory effect of a surround grating on a fixed, optimally-oriented central
grating, as a function of their relative orientations (Figure 3a). As is common, we determine the
spatial extent of the center and surround stimuli based on the area summation curves (see [4]). The
model simulations (Figure 3b), as in the data, exhibit the most reduced responses when the center
and surround have similar orientations (but note the "blip" when they are exactly equal in Figure
3a,b, which arises in the model from the covariance of the Gaussian; see also [31]). In addition,
¹ To ensure that the mixer follows the same distribution under λ1 and λ2, after training with natural images
we rescale v_c (and therefore g_c) so that their values span the same range in both configurations; since the
assignment is made at a higher level in the hierarchy, such a rescaling is equivalent to downstream normalization
processes that make the estimates of g_c comparable under λ1 and λ2.
Figure 2: Area summation curves show the normalized firing rate of a neuron in response to optimal
gratings of increasing size. (a) a V1 neuron, after [4]; (b) the model neuron described in Sec. 3; (c)
a reduced model assuming that the surround filters are always in the gain pool of the center filter.
Figure 3: Orientation tuning of the surround. (a) and (b): normalized firing rate in response to a
stimulus composed of an optimal central grating surrounded by an annular grating of varying
orientation, for (a) a V1 neuron, after [3]; and (b) the model neuron described in Sec. 3. (c) Probability
that the surround normalizes the center as a function of the relative orientation of the annular grating.
as the orientation difference between center and surround grows, the response increases and then
decreases, an effect that arises from the assignments. In the model simulations, we find that the
strength of this behavior depends on contrast, being larger at low contrasts; there are hints of this
effect in the neurophysiological data (which used contrasts of 0.2 to 0.4), but it has yet
to be systematically explored. Figure 3c shows the posterior assignment probability for the same
two contrasts as in Figure 3b, as a function of the surround orientation. These probabilities remain
close to 1 at all orientations at high contrast, but fall off more rapidly at low contrast.

Note that a previous GSM population model assumed (but did not learn) this form of fall-off of the
posterior weights in Figure 3c, and showed that it is a basis for explaining the so-called direct
and indirect biases in the tilt illusion, i.e., repulsion and attraction in the perception of a center
stimulus orientation in the presence of a surround stimulus [34]. Figure 4 compares the GSM model
of [34], with parameters matched to perceptual data, to the result of our learned model. The
qualitative shape (although not the quantitative strength) of the effects is similar.
4 Salience popout and contour integration simulations
To address perceptual salience effects, we need a population model of oriented units. We consider
one distinct group of filters (arranged as for the model neuron in Sec. 3) for each of four
orientations (0, 45, 90, 135 deg; sampling more coarsely than [1]). We compute the non-linear response
of each model neuron as in Sec. 3, and take the maximum across the four orientations as the population
output, as in standard population decoding. This is performed at each pixel of the input image, and
the result is interpreted as a saliency map.
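The population readout just described can be sketched as follows; this is toy code of our own (the orientation-indexed dictionary is merely a convention for holding the per-orientation response maps):

```python
def saliency_map(responses):
    """Max-across-orientations population readout: `responses` maps each
    orientation (deg) to a 2-D grid of model-neuron outputs, one per pixel;
    the saliency at a pixel is the maximum response over orientations."""
    orientations = list(responses)
    first = responses[orientations[0]]
    h, w = len(first), len(first[0])
    return [[max(responses[o][y][x] for o in orientations)
             for x in range(w)] for y in range(h)]

# 1x3 toy "image": the middle pixel is dominated by a different orientation
resp = {0:  [[0.9, 0.2, 0.9]],
        90: [[0.1, 0.8, 0.1]]}
print(saliency_map(resp))  # -> [[0.9, 0.8, 0.9]]
```

In the full model, an orthogonally-oriented target is less suppressed by its neighbors than the background bars are by theirs, so its maximum response, and hence its saliency, is higher.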
We first consider the popout of a target that differs from a background of distractors by a single
feature (e.g. [8]), in our case orientation. Input image and output saliency map (the brighter, the
more salient) are shown in Fig. 5. As in [1], the target pops out since it is less suppressed by its own,
orthogonally-oriented neighbors than the surround bars are by their parallel ones; here, this emerges
straight from normative inference. [8] quantified relative target saliency above detection threshold
Figure 4: The tilt illusion. (a) Comparison of the learned GSM model (black, solid line with filled
squares), with the GSM model in [34] (blue, solid line; parameters set to account for the illusion
data of [39]), and the model in [34] with parameters modified to match the learned model (blue,
dashed line). The response of each neuron in the population is plotted as a function of the difference between the surround stimulus orientation and the preferred center stimulus orientation. We
assume all oriented neurons have identical properties to the learned vertical neuron (i.e., ignoring
the oblique effect). The model of [34] includes idealized tuning curves. The learned model is as in
the previous section, but with filters of narrower orientation tuning (because of denser sampling of
16 orientations in the pyramid), which results in an earlier point on the x axis of maximal response.
Model simulations are normalized to a maximum of 1. (b) Simulations of the tilt illusion using the
model in [34], based on parameters matched to the learned model (dashed line) versus parameters
matched to the data of [39] (solid line).
as a function of the difference in orientation between target and distractors using luminance (Fig.
5b). Fig. 5c plots saliency from the model; it exhibits non-linear saturation for large orientation
contrast, an effect that not all saliency models capture (see [17] for discussion). The shape of the
saturation is different for neural (Fig. 2a-b) versus perceptual (Fig. 5b-c) data, in both experiment
and model; for the latter, this arises from differences in stimuli (gratings versus bars, how the center
and surround extents were determined).
The second class of saliency effects involves collinear facilitation. One example is the so called
border effect, shown in figure 6a: one side of the border, whose individual bars are collinear, is
more salient than the other (e.g. [1], but see also [40]). The middle and right plots in figure 6a depict
the saliency map for the full model and a reduced model that uses a diagonal covariance matrix.
Notice that the reduced model also shows an enhancement of the collinear side of the border vs the
parallel, due to the partial overlap of the linear receptive fields; but, as explained in Sec. 2.3, the
higher covariance between collinear filters in the full model, strengthens the effect. To quantify the
difference, we report also the ratio between the salience values on the collinear and parallel sides
of the border, after subtracting the saliency value of the homogeneous regions: the lower value for
the reduced model (1.28; versus 1.74 for the full model) shows that the full model enhances the
collinear relative to the parallel side. The ratio for the full model increases if we rescale the off-diagonal terms of the covariance matrix relative to the diagonal (2.1 for a scaling factor of 1.5; 2.73
for a factor of 2). Rescaling would come from more reasonably dense spatial sampling. Fig. 6b
provides another, stronger example of the collinear facilitation.
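The Col/Par index used above can be computed directly from a saliency map. The sketch below is illustrative: the region masks and the numbers are hand-picked stand-ins, not the actual stimulus layout.

```python
import numpy as np

def col_par_ratio(smap, collinear_mask, parallel_mask, homogeneous_mask):
    """Ratio of border saliency on the collinear vs. parallel side,
    after subtracting the baseline saliency of the homogeneous regions."""
    baseline = smap[homogeneous_mask].mean()
    col = smap[collinear_mask].mean() - baseline
    par = smap[parallel_mask].mean() - baseline
    return col / par

# Toy numbers mirroring the reported effect: baseline 0.5,
# collinear side 1.37, parallel side 1.0 -> ratio (1.37-0.5)/(1.0-0.5) = 1.74
smap = np.array([[0.5, 1.37],
                 [0.5, 1.00]])
masks = (np.array([[False, True], [False, False]]),   # collinear side
         np.array([[False, False], [False, True]]),   # parallel side
         np.array([[True, False], [True, False]]))    # homogeneous regions
print(col_par_ratio(smap, *masks))  # ratio close to 1.74, as reported for the full model
```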
5 Discussion
We have extended a standard GSM generative model of scene statistics to encompass contextual
effects. We modeled the covariance between the Gaussian components associated with neighboring
locations, and suggested a simple, approximate, process for choosing whether or not to pool such
locations under the same mixer. Using parameters learned from natural scenes, we showed that
this model provides a promising account of neurophysiological data on area summation and center-surround orientation contrast, and perceptual data on the saliency of image elements. This form of
model has previously been applied to the tilt illusion [34], but had just assumed the assignments of
figure 3c, in order to account for the indirect tilt illusion. Here, this emerged from first principles.
This model therefore unifies a wealth of data and ideas about contextual visual processing. To our
Figure 5: (a) An example of the stimulus and saliency map computed by the model. (b) Perceptual
data reproduced after [8], and (c) model output, of the saliency of the central bar as a function of the
orientation contrast between center and surround.
Figure 6: (a) Border effect: the collinear side of the border is more salient than the parallel one; the
center plot is the saliency map for the full model, right plot is for a reduced model with diagonal
covariance matrix. (b) Another example of collinear facilitation: the center row of bars is more
salient, relative to the background, when the bars are collinear (left) rather than when they are
parallel (right). In both (a) and (b), Col/Par is the ratio between the salience values on the collinear
and parallel sides of the border, after subtracting the saliency value of the homogeneous regions.
knowledge, there have only been a few previous attempts of this sort; one notable example is the
extensive salience work of [17]; here we go further in terms of simulating neural non-linearities,
and making connections with the contour integration and illusion literature: phenomena that have
previously been addressed only individually, if at all.
Our model is closely related to a number of suggestions in the literature. Previous bottom-up models
of divisive normalization, which were the original inspiration for the application by [22] of the GSM,
can account for some neural non-linearities by learning divisive weights instead of assignments
(e.g., [33]). However, they do not incorporate linear correlations, and they fix the divisive weights
a priori rather than on an image-by-image basis, as in our model. Non-parametric statistical
alternatives to divisive normalization, e.g. non-linear ICA [41], have also been proposed, but have
been applied only to the orientation masking nonlinearity, therefore not addressing spatial context.
There are also various top-down models based on related principles. Compared with previous GSM
modelling [30], we have built a more computationally straightforward, and neurobiologically credible, approximate assignment mechanism. Other recent generative statistical models that capture
the statistical dependencies of the filters in slightly different ways (notably [31, 35]) might also be
able to encompass the data we have presented here. However, [35] has been applied only to the
image processing domain, and the model of [31] has not been tied to the perceptual phenomena we
have considered, nor to contrast data. There are also quantitative differences between the models,
including issues of soft versus hard assignment (see discussion in [30]); the assumption about the
link to data (here we adopted the mean of the Gaussian component of the GSM which incorporates
an explicit gain control, in contrast to the approach in [31]); and the richness of assignment versus
approximation in the various models (here we have purposely taken an approximate version of a full
assignment model).
There are also many models devoted to saliency. We showed that our assignment process, and the
normalization that results, is a good match for (and thus a normative justification of) at least some of
the results that [1, 15] captured in a dynamical realization of the V1 saliency hypothesis. However,
our model achieves suppression in regions of statistical homogeneity divisively rather than subtractively. The covariance between the Gaussian components captures some aspects of the long range
excitatory effects in that model, which permit contour integration. However, some of the collinear
facilitation arises just from receptive field overlap; and the structure of the covariance in natural
scenes seems rather impoverished compared with that implied by the association field [42], and
merits further examination with higher order statistics (see also [10, 26]). Note also that dynamical
models have not previously been applied to the same range of data (such as the tilt illusion).
Open theoretical issues include quantifying carefully the effect of the rather coarse assignment approximation, as well as the differences between the learned model and the idealized population
model of the tilt illusion [34]. Other important issues include characterizing the nature and effect of
uncertainty in the distributions of g and v rather than just the mean. This is critical to characterize
psychophysical results on contrast detection in the face of noise and also orientation acuity, and also
raises the issues aired by [31] as to how neural responses convey uncertainties. Open experimental
issues include a range of other contextual effects as to salience, contour integration, and even perceptual crowding. Contextual effects are equally present at multiple levels of neural processing. An
important future generalization would be to higher neural areas, and to mid and high level vision
(which themselves exhibit gain-control related phenomena, see e.g. [43]). More generally, context
is pervasive in time as well as space. The parallels are underexplored, and so pressing.
Acknowledgements
This work was funded by the Alfred P. Sloan Foundation (OS); and The Gatsby Charitable Foundation, the BBSRC, the EPSRC and the Wellcome Trust (PD). We are very grateful to Adam Kohn,
Joshua Solomon, Adam Sanborn, and Li Zhaoping for discussion.
References
[1] Z. Li. A saliency map in primary visual cortex. Trends Cogn Sci, 6(1):9–16, 2002.
[2] P. Series, J. Lorenceau, and Y. Frégnac. The "silent" surround of V1 receptive fields: theory and experiments. J Physiol Paris, 97(4-6):453–474, 2003.
[3] H. E. Jones, K. L. Grieve, W. Wang, and A. M. Sillito. Surround suppression in primate V1. J Neurophysiol, 86(4):2011–2028, 2001.
[4] J. R. Cavanaugh, W. Bair, and J. A. Movshon. Selectivity and spatial distribution of signals from the receptive field surround in macaque V1 neurons. J Neurophysiol, 88(5):2547–2556, 2002.
[5] J. J. Gibson and M. Radner. Adaptation, after-effect, and contrast in the perception of tilted lines. Journal of Experimental Psychology, 20:553–569, 1937.
[6] C. W. Clifford, P. Wenderoth, and B. Spehar. A functional angle on some after-effects in cortical vision. Proc R Soc Lond B Biol Sci, 1454:1705–1710, 2000.
[7] J. A. Solomon and M. J. Morgan. Stochastic re-calibration: contextual effects on perceived tilt. Proc Biol Sci, 273(1601):2681–2686, 2006.
[8] H.C. Nothdurft. The conspicuousness of orientation and motion contrast. Spatial Vision, 7(4):341–363, 1993.
[9] D. J. Field, A. Hayes, and R. F. Hess. Contour integration by the human visual system: evidence for a local "association field". Vision Res, 33(2):173–193, 1993.
[10] W. S. Geisler, J. S. Perry, B. J. Super, and D. P. Gallogly. Edge co-occurrence in natural images predicts contour grouping performance. Vision Res, 41(6):711–724, 2001.
[11] L. Schwabe, K. Obermayer, A. Angelucci, and P. C. Bressloff. The role of feedback in shaping the extraclassical receptive field of cortical neurons: a recurrent network model. J Neurosci, 26(36):9117–9129, 2006.
[12] J. Wielaard and P. Sajda. Extraclassical receptive field phenomena and short-range connectivity in V1. Cereb Cortex, 16(11):1531–1545, 2006.
[13] T. J. Sullivan and V. R. de Sa. A model of surround suppression through cortical feedback. Neural Netw, 19(5):564–572, 2006.
[14] T.N. Mundhenk and L. Itti. Computational modeling and exploration of contour integration for visual saliency. Biological Cybernetics, 93(3):188–212, 2005.
[15] Z. Li. Visual segmentation by contextual influences via intracortical interactions in primary visual cortex. Network: Computation in Neural Systems, 10(2):187–212, 1999.
[16] L. Itti and C. Koch. A saliency-based search mechanism for overt and covert shifts of visual attention. Vision Research, 40(10-12):1489–1506, 2000.
[17] D. Gao, V. Mahadevan, and N. Vasconselos. On the plausibility of the discriminant center-surround hypothesis for visual saliency. Journal of Vision, 8(7)(13):1–18, 2008.
[18] L. Zhang, M.H. Tong, T. Marks, H. Shan, and G.W. Cottrell. SUN: A Bayesian framework for saliency using natural statistics. Journal of Vision, 8(7)(32):1–20, 2008.
[19] N.D.B. Bruce and J.K. Tsotsos. Saliency, attention, and visual search: An information theoretic approach. Journal of Vision, 9(3)(5):1–24, 2009.
[20] A. Hyvärinen, J. Hurri, and P.O. Hoyer. Natural Image Statistics. Springer, 2009.
[21] D. Andrews and C. Mallows. Scale mixtures of normal distributions. J. Royal Stat. Soc., 36:99–102, 1974.
[22] M. J. Wainwright, E. P. Simoncelli, and A. S. Willsky. Random cascades on wavelet trees and their use in modeling and analyzing natural imagery. Applied and Computational Harmonic Analysis, 11(1):89–123, 2001.
[23] J. Portilla, V. Strela, M. Wainwright, and E. P. Simoncelli. Image denoising using a scale mixture of Gaussians in the wavelet domain. IEEE Trans Image Processing, 12(11):1338–1351, 2003.
[24] C. Zetzsche, B. Wegmann, and E. Barth. Nonlinear aspects of primary vision: Entropy reduction beyond decorrelation. In Int'l Symposium, Society for Information Display, volume XXIV, pages 933–936, 1993.
[25] E. P. Simoncelli. Statistical models for images: Compression, restoration and synthesis. In Proc 31st Asilomar Conf on Signals, Systems and Computers, pages 673–678, Pacific Grove, CA, 1997. IEEE Computer Society.
[26] P. Hoyer and A. Hyvärinen. A multi-layer sparse coding network learns contour coding from natural images. Vision Research, 42(12):1593–1605, 2002.
[27] A. Hyvärinen, J. Hurri, and J. Vayrynen. Bubbles: a unifying framework for low-level statistical properties of natural image sequences. Journal of the Optical Society of America A, 20:1237–1252, 2003.
[28] Y. Karklin and M. S. Lewicki. A hierarchical Bayesian model for learning nonlinear statistical regularities in nonstationary natural signals. Neural Computation, 17:397–423, 2005.
[29] S. Osindero, M. Welling, and G. E. Hinton. Topographic product models applied to natural scene statistics. Neural Computation, 18(2):381–414, 2006.
[30] O. Schwartz, T. J. Sejnowski, and P. Dayan. Soft mixer assignment in a hierarchical generative model of natural scene statistics. Neural Comput, 18(11):2680–2718, 2006.
[31] Y. Karklin and M.S. Lewicki. Emergence of complex cell properties by learning to generalize in natural scenes. Nature, 457(7225):83–86, 2009.
[32] D. J. Heeger. Normalization of cell responses in cat striate cortex. Visual Neuroscience, 9:181–198, 1992.
[33] O. Schwartz and E. P. Simoncelli. Natural signal statistics and sensory gain control. Nature Neuroscience, 4(8):819–825, 2001.
[34] O. Schwartz, T.J. Sejnowski, and P. Dayan. Perceptual organization in the tilt illusion. Journal of Vision, 9(4)(19):1–20, 2009.
[35] J.A. Guerrero-Colon, E.P. Simoncelli, and J. Portilla. Image denoising using mixtures of Gaussian scale mixtures. In Proc 15th IEEE Int'l Conf on Image Proc, pages 565–568, 2008.
[36] E. P. Simoncelli, W. T. Freeman, E. H. Adelson, and D. J. Heeger. Shiftable multi-scale transforms. IEEE Trans Information Theory, 38(2):587–607, 1992.
[37] C. K. I. Williams and N. J. Adams. Dynamic trees. In M. S. Kearns, S. A. Solla, and D. A. Cohn, editors, Adv. Neural Information Processing Systems, volume 11, pages 634–640, Cambridge, MA, 1999. MIT Press.
[38] X.L. Meng and D.B. Rubin. Maximum likelihood estimation via the ECM algorithm: A general framework. Biometrika, 80(2):267–278, 1993.
[39] E. Goddard, C.W.G. Clifford, and S.G. Solomon. Centre-surround effects on perceived orientation in complex images. Vision Research, 48:1374–1382, 2008.
[40] A.V. Popple and Z. Li. Testing a V1 model: perceptual biases and saliency effects. Journal of Vision, 1,3:148, 2001.
[41] J. Malo and J. Gutiérrez. V1 non-linear properties emerge from local-to-global non-linear ICA. Network: Comp. Neur. Syst., 17(1):85–102, 2006.
[42] D. J. Field. Relations between the statistics of natural images and the response properties of cortical cells. J. Opt. Soc. Am. A, 4(12):2379–2394, 1987.
[43] Q. Li and Z. Wang. General-purpose reduced-reference image quality assessment based on perceptually and statistically motivated image representation. In Proc 15th IEEE Int'l Conf on Image Proc, pages 1192–1195, 2008.
On Stochastic and Worst-case Models for Investing
Elad Hazan
IBM Almaden Research Center
650 Harry Rd, San Jose, CA 95120
[email protected]
Satyen Kale
Yahoo! Research
4301 Great America Parkway, Santa Clara, CA 95054
[email protected]
Abstract
In practice, most investing is done assuming a probabilistic model of stock price
returns known as the Geometric Brownian Motion (GBM). While often an acceptable approximation, the GBM model is not always valid empirically. This
motivates a worst-case approach to investing, called universal portfolio management, where the objective is to maximize wealth relative to the wealth earned by
the best fixed portfolio in hindsight.
In this paper we tie the two approaches, and design an investment strategy which
is universal in the worst-case, and yet capable of exploiting the mostly valid GBM
model. Our method is based on new and improved regret bounds for online convex
optimization with exp-concave loss functions.
1 Introduction
"Average-case" Investing: Much of mathematical finance theory is devoted to the modeling of
stock prices and devising investment strategies that maximize wealth gain, minimize risk while doing
so, and so on. Typically, this is done by estimating the parameters in a probabilistic model of stock
prices. Investment strategies are thus geared to such average case models (in the formal computer
science sense), and are naturally susceptible to drastic deviations from the model, as witnessed in
the recent stock market crash.
Even so, empirically the Geometric Brownian Motion (GBM) ([Osb59, Bac00]) has enjoyed great
predictive success and every year trillions of dollars are traded assuming this model. Black and
Scholes [BS73] used this same model in their Nobel prize winning work on pricing options on
stocks.
"Worst-case" Investing: The fragility of average-case models in the face of rare but dramatic deviations led Cover [Cov91] to take a worst-case approach to investing in stocks. The performance
of an online investment algorithm for arbitrary sequences of stock price returns is measured with
respect to the best CRP (constant rebalanced portfolio, see [Cov91]) in hindsight. A universal portfolio selection algorithm is one that obtains sublinear (in the number of trading periods T ) regret,
which is the difference in the logarithms of the final wealths obtained by the two.
Cover [Cov91] gave the first universal portfolio selection algorithm with regret bounded by
O(log T ). There has been much follow-up work after Cover's seminal work, such as [HSSW96,
MF92, KV03, BK97, HKKA06], which focused on either obtaining alternate universal algorithms
or improving the efficiency of Cover's algorithm. However, the best regret bound is still O(log T ).
This dependence of the regret on the number of trading periods is not entirely satisfactory for two
main reasons. First, a priori it is not clear why the online algorithm should have high regret (growing
with the number of iterations) in an unchanging environment. As an extreme example, consider a
setting with two stocks where one has an "upward drift" of 1% daily, whereas the second stock
remains at the same price. One would expect to "figure out" this pattern quickly and focus on the
first stock, thus attaining a constant fraction of the wealth of the best CRP in the long run, i.e.
constant regret, unlike the worst-case bound of O(log T ).
The second problem arises from trading frequency. Suppose we need to invest over a fixed period of
time, say a year. Trading more frequently potentially leads to higher wealth gain, by capitalizing on
short term stock movements. However, increasing trading frequency increases T , and thus one may
expect more regret. The problem is actually even worse: since we measure regret as a difference of
logarithms of the final wealths, a regret bound of O(log T ) implies a poly(T ) factor ratio between the
final wealths. In reality, however, experiments [AHKS06] show that some known online algorithms
actually improve with increasing trading frequency.
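To make the last point concrete: regret here is the difference of log-wealths, so a regret of R means the best CRP ends up with e^R times the algorithm's final wealth, and a c · log T regret bound only caps this ratio at T^c. The constant c below is hypothetical, chosen just for illustration.

```python
import math

def wealth_ratio(regret):
    """Regret R = log(W_best) - log(W_alg), so W_best / W_alg = e^R."""
    return math.exp(regret)

# With regret = c * log(T), the wealth ratio is T**c -- polynomial in T,
# which is why O(log T) regret is unsatisfying for frequent trading.
c, T = 2.0, 10_000
ratio = wealth_ratio(c * math.log(T))
print(ratio)  # equals T**c up to floating-point rounding
```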
Bridging Worst-case and Average-case Investing: Both these issues are resolved if one can show
that the regret of a ?good? online algorithm depends on total variation in the sequence of stock
returns, rather than purely on the number of iterations. If the stock return sequence has low variation,
we expect our algorithm to be able to perform better. If we trade more frequently, then the per
iteration variation should go down correspondingly, so the total variation stays the same.
We analyze a portfolio selection algorithm and prove that its regret is bounded by O(log Q), where
Q (formally defined in Section 1.2) is the sum of squared deviations of the returns from their mean.
Since Q ≤ T (after appropriate normalization), we improve over previous regret bounds and retain
the worst-case robustness. Furthermore, in an average-case model such as GBM, the variation can
be tied very nicely to the volatility parameter, which explains the experimental observation that the regret doesn't increase with increasing trading frequency. Our algorithm is efficient, and its implementation requires constant time per iteration (independent of the number of game iterations).
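Concretely, Q is the sum of squared deviations of the return vectors from their mean. A minimal sketch follows; the i.i.d. return model and its parameters are illustrative stand-ins, not the paper's definitions.

```python
import numpy as np

def variation_Q(returns):
    """Q = sum_t ||r_t - mu||^2, where mu is the mean return vector."""
    r = np.asarray(returns, dtype=float)
    mu = r.mean(axis=0)
    return float(((r - mu) ** 2).sum())

rng = np.random.default_rng(0)
T, n, sigma = 1000, 2, 0.01
# i.i.d. price relatives: a crude stand-in for GBM with per-period volatility sigma.
r = 1.0 + rng.normal(0.0005, sigma, size=(T, n))
Q = variation_Q(r)
# Q concentrates around n * T * sigma**2 = 0.2, far below T = 1000.
# Trading k times more often multiplies T by k but scales the per-period
# volatility by 1/sqrt(k), leaving Q roughly unchanged -- which is why an
# O(log Q) bound does not degrade with trading frequency.
print(Q)
```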
1.1 New Techniques and Comparison to Related Work
Cesa-Bianchi, Mansour and Stoltz [CBMS07] initiated work on relating worst case regret to the
variation in the data for the related learning problem of prediction from expert advice, and conjectured that the optimal regret bounds should depend on the observed
variation of the cost sequence.
Recently, this conjecture was proved and regret bounds of Õ(√Q) were obtained in the full information and bandit linear optimization settings [HK08, HK09], where Q is the variation in the cost
sequence. In this paper we give an exponential improvement in regret, viz. O(log Q), for the case
of online exp-concave optimization, which includes portfolio selection as a special case.
Another approach to connecting worst-case to average-case investing was taken by Jamshidian
[Jam92] and Cross and Barron [CB03]. They considered a model of "continuous trading", where
there are T "trading intervals", and in each the online investor chooses a fixed portfolio which is
rebalanced k times with k → ∞. They prove familiar regret bounds of O(log T ) (independent of
k) in this model w.r.t. the best fixed portfolio which is rebalanced T ? k times. In this model our
algorithm attains the tighter regret bounds of O(log Q), although our algorithm has more flexibility.
Furthermore their algorithms, being extensions of Cover's algorithm, may require exponential time
in general.1
Our bounds of O(log Q) regret require completely different techniques compared to the O(√Q)
regret bounds of [HK08, HK09]. These previous bounds are based on first-order gradient descent
methods which are too weak to obtain O(log Q) regret. Instead we have to use the second-order
Newton step ideas based on [HKKA06] (in particular, the Hessian of the cost functions).

The second-order techniques of [HKKA06] are, however, not sensitive enough to obtain O(log Q)
bounds. This is because progress was measured in terms of the distance between successive portfolios in the usual Euclidean norm, which is insensitive to variation in the cost sequence. In this paper,
we introduce a different analysis technique, based on analyzing the distance between successive
predictions using norms that keep changing from iteration to iteration and are actually sensitive to
the variation.
A key technical step in the analysis is a lemma (Lemma 6) which bounds the sum of differences of
successive Cesàro means of a sequence of vectors by the logarithm of its variation. This lemma,
which may be useful in other contexts when variation bounds on the regret are desired, is proved
using the Karush-Kuhn-Tucker conditions, and also improves the regret bounds in previous papers.

¹ Cross and Barron give an efficient implementation for some interesting special cases, under assumptions
on the variation in returns and bounds on the magnitude of the returns, and assuming k → ∞. A truly efficient
implementation of their algorithm can probably be obtained using the techniques of Kalai and Vempala [KV03].
1.2 The model and statement of results
Portfolio management. In the universal portfolio management model [Cov91], an online investor
iteratively distributes her wealth over n assets before observing the change in asset price. In each
iteration t = 1, 2, . . . the investor commits to an n-dimensional distribution of her wealth,
xt ∈ Δn = {x : Σi x(i) = 1, x ≥ 0}. She then observes a price relatives vector rt ∈ R^n_+, where rt(i) is
the ratio between the closing price of the i-th asset on trading period t and the opening price. In the
t-th trading period, the wealth of the investor changes by a factor of (rt · xt). The overall change in
wealth is thus Πt (rt · xt). Since in a typical market wealth grows at an exponential rate, we measure
performance by the exponential growth rate, which is log Πt (rt · xt) = Σt log(rt · xt). A constant
rebalanced portfolio (CRP) is an investment strategy which rebalances the wealth in every iteration
to keep a fixed distribution. Thus, for a CRP x ∈ Δn, the change in wealth is Πt (rt · x).

The regret of the investor is defined to be the difference between the exponential growth rate of her
investment strategy and that of the best CRP strategy in hindsight, i.e.

    Regret := max_{x* ∈ Δn} Σt log(rt · x*) − Σt log(rt · xt).

Note that the regret does not change if we scale all the returns in any particular period by the same
amount. So we assume w.l.o.g. that in all periods t, max_i rt(i) = 1. We assume that there is a known
parameter r > 0 such that in all periods t, min_i rt(i) ≥ r. We call r the market variability
parameter. This is the only restriction we put on the stock price returns; they could be chosen
adversarially as long as they respect the market variability bound.
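To make these definitions concrete, the following sketch (pure Python; the price relatives and the 1%-grid search are illustrative assumptions, not part of the model) computes the log-wealth of a CRP and the regret of an investor against the best CRP in hindsight:

```python
import math

def log_wealth(portfolios, price_relatives):
    # Exponential growth rate: sum of log(r_t . x_t) over trading periods.
    return sum(math.log(sum(r_i * x_i for r_i, x_i in zip(r, x)))
               for x, r in zip(portfolios, price_relatives))

def crp_log_wealth(x, price_relatives):
    # A CRP plays the same distribution x in every period.
    return log_wealth([x] * len(price_relatives), price_relatives)

# Two assets, three periods; returns scaled so that max_i r_t(i) = 1.
relatives = [(1.0, 0.5), (0.5, 1.0), (1.0, 0.9)]

# Grid-search the best CRP in hindsight over the 1-dimensional simplex.
best = max(crp_log_wealth((p / 100.0, 1 - p / 100.0), relatives)
           for p in range(101))

# Regret of the uniform constant-rebalanced investor against the best CRP.
uniform = [(0.5, 0.5)] * len(relatives)
regret = best - log_wealth(uniform, relatives)
print(regret >= 0)  # True: the uniform CRP is itself in the grid
```

For n > 2 assets the hindsight CRP would be found by convex optimization rather than grid search; the grid here only keeps the example dependency-free.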
Online convex optimization. In the online convex optimization problem [Zin03], which generalizes
universal portfolio management, the decision space is a closed, bounded, convex set K ⊆ R^n, and
we are sequentially given a series of convex cost² functions ft : K → R for t = 1, 2, . . .. The
algorithm iteratively produces a point xt ∈ K in every round t, without knowledge of ft (but using
the past sequence of cost functions), and incurs the cost ft(xt). The regret at time T is defined to be

    Regret := Σ_{t=1}^T ft(xt) − min_{x ∈ K} Σ_{t=1}^T ft(x).

Usually, we will let Σt denote Σ_{t=1}^T. In this paper, we restrict our attention to convex cost functions
which can be written as ft(x) = g(vt · x) for some univariate convex function g and a parameter
vector vt ∈ R^n (for example, in the portfolio management problem, K = Δn, ft(x) = −log(rt · x),
g = −log, and vt = rt).
Thus, the cost functions are parametrized by the vectors v1, v2, . . . , vT. Our bounds will be expressed as a function of the quadratic variability of the parameter vectors v1, v2, . . . , vT, defined
as

    Q(v1, . . . , vT) := min_μ Σ_{t=1}^T ||vt − μ||².

This expression is minimized at μ = (1/T) Σ_{t=1}^T vt, and thus the quadratic variation is just T − 1 times
the sample variance of the sequence of vectors {v1, . . . , vT}. Note however that the sequence can be
generated adversarially rather than by some stochastic process. We shall refer to this as simply Q if
the vectors are clear from the context.
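The quadratic variability is directly computable from its definition; a minimal sketch (pure Python, illustrative vectors):

```python
def quadratic_variability(vs):
    """Q(v_1, ..., v_T) = min over mu of sum_t ||v_t - mu||^2,
    attained at mu = the mean of the vectors."""
    T, n = len(vs), len(vs[0])
    mu = [sum(v[i] for v in vs) / T for i in range(n)]
    return sum(sum((v[i] - mu[i]) ** 2 for i in range(n)) for v in vs)

vs = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.0, 0.0)]
Q = quadratic_variability(vs)
print(Q)  # 2.0: each coordinate has mean 0.5 and contributes 4 * 0.25
```

Equivalently, Q is (T − 1) times the (trace of the) sample covariance of the sequence, which is the "sum of squared deviations of the returns from their mean" from the introduction.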
Main theorem. In the setup of the online convex optimization problem above, we have the following
algorithmic result:

Theorem 1. Let the cost functions be of the form ft(x) = g(vt · x). Assume that there are parameters
R, D, a, b > 0 such that the following conditions hold:

1. for all t, ||vt|| ≤ R,
2. for all x ∈ K, we have ||x|| ≤ D,
3. for all x ∈ K, and for all t, either g′(vt · x) ∈ [0, a] or g′(vt · x) ∈ [−a, 0], and
4. for all x ∈ K, and for all t, g″(vt · x) ≥ b.

Then there is an algorithm that guarantees the following regret bound:

    Regret = O((a²n/b) log(1 + bQ + bR²) + aRD log(2 + Q/R²) + D²).

² Note the difference from the portfolio selection problem: here we have convex cost functions, rather than
concave payoff functions. The portfolio selection problem is obtained by using −log as the cost function.
Now we apply Theorem 1 to the portfolio selection problem. First, we estimate the relevant parameters. We have ||rt|| ≤ √n since all rt(i) ≤ 1, thus R = √n. For any x ∈ Δn, ||x|| ≤ 1, so D = 1.
g′(vt · x) = −1/(vt · x), and thus g′(vt · x) ∈ [−1/r, 0], so a = 1/r. Finally, g″(vt · x) = 1/(vt · x)² ≥ 1, so
b = 1. Applying Theorem 1 we get the following corollary:

Corollary 2. For the portfolio selection problem over n assets, there is an algorithm that attains
the following regret bound:

    Regret = O((n/r²) log(Q + n)).
2 Bounding the Regret by the Observed Variation in Returns

2.1 Preliminaries
All matrices are assumed to be real symmetric matrices in R^{n×n}, where n is the number of stocks. We
use the notation A ⪰ B to say that A − B is positive semidefinite. We require the notion of a norm
of a vector x induced by a positive definite matrix M, defined as ||x||_M = √(x⊤Mx). The following
simple generalization of the Cauchy-Schwarz inequality is used in the analysis:

    ∀x, y ∈ R^n:  x · y ≤ ||x||_M ||y||_{M⁻¹}.

We denote by |A| the determinant of a matrix A, and by A • B = Tr(AB) = Σ_{ij} Aij Bij. As
we are concerned with logarithmic regret bounds, potential functions which behave like harmonic
series come into play. A generalization of the harmonic series to high dimensions is the vector-harmonic
series, which is a series of quadratic forms that can be expressed as (here A ≻ 0 is a positive definite
matrix, and v1, v2, . . . are vectors in R^n):

    v1⊤(A + v1v1⊤)⁻¹v1,  v2⊤(A + v1v1⊤ + v2v2⊤)⁻¹v2,  . . . ,  vt⊤(A + Σ_{τ=1}^t vτvτ⊤)⁻¹vt,  . . .
The following lemma is from [HKKA06]:

Lemma 3. For a vector harmonic series given by an initial matrix A and vectors v1, v2, . . . , vT, we
have

    Σ_{t=1}^T vt⊤(A + Σ_{τ=1}^t vτvτ⊤)⁻¹vt ≤ log [ |A + Σ_{τ=1}^T vτvτ⊤| / |A| ].

The reader can note that in one dimension, if all vectors vt = 1 and A = 1, then the series above
reduces exactly to the regular harmonic series whose sum is bounded, of course, by log(T + 1).
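In one dimension Lemma 3 can be checked directly, since the matrices reduce to scalars; the sketch below (pure Python, arbitrary illustrative inputs) compares both sides:

```python
import math

def lemma3_sides(A, vs):
    # One-dimensional case of Lemma 3: matrices are scalars, so each term
    # v_t (A + sum_{tau<=t} v_tau^2)^{-1} v_t is v_t^2 / (A + prefix sum).
    prefix, lhs = A, 0.0
    for v in vs:
        prefix += v * v
        lhs += v * v / prefix
    rhs = math.log(prefix / A)  # log of the "determinant" ratio
    return lhs, rhs

# With A = 1 and all v_t = 1 the series is the harmonic tail
# 1/2 + 1/3 + ... + 1/(T+1), bounded by log(T + 1).
lhs, rhs = lemma3_sides(1.0, [1.0] * 100)
print(lhs <= rhs)  # True

# The inequality holds for arbitrary scalars as well.
lhs2, rhs2 = lemma3_sides(0.5, [0.3, -1.2, 2.0, 0.0, 0.7])
print(lhs2 <= rhs2)  # True
```

The scalar inequality x/S ≤ log(S/(S − x)) for 0 ≤ x < S, summed telescopically, is exactly what the determinant ratio captures in higher dimensions.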
2.2 Algorithm and analysis

We analyze the following algorithm and prove that it attains logarithmic regret with respect to the
observed variation (rather than the number of iterations). The algorithm follows the generic algorithmic
scheme of "Follow-The-Regularized-Leader" (FTRL) with squared Euclidean regularization.
Algorithm Exp-Concave-FTL. In iteration t, use the point xt defined as:

    xt := arg min_{x ∈ Δn} [ Σ_{τ=1}^{t−1} fτ(x) + (1/2)||x||² ]        (1)
Note that the mathematical program which the algorithm solves is convex, and can be solved in time
polynomial in the dimension and the number of iterations. The running time, however, for solving this
convex program can be quite high. In the full version of the paper, for the specific problem of
portfolio selection, where ft(x) = −log(rt · x), we give a faster implementation whose per-iteration
running time is independent of the number of iterations, using the more sophisticated "online
Newton method" of [HKKA06]. In particular, we have the following result:

Theorem 4. For the portfolio selection problem, there is an algorithm that runs in O(n³) time per
iteration whose regret is bounded by

    Regret = O((n/r³) log(Q + n)).
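A direct, if slow, way to realize step (1) for the portfolio costs ft(x) = −log(rt · x) is projected gradient descent on the regularized follow-the-leader objective. The sketch below (pure Python; the step size, iteration count, and data are illustrative assumptions, and this is not the faster online Newton implementation referred to above) computes the leader on the simplex:

```python
def project_simplex(y):
    # Euclidean projection of y onto the probability simplex (sort-based).
    u = sorted(y, reverse=True)
    css, rho = 0.0, 0
    for i, ui in enumerate(u, start=1):
        css += ui
        if ui - (css - 1.0) / i > 0:
            rho = i
    theta = (sum(u[:rho]) - 1.0) / rho
    return [max(yi - theta, 0.0) for yi in y]

def ftl_step(relatives, n, iters=2000, lr=0.01):
    # Approximately minimize sum_tau -log(r_tau . x) + 0.5*||x||^2 over
    # the simplex, i.e. step (1) of Exp-Concave-FTL.
    x = [1.0 / n] * n
    for _ in range(iters):
        grad = list(x)  # gradient of the 0.5*||x||^2 regularizer
        for r in relatives:
            dot = sum(ri * xi for ri, xi in zip(r, x))
            for i in range(n):
                grad[i] -= r[i] / dot  # gradient of -log(r . x)
        x = project_simplex([xi - lr * gi for xi, gi in zip(x, grad)])
    return x

past = [(1.0, 0.5), (1.0, 0.6), (1.0, 0.7)]  # asset 1 dominated asset 2
x = ftl_step(past, 2)
print(x[0] > x[1])  # True: the leader weights the better asset more
```

This inner optimization is what Theorem 4 replaces with an O(n³)-per-iteration Newton-style update.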
In this paper, we retain the simpler algorithm and analysis for an easier exposition. We now proceed
to prove Theorem 1.
Proof. [Theorem 1] First, we note that the algorithm is running a "Follow-the-leader" procedure
on the cost functions f0, f1, f2, . . . where f0(x) = (1/2)||x||² is a fictitious period-0 cost function. In
other words, in each iteration, it chooses the point that would have minimized the total cost under
all the observed functions so far (and, additionally, a fictitious initial cost function f0). This point is
referred to as the leader in that round.

The first step in analyzing such an algorithm is to use a stability lemma from [KV05], which bounds
the regret of any Follow-the-leader algorithm by the difference in costs (under ft) of the current prediction xt and the next one xt+1, plus an additional error term which comes from the regularization.
Thus, we have

    Regret ≤ Σt [ft(xt) − ft(xt+1)] + (1/2)(||x*||² − ||x0||²)
           ≤ Σt ∇ft(xt) · (xt − xt+1) + (1/2)D²
           = Σt g′(vt · xt)[vt · (xt − xt+1)] + (1/2)D²        (2)

The second inequality is because ft is convex. The last equality follows because ∇ft(xt) = g′(vt · xt)vt.
Now, we need a handle on xt − xt+1. For this, define Ft = Σ_{τ=0}^{t−1} fτ, and note that xt
minimizes Ft over K. Consider the difference in the gradients of Ft+1 evaluated at xt+1 and xt:

    ∇Ft+1(xt+1) − ∇Ft+1(xt) = Σ_{τ=0}^t [∇fτ(xt+1) − ∇fτ(xt)]
        = Σ_{τ=1}^t [g′(vτ · xt+1) − g′(vτ · xt)]vτ + (xt+1 − xt)
        = Σ_{τ=1}^t [∇g′(vτ · ζτ) · (xt+1 − xt)]vτ + (xt+1 − xt)        (3)
        = Σ_{τ=1}^t g″(vτ · ζτ) vτvτ⊤ (xt+1 − xt) + (xt+1 − xt).        (4)

Equation (3) follows by applying the Taylor expansion of the (multi-variate) function g′(vτ · x) at the
point xt, for some point ζτ (which may depend on t) on the line segment joining xt and xt+1. Equation (4) follows from
the observation that ∇g′(vτ · x) = g″(vτ · x)vτ.
Define At = Σ_{τ=1}^t g″(vτ · ζτ) vτvτ⊤ + I, where I is the identity matrix, and Δxt = xt+1 − xt. Then
equation (4) can be re-written as:

    ∇Ft+1(xt+1) − ∇Ft(xt) − g′(vt · xt)vt = At Δxt.        (5)

Now, since xt minimizes the convex function Ft over the convex set K, a standard inequality of
convex optimization (see [BV04]) states that for any point y ∈ K, we have ∇Ft(xt) · (y − xt) ≥ 0.
Thus, for y = xt+1, we get that ∇Ft(xt) · (xt+1 − xt) ≥ 0. Similarly, we get that ∇Ft+1(xt+1) ·
(xt − xt+1) ≥ 0. Putting these two inequalities together, we get that

    (∇Ft+1(xt+1) − ∇Ft(xt)) · Δxt ≤ 0.        (6)
Thus, using the expression for At Δxt from (5) we have

    ||Δxt||²_{At} = At Δxt · Δxt
                 = (∇Ft+1(xt+1) − ∇Ft(xt) − g′(vt · xt)vt) · Δxt
                 ≤ −g′(vt · xt)(vt · Δxt)                (from (6))
                 = g′(vt · xt)[vt · (xt − xt+1)].        (7)

Assume that g′(vt · x) ∈ [−a, 0] for all x ∈ K and all t. The other case is handled similarly.
Inequality (7) implies that g′(vt · xt) and vt · (xt − xt+1) have the same sign. Thus, we can upper
bound

    g′(vt · xt)[vt · (xt − xt+1)] ≤ a(vt · Δxt).        (8)
Define v̄t = vt − μt, where μt = (1/(t+1)) Σ_{τ=1}^t vτ. Then we have

    Σt vt · Δxt = Σt v̄t · Δxt + Σ_{t=2}^T xt · (μt−1 − μt) − x1 · μ1 + xT+1 · μT.        (9)

Now, define Δ = Δ(v1, . . . , vT) = Σ_{t=1}^{T−1} ||μt+1 − μt||. Then we bound

    Σ_{t=2}^T xt · (μt−1 − μt) − x1 · μ1 + xT+1 · μT
        ≤ Σ_{t=2}^T ||xt|| ||μt−1 − μt|| + ||x1|| ||μ1|| + ||xT+1|| ||μT||
        ≤ DΔ + 2DR.        (10)
We will bound Δ momentarily. For now, we turn to bounding the first term of (9) using the Cauchy-Schwarz generalization as follows:

    v̄t · Δxt ≤ ||v̄t||_{At⁻¹} ||Δxt||_{At}.        (11)

By the usual Cauchy-Schwarz inequality,

    Σt ||v̄t||_{At⁻¹} ||Δxt||_{At} ≤ √(Σt ||v̄t||²_{At⁻¹}) · √(Σt ||Δxt||²_{At}) ≤ √(Σt ||v̄t||²_{At⁻¹}) · √(Σt a(vt · Δxt))

from (7) and (8). We conclude, using (9), (10) and (11), that

    Σt a(vt · Δxt) ≤ a √(Σt ||v̄t||²_{At⁻¹}) · √(Σt a(vt · Δxt)) + aDΔ + 2aDR.

This implies (using the AM-GM inequality applied to the first term on the RHS) that

    Σt a(vt · Δxt) ≤ a² Σt ||v̄t||²_{At⁻¹} + 2aDΔ + 4aDR.

Plugging this into the regret bound (2) we obtain, via (8),

    Regret ≤ a² Σt ||v̄t||²_{At⁻¹} + 2aDΔ + 4aDR + (1/2)D².
The proof is completed by the following two lemmas (Lemmas 5 and 6) which bound the RHS. The
first term is a vector harmonic series, and the second term can be bounded by a (regular) harmonic
series.

Lemma 5. Σt ||v̄t||²_{At⁻¹} ≤ (3n/b) log(1 + bQ + bR²).
Proof. We have At = Σ_{τ=1}^t g″(vτ · ζτ) vτvτ⊤ + I. Since g″(vτ · ζτ) ≥ b, we have At ⪰ I + b Σ_{τ=1}^t vτvτ⊤.
Using the fact that v̄τ = vτ − μτ and μτ = (1/(τ+1)) Σ_{s=1}^τ vs, we get that

    Σ_{τ=1}^t v̄τv̄τ⊤ = Σ_{s=1}^t (1 + Σ_{τ=s}^t 1/(τ+1)²) vs vs⊤ + Σ_{s=1}^t Σ_{r<s} (Σ_{τ=s}^t 1/(τ+1)²) [vr vs⊤ + vs vr⊤].

Now, Σ_{τ=s}^t 1/(τ+1)² ≤ ∫_s^{t+1} (1/x²) dx = 1/s − 1/(t+1). Since (vr + vs)(vr + vs)⊤ ⪰ 0, we have
vr vr⊤ + vs vs⊤ ⪰ −[vr vs⊤ + vs vr⊤], and hence

    Σ_{τ=1}^t v̄τv̄τ⊤ ⪯ Σ_{s=1}^t (2 + 1/s) vs vs⊤ ⪯ 3 Σ_{s=1}^t vs vs⊤.

Let Ãt = I + b Σ_{τ=1}^t v̄τv̄τ⊤. Note that the inequality above shows that Ãt ⪯ 3At. Thus, using Lemma
3, we get

    Σt ||v̄t||²_{At⁻¹} ≤ 3 Σt ||v̄t||²_{Ãt⁻¹} = (3/b) Σt [√b v̄t]⊤ Ãt⁻¹ [√b v̄t] ≤ (3/b) log(|ÃT| / |Ã0|).        (12)

To bound the latter quantity note that |Ã0| = |I| = 1, and that

    |ÃT| = |I + b Σt v̄tv̄t⊤| ≤ (1 + b Σt ||v̄t||²)^n = (1 + bQ̂)^n,

where Q̂ = Σt ||v̄t||² = Σt ||vt − μt||². By Lemma 7 (proved in the full version of the paper),
Q̂ ≤ Q + R². This implies that |ÃT| ≤ (1 + bQ + bR²)^n, and the proof is completed by
substituting this bound into (12).
Lemma 6. Δ(v1, . . . , vT) ≤ 2R[log(2 + Q/R²) + 1].
Proof. Define, for τ ≥ 0, the vector uτ = vτ − μT+1. Note that by convention, we have v0 = 0.
We have

    Σ_{τ=0}^T ||uτ||² = ||μT+1||² + Σ_{τ=1}^T ||vτ − μT+1||² ≤ R² + Q.

Furthermore,

    ||μt+1 − μt|| = || (1/(t+2)) Σ_{τ=0}^{t+1} vτ − (1/(t+1)) Σ_{τ=0}^t vτ ||
                 = || (1/(t+2)) Σ_{τ=0}^{t+1} uτ − (1/(t+1)) Σ_{τ=0}^t uτ ||
                 ≤ (1/(t+1)²) Σ_{τ=0}^t ||uτ|| + (1/(t+1)) ||ut+1||.

Summing up over all iterations,

    Σt ||μt+1 − μt|| ≤ Σt [ (1/(t+1)²) Σ_{τ=0}^t ||uτ|| + (1/(t+1)) ||ut+1|| ] ≤ 2 Σt (1/t) ||ut−1|| ≤ 2R[log(2 + Q/R²) + 1].

The last inequality follows from Lemma 8 (proved in the full version) below, by setting xt =
||ut−1||/R for t ≥ 1.
Lemma 7. Q̂ ≤ Q + R².

Lemma 8. Suppose that 0 ≤ xt ≤ 1 and Σt xt² ≤ Q. Then Σ_{t=1}^T xt/t ≤ log(1 + Q) + 1.
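Lemma 8 is easy to sanity-check numerically; the sketch below (pure Python, with a budget-spending sequence chosen to stress the bound) compares both sides:

```python
import math

def lemma8_sides(xs):
    # Lemma 8: if 0 <= x_t <= 1 and sum x_t^2 <= Q, then
    # sum_t x_t / t <= log(1 + Q) + 1.
    Q = sum(x * x for x in xs)
    lhs = sum(x / t for t, x in enumerate(xs, start=1))
    rhs = math.log(1.0 + Q) + 1.0
    return lhs, rhs

# A hard-case style sequence: spend the budget early with x_t = 1,
# then decay like 1/sqrt(t) so that sum x_t^2 grows only logarithmically.
xs = [1.0] * 10 + [1.0 / math.sqrt(t) for t in range(11, 200)]
lhs, rhs = lemma8_sides(xs)
print(lhs <= rhs)  # True
```

The sequence above makes the two sides reasonably close, which is the regime that matters when the lemma is applied with xt = ||ut−1||/R in the proof of Lemma 6.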
3 Implications in the Geometric Brownian Motion Model
We begin with a brief description of the model. The model assumes that stocks can be traded continuously, and that at any time, the fractional change in the stock price within an infinitesimal time
interval is normally distributed, with mean and variance proportional to the length of the interval.
The randomness is due to many infinitesimal trades that jar the price, much like particles in a physical medium are jarred about by other particles, leading to the classical Brownian motion.
Formally, the model is parameterized by two quantities, the drift μ, which is the long term trend
of the stock prices, and the volatility σ, which characterizes deviations from the long term trend. The
parameter σ is typically specified as annualized volatility, i.e. the standard deviation of the stock's
logarithmic returns in one year. Thus, a trading interval of [0, 1] specifies one year. The model postulates that the stock price at time t, St, follows a geometric Brownian motion with drift μ and
volatility σ:

    dSt = μSt dt + σSt dWt,

where Wt is a continuous-time stochastic process known as the Wiener process or simply Brownian
motion. The Wiener process is characterized by three facts:

1. W0 = 0,
2. Wt is almost surely continuous, and
3. for any two disjoint time intervals [s1, t1] and [s2, t2], the random variables Wt1 − Ws1 and
   Wt2 − Ws2 are independent zero-mean Gaussian random variables with variance t1 − s1
   and t2 − s2 respectively.

Using Itô's lemma (see, for example, [KS04]), it can be shown that the stock price at time t is given
by

    St = S0 exp((μ − σ²/2)t + σWt).        (13)
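Equation (13) gives a way to sample a GBM price path at discrete times without discretization error; a minimal sketch (pure Python, illustrative parameters):

```python
import math
import random

def gbm_path(s0, mu, sigma, T_periods, seed=0):
    """Sample S at times k/T_periods over one year using Eq. (13):
    S_t = S_0 * exp((mu - sigma^2/2) t + sigma W_t)."""
    rng = random.Random(seed)
    dt = 1.0 / T_periods
    w, path = 0.0, [s0]
    for _ in range(T_periods):
        # Wiener increments over disjoint intervals are independent
        # N(0, dt) random variables (fact 3 above).
        w += rng.gauss(0.0, math.sqrt(dt))
        t = len(path) * dt
        path.append(s0 * math.exp((mu - 0.5 * sigma ** 2) * t + sigma * w))
    return path

path = gbm_path(s0=100.0, mu=0.05, sigma=0.2, T_periods=252)
print(len(path), min(path) > 0.0)  # 253 True
```

Because (13) is the exact solution of the SDE, sampling W at the trading times and exponentiating is statistically exact at those times, unlike an Euler discretization of dSt.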
Now, we consider a situation where we have n stocks in the GBM model. Let μ = (μ1, μ2, . . . , μn)
be the vector of drifts, and σ = (σ1, σ2, . . . , σn) be the vector of (annualized) volatilities. Suppose
we trade for one year. We now study the effect of trading frequency on the quadratic variation
of the stock price returns. For this, assume that the year-long trading interval is sub-divided into
T equally sized intervals of length 1/T, and we trade at the end of each such interval. Let rt =
(rt(1), rt(2), . . . , rt(n)) be the vector of stock returns in the t-th trading period. We assume that T is
"large enough", which is taken to mean that it is larger than μ(i), σ(i), and (μ(i)/σ(i))² for any i.
Then using the facts of the Wiener process stated above, we can prove the following lemma, which
shows that the expected quadratic variation, and its variance, is essentially the same regardless
of trading frequency. The proof is a straightforward calculation and deferred to the full version of
this paper.

Lemma 9. In the setup of trading n stocks in the GBM model over one year with T trading periods,
there is a vector v such that

    E[ Σ_{t=1}^T ||rt − v||² ] ≤ ||σ||²(1 + O(1/T))

and

    VAR[ Σ_{t=1}^T ||rt − v||² ] ≤ 6||σ||²(1 + O(1/T)),

regardless of how the stocks are correlated.
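The frequency-independence asserted by Lemma 9 is easy to see in simulation; the sketch below (pure Python, a single stock with illustrative parameters and a fixed seed, not a proof) computes the quadratic variation of period returns at monthly and daily rebalancing, and both come out on the order of σ²:

```python
import math
import random

def return_quadratic_variation(mu, sigma, T, seed):
    """Q = sum_t (r_t - mean r)^2 for one stock's period returns
    r_t = S_t / S_{t-1} over T periods in one year, sampled via Eq. (13)."""
    rng = random.Random(seed)
    dt = 1.0 / T
    rets = [math.exp((mu - 0.5 * sigma ** 2) * dt
                     + sigma * rng.gauss(0.0, math.sqrt(dt)))
            for _ in range(T)]
    mean = sum(rets) / T
    return sum((r - mean) ** 2 for r in rets)

sigma = 0.2
q_monthly = return_quadratic_variation(0.05, sigma, 12, seed=1)
q_daily = return_quadratic_variation(0.05, sigma, 252, seed=1)
# Both are of order sigma^2 = 0.04 despite a 21x change in frequency.
print(q_monthly, q_daily)
```

Intuitively, each period's return deviates from 1 by about σ√(1/T), so the sum of T squared deviations stays near σ² no matter how finely the year is subdivided.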
Applying this bound in our algorithm, we obtain the following regret bound from Corollary 2.
Theorem 10. In the setup of Lemma 9, for any c > 0, with probability at least 1 − 2e^{−c}, we have

    Regret ≤ O(n(log(||σ||² + n) + c)).
Theorem 10 shows that one expects to achieve constant regret independent of the trading frequency,
as long as the total trading period is fixed. This result is only useful if increasing trading frequency
improves the performance of the best constant rebalanced portfolio. Indeed, this has been observed
empirically (see e.g. [AHKS06]; more empirical evidence is given in the full version of this
paper).

To obtain a theoretical justification for increasing trading frequency, we consider an example where
we have two stocks that follow independent Black-Scholes models with the same drifts, but different
volatilities σ1, σ2. The same-drift assumption is necessary because in the long run, the best CRP is
the one that puts all its wealth on the stock with the greater drift. We normalize the drifts to be equal
to 0; this does not change the performance in any qualitative manner.

Since the drift is 0, the expected return of either stock in any trading period is 1; and since the
returns in each period are independent, the expected final change in wealth, which is the product
of the returns, is also 1. Thus, in expectation, any CRP (indeed, any portfolio selection strategy)
has overall return 1. We therefore turn to a different criterion for selecting a CRP. The risk of an
investment strategy is measured by the variance of its payoff; thus, if different investment strategies
have the same expected payoff, then the one to choose is the one with minimum variance. We
therefore choose the CRP with the least variance. We prove the following lemma in the full version
of the paper:

Lemma 11. In the setup where we trade two stocks with zero drift and volatilities σ1, σ2, the variance of the minimum variance CRP decreases as the trading frequency increases.

Thus, increasing the trading frequency decreases the variance of the minimum variance CRP, which
implies that it gets less risky to trade more frequently; in other words, the more frequently we trade,
the more likely the payoff will be close to the expected value. On the other hand, as we show
in Theorem 10, the regret does not change even if we trade more often; thus, one expects to see
improving performance of our algorithm as the trading frequency increases.
4 Conclusions and Future Work
We have presented an efficient algorithm for regret minimization with exp-concave loss functions
whose regret strictly improves upon the state of the art. For the problem of portfolio selection,
the regret is bounded in terms of the observed variation in stock returns rather than the number of
iterations.
Recently, DeMarzo, Kremer and Mansour [DKM06] presented a novel game-theoretic framework
for option pricing. Their method prices options using low regret algorithms, and it is possible that
our analysis can be applied to options pricing via their method (although that would require a much
tighter optimization of the constants involved).
Increasing trading frequency in practice means increasing transaction costs. We have assumed no
transaction costs in this paper. It would be very interesting to extend our portfolio selection algorithm
to take into account transaction costs as in the work of Blum and Kalai [BK97].
References

[AHKS06] Amit Agarwal, Elad Hazan, Satyen Kale, and Robert E. Schapire. Algorithms for portfolio management based on the Newton method. In ICML, pages 9-16, 2006.

[Bac00] L. Bachelier. Théorie de la spéculation. Annales Scientifiques de l'École Normale Supérieure, 3(17):21-86, 1900.

[BK97] Avrim Blum and Adam Kalai. Universal portfolios with and without transaction costs. In COLT, pages 309-313, New York, NY, USA, 1997. ACM.

[BS73] Fischer Black and Myron Scholes. The pricing of options and corporate liabilities. Journal of Political Economy, 81(3):637-654, 1973.

[BV04] Stephen Boyd and Lieven Vandenberghe. Convex Optimization. Cambridge University Press, New York, NY, USA, 2004.

[CB03] Jason E. Cross and Andrew R. Barron. Efficient universal portfolios for past dependent target classes. Mathematical Finance, 13(2):245-276, 2003.

[CBMS07] Nicolò Cesa-Bianchi, Yishay Mansour, and Gilles Stoltz. Improved second-order bounds for prediction with expert advice. Machine Learning, 66(2-3):321-352, 2007.

[Cov91] T. Cover. Universal portfolios. Mathematical Finance, 1:1-19, 1991.

[DKM06] Peter DeMarzo, Ilan Kremer, and Yishay Mansour. Online trading algorithms and robust option pricing. In STOC '06: Proceedings of the thirty-eighth annual ACM symposium on Theory of computing, pages 477-486, New York, NY, USA, 2006. ACM.

[HK08] Elad Hazan and Satyen Kale. Extracting certainty from uncertainty: Regret bounded by variation in costs. In Proceedings of the 21st COLT, 2008.

[HK09] Elad Hazan and Satyen Kale. Better algorithms for benign bandits. In SODA, pages 38-47, Philadelphia, PA, USA, 2009. Society for Industrial and Applied Mathematics.

[HKKA06] Elad Hazan, Adam Kalai, Satyen Kale, and Amit Agarwal. Logarithmic regret algorithms for online convex optimization. In COLT, pages 499-513, 2006.

[HSSW96] David P. Helmbold, Robert E. Schapire, Yoram Singer, and Manfred K. Warmuth. Online portfolio selection using multiplicative updates. In ICML, pages 243-251, 1996.

[Jam92] F. Jamshidian. Asymptotically optimal portfolios. Mathematical Finance, 2:131-150, 1992.

[KS04] Ioannis Karatzas and Steven E. Shreve. Brownian Motion and Stochastic Calculus. Springer Verlag, New York, NY, USA, 2004.

[KV03] Adam Kalai and Santosh Vempala. Efficient algorithms for universal portfolios. Journal of Machine Learning Research, 3:423-440, 2003.

[KV05] Adam Kalai and Santosh Vempala. Efficient algorithms for online decision problems. Journal of Computer and System Sciences, 71(3):291-307, 2005.

[MF92] Neri Merhav and Meir Feder. Universal sequential learning and decision from individual data sequences. In COLT, pages 413-427, 1992.

[Osb59] M. F. M. Osborne. Brownian motion in the stock market. Operations Research, 2:145-173, 1959.

[Zin03] Martin Zinkevich. Online convex programming and generalized infinitesimal gradient ascent. In ICML, pages 928-936, 2003.
3,085 | 3,796 | Linear-time Algorithms for Pairwise Statistical
Problems
Parikshit Ram, Dongryeol Lee, William B. March and Alexander G. Gray
Computational Science and Engineering, Georgia Institute of Technology
Atlanta, GA 30332
{p.ram@,dongryel@cc.,march@,agray@cc.}gatech.edu
Abstract
Several key computational bottlenecks in machine learning involve pairwise distance computations, including all-nearest-neighbors (finding the nearest neighbor(s) for each point, e.g. in manifold learning) and kernel summations (e.g. in
kernel density estimation or kernel machines). We consider the general, bichromatic case for these problems, in addition to the scientific problem of N-body
simulation. In this paper we show for the first time O(N) worst-case runtimes for
practical algorithms for these problems based on the cover tree data structure [1].
1 Introduction
Pairwise distance computations are fundamental to many important computations in machine learning and are some of the most expensive for large datasets. In particular, we consider the class of all-query problems, in which the combined interactions of a reference set R of N points in R^D is computed for each point q in a query set Q of size O(N). This class of problems includes the pairwise kernel summation used in kernel density estimation and kernel machines and the all-nearest-neighbors computation for classification and manifold learning. All-query problems can be solved directly by scanning over the N reference points for each of the O(N) queries, for a total running time of O(N^2). Since quadratic running times are too slow for even modestly-sized problems, previous work has sought to reduce the number of distance computations needed.
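As a concrete baseline, the direct O(N^2) computation for the two all-query problems considered here can be sketched as follows. This is our own minimal Python illustration, not code from the paper; the function names and the choice of a Gaussian kernel are ours.

```python
import math

def all_nearest_neighbors(queries, references):
    """Direct all-NN: one pass over all |Q| * |R| pairs."""
    return [min(references, key=lambda r: math.dist(q, r)) for q in queries]

def all_kernel_sums(queries, references, bandwidth=1.0):
    """Direct kernel summation F(q) = sum_r K(d(q, r)), Gaussian K as a stand-in."""
    K = lambda d: math.exp(-d * d / (2 * bandwidth ** 2))
    return [sum(K(math.dist(q, r)) for r in references) for q in queries]

Q = [(0.0, 0.0), (1.0, 1.0)]
R = [(0.1, 0.0), (2.0, 2.0), (1.0, 0.9)]
print(all_nearest_neighbors(Q, R))  # nearest reference for each query
print(all_kernel_sums(Q, R))
```

Both loops visit every (query, reference) pair, which is exactly the quadratic cost the tree-based algorithms below avoid.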
We consider algorithms that employ space-partitioning trees to improve the running time. In all the
problems considered here, the magnitude of the effect of any reference r on a query q is inversely proportional to the distance metric d(q, r). Therefore, the net effect on the query is dominated by references that are "close by". A space-partitioning tree divides the space containing the point
set in a hierarchical fashion, allowing for variable resolution to identify major contributing points
efficiently.
Single-Tree Algorithms. One approach for employing space-partitioning trees is to consider each
query point separately, i.e. to consider the all-query problem as many single-query problems. This
approach lends itself to single-tree algorithms, in which the references are stored in a tree, and the
tree is traversed once for each query. By considering the distance between the query and a collection
of references stored in a tree node, the effect of the references can be approximated or ignored if the
distances involved are large enough, with appropriate accuracy guarantees for some methods.
The kd-tree structure [2] was developed to obtain the nearest-neighbors of a given query in expected
logarithmic time and has also been used for efficient kernel summations [3, 4]. However, these
methods lack any guarantees on worst-case running time. A hierarchical data structure was also
developed for efficient combined potential calculation in computational physics in Barnes & Hut,
1986 [5]. This data structure provides an O(log N) bound on the potential computation for a single query, but has no error guarantees. Under their definition of intrinsic dimension, Karger & Ruhl [6] describe a randomized algorithm with O(log N) time per query for nearest neighbor search for low-intrinsic-dimensional data. Krauthgamer & Lee proved their navigating nets algorithm can compute a single-query nearest-neighbor in O(log N) time under a more robust notion of low intrinsic dimensionality. The cover tree data structure [1] improves over these two results by both guaranteeing a worst-case runtime for nearest-neighbor and providing efficient computation in practice relative to kd-trees. All of these data structures rely on the triangle inequality of the metric space containing R in order to prune references that have little effect on the query.
Dual-Tree Algorithms. The approach described above can be applied to every single query to improve the O(N^2) running time of all-query problems to O(N log N). A faster approach to all-query problems uses an algorithmic framework inspired by efficient particle simulation [7] and generalized to statistical machine learning [8], which takes advantage of spatial proximity in both Q and R by constructing a space-partitioning tree on each set. Both trees are descended, allowing the contribution from a distant reference node to be pruned for an entire node of query points. These dual-tree algorithms have been shown to be significantly more efficient in practice than the corresponding single-tree algorithms for nearest neighbor search and kernel summations [9, 10, 11]. Though conjectured to have O(N) growth, they lack rigorous, general runtime bounds.
All-query problems fall into two categories: monochromatic, where Q = R, and bichromatic, where Q is distinct from R. Most of the existing work has only addressed the monochromatic case. The fast multipole method (FMM) [7] for particle simulations, considered one of the breakthrough algorithms of the 20th century, has a non-rigorous runtime analysis based on the uniform distribution. An improvement to the FMM for the N-body problem was suggested by Aluru et al. [12], but it concerned the construction time of the tree and not the querying time. Methods based on the well-separated pair decomposition (WSPD) [13] have been proposed for the all-nearest-neighbors problem and particle simulations [14], but are inefficient in practice. These methods have O(N) runtime bounds for the monochromatic case, but it is not clear how to extend the analysis to a bichromatic problem. In addition to this difficulty, the WSPD-based particle simulation method is restricted to the (1/r)-kernel. In Beygelzimer et al., 2006 [1], the authors conjecture, but do not prove, that the cover tree data structure using a dual-tree algorithm can compute the monochromatic all-nearest-neighbors problem in O(N).
Our Contribution. In this paper, we prove O(N) runtime bounds for several important instances of the dual-tree algorithms for the first time using the cover tree data structure [1]. We prove the first worst-case bounds for any practical kernel summation algorithms. We also provide the first general runtime proofs for dual-tree algorithms on bichromatic problems. In particular, we give the first proofs of worst-case O(N) runtimes for the following all-query problems:
- All nearest-neighbors: For all queries q ∈ Q, find r*(q) ∈ R such that r*(q) = argmin_{r ∈ R} d(q, r).
- Kernel summations: For a given kernel function K(·), compute the kernel summation F(q) = Σ_{r ∈ R} K(d(q, r)) for all q ∈ Q.
- N-body potential calculation: Compute the net electrostatic or gravitational potential F(q) = Σ_{r ∈ R, r ≠ q} d(q, r)^{-1} at each q ∈ Q.
Outline. In the remainder of this paper, we give our linear running time proofs for dual-tree algorithms. In Section 2, we review the cover tree data structure and state the lemmas necessary for the remainder of the paper. In Section 3, we state the dual-tree all-nearest-neighbors algorithm and prove that it requires O(N) time. In Section 4, we state the absolute and relative error guarantees for kernel summations and again prove the linear running time of the proposed algorithms. In the same section, we apply the kernel summation result to the N-body simulation problem from computational physics, and we draw some conclusions in Section 5.
2 Cover Trees
A cover tree [1] T stores a data set R of size N in the form of a levelled tree. The structure has an O(N) space requirement and O(N log N) construction time. Each level is a "cover" for the level beneath it and is indexed by an integer scale i which decreases as the tree is descended. Let C_i denote the set of nodes at scale i. For all scales i, the following invariants hold:
- (nesting invariant) C_i ⊂ C_{i-1}
- (covering tree invariant) For every p ∈ C_{i-1}, there exists a q ∈ C_i satisfying d(p, q) ≤ 2^i, and exactly one such q is a parent of p.
- (separation invariant) For all p, q ∈ C_i, d(p, q) > 2^i.
Representations. The cover tree has two different representations: The implicit representation consists of infinitely many levels C_i, with the level C_∞ containing a single node which is the root and the level C_{-∞} containing every point in the dataset as a node. The explicit representation is required to store the tree in O(N) space. It coalesces all nodes in the tree for which the only child is the self-child. This implies that every explicit node either has a parent other than the self-parent or has a child other than the self-child.
Structural properties. The intrinsic dimensionality measure considered here is the expansion dimension from Karger & Ruhl, 2002 [6], defined as follows:
Definition 2.1. Let B_R(p, δ) = {r ∈ R : d(p, r) ≤ δ} denote a closed ball of radius δ around a p ∈ R. Then, the expansion constant of R is defined as the smallest c ≥ 2 such that |B_R(p, 2δ)| ≤ c |B_R(p, δ)| for all p ∈ R and all δ > 0. The intrinsic dimensionality (or expansion dimension) of R is given by dim_KR(R) = log c.
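To make Definition 2.1 concrete, the following sketch (our own illustration, not part of the paper) estimates the expansion constant of a finite point set by brute force, maximizing the doubling ratio |B(p, 2r)| / |B(p, r)| over a sampled grid of centers and radii:

```python
import math

def expansion_constant(points, radii):
    """Smallest c >= 2 bounding |B(p, 2r)| / |B(p, r)| over the sampled centers/radii."""
    def ball_size(p, r):
        return sum(1 for x in points if math.dist(p, x) <= r)
    c = 2.0  # Definition 2.1 requires c >= 2
    for p in points:
        for r in radii:
            inner = ball_size(p, r)
            if inner > 0:
                c = max(c, ball_size(p, 2 * r) / inner)
    return c

# Points on a line: doubling the radius roughly doubles a ball,
# so the estimate stays near 2 (expansion dimension log2(c) near 1).
line = [(float(i), 0.0) for i in range(32)]
c_hat = expansion_constant(line, radii=[1.0, 2.0, 4.0, 8.0])
print(c_hat, math.log2(c_hat))
```

For well-spread low-dimensional data the estimate stays small; clustered or high-dimensional data drive it up, which is exactly what makes the c-dependent bounds below meaningful.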
We make use of the following lemmas from Beygelzimer et al., 2006 [1] in our runtime proofs.
Lemma 2.1. (Width bound) The number of children of any node p is bounded by c^4.
Lemma 2.2. (Growth bound) For all p ∈ R and δ > 0, if there exists a point r ∈ R such that 2δ < d(p, r) ≤ 3δ, then |B(p, 4δ)| ≥ (1 + 1/c^2) |B(p, δ)|.
Lemma 2.3. (Depth bound) The maximum depth of any point p in the explicit representation is O(c^2 log N).
Single point search: Single tree nearest neighbor. Given a cover tree T built on a set R, the nearest neighbor of a query q can be found with the FindNN subroutine in Algorithm 1. The algorithm uses the triangle inequality to prune away portions of the tree that contain points distant from q. The following theorem provides a runtime bound for the single point search.
Theorem 2.1. (Query time) If the dataset R ∪ {q} has expansion constant c, the nearest neighbor of q can be found in time O(c^12 log N).
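The level-by-level descent behind FindNN can be sketched as follows. This is our own simplified stand-in, driven by explicit cover sets and a child map rather than the paper's tree structure; all names are ours.

```python
def find_nn(root, children, scales, q, dist):
    """Cover-tree style descent (FindNN): at each scale i, expand the candidate
    set to its children and keep only those within d(q, Q) + 2^i of the query."""
    candidates = {root}
    for i in scales:  # scales in decreasing order
        kids = {c for p in candidates for c in children.get((p, i), [p])}
        best = min(dist(q, r) for r in kids)
        candidates = {r for r in kids if dist(q, r) <= best + 2 ** i}
    return min(candidates, key=lambda r: dist(q, r))

# A tiny hand-built cover tree on the 1-D set {0, 6, 9, 11}:
# missing (node, scale) keys mean the node's only child is itself.
children = {(0, 3): [0, 6], (6, 2): [6, 9], (9, 1): [9, 11]}
dist = lambda a, b: abs(a - b)
print(find_nn(0, children, scales=[3, 2, 1, 0], q=10.4, dist=dist))  # -> 11
```

The pruning threshold d(q, Q) + 2^i is safe because any descendant of a scale-i node lies within 2^i of it, so a candidate outside the threshold cannot hide the true nearest neighbor.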
Batch Query: The dual tree algorithm for all-nearest-neighbor (FindAllNN subroutine in Algorithm 1) using cover trees is provided in Beygelzimer et.al., 2006 [15] as batch-nearest-neighbor.
3 Runtime Analysis of All-Nearest-Neighbors
In the bichromatic case, the performance of the FindAllNN algorithm (or any dual-tree algorithm)
will depend on the degree of difference between the query and reference sets. If the sets are nearly
identical, then the runtime will be close to the monochromatic case. If the inter-point distances in the
query set are very large relative to those between references, then the algorithm may have to descend
to the leaves of the query tree before making any descends in the reference tree. This case offers no
improvement over the performance of the single-tree algorithm applied to each query. In order to
quantify this difference in scale for our runtime analysis, we introduce the degree of bichromaticity:
Definition 3.1. Let Q and R be cover trees built on query set Q and reference set R respectively. Consider a dual-tree algorithm with the property that the scales of Q and R are kept as close as possible, i.e. the tree with the larger scale is always descended. Then, the degree of bichromaticity κ of the query-reference pair (Q, R) is the maximum number of descends in Q between any two descends in R.
In the monochromatic case, the trees are identical and the traversal alternates between them. Thus, the degree of bichromaticity is κ = 1. As the difference in scales of the two data sets increases, more descends in the query tree become necessary, giving a higher degree of bichromaticity. Using this definition, we can prove the main result of this section.
Theorem 3.1. Given a reference set R of size N and expansion constant c_r, a query set Q of size O(N) and expansion constant c_q, and bounded degree of bichromaticity κ of the (Q, R) pair, the FindAllNN subroutine of Algorithm 1 computes the nearest neighbor in R of each point in Q in O(c_r^{12} c_q^{4κ} N) time.
Proof. The computation at Line 3 is done for each of the query nodes at most once, hence takes O(N max_i |R_i|) computations.
The traversal of a reference node is duplicated over the set of queries only if the query tree is descended just before the reference tree descend. For every query descend, there would be at most O(c_q^4) duplications (width bound) for every reference node traversal. Since the number of query
Algorithm 1 Single tree and batch query algorithms for nearest neighbor search and approximate kernel summation. L(p) denotes the set of all leaves of the subtree rooted at p; F_p(q_j) is initialized to 0 for every query node before the first call.

FindNN(R-tree T, query q)
1: Initialize Q_∞ = C_∞.
2: for i = ∞ down to -∞ do
3:   Q = {Children(r) : r ∈ Q_i}
4:   Q_{i-1} = {r ∈ Q : d(q, r) ≤ d(q, Q) + 2^i}
5: end for
6: return argmin_{r ∈ Q_{-∞}} d(q, r)

FindAllNN(Q-subtree q_j, R-cover set R_i)
1: if i = -∞ then
2:   for each q ∈ L(q_j) do
3:     return argmin_{r ∈ R_{-∞}} d(q, r)
4: else if j < i then
5:   R = {Children(r) : r ∈ R_i}; R_{i-1} = {r ∈ R : d(q_j, r) ≤ d(q_j, R) + 2^i + 2^{j+2}}
6:   FindAllNN(q_j, R_{i-1})
7: else
8:   for each q_{j-1} ∈ Children(q_j) do
9:     FindAllNN(q_{j-1}, R_i)
10: end if

KernelSum(R-tree T, query q)
1: Initialize Q_∞ = C_∞, K̂(q) = 0.
2: for i = ∞ down to -∞ do
3:   Q = {Children(r) : r ∈ Q_i}
4:   Q_{i-1} = {r ∈ Q :
5:       K(d(q, r) - 2^i) - K(d(q, r) + 2^i) > ε}
6:   K̂(q) = K̂(q) + Σ_{r ∈ Q∖Q_{i-1}} K(d(q, r)) · |L(r)|
7: end for
8: return K̂(q) + Σ_{r ∈ Q_{-∞}} K(d(q, r))

AllKernelSum(Q-subtree q_j, R-cover set R_i)
1: if i = -∞ then
2:   for each q ∈ L(q_j) do
3:     K̂(q) = K̂(q) + Σ_{r ∈ R_{-∞}} K(d(q, r)) + F_p(q_j)
4:   end for
5:   F_p(q_j) = 0
6: else if j < i then
7:   R = {Children(r) : r ∈ R_i}
8:   R_{i-1} = {r ∈ R :
9:       K(d(q_j, r) - 2^i - 2^{j+1})
10:      - K(d(q_j, r) + 2^i + 2^{j+1})
11:      > ε}
12:  F_p(q_j) = F_p(q_j) + Σ_{r ∈ R∖R_{i-1}} K(d(q_j, r)) · |L(r)|
13:  AllKernelSum(q_j, R_{i-1})
14: else
15:  for each q_{j-1} ∈ Children(q_j) do
16:    F_p(q_{j-1}) = F_p(q_{j-1}) + F_p(q_j)
17:    AllKernelSum(q_{j-1}, R_i)
18:  end for; F_p(q_j) = 0
19: end if
descends between any two reference descends is upper bounded by κ and the number of explicit reference nodes is O(N), the total number of reference nodes considered in Line 5 in the whole algorithm is at most O(c_q^{4κ} N).
Since at any level of recursion, the size of R is bounded by c_r^4 max_i |R_i| (width bound), and the maximum depth of any point in the explicit tree is O(c_r^2 log N) (depth bound), the number of nodes encountered in Line 6 is O(c_r^{4+2} max_i |R_i| log N). Since the traversal down the query tree causes duplication, and the duplication of any reference node is upper bounded by c_q^{4κ}, Line 6 takes at most O(c_q^{4κ} c_r^6 max_i |R_i| log N) in the whole algorithm.
Line 9 is executed just once for each of the explicit nodes of the query tree and hence takes at most O(N) time.
Consider any R_{i-1} = {r ∈ R : d(q_j, r) ≤ d + 2^i + 2^{j+2}} where d = d(q_j, R). Given that R_{i-1} is the (i-1)th level of the reference tree, R_{i-1} = B(q_j, d + 2^i + 2^{j+2}) ∩ R ⊆ B(q_j, d + 2^i + 2^{j+2}) ∩ C_{i-1} ⊆ B(q_j, d + 2^i + 2^{i+1}) ∩ C_{i-1}, since j < i in this part of the recursion.
If d > 2^{i+2}, then |B(q_j, d + 2^i + 2^{i+1})| ≤ |B(q_j, 2d)| ≤ c_r^2 |B(q_j, d/2)|. Now d ≤ d(q_j, r) + 2^i for any r ∈ R_{i-1}, and since d > 2^{i+2}, we have d(q_j, r) > 2^{i+1}, making |B(q_j, d/2)| = |{q_j}| = 1. Hence |R_{i-1}| ≤ c_r^2.
If d ≤ 2^{i+2}, as in Beygelzimer et al. [1], the number of disjoint balls of radius 2^{i-2} that can be packed in B(q_j, d + 2^i + 2^{i+1}) is bounded as |B(q_j, d + 2^i + 2^{i+1} + 2^{i-2})| ≤ |B(r, 2(d + 2^i + 2^{i+1}) + 2^{i-2})| ≤ |B(r, 2^{i+3} + 2^{i+1} + 2^{i+2} + 2^{i-2})| ≤ |B(r, 2^{i+4})| ≤ c_r^6 |B(r, 2^{i-2})| for some r ∈ R_{i-1}. Any such ball B(r, 2^{i-2}) can contain at most one point in R_{i-1}, making |R_{i-1}| ≤ c_r^6.
Thus, the algorithm takes O(c_r^6 N + c_q^{4κ} N + c_r^{12} c_q^{4κ} log N + N), which is O(c_r^{12} c_q^{4κ} N).
Corollary 3.1. In the monochromatic case with a dataset R of size N having an expansion constant c, the FindAllNN subroutine of Algorithm 1 has a runtime bound of O(c^16 N).
Proof. In the monochromatic case, |Q| = |R| = N, c_q = c_r = c, and the degree of bichromaticity κ = 1 since the query and the reference tree are the same. Therefore, by Theorem 3.1, the result follows.
4 Runtime Analysis of Approximate Kernel Summations
For infinite tailed kernels K(·), the exact computation of kernel summations is infeasible without O(N^2) operations. Hence the goal is to efficiently approximate F(q) = Σ_{r ∈ R} K(d(q, r)), where K(·) is a monotonically decreasing non-negative kernel function. We employ the two widely used approximation schemes listed below:
Definition 4.1. An algorithm guarantees an ε absolute error bound if, for each exact value F(q_i) for q_i ∈ Q, it computes K̂(q_i) such that |K̂(q_i) - F(q_i)| ≤ N ε.
Definition 4.2. An algorithm guarantees an ε relative error bound if, for each exact value F(q_i) for q_i ∈ Q, it computes K̂(q_i) ∈ R such that |K̂(q_i) - F(q_i)| ≤ ε |F(q_i)|.
Approximate kernel summation is more computationally intensive than nearest neighbors because
pruning is not based on the distances alone but also on the analytical properties of the kernel
(i.e. smoothness and extent). Therefore, we require a more extensive runtime analysis, especially for
kernels with an infinite extent, such as the Gaussian kernel. We first prove logarithmic running time
for the single-query kernel sum problem under an absolute error bound and then show linear running
time for the dual-tree algorithm. We then extend this analysis to include relative error bounds.
4.1 Single Tree Approximate Kernel Summations Under Absolute Error
The algorithm for computing the approximate kernel summation under absolute error is shown in the KernelSum subroutine of Algorithm 1. The following theorem proves that KernelSum produces an approximation satisfying the ε absolute error.
Theorem 4.1. The KernelSum subroutine of Algorithm 1 outputs K̂(q) such that |K̂(q) - F(q)| ≤ N ε.
Proof. A subtree rooted at r ∈ R_{i-1} is pruned as per Line 5 of KernelSum since for all r' ∈ L(r), K(d(q, r) + 2^i) ≤ K(d(q, r')) ≤ K(d(q, r) - 2^i) and |K(d(q, r)) - K(d(q, r'))| ≤ ε. This amounts to limiting the error per each kernel evaluation to be less than ε (which also holds true for each contribution computed exactly for i = -∞), and by the triangle inequality the approximate kernel sum K̂(q) will be within N ε of the true kernel sum F(q).
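The pruning test in this proof has a direct computational reading: a scale-i node may absorb its whole subtree into the running sum whenever the kernel varies by at most ε across the node's 2^i ball. A sketch of that test (ours, with a Gaussian kernel as a stand-in for K):

```python
import math

def can_prune(K, d_qr, i, eps):
    """KernelSum pruning test: kernel variation across the node's 2^i ball <= eps."""
    lo = max(0.0, d_qr - 2 ** i)          # nearest possible descendant distance
    return K(lo) - K(d_qr + 2 ** i) <= eps

K = lambda d: math.exp(-d * d / 2)        # Gaussian kernel, bandwidth 1
print(can_prune(K, d_qr=5.0, i=-2, eps=1e-3))  # far node, tiny ball: prunable
print(can_prune(K, d_qr=0.5, i=0, eps=1e-3))   # near node, large ball: not prunable
```

When the test succeeds, the node contributes K(d(q, r)) · |L(r)| with per-point error at most ε, so summing over all N references gives exactly the N ε bound of Theorem 4.1.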
The following theorem proves the runtime of the single-query kernel summation with smooth and monotonically decreasing kernels using a cover tree.
Theorem 4.2. Given a reference set R of size N and expansion constant c, an error value ε, and a monotonically decreasing smooth non-negative kernel function K(·) concave for r ∈ [0, h] and convex for r ∈ (h, ∞) for some h > 0, the KernelSum subroutine of Algorithm 1 computes the kernel summation at a query q approximately up to ε absolute error with a runtime bound of O(c^{2(1+max{η-i₁+3, θ-i₁+4, 4})} log N), where η = ⌈log₂ K^{(-1)}(ε)⌉, θ = ⌈log₂ h⌉, i₁ = ⌈log₂(ε / (-K'(h)))⌉, and K'(·) is the derivative of K(·).
Proof. We assume that any argument of K(·) is lower bounded by 0. Now define the following sets:
R^c_{i-1} = {r ∈ R_{i-1} : d(q, r) ≤ h - 2^i}
R^h_{i-1} = {r ∈ R_{i-1} : h - 2^i < d(q, r) ≤ h + 2^i}
R^v_{i-1} = {r ∈ R_{i-1} : d(q, r) > h + 2^i}
such that R_{i-1} = R^c_{i-1} ∪ R^h_{i-1} ∪ R^v_{i-1}, and these sets are pairwise disjoint. For r ∈ R^c_{i-1}:
ε < K(max(0, d(q, r) - 2^i)) - K(d(q, r) + 2^i) ≤ (K(d(q, r) + 2^i) - 2^{i+1} K'(d(q, r) + 2^i)) - K(d(q, r) + 2^i) = -2^{i+1} K'(d(q, r) + 2^i)
because of the concavity of the kernel function K(·). Now,
K'^{(-1)}_{[0, h-2^i]}(-ε/2^{i+1}) - 2^i < d(q, r) ≤ h - 2^i
where K'^{(-1)}_{[a,b]}(·) is 1) the inverse function of K'(·), with 2) the output value restricted to be in the interval [a, b] for the given argument. For r ∈ R^h_{i-1},
ε < K(max(0, d(q, r) - 2^i)) - K(d(q, r) + 2^i) ≤ -2^{i+1} K'(h)
which implies that
i ≥ ⌈log₂(ε / (-K'(h)))⌉ - 1.
Similarly, for r ∈ R^v_{i-1}, ε < -2^{i+1} K'(d(q, r) - 2^i), implying
h + 2^i < d(q, r) < K'^{(-1)}_{(h+2^i, ∞)}(-ε/2^{i+1}) + 2^i.
Note that 0 ≤ -K'(d(q, r)) ≤ -K'(h) for d(q, r) > h + 2^i, which implies that -ε/2^{i+1} ≥ K'(h) and thus i ≥ ⌈log₂(ε / (-K'(h)))⌉ = i₁. Below the level i₁, R^h_{i-1} = R^v_{i-1} = ∅. In addition, below the level i₁ - 1, R^c_{i-1} = ∅.
Case 1: i > i₁. Trivially, for r ∈ R_{i-1}, K(d_max - 2^i) > ε where d_max = max_{r ∈ R_{i-1}} d(q, r). We can invert the kernel function to obtain d_max < K^{(-1)}(ε) + 2^i, which implies d(q, r) ≤ d_max < K^{(-1)}(ε) + 2^i. We can count up the number of balls of radius 2^{i-2} inside B(q, K^{(-1)}(ε) + 2^i + 2^{i-2}). Let η = ⌈log₂ K^{(-1)}(ε)⌉. Then,
max_i |R_{i-1}| ≤ |B(q, 2^{i+1}) ∩ C_{i-1}| ≤ c^3 for i < η,
max_i |R_{i-1}| ≤ |B(q, 2^η + 2^i + 2^{i-2}) ∩ C_{i-1}| ≤ |B(q, 2^{i+2}) ∩ C_{i-1}| ≤ c^4 for i = η,
max_i |R_{i-1}| ≤ |B(q, 2^{η+1}) ∩ C_{i-1}| ≤ c^{η-i+3} ≤ c^{η-i₁+3} for i > η.
Case 2: i = i₁ - 1. Let θ = ⌈log₂ h⌉. Similar to the case above, we count the number of balls of radius 2^{i-2} inside B(q, 2^θ + 2^i + 2^{i-2}):
max_i |R_{i-1}| ≤ |B(q, 2^{i+1}) ∩ C_{i-1}| ≤ c^3 for i < θ,
max_i |R_{i-1}| ≤ |B(q, 2^θ + 2^i + 2^{i-2}) ∩ C_{i-1}| ≤ |B(q, 2^{i+2}) ∩ C_{i-1}| ≤ c^4 for i = θ,
max_i |R_{i-1}| ≤ |B(q, 2^{i+1}) ∩ C_{i-1}| ≤ c^{θ-i+3} = c^{θ-i₁+4} for i > θ.
From the runtime proof of the single-tree nearest neighbor algorithm using cover trees in Beygelzimer et al., 2006, the running time is bounded by O((max_i |R_{i-1}|)^2 c^2 log N + (max_i |R_{i-1}|) c^4 log N) ≤ O(c^{2(1+max{η-i₁+3, θ-i₁+4, 4})} log N).
4.2 Dual Tree Approximate Kernel Summations Under Absolute Error
An algorithm for the computation of kernel sums for multiple queries is shown in the AllKernelSum subroutine of Algorithm 1, analogous to FindAllNN for batch nearest-neighbor query. The dual-tree version of the algorithm requires a stricter pruning rule to ensure correctness for all the queries in a query subtree. Additionally, every query node q_j has an associated O(1) storage F_p(q_j) that accumulates the postponed kernel contribution for all query points under the subtree q_j. The following theorem proves the correctness of the AllKernelSum subroutine of Algorithm 1.
Theorem 4.3. For all q in the query set Q, the AllKernelSum subroutine of Algorithm 1 computes approximations K̂(q) such that |K̂(q) - F(q)| ≤ N ε.
Proof. Line 9 of the algorithm guarantees that for all r ∈ R∖R_{i-1} at a given level i,
|K(d(q_j, r)) - K(d(q, r))| ≤ |K(d(q_j, r) - 2^i - 2^{j+1}) - K(d(q_j, r) + 2^i + 2^{j+1})| ≤ ε
for all q ∈ L(q_j). Basically, the minimum distance is decreased and the maximum distance is increased by 2^{j+1}, which denotes the maximum possible distance from q_j to any of its descendants. Trivially, contributions added in Line 3 (the base case) satisfy the ε absolute error for each kernel value, and the result follows by the triangle inequality.
Based on the runtime analysis of the batch nearest neighbor, the runtime bound of AllKernelSum is given by the following theorem:
Theorem 4.4. Let R be a reference set of size N and expansion constant c_r, and let Q be a query set of size O(N) and expansion constant c_q. Let the (Q, R) pair have a bounded degree of bichromaticity. Let K(·) be a monotonically-decreasing smooth non-negative kernel function that is concave for r ∈ [0, h] and convex for r ∈ (h, ∞) for some h > 0. Then, given an error tolerance ε, the AllKernelSum subroutine of Algorithm 1 computes an approximation K̂(q) for all q ∈ Q that satisfies the ε absolute error bound in time O(N).
Proof. We first bound max_i |R_{i-1}|. Note that in Line 9 to Line 13 of AllKernelSum, j + 1 ≤ i, and thus 2^i + 2^{j+1} ≤ 2^i + 2^i = 2^{i+1}. Similar to the proof for the single-tree case, we define:
R^c_{i-1} = {r ∈ R_{i-1} : d(q, r) ≤ h - 2^{i+1}}
R^h_{i-1} = {r ∈ R_{i-1} : h - 2^{i+1} < d(q, r) ≤ h + 2^{i+1}}
R^v_{i-1} = {r ∈ R_{i-1} : d(q, r) > h + 2^{i+1}}
such that R_{i-1} = R^c_{i-1} ∪ R^h_{i-1} ∪ R^v_{i-1}, and these sets are pairwise disjoint. From here, we can follow the techniques shown for the single-tree case to show that max_i |R_{i-1}| is a constant dependent on c_r. Therefore, the methodology of the runtime analysis of batch nearest neighbor gives the O(N) runtime for batch approximate kernel summation.
4.3 Approximations Under Relative Error
We now extend the analysis for absolute error bounds to cover approximations under the relative error criterion given in Definition 4.2.
Single-tree case. For a query point q, the goal is to compute K̂(q) satisfying Definition 4.2. An approximation algorithm for a relative error bound is similar to the KernelSum subroutine of Algorithm 1, except that the definition of R_{i-1} (i.e. the set of reference points that are not pruned at the given level i) needs to be changed to satisfy the relative error constraint as follows:
R_{i-1} = {r ∈ R : K(d(q, r) - 2^i) - K(d(q, r) + 2^i) > ε F(q)/N}
where F(q) is the unknown query sum. Hence, let d_max = max_{r ∈ R} d(q, r), and expand the set R_{i-1} to:
R_{i-1} ⊇ {r ∈ R : K(d(q, r) - 2^i) - K(d(q, r) + 2^i) > ε K(d_max)}   (1)
Note that d_max can be trivially upper bounded by d_max ≤ d(q, r_root) + 2^{s+1} = d_{max,s}, where s is the scale of the root of the reference cover tree in the explicit representation.
Theorem 4.5. Let the conditions of Thm. 4.2 hold. Then, the KernelSum subroutine of Algorithm 1 with Line 5 redefined as Eqn. 1 computes the kernel summation K̂(q) at a query q with ε relative error in O(log N) time.
Proof. A node r ∈ R_{i-1} can be pruned by the above pruning rule since for all r' ∈ L(r), K(d(q, r) + 2^i) ≤ K(d(q, r')) ≤ K(d(q, r) - 2^i) and |K(d(q, r)) - K(d(q, r'))| ≤ ε K(d_{max,s}). This amounts to limiting the error per each kernel evaluation to be less than ε K(d_{max,s}) (which also holds true for each contribution computed exactly for i = -∞), and by the triangle inequality the approximate kernel sum K̂(q) will be within N ε K(d_{max,s}) ≤ ε F(q) of the true kernel sum F(q). Since the relative error is an instance of the absolute error, the algorithm also runs in O(log N).
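In code, the only change from the absolute-error prune is the threshold: ε is replaced by ε · K(d_{max,s}), computed from a cheap upper bound on the distance to the farthest reference. A sketch (ours), reusing an illustrative Gaussian kernel in place of K:

```python
import math

K = lambda d: math.exp(-d * d / 2)  # illustrative Gaussian kernel

def rel_error_threshold(K, d_root, root_scale, eps):
    """eps * K(d_max_s): d(q, root) + 2^(s+1) bounds the farthest reference."""
    d_max_s = d_root + 2 ** (root_scale + 1)
    return eps * K(d_max_s)

def can_prune_rel(K, d_qr, i, thresh):
    lo = max(0.0, d_qr - 2 ** i)
    return K(lo) - K(d_qr + 2 ** i) <= thresh

t = rel_error_threshold(K, d_root=1.0, root_scale=0, eps=0.1)
# The same node fails the stricter test at a coarse scale but passes once
# the descent reaches a fine enough scale:
print(can_prune_rel(K, d_qr=2.5, i=-3, thresh=t),
      can_prune_rel(K, d_qr=2.5, i=-9, thresh=t))
```

Since each of the N per-point errors is at most ε K(d_{max,s}) ≤ ε F(q)/N, the total error stays within ε F(q), matching the proof above.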
Dual-tree case. In this case, for each query point q ∈ Q, an approximation K̂(q) is to be computed as per Definition 4.2. As in the absolute error case, we must satisfy a more difficult condition. Therefore, d_{max,s} is larger, taking into account both the maximum possible distance from the root of the query tree to its descendants and the maximum possible distance from the root of the reference tree to its descendants. Hence R_{i-1} is defined as follows:
R_{i-1} ⊇ {r ∈ R : K(d(q_j, r) - 2^i - 2^{j+1}) - K(d(q_j, r) + 2^i + 2^{j+1}) > ε K(d_{max,s})}   (2)
where d(q_root, r_root) + 2^{s_q+1} + 2^{s_r+1} = d_{max,s} and s_q, s_r are the scales of the roots of the query and reference cover trees respectively in the explicit representations. The correctness of the algorithm follows naturally from Theorems 4.4 and 4.5.
Corollary 4.1. Let the conditions of Thm. 4.4 hold. Then, given an error value ε, the AllKernelSum subroutine of Algorithm 1 with Line 11 redefined as Eq. 2 computes an approximate kernel summation K̂(q) for all q ∈ Q that satisfies an ε relative error bound with a runtime bound of O(N).
Note that for the single-tree and dual-tree algorithms under the relative error criterion, the pruning rules that generate R_{i-1} shown above are sub-optimal in practice, because they require every pairwise kernel value that is pruned to be within ε relative error. There is a more sophisticated way of accelerating this using an alternative method [9, 10, 11] that is preferable in practice.
4.4 N-body Simulation
N-body potential summation is an instance of the kernel summation problem that arises in computational physics and chemistry. These computations use the Coulombic kernel K(r) = 1/r, which describes gravitational and electrostatic interactions. This kernel is infinite at zero distance and has no inflection point (i.e. it is convex for r ∈ (0, ∞)). Nevertheless, it is possible to obtain the runtime behavior using the results shown in the previous sections. The single query problem F(q) = Σ_{r ∈ R, r ≠ q} 1/d(q, r) is considered first under the assumption that min_{r ∈ R, r ≠ q} d(q, r) > 0.
Corollary 4.2. Given a reference set R of size N and expansion constant c, an error value ε and the kernel K(d(q, r)) = 1/d(q, r), the KernelSum subroutine of Algorithm 1 computes the potential summation at a query q with ε error in O(log N) time.
Proof. Let r_min = min_{r ∈ R, r ≠ q} d(q, r). Let K_s(r) be the C^2 continuous construction [16] such that:
K_s(r) = 1/r for r ≥ r_min, and
K_s(r) = (1/r_min) (15/8 - (5/4)(r/r_min)^2 + (3/8)(r/r_min)^4) for r < r_min.
The effective kernel K_s(r) can be constructed in O(log N) time using the single-tree algorithm for nearest neighbor described in Beygelzimer et al., 2006 [1]. Note that the second derivative of the effective kernel is K_s''(r) = -5/(2 r_min^3) + 9r^2/(2 r_min^5) for r < r_min. Thus it is concave for r < (√5/3) r_min and convex otherwise, and the second derivative agrees with that of 1/r at r = r_min. Note that K_s(r) agrees with K(r) for r ≥ r_min. Hence, by considering r_min equivalent to the bandwidth h in Theorem 4.2 and applying the same theorem to the KernelSum subroutine of Algorithm 1 with the aforementioned kernel, we prove the O(log N) runtime bound.
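The C^2 softened kernel in this proof is straightforward to implement; the following sketch (ours) evaluates the piecewise definition and checks numerically that the value and slope splice onto 1/r at r = r_min:

```python
def coulomb_smooth(r, r_min):
    """C^2 softening of the Coulomb kernel K(r) = 1/r below r_min (after [16])."""
    if r >= r_min:
        return 1.0 / r
    u = r / r_min
    return (15.0 / 8 - (5.0 / 4) * u ** 2 + (3.0 / 8) * u ** 4) / r_min

r_min, h = 0.5, 1e-6
inside = coulomb_smooth(r_min - h, r_min)
outside = coulomb_smooth(r_min + h, r_min)
print(coulomb_smooth(r_min, r_min), 1.0 / r_min)  # value matches at the splice
print((outside - inside) / (2 * h))               # slope near -1/r_min^2 = -4.0
print(coulomb_smooth(0.0, r_min))                 # finite at r = 0, unlike 1/r
```

Because the splice is C^2, the smoothed kernel satisfies the concave-then-convex hypothesis of Theorem 4.2 while agreeing with 1/r everywhere the true kernel is actually evaluated.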
The runtime analysis for the batch case of the algorithm follows naturally.
Corollary 4.3. Given a reference set R of size N and expansion constant c_r, a query set Q of size O(N) and expansion constant c_q with a bounded degree of bichromaticity for the (Q, R) pair, an error value ε and the kernel K(d(q, r)) = 1/d(q, r), the AllKernelSum subroutine of Algorithm 1 approximates the potential summation for all q ∈ Q up to ε error with a runtime bound of O(N).
Proof. The same effective kernel as Corollary 4.2 is used, except that r_min = min_{q ∈ Q} min_{r ∈ R, r ≠ q} d(q, r). The result follows from applying Theorem 4.4, and noting that running the dual-tree computation with K(d(q, r)) = 1/d(q, r) is equivalent to running the algorithm with K_s(d(q, r)).
5 Conclusions
Extensive work has attempted to reduce the quadratic scaling of the all-query problems in statistical machine learning. So far, the improvements in runtimes have only been empirical, with no rigorous runtime bounds [2, 8, 9, 17, 18]. Previous work has provided algorithms with rough linear runtime arguments for certain instances of these problems [14, 5, 13], but these results only apply to the monochromatic case. In this paper, we extend the existing work [6, 1, 19, 20] to provide algorithms for two important instances of the all-query problem (namely all-nearest-neighbors and all-kernel-summation) and obtain for the first time a linear runtime bound for dual-tree algorithms for the more general bichromatic case of the all-query problems.
These results provide an answer to the long-standing question of the level of improvement possible over the quadratic scaling of the all-query problems. The techniques used here finally point the way to analyzing a host of other tree-based algorithms used in machine learning, including those that involve n-tuples, such as the n-point correlation (which naïvely requires O(N^n) computations).
References
[1] A. Beygelzimer, S. Kakade, and J. C. Langford. Cover Trees for Nearest Neighbor. Proceedings of the 23rd International Conference on Machine Learning, pages 97-104, 2006.
[2] J. H. Friedman, J. L. Bentley, and R. A. Finkel. An Algorithm for Finding Best Matches in Logarithmic Expected Time. ACM Trans. Math. Softw., 3(3):209-226, September 1977.
[3] K. Deng and A. W. Moore. Multiresolution Instance-Based Learning. pages 1233-1242.
[4] D. Lee and A. G. Gray. Faster Gaussian Summation: Theory and Experiment. In Proceedings of the Twenty-second Conference on Uncertainty in Artificial Intelligence, 2006.
[5] J. Barnes and P. Hut. A Hierarchical O(N log N) Force-Calculation Algorithm. Nature, 324, 1986.
[6] D. R. Karger and M. Ruhl. Finding Nearest Neighbors in Growth-Restricted Metrics. Proceedings of the Thirty-Fourth Annual ACM Symposium on Theory of Computing, pages 741-750, 2002.
[7] L. Greengard and V. Rokhlin. A Fast Algorithm for Particle Simulations. Journal of Computational Physics, 73:325-348, 1987.
[8] A. G. Gray and A. W. Moore. 'N-Body' Problems in Statistical Learning. In NIPS, volume 4, pages 521-527, 2000.
[9] A. G. Gray and A. W. Moore. Nonparametric Density Estimation: Toward Computational Tractability. In SIAM International Conference on Data Mining, 2003.
[10] D. Lee, A. G. Gray, and A. W. Moore. Dual-Tree Fast Gauss Transforms. In Y. Weiss, B. Schölkopf, and J. Platt, editors, Advances in Neural Information Processing Systems 18, pages 747-754. MIT Press, Cambridge, MA, 2006.
[11] D. Lee and A. G. Gray. Fast High-dimensional Kernel Summations Using the Monte Carlo Multipole Method. In Advances in Neural Information Processing Systems 21, 2009.
[12] S. Aluru, G. M. Prabhu, and J. Gustafson. Truly Distribution-independent Algorithms for the N-body Problem. In Proceedings of the 1994 Conference on Supercomputing, pages 420-428. IEEE Computer Society Press, Los Alamitos, CA, USA, 1994.
[13] P. B. Callahan. Dealing with Higher Dimensions: the Well-Separated Pair Decomposition and its Applications. PhD thesis, Johns Hopkins University, Baltimore, Maryland, 1995.
[14] P. B. Callahan and S. R. Kosaraju. A Decomposition of Multidimensional Point Sets with Applications to k-Nearest-Neighbors and n-body Potential Fields. Journal of the ACM, 42(1):67-90, January 1995.
[15] A. Beygelzimer, S. Kakade, and J. C. Langford. Cover Trees for Nearest Neighbor. 2006. http://hunch.net/~jl/projects/cover_tree/paper/paper.ps.
[16] R. D. Skeel, I. Tezcan, and D. J. Hardy. Multiple Grid Methods for Classical Molecular Dynamics. Journal of Computational Chemistry, 23(6):673-684, 2002.
[17] A. G. Gray and A. W. Moore. Rapid Evaluation of Multiple Density Models. In Artificial Intelligence and Statistics 2003, 2003.
[18] A. G. Gray and A. W. Moore. Very Fast Multivariate Kernel Density Estimation via Computational Geometry. In Joint Statistical Meeting 2003, 2003. To be submitted to JASA.
[19] R. Krauthgamer and J. R. Lee. Navigating Nets: Simple Algorithms for Proximity Search. 15th Annual ACM-SIAM Symposium on Discrete Algorithms, pages 791-801, 2004.
[20] K. Clarkson. Fast Algorithms for the All Nearest Neighbors Problem. In Proceedings of the Twenty-fourth Annual IEEE Symposium on the Foundations of Computer Science, pages 226-232, 1983.
| 3796 |@word version:1 simulation:8 decomposition:3 karger:3 hardy:1 existing:2 beygelzimer:8 must:1 john:1 distant:2 alone:1 implying:1 leaf:2 intelligence:2 ruhl:3 provides:2 math:1 node:19 constructed:1 become:1 symposium:3 descendant:3 prove:8 consists:1 inside:2 introduce:1 pairwise:7 inter:1 expected:2 rapid:1 behavior:1 fmm:2 inspired:1 decreasing:4 little:1 considering:2 provided:2 project:1 bounded:11 developed:2 finding:3 guarantee:7 every:8 multidimensional:1 concave:3 growth:3 runtime:29 exactly:3 stricter:1 preferable:1 platt:1 partitioning:4 appear:1 before:2 engineering:1 accumulates:1 analyzing:1 approximately:1 practical:2 practice:5 ker:1 descended:4 empirical:1 significantly:1 ga:1 close:3 storage:1 applying:2 equivalent:2 convex:4 resolution:1 rule:3 nesting:1 century:1 notion:1 analogous:1 limiting:2 construction:3 exact:3 us:2 hunch:1 expensive:1 approximated:1 satisfying:3 solved:1 worst:5 descend:3 decrease:1 traversal:4 dynamic:1 depend:1 triangle:4 joint:1 separated:1 distinct:1 fast:6 describe:1 effective:3 monte:1 query:69 artificial:2 larger:2 widely:1 otherwise:1 triangular:1 statistic:1 itself:1 advantage:1 net:5 analytical:1 interaction:2 remainder:2 beneath:1 multiresolution:1 olkopf:1 los:1 parent:3 requirement:1 p:1 produce:1 guaranteeing:1 nearest:31 eq:1 descends:6 implies:4 quantify:1 radius:4 require:3 summation:27 traversed:1 gravitational:2 proximity:2 hut:2 considered:5 hold:5 around:1 algorithmic:1 major:1 sought:1 smallest:1 estimation:4 agrees:2 correctness:3 rough:1 mit:1 always:1 gaussian:2 finkel:1 gatech:1 corollary:5 improvement:4 rigorous:3 inflection:1 dependent:1 entire:1 expand:1 subroutine:17 arg:3 classification:1 dual:18 aforementioned:1 spatial:1 breakthrough:1 initialize:3 field:1 once:3 having:1 runtimes:3 identical:2 softw:1 nearly:1 employ:2 parikshit:1 geometry:1 william:1 atlanta:1 mining:1 evaluation:3 truly:1 wellseparated:1 necessary:2 vely:1 tree:83 indexed:1 divide:1 instance:6 increased:1 
cover:22 tractability:1 uniform:1 too:1 stored:2 dongryeol:1 answer:1 scanning:1 combined:2 density:5 fundamental:1 randomized:1 international:2 siam:2 standing:1 lee:6 physic:4 hopkins:1 na:1 again:1 thesis:1 containing:4 coalesces:1 inefficient:1 derivative:3 return:3 account:1 potential:8 chemistry:2 includes:1 satisfy:3 bichromatic:6 root:5 closed:1 portion:1 contribution:6 accuracy:1 efficiently:2 identify:1 basically:1 carlo:1 cc:2 submitted:1 definition:10 involved:1 naturally:2 proof:15 associated:1 proved:1 dataset:3 duplicated:1 dimensionality:3 improves:1 sophisticated:1 higher:2 follow:1 methodology:1 wei:1 done:1 though:1 just:2 implicit:1 correlation:1 langford:2 eqn:1 lack:2 gray:8 scientific:1 bentley:1 usa:1 effect:4 contain:2 true:4 hence:7 moore:6 self:3 width:3 covering:1 rooted:1 criterion:2 generalized:1 outline:1 volume:1 jl:1 extend:4 approximates:1 cambridge:1 smoothness:1 rd:1 trivially:3 grid:1 similarly:1 particle:5 electrostatic:2 base:1 multivariate:1 conjectured:1 store:2 certain:1 inequality:5 kosaraju:1 meeting:1 postponed:1 minimum:1 deng:1 prune:2 monotonically:4 multiple:3 smooth:3 faster:2 match:1 calculation:3 offer:1 long:1 host:1 molecular:1 metric:3 kernel:58 invert:1 addition:3 separately:1 addressed:1 interval:1 else:4 decreased:1 baltimore:1 sch:1 duplication:3 monochromatic:9 integer:1 structural:1 noting:1 gustafson:1 enough:1 bandwidth:1 reduce:2 regarding:1 intensive:1 bottleneck:1 accelerating:1 clarkson:1 wspd:2 cause:1 ignored:1 clear:1 involve:2 listed:1 amount:2 nonparametric:1 transforms:1 nel:1 category:1 generate:1 http:1 disjoint:3 per:5 discrete:1 key:1 nevertheless:1 kept:1 ram:2 sum:7 run:1 inverse:1 uncertainty:1 fourth:2 separation:1 draw:1 scaling:2 bound:29 quadratic:3 encountered:1 barnes:2 annual:3 constraint:1 callahan:2 dominated:1 argument:3 min:7 pruned:5 conjecture:1 alternate:1 march:2 ball:5 describes:1 kakade:2 making:3 restricted:3 invariant:4 computationally:1 count:2 needed:1 end:7 
operation:1 greengard:1 apply:2 hierarchical:3 away:1 appropriate:1 batch:8 alternative:1 denotes:1 multipole:2 running:12 include:1 krauthgamer:2 ensure:1 log2:7 freidman:1 giving:1 especially:1 prof:3 approximating:1 society:1 classical:1 added:1 dongryel:1 question:1 alamitos:1 modestly:1 september:1 navigating:2 lends:1 distance:14 maryland:1 manifold:2 extent:2 prabhu:1 toward:1 providing:1 difficult:1 executed:1 coulombic:1 negative:3 packed:1 redefined:2 unknown:1 twenty:2 allowing:2 upper:3 datasets:1 january:1 thm:2 pair:6 required:1 namely:1 extensive:2 nip:1 trans:1 suggested:1 below:3 built:2 including:2 max:16 difficulty:1 rely:1 force:1 recursion:2 scheme:1 improve:2 technology:1 inversely:1 nearestneighbors:1 review:1 contributing:1 relative:14 proportional:1 querying:1 foundation:1 degree:9 jasa:1 editor:1 changed:1 infeasible:1 institute:1 neighbor:31 fall:1 taking:1 absolute:13 tolerance:1 dimension:4 depth:4 skeel:1 computes:9 concavity:1 author:1 collection:1 supercomputing:1 employing:1 far:1 approximate:11 pruning:4 dealing:1 tuples:1 search:6 continuous:1 tailed:1 additionally:1 nature:1 robust:1 ca:1 expansion:13 agray:1 constructing:1 main:1 thiry:1 whole:2 child:5 body:9 georgia:1 fashion:1 slow:1 sub:1 explicit:8 theorem:17 down:1 intrinsic:4 exists:2 niques:1 phd:1 magnitude:1 subtree:6 logarithmic:3 infinitely:1 satisfies:2 acm:4 ma:1 sized:1 goal:2 infinite:3 except:2 lemma:5 total:2 gauss:1 attempted:1 rokhlin:1 arises:1 alexander:1 |
Exploring Functional Connectivity of the Human Brain using Multivariate Information Analysis
Barry Chai [1,*]
Dirk B. Walther [2,*]
Diane M. Beck [2,3,†]
Li Fei-Fei [1,†]
[1] Computer Science Department, Stanford University, Stanford, CA 94305
[2] Beckman Institute, University of Illinois at Urbana-Champaign, Urbana, IL 61801
[3] Psychology Department, University of Illinois at Urbana-Champaign, Champaign, IL 61820
{bwchai,feifeili}@cs.stanford.edu {walther,dmbeck}@illinois.edu
Abstract
In this study, we present a new method for establishing fMRI pattern-based
functional connectivity between brain regions by estimating their multivariate
mutual information. Recent advances in the numerical approximation of high-dimensional probability distributions allow us to successfully estimate mutual
information from scarce fMRI data. We also show that selecting voxels based
on the multivariate mutual information of local activity patterns with respect to
ground truth labels leads to higher decoding accuracy than established voxel selection methods. We validate our approach with a 6-way scene categorization fMRI
experiment. Multivariate information analysis is able to find strong information
sharing between PPA and RSC, consistent with existing neuroscience studies on
scenes. Furthermore, an exploratory whole-brain analysis uncovered other brain
regions that share information with the PPA-RSC scene network.
1 Introduction
To understand how the brain represents and processes information we must account for two complementary properties: information is represented in a distributed fashion, and brain regions are
strongly interconnected. Although heralded as a tool to address these issues, functional magnetic
resonance imaging (fMRI) initially fell short of achieving these goals because of limitations of traditional analysis methods, which treat voxels as independent. Multi-voxel pattern analysis (MVPA)
has revolutionized fMRI analysis by accounting for distributed patterns of activity rather than absolute activation levels. The analysis of functional connectivity, however, is so far mostly limited to
comparing the time courses of individual voxels. To overcome these limitations we demonstrate a
new method of pattern-based functional connectivity analysis based on mutual information of sets
of voxels. Furthermore, we show that selecting voxels based on the mutual information of local
activity with respect to ground truth outperforms other voxel selection methods.
We apply our new analysis methods to the decoding of natural scene categories from the human
brain. Human observers are able to quickly and efficiently perceive the content of natural scenes [15,
26]. It was recently shown by [23] that activity patterns in the parahippocampal place area (PPA), the
retrosplenial cortex (RSC), the lateral occipital complex (LOC), and, to some degree, primary visual
cortex (V1) contain information about the categories of natural scenes. To truly understand how
the brain categorizes natural scenes, however, it is necessary to grasp the interactions between these
regions of interest (ROIs). Our new technique for pattern-based functional connectivity enables
us to uncover shared scene category-specific information among the ROIs. When configured for
exploratory whole-brain analysis, the technique allows us to discover other brain regions that may
be involved in natural scene categorization.
Mutual information is appropriate for fMRI analysis if one considers fMRI data as a noisy communication channel in the sense of Shannon's information theory [19]; the information contained
[*] Barry Chai and Dirk B. Walther contributed equally to this work.
[†] Diane M. Beck and Li Fei-Fei contributed equally to this work.
in a population of neurons must be communicated through hemodynamic changes and concomitant
changes in magnetization which can be measured as the blood-oxygen level dependent (BOLD)
fMRI signal, then proceed through several layers of data processing, culminating in a single time
varying value in a particular voxel. While this noisy communication concept has been embraced
by the brain machine interface community [25], information theory has, thus far, been less utilized
in the fMRI analysis community (see [8] for exceptions). This may be partly due to the numerical
difficulties in estimating the probability distributions necessary for computing mutual information.
This problem is exacerbated when patterns of voxels are considered. In this case distributions of
higher dimensionality need to be estimated from precious few data points. Recent developments
in information theory, however, help us overcome these hurdles.
In Section 2 we review these theoretical advances and adapt them for our dual purpose of voxel
selection and pattern-based functional connectivity analysis. Following a discussion of related work
in Section 3, in Section 4 we apply our new methods to fMRI data from an experiment on distinguishing natural scene categories in the human brain. We conclude the paper in Section 5.
2 Multivariate mutual information for fMRI data
Information theory was originally formulated for discrete variables. In order to adapt the theory to
continuous random variables, the underlying probability distribution needs to be estimated from the
sampled data points. Previous work such as [7, 18] have used fixed bin-size histogram or Parzen
window methods for this purpose. However, these methods do not generalize to high-dimensional
data. Recently, Perez-Cruz has shown that a k-nearest-neighbor (kNN) approach to estimating information theoretic measures converges to the true information theoretic measures asymptotically with
finite k, even in higher dimensional spaces [16]. In this section we adapt this strategy to estimate
multi-voxel mutual information.
2.1 Nearest-neighbor mutual information estimate
In information theory, the randomness of a probability distribution is measured by its entropy. For a
discrete random variable x, entropy can be calculated as
H(x) = -\sum_{i=1}^{n} p(x_i) \log p(x_i). \qquad (1)
Mutual information is intuitively defined as the reduction of the entropy of the random variable x by
the entropy of x after y is known:
I(x, y) = H(x) - H(x|y). \qquad (2)
The separation into entropies allows us to calculate mutual information for multivariate data. Random variables x, y can be of arbitrary dimensions.
As shown in [24], using kNN estimation, entropies and conditional entropies can be defined as
H(x) = -\frac{1}{n} \sum_{i=1}^{n} \log p_k(x_i), \qquad (3)

H(x|y) = -\frac{1}{n} \sum_{i=1}^{n} \log \frac{p_k(x_i, y_i)}{p_k(y_i)}, \qquad (4)
where the summation is over the n data points, each represented by x_i, and p_k(x_i) is the kNN density estimated at x_i. p_k(x_i) is defined as
p_k(x_i) = \frac{k}{n-1} \cdot \frac{\Gamma(d/2 + 1)}{\pi^{d/2}} \cdot \frac{1}{r_k(x_i)^d}. \qquad (5)
where Γ is the gamma function, d is the dimensionality of x_i, and r_k(x_i) is the Euclidean distance from x_i to the k-th nearest training point. p_k(x_i) is the probability density function at x_i, where x_i is a set of voxel values for a given category task (or label) in the context of our fMRI experiment.
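As a concrete sketch, the estimator of Eqs. (3) and (5) can be written in a few lines. This is our own minimal NumPy implementation of the formulas above, not the authors' code; natural logarithms are assumed, and the helper names (`knn_density`, `knn_entropy`) are ours.

```python
import numpy as np
from math import lgamma, log, pi

def knn_density(data, k):
    """p_k(x_i) of Eq. (5) for every sample x_i; data is an (n, d) array."""
    n, d = data.shape
    # Pairwise Euclidean distances; after sorting each row, index 0 is the
    # point itself, so index k is r_k(x_i), the distance to the k-th
    # nearest *other* sample.
    dists = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    r_k = np.sort(dists, axis=1)[:, k]
    # Log volume of the d-dimensional unit ball: pi^(d/2) / Gamma(d/2 + 1).
    log_ball = (d / 2) * log(pi) - lgamma(d / 2 + 1)
    return np.exp(log(k) - log(n - 1) - log_ball - d * np.log(r_k))

def knn_entropy(data, k):
    """H(x) = -(1/n) sum_i log p_k(x_i), Eq. (3)."""
    return float(-np.mean(np.log(knn_density(data, k))))
```

For a 1-D standard normal, the estimate should approach the analytic differential entropy 0.5 log(2*pi*e), roughly 1.42 nats, as n grows.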
2.2 fMRI multivariate information analysis
In previous work, such as [7], information theory has been used as a measure for functional connectivity of one voxel to another voxel. While such analysis is valuable for exploring connections in the
brain, it does not fully leverage the information stored in the local pattern of voxels. In this section
we propose a framework for multivariate information analysis of fMRI data for dual purposes: voxel
selection and functional connectivity.
2.2.1 Voxel selection based on mutual information with respect to ground truth label
For voxel selection we are interested in finding a subset of voxels that are highly informative for
discriminating between the ground truth labels in the experiment. This is a useful step that serves two
purposes. From a machine learning perspective, reducing the dimensionality of the brain image data
can boost classifier performance and reduce classifier variance. From a neuroscience perspective,
the locations of highly informative voxels identify functional regions involved in the experiment. To
achieve both of these goals we use a multivariate mutual information measure to analyze a localized
pattern of M voxels. This local analysis window is moved across the brain image. At each location
we estimate the mutual information shared between the pattern of M voxels and the experiment label.
In our experiments we choose M = 7 to evaluate the smallest symmetrical pattern around a center
voxel, which consists of the center voxel and its 6 face-connected neighbors. Mutual information
between voxels V and labels L is defined as
I(V, L) = H(V) + H(L) - H(V, L). \qquad (6)
Using equation 1 the entropies can be calculated by
I(V, L) = -\frac{1}{n} \sum_{i=1}^{n} \log p_k(V_i) - \frac{1}{n} \sum_{i=1}^{n} \log p_k(L_i) + \frac{1}{n} \sum_{i=1}^{n} \log p_k(V_i, L_i), \qquad (7)
where n is the number of data points observed, L_i is the experiment label for the i-th data point, and V_i is a 7-dimensional random variable, V_i = (v_{i1}, v_{i2}, v_{i3}, v_{i4}, v_{i5}, v_{i6}, v_{i7}), with each entry corresponding to one of the 7 voxels' values at data point i. Equation 7 can be used to compute the mutual information of a localized set of voxels V_i with respect to their ground truth label L_i. We can then perform voxel selection by selecting the locations of highest mutual information. This is useful as a preprocessing step before applying any machine learning algorithms, as well as a way to spatially map out the informative voxels with respect to the task.
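To make the selection score concrete, the following sketch computes I(V, L) for one 7-voxel pattern. Since the label L is discrete, we use the equivalent form I(V, L) = H(V) - H(V|L), with H(V|L) estimated as the class-weighted average of within-class kNN entropies; this decomposition and the function names are our own simplifications, not the paper's code.

```python
import numpy as np
from math import lgamma, log, pi

def knn_entropy(data, k):
    """Eq. (3) with the kNN density of Eq. (5); data is an (n, d) array."""
    n, d = data.shape
    dists = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    r_k = np.sort(dists, axis=1)[:, k]  # distance to k-th nearest other point
    log_ball = (d / 2) * log(pi) - lgamma(d / 2 + 1)
    log_p = log(k) - log(n - 1) - log_ball - d * np.log(r_k)
    return float(-np.mean(log_p))

def pattern_label_mi(V, labels, k=10):
    """I(V, L) = H(V) - H(V|L) for an (n, 7) pattern V and discrete labels.
    H(V|L) is the class-frequency-weighted average of within-class entropies."""
    h_cond = sum((labels == c).mean() * knn_entropy(V[labels == c], k)
                 for c in np.unique(labels))
    return knn_entropy(V, k) - h_cond
```

Scanning this score over all 7-voxel neighborhoods and keeping the top-scoring locations would reproduce the selection step described above.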
2.2.2 Functional connectivity by shared information between distributed voxel patterns
Two distributed brain regions can be modeled as a communication channel. Measuring the mutual
information across the two regions provides an intuitive measure for their functional connectivity.
The voxel values observed in each region can be regarded as observed data from an underlying probability distribution: the distribution that characterizes the functional region under the experiment
condition.
Previous approaches have analyzed shared information in a univariate way, computing the mutual
information between two voxels. However such univariate information analysis disregards the information stored in the local patterns of voxels. In this work we present a multivariate information
analysis that estimates shared information between two sets of voxels that leverages the information
stored in the local patterns:
I(V, S|L) = H(V|L) + H(S|L) - H(V, S|L), \qquad (8)
where V and S are random variables for sets of 7 voxels. L is the experiment label. Using equations
3 and 4 this can be written as
I(V, S|L) = -\frac{1}{n} \sum_{i=1}^{n} \log \frac{p_k(V_i, L_i)}{p_k(L_i)} - \frac{1}{n} \sum_{i=1}^{n} \log \frac{p_k(S_i, L_i)}{p_k(L_i)} + \frac{1}{n} \sum_{i=1}^{n} \log \frac{p_k(V_i, S_i, L_i)}{p_k(L_i)}. \qquad (9)
Equation 9 allows us to measure the functional connectivity between two distributed sets of voxels
V and S by computing the mutual information between the two sets of voxels conditioned on the
experiment task label L. We show in our experiments (Sec. 4.4) that by using this measurement, our
algorithm can uncover meaningful functional connectivity
patterns among regions of the brain.
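A matching sketch of the connectivity score of Eq. (9): with a discrete label, I(V, S|L) reduces to a class-weighted sum of within-class mutual informations between the two patterns. As before, this is our own minimal reading of the estimator, not the authors' implementation, and the names are ours.

```python
import numpy as np
from math import lgamma, log, pi

def knn_entropy(data, k):
    """Eq. (3) with the kNN density of Eq. (5); data is an (n, d) array."""
    n, d = data.shape
    dists = np.linalg.norm(data[:, None, :] - data[None, :, :], axis=-1)
    r_k = np.sort(dists, axis=1)[:, k]
    log_ball = (d / 2) * log(pi) - lgamma(d / 2 + 1)
    return float(-np.mean(log(k) - log(n - 1) - log_ball - d * np.log(r_k)))

def conditional_pattern_mi(V, S, labels, k=10):
    """I(V, S|L) = H(V|L) + H(S|L) - H(V, S|L), Eq. (8), with the discrete
    label handled by averaging within-class kNN entropies."""
    cmi = 0.0
    for c in np.unique(labels):
        m = labels == c
        cmi += m.mean() * (knn_entropy(V[m], k) + knn_entropy(S[m], k)
                           - knn_entropy(np.hstack([V[m], S[m]]), k))
    return cmi
```

Two dependent patterns (e.g. one a noisy copy of the other) should score much higher than two independent ones, regardless of the label structure.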
Figure 1: Comparison of decoding accuracy3 between MI voxel selection and other standard voxel selection methods (see Section 4.2). The single-voxel MI approach surpasses most discr.1 voxel selection but performs on par with most active2 voxel selection. Using a pattern of 7 voxels, the MI7D approach achieves the highest decoding accuracy. At 600 voxels, MI7D decoding accuracy is significantly higher than most active with p-value < 0.05. At 1250 voxels, MI7D decoding accuracy is significantly higher than MI1D with p-value < 0.01. (This figure must be viewed in color)
3 Related work
Statistical relationships between different parts of the brain, referred to as functional connectivity,
have been computed with a number of different methods. The methods can be broadly classified as
either data-driven or model-based [10].
In data-driven approaches, no specific hypothesis of connectivity is used, but large networks of
brain regions are discovered based purely on the data. Most commonly, this is achieved with a
dimensionality-reduction procedure such as principal component analysis (PCA) or independent
component analysis (ICA). Originally applied to the analysis of PET data [5], PCA has also been
applied to fMRI data (see [12]). ICA has gained interest for the investigation of the so-called
default network in the brain at rest [11].
Model-based approaches test a prior hypothesis about the statistical relations between a seed voxel
and a target voxel. By fixing the seed voxel and moving the target voxel all over the brain, a
connectivity map with respect to the seed voxel can be generated. The statistical relation of the
two voxels is usually modeled assuming temporal dependence between voxels in methods such as:
cross-correlation [2], coherence [21], Granger causality [1], or transfer entropy [20].
These methods compare the time courses of individual voxels. Following the same principal idea,
we model functional connectivity based on the mutual information between sets of seed and target
voxels to leverage the spatial information contained in activity patterns among voxels rather than
the temporal information between two voxels. fMRI has a higher spatial resolution than temporal resolution. We design our mutual information connectivity measure to exploit this property of
fMRI data. Yao et al. [26] have also explored pattern-based functional connectivity by modeling
the interactions between distributed sets of voxels with a generative model. We take a simpler approach by using only the multivariate information measure which allows us to explore for unknown
connections in the whole brain in a searchlight manner.
In recent years it has become apparent that patterns of fMRI activity hold more detailed information
about experimental conditions than the activation levels of individual voxels [6]. It is therefore
1 Most discr.: Most discriminative voxels are those showing the largest difference in activity between any pair of scene categories.
2 Most active: Most active voxels are those showing the largest difference in activity between the fixation condition and viewing images of any category.
Figure 2: Locations of voxels with high 7D mutual information with respect to scene category label. The
known functional areas that respond to scenes and visual stimuli such as PPA, RSC, V1 are all selected, which
also explains the high decoding accuracy using the selected voxels. The brain maps shown above are based on
group analysis over 5 subjects superimposed on an MNI standard brain. (This figure must be viewed in color)
cogent to also consider the information contained in voxel patterns for the analysis of functional
connectivity. We achieve this by computing the mutual information of a pattern of locally connected
voxels at the seed location with a pattern at the target location. As with the univariate functional
connectivity analysis, this multivariate version also allows us to test hypotheses about connectivity
of brain regions as well as generate connectivity maps.
Because of the large number of voxels in the brain (many thousands, depending on resolution),
multivariate techniques usually require some kind of feature selection or dimensionality reduction.
This can be achieved by focusing on pre-defined ROIs, or by selecting voxels form the brain based
on some statistical criteria [3, 14]. Here we show that using mutual information of individual voxels
with respect to ground truth for voxel selection works at the same level as these previous methods,
but that mutual information of patterns of voxels with respect to ground truth outperforms all of the
univariate methods we tested.
Information theory has been applied to fMRI data in the context of brain machine interfaces [25],
to generate activation maps [8], for effective connectivity in patients [7], and for image registration
[17]. However, to our knowledge this is the first application to both voxel selection and functional
connectivity based on multivariate activity patterns.
4 Experiments
4.1 Data
For the experiments described in this section we use the data from the fMRI experiment on natural
scene categories by [23]. Briefly, five participants passively viewed color images belonging to six
categories of natural scenes (beaches, buildings, forests, highways, industry, and mountains). Stimuli were arranged into blocks of 10 images from the same natural scene category. Each image was
displayed sequentially for 1.6 seconds. A run was composed of 6 blocks, one for each natural scene
category, interleaved with 12 s fixation periods. Images were presented upright or inverted on alternating runs, with each inverted run preserving the image and category order used in the preceding
upright run. A session contained 12 such runs, and the order of categories was randomized across
blocks. Each subject performed two sessions with a total of 24 runs. In total we have 1192 data
points per subject across all 6 categories. The data obtained from the authors in [23] contains only
localizers for V1, PPA, RSC, LOC, FFA areas. Thus we limit our seed areas to these ROIs.
4.2 Voxel selection
The goal of voxel selection is to identify the most relevant voxels for the experiment task out of the
tens of thousands of voxels in the entire brain. A quantitative evaluation of voxel selection is the
decoding accuracy3 of the selected voxels, which measures how well can the selected voxels predict
the viewing condition from the neural responses.
Fig.1 compares our mutual information-based voxel selection method to other voxel selection methods. Decoding accuracy3 using univariate kNN mutual information is comparable to most active1
[Figure 3 plots: (a) Comparing 7D and 1D mutual information within-ROI connections, for lPPA, rPPA, lRSC, rRSC, and V1; (b) 7D MI between-ROI connections among lPPA, rPPA, lRSC, and rRSC.]
Figure 3: a) Within-ROI MI values for 7D and 1D mutual information, b) Schematic showing the significant ROI connections found using 7D mutual information analysis. The network shows strong connections between PPA and RSC, both ipsilaterally and contralaterally. ** p < 10^-6, * p < 0.01
voxel selection. The multivariate information measure is able to select the more informative voxels by considering a local pattern of voxels jointly, leading to a boost in decoding accuracy3.
To further understand why multivariate mutual information boosts the decoding accuracy3 , we can
look at the spatial locations of the informative voxels selected by multivariate information analysis
shown in Fig.2. The most informative voxels selected correspond to known functional regions for
scenes. In this figure we see the scene areas V1, RSC, PPA, LOC that were also identified in [23].
Interestingly, our automatic voxel selection achieves a higher decoding accuracy3 than the ROIs
selected by localizer in [23]. This may suggest that the multivariate information voxel selection is a
better segmentation of the relevant ROIs than the localizer runs.
4.3 Functional connectivity of ROIs
In the previous section, we have shown that multivariate information can effectively select informative voxels for classification. In this section, we first illustrate the increased sensitivity of a
multivariate assessment of functional connectivity within known ROIs. Then we use multivariate
information to explore connections between ROIs.
A good comparison for the functional connectivity measure is the within-ROI connectivity. Voxels
within the same ROI should exhibit high functional connectivity with each other. In Fig.3a we compared our 7D measures with equivalent one dimensional measures using within-ROI connectivity.
To this end we randomly selected 15 seed and 15 target locations within each ROI, making sure that
seed and target patterns have no voxels in common. Then we computed mutual information between
all seed and all target locations, either using individual voxels (1D case) or patterns of seven voxels
(7D case). Fig.3a shows the mean of the mutual information values for these two cases in each ROI.
In all ROIs, we find that the multivariate information measure (7D) is significantly higher than the univariate measure (1D), suggesting that pattern-based mutual information has a higher fidelity than univariate-based mutual information in mapping out functional connections.
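The seed/target sampling behind this comparison can be sketched as follows. The exact sampling scheme (drawing 2 x 15 distinct pattern centers per ROI) and the function names are our assumptions; the score function standing in for the 7D or 1D mutual information is left pluggable.

```python
import numpy as np

def sample_seed_target(n_centers, n_each=15, seed=0):
    """Pick disjoint seed and target pattern centers inside one ROI.
    Drawing without replacement guarantees no center is reused; the paper
    additionally requires the full 7-voxel patterns not to overlap."""
    rng = np.random.default_rng(seed)
    picks = rng.choice(n_centers, size=2 * n_each, replace=False)
    return picks[:n_each], picks[n_each:]

def mean_pairwise_score(patterns, seeds, targets, score_fn):
    """patterns[i]: the multi-voxel pattern at center i; averages score_fn
    over all seed/target pairs, as in the Fig. 3a comparison."""
    return float(np.mean([[score_fn(patterns[s], patterns[t])
                           for t in targets] for s in seeds]))
```

Running this once per ROI with a 7D and a 1D mutual-information score function yields the two bars compared above.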
After having established that 7D mutual information significantly outperforms 1D mutual information, we proceed to calculate the between-ROI connectivity for scene areas V1, left/right PPA, and left/right RSC using 7D mutual information as shown in Fig.3b. Between-ROI connectivity is de-
3 Decoding accuracy is obtained with a leave-two-runs-out cross-validation on our scene data. In each fold two runs from viewing the same images upright and inverted are left out as test data. Voxel selection is performed on the training runs using k = n/2, where n is the number of training examples in each category. Using selected voxels, a linear SVM classifier is trained on the upright runs with C = 0.02 as in [23]. In testing we use majority voting on the SVM prediction labels to vote for the most likely scene label for each block of data. Decoding accuracy is the average of cross-validation accuracy over the 5 subjects.
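The cross-validation protocol of this footnote can be sketched as follows. We use a nearest-centroid classifier as a stand-in for the paper's linear SVM (C = 0.02), and assume that an upright run and its inverted counterpart carry consecutive run ids; the bookkeeping (leave-two-runs-out folds and per-block majority voting), not the classifier, is the point of this sketch.

```python
import numpy as np

def majority_vote(pred):
    """Most frequent predicted label within one block."""
    vals, counts = np.unique(pred, return_counts=True)
    return vals[np.argmax(counts)]

def leave_two_runs_out_accuracy(X, y, runs, blocks):
    """X: (n, d) patterns, y: (n,) labels, runs/blocks: (n,) integer ids.
    Each fold holds out one pair of runs (assumed consecutive ids for the
    upright run and its inverted version)."""
    run_ids = np.unique(runs)
    correct, total = 0, 0
    for i in range(0, len(run_ids), 2):
        test = np.isin(runs, run_ids[i:i + 2])
        # Stand-in classifier: one centroid per class on the training runs.
        classes = np.unique(y[~test])
        centroids = np.array([X[~test][y[~test] == c].mean(axis=0)
                              for c in classes])
        # One prediction per held-out block via majority vote.
        for b in np.unique(blocks[test]):
            sel = test & (blocks == b)
            d = np.linalg.norm(X[sel][:, None, :] - centroids[None, :, :],
                               axis=-1)
            pred = classes[np.argmin(d, axis=1)]
            correct += int(majority_vote(pred) == y[sel][0])
            total += 1
    return correct / total
```

Swapping the centroid step for an SVM with C = 0.02, trained on the upright runs only, would recover the footnote's exact protocol.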
[Figure 4 image: axial and coronal slices (Z = -13 mm, Z = 23 mm, Y = 52 mm) with an MI color scale from 0.37 to 0.51 and labeled regions: right Inferior Frontal Gyrus, left Medial Frontal Gyrus, left Precuneus, lPPA, rPPA, lRSC, rRSC.]
Figure 4: Connectivity map seeding from left PPA. Talairach coordinates defined in [22] are shown as the Z and Y coordinates for axial and coronal slices, respectively. The intensity of the maps shows the MI values. (This figure must be viewed in color)
[Figure 5 image: coronal and axial slices (Y = 43 mm, Z = 26 mm, Z = 29 mm) with an overlap legend (4, 3, and 2 overlaps) and labeled regions: left Medial Frontal Gyrus, left Cuneus, right Cuneus, right Precuneus, lPPA, rPPA, lRSC, rRSC.]
Figure 5: Overlap analysis showing areas where overlap occurs with the strongest connections from more than two scene network ROIs. Talairach coordinates defined in [22] are shown as the Z and Y coordinates for axial and coronal slices, respectively. The color code indicates the amount of overlap. (This figure must be viewed in color)
fined similarly to within-ROI connectivity, except that seed and target locations are chosen in different ROIs.
A number of aspects of the connections mapped out with MI7D analysis agree with neuroscience
findings. First, it is expected that PPA and RSC should be strongly connected as part of a scene
network. Moreover, since V1 is the input to the cortical visual system, it is also likely that it should
share information with at least one member of the scene network, which in this case was the right
PPA. One novel finding from this analysis is that all of the strongest connections we discovered
included right RSC. In particular, right RSC shares strong connections with left RSC, right PPA and
left PPA, suggesting that right RSC may play a particularly important role in distinguishing natural
scene categories. More work will be needed to verify this hypothesis.
To summarize, we have verified that multivariate information analysis can reliably map out connections within and between ROIs known to be involved in processing natural scene categories. In
the next section we show how the same analysis can be extended to uncover other ROIs that share
information with this scene network in our scene classification experiment.
4.4 Functional connectivity - whole brain analysis
While it is valuable to confirm existing hypotheses about areas that represent scene categories, it
is also interesting to uncover new brain areas that might be related to scene categorization. In this
section, we show that we can use our multivariate information analysis approach to explore other
areas outside of the known ROIs that form strong connections with the known ROIs.
For each of the functional areas in the scene network, we can explore other areas connected to it.
As in section 3 we measure functional connectivity as multivariate mutual information between the
seed and candidate target areas. We fix the seed area to an ROI defined by a localizer. The candidate
area moves around the brain, at each location measuring the mutual information with respect to the
seed area.
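This seed-to-candidate sweep can be sketched as a searchlight loop: the candidate pattern is the center voxel plus its 6 face-connected neighbors (as in Sec. 2.2.1), and the score function would be the conditional mutual information of Eq. (9), left pluggable here. The function names and the (t, X, Y, Z) volume layout are our assumptions.

```python
import numpy as np

# Center voxel plus its 6 face-connected neighbors.
OFFSETS = np.array([[0, 0, 0], [1, 0, 0], [-1, 0, 0],
                    [0, 1, 0], [0, -1, 0], [0, 0, 1], [0, 0, -1]])

def extract_pattern(vol, cx, cy, cz):
    """(n_timepoints, 7) pattern centered at (cx, cy, cz); vol: (t, X, Y, Z)."""
    idx = OFFSETS + np.array([cx, cy, cz])
    return np.stack([vol[:, x, y, z] for x, y, z in idx], axis=1)

def connectivity_map(vol, seed_pattern, score_fn):
    """Score the fixed seed pattern against the 7-voxel pattern at every
    interior voxel; boundary voxels stay NaN."""
    t, X, Y, Z = vol.shape
    out = np.full((X, Y, Z), np.nan)
    for x in range(1, X - 1):
        for y in range(1, Y - 1):
            for z in range(1, Z - 1):
                out[x, y, z] = score_fn(seed_pattern,
                                        extract_pattern(vol, x, y, z))
    return out
```

Plugging in the conditional-MI score of Eq. (9) and thresholding the resulting map (e.g. by cluster size, as in Sec. 4.4.1) yields maps like Fig. 4.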
4.4.1 Confirming known connections
Fig.4 shows an example of the connectivity map seeding from left PPA. Each highlighted location in
the connectivity map shows its connectivity to left PPA as measured by the multivariate information.
As shown in Fig.4, both left and right PPA are highlighted, confirming their bilateral connection.
Furthermore, we see strong connections between left PPA and left and right RSC. A minimum
cluster size of 13 is used to threshold the connectivity map. The minimum cluster size is determined
by AlphaSim in AFNI [4]. Notice in Fig.4 that the highest MI in the whole-brain analysis has MI of
0.51 whereas the within-ROI MI of left PPA in Fig.3b has a value of 1.5. The decrease in MI is due
to the smoothing of connectivity maps when we combine them across subjects.
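The cluster-size thresholding step can be illustrated as follows. Note that the minimum cluster size itself comes from AlphaSim's Monte Carlo simulation in AFNI; this sketch only shows how a given MI cutoff and minimum cluster size would be applied to a map, and the function name and values are hypothetical.

```python
import numpy as np
from scipy import ndimage

def cluster_threshold(mi_map, mi_cutoff, min_cluster_size):
    """Keep only supra-threshold voxels that belong to connected clusters
    of at least `min_cluster_size` voxels (6-connectivity in 3-D)."""
    mask = mi_map > mi_cutoff
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    keep = np.isin(labels, np.where(sizes >= min_cluster_size)[0] + 1)
    return mi_map * keep

mi_map = np.zeros((10, 10, 10))
mi_map[1:4, 1:4, 1:4] = 0.5   # 27-voxel cluster: survives the size threshold
mi_map[8, 8, 8] = 0.9         # isolated supra-threshold voxel: removed
out = cluster_threshold(mi_map, 0.3, 13)
print(out[2, 2, 2], out[8, 8, 8])  # 0.5 0.0
```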
4.4.2 Discovering new connections
Besides confirming known regions of the scene network, our connectivity maps allow us to explore
other brain areas that might be related to the scene network. In Fig.4 we not only observe known
scene network ROIs but additional areas such as the right Inferior Frontal Gyrus, left Medial Frontal
Gyrus, and left Precuneus. Interestingly, the Inferior Frontal Gyrus, typically associated with language processing [13], also showed up in a searchlight analysis for decoding accuracy in [23].
So far we have examined how the rest of the brain connects to one ROI in the scene network,
specifically we used left PPA as the example. However, to further strongly establish which regions
are functionally connected in regards to distinguishing scene category, we asked which brain areas
are strongly connected to two or more of the scene network ROIs. Areas that connect to more than
one of the scene network ROIs are particularly interesting, because having multiple connections
strengthens evidence that they play a significant role in distinguishing scene categories.
To investigate this question, we generate one connectivity map for each of the 4 scene network ROIs,
similar to Fig.4. We take the areas with the top 5 percent highest mutual information in each of the
4 maps and overlap them. Fig.5 shows this overlap analysis.
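A minimal sketch of this overlap analysis, assuming each connectivity map is simply an array of MI values; the shapes, thresholding details, and data below are illustrative, not the paper's implementation.

```python
import numpy as np

def overlap_count(maps, top_frac=0.05):
    """For each voxel, count in how many of the connectivity maps it falls
    within the top `top_frac` fraction of MI values of that map."""
    maps = np.asarray(maps)
    # per-map threshold at the (1 - top_frac) quantile
    thresholds = np.quantile(maps.reshape(len(maps), -1), 1 - top_frac, axis=1)
    masks = maps >= thresholds[:, None, None]
    return masks.sum(axis=0)

rng = np.random.default_rng(1)
maps = rng.random((4, 20, 20))   # 4 seed ROIs -> 4 connectivity maps
maps[:, 5, 5] = 1.0              # one voxel strong in all four maps
counts = overlap_count(maps)
print(counts[5, 5])  # 4
```

Voxels with `counts` of 2, 3, or 4 correspond to the areas connected to two or more scene-network ROIs discussed in the text.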
Similar to the previous analysis, the overlap analysis highlights all 4 known areas of the scene
network. Interestingly, this analysis shows that right RSC and right PPA are connected with more
regions of the scene network than left RSC and PPA. This suggests that perhaps there is a laterality
effect in the scene network that could be investigated in future studies.
Furthermore, we can also explore areas outside of the scene network with the overlap analysis.
In Fig.5, left/right Cuneus and right Precuneus, highlighted in orange, exhibit strong connections
with 3/4 of the scene network ROIs. Left Medial Frontal Gyrus is strongly connected to 2/4 of the
scene network ROIs. These exploratory areas also point to interesting future investigations for scene
category studies.
5 Conclusion
In this paper we have introduced a new method for evaluating the mutual information that patterns
of fMRI voxels share with the ground truth labels of the experiment and with patterns of voxels
elsewhere in the brain. When used as a voxel selection method for subsequent decoding of viewed
natural scene category, mutual information of patterns of voxels with respect to the ground truth
label is superior to mutual information of individual voxels.
We have shown that mutual information of voxel patterns in two ROIs is a more sensitive measure
of task-specific functional connectivity analysis than mutual information of individual voxels. We
have identified a network of regions consisting of left and right PPA and left and right RSC that
share information about the category of a natural scene viewed by the subject. Connectivity maps
generated with this method have identified left medial frontal gyrus, left/right cuneus, and right
precuneus as sharing scene-specific information with PPA and RSC. This could stimulate interesting
future work such as estimating mutual information for an even larger set of voxels and understanding
the exploratory areas highlighted by this analysis. Although we confined our experiments to data
from a scene category task, all the analysis proposed here could be used for other tasks in other
domains.
Acknowledgements
This work is funded by National Institutes of Health Grant 1 R01 EY019429 (to L.F.-F., D.M.B., D.B.W.), a
Beckman Postdoctoral Fellowship (to D.B.W.), a Microsoft Research New Faculty Fellowship (to L.F.-F.), and
the Frank Moss Gift Fund (to L.F-F.). The authors would like to thank Todd Coleman and Fernando Perez-Cruz
for the helpful discussions on entropy estimation.
References
[1] C. W. J. Granger. Investigating causal relations by econometric models and cross-spectral methods. Econometrica, 37:424-438, 1969.
[2] J. Cao and K. Worsley. The geometry of correlation fields with an application to functional connectivity of the brain. Ann. Appl. Probab., 9:1021-1057, 1998.
[3] D. Cox and R. Savoy. Functional magnetic resonance imaging (fMRI) "brain reading": Detecting and classifying distributed patterns of fMRI activity in human visual cortex. NeuroImage, 19(2):261-270, 2003.
[4] R. W. Cox. AFNI: Software for analysis and visualization of functional magnetic resonance neuroimages. Computers and Biomedical Research, 29:162-173, 1996.
[5] K. J. Friston, C. D. Frith, P. F. Liddle, and R. S. Frackowiak. Functional connectivity: the principal-component analysis of large (PET) data sets. J Cereb Blood Flow Metab, 13(1):5-14, January 1993.
[6] J. V. Haxby, M. I. Gobbini, M. L. Furey, A. Ishai, J. L. Schouten, and P. Pietrini. Distributed and overlapping representations of faces and objects in ventral temporal cortex. Science, 293(5539):2425-2430, 2001.
[7] H. Hinrichs, H. Heinze, and M. Schoenfeld. Causal visual interactions as revealed by an information theoretic measure and fMRI. NeuroImage, 31(3):1051-1060, 2006.
[8] A. T. John, J. W. F. III, W. M. W. III, J. Kim, and A. S. Willsky. Analysis of functional MRI data using mutual information, 1999.
[9] L. Fei-Fei, A. Iyer, C. Koch, and P. Perona. What do we perceive in a glance of a real-world scene? Journal of Vision, 7(1):10, 1-29, 2007. http://journalofvision.org/7/1/10/, doi:10.1167/7.1.10.
[10] K. Li, L. Guo, J. Nie, G. Li, and T. Liu. Review of methods for functional brain connectivity detection using fMRI. Computerized Medical Imaging and Graphics, 33(2):131-139, March 2009.
[11] M. J. Mckeown, S. Makeig, G. G. Brown, T.-P. Jung, S. S. Kindermann, R. S. Kindermann, A. J. Bell, and T. J. Sejnowski. Analysis of fMRI data by blind separation into independent spatial components. Human Brain Mapping, 6:160-188, 1998.
[12] A. Meyer-Baese, A. Wismueller, and O. Lange. Comparison of two exploratory data analysis methods for fMRI: unsupervised clustering versus independent component analysis. IEEE Transactions on Information Technology in Biomedicine, 8(3):387-398, Sept. 2004.
[13] N. Geschwind. The organization of language and the brain. Science, 170:940-944, 1970.
[14] D. Neill, A. Moore, F. Pereira, and T. Mitchell. Detecting significant multidimensional spatial clusters. In Proceedings of Neural Information Processing Systems, 2004.
[15] M. Potter. Short-term conceptual memory for pictures. Journal of Experimental Psychology: Human Learning and Memory, 2(5):509-522, 1976.
[16] F. Perez-Cruz. Estimation of information theoretic measures for continuous random variables. In D. Koller, D. Schuurmans, Y. Bengio, and L. Bottou, editors, NIPS, pages 1257-1264. MIT Press, 2008.
[17] J. P. W. Pluim, J. B. A. Maintz, and M. A. Viergever. Mutual-information based registration of medical images: a survey. IEEE Trans Med Imaging, 22:986-1004, 2003.
[18] R. Xu, Y.-W. Chen, S.-Y. Tang, S. Morikawa, and Y. Kurumi. Parzen-window based normalized mutual information for medical image registration. IEICE Transactions on Information and Systems, E91-D(1):132-144, January 2008.
[19] C. E. Shannon. A Mathematical Theory of Communication. CSLI Publications, 1948.
[20] T. Schreiber. Measuring information transfer. Physical Review Letters, 85(2):461-464, July 2000.
[21] F. T. Sun, L. M. Miller, and M. D'Esposito. Measuring interregional functional connectivity using coherence and partial coherence analyses of fMRI data. NeuroImage, 21(2):647-658, 2004.
[22] J. Talairach and P. Tournoux. Co-planar Stereotaxic Atlas of the Human Brain: 3-Dimensional Proportional System - an Approach to Cerebral Imaging. Thieme Medical Publishers, New York, 1988.
[23] D. Walther, E. Caddigan, L. Fei-Fei*, and D. Beck* (2009). Natural scene categories revealed in distributed patterns of activity in the human brain. The Journal of Neuroscience, 29(34):10573-10581. (* indicates equal contribution)
[24] Q. Wang, S. Kulkarni, and S. Verdu. Divergence estimation of continuous distributions based on data-dependent partitions. IEEE Transactions on Information Theory, 51(9):3064-3074, Sept. 2005.
[25] B. D. Ward and Y. Mazaheri. Information transfer rate in fMRI experiments measured using mutual information theory. Journal of Neuroscience Methods, 167(1):22-30, 2008. Brain-Computer Interfaces (BCIs).
[26] B. Yao, D. B. Walther, D. M. Beck*, and L. Fei-Fei*. Hierarchical Mixture of Classification Experts Uncovers Interactions between Brain Regions. NIPS, 2009. (* indicates equal contribution)
Evaluating multi-class learning strategies in a
hierarchical framework for object detection
Sanja Fidler
Marko Boben
Aleš Leonardis
Faculty of Computer and Information Science
University of Ljubljana, Slovenia
{sanja.fidler, marko.boben, ales.leonardis}@fri.uni-lj.si
Abstract
Multi-class object learning and detection is a challenging problem due to the
large number of object classes and their high visual variability. Specialized detectors usually excel in performance, while joint representations optimize sharing
and reduce inference time, but are complex to train. Conveniently, sequential
class learning cuts down training time by transferring existing knowledge to novel
classes, but cannot fully exploit the shareability of features among object classes
and might depend on ordering of classes during learning. In hierarchical frameworks these issues have been little explored. In this paper, we provide a rigorous
experimental analysis of various multiple object class learning strategies within a
generative hierarchical framework. Specifically, we propose, evaluate and compare three important types of multi-class learning: 1.) independent training of
individual categories, 2.) joint training of classes, and 3.) sequential learning of
classes. We explore and compare their computational behavior (space and time)
and detection performance as a function of the number of learned object classes
on several recognition datasets. We show that sequential training achieves the best
trade-off between inference and training times at a comparable detection performance and could thus be used to learn the classes on a larger scale.
1 Introduction
Object class detection has been one of the mainstream research areas in computer vision. In recent
years we have seen a significant trend towards larger recognition datasets with an increasing number
of object classes [1]. This necessitates representing, learning and detecting multiple object classes,
which is a challenging problem due to the large number and the high visual variability of objects.
To learn and represent multiple object classes there have mainly been two strategies: the detectors
for each class have either been trained in isolation, or trained on all classes simultaneously. Both
exert certain advantages and disadvantages. Training independently allows us to apply complex
probabilistic models that use a significant amount of class specific features and allows us to tune
the parameters for each class separately. For object class detection, these approaches had notable
success [2]. However, representing multiple classes in this way, means stacking together specific
class representations. This, on the one hand, implies that each novel class can be added in constant
time, however, the representation grows clearly linearly with the number of classes and is thus also
linear in inference. On the other hand, joint representations enlarge sublinearly by virtue of sharing
the features among several object classes [3, 4]. This means sharing common computations and
increasing the speed of the joint detector. Training, however, is usually quadratic in the number of
classes. Furthermore, adding just one more class forces us to re-train the representation altogether.
Receiving somewhat less attention, the strategy to learn the classes sequentially (but not independently) potentially enjoys the traits of both learning types [4, 5, 6]. By learning one class after
another, we can transfer the knowledge acquired so far to novel classes and thus likely achieve both,
sublinearity in inference and cut down training time. In order to scale to a higher number of object
classes, learning them sequentially lends itself as the best choice.
In literature, the approaches have mainly used one of these three learning strategies in isolation.
To the best of our knowledge, little research has been done on analyzing and comparing them with
respect to one another. This is important because it allows us to point to losses and gains of each
particular learning setting, which could focus further research and improve the performance. This is
exactly what this paper is set to do ? we present a hierarchical framework within which all of the
aforementioned learning strategies can be unbiasedly evaluated and put into perspective.
Prominent work on these issues has been done in the domain of flat representations [4, 3], where
each class is modeled as an immediate aggregate of local features. However, there is an increasing
literature consensus, that hierarchies provide a more suitable form of multi-class representation [7,
8, 9, 10, 11, 12]. Hierarchies not only share complex object parts among similar classes, but can
re-use features at several levels of granularity also for dissimilar objects.
In this paper, we provide a rigorous experimental evaluation of several important multi-class learning
strategies for object detection within a generative hierarchical framework. We make use of the
hierarchical learning approach by [13]. Here we propose and evaluate three types of multi-class
learning: 1.) independent training of individual categories, 2.) joint training, 3.) sequential training
of classes. Several issues were evaluated on multiple object classes: 1.) growth of representation,
2.) training and 3.) inference time, 4.) degree of feature sharing and re-use at each level of the
hierarchy, 5.) influence of class ordering in sequential learning, and 6.) detection performance, all
as a function of the number of classes learned. We show that sequential training achieves the best
trade-off between inference and training times at a comparable detection performance and could
thus be used to learn the classes on a larger scale.
Related work. Prior work on multi-class learning in generative hierarchies either learns separate
hierarchies for each class [14, 15, 16, 10, 17], trains jointly [7, 18, 9, 19, 20, 11], whereas work
on sequential learning of classes has been particularly scarce [6, 13]. However, to the best of our
knowledge, no work has dealt with, evaluated and compared multiple important learning concepts
under one hierarchical framework.
2 The hierarchical model and inference
The hierarchical model. We use the hierarchical model of [13, 21], which we summarize here. Objects are represented with a recursive compositional shape vocabulary which is learned from images.
The vocabulary contains a set of shape models or compositions at each layer. Each shape model in
the hierarchy is modeled as a conjunction of a small number of parts (shapes from the previous
layer). Each part is spatially constrained on the parent shape model via a spatial relation which is
modeled with a two-dimensional Gaussian distribution. The number and the type of parts can differ
across the shape models and is learned from the data without supervision. At the lowest layer, the
vocabulary consists of a small number of short oriented contour fragments, while the vocabulary at
the top-most layer contains models that code the shapes of the whole objects. For training, we need
a positive and a validation set of class images, while the structure of the representation is learned in
an unsupervised way (no labels on object parts or smaller subparts need to be given).
The hierarchical vocabulary $\mathcal{V} = (V, E)$ is represented with a directed graph, where multiple edges between two vertices are allowed. The vertices $V$ of the graph represent the shape models and the edges $E$ represent the composition relations between them. The graph $\mathcal{V}$ has a hierarchical structure, where the set of vertices $V$ is partitioned into subsets $V^1, \dots, V^O$, each containing the shapes at a particular layer. The vertices $\{v_i^1\}_{i=1}^{6}$ at the lowest layer $V^1$ represent 6 oriented contour fragments. The vertices at the top-most layer $V^O$, referred to as the object layer, represent the whole shapes of the objects. Each object class $C$ is assigned a subset of vertices $V_C^O \subseteq V^O$ that code the object layer shapes of that particular class. We denote the set of edges between the vertex layers $V^\ell$ and $V^{\ell-1}$ with $E^\ell$. Each edge $e_{Ri}^\ell = v_R^\ell v_i^{\ell-1}$ in $E^\ell$ is associated with the Gaussian parameters $\theta_{Ri}^\ell := \theta(e_{Ri}^\ell) = (\mu_{Ri}^\ell, \Sigma_{Ri}^\ell)$ of the spatial relation between the parent shape $v_R^\ell$ and its part $v_i^{\ell-1}$. We will use $\boldsymbol{\theta}_R^\ell = (\theta_{Ri}^\ell)_i$ to denote the vector of all the parameters of a shape model $v_R^\ell$. The pair $\mathcal{V}^\ell := (V^\ell, E^\ell)$ will be referred to as the vocabulary at layer $\ell$.
Inference. We infer object class instances in a query image $I$ in the following way. We follow the contour extraction of [13], which finds local maxima in oriented Gabor energy. This gives us the contour fragments $F$ and their positions $X$. In the process of inference we build a (directed acyclic) inference graph $G = (Z, Q)$. The vertices $Z$ are partitioned into vertex layers 1 to $O$ (object layer), $Z = Z^1 \cup \dots \cup Z^O$, and similarly also the edges, $Q = Q^1 \cup \dots \cup Q^O$. Each vertex $z^\ell = (v^\ell, x^\ell) \in Z^\ell$ represents a hypothesis that a particular shape $v^\ell \in V^\ell$ from the vocabulary is present at location $x^\ell$. The edges in $Q^\ell$ connect each parent hypothesis $z_R^\ell$ to all of its part hypotheses $z_i^{\ell-1}$. The edges in the bottom layer $Q^1$ connect the hypotheses in the first layer $Z^1$ with the observations. With $S(z)$ we denote the subgraph of $G$ that contains the vertices and edges of all descendants of vertex $z$.

Since our definition of each vocabulary shape model assumes that its parts are conditionally independent, we can calculate the likelihood of the part hypotheses $z_i^{\ell-1} = (v_i^{\ell-1}, x_i^{\ell-1})$ under a parent hypothesis $z_R^\ell = (v_R^\ell, x_R^\ell)$ by taking a product over the individual likelihoods of the parts:

$$p(\mathbf{v}^{\ell-1}, \mathbf{x}^{\ell-1} \mid v_R^\ell, x_R^\ell, \boldsymbol{\theta}_R^\ell) = \prod_{e_{Ri}^\ell = v_R^\ell v_i^{\ell-1}} p(x_i^{\ell-1} \mid x_R^\ell, v_i^{\ell-1}, v_R^\ell, \theta_{Ri}^\ell) \qquad (1)$$

The term $p_{Ri} := p(x_i^{\ell-1} \mid x_R^\ell, v_i^{\ell-1}, v_R^\ell, \theta_{Ri}^\ell)$ stands for the spatial constraint imposed by a vocabulary edge $e_{Ri}^\ell$ between a parent hypothesis $z_R^\ell$ and its part hypothesis $z_i^{\ell-1}$. It is modeled by a normal distribution, $p_{Ri} = \mathcal{N}(x_i^{\ell-1} - x_R^\ell \mid \theta_{Ri}^\ell)$, where $\theta_{Ri}^\ell = (\mu_{Ri}^\ell, \Sigma_{Ri}^\ell)$. If the likelihood in (1) is above a threshold, we add edges between $z_R^\ell$ and its most likely part hypotheses. The log-likelihood of the observations under a hypothesis $z_R^\ell$ is then calculated recursively over the subgraph $S(z_R^\ell)$:

$$\log p(F, X, z^{1:\ell-1} \mid z_R^\ell; \mathcal{V}) = \sum_{z_{R'} z_{i'} \in E(S(z_R^\ell))} \log p_{R'i'} + \sum_{z_{i'}^1 \in V(S(z_R^\ell))} \log p(F, X \mid z_{i'}^1), \qquad (2)$$

where $E(S(z_R^\ell))$ and $V(S(z_R^\ell))$ denote the edges and vertices of the subgraph $S(z_R^\ell)$, respectively. The last term is the likelihood of the Gabor features under a particular contour fragment hypothesis.
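The recursion in (2) can be sketched as follows, using 1-D positions and scalar Gaussians for brevity (the paper's spatial relations are two-dimensional Gaussians); the class and the numeric values are illustrative only.

```python
import math

class Hyp:
    """A hypothesis node: either a layer-1 leaf carrying a feature
    log-likelihood, or a parent with part hypotheses and one Gaussian
    spatial relation (mu, sigma) per part."""
    def __init__(self, x, parts=(), relations=(), leaf_loglik=0.0):
        self.x = x                  # image position of the hypothesis
        self.parts = parts          # child hypotheses
        self.relations = relations  # (mu, sigma) per child offset
        self.leaf_loglik = leaf_loglik

def log_normal(d, mu, sigma):
    return -0.5 * ((d - mu) / sigma) ** 2 - math.log(sigma * math.sqrt(2 * math.pi))

def log_likelihood(z):
    """Eq. (2): sum log spatial terms over the edges of S(z), plus the
    feature log-likelihoods at the layer-1 leaves."""
    if not z.parts:
        return z.leaf_loglik
    total = 0.0
    for part, (mu, sigma) in zip(z.parts, z.relations):
        total += log_normal(part.x - z.x, mu, sigma)  # edge term log p_Ri
        total += log_likelihood(part)                 # recurse into subgraph
    return total

leaf_a = Hyp(x=1.0, leaf_loglik=-0.2)
leaf_b = Hyp(x=3.1, leaf_loglik=-0.3)
root = Hyp(x=0.0, parts=(leaf_a, leaf_b),
           relations=((1.0, 0.5), (3.0, 0.5)))
print(round(log_likelihood(root), 3))  # -0.972
```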
3 Multi-class learning strategies
We first define the objective function for multi-class learning and show how different learning strategies can be used with it in the following subsections. Our goal is to find a hierarchical vocabulary $\mathcal{V}$ that well represents the distribution $p(I \mid C) \propto p(F, X \mid C; \mathcal{V})$ at minimal complexity of the representation ($C$ denotes the class variable). Specifically, we seek a vocabulary $\mathcal{V} = \cup_\ell \mathcal{V}^\ell$ that optimizes the function $f$ over the data $D = \{(F_n, X_n, C_n)\}_{n=1}^{N}$ ($N$ training images):

$$\mathcal{V}^{*} = \arg\max_{\mathcal{V}} f(\mathcal{V}), \quad \text{where } f(\mathcal{V}) = L(D \mid \mathcal{V}) - \lambda \cdot T(\mathcal{V}) \qquad (3)$$

The first term in (3) represents the log-likelihood:

$$L(D \mid \mathcal{V}) = \sum_{n=1}^{N} \log p(F_n, X_n \mid C; \mathcal{V}) = \sum_{n=1}^{N} \log \sum_{z} p(F_n, X_n, z \mid C; \mathcal{V}), \qquad (4)$$

while $T(\mathcal{V})$ penalizes the complexity of the model [21] and $\lambda$ controls the amount of penalization.
Several approximations are made to learn the vocabulary; namely, the vocabulary is learned layer by
layer (in a bottom-up way) by finding frequent spatial layouts of parts from the previous layer [13]
and then using f to select a minimal set of models at each layer that still produce a good wholeobject shape representation at the final, object layer [21]. The top layer models are validated on a
set of validation images and those yielding a high rate of false-positives are removed from V.
We next show how different training strategies are performed to learn a joint multi-class vocabulary.
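The role of the score $f$ in selecting a minimal set of models can be sketched with a toy greedy selection. As simplifying assumptions, $T(\mathcal{V})$ is taken to be the number of selected models (the actual complexity measure is that of [21]) and each candidate model is summarized only by the images it explains; all names and values are illustrative.

```python
FLOOR = -10.0  # log-likelihood of an image that no selected model explains

def data_loglik(selected, n_images):
    # L(D|V): each image is scored by its best-explaining selected model
    return sum(max([FLOOR] + [m.get(i, FLOOR) for m in selected])
               for i in range(n_images))

def f_score(selected, n_images, lam):
    # f(V) = L(D|V) - lam * T(V), with T(V) = |V| as a stand-in complexity
    return data_loglik(selected, n_images) - lam * len(selected)

def greedy_select(candidates, n_images, lam):
    """Greedily grow the vocabulary while f keeps improving."""
    selected = []
    while True:
        current = f_score(selected, n_images, lam)
        scored = [(f_score(selected + [m], n_images, lam), i)
                  for i, m in enumerate(candidates) if m not in selected]
        if not scored:
            break
        best_f, best_i = max(scored)
        if best_f <= current:
            break
        selected.append(candidates[best_i])
    return selected

# each candidate model: image index -> log-likelihood it assigns
m1 = {0: -1.0, 1: -1.0}   # explains images 0 and 1 well
m2 = {2: -1.5}            # explains image 2
m3 = {0: -1.1}            # redundant given m1: rejected by the penalty
chosen = greedy_select([m1, m2, m3], n_images=3, lam=2.0)
print(len(chosen))  # 2
```

The penalty term is what keeps the redundant model out: it raises the likelihood by nothing but still costs $\lambda$.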
3.1 Independent training of individual classes
In independent training, a class-specific vocabulary $\mathcal{V}_c$ is learned using the training images of each particular class $C = c$. We learn $\mathcal{V}_c$ by maximizing $f$ over the data $D = \{(F_n, X_n, C = c)\}$. For the negative images in the validation step, we randomly sample images from other classes. The joint multi-class representation $\mathcal{V}$ is then obtained by stacking the class-specific vocabularies $\mathcal{V}_c$ together, $V^\ell = \cup_c V_c^\ell$ (the edges $E$ are added accordingly). Note that $V_c^1$ is the only layer common to all classes (the 6 oriented contour fragments), thus $V^1 = V_c^1$.
3.2 Joint training of classes
In joint training, the learning phase has two steps. In the first step, the training data $D$ for all the classes is presented to the algorithm simultaneously, and is treated as unlabeled. The spatial parameters $\theta$ of the models at each layer are then inferred from images of all classes, and will code "average" spatial part dispositions. The joint statistics also influence the structure of the models by preferring those that are most repeatable over the classes. This way, the jointly learned vocabulary $\mathcal{V}$ will be the best trade-off between the likelihood $L$ and the complexity $T$ over all the classes in the dataset. However, the final, top-level likelihood for each particular class could be low because the more discriminative class-specific information has been lost. Thus, we employ a second step which revisits each class separately. Here, we use the joint vocabulary $\mathcal{V}$ and add new models $v_R^\ell$ to each layer $\ell$ if they further increase the score $f$ for each particular class. This procedure is similar to that used in sequential training and will be explained in more detail in the following subsection. The object layer $V^O$ is consequently learned and added to $\mathcal{V}$ for each class. We validate the object models after all classes have been trained. A similarity measure is used to compare every two classes based on the degree of feature sharing between them. In validation, we choose the negative images by sampling the images of the classes according to the distribution defined by the similarity measure. This way, we discard the models that poorly discriminate between the similar classes.
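The similarity-driven choice of negatives can be sketched as weighted sampling. The class names, similarity values, and function signature below are made up for illustration; the paper does not specify this exact interface.

```python
import random

def sample_negatives(similarity, k, seed=0):
    """Draw k validation negatives from other classes, with probability
    proportional to their similarity (degree of feature sharing) to the
    class being validated, so confusable classes are over-represented."""
    rng = random.Random(seed)
    classes = list(similarity)
    weights = [similarity[c] for c in classes]
    return rng.choices(classes, weights=weights, k=k)

# hypothetical similarities of other classes to the current class
sim = {"cow": 0.7, "horse": 0.9, "car": 0.05}
negs = sample_negatives(sim, k=10)
print(len(negs))  # 10
```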
3.3 Sequential training of classes
When training the classes sequentially, we train on each class separately, however, our aim is to
1.) maximize the re-use of compositions learned for the previous classes, and 2.) add those missing
(class-specific) compositions that are needed to represent class k sufficiently well. Let V_{1:k-1} denote the vocabulary learned for classes 1 to k-1. To learn a novel class k, for each layer ` we seek a new set of shape models that maximizes f over the data D = {(F_n, X_n, C = k)} conditionally on the already learned vocabulary V^`_{1:k-1}. This is done by treating the hypotheses inferred with respect to V^`_{1:k-1} as fixed, which gives us a starting value of the score function f. Each new model v_R^` is then evaluated and selected conditionally on this value, i.e. such that the difference f(V^`_{1:k-1} ∪ v_R^`) - f(V^`_{1:k-1}) is maximized. Since according to the definition in (4) the likelihood L increases the most when the hypotheses have largely disjoint supports, we can greatly speed up the learning process: the models need to be learned only with respect to those (F, X) in an image that have a low likelihood under the vocabulary V^`_{1:k-1}, which can be determined prior to training.
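Schematically, this conditional selection is a greedy maximization of the score: holding the old vocabulary fixed, keep adding the candidate model with the largest marginal gain in f. The sketch below assumes an abstract black-box `score` in place of f; it is an illustration, not the paper's actual optimization:

```python
def grow_vocabulary(vocabulary, candidates, score, min_gain=0.0):
    """Greedily add candidate models v maximizing score(V + [v]) - score(V).

    `vocabulary` plays the role of the fixed, previously learned V_{1:k-1};
    selection stops once no candidate improves the score by more than
    `min_gain`.
    """
    V = list(vocabulary)
    pool = list(candidates)
    base = score(V)
    while pool:
        gain, best = max(((score(V + [v]) - base, v) for v in pool),
                         key=lambda t: t[0])
        if gain <= min_gain:
            break
        V.append(best)
        pool.remove(best)
        base += gain
    return V
```

Because re-used hypotheses are held fixed, each iteration only has to score the marginal contribution of a candidate, which is what makes the sequential strategy cheap relative to joint training.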
4 Experimental results
We have evaluated the hierarchical multi-class learning strategies on several object classes. Specifically, we used: UIUC multi-scale cars [22], GRAZ [4] cows and persons, Weizmann multi-scale
horses (adapted by Shotton et al. [23]), all five classes from the ETH dataset [24], and all ten classes
from TUD shape2 [25]. Basic information is given in Table 1. A 6-layer vocabulary was learned. 1
The bounding box information was used during training.
When evaluating detection performance, a detection will be counted as correct if the predicted
bounding box coincides with groundtruth more than 50%. On the ETH dataset alone, this threshold
is lowered to 0.3 to enable a fair comparison with the related work [24]. The performance will be
given either with recall at equal error rate (EER), positive detection rate at low FPPI, or as classif.-by-detection (on TUD shape2), depending on the type of results reported on that dataset thus far.
To evaluate the shareability of compositions between the classes, we will use the following measure:

    deg_share(`) = (1 / |V^`|) * Σ_{v_R^` ∈ V^`} [ (# of classes that use v_R^`) - 1 ] / [ # of all classes - 1 ],

defined for each layer ` separately. By "v_R^` used by class C" it is meant that there is a path of edges connecting any of the class-specific shapes V_C^O and v_R^`. To give some intuition behind the measure:
1
The number of layers depends on the objects' size in the training images (it is logarithmic with the number of non-overlapping contour fragments in an image). To enable a consistent evaluation of feature sharing, etc., we have scaled the training images in a way which produced the whole-shape models at layer 6 for each class.
deg_share = 0 if no shape from layer ` is shared (each class uses its own set of shapes), and it is 1 if each shape is used by all the classes. Besides the mean (which defines deg_share), the plots will also show the standard deviation. In sequential training, we can additionally evaluate the degree of re-use when learning each novel class. Higher re-use means lower training time and a more compact representation. We expect a tendency of higher re-use as the number k of classes grows, thus we define it with respect to the number of learned classes:

    deg_transfer(k, `) = (# of v_R^` ∈ V^`_{1:k-1} used by c_k) / (# of all v_R^` ∈ V^`_{1:k} used by c_k)        (5)
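Both measures are easy to compute once, for every composition, the set of classes reaching it through the hierarchy is known; in the sketch below, `users` is a hypothetical mapping from a model to that set of class indices:

```python
def deg_share(layer_models, users, n_classes):
    """Mean of (# classes using a model - 1) / (# classes - 1) over a layer.

    Returns 0 when every class uses only its own models and 1 when every
    model is shared by all classes; the per-model terms also give the
    standard deviation shown in the plots.
    """
    terms = [(len(users[v]) - 1) / (n_classes - 1) for v in layer_models]
    return sum(terms) / len(terms)

def deg_transfer(old_models, layer_models, users, k):
    """Fraction of the models used by class c_k already present in V_{1:k-1}."""
    used_by_k = [v for v in layer_models if k in users[v]]
    reused = [v for v in used_by_k if v in old_models]
    return len(reused) / len(used_by_k)
```

A high deg_transfer for the lower layers is what yields the sublinear growth in vocabulary size reported below.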
Evaluation was performed by progressively increasing the number of object classes (from 2 to 10).
The individual training will be denoted by I, joint by J, and sequential by S.
Table 2 relates the detection performances of I to those of the related work. On the left side, we
report detection accuracy at low FPPI rate for the ETH dataset, averaged over 5 random splits of
training/test images as in [24]. On the right side, recall at EER is given for a number of classes.
Two classes. We performed evaluation on two visually very similar classes (cow, horse), and two
dissimilar classes (person, car). Table 3 gives information on 1.) size (the number of compositions
at each layer), 2.) training and 3.) inference times, 4.) recall at EER. In sequential training, both
possible orders were used (denoted with S1 and S2) to see whether different learning orders (of
classes) affect the performance. The first two rows show the results for each class individually,
while the last row contains information with respect to the conjoined representations. Already for
two classes, the cumulative training time is slightly lower for S than for I, with both being much smaller than that of J.
Degree of sharing. The hierarchies learned in I, J, and S on cows and horses, and J for car-person
are shown in Fig. 2 in a respective order from left to right. The red nodes depict cow/car and blue
horse/person compositions. The green nodes depict the shared compositions. We can observe a
slightly lower number of shareable nodes for S compared to J, yet still the lower layers for cow-horse are almost completely re-used. Even for the visually dissimilar classes (car-person) sharing is
present at lower layers. Numerically, the degrees of sharing and transfer are plotted in Fig. 1.
Detection rate. The recall values for each class are reported in Table 3. Interestingly, "knowing"
horses improved the performance for cows. For car-person, individual training produced the best
result, while training person before car turned out to be a better strategy for S. Fig. 1 shows the
detection rates for cows and horses on the joint test set (the strongest class hypothesis is evaluated),
which allows for a much higher false-positive rate. We evaluate it with F-measure (to account for
FP). A higher performance for all joint representations over the independent one can be observed.
This is due to the high degree of sharing in J and S, which puts similar hypotheses in perspective
and thus discriminates between them better.
Five classes. The results for ETH-5 are reported in Table 4. We used half of the images for training,
and the other half for testing. The split was random, but the same for I, J, and S. We also test
whether different orders in S affect performance (we report an average over 3 random S runs).
Ordering does slightly affect performance, which means we may try to find an optimal order of
classes in training. We can also observe that the number of compositions at each layer is higher for
S than for J (both being much smaller than for I), but this is only slightly reflected in inference times.
Ten classes. The results on TUD-10 are presented in Table 5. A few examples of the learned shapes
for S are shown in Fig. 3. Due to the high training complexity of J, we have only run J for 2, 5 and 10 classes. We report classif.-by-detection (the strongest class hypothesis in an image must overlap with groundtruth more than 50%). To demonstrate the strength of our representation, we have also run a (linear) SVM on top of hypotheses from Layers 1-3, and compared the performances. Already
here, Layer 3 + SVM outperforms prior work [25] by 10%. Fig. 4-(11.) shows classification as a function of the number of learned classes. Our approach consistently outperforms SVM, which is likely due to the
high scale- and rotation- variability of images with which our approach copes well. Fig. 4 shows:
inference time, cumulative training time, degree of sharing (for the final 10-class repr.), transfer, and
classification rates as a function of the number of learned classes.
Vocabulary size. The top row in Fig 4 shows representation size for I, J and S as a function
of learned classes. With respect to worst case (I), both J and S have a highly sublinear growth.
Moreover, in layers 2 and 3, where the burden on inference is the highest (the highest number of
5
inferred hypotheses), an almost constant tendency can be seen. We also compare the curves with
those reported for a flat approach by Opelt et al. [4] in Fig 4-(5). We plot the number of models at
Layer 5 which are approximately of the same granularity as the learned boundary parts in [4]. Both,
J and S hierarchical learning types show a significantly better logarithmic tendency as in [4].
Fig 4-(6) shows the size of the hierarchy file stored on disk. It is worth emphasizing that the hierarchy
subsuming 10 classes uses only 0.5Mb on disk and could fit on an average mobile device.
50 classes. To increase the scale of the experiments we show the performance of sequential training
on 50 classes from LabelMe [1]. The results are presented in Fig. 5. For I in the inference time plot
we used the inference time for the first class linearly extrapolated with the number of classes. We
can observe that S achieves much lower inference times than I, although it is clear that for a higher
number of classes more research is needed to cut down the inference times to a practical value.
Detection rate: JOINT cow+horse dataset
Degree of sharing / re?use: cow?horse
Degree of sharing / re?use: car?person
degree of sharing
F?measure
0.95
0.9
0.85
0.8
0.75
0.7
I
J
S1
learning type
S2
1
0.9
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0
degree of sharing
1
Joint training
Sequential 1: cow + horse
Sequential 2: horse + car
1
2
3
4
5
layer
Joint training
Sequential 1: car + person
Sequential 2: person + car
1
0.9
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1
0
1
2
3
4
5
layer
Figure 1: From left to right: 1.) detection rate (F measure) on the joint cow-horse test set. 2.) degree of
sharing for cow-horse, 3.) car-person vocabularies, 4.) an example detection of a person and horse.
Figure 2: Learned 2-class vocabularies for different learning types (the nodes depict the compositions v_R^`, the links represent the edges e_{Ri}^` between them; the edge parameters are not shown). From left to right: cow-horse hierarchy for 1.) I, 2.) J, 3.) S1, and 4.) car-person J. Green nodes denote shared compositions.
5 Conclusions and discussion
We evaluated three types of multi-class learning strategies in a hierarchical compositional framework, namely 1.) independent, 2.) joint, and 3.) sequential training. A comparison was made through several important computational aspects as well as by detection performance. We conclude that: 1.) Both joint and sequential training strategies exert sublinear growth in vocabulary size (more evidently so in the lower layers) and, consequently, sublinear inference time. This is due to a high degree of sharing and transfer within the resulting vocabularies. The hierarchy obtained by sequential training grows somewhat faster, but not significantly so. 2.) Training time was expectedly worst for joint training, while training time even reduced with each additional class during sequential training. 3.) Different training orders of classes did perform somewhat differently; this means we might try to find an "optimal" order of learning. 4.) Training independently has mostly yielded the best detection rates, but the discrepancy with the other two strategies was low. For similar classes (cow-horse), sequential learning even improved the detection performance, and was in most cases above the joint's performance. By training sequentially, we can learn class-specific features (yet still have a high degree of sharing) which boost performance. Most importantly, sequential training has achieved the best trade-off between detection performance, re-usability, inference and training time. The observed computational properties of all the strategies in general, and sequential learning in particular, go well beyond the reported behavior of flat approaches [4]. This makes sequential learning of compositional hierarchies suitable for representing the classes on a larger scale.
Acknowledgments
This research has been supported in part by the following funds: EU FP7-215843 project POETICON, EU
FP7-215181 project CogX, Research program Computer Vision P2-0214 and Project J2-2221 (ARRS).
Figure 3: A few examples from the learned hierarchical shape vocabulary for S on TUD-10 (layers 1-6 for the classes hammer, pliers, saucepan, and scissors). Each shape in the hierarchy is a composition of shapes from the layer below. Only the mean of each shape is shown.
method             size on disk   classf. rate   train. time   infer. time   size of repr. (L2 L3 L4 L5)
Stark et al. [25]       /             44%
Level 1 + SVM        206 Kb           32%
Level 2 + SVM      3,913 Kb           44%
Level 3 + SVM     34,508 Kb           54%
Independent        1,249 Kb           71%         207 min       12.2 sec      74  96 159 181
Joint                408 Kb           69%         752 min        2.0 sec      14  23  39  59
Sequential           490 Kb           71%         151 min        2.4 sec       9  21  49  76

Table 5: Results on the TUD-10. Classification obtained as classification-by-detection.
Figure 4: Results on TUD-10. Top: (1-4) repr. size as a function of the number of learned classes. Middle:
5.) repr. size compared to [4], 6.) size of hierarchy on disk, 7.) avg. inference time per image, 8.) cumulative
train. time. Bottom: degree of 9.) sharing and 10.) transfer, 11.) classif. rates, 12.) example detection of cup.
Figure 5: Results on 50 object classes from LabelMe [1]. From left to right: Size of representation (number of
compositions per layer), inference times, and deg transfer, all as a function of the number of learned classes.
dataset    ETH shape                            TUD shape1 (train) + TUD shape2 (test)                                 Graz         UIUC   Weizm.
class      apple  bottle  giraffe  mug  swan    cup  fork  hammer  knife  mug  pan  pliers  pot  saucepan  scissors    cow  person  car    horse
# train      19     21      30      24   15      20    20     20     20    20   20     20    20      20        20       20    19     40      20
# test       21     27      57      24   17      10    10     10     10    10   10     10    10      10        10       65    19    108     228

Table 1: Dataset information.
Average detection rate (in %) at 0.4 FPPI, ETH shape database:

class         applelogo    bottle       giraffe       mug          swan         average
[24]          83.2 (1.7)   83.2 (7.5)   58.6 (14.6)   83.6 (8.6)   75.4 (13.4)  76.8
[26]          89.9 (4.5)   76.8 (6.1)   90.5 (5.4)    82.7 (5.1)   84.0 (8.4)   84.8
ind. train.   87.3 (2.6)   86.2 (2.8)   83.3 (4.3)    84.6 (2.3)   78.2 (5.4)   83.7
  (FPPI)      0.32         0.36         0.21          0.27         0.26

Recall at EER (%):

class          cow         person      car scale    horse
Related work   100. [4]    52.6 [4]    90.6 [27]    89.0 [23]
               100. [7]    52.4 [23]   93.5 [29]    93.0 [28]
               98.5 [23]   50.0 [28]   92.1 [13]    /
ind. train.    98.5        60.4        93.5         94.3

Table 2: Comparison of detection rates with related work. Left: Average detection-rate (in %) at 0.4 FPPI for the related work, while we also report the actual FPPI, for the ETH shape database. Right: Recall at EER for various classes. The approaches that use more than just contour information (and are thus not directly comparable to ours) are shaded gray.

[Table 3 columns: size of the representation (number of compositions per layer, L2-L5) for I, J, S1 (1 + 2), and S2 (2 + 1); training time (min); inference time (sec); detection rate, recall at EER (%). Rows: cow (1), horse (2), cow+hrs., car (1), person (2), car+prsn.]

Table 3: Results for different learning types on the cow-horse, and car-person classes.

[Table 4 columns, per ETH class (applelogo, bottle, giraffe, mug, swan, all): size of the representation per layer for I, J, and S (mean (std) over 3 runs); training and inference times for I, J, S; detection rates and actual FPPI for I, J, S.]

Table 4: Results for different learning types on the 5-class ETH shape dataset.
References
[1] Russell, B., Torralba, A., Murphy, K., and Freeman, W. T. (2008) Labelme: a database and web-based tool for image annotation. IJCV, 77, 157-173.
[2] Leibe, B., Leonardis, A., and Schiele, B. (2008) Robust object detection with interleaved categorization and segmentation. IJCV, 77, 259-289.
[3] Torralba, A., Murphy, K. P., and Freeman, W. T. (2007) Sharing visual features for multiclass and multiview object detection. IEEE PAMI, 29, 854-869.
[4] Opelt, A., Pinz, A., and Zisserman, A. (2008) Learning an alphabet of shape and appearance for multi-class object detection. IJCV, 80, 16-44.
[5] Fei-Fei, L., Fergus, R., and Perona, P. (2004) Learning generative visual models from few training examples: an incremental bayesian approach tested on 101 object categories. IEEE CVPR'04 Workshop on Generative-Model Based Vision.
[6] Krempp, S., Geman, D., and Amit, Y. (2002) Sequential learning of reusable parts for object detection. Tech. rep.
[7] Todorovic, S. and Ahuja, N. (2007) Unsupervised category modeling, recognition, and segmentation in images. IEEE PAMI.
[8] Zhu, S. and Mumford, D. (2006) A stochastic grammar of images. Found. and Trends in Comp. Graphics and Vision, 2, 259-362.
[9] Ranzato, M. A., Huang, F.-J., Boureau, Y.-L., and LeCun, Y. (2007) Unsupervised learning of invariant feature hierarchies with applications to object recognition. CVPR.
[10] Ullman, S. and Epshtein, B. (2006) Visual Classification by a Hierarchy of Extended Features. Towards Category-Level Object Recognition, Springer-Verlag.
[11] Sivic, J., Russell, B. C., Zisserman, A., Freeman, W. T., and Efros, A. A. (2008) Unsupervised discovery of visual object class hierarchies. CVPR.
[12] Bart, E., Porteous, I., Perona, P., and Welling, M. (2008) Unsupervised learning of visual taxonomies. CVPR.
[13] Fidler, S. and Leonardis, A. (2007) Towards scalable representations of visual categories: Learning a hierarchy of parts. CVPR.
[14] Scalzo, F. and Piater, J. H. (2005) Statistical learning of visual feature hierarchies. W. on Learning, CVPR.
[15] Zhu, L., Lin, C., Huang, H., Chen, Y., and Yuille, A. (2008) Unsupervised structure learning: Hierarchical recursive composition, suspicious coincidence and competitive exclusion. ECCV, vol. 2, pp. 759-773.
[16] Fleuret, F. and Geman, D. (2001) Coarse-to-fine face detection. IJCV, 41, 85-107.
[17] Schwartz, J. and Felzenszwalb, P. (2007) Hierarchical matching of deformable shapes. CVPR.
[18] Ommer, B. and Buhmann, J. M. (2007) Learning the compositional nature of visual objects. CVPR.
[19] Serre, T., Wolf, L., Bileschi, S., Riesenhuber, M., and Poggio, T. (2007) Object recognition with cortex-like mechanisms. IEEE PAMI, 29, 411-426.
[20] Sudderth, E., Torralba, A., Freeman, W. T., and Willsky, A. (2008) Describing visual scenes using transformed objects and parts. IJCV, pp. 291-330.
[21] Fidler, S., Boben, M., and Leonardis, A. (2009) Optimization framework for learning a hierarchical shape vocabulary for object class detection. BMVC.
[22] Agarwal, S., Awan, A., and Roth, D. (2004) Learning to detect objects in images via a sparse, part-based representation. IEEE PAMI, 26, 1475-1490.
[23] Shotton, J., Blake, A., and Cipolla, R. (2008) Multi-scale categorical object recognition using contour fragments. PAMI, 30, 1270-1281.
[24] Ferrari, V., Fevrier, L., Jurie, F., and Schmid, C. (2007) Accurate object detection with deformable shape models learnt from images. CVPR.
[25] Stark, M. and Schiele, B. (2007) How good are local features for classes of geometric objects? ICCV.
[26] Fritz, M. and Schiele, B. (2008) Decomposition, discovery and detection of visual categories using topic models. CVPR.
[27] Mutch, J. and Lowe, D. G. (2006) Multiclass object recognition with sparse, localized features. CVPR, pp. 11-18.
[28] Shotton, J., Blake, A., and Cipolla, R. (2008) Efficiently combining contour and texture cues for object recognition. BMVC.
[29] Ahuja, N. and Todorovic, S. (2008) Connected segmentation tree: a joint representation of region layout and hierarchy. CVPR.
Vijay V. Desai
IEOR, Columbia University
[email protected]
Vivek F. Farias
MIT Sloan
[email protected]
Ciamac C. Moallemi
GSB, Columbia University
[email protected]
Abstract
We present a novel linear program for the approximation of the dynamic
programming cost-to-go function in high-dimensional stochastic control
problems. LP approaches to approximate DP naturally restrict attention
to approximations that are lower bounds to the optimal cost-to-go function. Our program, the "smoothed approximate linear program", relaxes
this restriction in an appropriate fashion while remaining computationally tractable. Doing so appears to have several advantages: First, we
demonstrate superior bounds on the quality of approximation to the optimal cost-to-go function afforded by our approach. Second, experiments
with our approach on a challenging problem (the game of Tetris) show that
the approach outperforms the existing LP approach (which has previously
been shown to be competitive with several ADP algorithms) by an order of
magnitude.
1 Introduction
Many dynamic optimization problems can be cast as Markov decision problems (MDPs) and
solved, in principle, via dynamic programming. Unfortunately, this approach is frequently
untenable due to the "curse of dimensionality". Approximate dynamic programming (ADP)
is an approach which attempts to address this difficulty. ADP algorithms seek to compute
good approximations to the dynamic programming optimal cost-to-go function within the
span of some pre-specified set of basis functions. The approximate linear programming
(ALP) approach to ADP [1, 2] is one such well-recognized approach.
The program employed in the ALP approach is identical to the LP used for exact computation of the optimal cost-to-go function, with further constraints limiting solutions to
the low-dimensional subspace spanned by the basis functions used. The resulting low dimensional LP implicitly restricts attention to approximations that are lower bounds on the
optimal cost-to-go function. While the structure of this program appears crucial in establishing approximation guarantees for the approach, the restriction to lower bounds leads
one to ask whether the ALP is the "right" LP. In particular, could an appropriate relaxation
of the feasible region of the ALP allow for better approximations to the cost-to-go function
while remaining computationally tractable? Motivated by this question, the present paper
presents a new linear program for ADP we call the "smoothed" ALP (or SALP).
The SALP may be viewed as a relaxation of the ALP wherein one is allowed to violate the
ALP constraints for any given state. A user-defined "violation budget" parameter controls the "expected" violation across states; a budget of 0 thus yields the original ALP. We specify
a choice of this violation budget that yields a relaxation with attractive properties. In
particular, we are able to establish strong approximation guarantees for the SALP; these
guarantees are substantially stronger than the corresponding guarantees for the ALP.
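In symbols, the relaxation just described can be sketched as follows (a schematic of the idea, not necessarily the exact program analyzed later): writing T for the Bellman operator and ν for the objective weights of the exact LP (both defined in Section 2), π for a distribution over states, and θ ≥ 0 for the violation budget,

\[
\begin{array}{ll}
\underset{J,\,s}{\text{maximize}} & \nu^\top J \\
\text{subject to} & J \le TJ + s, \quad s \ge 0, \quad \pi^\top s \le \theta,
\end{array}
\]

so that setting θ = 0 forces s = 0 and recovers the original constraint J ≤ TJ of the ALP.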
The number of constraints and variables in the SALP scale with the size of the MDP
state space. We nonetheless establish sample complexity bounds that demonstrate that an
appropriate "sampled" SALP provides a good approximation to the SALP solution with a
tractable number of sampled MDP states. This sampled program is no more complex than
the "sampled" ALP and, as such, we demonstrate that the SALP is essentially no harder to
solve than the ALP.
We present a computational study demonstrating the efficacy of our approach on the game
of Tetris. The ALP has been demonstrated to be competitive with several ADP approaches
for Tetris (see [3]). In detailed comparisons with the ALP, we estimate that the SALP
provides an order of magnitude improvement over controllers designed via that approach
for the game of Tetris.
2 Problem Formulation
Our setting is that of a discrete-time, discounted infinite-horizon, cost-minimizing MDP
with a finite state space X and finite action space A. Given the state and action at time
t, x_t and a_t, a per-stage cost g(x_t, a_t) is incurred. The subsequent state x_{t+1} is
determined according to the transition probability kernel P_{a_t}(x_t, ·). A stationary policy
μ : X → A is a mapping that determines the action at each time as a function of the state.
Given each initial state x_0 = x, the expected discounted cost (cost-to-go function) of the
policy μ is given by

    J_μ(x) ≜ E_μ[ ∑_{t=0}^∞ α^t g(x_t, μ(x_t)) | x_0 = x ],

where α ∈ (0, 1) is the discount factor. Denote by P_μ ∈ R^{X×X} the transition probability
matrix for the policy μ, whose (x, x')-th entry is P_{μ(x)}(x, x'). Denote by g_μ ∈ R^X the
vector whose x-th entry is g(x, μ(x)). Then, the cost-to-go function J_μ is the unique solution
to the equation T_μ J = J, where the operator T_μ is defined by T_μ J = g_μ + α P_μ J.
The Bellman operator T can be defined according to T J = min_μ T_μ J. Bellman's equation
is then the fixed point equation T J = J. It is readily shown that the optimal cost-to-go
function J* is the unique solution to Bellman's equation and that a corresponding optimal
policy μ* is greedy with respect to J*; i.e., μ* satisfies T J* = T_{μ*} J*.
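As a concrete illustration of these definitions, the sketch below implements the operators T_μ and T for a small hypothetical 2-state, 2-action MDP (the costs g, kernel P, and discount α are invented for the example, not taken from the paper) and computes J* by iterating T, which is an α-contraction:

```python
# Toy MDP: 2 states, 2 actions. g[x][a] is the per-stage cost; P[a][x][y] is the
# transition probability kernel P_a(x, y). All numbers are illustrative only.
ALPHA = 0.9
g = [[1.0, 2.0], [0.5, 3.0]]
P = [
    [[0.8, 0.2], [0.3, 0.7]],  # kernel under action 0
    [[0.1, 0.9], [0.6, 0.4]],  # kernel under action 1
]

def bellman(J):
    """Apply the Bellman operator: (T J)(x) = min_a g(x,a) + alpha * sum_y P_a(x,y) J(y)."""
    return [
        min(g[x][a] + ALPHA * sum(P[a][x][y] * J[y] for y in range(2))
            for a in range(2))
        for x in range(2)
    ]

# T is an alpha-contraction, so iterating it from any J converges to J*.
J = [0.0, 0.0]
for _ in range(500):
    J = bellman(J)
J_star = J

# Fixed-point check: T J* = J* (up to numerical tolerance).
assert all(abs(a - b) < 1e-6 for a, b in zip(bellman(J_star), J_star))
print([round(v, 3) for v in J_star])
```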
Bellman's equation may be solved exactly via the following linear program:

    maximize_J    ν^T J
    subject to    J ≤ T J.        (1)

Here, ν ∈ R^X is a vector with positive components that are known as the state-relevance
weights. The above program is indeed an LP, since the constraint J(x) ≤ (T J)(x) is
equivalent to the set of linear constraints J(x) ≤ g(x, a) + α ∑_{x'∈X} P_a(x, x') J(x'),
∀ a ∈ A. We refer to (1) as the exact LP.

Note that if a vector J satisfies J ≤ T J, then J ≤ T^k J (by monotonicity of the Bellman
operator), and thus J ≤ J* (since the Bellman operator is a contraction with unique fixed
point J*). Then, every feasible point for (1) is a component-wise lower bound to J*, and
J* is the unique optimal solution to the exact LP (1).
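The lower-bound argument above can be checked numerically. The following sketch reuses a toy 2-state MDP (invented for illustration, not from the paper) and verifies that a feasible point J ≤ T J generates a monotone nondecreasing sequence T^k J whose limit J* dominates J componentwise:

```python
# Toy MDP, illustrative numbers only.
ALPHA = 0.9
g = [[1.0, 2.0], [0.5, 3.0]]
P = [
    [[0.8, 0.2], [0.3, 0.7]],
    [[0.1, 0.9], [0.6, 0.4]],
]

def bellman(J):
    return [min(g[x][a] + ALPHA * sum(P[a][x][y] * J[y] for y in range(2))
                for a in range(2)) for x in range(2)]

J0 = [0.0, 0.0]  # costs are nonnegative, so T J0 >= J0: J0 is feasible for the exact LP
assert all(bellman(J0)[x] >= J0[x] for x in range(2))

J = J0
for _ in range(500):
    J_next = bellman(J)
    # monotonicity: the iterates T^k J0 are componentwise nondecreasing
    assert all(J_next[x] >= J[x] - 1e-12 for x in range(2))
    J = J_next

# The limit (numerically J*) dominates the feasible point J0 componentwise.
assert all(J[x] >= J0[x] for x in range(2))
```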
For problems where X is prohibitively large, an ADP algorithm seeks to find a good
approximation to J*. Specifically, one considers a collection of basis functions
{φ_1, ..., φ_K}, where each φ_i : X → R. Defining Φ ≜ [φ_1 φ_2 ... φ_K] to be a matrix with
columns consisting of basis functions, one seeks an approximation of the form J_r = Φr,
with the hope that J_r ≈ J*. The ALP for this task is then simply

    maximize_r    ν^T Φr
    subject to    Φr ≤ T Φr.        (2)

The geometric intuition behind the ALP is illustrated in Figure 1(a). Suppose that r_ALP
is a vector that is optimal for the ALP. Then the approximate value function Φr_ALP will
lie on the subspace spanned by the columns of Φ, as illustrated by the orange line. Φr_ALP
will also satisfy the constraints of the exact LP, illustrated by the dark gray region; this
implies that Φr_ALP ≤ J*. In other words, the approximate cost-to-go function is necessarily
a point-wise lower bound to the true cost-to-go function in the span of Φ.
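On a toy instance the ALP can even be solved by brute force. The sketch below (a hypothetical 2-state MDP with a single invented basis function φ = (1, 2), so Φr is one-dimensional in the scalar weight r) grids over r, keeps the feasible weights, and maximizes ν^T Φr; the winner is, as claimed, a pointwise lower bound on J*:

```python
# Illustrative only: brute-force "ALP" on a hypothetical 2-state MDP.
ALPHA = 0.9
g = [[1.0, 2.0], [0.5, 3.0]]
P = [[[0.8, 0.2], [0.3, 0.7]], [[0.1, 0.9], [0.6, 0.4]]]
phi = [1.0, 2.0]   # single basis function, so Phi r = (r, 2r)
nu = [0.5, 0.5]    # state-relevance weights

def bellman(J):
    return [min(g[x][a] + ALPHA * sum(P[a][x][y] * J[y] for y in range(2))
                for a in range(2)) for x in range(2)]

def J_of(r):
    return [phi[x] * r for x in range(2)]

# Exact J* by value iteration, for comparison.
J_star = [0.0, 0.0]
for _ in range(500):
    J_star = bellman(J_star)

# Grid over r; keep weights satisfying Phi r <= T(Phi r).
feasible = []
r = -20.0
while r <= 20.0:
    J = J_of(r)
    TJ = bellman(J)
    if all(J[x] <= TJ[x] + 1e-9 for x in range(2)):
        feasible.append(r)
    r += 0.01

# Maximize the ALP objective nu^T Phi r over the feasible grid points.
r_alp = max(feasible, key=lambda t: sum(nu[x] * phi[x] * t for x in range(2)))

# The ALP solution is a componentwise lower bound on J*.
assert all(J_of(r_alp)[x] <= J_star[x] + 1e-6 for x in range(2))
```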
[Figure 1 (two-panel cartoon, not reproducible in text): panel (a), the ALP case, and panel
(b), the SALP case, plot the value-function components J(1) and J(2), the subspace J = Φr,
the points Φr_ALP and Φr_SALP, and the optimal cost-to-go function J*.]

Figure 1: A cartoon illustrating the feasible set and optimal solution for the ALP and
SALP, in the case of a two-state MDP. The axes correspond to the components of the value
function. A careful relaxation from the feasible set of the ALP to that of the SALP can
yield an improved approximation.
3 The Smoothed ALP
The J ≤ T J constraints in the exact LP, which carry over to the ALP, impose a strong
restriction on the cost-to-go function approximation: in particular, they restrict us to
approximations that are lower bounds to J* at every point in the state space. In the case
where the state space is very large, and the number of basis functions is (relatively) small,
it may be the case that constraints arising from rarely visited or pathological states are
binding and influence the optimal solution. In many cases, our ultimate goal is not to find
a lower bound on the optimal cost-to-go function, but rather a good approximation to J*.
In these instances, it may be the case that relaxing the constraints in the ALP so as not to
require a uniform lower bound may allow for better overall approximations to the optimal
cost-to-go function. This is also illustrated in Figure 1: relaxing the feasible region of the
ALP in Figure 1(a) to the light gray region in Figure 1(b) would yield the point Φr_SALP as
an optimal solution. The relaxation in this case is clearly beneficial; it allows us to compute
a better approximation to J* than the point Φr_ALP. Can we construct a fruitful relaxation
of this sort in general? The smoothed approximate linear program (SALP) is given by:

    maximize_{r,s}    ν^T Φr
    subject to        Φr ≤ T Φr + s,
                      π^T s ≤ θ,  s ≥ 0.        (3)
Here, a vector s ∈ R^X of additional decision variables has been introduced. For each state x,
s(x) is a non-negative decision variable (a slack) that allows for violation of the corresponding
ALP constraint. The parameter θ ≥ 0 is a non-negative scalar. The parameter π ∈ R^X is
a probability distribution known as the constraint violation distribution. The parameter θ
is thus a violation budget: the expected violation of the Φr ≤ T Φr constraint, under the
distribution π, must be less than θ. The balance of the paper is concerned with establishing
that the SALP forms the basis of a useful ADP algorithm in large scale problems:

• We identify a concrete choice of violation budget θ and an idealized constraint
violation distribution π for which the SALP provides a useful relaxation, in that the
optimal solution can be a better approximation to the optimal cost-to-go function.
This brings the cartoon improvement in Figure 1 to fruition for general problems.

• We show that the SALP is tractable (i.e., it is well approximated by an appropriate
"sampled" version) and present computational experiments for a hard problem
(Tetris) illustrating an order of magnitude improvement over the ALP.
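The effect of the slack variables can be seen on a toy instance: for a fixed r, the cheapest slack is s(x) = max(0, (Φr)(x) − (TΦr)(x)), so r is SALP-feasible exactly when the π-expected slack is within the budget θ. The sketch below (hypothetical 2-state MDP, invented basis function and distributions, not the paper's experiments) shows the budget enlarging the feasible set and reducing the ν-weighted error relative to the ALP (θ = 0):

```python
# Illustrative SALP-vs-ALP comparison on a hypothetical 2-state MDP.
ALPHA = 0.9
g = [[1.0, 2.0], [0.5, 3.0]]
P = [[[0.8, 0.2], [0.3, 0.7]], [[0.1, 0.9], [0.6, 0.4]]]
phi = [1.0, 2.0]
nu = [0.5, 0.5]   # state-relevance weights
pi = [0.5, 0.5]   # constraint violation distribution

def bellman(J):
    return [min(g[x][a] + ALPHA * sum(P[a][x][y] * J[y] for y in range(2))
                for a in range(2)) for x in range(2)]

J_star = [0.0, 0.0]
for _ in range(500):
    J_star = bellman(J_star)

def salp_opt(theta):
    """Grid-search the SALP over r: maximize nu^T Phi r s.t. E_pi[slack] <= theta."""
    best = None
    r = -20.0
    while r <= 20.0:
        J = [phi[x] * r for x in range(2)]
        TJ = bellman(J)
        slack = [max(0.0, J[x] - TJ[x]) for x in range(2)]
        if sum(pi[x] * slack[x] for x in range(2)) <= theta + 1e-9:
            best = r  # objective nu^T Phi r is increasing in r for this instance
        r += 0.01
    return best

def err(r):
    """nu-weighted L1 error of the approximation Phi r."""
    return sum(nu[x] * abs(J_star[x] - phi[x] * r) for x in range(2))

r_alp = salp_opt(0.0)    # theta = 0 recovers the ALP
r_salp = salp_opt(0.5)   # a positive violation budget relaxes the feasible set
print(err(r_alp), err(r_salp))
```

On this instance the relaxed problem admits a larger weight and a strictly smaller weighted error, mirroring the cartoon in Figure 1.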
4 Analysis
This section is dedicated to a theoretical analysis of the SALP. The overarching objective
of this analysis is to provide some assurance of the soundness of the proposed approach. In
addition, our analysis will serve as a crucial guide to practical implementation of the SALP.
Our analysis will present two types of results: First, we prove approximation guarantees
(Sections 4.1 and 4.2) that will indicate that the SALP computes approximations that
are of comparable quality to the projection of J* on the linear span of Φ. Second, we
show (Section 4.3) that an implementable "sampled" version of the SALP may be used to
approximate the SALP with a tractable number of samples. All proofs can be found in the
technical appendix.
Idealized Assumptions: Given the broad scope of problems addressed by ADP algorithms,
analyses of such algorithms typically rely on an "idealized" assumption of some sort.
In the case of the ALP, one either assumes the ability to solve a linear program with as many
constraints as there are states, or, absent that, knowledge of a certain idealized sampling
distribution, so that one can then proceed with solving a "sampled" version of the ALP.
Our analysis of the SALP in this section is predicated on knowledge of an idealized
constraint violation distribution, which is this same idealized sampling distribution. In
particular, we will require access to samples drawn according to the distribution π_{ν,μ*},
given by

    π_{ν,μ*}^T ≜ (1 − α) ν^T (I − α P_{μ*})^{-1}.

Here ν is an arbitrary initial distribution over states. The distribution π_{ν,μ*} may be
interpreted as yielding the discounted expected frequency of visits to a given state when the
initial state is distributed according to ν and the system runs under the optimal policy μ*.
We note that the "sampled" ALP introduced by de Farias and Van Roy [2] requires access
to states sampled according to precisely this distribution.
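For a small chain, π_{ν,μ*} can be computed directly via the Neumann series (1 − α) ∑_t α^t ν^T P_{μ*}^t. The sketch below uses an invented 2-state transition matrix (not one from the paper) and checks that the result is a probability distribution:

```python
# Discounted state-visitation distribution pi = (1 - alpha) nu^T (I - alpha P)^(-1),
# computed by truncating the Neumann series. Numbers are illustrative only.
ALPHA = 0.9
P = [[0.8, 0.2], [0.3, 0.7]]   # transition matrix of the (optimal) policy
nu = [1.0, 0.0]                # initial state distribution

def step(dist):
    """One transition: dist^T P."""
    return [sum(dist[x] * P[x][y] for x in range(2)) for y in range(2)]

pi = [0.0, 0.0]
dist, w = nu[:], 1.0 - ALPHA
for _ in range(2000):          # alpha^t decays geometrically, so truncation is safe
    for y in range(2):
        pi[y] += w * dist[y]
    dist = step(dist)
    w *= ALPHA

assert abs(sum(pi) - 1.0) < 1e-6   # pi is a probability distribution
print([round(p, 4) for p in pi])
```

Because early visits are weighted most heavily and the chain starts in state 0, the resulting distribution puts more mass on state 0 than the stationary distribution of P does.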
4.1 A Simple Approximation Guarantee
We present a first, simple approximation guarantee for the following specialization of the
SALP in (3):

    maximize_{r,s}    ν^T Φr
    subject to        Φr ≤ T Φr + s,
                      π_{ν,μ*}^T s ≤ θ,  s ≥ 0.        (4)

Before we proceed to state our result, we define a useful function:

    ℓ(r, θ) ≜ minimize_{s,γ}    γ
              subject to        Φr ≤ T Φr + s + γ1,
                                π_{ν,μ*}^T s ≤ θ,  s ≥ 0.        (5)

ℓ(r, θ) is the minimum translation (in the direction of the vector 1) of an arbitrary weight
vector r so as to result in a feasible vector for (4). We will denote by s(r, θ) the s component
of the solution to (5). The following lemma characterizes ℓ(r, θ):
Lemma 1. For any r ∈ R^K and θ ≥ 0:

(i) ℓ(r, θ) is a bounded, decreasing, piecewise linear, convex function of θ.

(ii) ℓ(r, θ) ≤ (1 + α) ‖J* − Φr‖_∞.

(iii) (∂/∂θ) ℓ(r, 0) = −1 / ∑_{x∈Ω(r)} π_{ν,μ*}(x), where Ω(r) = argmax_{x∈X} Φr(x) − T Φr(x).
Armed with this definition, we are now in a position to state our first, crude approximation
guarantee:

Theorem 1. Let 1 be in the span of Φ and ν be a probability distribution. Let r̂ be an
optimal solution to the SALP (4). Moreover, let r* satisfy r* ∈ argmin_r ‖J* − Φr‖_∞.
Then,

    ‖J* − Φr̂‖_{1,ν} ≤ ‖J* − Φr*‖_∞ + (ℓ(r*, θ) + 2θ)/(1 − α).
The above theorem allows us to interpret (ℓ(r*, θ) + 2θ)/(1 − α) as the approximation error
associated with the SALP solution r̂. Consider setting θ = 0, in which case (4) is identical
to the ALP. In this case, we have from Lemma 1 that ℓ(r*, 0) ≤ (1 + α)‖J* − Φr*‖_∞, so
that the right hand side of our bound is at most (2/(1 − α)) ‖J* − Φr*‖_∞. This is precisely
Theorem 2 in de Farias and Van Roy [1]; we recover their approximation guarantee for the
ALP. Next observe that, from (iii), if the set Ω(r*) is of small probability according to the
distribution π_{ν,μ*}, we expect that ℓ(r*, θ) will decrease dramatically as θ is increased
from 0. In the event that Φr*(x) − T Φr*(x) is large for only a small number of states (that
is, the Bellman error of the approximation produced by r* is large for only a small number
of states), we thus expect to have a choice of θ for which ℓ(r*, θ) + 2θ ≪ ℓ(r*, 0). Thus,
Theorem 1 reinforces the intuition (shown via Figure 1) that the SALP will permit closer
approximations to J* than the ALP.
The bound in Theorem 1 leaves room for improvement:

1. The right hand side of our bound measures projection error, ‖J* − Φr*‖_∞, in the
L_∞-norm. Since it is unlikely that the basis functions Φ will provide a uniformly
good approximation over the entire state space, the right hand side of our bound
could be quite large.

2. The choice of state-relevance weights can significantly influence the solution. While
we do not show this here, this choice allows us to choose regions of the state space
where we would like a better approximation of J*. The right hand side of our
bound, however, is independent of ν.

3. Our guarantee does not suggest a concrete choice of the violation budget, θ.

The next section will present a substantially refined approximation bound.
4.2 A Better Approximation Guarantee
With the intent of deriving stronger approximation guarantees, we begin this section by
introducing a "nicer" measure of the quality of approximation afforded by Φ. In particular,
instead of measuring ‖J* − Φr*‖ in the L_∞ norm as we did for our previous bounds, we
will use a weighted max norm defined according to ‖J‖_{∞,1/ψ} ≜ max_{x∈X} |J(x)|/ψ(x),
where ψ : X → [1, ∞) is a given weighting function. The weighting function ψ allows us to
weight approximation error in a non-uniform fashion across the state space and in this
manner potentially ignore approximation quality in regions of the state space that "don't
matter". In addition to specifying the constraint violation distribution π as we did for our
previous bound, we will specify (implicitly) a particular choice of the violation budget θ. In
particular, we will consider solving the following SALP:

    maximize_{r,s}    ν^T Φr − 2π_{ν,μ*}^T s/(1 − α)
    subject to        Φr ≤ T Φr + s,  s ≥ 0.        (6)
It is clear that (6) is equivalent to (4) for a specific choice of θ. We then have:

Theorem 2. Let Ψ ≜ {ψ ∈ R^{|X|} : ψ ≥ 1}. For every ψ ∈ Ψ, let
β(ψ) = max_x (P_{μ*} ψ)(x)/ψ(x). Then, for an optimal solution (r_SALP, s̄) to (6), we have:

    ‖J* − Φr_SALP‖_{1,ν} ≤ inf_{r, ψ∈Ψ} ‖J* − Φr‖_{∞,1/ψ} ( ν^T ψ + 2(π_{ν,μ*}^T ψ + 1)(αβ(ψ) + 1)/(1 − α) ).
It is worth placing the result in context to understand its implications. For this, we recall
a closely related result shown by de Farias and Van Roy [1] for the ALP. In particular,
de Farias and Van Roy [1] showed that given an appropriate weighting (or, in their context,
"Lyapunov") function ψ, one may solve an ALP, with ψ in the span of the basis functions
Φ; the solution to such an ALP then satisfies:

    ‖J* − Φr̂‖_{1,ν} ≤ inf_r ‖J* − Φr‖_{∞,1/ψ} · 2ν^T ψ/(1 − αβ(ψ)),

provided β(ψ) < 1/α. Selecting an appropriate ψ in their context is viewed to be an
important task for practical performance and often requires a good deal of problem-specific
analysis; de Farias and Van Roy [1] identify appropriate ψ for several queueing models (note
that this is equivalent to identifying a desirable basis function). In contrast, the guarantee
we present optimizes over all possible ψ.¹ Thus, the approximation guarantee of Theorem 2
allows us to view the SALP as automating the critical procedure of identifying a good
Lyapunov function for a given problem.
4.3 Sample Complexity
Our analysis thus far has assumed we have the ability to solve the SALP, a program with
a potentially intractable number of constraints and variables. As it turns out, a solution
to the SALP is well approximated by the solution to a certain "sampled" program, which
we now describe: Let X̂ = {x_1, x_2, ..., x_S} be an ordered collection of S states drawn
independently from X according to the distribution π_{ν,μ*}. Let us consider solving the
following program, which we call the sampled SALP:

    maximize_{r,s}    ν^T Φr − (2/((1 − α)S)) ∑_{x∈X̂} s(x)
    subject to        (Φr)(x) ≤ (T Φr)(x) + s(x),  ∀ x ∈ X̂,
                      r ∈ N,  s ≥ 0.        (7)

Here N ⊂ R^K is a parameter set chosen to contain the optimal solution to the SALP (6),
r_SALP. Notice that (7) is a linear program with K + S variables and S|A| constraints. For
a moderate number of samples S, this is easily solved. We will provide a sample complexity
bound that indicates that, for a number of samples S that scales linearly with the dimension
of Φ, K, and that need not depend on the size of the state space, the solution to the sampled
SALP satisfies, with high probability, the approximation guarantee presented for the SALP
solution in Theorem 2.

Let us define the constant B ≜ sup_{r∈N} ‖(Φr − T Φr)⁺‖_∞. This quantity is closely related
to the diameter of the region N. We then have:
Theorem 3. Under the conditions of Theorem 2, let r_SALP be an optimal solution to the
SALP (6), and let r̂_SALP be an optimal solution to the sampled SALP (7). Assume that
r_SALP ∈ N. Further, given ε ∈ (0, B] and δ ∈ (0, 1/2], suppose that the number of sampled
states S satisfies

    S ≥ (64B²/ε²) ( 2(K + 2) log(16eB/ε) + log(8/δ) ).

Then, with probability at least 1 − δ − 2^{−383ε²/128},

    ‖J* − Φr̂_SALP‖_{1,ν} ≤ inf_{r∈N, ψ∈Ψ} ‖J* − Φr‖_{∞,1/ψ} ( ν^T ψ + 2(π_{ν,μ*}^T ψ + 1)(αβ(ψ) + 1)/(1 − α) ) + 4ε/(1 − α).
Theorem 3 establishes that the sampled SALP provides a close approximation to the solution
of the SALP, in the sense that the approximation guarantees we established for the SALP
are approximately valid for the solution to the sampled version with high probability. The
number of samples we require to accomplish this task is specified precisely via the theorem.
This number depends linearly on the number of basis functions and the diameter of the
feasible region, but is otherwise independent of the size of the state space for the MDP under
consideration. It is worth juxtaposing our sample complexity result with that available for
the ALP. In particular, we recall that the ALP has a large number of constraints but a small
number of variables; the SALP is thus, at least superficially, a significantly more complex
program. Exploiting the fact that the ALP has a small number of variables, de Farias
and Van Roy [2] establish a sample complexity bound for a sampled version of the ALP
analogous to (7). The number of samples required for this sampled ALP to produce a good
approximation to the ALP can be shown to depend on the same problem parameters we
have identified here, viz. B and the number of basis functions K. The sample complexity in
that case is identical to the sample complexity bound established here up to constants and
an additional multiplicative factor of B/ε (for the sampled SALP). Thus, the two sample
complexity bounds are within polynomial terms of each other, and we have established that
the SALP is essentially no harder to solve than the ALP.

This section places the SALP on solid theoretical ground by establishing strong
approximation guarantees for the SALP that represent a substantial improvement over those
available for the ALP, and sample complexity results that indicate that the SALP is
implementable via sampling. We next present a computational study that tests the SALP
relative to other ADP methods (including the ALP) on a hard problem (the game of Tetris).

¹ This includes those ψ that do not satisfy the Lyapunov condition β(ψ) < 1/α.
5 Case Study: Tetris
Our interest in Tetris as a case study for the SALP algorithm is motivated by several facts.
Theoretical results suggest that design of an optimal Tetris player is a difficult problem
[4-6]. Tetris represents precisely the kind of large and unstructured MDP for which it is
difficult to design heuristic controllers, and hence policies designed by ADP algorithms are
particularly relevant. Moreover, Tetris has been employed by a number of researchers as a
testbed problem [3, 7-9]. We follow the formulation of Tetris as an MDP presented by Farias
and Van Roy [3]. The SALP methodology was applied as follows:

Basis functions. We employed the 22 basis functions originally introduced in [7].

State sampling. Given a sample size S, a collection X̂ ⊂ X of S states was sampled. These
samples were generated in an IID fashion from the stationary distribution of a (rather poor)
baseline policy.²
Optimization. Given the collection X̂ of sampled states, an increasing sequence of choices
of the violation budget θ ≥ 0 is considered. For each choice of θ, the optimization program

    maximize_{r,s}    (1/S) ∑_{x∈X̂} (Φr)(x)
    subject to        (Φr)(x) ≤ (T Φr)(x) + s(x),  ∀ x ∈ X̂,
                      (1/S) ∑_{x∈X̂} s(x) ≤ θ,
                      s(x) ≥ 0,  ∀ x ∈ X̂,        (8)

was solved. This program is a version of the original SALP (3), but with sampled empirical
distributions in place of the state-relevance weights ν and the constraint violation
distribution π. Note that (8) has K + S decision variables and S|A| linear constraints.
Because of the sparsity structure of the constraints, however, it is amenable to efficient
solution via barrier methods, even for large values of S.
Evaluation. Given a vector of weights obtained by solving (8), the performance of the
corresponding policy is evaluated via Monte Carlo simulation over 3,000 games of Tetris.
Performance is measured in terms of the average number of lines cleared in a single game.

For each pair (S, θ), the resulting average performance (averaged over 10 different sets of
sampled states) is shown in Figure 2. It provides experimental evidence for the intuition
expressed in Section 3 and the analytic result of Theorem 1: relaxing the constraints of
the ALP by allowing for a violation budget allows for better policy performance. As the
violation budget θ is increased from 0, performance dramatically improves. At θ = 0.16384,
the performance peaks, and we get policies that are an order of magnitude better than the
ALP's; beyond that, the performance deteriorates.

² Our baseline policy had an average performance of 113 points.
[Figure 2 (plot not reproducible in text): average performance (×10³ lines) versus sample
size S (×10³, from 50 to 300), with one curve per violation budget
θ ∈ {0 (ALP), 0.00256, 0.01024, 0.02048, 0.16384, 0.65536}.]

Figure 2: Average performance of SALP for different values of the number of sampled states
S and the violation budget θ.
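The evaluation step described above can be sketched generically: form the policy that is greedy with respect to the approximation J̃ = Φr and estimate its cost by Monte Carlo rollouts. The toy 2-state MDP and the weight r below are invented for illustration (the paper's experiments instead use Tetris and count lines cleared):

```python
# Hedged sketch of greedy-policy Monte Carlo evaluation on a toy MDP.
import random

ALPHA = 0.9
g = [[1.0, 2.0], [0.5, 3.0]]
P = [[[0.8, 0.2], [0.3, 0.7]], [[0.1, 0.9], [0.6, 0.4]]]
phi = [1.0, 2.0]
r = 3.2  # a weight vector, e.g. produced by a (sampled) SALP; value is illustrative

def greedy_action(x):
    """argmin_a g(x,a) + alpha * sum_y P_a(x,y) * (Phi r)(y)."""
    return min(range(2),
               key=lambda a: g[x][a] + ALPHA * sum(P[a][x][y] * phi[y] * r
                                                   for y in range(2)))

def rollout(x, horizon=200, rng=random):
    """Discounted cost of one simulated trajectory starting from state x."""
    total, w = 0.0, 1.0
    for _ in range(horizon):
        a = greedy_action(x)
        total += w * g[x][a]
        w *= ALPHA
        x = rng.choices(range(2), weights=P[a][x])[0]
    return total

random.seed(0)
estimate = sum(rollout(0) for _ in range(2000)) / 2000.0
print(round(estimate, 2))
```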
Table 1 summarizes the performance of the best policies obtained by various ADP algorithms.
Note that all of these algorithms employ the same basis function architecture. The ALP and
SALP results are from our experiments, while the other results are from the literature. The
best performance result of the SALP is better by a factor of 2 in comparison to the
competitors.

    Algorithm                     Best Performance   CPU Time
    ALP                                        897   hours
    TD-Learning [7]                          3,183   minutes
    ALP with bootstrapping [3]               4,274   hours
    TD-Learning [8]                          4,471   minutes
    Policy gradient [9]                      5,500   days
    SALP                                    10,775   hours

Table 1: Comparison of the performance of the best policy found with various ADP methods.
Note that significantly better policies are possible with this basis function architecture than
any of the ADP algorithms in Table 1 discover. Using a heuristic optimization method,
Szita and Lőrincz [10] report policies with a remarkable average performance of 350,000.
Their method is computationally intensive, however, requiring one month of CPU time. In
addition, the approach employs a number of rather arbitrary Tetris-specific "modifications"
that are ultimately seen to be critical to performance; in the absence of these modifications,
the method is unable to find a policy for Tetris that scores above a few hundred points.
6 Future Directions
There are a number of interesting directions that remain to be explored. First, note that the
bounds derived in Sections 4.1 and 4.2 are approximation guarantees, which bound the
approximation error of the SALP approach relative to the best approximation possible with
the particular set of basis functions. In preliminary work, we have also developed
performance guarantees. These provide bounds on the performance of the resulting SALP
policies, as a function of the basis architecture. Second, note that sample path variations
of the SALP are possible. Rather than solving a large linear program, such an algorithm
would optimize a policy in an online fashion along a single system trajectory. This would be
in a manner reminiscent of stochastic approximation algorithms like TD-learning. However,
a sample path SALP variation would inherit all of the theoretical bounds developed here.
The design and analysis of such an algorithm is an exciting future direction.
References

[1] D. P. de Farias and B. Van Roy. The linear programming approach to approximate
dynamic programming. Operations Research, 51(6):850-865, 2003.

[2] D. P. de Farias and B. Van Roy. On constraint sampling in the linear programming
approach to approximate dynamic programming. Mathematics of Operations Research,
29(3):462-478, 2004.

[3] V. F. Farias and B. Van Roy. Tetris: A study of randomized constraint sampling. In
Probabilistic and Randomized Methods for Design Under Uncertainty. Springer-Verlag,
2006.

[4] J. Brzustowski. Can you win at Tetris? Master's thesis, University of British Columbia,
1992.

[5] H. Burgiel. How to lose at Tetris. Mathematical Gazette, page 194, 1997.

[6] E. D. Demaine, S. Hohenberger, and D. Liben-Nowell. Tetris is hard, even to
approximate. In Proceedings of the 9th International Computing and Combinatorics
Conference, 2003.

[7] D. P. Bertsekas and S. Ioffe. Temporal differences-based policy iteration and
applications in neuro-dynamic programming. Technical Report LIDS-P-2349, MIT
Laboratory for Information and Decision Systems, 1996.

[8] D. P. Bertsekas and J. N. Tsitsiklis. Neuro-Dynamic Programming. Athena Scientific,
Belmont, MA, 1996.

[9] S. Kakade. A natural policy gradient. In Advances in Neural Information Processing
Systems 14, Cambridge, MA, 2002. MIT Press.

[10] I. Szita and A. Lőrincz. Learning Tetris using the noisy cross-entropy method. Neural
Computation, 18:2936-2941, 2006.

[11] D. Haussler. Decision theoretic generalizations of the PAC model for neural net and
other learning applications. Information and Computation, 100:78-150, 1992.
DISTRIBUTED NEURAL INFORMATION PROCESSING
IN THE VESTIBULO-OCULAR SYSTEM
Clifford Lau
Office of Naval Research Detach ment
Pasadena, CA 91106
Vicente Honrubia*
UCLA Division of Head and Neck Surgery
Los Angeles, CA 90024
ABSTRACT
A new distributed neural information-processing
model is proposed to explain the response characteristics
of the vestibulo-ocular system and to reflect more
accurately the latest anatomical and neurophysiological
data on the vestibular afferent fibers and vestibular nuclei.
In this model, head motion is sensed topographically by hair
cells in the semicircular canals. Hair cell signals are then
processed by multiple synapses in the primary afferent
neurons which exhibit a continuum of varying dynamics. The
model is an application of the concept of "multilayered"
neural networks to the description of findings in the
bullfrog vestibular nerve, and allows us to formulate
mathematically the behavior of an assembly of neurons
whose physiological characteristics vary according to their
anatomical properties.
INTRODUCTION
Traditionally, the physiological properties of
individual vestibular afferent neurons have been modeled as
a linear time-invariant system based on Steinhausen's
description of cupular motion.1 The vestibular nerve input
to different parts of the central nervous system is usually
represented by vestibular primary afferents that have
*Work supported by grants NS09823 and NS08335 from the National
Institutes of Health (NINCDS) and grants from the Pauley Foundation and the
Hope for Hearing Research Foundation.
© American Institute of Physics 1988
response properties defined by population averages from
individual neurons.2
A new model of vestibular nerve organization is
proposed to account for the observed variabilities in the
primary vestibular afferent's anatomical and physiological
characteristics. The model is an application of the concept
of "multilayered" neural networks,3,4 and it attempts to
describe the behavior of the entire assembly of vestibular
neurons based on new physiological and anatomical findings
in the frog vestibular nerve. It was found that primary
vestibular afferents show systematic differences in
sensitivity and dynamics and that there is a correspondence
between the individual neuron's physiological properties and
the location of innervation in the area of the crista and also
the sizes of the neuron's fibers and somas. This new view
of topological organization of the receptor and vestibular
nerve afferents is not included in previous models of
vestibular nerve function. Detailed findings from this
laboratory on the anatomical and physiological properties of
the vestibular afferents in the bullfrog have been
published. 5 ,6
REVIEW OF THE ANATOMY AND PHYSIOLOGY
OF THE VESTIBULAR NERVE
The most pertinent anatomical and physiological data
on the bullfrog vestibular afferents are summarized here.
In the vestibular nerve from the anterior canal four major
branches (bundles) innervate different parts of the crista
(Figure 1). From serial histological sections it has been
shown that fibers in the central bundle innervate hair cells
at the center of the crista, and the lateral bundles project
to the periphery of the crista. In each nerve there is an
average of 1170 ± 171 (n = 5) fibers, of which the thick
fibers (diameter > 7.0 microns, large dots) constitute 8%
and the thin fibers (diameter ≤ 4.0 microns, small dots) 76%. The
remaining fibers (16%) fall into the range between 4.0 and
7.0 microns. We found that the thick fibers innervate only
the center of the crista, and the thinner ones predominantly
innervate the periphery.
Fig. 1. Number of fibers and their diameters in the anterior
semicircular canal nerve in the bullfrog.
There appears to be a physiological and anatomical
correlation between fiber size and degree of regularity of
spontaneous activity. By recording from individual neurons
and subsequently labeling them with horseradish peroxidase
intracellularly placed in the axon, it is possible to visualize
and measure individual ganglion cells and axons and to
determine the origin of the fiber in the crista as well as the
projections in different parts of the vestibular nuclei.
Figure 2 shows an example of three neurons of different
sizes and degrees of regularity of spontaneous activity. In
general, fibers with large diameters tend to be more
irregular with large coefficients of variation (CV) of the
interspike intervals, whereas thin fibers tend to be more
regular. There is also a relationship for each neuron
between CV and the magnitude of the response to
physiological rotatory stimuli, that is, the response gain.
(Gain is defined as the ratio of the response in spikes per
second to the stimulus in degrees per second.) Figure 3
shows a plot of gain as a function of CV as well as of fiber
diameter. For the more regular fibers (CV < 0.5), the gain
tends to increase as the diameter of the fiber increases.
[Figure 2 here: spontaneous spike trains from a thin (C.V. = 0.25), a medium (C.V. = 0.39) and a thick (C.V. = 0.61) fiber; time scale bars, 200 milliseconds.]
Fig. 2. Examples of thin, medium and thick fibers and their
spontaneous activity. CV - coefficient of variation.
For the more irregular fibers (CV > 0.5), the gain tends to
remain the same with increasing fiber diameter (4.9 ? 1.9
spikes/second/deg rees/seco nd).
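The coefficient of variation used throughout this section is simply the standard deviation of the interspike intervals divided by their mean. A minimal sketch (our own helper, not from the paper):

```python
import numpy as np

def interspike_cv(spike_times):
    """Coefficient of variation (CV) of interspike intervals.

    CV = std(ISI) / mean(ISI). Regular firing gives a CV near 0;
    Poisson-like irregular firing gives a CV near 1.
    """
    isis = np.diff(np.sort(np.asarray(spike_times, dtype=float)))
    return isis.std() / isis.mean()

# A perfectly regular train (one spike every 10 ms) has CV close to 0.
regular = np.arange(0.0, 1.0, 0.01)
cv_regular = interspike_cv(regular)
```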
Figure 4 shows the location of projection of the
afferent fibers at the vestibular nuclei from the anterior,
posterior, and horizontal canals and saccule. There is an
overall organization in the pattern of innervation from the
afferents of each vestibular organ to the vestibular nuclei,
with fibers from different receptors overlapping in various
[Figure 3 here: scatter plots of gain (spikes/second/degree/second) against fiber diameter (roughly 3.8-15.3 microns) and against coefficient of variation (0-1.2).]
Fig. 3. Gain versus fiber diameters and CV. Stimulus was a
sinusoidal rotation of 0.05 Hz at 22 degrees/second peak
velocity.
parts of the vestibular nuclei. Fibers from the anterior
semicircular canal tend to travel ventrally, from the
horizontal canal dorsally, and from the posterior canal the
most dorsally.
For each canal nerve the thick fibers (indicated by
large dots) tend to group together to travel lateral to the
thin fibers (indicated by diffused shading); thus, the
topographical segregation between thick and thin fibers at
the periphery is preserved at the vestibular nuclei.
In following the trajectories of individual neurons in
the central nervous system, however, we found that each
fiber innervates all parts of the vestibular nuclei, caudally
to rostrally as well as transversely, and because of the
spread of the large number of branches, as many as 200
from each neuron, there is a great deal of overlap among the
projections.
DISTRIBUTED NEURAL INFORMATION-PROCESSING MODEL
Figure 5 represents a conceptual organization, based
on the above anatomical and physiological data, of Scarpa's
[Figure 4 here: reconstruction panels labelled ANT., POST., HORIZ. and SAC.; scale bar approximately 300 microns.]
Fig. 4. Three-dimensional reconstruction of the primary
afferent fibers' location in the vestibular nuclei.
ganglion cells of the vestibular nerve and their innervation
of the hair cells and of the vestibular nuclei. The diagram
depicts large Scarpa's ganglion cells with thick fibers
in nervating restricted areas of hair cells near the center of
the crista (top) and smaller Scarpa's ganglion cells with
thin fibers on the periphery of the crista innervating
multiple hair cells with a great deal of overlap among
fibers. At the vestibular nuclei, both thick and thin fibers
innervate large areas with a certain gradient of overlapping
among fibers of different diameters.
The new distributed neural information-processing
model for the vestibular system is based on this anatomical
organization, as shown in Figure 6. The response
Fig. 5. Anatomical organization of the vestibular nerve. H.C. - hair cells. S.G. - Scarpa's ganglion cells. V.N. - vestibular nuclei.
Fig. 6. Distributed neural information-processing model of
the vestibular nerve.
characteristic of the primary afferent fiber is represented
by the transfer function SGj(s). This transfer function
serves as a description of the gain and phase response of
individual neurons to angular rotation. The simplest model
would be a first-order system with d.c. gain Kj (spikes/
second over head acceleration) and a time constant Tj
(seconds) for the jth fiber as shown in equation (1):

    SGj(s) = Kj / (1 + s Tj)                       (1)
For the bullfrog, Kj can range from about 3 to 25
spikes/second/degree/second 2 , and Tj from about 10 to 0.5
second. The large and high-gain neurons are more phasic
than the small neurons and tend to have shorter time
constants. As described above, Kj and Tj for the jth neuron
are functions of location and fiber diameter. Bode plots
(gain and phase versus frequency) of experimental data
seem to indicate, however, that a better transfer function
would consist of a higher-order system that includes
fractional power. This is not surprising since the afferent
fiber response characteristic must be the weighted sum of
several electromechanical steps of transduction in the hair
cells. A plausible description of these processes is given in
equation (2):
    SGj(s) = Σk Wjk Kk / (1 + s Tk),               (2)
where gain Kk and time constant Tk are the electromechanical properties of the hair cell-cupula complex and
are functions of location on the crista, and Wjk is the
synaptic efficacy (strength) between the jth neuron and the
kth hair cell. In this context, the transfer function given
in equation (1) provides a measure of the "weighted
average" response of the multiple synapses given in
equation (2).
We also postulate that the responses of the vestibular
nuclei neurons reflect the weighted sums of the responses
of the primary vestibular afferents, as follows:
    VNi = f ( Σj Tij SGj ),                        (3)
where f(.) is a sigmoid function describing the change in
firing rates of individual neurons due to physiological
stimulation. It is assumed to saturate between 100 to 300
spikes/second, depending on the neuron. Tij is the synaptic
efficacy (strength) between the ith vestibular neuron and
the jth afferent fiber.
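Equations (1)-(3) describe a linear-filter-plus-sigmoid model that is easy to evaluate numerically. The sketch below is our own illustration, not code from the paper; combining response magnitudes rather than the full complex transfer functions is a deliberate simplification:

```python
import numpy as np

def fiber_response(omega, K, T):
    """SG_j(s) = K_j / (1 + s*T_j) evaluated at s = i*omega (rad/s).

    K is the d.c. gain, T the time constant; the magnitude and angle
    of the returned complex number give the Bode gain and phase.
    """
    K = np.asarray(K, dtype=float)
    T = np.asarray(T, dtype=float)
    return K / (1.0 + 1j * omega * T)

def nucleus_response(omega, K, T, weights):
    """Eq. (3)-style nucleus neuron: a sigmoid of the weighted sum of
    afferent responses (magnitudes only, a simplification)."""
    sg = np.abs(fiber_response(omega, K, T))
    return np.tanh(np.dot(weights, sg))

# A thick, phasic fiber (high gain, short time constant) alongside a
# thin, tonic fiber, using the ranges quoted in the text.
K = [25.0, 3.0]     # spikes/second/degree/second^2
T = [0.5, 10.0]     # seconds
w = [0.5, 0.5]
r = nucleus_response(0.1, K, T, w)   # response to a slow rotation
```

At low frequencies the gain of each fiber approaches Kj, and at high frequencies it rolls off as Kj/(omega*Tj), which is the phasic-versus-tonic distinction the text draws.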
CONCLUSIONS
Based on anatomical and physiological data from the
bullfrog we presented a description of the organization of
the primary afferent vestibular fibers. The responses of
the afferent fibers represent the result of summated
excitatory processes. The information on head movement in
the assemblage of neurons is codified as a continuum of
varying physiological responses that reflect a sensoritopic
organization of inputs from the receptor to the central
nervous system. We postulated a new view of the
organization in the peripheral vestibular organs and in the
vestibular nuclei. This view does not require unnecessary
simplification of the varying properties of the individual
neurons. The model is capable of extracting the weighted
average response from assemblies of large groups of
neurons while the unitary contribution of individual neurons
is preserved. The model offers the opportunity to
incorporate further developments in the evaluation of the
different roles of primary afferents in vestibular function.
Large neurons with high sensitivity and high velocity of
propagation are more effective in activating reflexes that
require quick responses such as vestibulo-spinal and
vestibulo-ocular reflexes. Small neurons with high
thresholds for the generation of action potentials and lower
sensitivity are more tuned to the maintenance of posture
and muscle tonus. We believe the physiological differences
reflect the different physiological roles.
In this emerging scheme of vestibular nerve
organization it appears that information about head
movement, topographically filtered in the crista, is
distributed through multiple synapses in the vestibular
centers. Consequently, there is also reason to believe that
different neurons in the vestibular nuclei preserve the
variability in response characteristics and the topological
discrimination observed in the vestibular nerve. Whether
this idea of the organization and function of the vestibular
system is valid remains to be proven experimentally.
REFERENCES
1. W. Steinhausen, Arch. Ges. Physiol. 217, 747 (1927).
2. J. M. Goldberg and C. Fernandez, in: Handbook of
Physiology, Sect. 1, Vol. III, Part 2 (I. Darian-Smith,
ed., Amer. Physiol. Soc., Bethesda, MD, 1984), p. 977.
3. D. E. Rumelhart, G. E. Hinton and J. L. McClelland, in:
Parallel Distributed Processing: Explorations in the
Microstructure of Cognition, Vol. 1: Foundations
(D. E. Rumelhart, J. L. McClelland and the PDP Research
Group, eds., MIT Press, Cambridge, MA, 1986), p. 45.
4. J. Hopfield, Proc. Natl. Acad. Sci. 79, 2554 (1982).
5. V. Honrubia, S. Sitko, J. Kimm, W. Betts and I. Schwartz,
Intern. J. Neurosci., 197 (1981).
6. V. Honrubia, S. Sitko, R. Lee, A. Kuruvilla and I. Schwartz,
Laryngoscope, 464 (1984).
A Novel Approach to Prediction of the
3-Dimensional Structures of Protein Backbones
by Neural Networks
Henrik Fredholm1,5, Henrik Bohr2, Jakob Bohr3, Søren Brunak4,
Rodney M.J. Cotterill4, Benny Lautrup5 and Steffen B. Petersen1
1 MR-Senteret,
SINTEF, N-7034 Trondheim, Norway.
2University of Illinois, Urbana, IL 61801, USA.
3Risø National Laboratory, DK-4000 Roskilde, Denmark.
4Technical Univ. of Denmark, B. 307, DK-2800 Lyngby, Denmark.
5Niels Bohr Institute, Blegdamsvej 17, DK-2100 Cph. 0, Denmark.
Abstract
Three-dimensional (3D) structures of protein backbones have been predicted using neural networks. A feed forward neural network was trained
on a class of functionally, but not structurally, homologous proteins, using backpropagation learning. The network generated tertiary structure
information in the form of binary distance constraints for the Co atoms
in the protein backbone. The binary distance between two Co atoms was
o if the distance between them was less than a certain threshold distance,
and 1 otherwise. The distance constraints predicted by the trained neural network were utilized to generate a folded conformation of the protein
backbone, using a steepest descent minimization approach.
1
INTRODUCTION
One current aim of molecular biology is determination of the (3D) tertiary structures of proteins in their folded native state from their sequences of amino acid
residues. Since Kendrew & Perutz solved the first protein structures, myoglobin
and hemoglobin, and explained from the discovered structures how these proteins
perform their function, it has been widely recognized that protein function is intimately linked with protein structure[l].
Within the last two decades X-ray crystallographers have solved the 3-dimensional
(3D) structures of a steadily increasing number of proteins in the crystalline state,
and recently 2D-NMR spectroscopy has emerged as an alternative method for small
proteins in solution. Today approximately three hundred 3D structures have been
solved by these methods, although only about half of them can be considered as
truly different, and only around a hundred of them are solved at high resolution
(that is, less than 2A). The number of protein sequences known today is well over
20,000, and this number seems to be growing at least one order of magnitude faster
than the number of known 3D protein structures.
Obviously, it is of great importance to develop tools that can predict structural
aspects of proteins on the basis of knowledge acquired from known 3D structures.
1.1
THE PROTEIN FOLDING PROBLEM
It is generally accepted that most aspects of protein structure derive from the properties of the particular sequence of amino acids that make up the protein 1 ? The
classical experiment is that of Anfinsen et al. [2] who demonstrated that ribonuclease could be denatured and refolded without loss of enzymatic activity.
This has led to the formulation of the so-called protein folding problem: given the
sequence of amino acids of a protein, what will be its native folded conformation?
1.2
SECONDARY STRUCTURE PREDICTION
Several methods have been developed for protein structure prediction. Most abundant are the methods for protein secondary structure prediction [3, 4, 5, 6]. These
methods predict for each amino acid in the protein sequence what type of secondary
structure the amino acid is part of. Several strategies have been suggested, most
of which are based on statistical analysis of the occurrence of single amino acids
or very short stretches of amino acids in secondary structural elements in known
proteins. In general, these prediction schemes have a prediction accuracy of 50-60%
for a three-category prediction of helix-, sheet- and coil conformations.
Recently neural networks have been applied to secondary structure prediction with
encouraging results [7, 8, 9, 10]; on three-category prediction the accuracy is 65%;
on two-catagory prediction of helix- and coil conformations the accuracy is 73%;
and on a two-category prediction of turn- and coil conformations the accuracy is
71 %. In all the three cases this is an improvement of the traditional methods.
1 Although recent results indicate that certain proteins catalyze, but do not alter, the
course of protein folding.
1.3
TERTIARY STRUCTURE PREDICTION
The methods that exist for 3D structure prediction fall in three broad categories: (1)
use of sequence homology with other protein with know 3D structure; (2) prediction
of secondary structure units followed by the assembly of these units into a compact
structure; and (3) use of empirical energy functions ab initio to derive the 3D
structure.
No general method for 3D structure prediction exists today, and novel methods
are most often documented through case stories that illustrate best or single case
performance. The most successful methods so far has been those based on sequence
homology; if significant sequence and functional homology exists between a protein
of interest and proteins for which the 3D structures are known, it is possible (but
cumbersome) to build a reasonable 3D model of the protein structure.
2
METHOD
We here describe a new method for predicting the 3D structure of a protein backbone
from its amino acid sequence [11]. The main idea behind this approach is to use a
noise tolerant representation of the protein backbone that is invariant to rotation
and translation of the backbone 2 , and then train a neural network to map protein
sequences to this representation.
2.1
REPRESENTATION OF 3D BACKBONE STRUCTURES
The folded backbone structure of a protein brings residues that are distantly positioned in sequence close to each other in space. One may identify such close contacts
and use them as constraints on the backbone conformation.
We define the binary distance D(i, j) between two residues i and j as 0 if the
distance between the Cα atom in residue i and the Cα atom in residue j is less than
a given threshold and as 1 if it is above or equal to the threshold, a typical choice
of threshold being 8Å. Organizing these distances as a binary distance matrix gives
rise to a two dimensional representation of the protein backbone (figure 2a depict
such matrix).
Most secondary motifs can be distinguished in this representation; helices appear
as thickenings of the diagonal and anti-parallel and parallel sheets appear as stripes
orthogonal and parallel to the diagonal.
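For concreteness, a binary distance matrix of this kind can be computed from Cα coordinates in a few lines (a sketch; the function name is ours, not the paper's):

```python
import numpy as np

def binary_distance_matrix(ca_coords, threshold=8.0):
    """D(i, j) = 0 if |Ca_i - Ca_j| < threshold (Angstroms), else 1."""
    ca = np.asarray(ca_coords, dtype=float)
    # Pairwise Euclidean distances via broadcasting: (n, n) matrix.
    d = np.linalg.norm(ca[:, None, :] - ca[None, :, :], axis=-1)
    return (d >= threshold).astype(int)

# Toy chain: residues spaced 3.8 A apart along the x axis.
coords = np.array([[3.8 * i, 0.0, 0.0] for i in range(5)])
D = binary_distance_matrix(coords)
# Near-neighbours (within 8 A) get 0; distant pairs get 1.
```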
It is possible to reconstruct the 3D backbone from the binary distance matrix representation by minimizing the "energy function",

    E = Σ_{i≠j} g( dij ( |Cα(i) − Cα(j)| − θ ) ),

where dij = 1 − 2D(i, j), g(x) = 1/(1 + exp(−x)) and θ is the distance threshold.
The initial positions of the Cα atoms are chosen at random. The motif for this
2The (φ, ψ) torsion-angle representation is also rotation- and translation-invariant, but
it is not noise tolerant.
[Figure 1 here: network schematic with an input layer labelled "Sequence of amino acids" and output groups labelled "Distance constraints" and "Secondary structure".]
Figure 1: The input to the network consists of 61 contiguous amino acids, where each
amino acid is represented by a group of 20 neurons (only seven neurons/group are illustrated). At the output layer, a set of binary distances, between the centrally positioned
residue and those lying to the left of it in the input window, is produced. Secondary
structure assignment for the centrally positioned residue, in the three categories of helix,
sheet and coil, is also produced. Regarding the binary distance matrix, the network is
trained to report which of the 30 preceding Cα atoms are positioned within a distance of
8Å to the centrally placed amino acid. The input layer had 1220 (61 × 20) neurons, the
hidden layer had 300 neurons and the output layer had 33 neurons.
energy function is that constraints that do not hold should contribute with large
values, while constraints that do hold should contribute with small values.
For small proteins of the order of 60 residues the reconstruction is very accurate.
For Bovine Pancreatic Trypsin Inhibitor (6PTI), a 56 residue long protein, we were
able to generate a correctly folded backbone structure. The binary distance matrix
was generated from the crystallographic data of 6PTI using a distance threshold
of 8Å. After convergence of the minimization procedure the errors between the
reconstructed structure and the correct structure lay within 1.2Å root mean square
(rms).
Preliminary results (unpublished) indicate that backbone structures for larger proteins can be reconstructed with a deviation from the correct structure down to 2Å
rms, when a distance threshold of 16Å is used. When 5% random noise is added to
the distance matrix the deviation from the correct structure grows to 4-5Å rms.
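The reconstruction procedure can be sketched as a naive steepest-descent loop on the energy defined above. This is our own toy implementation with a numerical gradient; the paper's actual minimizer, step sizes and stopping criteria are not specified here:

```python
import numpy as np

def energy(coords, D, theta=8.0):
    """E = sum_{i != j} g( d_ij * (|r_i - r_j| - theta) ),
    with d_ij = 1 - 2*D(i, j) and g the logistic sigmoid."""
    r = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
    d = 1.0 - 2.0 * np.asarray(D)
    g = 1.0 / (1.0 + np.exp(-d * (r - theta)))
    np.fill_diagonal(g, 0.0)   # the diagonal carries no constraint
    return g.sum()

def reconstruct(D, theta=8.0, steps=200, lr=0.5, eps=1e-4, seed=0):
    """Steepest descent from random coordinates, using a central-difference
    numerical gradient (slow, but adequate for tiny examples)."""
    rng = np.random.default_rng(seed)
    x = rng.normal(scale=theta, size=(len(D), 3))
    for _ in range(steps):
        grad = np.zeros_like(x)
        for k in np.ndindex(x.shape):
            x[k] += eps
            e_plus = energy(x, D, theta)
            x[k] -= 2 * eps
            e_minus = energy(x, D, theta)
            x[k] += eps
            grad[k] = (e_plus - e_minus) / (2 * eps)
        x -= lr * grad
    return x
```

Satisfied constraints contribute values near 0 and violated ones values near 1, which is exactly the motif described in the text.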
2.2
DISTANCE MATRIX PREDICTION
A backpropagation network [12] was used to map protein sequences to distance
matrices. To simplify the task the neural network had to learn, it was not taught
to predict all constraints in the distance matrix. Only a band along the diagonal
was to be predicted. More specifically, the network was taught to predict for each
residue in the protein sequence the binary distances to the 30 previous residues.
Furthermore it had to classify the central residue in question as either helix, sheet
or coil, see figure 1. Hence, the trained neural network produced, when given a
protein sequence, a secondary structure prediction and a distance matrix containing
Figure 2: Binary distance matrices for 1TRM. The matrices (223 × 223) show which
Cα atoms are within an 8Å distance of each other Cα atom in the folded protein. a) The
matrix corresponding to the structure determined from the X-ray data. b) Neural network
prediction of an sA distance matrix. A 61-residue band centered along the diagonal is
generated. The network predicts this band with an accuracy of 96.6%.
binary distance constraints for a lower diagonal-band matrix of width 30. Due to
symmetry in the distance matrix and the diagonal being always zero, the resulting
binary distance matrix contained a diagonal-band of predicted distance constraints
of width 61.
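The input encoding described above (a 61-residue window, 20 units per residue, 1220 inputs in total) can be sketched as follows. The particular amino-acid ordering is our own assumption, not taken from the paper:

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"   # the 20 standard residues

def encode_window(window):
    """One-hot encode a 61-residue window as 61 groups of 20 units,
    matching the 1220-unit input layer described in the text."""
    assert len(window) == 61
    x = np.zeros((61, 20))
    for pos, aa in enumerate(window):
        x[pos, AMINO_ACIDS.index(aa)] = 1.0
    return x.ravel()

x = encode_window("A" * 61)   # 1220-dimensional input vector
```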
3
CASE STORY
A neural network with this architecture was trained on 13 different proteases [13]
from the Brookhaven Protein Data Bank, all having their data collected to a nominal resolution better than 2Å. The 13 proteases were of several structural classes
including trypsins and subtilisins. This training set generated 3171 different examples (input windows) which were presented to the network. After 200 presentations
of each example, the network had learned the training set to perfection 3 . A 14th
protease, 1TRM (Rat Trypsin), with a length of 223 residues, was used to test the
network. This protease was 74% homologous to one of the 13 proteases that the
network was trained on. The distance matrix derived from X-ray diffraction for
this protein is shown in figure 2a. The ability of the network to correctly assign
structural information is amply illustrated in figure 2b, where the network is predicting the distance constraints around the diagonal for 1TRM. Although a high
degree of sequence homology exists between 1TRM and the trypsins included in the
training set, not a single input window presented to the network was identical to
any window in the training set. The prediction thus illustrates the ability of the
network to generalize from the training set. In the prediction (figure 2b), a clear
distinction can be made between helices and anti-parallel sheets as well as other
tertiary motifs.
If the whole binary distances matrix had been predicted, it would have been possible
3The training lasted 2 weeks on an Apollo 10000 running at 10 Mflops.
Figure 3: Backbone conformation for the 223 residue long trypsin 1TRM. a) The crystal
structure for 1TRM, as determined by X-ray data. b) The predicted structure of 1TRM
superimposed on the crystal structure. The rms deviation calculated over all the Cα atoms
was 3Å. The largest deviations were present in surface loops, some of which are fixed by
several disulphide bridges.
to construct the backbone conformation directly from the prediction. However, since
only a truncated version was predicted, a good guess of the backbone conformation
is needed for the minimization 4 . By using as initial guess the backbone conformation
for a homologous protein, the backbone conformation of 1TRM was predicted with a
3A rms deviation from the coordinates determined by X-ray diffraction, see figure 3.
In this particular case, the length of the sequence used for the starting configuration
was identical to that of the protein to be reconstructed. When the sequences are of
unequal length, on the other hand, it is clear that additional considerations would
have to be taken into account during the minimization process.
4
DISCUSSION
The single main achievement of this study has been the generation of a 3D structure
of a protein from its amino acid sequence. The approach involved first the prediction
of a distance matrix using a neural network and subsequently a minimization fitting
procedure.
Binary distance matrices were introduced as a noise tolerant translation- and rotation invariant representation of 3D protein backbones, and a neural network was
trained to map protein sequences to this representation.
The results reported here are predictions of folded conformations, illustrated with
the trypsin 1TRM. Our neural network is clearly capable of generalizing the folding
4For large proteins, where the band of distance constraints does not cover all spatial
contacts, local folding domains may acquire different chiralities, leading to improper packing of the domains in the protein. However, new experiments indicate that the backbone
structure of proteins that are 200-300 residues long can be reconstructed with good results
from a random configuration, if the width of the band in the distance matrix is 121 and
the distance threshold is 16Å.
information stemming from known proteins with homologous function. Current
investigations have shown that the network is robust towards mutation of amino
acids in the protein sequence, whereas it is very sensitive to insertions and deletions
in the sequence. Thus, new network architectures will have to be developed, if this
method is to be useful for proteins with low homology; a bigger training set alone
will not do it.
Distance constraints can also be derived from experimental procedures such as
NMR, in which they take the form of nuclear Overhauser enhancement (nOe) factors. Structural information can be successfully derived from such data using restraint dynamics which in its essential form bears some resemblance to the approach
employed here, the most salient difference being that the potential energy function
in our work is much simpler.
Acknowledgements
HF thanks the Danish Research Academy, Novo-Nordisk and UNI-C for grants.
References
[1] Jaenicke, R. (1987) Prog. Biophys. Molec. Biol. 49, 117-237.
[2] Anfinsen, C.B. et al. (1961) Proc. Natl. Acad. Sci. USA, 47, 1309-1314.
[3] Chou, P.Y., and Fasman, G.D. (1974) Biochemistry 13, 211-245.
[4] Garnier, J., Osguthorpe, D.J., and Robson, B.J. (1978) Mol. Biol., 120, 97-120.
[5] Lim, V.I. (1974) J. Mol. Biol., 88, 857-894.
[6] Robson, D., and Suzuki, E. (1976) J. Mol. Biol., 107, 327-356.
[7] Qian, N., and Sejnowski, T.J. (1988) J. Mol. Biol., 202, 865-884.
[8] Bohr, H., Bohr, J., Brunak, S., Cotterill, R.M.J., Lautrup, B., Nørskov, L.,
Olsen, O.H., and Petersen, S.B. (1988) FEBS Letters, 241, 223-228.
[9] McGregor, M.J., Flores, T.P., and Sternberg, M.J.E. (1989) Protein Engineering, 2, 521-526.
[10] Kneller, D.G., Cohen, F.E., and Langridge, L. (1990) J. Mol. Biol., 214, 171-182.
[11] Bohr, H., Bohr, J., Brunak, S., Cotterill, R.M.J., Fredholm, H., Lautrup, B.,
and Petersen, S.B. (1990) FEBS Letters, 261, 43-46.
[12] Rumelhart, D.E., Hinton, G.E., and Williams, R.J. (1986) Parallel Distributed Processing, 1, 318-362. Bradford Books, Cambridge, MA.
[13] Brookhaven Protein Data Bank entry codes: 1SGT (Streptomyces Trypsin),
2EST (Porcine Pancreatic Elastase), 4PTP (Bovine Pancreatic beta Trypsin),
2KAI (Porcine Pancreatic Kallikrein A), 1CHG (Bovine Chymotrypsin A),
2PRK (Fungal Proteinase K), 1SEC (Subtilisin Carlsberg), 1SGC (Streptomyces Proteinase A), 2ALP (Lysobacter Alfalytic Protease), 3APR (Rhizopus
Acid Proteinase), 3RP2 (Rat Mast Cell Proteinase), 2SBT (Subtilisin NOVO)
and 1SAV (Subtilisin Savinase).
James Petterson, Tib?erio S. Caetano, Julian J. McAuley and Jin Yu
NICTA, Australian National University
Canberra, Australia
Abstract
We present a method for learning max-weight matching predictors in bipartite
graphs. The method consists of performing maximum a posteriori estimation
in exponential families with sufficient statistics that encode permutations and
data features. Although inference is in general hard, we show that for one very
relevant application?document ranking?exact inference is efficient. For general
model instances, an appropriate sampler is readily available. Contrary to existing
max-margin matching models, our approach is statistically consistent and, in addition, experiments with increasing sample sizes indicate superior improvement
over such models. We apply the method to graph matching in computer vision as
well as to a standard benchmark dataset for learning document ranking, in which
we obtain state-of-the-art results, in particular improving on max-margin variants.
The drawback of this method with respect to max-margin alternatives is its runtime for large graphs, which is comparatively high.
1 Introduction
The Maximum-Weight Bipartite Matching Problem (henceforth "matching problem") is a fundamental problem in combinatorial optimization [22]. This is the problem of finding the "heaviest"
perfect match in a weighted bipartite graph. An exact optimal solution can be found in cubic time
by standard methods such as the Hungarian algorithm.
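The paper contains no code, but the optimization just described is easy to experiment with. The sketch below (my own illustration, not the authors') finds the max-weight perfect match by brute-force enumeration of permutations; production code would use the O(m^3) Hungarian algorithm instead, which solves the same problem.

```python
from itertools import permutations

def max_weight_match(w):
    """Brute-force max-weight perfect matching on an m-by-m weight matrix.

    Exponential in m, shown for exposition only; the Hungarian algorithm
    solves the identical problem in O(m^3) time.
    """
    m = len(w)
    best_y, best_val = None, float("-inf")
    for y in permutations(range(m)):  # y maps row i to column y[i]
        val = sum(w[i][y[i]] for i in range(m))
        if val > best_val:
            best_y, best_val = y, val
    return best_y, best_val

w = [[4.0, 1.0, 3.0],
     [2.0, 0.0, 5.0],
     [3.0, 2.0, 2.0]]
y, val = max_weight_match(w)
print(y, val)  # -> (0, 2, 1) 11.0
```

For this 3-by-3 example the optimal assignment pairs row 0 with column 0, row 1 with column 2 and row 2 with column 1, for a total weight of 11.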
This problem is of practical interest because it can nicely model real-world applications. For example, in computer vision the crucial problem of finding a correspondence between sets of image
features is often modeled as a matching problem [2, 4]. Ranking algorithms can be based on a
matching framework [13], as can clustering algorithms [8].
When modeling a problem as one of matching, one central question is the choice of the weight
matrix. The problem is that in real applications we typically observe edge feature vectors, not edge
weights. Consider a concrete example in computer vision: it is difficult to tell what the "similarity score" is between two image feature points, but it is straightforward to extract feature vectors
(e.g. SIFT) associated with those points.
In this setting, it is natural to ask whether we could parameterize the features, and use labeled
matches in order to estimate the parameters such that, given graphs with "similar" features, their
resulting max-weight matches are also "similar". This idea of "parameterizing algorithms" and then
optimizing for agreement with data is called structured estimation [27, 29].
[27] and [4] describe max-margin structured estimation formalisms for this problem. Max-margin
structured estimators are appealing in that they try to minimize the loss that one really cares about
("structured losses", of which the Hamming loss is an example). However structured losses are typically piecewise constant in the parameters, which eliminates any hope of using smooth optimization
directly. Max-margin estimators instead minimize a surrogate loss which is easier to optimize,
namely a convex upper bound on the structured loss [29]. In practice the results are often good,
but known convex relaxations produce estimators which are statistically inconsistent [18], i.e., the
algorithm in general fails to obtain the best attainable model in the limit of infinite training data. The
inconsistency of multiclass support vector machines is a well-known issue in the literature that has
received careful examination recently [16, 15].
Motivated by the inconsistency issues of max-margin structured estimators as well as by the well-known benefits of having a full probabilistic model, in this paper we present a maximum a posteriori
(MAP) estimator for the matching problem. The observed data are the edge feature vectors and
the labeled matches provided for training. We then maximize the conditional posterior probability
of matches given the observed data. We build an exponential family model where the sufficient
statistics are such that the mode of the distribution (the prediction) is the solution of a max-weight
matching problem. The resulting partition function is #P-complete to compute exactly. However,
we show that for learning to rank applications the model instance is tractable. We then compare
the performance of our model instance against a large number of state-of-the-art ranking methods,
including DORM [13], an approach that only differs from our model instance by using max-margin
instead of a MAP formulation. We show very competitive results on standard document ranking
datasets, and in particular we show that our model performs better than or on par with DORM. For
intractable model instances, we show that the problem can be approximately solved using sampling
and we provide experiments from the computer vision domain. However the fastest suitable sampler
is still quite slow for large models, in which case max-margin matching estimators like those of [4]
and [27] are likely to be preferable even in spite of their potential inferior accuracy.
2 Background

2.1 Structured Prediction
In recent years, great attention has been devoted in Machine Learning to so-called structured predictors, which are predictors of the kind

$$g_\theta : X \to Y, \qquad (1)$$

where $X$ is an arbitrary input space and $Y$ is an arbitrary discrete space, typically exponentially
large. $Y$ may be, for example, a space of matrices, trees, graphs, sequences, strings, matches, etc.
This structured nature of $Y$ is what structured prediction refers to. In the setting of this paper, $X$ is the
set of vector-weighted bipartite graphs (i.e., each edge has a feature vector associated with it), and
$Y$ is the set of perfect matches induced by $X$. If $N$ graphs are available, along with corresponding
annotated matches (i.e., a set $\{(x^n, y^n)\}_{n=1}^N$), our task will be to estimate $\theta$ such that when we apply
the predictor $g_\theta$ to a new graph it produces a match that is similar to matches of similar graphs from
the annotated set. Structured learning or structured estimation refers to the process of estimating a
vector $\theta$ for predictor $g_\theta$ when data $\{(x^1, y^1), \dots, (x^N, y^N)\} \in (X \times Y)^N$ are available. Structured
prediction for input $x$ means computing $y = g(x; \theta)$ using the estimated $\theta$.
Two generic estimation strategies have been popular in producing structured predictors. One is based
on max-margin estimators [29, 27], and the other on maximum-likelihood (ML) or MAP estimators
in exponential family models [12].
The first approach is a generalization of support vector machines to the case where the set Y is
structured. However the resulting estimators are known to be inconsistent in general: in the limit
of infinite training data the algorithm fails to recover the best model in the model class [18, 16, 15].
McAllester recently provided an interesting analysis on this issue, where he proposed new upper
bounds whose minimization results in consistent estimators, but no such bounds are convex [18].
The other approach uses ML or MAP estimation in conditional exponential families with "structured" sufficient statistics, such as in probabilistic graphical models, where they are decomposed
over the cliques of the graph (in which case they are called Conditional Random Fields, or CRFs
[12]). In the case of tractable graphical models, dynamic programming can be used to efficiently
perform inference. ML and MAP estimators in exponential families not only amount to solving an
unconstrained and convex optimization problem; in addition they are statistically consistent. The
main problem with these types of models is that often the partition function is intractable. This has
motivated the use of max-margin methods in many scenarios where such intractability arises.
2.2 The Matching Problem
Consider a weighted bipartite graph with $m$ nodes in each part, $G = (V, E, w)$, where $V$ is the set
of vertices, $E$ is the set of edges and $w : E \to \mathbb{R}$ is a set of real-valued weights associated with
the edges. $G$ can be simply represented by a matrix $(w_{ij})$ where the entry $w_{ij}$ is the weight of the
edge $ij$. Consider also a bijection $y : \{1, 2, \dots, m\} \to \{1, 2, \dots, m\}$, i.e., a permutation. Then the
matching problem consists of computing
$$y^\star = \operatorname{argmax}_y \sum_{i=1}^{m} w_{iy(i)}. \qquad (2)$$

This is a well-studied problem; it is tractable and can be solved in $O(m^3)$ time [22]. This model can
be used to match features in images [4], improve classification algorithms [8] and rank documents
[13], to cite a few applications. The typical setting consists of engineering the score matrix $w_{ij}$
according to domain knowledge and subsequently solving the combinatorial problem.

Figure 1: Left: Illustration of an input vector-weighted bipartite graph $G_x$ with $3 \times 3$ edges. There
is a vector $x_e$ associated with each edge $e$ (for clarity only $x_{ij}$ is shown, corresponding to the solid
edge). Right: weighted bipartite graph $G$ obtained by evaluating $G_x$ on the learned vector $\theta$ (again
only edge $ij$ is shown).

3 The Model

3.1 Basic Goal

In this paper we assume that the weights $w_{ij}$ are instead to be estimated from training data. More
precisely, the weight $w_{ij}$ associated with the edge $ij$ in a graph will be the result of an appropriate
composition of a feature vector $x_{ij}$ (observed) and a parameter vector $\theta$ (estimated from training
data). Therefore, in practice, our input is a vector-weighted bipartite graph $G_x = (V, E, x)$ (with
$x : E \to \mathbb{R}^n$), which is "evaluated" at a particular $\theta$ (obtained from previous training) so as to attain
the graph $G = (V, E, w)$. See Figure 1 for an illustration. More formally, assume that a training set
$\{X, Y\} = \{(x^n, y^n)\}_{n=1}^N$ is available, where $x^n := (x^n_{11}, x^n_{12}, \dots, x^n_{M(n)M(n)})$. Here $M(n)$ is the
number of nodes in each part of the vector-weighted bipartite graph $x^n$. We then parameterize $x_{ij}$
as $w_{iy(i)} = f(x_{iy(i)}; \theta)$, and the goal is to find the $\theta$ which maximizes the posterior probability of
the observed data. We will assume $f$ to be bilinear, i.e., $f(x_{iy(i)}; \theta) = \langle x_{iy(i)}, \theta \rangle$.

3.2 Exponential Family Model

We assume an exponential family model, where the probability model is

$$p(y|x; \theta) = \exp\left(\langle \phi(x, y), \theta \rangle - g(x; \theta)\right), \qquad (3)$$

where

$$g(x; \theta) = \log \sum_y \exp \langle \phi(x, y), \theta \rangle \qquad (4)$$

is the log-partition function, which is a convex and differentiable function of $\theta$ [31].

The prediction in this model is the most likely $y$, i.e.,

$$y^\star = \operatorname{argmax}_y p(y|x; \theta) = \operatorname{argmax}_y \langle \phi(x, y), \theta \rangle, \qquad (5)$$

and ML estimation amounts to maximizing the conditional likelihood of the training set $\{X, Y\}$,
i.e., computing $\operatorname{argmax}_\theta p(Y|X; \theta)$. In practice we will in general introduce a prior on $\theta$ and
perform MAP estimation:

$$\theta^\star = \operatorname{argmax}_\theta p(Y|X; \theta)\, p(\theta) = \operatorname{argmax}_\theta p(\theta|Y, X). \qquad (6)$$

Assuming iid sampling, we have $p(Y|X; \theta) = \prod_{n=1}^N p(y^n|x^n; \theta)$. Therefore,

$$p(\theta|Y, X) \propto \exp\left(\log p(\theta) + \sum_{n=1}^N \left(\langle \phi(x^n, y^n), \theta \rangle - g(x^n; \theta)\right)\right). \qquad (7)$$

We impose a Gaussian prior on $\theta$. Instead of maximizing the posterior we can instead minimize the
negative log-posterior $\ell(Y|X; \theta)$, which becomes our loss function (we suppress the constant term):

$$\ell(Y|X; \theta) = \frac{\lambda}{2} \|\theta\|^2 + \frac{1}{N} \sum_{n=1}^N \left(g(x^n; \theta) - \langle \phi(x^n, y^n), \theta \rangle\right), \qquad (8)$$

where $\lambda$ is a regularization constant. $\ell(Y|X; \theta)$ is a convex function of $\theta$, since the log-partition
function $g(\cdot)$ is a convex function of $\theta$ [31] and the other terms are clearly convex in $\theta$.

3.3 Feature Parameterization

The critical observation now is that we equate the solution of the matching problem (2) to the prediction of the exponential family model (5), i.e., $\sum_i w_{iy(i)} = \langle \phi(x, y), \theta \rangle$. Since our goal is to
parameterize features of individual pairs of nodes (so as to produce the weight of an edge), the most
natural model is

$$\phi(x, y) = \sum_{i=1}^M x_{iy(i)}, \qquad (9)$$

which gives

$$w_{iy(i)} = \langle x_{iy(i)}, \theta \rangle, \qquad (10)$$

i.e., linear in both $x$ and $\theta$ (see Figure 1, right). The specific form for $x_{ij}$ will be discussed in the
experimental section. In light of (10), (2) now clearly means a prediction of the best match for $G_x$
under the model $\theta$.

4 Learning the Model

4.1 Basics

We need to solve $\theta^\star = \operatorname{argmin}_\theta \ell(Y|X; \theta)$. $\ell(Y|X; \theta)$ is a convex and differentiable function of
$\theta$ [31], therefore gradient descent will find the global optimum. In order to compute $\nabla_\theta \ell(Y|X; \theta)$,
we need to compute $\nabla_\theta g(\theta)$. It is a standard result of exponential families that the gradient of the
log-partition function is the expectation of the sufficient statistics:

$$\nabla_\theta\, g(x; \theta) = \mathbb{E}_{y \sim p(y|x; \theta)}[\phi(x, y)]. \qquad (11)$$

Therefore in order to perform gradient descent we need to compute the above expectation. Opening
the above expression gives

$$\mathbb{E}_{y \sim p(y|x; \theta)}[\phi(x, y)] = \sum_y \phi(x, y)\, p(y|x; \theta) \qquad (12)$$

$$= \frac{1}{Z(x; \theta)} \sum_y \phi(x, y) \prod_{i=1}^M \exp\left(\langle x_{iy(i)}, \theta \rangle\right), \qquad (13)$$

which reveals that the partition function $Z(x; \theta)$ needs to be computed. The partition function is:

$$Z(x; \theta) = \sum_y \prod_{i=1}^M \underbrace{\exp\left(\langle x_{iy(i)}, \theta \rangle\right)}_{=:\,B_{iy(i)}}. \qquad (14)$$
Note that the above is the expression for the permanent of matrix B [19]. The permanent is similar
in definition to the determinant, the difference being that for the latter sgn(y) comes before the
product. However, unlike the determinant, which is computable efficiently and exactly by standard
linear algebra manipulations, computing the permanent is a #P-complete problem [30]. Therefore
we have no realistic hope of computing (11) exactly for general problems.
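For graphs small enough to enumerate all M! permutations, though, the expectation can be computed exactly by brute force. The sketch below is my own illustration (not from the paper): the score matrix `s` stands in for the inner products between edge features and the parameter vector, and the returned edge marginals P(y(i) = j) directly give the expected sufficient statistics, since E[phi(x, y)] is the marginal-weighted sum of the edge feature vectors.

```python
from itertools import permutations
from math import exp

def match_marginals(s):
    """Exact edge marginals P(y(i) = j) under p(y) proportional to
    exp(sum_i s[i][y[i]]), computed by enumerating all permutations.

    Exponential in M; for exposition only. Returns (marginals, Z).
    """
    m = len(s)
    Z = 0.0
    marg = [[0.0] * m for _ in range(m)]
    for y in permutations(range(m)):
        w = exp(sum(s[i][y[i]] for i in range(m)))
        Z += w
        for i in range(m):
            marg[i][y[i]] += w
    return [[marg[i][j] / Z for j in range(m)] for i in range(m)], Z

marg, Z = match_marginals([[0.0, 0.0], [0.0, 0.0]])
print(marg)  # uniform scores: both permutations equiprobable, marginals 0.5
```

Note that each row of the marginal matrix sums to one, since every permutation assigns node i to exactly one position.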
4.2 Exact Expectation
The exact partition function itself can be efficiently computed for up to about M = 30 using the
$O(M 2^M)$ algorithm by Ryser [25]. However, for arbitrary expectations we are not aware of any
exact algorithm which is more efficient than full enumeration (which would constrain tractability
to very small graphs). However we will see that even in the case of very small graphs we find a
very important application: learning to rank. In our experiments, we successfully apply a tractable
instance of our model to benchmark document ranking datasets, obtaining very competitive results.
For larger graphs, we have alternative options as indicated below.
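Ryser's formula is compact enough to sketch directly. The function below is an illustrative implementation (my own, not from the paper) that evaluates the permanent of a dense matrix B such as the one defined in (14), by summing over column subsets with inclusion-exclusion signs:

```python
from itertools import combinations

def permanent_ryser(B):
    """Ryser's O(M * 2^M) formula for the permanent of an M-by-M matrix:

        perm(B) = (-1)^M * sum_{S subset of columns, S nonempty}
                  (-1)^{|S|} * prod_i (sum_{j in S} B[i][j])

    (the empty subset contributes zero and is skipped).
    """
    m = len(B)
    total = 0.0
    for size in range(1, m + 1):
        for S in combinations(range(m), size):
            prod = 1.0
            for i in range(m):
                prod *= sum(B[i][j] for j in S)
            total += (-1) ** size * prod
    return (-1) ** m * total

# The permanent of the all-ones 3x3 matrix counts the 3! = 6 permutations.
print(permanent_ryser([[1.0] * 3 for _ in range(3)]))  # -> 6.0
```

For a 2-by-2 matrix [[a, b], [c, d]] this reduces to ad + bc, i.e., the determinant with the minus sign flipped.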
4.3 Approximate Expectation
If we have a situation in which the set of feasible permutations is too large to be fully enumerated
efficiently, we need to resort to some approximation for the expectation of the sufficient statistics.
The best solution we are aware of is one by Huber and Law, who recently presented an algorithm to
approximate the permanent of dense non-negative matrices [10]. The algorithm works by producing
exact samples from the distribution of perfect matches on weighted bipartite graphs. This is in
precisely the same form as the distribution we have here, $p(y|x; \theta)$ [10]. We can use this algorithm
for applications that involve larger graphs. We generate $K$ samples from the distribution $p(y|x; \theta)$,
and directly approximate (12) with a Monte Carlo estimate
$$\mathbb{E}_{y \sim p(y|x; \theta)}[\phi(x, y)] \approx \frac{1}{K} \sum_{i=1}^{K} \phi(x, y_i). \qquad (15)$$
In our experiments, we apply this algorithm to an image matching application.
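Given any routine that draws exact samples from p(y|x; theta), the estimate (15) is a plain average of sufficient statistics. Below is a sketch (my own; the toy two-matching distribution and all names are illustrative stand-ins for the Huber and Law sampler, which is considerably more involved):

```python
import random

def mc_sufficient_stats(phi, sampler, K, rng):
    """Monte Carlo estimate (15) of E[phi(x, y)]: the average of the
    sufficient statistics over K exact samples y ~ p(y|x; theta)."""
    total = None
    for _ in range(K):
        v = phi(sampler(rng))
        total = v if total is None else [a + b for a, b in zip(total, v)]
    return [a / K for a in total]

# Toy stand-in for the exact sampler: a 2x2 graph whose two perfect
# matchings are equiprobable under the model.
def toy_sampler(rng):
    return (0, 1) if rng.random() < 0.5 else (1, 0)

def toy_phi(y):
    # Flattened indicator vector over the 4 possible edges (i, j).
    v = [0.0] * 4
    for i, j in enumerate(y):
        v[2 * i + j] = 1.0
    return v

est = mc_sufficient_stats(toy_phi, toy_sampler, K=2000, rng=random.Random(0))
print(est)  # each edge marginal is approximately 0.5
```

With a real sampler, `phi` would be the feature sum (9) and the resulting estimate would be plugged into the gradient (11).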
5 Experiments

5.1 Ranking
Here we apply the general matching model introduced in previous sections to the task of learning
to rank. Ranking is a fundamental problem with applications in diverse areas such as document
retrieval, recommender systems, product rating and others. Early learning to rank methods applied a
pairwise approach, where pairs of documents were used as instances in learning [7, 6, 3]. Recently
there has been interest in listwise approaches, where document lists are used as instances, as in our
method. In this paper we focus, without loss of generality, on document ranking.
We are given a set of queries $\{q_k\}$ and, for each query $q_k$, a list of $D(k)$ documents $\{d^k_1, \dots, d^k_{D(k)}\}$
with corresponding ratings $\{r^k_1, \dots, r^k_{D(k)}\}$ (assigned by a human editor), measuring the relevance
degree of each document with respect to query $q_k$. A rating or relevance degree is usually a nominal
value in the list $\{1, \dots, R\}$, where $R$ is typically between 2 and 5. We are also given, for every
retrieved document $d^k_i$, a joint feature vector $\psi^k_i$ for that document and the query $q_k$.
Training At training time, we model each query qk as a vector-weighted bipartite graph (Figure
1) where the nodes on one side correspond to a subset of cardinality M of all D(k) documents
retrieved by the query, and the nodes on the other side correspond to all possible ranking positions for
these documents (1, . . . , M ). The subset itself is chosen randomly, provided at least one exemplar
document of every rating is present. Therefore $M$ must be such that $M \ge R$.
The process is then repeated in a bootstrap manner: we resample (with replacement) from the set
of documents $\{d^k_1, \dots, d^k_{D(k)}\}$, $M$ documents at a time (conditioned on the fact that at least one
exemplar of every rating is present, but otherwise randomly). This effectively boosts the number of
training examples since each query qk ends up being selected many times, each time with a different
subset of M documents from the original set of D(k) documents.
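The resampling scheme just described can be sketched as follows. This is my own illustration: the paper does not specify how the rating-coverage constraint is enforced, so the rejection-sampling strategy and all names below are assumptions.

```python
import random

def sample_training_subset(docs, ratings, M, rng):
    """One bootstrap training example: M distinct documents for a query,
    redrawn until at least one exemplar of every rating level is present
    (which requires M >= R). Repeating this many times per query boosts
    the effective number of training examples.
    """
    levels = set(ratings)
    assert M >= len(levels), "need M >= number of rating levels R"
    while True:
        idx = rng.sample(range(len(docs)), M)  # M distinct documents
        if {ratings[i] for i in idx} == levels:
            return [docs[i] for i in idx]

docs = ["d1", "d2", "d3", "d4", "d5"]
ratings = [1, 1, 2, 2, 3]
subset = sample_training_subset(docs, ratings, M=3, rng=random.Random(7))
print(subset)  # three documents covering rating levels {1, 2, 3}
```

Each accepted draw becomes one vector-weighted bipartite graph for training, with the M documents on one side and the M rank positions on the other.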
In the following we drop the query index k to examine a single query. Here we follow the construction used in [13] to map matching problems to ranking problems (indeed the only difference
between our ranking model and that of [13] is that they use a max-margin estimator and we use MAP
in an exponential family.) Our edge feature vector $x_{ij}$ will be the product of the feature vector $\psi_i$
associated with document $i$, and a scalar $c_j$ (the choice of which will be explained below) associated
with ranking position $j$:

$$x_{ij} = \psi_i c_j. \qquad (16)$$

$\psi_i$ is dataset specific (see details below). From (10) and (16), we have $w_{ij} = c_j \langle \psi_i, \theta \rangle$, and training
proceeds as explained in Section 4.
Testing At test time, we are given a query q and its corresponding list of D associated documents.
We then have to solve the prediction problem, i.e.,
$$y^\star = \operatorname{argmax}_y \sum_{i=1}^{D} \langle x_{iy(i)}, \theta \rangle = \operatorname{argmax}_y \sum_{i=1}^{D} c_{y(i)} \langle \psi_i, \theta \rangle. \qquad (17)$$
Figure 2: Results of NDCG@k for state-of-the-art methods on TD2004 (left), TD2003 (middle) and
OHSUMED (right). This is best viewed in color.
We now notice that if the scalar $c_j = c(j)$, where $c$ is a non-increasing function of rank position
$j$, then (17) can be solved simply by sorting the values of $\langle \psi_i, \theta \rangle$ in decreasing order.¹ In other
words, the matching problem becomes one of ranking the values $\langle \psi_i, \theta \rangle$. Inference in our model is
therefore very fast (linear time).² In this setting it makes sense to interpret the quantity $\langle \psi_i, \theta \rangle$ as a
score of document $d_i$ for query $q$. This leaves open the question of which non-increasing function $c$
should be used. We do not solve this problem in this paper, and instead choose a fixed $c$. In theory
it is possible to optimize over $c$ during learning, but in that case the optimization problem would no
longer be convex.

We describe the results of our method on LETOR 2.0 [14], a publicly available
benchmark data collection for comparing learning to rank algorithms. It is comprised of three data
sets: OHSUMED, TD2003 and TD2004.
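The linear-time inference described above, scoring each document and sorting, can be sketched as follows (an illustration of mine with made-up features, not code from the paper):

```python
def rank_documents(psi, theta):
    """Prediction (17) with a non-increasing c(j): scoring each document by
    the inner product of its feature vector with theta and sorting in
    decreasing order yields the optimal assignment of documents to ranks.
    """
    scores = [sum(p * t for p, t in zip(psi_i, theta)) for psi_i in psi]
    order = sorted(range(len(psi)), key=lambda i: -scores[i])
    return order, scores

psi = [[1.0, 0.0], [0.0, 2.0], [1.0, 1.0]]   # per-document feature vectors
theta = [0.5, 1.0]
order, scores = rank_documents(psi, theta)
print(order)  # -> [1, 2, 0]: document indices from highest to lowest score
```

Here document 1 scores 2.0, document 2 scores 1.5 and document 0 scores 0.5, so the predicted ranking is 1, 2, 0.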
Data sets OHSUMED contains features extracted from query-document pairs in the OHSUMED
collection, a subset of MEDLINE, a database of medical publications. It contains 106 queries. For
each query there are a number of associated documents, with relevance degrees judged by humans
on three levels: definitely, possibly or not relevant. Each query-document pair is associated with a
25-dimensional feature vector $\psi_i$. The total number of query-document pairs is 16,140. TD2003
and TD2004 contain features extracted from the topic distillation tasks of TREC 2003 and TREC
2004, with 50 and 75 queries, respectively. Again, for each query there are a number of associated
documents, with relevance degrees judged by humans, but in this case only two levels are provided:
relevant or not relevant. Each query-document pair is associated with a 44 dimensional feature
vector $\psi_i$. The total number of query-document pairs is 49,171 for TD2003 and 74,170 for TD2004.
All datasets are already partitioned for 5-fold cross-validation. See [14] for more details.
Evaluation Metrics In order to measure the effectiveness of our method we use the normalized
discount cumulative gain (NDCG) measure [11] at rank position k, which is defined as
$$\mathrm{NDCG}@k = \frac{1}{Z} \sum_{j=1}^{k} \frac{2^{r(j)} - 1}{\log(1 + j)}, \qquad (18)$$

where $r(j)$ is the relevance of the $j$-th document in the list, and $Z$ is a normalization constant so that
a perfect ranking yields an NDCG score of 1.
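As a concrete check of (18), here is a minimal NDCG@k implementation (my own sketch; the normalizer Z is computed as the DCG of the ideal, relevance-sorted ordering, and the base of the logarithm cancels in the ratio):

```python
from math import log

def ndcg_at_k(relevances, k):
    """NDCG@k as in (18): the DCG of the given ordering divided by Z, the
    DCG of the ideal (relevance-sorted) ordering, so a perfect ranking
    scores exactly 1. Returns 0 for an all-zero relevance list.
    """
    def dcg(rs):
        return sum((2 ** r - 1) / log(1 + j) for j, r in enumerate(rs[:k], start=1))
    ideal = dcg(sorted(relevances, reverse=True))
    return dcg(relevances) / ideal if ideal > 0 else 0.0

print(ndcg_at_k([3, 2, 1, 0], k=4))  # -> 1.0 (already ideally ordered)
print(ndcg_at_k([0, 1, 2, 3], k=4))  # reversed ordering scores below 1
```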
1
If r(v) denotes the vector of ranks of entries of vector v, then ha, ?(b)i is maximized by the permutation
? ? such that r(a) = r(? ? (b)), a theorem due to Polya, Littlewood, Hardy and Blackwell [26].
2
Sorting the top k items of a list of D items takes O(k log k + D) time [17].
Table 1: Training times (per observation, in seconds, Intel Core2 2.4GHz) for the exponential model
and max-margin. Runtimes for M = 3, 4, 5 are from the ranking experiments, computed by full
enumeration; M = 20 corresponds to the image matching experiments, which use the sampler from
[10]. A problem of size 20 cannot be practically solved by full enumeration.
M     exponential model    max-margin
3     0.0006661            0.0008965
4     0.0011277            0.0016086
5     0.0030187            0.0015328
20    36.0300000           0.9334556
External Parameters The regularization constant $\lambda$ is chosen by 5-fold cross-validation, with the
partition provided by the LETOR package. All experiments are repeated 5 times to account for the
randomness of the sampling of the training data. We use $c(j) = M - j$ in all experiments.
Optimization To optimize (8) we use a standard BFGS Quasi-Newton method with a backtracking
line search, as described in [21].
Results For the first experiment, training was done on subsets sampled as described above, where
for each query $q_k$ we sampled $0.4 \cdot D(k) \cdot M$ subsets, therefore increasing the number of samples
linearly with $M$. For TD2003 we also trained with all possible subsets ($M = 2$(all) in the plots).
In Figure 2 we plot the results of our method (named RankMatch), for M = R, compared to those
achieved by a number of state-of-the-art methods which have published NDCG scores in at least two
of the datasets: RankBoost [6], RankSVM [7], FRank [28], ListNet [5], AdaRank [32], QBRank
[34], IsoRank [33], SortNet [24], StructRank [9] and C-CRF [23]. We also included a plot of our
implementation of DORM [13], using precisely the same resampling methodology and data for a
fair comparison. RankMatch performs among the best methods on both TD2004 and OHSUMED,
while on TD2003 it performs poorly (for low k) or fairly well (for high k).
We notice that there are four methods which only report results in two of the three datasets: the
two SortNet versions are only reported on TD2003 and TD2004, while StructRank and C-CRF
are only reported on TD2004 and OHSUMED. RankMatch compares similarly with SortNet and
StructRank on TD2004, similarly to C-CRF and StructRank on OHSUMED and similarly to the
two versions of SortNet on TD2003. This exhausts all the comparisons against the methods which
have results reported in only two datasets. A fairer comparison could be made if these methods had
their performance published for the respective missing dataset.
When compared to the methods which report results in all datasets, RankMatch entirely dominates
their performance on TD2004 and is second only to IsoRank on OHSUMED.
These results should be interpreted cautiously; [20] presents an interesting discussion about issues
with these datasets. Also, benchmarking of ranking algorithms is still in its infancy and we don't yet
have publicly available code for all of the competitive methods. We expect this situation to change
in the near future so that we are able to compare them on a fair and transparent basis.
Consistency In a second experiment we trained RankMatch with different training subset sizes,
starting with 0.03 · D(k) · M and going up to 1.0 · D(k) · M. Once again, we repeated the experiments
with DORM using precisely the same training subsets. The purpose here is to see whether we
observe a practical advantage of our method with increasing sample size, since statistical consistency
only provides an asymptotic indication. The results are plotted in Figure 3-right, where we can see
that, as more training data is available, RankMatch improves more saliently than DORM.
Runtime The runtime of our algorithm is competitive with that of max-margin for small graphs, such
as those that arise from the ranking application. For larger graphs, the use of the sampling algorithm
will result in much slower runtimes than those typically obtained in the max-margin framework.
This is certainly the benefit of the max-margin matching formulations of [4, 13]: it is much faster
for large graphs. Table 1 shows the runtimes for graphs of different sizes, for both estimators.
5.2 Image Matching
For our computer vision application we used a silhouette image from the Mythological Creatures
2D database (http://tosca.cs.technion.ac.il). We randomly selected 20 points on the silhouette as
our interest points and applied
shear to the image, creating 200 different images. We then randomly selected N pairs of images for
training, N for validation and 500 for testing, and trained our model to match the interest points
in the pairs; that is, given two images with corresponding points, we computed descriptors for
each pair i, j of points (one from each image) and learned θ such that the solution to the matching
problem (2) with the weights set to w_ij = ⟨x_ij, θ⟩ best matches the expected solution that a human
would manually provide. In this setup,

    x_ij = |ψ_i − ψ_j|²,    (19)

where |·|² denotes the elementwise squared difference and ψ_i is the Shape Context feature vector [1] for point i.
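The prediction step described above, choosing the assignment that maximizes the total matching weight, can be checked by brute force on tiny graphs. A sketch with hypothetical two-dimensional descriptors (real experiments require an efficient assignment solver such as the Hungarian algorithm, since enumeration is exponential in the number of points):

```python
from itertools import permutations

def pairwise_features(psi_left, psi_right):
    """x_ij = elementwise squared difference of the two point descriptors."""
    return {(i, j): [(a - b) ** 2 for a, b in zip(pi, pj)]
            for i, pi in enumerate(psi_left)
            for j, pj in enumerate(psi_right)}

def best_matching(x, theta, n):
    """argmax over permutations pi of sum_i w[i, pi(i)], w_ij = <x_ij, theta>.

    Exponential-time brute force: only sensible for very small n.
    """
    w = {ij: sum(a * t for a, t in zip(feats, theta)) for ij, feats in x.items()}
    return max(permutations(range(n)),
               key=lambda pi: sum(w[(i, pi[i])] for i in range(n)))

# Hypothetical descriptors for 3 points in each image (not Shape Context).
left = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
right = [[0.1, 0.0], [0.9, 0.1], [0.0, 1.1]]
x = pairwise_features(left, right)
# Negative weights on squared differences favor matching similar descriptors.
print(best_matching(x, theta=[-1.0, -1.0], n=3))  # -> (0, 1, 2)
```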
For a graph of this size computing the exact expectation is not feasible, so we used the sampling
method described in Section 4.3. Once again, the regularization constant was chosen by cross-validation. Given the fact that the MAP estimator is consistent while the max-margin estimator is
not, one is tempted to investigate the practical performance of both estimators as the sample size
grows. However, since consistency is only an asymptotic property, and also since the Hamming
loss is not the criterion optimized by either estimator, this does not imply a better large-sample
performance of MAP in real experiments. In any case, we present results with varying training set
sizes in Figure 3-left. The max-margin method is that of [4]. After a sufficiently large training set
size, our model seems to enjoy a slight advantage.
Figure 3: Performance with increasing sample size. Left: Hamming loss for different numbers of
training pairs in the image matching problem (test set size fixed to 500 pairs). Right: results of
NDCG@1 on the ranking dataset OHSUMED. This evidence is in agreement with the fact that our
estimator is consistent, while max-margin is not.
6 Conclusion and Discussion
We presented a method for learning max-weight bipartite matching predictors and applied it extensively to well-known document ranking datasets, obtaining state-of-the-art results. We also
illustrated, with an image matching application, that larger test problems can be solved, albeit
slowly, with a recently developed sampler. The method has a number of convenient features. First,
it consists of performing maximum-a-posteriori estimation in an exponential family model, which
results in a simple unconstrained convex optimization problem solvable by standard algorithms such
as BFGS. Second, the estimator is not only statistically consistent, but in practice it also seems to
benefit more from increasing sample sizes than its max-margin alternative. Finally, being fully probabilistic, the model can be easily integrated as a module in a Bayesian framework, for example. The
main direction for future research consists of finding more efficient ways to solve large problems.
This will most likely arise from appropriate exploitation of data sparsity in the permutation group.
References
[1] Belongie, S., & Malik, J. (2000). Matching with shape contexts. CBAIVL00.
[2] Belongie, S., Malik, J., & Puzicha, J. (2002). Shape matching and object recognition using shape contexts.
IEEE Trans. on PAMI, 24, 509–521.
[3] Burges, C. J. C., Shaked, T., Renshaw, E., Lazier, A., Deeds, M., Hamilton, N., & Hullender, G. (2005).
Learning to rank using gradient descent. ICML.
[4] Caetano, T. S., Cheng, L., Le, Q. V., & Smola, A. J. (2009). Learning graph matching. IEEE Trans. on
PAMI, 31, 1048–1058.
[5] Cao, Z., Qin, T., Liu, T.-Y., Tsai, M.-F., & Li, H. (2007). Learning to rank: from pairwise approach to
listwise approach. ICML
[6] Freund, Y., Iyer, R., Schapire, R. E., & Singer, Y. (2003). An efficient boosting algorithm for combining
preferences. J. Mach. Learn. Res., 4, 933–969.
[7] Herbrich, A., Graepel, T., & Obermayer, K. (2000). Large margin rank boundaries for ordinal regression.
In Advances in Large Margin Classifiers.
[8] Huang, B., & Jebara, T. (2007). Loopy belief propagation for bipartite maximum weight b-matching.
AISTATS.
[9] Huang, J. C., & Frey, B. J. (2008). Structured ranking learning using cumulative distribution networks. In
NIPS.
[10] Huber, M., & Law, J. (2008). Fast approximation of the permanent for very dense problems. SODA.
[11] Jarvelin, K., & Kekalainen, J. (2002). Cumulated gain-based evaluation of IR techniques. ACM Transactions on Information Systems, 20, 2002.
[12] Lafferty, J. D., McCallum, A., & Pereira, F. (2001). Conditional random fields: Probabilistic modeling
for segmenting and labeling sequence data. ICML.
[13] Le, Q., & Smola, A. (2007). Direct optimization of ranking measures. http://arxiv.org/abs/0704.3359.
[14] Liu, T.-Y., Xu, J., Qin, T., Xiong, W., & Li, H. (2007). Letor: Benchmark dataset for research on learning
to rank for information retrieval. LR4IR.
[15] Liu, Y., & Shen, X. (2005). Multicategory ψ-learning and support vector machine: computational tools.
J. Computational and Graphical Statistics, 14, 219–236.
[16] Liu, Y., & Shen, X. (2006). Multicategory ψ-learning. JASA, 101, 500–509.
[17] Martinez, C. (2004). Partial quicksort. SIAM.
[18] McAllester, D. (2007). Generalization bounds and consistency for structured labeling. Predicting Structured Data.
[19] Minc, H. (1978). Permanents. Addison-Wesley.
[20] Minka, T., & Robertson, S. (2008). Selection bias in the letor datasets. LR4IR.
[21] Nocedal, J., & Wright, S. J. (1999). Numerical optimization. Springer Series in Operations Research.
Springer.
[22] Papadimitriou, C. H., & Steiglitz, K. (1982). Combinatorial optimization: Algorithms and complexity.
New Jersey: Prentice-Hall.
[23] Qin, T., Liu, T.-Y., Zhang, X.-D., Wang, D.-S., & Li, H. (2009). Global ranking using continuous conditional random fields. NIPS.
[24] Rigutini, L., Papini, T., Maggini, M., & Scarselli, F. (2008). Sortnet: Learning to rank by a neural-based
sorting algorithm. LR4IR.
[25] Ryser, H. J. (1963). Combinatorial mathematics. The Carus Mathematical Monographs, No. 14, Mathematical Association of America.
[26] Sherman, S. (1951). On a Theorem of Hardy, Littlewood, Polya, and Blackwell. Proceedings of the
National Academy of Sciences, 37, 826–831.
[27] Taskar, B. (2004). Learning structured prediction models: a large-margin approach. Doctoral dissertation, Stanford University.
[28] Tsai, M., Liu, T., Qin, T., Chen, H., & Ma, W. (2007). Frank: A ranking method with fidelity loss. SIGIR.
[29] Tsochantaridis, I., Joachims, T., Hofmann, T., & Altun, Y. (2005). Large margin methods for structured
and interdependent output variables. JMLR, 6, 1453–1484.
[30] Valiant, L. G. (1979). The complexity of computing the permanent. Theor. Comput. Sci. (pp. 189–201).
[31] Wainwright, M. J., & Jordan, M. I. (2003). Graphical models, exponential families, and variational
inference (Technical Report 649). UC Berkeley, Department of Statistics.
[32] Xu, J., & Li, H. (2007). Adarank: a boosting algorithm for information retrieval. SIGIR.
[33] Zheng, Z., Zha, H., & Sun, G. (2008a). Query-level learning to rank using isotonic regression. LR4IR.
[34] Zheng, Z., Zha, H., Zhang, T., Chapelle, O., Chen, K., & Sun, G. (2008b). A general boosting method
and its application to learning ranking functions for web search. NIPS.
Sparse and Locally Constant Gaussian Graphical Models
Jean Honorio, Luis Ortiz, Dimitris Samaras
Department of Computer Science
Stony Brook University
Stony Brook, NY 11794
{jhonorio,leortiz,samaras}@cs.sunysb.edu
Nikos Paragios
Laboratoire MAS
Ecole Centrale Paris
Chatenay-Malabry, France
[email protected]
Rita Goldstein
Medical Department
Brookhaven National Laboratory
Upton, NY 11973
[email protected]
Abstract
Locality information is crucial in datasets where each variable corresponds to a
measurement in a manifold (silhouettes, motion trajectories, 2D and 3D images).
Although these datasets are typically under-sampled and high-dimensional, they
often need to be represented with low-complexity statistical models, which are
comprised of only the important probabilistic dependencies in the datasets. Most
methods attempt to reduce model complexity by enforcing structure sparseness.
However, sparseness cannot describe inherent regularities in the structure. Hence,
in this paper we first propose a new class of Gaussian graphical models which,
together with sparseness, imposes local constancy through ℓ1-norm penalization.
Second, we propose an efficient algorithm which decomposes the strictly convex
maximum likelihood estimation into a sequence of problems with closed form
solutions. Through synthetic experiments, we evaluate the closeness of the recovered models to the ground truth. We also test the generalization performance of
our method in a wide range of complex real-world datasets and demonstrate that
it captures useful structures such as the rotation and shrinking of a beating heart,
motion correlations between body parts during walking and functional interactions
of brain regions. Our method outperforms the state-of-the-art structure learning
techniques for Gaussian graphical models both for small and large datasets.
1 Introduction
Structure learning aims to discover the topology of a probabilistic network of variables such that
this network represents accurately a given dataset while maintaining low complexity. Accuracy
of representation is measured by the likelihood that the model explains the observed data, while
complexity of a graphical model is measured by its number of parameters. Structure learning faces
several challenges: the number of possible structures is super-exponential in the number of variables
while the required sample size might be even exponential. Therefore, finding good regularization
techniques is very important in order to avoid over-fitting and to achieve a better generalization
performance. In this paper, we propose local constancy as a prior for learning Gaussian graphical
models, which is natural for spatial datasets such as those encountered in computer vision [1, 2, 3].
For Gaussian graphical models, the number of parameters, the number of edges in the structure
and the number of non-zero elements in the inverse covariance or precision matrix are equivalent
measures of complexity. Therefore, several techniques focus on enforcing sparsity of the precision matrix. An approximation method proposed in [4] relied on a sequence of sparse regressions.
Maximum likelihood estimation with an ℓ1-norm penalty for encouraging sparseness is proposed in
[5, 6, 7]. The difference among those methods is the optimization technique: a sequence of boxconstrained quadratic programs in [5], solution of the dual problem by sparse regression in [6] or
an approximation via standard determinant maximization with linear inequality constraints in [7]. It
has been shown theoretically and experimentally, that only the covariance selection [5] as well as
graphical lasso [6] converge to the maximum likelihood estimator.
In datasets which are a collection of measurements for variables with some spatial arrangement,
one can define a local neighborhood for each variable or manifold. Such variables correspond to
points in silhouettes, pixels in 2D images or voxels in 3D images. Silhouettes define a natural
one-dimensional neighborhood in which each point has two neighbors on each side of the closed
contour. Similarly, one can define a four-pixel neighborhood for 2D images as well as six-pixel
neighborhood for 3D images. However, there is little research on spatial regularization for structure
learning. Some methods assume a one-dimensional spatial neighborhood (e.g. silhouettes) and that
variables far apart are only weakly correlated [8], interaction between a priori known groups of
variables as in [9], or block structures as in [10] in the context of Bayesian networks.
Our contribution in this paper is two-fold. First, we propose local constancy, which encourages
finding connectivities between two close or distant clusters of variables, instead of between isolated
variables. It does not heavily constrain the set of possible structures, since it only imposes restrictions of spatial closeness for each cluster independently, but not between clusters. We impose an
`1 -norm penalty for differences of spatially neighboring variables, which allows obtaining locally
constant models that preserve sparseness, unlike `2 -norm penalties. Our model is strictly convex
and therefore has a global minimum. Positive definiteness of the estimated precision matrix is also
guaranteed, since this is a necessary condition for the definition of a multivariate normal distribution.
Second, since optimization methods for structure learning on Gaussian graphical models [5, 6, 4, 7]
are unable to handle local constancy constraints, we propose an efficient algorithm by maximizing
with respect to one row and column of the precision matrix at a time. By taking directions involving
either one variable or two spatially neighboring variables, the problem reduces to minimization of a
piecewise quadratic function, which can be performed in closed form.
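The closed-form flavor of such one-dimensional subproblems can be made concrete with the simplest, single-kink case (our own illustration; the actual subproblems in the paper may involve several absolute-value terms and thus several kinks):

```python
def piecewise_quadratic_min(a, b, rho):
    """Closed-form minimizer of q(t) = (a/2) t^2 + b t + rho |t|, with a > 0.

    This is the classic soft-thresholding solution: a piecewise quadratic
    with one kink at t = 0 is minimized by shrinking -b/a toward zero.
    """
    if b < -rho:
        return -(b + rho) / a
    if b > rho:
        return -(b - rho) / a
    return 0.0  # the l1 kink absorbs small linear terms, giving exact zeros

# With a weak linear term the l1 penalty pins the coordinate to zero.
print(piecewise_quadratic_min(2.0, 0.3, 0.5))   # -> 0.0
# With a strong linear term the solution is shrunk toward zero by rho.
print(piecewise_quadratic_min(2.0, -1.5, 0.5))  # -> 0.5
```

This exact-zero behavior is what makes coordinate-wise updates with ℓ1 penalties produce sparse precision matrices.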
We initially test the ability of our method to recover the ground truth structure from data, of a
complex synthetic model which includes locally and not locally constant interactions as well as
independent variables. Our method outperforms the state-of-the-art structure learning techniques
[5, 6, 4] for datasets with both small and large number of samples. We further show that our method
has better generalization performance on real-world datasets. We demonstrate the ability of our
method to discover useful structures from datasets with a diverse nature of probabilistic relationships
and spatial neighborhoods: manually labeled silhouettes in a walking sequence, cardiac magnetic
resonance images (MRI) and functional brain MRI.
Section 2 introduces Gaussian graphical models as well as techniques for learning such structures
from data. Section 3 presents our sparse and locally constant Gaussian graphical models. Section 4
describes our structure learning algorithm. Experimental results on synthetic and real-world datasets
are shown and explained in Section 5. Main contributions and results are summarized in Section 6.
2 Background
In this paper, we use the notation in Table 1. For convenience, we define two new operators: the
zero structure operator and the diagonal excluded product.
A Gaussian graphical model [11] is a graph in which all random variables are continuous and jointly
Gaussian. This model corresponds to the multivariate normal distribution for N variables x ∈ R^N
with mean vector µ ∈ R^N and a covariance matrix Σ ∈ R^{N×N}, or equivalently x ∼ N(µ, Σ)
where Σ ≻ 0. Conditional independence in a Gaussian graphical model is simply reflected in the
zero entries of the precision matrix Ω = Σ⁻¹ [11]. Let Ω = {ω_{n1,n2}}; two variables x_{n1} and
x_{n2} are conditionally independent if and only if ω_{n1,n2} = 0. The precision matrix representation
is preferred because it allows detecting cases in which two seemingly correlated variables actually
depend on a third confounding variable.
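A small numerical check of this point: for a Gaussian chain x1 → x2 → x3 the covariance matrix is dense, yet the precision matrix is tridiagonal, so its zero entry encodes the conditional independence of x1 and x3 given x2. A self-contained sketch (the 3×3 inverse is hand-rolled via the adjugate to keep the example dependency-free):

```python
def inv3(m):
    """Inverse of a 3x3 matrix via the adjugate; enough for this example."""
    a, b, c = m[0]
    d, e, f = m[1]
    g, h, i = m[2]
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    adj = [[e * i - f * h, c * h - b * i, b * f - c * e],
           [f * g - d * i, a * i - c * g, c * d - a * f],
           [d * h - e * g, b * g - a * h, a * e - b * d]]
    return [[x / det for x in row] for row in adj]

# Covariance of the chain x1 ~ N(0,1), x2 = x1 + e2, x3 = x2 + e3
# (e2, e3 independent standard normals): every pair is marginally correlated.
sigma = [[1.0, 1.0, 1.0],
         [1.0, 2.0, 2.0],
         [1.0, 2.0, 3.0]]
omega = inv3(sigma)
# The precision matrix is tridiagonal: omega[0][2] == 0 even though
# sigma[0][2] == 1, i.e. x1 and x3 are conditionally independent given x2.
print(omega[0][2])  # -> 0.0
```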
‖c‖_1 : ℓ1-norm of c ∈ R^N, i.e. Σ_n |c_n|
‖c‖_∞ : ℓ∞-norm of c ∈ R^N, i.e. max_n |c_n|
|c| : entrywise absolute value of c ∈ R^N, i.e. (|c_1|, |c_2|, . . . , |c_N|)^T
diag(c) ∈ R^{N×N} : matrix with the elements of c ∈ R^N on its diagonal
‖A‖_1 : ℓ1-norm of A ∈ R^{M×N}, i.e. Σ_{mn} |a_{mn}|
⟨A, B⟩ : scalar product of A, B ∈ R^{M×N}, i.e. Σ_{mn} a_{mn} b_{mn}
A ∘ B ∈ R^{M×N} : Hadamard or entrywise product of A, B ∈ R^{M×N}, i.e. (A ∘ B)_{mn} = a_{mn} b_{mn}
J(A) ∈ R^{M×N} : zero structure operator of A ∈ R^{M×N}, defined by the Iverson bracket j_{mn}(A) = [a_{mn} = 0]
A ⊘ B ∈ R^{M×N} : diagonal excluded product of A ∈ R^{M×N} and B ∈ R^{N×N}, i.e. A ⊘ B = J(A) ∘ (AB); it has the property that no diagonal entry of B is used in A ⊘ B
A ≻ 0 : A ∈ R^{N×N} is symmetric and positive definite
diag(A) ∈ R^{N×N} : matrix with the diagonal elements of A ∈ R^{N×N} only
vec(A) ∈ R^{MN} : vector containing all the elements of A ∈ R^{M×N}
Table 1: Notation used in this paper.
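The two new operators are simple to realize; a plain-Python sketch (our own illustration, where ⊘ denotes the diagonal excluded product):

```python
def zero_structure(A):
    """J(A): the Iverson bracket [a_mn == 0], applied entrywise."""
    return [[1.0 if a == 0 else 0.0 for a in row] for row in A]

def matmul(A, B):
    """Ordinary matrix product of nested-list matrices."""
    return [[sum(a * B[k][j] for k, a in enumerate(row))
             for j in range(len(B[0]))] for row in A]

def diag_excluded_product(A, B):
    """A diamond B = J(A) entrywise-times (A B): the product A B, masked so
    that only entries in columns where A's row is zero survive."""
    J, AB = zero_structure(A), matmul(A, B)
    return [[j * ab for j, ab in zip(jr, abr)] for jr, abr in zip(J, AB)]

# One derivative row: +1/-1 on variables 0 and 1 of a 3-variable model.
D = [[1.0, -1.0, 0.0]]
Omega = [[2.0, -1.0, 0.0],
         [-1.0, 2.0, -1.0],
         [0.0, -1.0, 1.0]]
# D @ Omega = [3, -3, 1]; masking columns 0 and 1 keeps only column 2,
# so no diagonal entry of Omega enters the result.
print(diag_excluded_product(D, Omega))  # -> [[0.0, 0.0, 1.0]]
```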
The concept of robust estimation by performing covariance selection was first introduced in [12]
where the number of parameters to be estimated is reduced by setting some elements of the precision
matrix ? to zero. Since finding the most sparse precision matrix which fits a dataset is a NP-hard
problem [5], in order to overcome it, several `1 -regularization methods have been proposed for
learning Gaussian graphical models from data.
Covariance selection [5] starts with a dense sample covariance matrix Σ̂ and fits a sparse precision matrix Ω by solving a maximum likelihood estimation problem with an ℓ1-norm penalty which
encourages sparseness of the precision matrix or conditional independence among variables:

    max_{Ω ≻ 0}  log det Ω − ⟨Σ̂, Ω⟩ − ρ ‖Ω‖_1    (1)

for some ρ > 0. Covariance selection computes small perturbations on the sample covariance
matrix such that it generates a sparse precision matrix, which results in a box-constrained quadratic
programming. This method has moderate run time.
The Meinshausen-B?uhlmann approximation [4] obtains the conditional dependencies by performing
a sparse linear regression for each variable, by using lasso regression [13]. This method is very fast
but does not yield good estimates for lightly regularized models, as noted in [6]. The constrained
optimization version of eq.(1) is solved in [7] by applying a standard determinant maximization
with linear inequality constraints, which requires iterative linearization of k?k1 . This technique in
general does not yield the maximum likelihood estimator, as noted in [14]. The graphical lasso
technique [6] solves the dual form of eq.(1), which results in a lasso regression problem. This
method has run times comparable to [4] without sacrificing accuracy in the maximum likelihood
estimator.
Structure learning through ℓ1-regularization has also been proposed for different types of graphical
models: Markov random fields (MRFs) by a clique selection heuristic and approximate inference
[15]; Bayesian networks on binary variables by logistic regression [16]; Conditional random fields
by pseudo-likelihood and block regularization in order to penalize all parameters of an edge simultaneously [17]; and Ising models, i.e. MRFs on binary variables with pairwise interactions, by logistic
regression [18] which is similar in spirit to [4].
There is little work on spatial regularization for structure learning. Adaptive banding on the
Cholesky factors of the precision matrix has been proposed in [8]. Instead of using the traditional
lasso penalty, a nested lasso penalty is enforced. Entries at the right end of each row are promoted to
zero faster than entries close to the diagonal. The main drawback of this technique is the assumption
that the more far apart two variables are the more likely they are to be independent. Grouping of
entries in the precision matrix into disjoint subsets has been proposed in [9]. Such subsets can model
for instance dependencies between different groups of variables in the case of block structures. Although such a formulation allows for more general settings, its main disadvantage is the need for an
a priori segmentation of the entries in the precision matrix.
Related approaches have been proposed for Bayesian networks. In [10] it is assumed that variables
belong to unknown classes and probabilities of having edges among different classes were enforced
to account for structure regularity, thus producing block structures only.
3 Sparse and Locally Constant Gaussian Graphical Models
First, we describe our local constancy assumption and its use to model the spatial coherence of
dependence/independence relationships. Local constancy is defined as follows: if variable xn1 is
dependent (or independent) of variable xn2 , then a spatial neighbor xn01 of xn1 is more likely to
be dependent (or independent) of xn2 . This encourages finding connectivities between two close or
distant clusters of variables, instead of between isolated variables. Note that local constancy imposes
restrictions of spatial closeness for each cluster independently, but not between clusters.
In this paper, we impose constraints on the difference of entries in the precision matrix Ω ∈ R^{N×N}
for N variables, which correspond to spatially neighboring variables. Let Σ̂ ∈ R^{N×N} be the dense
sample covariance matrix and D ∈ R^{M×N} be the discrete derivative operator on the manifold,
where M ∈ O(N) is the number of spatial neighborhood relationships. For instance, in a 2D image,
M is the number of pixel pairs that are spatial neighbors on the manifold. More specifically, if pixel
n1 and pixel n2 are spatial neighbors, we include a row m in D such that d_{m,n1} = 1, d_{m,n2} = −1 and
d_{m,n3} = 0 for n3 ∉ {n1, n2}. The following penalized maximum likelihood estimation is proposed:

    max_{Ω ≻ 0}  log det Ω − ⟨Σ̂, Ω⟩ − ρ ‖Ω‖_1 − τ ‖D ⊘ Ω‖_1    (2)

for some ρ, τ > 0. The first two terms model the quality of the fit of the estimated multivariate
normal distribution to the dataset. The third term ρ‖Ω‖_1 encourages sparseness while the fourth
term τ‖D ⊘ Ω‖_1 encourages local constancy in the precision matrix by penalizing the differences
of spatially neighboring variables.
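To make the local-constancy penalty concrete, the following dependency-free sketch (our own illustration, not the authors' code) builds the derivative operator D for a one-dimensional manifold and shows that the penalty, the ℓ1-norm of the diagonal excluded product of D and Ω, favors connecting clusters of spatial neighbors over scattered, isolated links:

```python
def chain_derivative_operator(n):
    """D for a 1-D manifold: one +1/-1 row per pair of adjacent variables."""
    rows = []
    for m in range(n - 1):
        row = [0.0] * n
        row[m], row[m + 1] = 1.0, -1.0
        rows.append(row)
    return rows

def local_constancy_penalty(omega, D):
    """l1-norm of the diagonal excluded product of D and omega."""
    total = 0.0
    for drow in D:
        for col in range(len(omega)):
            if drow[col] == 0:  # zero structure mask J(D) excludes diagonals
                total += abs(sum(d * omega[r][col] for r, d in enumerate(drow)))
    return total

D = chain_derivative_operator(4)
# Two clusters of neighbors, {0,1} and {2,3}, connected to each other:
# spatially neighboring variables share the same connectivity pattern.
omega_clustered = [[2.0, 0.0, 1.0, 1.0],
                   [0.0, 2.0, 1.0, 1.0],
                   [1.0, 1.0, 2.0, 0.0],
                   [1.0, 1.0, 0.0, 2.0]]
# Same number of off-diagonal links, but scattered between isolated variables.
omega_scattered = [[2.0, 0.0, 1.0, 0.0],
                   [0.0, 2.0, 0.0, 1.0],
                   [1.0, 0.0, 2.0, 0.0],
                   [0.0, 1.0, 0.0, 2.0]]
print(local_constancy_penalty(omega_clustered, D),
      local_constancy_penalty(omega_scattered, D))  # -> 2.0 6.0
```

The clustered structure pays only a boundary cost between the two groups, while the scattered structure is penalized at every neighbor pair.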
In conjunction with the ℓ1-norm penalty for sparseness, we introduce an ℓ1-norm penalty for local
constancy. As discussed further in [19], ℓ1-norm penalties lead to locally constant models which
preserve sparseness, whereas ℓ2-norm penalties of differences fail to do so.
The use of the diagonal excluded product for penalizing differences instead of the regular product of
matrices, is crucial. The regular product of matrices would penalize the difference between the diagonal and off-diagonal entries of the precision matrix, and potentially destroy positive definiteness
of the solution for strongly regularized models.
Even though the choice of the linear operator in eq.(2) does not affect the positive definiteness
properties of the estimated precision matrix or the optimization algorithm, in the following Section
4, we discuss positive definiteness properties and develop an optimization algorithm for the specific
case of the discrete derivative operator D.
4 Coordinate-Direction Descent Algorithm
Positive definiteness of the precision matrix is a necessary condition for the definition of a multivariate normal distribution. Furthermore, strict convexity is a very desirable property in optimization,
since it ensures the existence of a unique global minimum. Notice that the penalized maximum likelihood estimation problem in eq.(2) is strictly convex due to the convexity properties of log det Ω on
the space of symmetric positive definite matrices [20]. Maximization can be performed with respect
to one row and column of the precision matrix Ω at a time. Without loss of generality, we use the
last row and column in our derivation, since permutation of rows and columns is always possible.
Also, note that rows in D can be freely permuted without affecting the objective function. Let:
    Ω = [ W    y ]      Σ̂ = [ S    u ]      D = [ D₁   0_{M−L} ]
        [ yᵀ   z ]           [ uᵀ   v ]          [ D₂   d₃      ]        (3)

where W, S ∈ R^{(N−1)×(N−1)}, y, u ∈ R^{N−1}, d₃ ∈ R^L is a vector with all entries different from zero
(which requires a permutation of rows in D), D₁ ∈ R^{(M−L)×(N−1)} and D₂ ∈ R^{L×(N−1)}.
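The partition in eq.(3) is plain matrix slicing: the last row and column are peeled off, leaving the block W. A minimal sketch (the function name is ours):

```python
import numpy as np

def split_last(Omega):
    """Split a symmetric matrix into the blocks [[W, y], [y^T, z]] of eq.(3)."""
    W = Omega[:-1, :-1]   # leading (N-1) x (N-1) block
    y = Omega[:-1, -1]    # last column without its diagonal entry
    z = Omega[-1, -1]     # bottom-right scalar
    return W, y, z

Omega = np.array([[4.0, 1.0, 0.5],
                  [1.0, 3.0, 0.2],
                  [0.5, 0.2, 2.0]])
W, y, z = split_last(Omega)
```

Because permutation of rows and columns is always possible, iterating over variables amounts to permuting the target variable into the last position and applying the same split.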
In terms of the variables y, z and the constant matrix W, the penalized maximum likelihood estimation problem in eq.(2) can be reformulated as:

    max_{Ω ≻ 0}  log(z − yᵀW⁻¹y) − 2uᵀy − (v + ρ)z − 2ρ‖y‖₁ − τ‖Ay − b‖₁        (4)

where ‖Ay − b‖₁ can be written in an extended form:

    ‖Ay − b‖₁ = ‖D₁y‖₁ + ‖vec(J(D₂) ⊙ (d₃yᵀ + D₂W))‖₁        (5)

Intuitively, the term ‖D₁y‖₁ penalizes differences across different rows of Ω which affect only values in y, while the term ‖vec(J(D₂) ⊙ (d₃yᵀ + D₂W))‖₁ penalizes differences across different
columns of Ω which affect values of y as well as W.
It can be shown that the precision matrix Ω is positive definite since its Schur complement
z − yᵀW⁻¹y is positive. By maximizing eq.(4) with respect to z, we get:

    z − yᵀW⁻¹y = 1/(v + ρ)        (6)

and since v > 0 and ρ > 0, this implies that the Schur complement in eq.(6) is positive.
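This positive definiteness argument can be checked numerically: if W ≻ 0 and z is chosen so that the Schur complement equals 1/(v + ρ) as in eq.(6), the assembled matrix is positive definite. A small sketch with illustrative values (all names are ours):

```python
import numpy as np

def assemble(W, y, z):
    """Rebuild the symmetric matrix [[W, y], [y^T, z]] from its blocks."""
    N = W.shape[0] + 1
    Omega = np.empty((N, N))
    Omega[:-1, :-1] = W
    Omega[:-1, -1] = y
    Omega[-1, :-1] = y
    Omega[-1, -1] = z
    return Omega

W = np.array([[2.0, 0.3],
              [0.3, 1.5]])          # positive definite block
y = np.array([0.4, -0.2])
v, rho = 1.0, 0.5
# Optimal z from eq.(6): the Schur complement equals 1/(v + rho) > 0.
z = 1.0 / (v + rho) + y @ np.linalg.solve(W, y)
Omega = assemble(W, y, z)
schur = z - y @ np.linalg.solve(W, y)
eigs = np.linalg.eigvalsh(Omega)    # all positive when W > 0 and schur > 0
```

The eigenvalue check confirms the standard Schur-complement criterion: Ω ≻ 0 if and only if W ≻ 0 and z − yᵀW⁻¹y > 0.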
Maximization with respect to one variable at a time leads to a strictly convex, non-smooth, piecewise
quadratic function. By replacing the optimal value for z given by eq.(6) into the objective function
in eq.(4), we get:

    min_{y ∈ R^{N−1}}  (1/2) yᵀ((v + ρ)W⁻¹)y + uᵀy + ρ‖y‖₁ + (τ/2)‖Ay − b‖₁        (7)
Since the objective function in eq.(7) is non-smooth, its derivative is not continuous and therefore
methods such as gradient descent cannot be applied. Although coordinate descent methods [5, 6]
are suitable when only sparseness is enforced, they are not when local constancy is encouraged. As
shown in [21], when penalizing an `1 -norm of differences, a coordinate descent algorithm can get
stuck at sharp corners of the non-smooth optimization function; the resulting coordinates are stationary only under single-coordinate moves but not under diagonal moves involving two coordinates at
a time.
For the discrete derivative operator D used in the penalized maximum likelihood estimation problem
in eq.(2), it suffices to take directions involving either one variable, g = (0, …, 0, 1, 0, …, 0)ᵀ, or
two spatially neighboring variables, g = (0, …, 0, 1, 0, …, 0, 1, 0, …, 0)ᵀ, such that the 1s appear in
the positions corresponding to the two neighboring variables. Finally, assuming an initial value y₀ and
a direction g, the objective function in eq.(7) reduces to finding the t in y(t) = y₀ + tg that
minimizes:
    min_{t ∈ R}  (1/2) p t² + q t + Σₘ rₘ |t − sₘ|

    p = (v + ρ) gᵀW⁻¹g ,   q = ((v + ρ)W⁻¹y₀ + u)ᵀg        (8)

    r = [ ρ|g| ; (τ/2)|Ag| ] ,   s = [ −diag(g)⁻¹y₀ ; −diag(Ag)⁻¹(Ay₀ − b) ]
For simplicity of notation, we assume that r, s ∈ R^M use only the non-zero entries of g and Ag in
their definition in eq.(8). We sort and remove duplicate values in s, and propagate the changes to r by
adding the entries corresponding to the duplicate values in s. Note that these modifications
do not change the objective function, but they simplify its optimization. The resulting minimization
problem in eq.(8) is convex, non-smooth and piecewise quadratic. Furthermore, since the objective function is quadratic on each interval [−∞; s₁], [s₁; s₂], …, [s_{M−1}; s_M], [s_M; +∞], it admits
a closed form solution.
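Because the one-dimensional objective is convex and piecewise quadratic, its exact minimizer can be found by testing the stationary point of each quadratic piece together with the breakpoints sₘ. A minimal sketch of this closed-form search (the function name is ours):

```python
import numpy as np

def piecewise_quadratic_min(p, q, r, s):
    """Exact minimizer of f(t) = 0.5*p*t^2 + q*t + sum_m r[m]*|t - s[m]|, with p > 0, r >= 0."""
    order = np.argsort(s)
    s = np.asarray(s, dtype=float)[order]
    r = np.asarray(r, dtype=float)[order]
    M = len(s)
    f = lambda t: 0.5 * p * t * t + q * t + np.sum(r * np.abs(t - s))
    best_t, best_f = None, np.inf
    # On the interval (s[k-1], s[k]) the sign pattern of |t - s[m]| is fixed:
    # +1 for the k breakpoints below, -1 for the M-k breakpoints above.
    for k in range(M + 1):
        shift = r[:k].sum() - r[k:].sum()
        t = -(q + shift) / p                 # stationary point of that quadratic piece
        lo = -np.inf if k == 0 else s[k - 1]
        hi = np.inf if k == M else s[k]
        if lo <= t <= hi and f(t) < best_f:
            best_t, best_f = t, f(t)
    for t in s:                               # kinks are also candidate minimizers
        if f(t) < best_f:
            best_t, best_f = t, f(t)
    return best_t

# f(t) = t^2 - t + 3|t|: the slope jump at t = 0 dominates, so the minimizer is 0.
t_star = piecewise_quadratic_min(p=2.0, q=-1.0, r=[3.0], s=[0.0])
```

Sorting the breakpoints up front mirrors the sort-and-deduplicate step described above; each interval is then handled in constant time.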
The coordinate-direction descent algorithm is presented in detail in Table 2. A careful implementation of the algorithm allows obtaining a time complexity of O(KN³) for K iterations and N
variables, in which W⁻¹, W⁻¹y and Ay are updated at each iteration. In our experiments, the
Coordinate-direction descent algorithm

1. Given a dense sample covariance matrix Σ̂, sparseness parameter ρ, local constancy parameter
   τ and a discrete derivative operator D, find the precision matrix Ω ≻ 0 that maximizes:
       log det Ω − ⟨Σ̂, Ω⟩ − ρ‖Ω‖₁ − τ‖D ⊛ Ω‖₁
2. Initialize Ω = diag(Σ̂)⁻¹
3. For each iteration 1, …, K and each variable 1, …, N
   (a) Split Ω into W, y, z and Σ̂ into S, u, v as described in eq.(3)
   (b) Update W⁻¹ by using the Sherman-Woodbury-Morrison formula (note that when iterating from one variable to the next, only one row and column change in the matrix W)
   (c) Transform the local constancy regularization term from D into A and b as described in eq.(5)
   (d) Compute W⁻¹y and Ay
   (e) For each direction g involving either one variable or two spatially neighboring variables
       i. Find the t that minimizes eq.(8) in closed form
       ii. Update y ← y + tg
       iii. Update W⁻¹y ← W⁻¹y + tW⁻¹g
       iv. Update Ay ← Ay + tAg
   (f) Update z ← 1/(v + ρ) + yᵀW⁻¹y

Table 2: Coordinate-direction descent algorithm for learning sparse and locally constant Gaussian graphical
models.
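Step 3(b) relies on rank-one inverse updates: the Sherman-Morrison identity gives (A + uvᵀ)⁻¹ = A⁻¹ − A⁻¹u vᵀA⁻¹ / (1 + vᵀA⁻¹u) without re-inverting, and a full row-and-column replacement can be composed from such updates. A sketch of the identity itself, with illustrative matrices:

```python
import numpy as np

def sherman_morrison(A_inv, u, v):
    """Inverse of (A + u v^T), given A^{-1}, via the Sherman-Morrison formula."""
    Au = A_inv @ u
    vA = v @ A_inv
    return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

# Tridiagonal test matrix; the update adds 1 to the (0, 1) entry.
A = np.array([[2.0, 0.5, 0.0, 0.0],
              [0.5, 2.0, 0.5, 0.0],
              [0.0, 0.5, 2.0, 0.5],
              [0.0, 0.0, 0.5, 2.0]])
u = np.array([1.0, 0.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0, 0.0])
updated = sherman_morrison(np.linalg.inv(A), u, v)
```

Each update costs O(N²) instead of the O(N³) of a fresh inversion, which is what makes the O(KN³) total complexity quoted above attainable.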
[Figure 1: panels (a)-(d); see caption below.]
Figure 1: (a) Ground truth model on an open contour manifold. Spatial neighbors are connected with black
dashed lines. Positive interactions are shown in blue, negative interactions in red. The model contains two
locally constant interactions between (x1 , x2 ) and (x6 , x7 ), and between (x4 , x5 ) and (x8 , x9 ), a not locally
constant interaction between x1 and x4 , and an independent variable x3 ; (b) colored precision matrix of the
ground truth, red for negative entries, blue for positive entries; learnt structure from (c) small and (d) large
datasets. Note that for large datasets all connections are correctly recovered.
algorithm converges quickly, usually within K = 10 iterations. The polynomial dependency on the number of variables, O(N³), is expected since we cannot produce an algorithm faster than computing
the inverse of the sample covariance in the case of an infinite sample.
Finally, in the spirit of [5], a method for reducing the size of the original problem is presented.
Given a P-dimensional spatial neighborhood or manifold (e.g. P = 1 for silhouettes, P = 2 for
a four-pixel neighborhood on 2D images, P = 3 for a six-pixel neighborhood on 3D images), the
objective function in eq.(7) has the maximizer y = 0 for variables for which ‖u‖∞ ≤ ρ − Pτ. Since
this condition does not depend on specific entries in the iterative estimation of the precision matrix,
this property can be used to reduce the size of the problem in advance by removing such variables.
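Assuming the screening condition ‖u‖∞ ≤ ρ − Pτ (the exact threshold is partially garbled in the source, so treat it as a reconstruction), the pre-filtering step could look like the following sketch, where u is the off-diagonal part of each variable's column of the sample covariance (all names are ours):

```python
import numpy as np

def screen_variables(Sigma_hat, rho, tau, P):
    """Split variables into kept and dropped sets using the (reconstructed) test
    ||u||_inf <= rho - P*tau, where u is the variable's off-diagonal covariance column."""
    N = Sigma_hat.shape[0]
    keep, drop = [], []
    for n in range(N):
        u = np.delete(Sigma_hat[:, n], n)  # off-diagonal entries of column n
        (drop if np.max(np.abs(u)) <= rho - P * tau else keep).append(n)
    return keep, drop

Sigma_hat = np.array([[1.00, 0.05, 0.60],
                      [0.05, 1.00, 0.02],
                      [0.60, 0.02, 1.00]])
# Threshold is rho - P*tau = 0.15: variable 1 is only weakly correlated and is dropped.
keep, drop = screen_variables(Sigma_hat, rho=0.2, tau=0.05, P=1)
```

Because the test depends only on the fixed sample covariance, it can be run once before the iterative estimation begins.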
5 Experimental Results
Convergence to Ground Truth. We begin with a small synthetic example to test the ability of
the method to recover the ground truth structure from data, in a complex scenario in which
our method has to deal with both locally and not locally constant interactions as well as independent
variables. The ground truth Gaussian graphical model is shown in Figure 1 and contains 9 variables
arranged in an open contour manifold.
In order to measure the closeness of the recovered models to the ground truth, we measure the
Kullback-Leibler divergence, average precision (one minus the fraction of falsely included edges),
average recall (one minus the fraction of falsely excluded edges) as well as the Frobenius norm between the recovered model and the ground truth. For comparison purposes, we picked two of the
[Figure 2: bar charts of Kullback-Leibler divergence, precision, recall and Frobenius norm for each method (Full, MB-or, MB-and, CovSel, GLasso, SLCGGM, Indep); see caption below.]
Figure 2: Kullback-Leibler divergence with respect to the best method, average precision, recall and Frobenius
norm between the recovered model and the ground truth. Our method (SLCGGM) outperforms the fully connected model (Full), the Meinshausen-Bühlmann approximation (MB-or, MB-and), covariance selection (CovSel),
and graphical lasso (GLasso) for small datasets (blue solid line) and for large datasets (red dashed line). The
fully independent model (Indep) resulted in relative divergences of 2.49 for small and 113.84 for large datasets.
state-of-the-art structure learning techniques: covariance selection [5] and graphical lasso [6], since
it has been shown theoretically and experimentally that they both converge to the maximum likelihood estimator. We also test the Meinshausen-Bühlmann approximation [4]. The fully connected as
well as the fully independent model are also included as baseline methods.
Two different scenarios are tested: small datasets of four samples, and large datasets of 400 samples.
Under each scenario, 50 datasets are randomly generated from the ground truth Gaussian graphical
model. It can be concluded from Figure 2 that our method outperforms the state-of-the-art structure
learning techniques both for small and large datasets. This is due to the fact that the ground truth
data contains locally constant interactions, and our method imposes a prior for local constancy.
Although this is a complex scenario which also contains not locally constant interactions as well as
an independent variable, our method can recover a more plausible model when compared to other
methods. Note that even though other methods may exhibit a higher recall for small datasets, our
method consistently recovers a better probability distribution.
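For concreteness, these evaluation metrics can be computed as follows. The sketch below (all names are ours) evaluates the KL divergence between two zero-mean Gaussians given their precision matrices, and the edge precision/recall defined above:

```python
import numpy as np

def gaussian_kl(Omega_true, Omega_est):
    """KL( N(0, Omega_true^{-1}) || N(0, Omega_est^{-1}) ) for zero-mean Gaussians."""
    N = Omega_true.shape[0]
    trace_term = np.trace(Omega_est @ np.linalg.inv(Omega_true))
    _, ld_true = np.linalg.slogdet(Omega_true)
    _, ld_est = np.linalg.slogdet(Omega_est)
    return 0.5 * (trace_term - N + ld_true - ld_est)

def edge_precision_recall(Omega_true, Omega_est, eps=1e-8):
    """Precision and recall of the recovered off-diagonal edges (nonzero entries)."""
    iu = np.triu_indices_from(Omega_true, k=1)
    true_e = np.abs(Omega_true[iu]) > eps
    est_e = np.abs(Omega_est[iu]) > eps
    tp = np.sum(true_e & est_e)
    precision = tp / max(est_e.sum(), 1)   # 1 - fraction of falsely included edges
    recall = tp / max(true_e.sum(), 1)     # 1 - fraction of falsely excluded edges
    return precision, recall

# Sanity check: a model compared against itself has zero divergence and perfect edges.
Omega = np.array([[2.0, 0.5, 0.0],
                  [0.5, 2.0, 0.0],
                  [0.0, 0.0, 1.0]])
kl = gaussian_kl(Omega, Omega)
prec, rec = edge_precision_recall(Omega, Omega)
```

The Frobenius-norm metric from the same comparison is simply `np.linalg.norm(Omega_true - Omega_est)`.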
A visual comparison of the ground truth versus the best recovered model by our method from small
and large datasets is shown in Figure 1. The image shows the precision matrix in which red squares
represent negative entries, while blue squares represent positive entries. There is very little difference between the ground truth and the recovered model from large datasets. Although the model
is not fully recovered from small datasets, our technique performs better than the Meinshausen-Bühlmann approximation, covariance selection and graphical lasso in Figure 2.
Real-World Datasets. In the following experiments, we demonstrate the ability of our method to
discover useful structures from real-world datasets. Datasets with a diverse nature of probabilistic
relationships are included in our experiments: from cardiac MRI [22], our method recovers global
deformation in the form of rotation and shrinking; from a walking sequence1 , our method finds the
long-range interactions between different parts; and from functional brain MRI [23], our method
recovers functional interactions between different regions and discovers differences in the processing
of monetary rewards between cocaine-addicted subjects and healthy control subjects. Each dataset
is also diverse in the type of spatial neighborhood: one-dimensional for silhouettes in a walking
sequence, two-dimensional for cardiac MRI and three-dimensional for functional brain MRI.
Generalization. Cross-validation was performed in order to measure the generalization performance of our method in estimating the underlying distribution. Each dataset was randomly split
into five sets. On each round, four sets were used for training and the remaining set was used for
measuring the log-likelihood. Table 3 shows that our method consistently outperforms techniques
that encourage sparsity only. This is strong evidence that datasets that are measured over a spatial manifold are locally constant, as well as that our method is a good regularization technique
that avoids over-fitting and allows for better generalization. Another interesting fact is that for the
brain MRI dataset, which is high dimensional and contains a small number of samples, the model
that assumes full independence performed better than the Meinshausen-Bühlmann approximation,
covariance selection and graphical lasso. Similar observations have already been made in [24, 25],
where it was found that assuming independence often performs better than learning dependencies
among variables.
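The cross-validation protocol above can be sketched as follows (illustrative; `fit` stands for any of the compared estimators, shown here with the fully independent baseline, and all names are ours):

```python
import numpy as np

def gaussian_loglik(X, Omega):
    """Total log-likelihood of the rows of X under N(0, Omega^{-1})."""
    N = Omega.shape[0]
    _, logdet = np.linalg.slogdet(Omega)
    quad = np.einsum('ij,jk,ik->i', X, Omega, X)  # x_i^T Omega x_i per sample
    return np.sum(0.5 * (logdet - N * np.log(2 * np.pi) - quad))

def five_fold_cv_loglik(X, fit, seed=0):
    """Five-fold cross-validated test log-likelihood, mirroring the paper's protocol."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    folds = np.array_split(idx, 5)
    total = 0.0
    for k in range(5):
        test = folds[k]
        train = np.concatenate([folds[j] for j in range(5) if j != k])
        Sigma_hat = np.cov(X[train], rowvar=False, bias=True)
        total += gaussian_loglik(X[test], fit(Sigma_hat))
    return total

# Baseline "Indep" estimator: a diagonal precision matrix, as in the experiments.
indep = lambda S: np.diag(1.0 / np.diag(S))
X = np.random.default_rng(1).standard_normal((100, 4))
score = five_fold_cv_loglik(X, indep)
```

Swapping `indep` for a sparse or locally constant estimator reproduces the comparison reported in Table 3, up to the choice of regularization parameters.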
¹ Human Identification at a Distance dataset: http://www.cc.gatech.edu/cpl/projects/hid/
[Figure 3: image panels; see caption below.]
Figure 3: Real-world datasets: cardiac MRI displacement (a) at full contraction and (b) at full expansion, (c)
2D spatial manifold and (d) learnt structure, which captures contraction and expansion (in red), and similar displacements between neighbor pixels (in blue); (e) silhouette manifold and (f) learnt structure from a manually
labeled walking sequence, showing similar displacements from each independent leg (in blue) and opposite displacements between both legs as well as between hands and feet (in red); and structures learnt from functional
brain MRI in a monetary reward task for (g) drug addicted subjects with more connections in the cerebellum
(in yellow) versus (h) control subjects with more connections in the prefrontal cortex (in green).
Method    Synthetic    Cardiac MRI   Walking Sequence   Brain MRI (Drug-addicted)   Brain MRI (Control)
Indep     -6428.23     -5150.58      -12957.72          -324724.24                  -302729.54
MB-and    -5595.87*    -5620.45      -12542.15          -418605.02                  -317034.67
MB-or     -5595.13*    -4135.98*     -11317.24          -398725.04                  -298186.66
CovSel    -5626.32     -5044.41      -12051.51          -409402.60                  -300829.98
GLasso    -5625.79     -5041.52      -12035.50          -413176.45                  -305307.25
SLCGGM    -5623.52     -4017.56      -10718.62          -297318.61                  -278678.35

Table 3: Cross-validated log-likelihood on the testing set. Our method (SLCGGM) outperforms the
Meinshausen-Bühlmann approximation (MB-and, MB-or), covariance selection (CovSel), graphical lasso
(GLasso) and the fully independent model (Indep). Values marked with an asterisk are not statistically significantly different from our method.
6 Conclusions and Future Work
In this paper, we proposed local constancy for Gaussian graphical models, which encourages finding
probabilistic connectivities between two close or distant clusters of variables, instead of between
isolated variables. We introduced an `1 -norm penalty for local constancy into a strictly convex
maximum likelihood estimation. Furthermore, we proposed an efficient optimization algorithm and
proved that our method guarantees positive definiteness of the estimated precision matrix. We tested
the ability of our method to recover the ground truth structure from data, in a complex scenario with
locally and not locally constant interactions as well as independent variables. We also tested the
generalization performance of our method in a wide range of complex real-world datasets with a
diverse nature of probabilistic relationships as well as neighborhood type.
There are several ways of extending this research. Methods for selecting regularization parameters
for sparseness and local constancy need to be further investigated. Although the positive definiteness properties of the precision matrix as well as the optimization algorithm still hold when including
operators such as the Laplacian for encouraging smoothness, benefits of such a regularization approach need to be analyzed. In practice, our technique converges in a small number of iterations, but
a more precise analysis of the rate of convergence needs to be performed. Finally, model selection
consistency when the number of samples grows to infinity needs to be proved.
Acknowledgments
This work was supported in part by NIDA Grant 1 R01 DA020949-01 and NSF Grant CNS-0721701.
References
[1] D. Crandall, P. Felzenszwalb, and D. Huttenlocher. Spatial priors for part-based recognition using statistical models. IEEE Conf. Computer Vision and Pattern Recognition, 2005.
[2] P. Felzenszwalb and D. Huttenlocher. Pictorial structures for object recognition. International Journal of
Computer Vision, 2005.
[3] L. Gu, E. Xing, and T. Kanade. Learning GMRF structures for spatial priors. IEEE Conf. Computer
Vision and Pattern Recognition, 2007.
[4] N. Meinshausen and P. Bühlmann. High dimensional graphs and variable selection with the lasso. The
Annals of Statistics, 2006.
[5] O. Banerjee, L. El Ghaoui, A. d'Aspremont, and G. Natsoulis. Convex optimization techniques for fitting
sparse Gaussian graphical models. International Conference on Machine Learning, 2006.
[6] J. Friedman, T. Hastie, and R. Tibshirani. Sparse inverse covariance estimation with the graphical lasso.
Biostatistics, 2007.
[7] M. Yuan and Y. Lin. Model selection and estimation in the Gaussian graphical model. Biometrika, 2007.
[8] E. Levina, A. Rothman, and J. Zhu. Sparse estimation of large covariance matrices via a nested lasso
penalty. The Annals of Applied Statistics, 2008.
[9] J. Duchi, S. Gould, and D. Koller. Projected subgradient methods for learning sparse Gaussians. Uncertainty in Artificial Intelligence, 2008.
[10] V. Mansinghka, C. Kemp, J. Tenenbaum, and T. Griffiths. Structured priors for structure learning. Uncertainty in Artificial Intelligence, 2006.
[11] S. Lauritzen. Graphical Models. Oxford Press, 1996.
[12] A. Dempster. Covariance selection. Biometrics, 1972.
[13] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society,
1996.
[14] O. Banerjee, L. El Ghaoui, and A. d'Aspremont. Model selection through sparse maximum likelihood
estimation for multivariate Gaussian or binary data. Journal of Machine Learning Research, 2008.
[15] S. Lee, V. Ganapathi, and D. Koller. Efficient structure learning of Markov networks using `1 regularization. Advances in Neural Information Processing Systems, 2006.
[16] M. Schmidt, A. Niculescu-Mizil, and K. Murphy. Learning graphical model structure using `1 regularization paths. AAAI Conf. Artificial Intelligence, 2007.
[17] M. Schmidt, K. Murphy, G. Fung, and R. Rosales. Structure learning in random fields for heart motion
abnormality detection. IEEE Conf. Computer Vision and Pattern Recognition, 2008.
[18] M. Wainwright, P. Ravikumar, and J. Lafferty. High dimensional graphical model selection using `1 regularized logistic regression. Advances in Neural Information Processing Systems, 2006.
[19] R. Tibshirani, M. Saunders, S. Rosset, J. Zhu, and K. Knight. Sparsity and smoothness via the fused lasso.
Journal of the Royal Statistical Society, 2005.
[20] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, 2006.
[21] J. Friedman, T. Hastie, H. Höfling, and R. Tibshirani. Pathwise coordinate optimization. The Annals of
Applied Statistics, 2007.
[22] J. Deux, A. Rahmouni, and J. Garot. Cardiac magnetic resonance and 64-slice cardiac CT of lipomatous
metaplasia of chronic myocardial infarction. European Heart Journal, 2008.
[23] R. Goldstein, D. Tomasi, N. Alia-Klein, L. Zhang, F. Telang, and N. Volkow. The effect of practice on a
sustained attention task in cocaine abusers. NeuroImage, 2007.
[24] P. Domingos and M. Pazzani. On the optimality of the simple Bayesian classifier under zero-one loss.
Machine Learning, 1997.
[25] N. Friedman, D. Geiger, and M. Goldszmidt. Bayesian network classifiers. Machine Learning, 1997.
Speaker Comparison with Inner Product
Discriminant Functions
W. M. Campbell
MIT Lincoln Laboratory
Lexington, MA 02420
[email protected]
Z. N. Karam
DSPG, MIT RLE, Cambridge MA
MIT Lincoln Laboratory, Lexington, MA
[email protected]
D. E. Sturim
MIT Lincoln Laboratory
Lexington, MA 02420
[email protected]
Abstract
Speaker comparison, the process of finding the speaker similarity between two
speech signals, occupies a central role in a variety of applications: speaker verification, clustering, and identification. Speaker comparison can be placed in a
geometric framework by casting the problem as a model comparison process. For
a given speech signal, feature vectors are produced and used to adapt a Gaussian
mixture model (GMM). Speaker comparison can then be viewed as the process of
compensating and finding metrics on the space of adapted models. We propose
a framework, inner product discriminant functions (IPDFs), which extends many
common techniques for speaker comparison: support vector machines, joint factor analysis, and linear scoring. The framework uses inner products between the
parameter vectors of GMM models motivated by several statistical methods. Compensation of nuisances is performed via linear transforms on GMM parameter
vectors. Using the IPDF framework, we show that many current techniques are
simple variations of each other. We demonstrate, on a 2006 NIST speaker recognition evaluation task, new scoring methods using IPDFs which produce excellent
error rates and require significantly less computation than current techniques.
1 Introduction
Comparing speakers in speech signals is a common operation in many applications including forensic speaker recognition, speaker clustering, and speaker verification. Recent popular approaches
to text-independent comparison include Gaussian mixture models (GMMs) [1], support vector machines [2, 3], and combinations of these techniques. When comparing two speech utterances, these
approaches are used in a train and test methodology. One utterance is used to produce a model which
is then scored against the other utterance. The resulting comparison score is then used to cluster,
verify or identify the speaker.
Comparing speech utterances with kernel functions has been a common theme in the speaker recognition SVM literature [2, 3, 4]. The resulting framework has an intuitive geometric structure. Variable length sequences of feature vectors are mapped to a large dimensional SVM expansion vector.
These vectors are ?smoothed? to eliminate nuisances [2]. Then, a kernel function is applied to the
?
This work was sponsored by the Federal Bureau of Investigation under Air Force Contract FA8721-05C-0002. Opinions, interpretations, conclusions, and recommendations are those of the authors and are not
necessarily endorsed by the United States Government.
two vectors. The kernel function is an inner product which induces a metric on the set of vectors, so
comparison is analogous to finding the distances between SVM expansion vectors.
A recent trend in the speaker recognition literature has been to move towards a more linear geometric view for non-SVM systems. Compensation via linear subspaces and supervectors of mean
parameters of GMMs is presented in joint factor analysis [5]. Also, comparison of utterances via
linear scoring is presented in [6]. These approaches have introduced many new ideas and perform
well in speaker comparison tasks.
An unrealized effort in speaker recognition is to bridge the gap between SVMs and some of the new
proposed GMM methods. One difficulty is that most SVM kernel functions in speaker comparison
satisfy the Mercer condition. This restricts the scope of investigation of potential comparison strategies for two speaker utterances. Therefore, in this paper, we introduce the idea of inner product
discriminant functions (IPDFs).
IPDFs are based upon the same basic operations as SVM kernel functions with some relaxation in
structure. First, we map input utterances to vectors of fixed dimension. Second, we compensate the
input feature vectors. Typically, this compensation takes the form of a linear transform. Third, we
compare two compensated vectors with an inner product. The resulting comparison function is then
used in an application specific way.
The focus of our initial investigations of the IPDF structure is the following. First, we show that
many of the common techniques such as factor analysis, nuisance projection, and various types of
scoring can be placed in the framework. Second, we systematically describe the various inner product and compensation techniques used in the literature. Third, we propose new inner products and
compensation. Finally, we explore the space of possible combinations of techniques and demonstrate several novel methods that are computationally efficient and produce excellent error rates.
The outline of the paper is as follows. In Section 2, we describe the general setup for speaker
comparison using GMMs. In Section 3, we introduce the IPDF framework. Section 4 explores inner
products for the IPDF framework. Section 5 looks at methods for compensating for variability. In
Section 6, we perform experiments on the NIST 2006 speaker recognition evaluation and explore
different combinations of IPDF comparisons and compensations.
2 Speaker Comparison
A standard distribution used for text-independent speaker recognition is the Gaussian mixture
model [1],
g(x) = \sum_{i=1}^{N} \lambda_i \, \mathcal{N}(x \mid m_i, \Sigma_i).    (1)
Feature vectors are typically cepstral coefficients with associated smoothed first- and second-order
derivatives.
We map a sequence of feature vectors, x_1^{N_x}, from a speaker to a GMM by adapting a GMM universal background model (UBM). Here, we use the shorthand x_1^{N_x} to denote the sequence x_1, \ldots, x_{N_x}.
For the purpose of this paper, we will assume only the mixture weights, \lambda_i, and means, m_i, in (1) are adapted. Adaptation of the means is performed with standard relevance MAP [1]. We estimate the mixture weights using the standard ML estimate. The adaptation yields new parameters which we stack into a parameter vector, a_x, where

a_x = [\lambda_x^t \; m_x^t]^t    (2)
    = [\lambda_{x,1} \,\cdots\, \lambda_{x,N} \;\; m_{x,1}^t \,\cdots\, m_{x,N}^t]^t.    (3)
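The adaptation-and-stacking step above can be sketched in a few lines. This is a minimal illustration, assuming scalar features, a two-component UBM, and an illustrative relevance factor; the toy values are not the paper's configuration.

```python
# Sketch of eqs. (1)-(3): relevance-MAP mean adaptation, ML weight
# estimation, and stacking into the parameter vector a_x.
# Assumes scalar features and a 2-component UBM (toy values).
import math

ubm_w = [0.5, 0.5]     # UBM mixture weights
ubm_m = [-1.0, 1.0]    # UBM means
ubm_v = [1.0, 1.0]     # UBM (diagonal) variances

def posteriors(x):
    """Posterior p(i|x) of each mixture component under the UBM."""
    lik = [w * math.exp(-0.5 * (x - m) ** 2 / v) / math.sqrt(2 * math.pi * v)
           for w, m, v in zip(ubm_w, ubm_m, ubm_v)]
    s = sum(lik)
    return [l / s for l in lik]

def adapt(xs, r=16.0):
    """Sufficient statistics, ML weights, relevance-MAP means; returns a_x."""
    N, F = [0.0, 0.0], [0.0, 0.0]
    for x in xs:
        p = posteriors(x)
        for i in range(2):
            N[i] += p[i]
            F[i] += p[i] * x
    lam = [n / len(xs) for n in N]                                  # ML weights
    means = [(F[i] + r * ubm_m[i]) / (N[i] + r) for i in range(2)]  # MAP means
    return lam + means    # stacked parameter vector, as in eq. (3)

a_x = adapt([0.9, 1.1, 1.3, -0.8])
print(a_x)
```

With a large relevance factor the adapted means stay close to the UBM means, as expected from relevance MAP.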
In speaker comparison, the problem is to compare two sequences of feature vectors, e.g., x_1^{N_x} and y_1^{N_y}. To compare these two sequences, we adapt a GMM UBM to produce two sets of parameter vectors, a_x and a_y, as in (2). The goal of our speaker comparison process can now be recast as a
function that compares the two parameter vectors, C(ax , ay ), and produces a value that reflects the
similarity of the speakers. Initial work in this area was performed using kernels from support vector
machines [4, 7, 2], but we expand the scope to other types of discriminant functions.
3 Inner Product Discriminant Functions
The basic framework we propose for speaker comparison functions is composed of two parts: compensation and comparison. For compensation, the parameter vectors generated by adaptation
in (2) can be transformed to remove nuisances or projected onto a speaker subspace. The second
part of our framework is comparison. For the comparison of parameter vectors, we will consider
natural distances that result in inner products between parameter vectors.
We propose the following inner product discriminant function (IPDF) framework for exploring speaker comparison,

C(a_x, a_y) = (L_x a_x)^t D^2 (L_y a_y)    (4)

where L_x, L_y are linear transforms and potentially dependent on \lambda_x and/or \lambda_y. The matrix D is positive definite, usually diagonal, and possibly dependent on \lambda_x and/or \lambda_y. Note, we also consider simple combinations of IPDFs to be in our framework, e.g., positively weighted sums of IPDFs.
Several questions from this framework are: 1) what inner product gives the best speaker comparison
performance, 2) what compensation strategy works best, 3) what tradeoffs can be made between
accuracy and computational cost, and 4) how do the compensation and the inner product interact.
We explore theoretical and experimental answers to these questions in the following sections.
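In code, the template (4) is just two linear maps followed by a weighted inner product. A minimal sketch in plain Python; the 2-by-2 matrices and vectors are hypothetical placeholders for L_x, L_y, D, and the parameter vectors:

```python
# C(a_x, a_y) = (L_x a_x)^t D^2 (L_y a_y), eq. (4), with D diagonal.
def matvec(M, v):
    """Multiply a dense matrix (list of rows) by a vector."""
    return [sum(mij * vj for mij, vj in zip(row, v)) for row in M]

def ipdf(ax, ay, Lx, Ly, d):
    """d holds the diagonal of D, so D^2 enters as d_i**2."""
    u, w = matvec(Lx, ax), matvec(Ly, ay)
    return sum(ui * di * di * wi for ui, di, wi in zip(u, d, w))

I2 = [[1.0, 0.0], [0.0, 1.0]]
score = ipdf([1.0, 2.0], [3.0, 4.0], I2, I2, [1.0, 1.0])
print(score)  # with identity maps this is the plain inner product: 11.0
```

Different choices of L_x, L_y (compensation) and of D (metric) then instantiate the systems compared later in the experiments.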
4 Inner Products for IPDFs
In general, an inner product of the parameters should be based on a distance arising from a statistical
comparison. We derive three straightforward methods in this section. We also relate some other
methods, without being exhaustive, that fall in this framework that have been described in detail in
the literature.
4.1 Approximate KL Comparison (CKL )
A straightforward strategy for comparing the GMM parameter vectors is to use an approximate
form of the KL divergence applied to the induced GMM models. This strategy was used in [2]
successfully with an approximation based on the log-sum inequality; i.e., for the GMMs, g_x and g_y, with parameters a_x and a_y,

D(g_x \| g_y) \le \sum_{i=1}^{N} \lambda_{x,i}\, D\big(\mathcal{N}(\cdot;\, m_{x,i}, \Sigma_i) \,\|\, \mathcal{N}(\cdot;\, m_{y,i}, \Sigma_i)\big).    (5)

Here, D(\cdot \| \cdot) is the KL divergence, and \Sigma_i is from the UBM.
By symmetrizing (5) and substituting in the KL divergence between two Gaussian distributions, we
obtain a distance, d_s, which upper bounds the symmetric KL divergence,

d_s(a_x, a_y) = D_s(\lambda_x \| \lambda_y) + \sum_{i=1}^{N} (0.5\lambda_{x,i} + 0.5\lambda_{y,i})\, (m_{x,i} - m_{y,i})^t \Sigma_i^{-1} (m_{x,i} - m_{y,i}).    (6)
We focus on the second term in (6) for this paper, but note that the first term could also be converted
to a comparison function on the mixture weights. Using polarization on the second term, we obtain
the inner product
C_{KL}(a_x, a_y) = \sum_{i=1}^{N} (0.5\lambda_{x,i} + 0.5\lambda_{y,i})\, m_{x,i}^t \Sigma_i^{-1} m_{y,i}.    (7)
Note that (7) can also be expressed more compactly as

C_{KL}(a_x, a_y) = m_x^t \big((0.5\lambda_x + 0.5\lambda_y) \otimes I_n\big)\, \Sigma^{-1} m_y    (8)

where \Sigma is the block matrix with the \Sigma_i on the diagonal, n is the feature vector dimension, and \otimes is the Kronecker product. Note that the non-symmetric form of the KL distance in (5) would result in the average mixture weights in (8) being replaced by \lambda_x. Also, note that shifting the means by the UBM will not affect the distance in (6), so we can replace the means in (8) by the UBM-centered means.
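With diagonal covariances, (7) is a short sum over mixture components. The sketch below (toy numbers, UBM-centered means) also computes a fixed-weight variant in the style of the SVM KL kernel of Section 4.4; the two coincide exactly when the utterance weights equal the UBM weights.

```python
# C_KL, eq. (7): component-wise inner products weighted by averaged
# utterance mixture weights; k_fixed uses constant UBM weights instead.
def comp_dot(mxi, myi, vi):
    """m_{x,i}^t Sigma_i^{-1} m_{y,i} for diagonal Sigma_i."""
    return sum(a * b / v for a, b, v in zip(mxi, myi, vi))

def c_kl(lam_x, lam_y, mx, my, var):
    return sum(0.5 * (lx + ly) * comp_dot(mxi, myi, vi)
               for lx, ly, mxi, myi, vi in zip(lam_x, lam_y, mx, my, var))

def k_fixed(lam_ubm, mx, my, var):
    return sum(l * comp_dot(mxi, myi, vi)
               for l, mxi, myi, vi in zip(lam_ubm, mx, my, var))

mx = [[0.2, 0.1], [-0.3, 0.4]]   # UBM-centered means, 2 components
my = [[0.1, 0.0], [-0.2, 0.5]]
var = [[1.0, 1.0], [1.0, 1.0]]

matched = c_kl([0.5, 0.5], [0.5, 0.5], mx, my, var)
fixed = k_fixed([0.5, 0.5], mx, my, var)
print(matched, fixed)   # identical when utterance weights match the UBM
```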
4.2 GLDS kernel (CGLDS )
An alternate inner product approach is to use generalized linear discriminants and the corresponding
kernel [4]. The overall structure of this GLDS kernel is as follows. A per feature vector expansion
function is defined as

b(x_i) = [b_1(x_i) \;\cdots\; b_m(x_i)]^t.    (9)

The mapping from an input sequence, x_1^{N_x}, to an average expansion is then defined as

x_1^{N_x} \mapsto b_x = \frac{1}{N_x} \sum_{i=1}^{N_x} b(x_i).    (10)
The corresponding kernel between two sequences is then

K_{GLDS}(x_1^{N_x}, y_1^{N_y}) = b_x^t \Gamma^{-1} b_y    (11)

where

\Gamma = \frac{1}{N_z} \sum_{i=1}^{N_z} b(z_i) b(z_i)^t,    (12)

and z_1^{N_z} is a large set of feature vectors which is representative of the speaker population.
In the context of a GMM UBM, we can define an expansion as follows

b(x_i) = \big[p(1|x_i)(x_i - m_1)^t \;\cdots\; p(N|x_i)(x_i - m_N)^t\big]^t    (13)

where p(j|x_i) is the posterior probability of mixture component j given x_i, and m_j is from a UBM. Using (13) in (10), we see that

b_x = (\lambda_x \otimes I_n)(m_x - m) \quad \text{and} \quad b_y = (\lambda_y \otimes I_n)(m_y - m)    (14)

where m is the stacked means of the UBM. Thus, the GLDS kernel inner product is

C_{GLDS}(a_x, a_y) = (m_x - m)^t (\lambda_x \otimes I_n)\, \Gamma^{-1} (\lambda_y \otimes I_n)(m_y - m).    (15)
Note that \Gamma in (12) is almost the UBM covariance matrix, but is not quite the same because of a squaring of the p(j|z_i) on the diagonal. As is commonly assumed, we will consider a diagonal approximation of \Gamma; see [4].
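The step from (13) to (14) rests on an exact identity: for each component, the averaged posterior-weighted residual equals the occupancy-based mixture weight times the ML-estimated mean offset. A quick numerical check in Python, with hypothetical posteriors and scalar features:

```python
# For each component i, with lam_{x,i} = (1/N_x) sum_j p(i|x_j) and
# m_{x,i} = sum_j p(i|x_j) x_j / sum_j p(i|x_j)  (relevance factor -> 0):
#   (1/N_x) sum_j p(i|x_j) (x_j - m_i)  ==  lam_{x,i} (m_{x,i} - m_i)
xs = [0.9, 1.1, -0.7, -1.2]
post = [[0.2, 0.8], [0.1, 0.9], [0.9, 0.1], [0.95, 0.05]]  # toy p(i|x_j)
m_ubm = [-1.0, 1.0]
Nx = len(xs)

lhs_vals, rhs_vals = [], []
for i in range(2):
    Ni = sum(p[i] for p in post)
    Fi = sum(p[i] * x for p, x in zip(post, xs))
    lhs_vals.append(sum(p[i] * (x - m_ubm[i]) for p, x in zip(post, xs)) / Nx)
    rhs_vals.append((Ni / Nx) * (Fi / Ni - m_ubm[i]))
print(lhs_vals, rhs_vals)   # the two sides agree component by component
```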
4.3 Gaussian-Distributed Vectors
A common assumption in the factor analysis literature [5] is that the parameter vector mx as x varies
has a Gaussian distribution. If we assume a single covariance for the entire space, then the resulting
likelihood ratio test between two Gaussian distributions results in a linear discriminant [8].
More formally, suppose that we have a distribution with mean m_x and we are trying to distinguish it from a distribution with the UBM mean m; then the discriminant function is [8]

h(x) = (m_x - m)^t W^{-1} (x - m) + c_x    (16)

where c_x is a constant that depends on m_x, and W is the covariance in the parameter vector space. We will assume that the comparison function can be normalized (e.g., by Z-norm [1]), so that c_x can be dropped. We now apply the discriminant function to another mean vector, m_y, and obtain the following comparison function

C_G(a_x, a_y) = (m_x - m)^t W^{-1} (m_y - m).    (17)
4.4 Other Methods
Several other methods are possible for comparing the parameter vectors that arise either from ad hoc
methods or from work in the literature. We describe a few of these in this section.
Geometric Mean Comparison (CGM ). A simple symmetric function that is similar to the KL (8)
and GLDS (15) comparison functions is arrived at by replacing the arithmetic mean in CKL by a
geometric mean. The resulting kernel is
C_{GM}(a_x, a_y) = (m_x - m)^t (\lambda_x^{1/2} \otimes I_n)\, \Sigma^{-1} (\lambda_y^{1/2} \otimes I_n)(m_y - m)    (18)

where \Sigma is the block diagonal matrix of UBM covariances.
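The only change relative to (7) is the weighting of each component's inner product: the arithmetic mean of the two utterances' mixture weights is replaced by their geometric mean. A toy sketch with diagonal covariances and illustrative numbers:

```python
# C_GM, eq. (18): per-component inner products weighted by
# sqrt(lam_{x,i} * lam_{y,i}); diagonal Sigma, UBM-centered means.
import math

def c_gm(lam_x, lam_y, mx, my, var):
    return sum(math.sqrt(lx * ly) *
               sum(a * b / v for a, b, v in zip(mxi, myi, vi))
               for lx, ly, mxi, myi, vi in zip(lam_x, lam_y, mx, my, var))

mx = [[0.2, 0.1], [-0.3, 0.4]]
my = [[0.1, 0.0], [-0.2, 0.5]]
var = [[1.0, 1.0], [1.0, 1.0]]
score = c_gm([0.8, 0.2], [0.2, 0.8], mx, my, var)
print(score)
```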
Fisher Kernel (CF). The Fisher kernel specialized to the UBM case has several forms [3]. The main variations are the choice of covariance in the inner product and the choice of normalization of the gradient term. We took the best performing configuration for this paper: we normalize the gradient by the number of frames, which results in a mixture weight scaling of the gradient. We also use a diagonal data-trained covariance term. The resulting comparison function is

C_F(a_x, a_y) = \big[(\lambda_x \otimes I_n)\, \Sigma^{-1} (m_x - m)\big]^t \Psi^{-1} (\lambda_y \otimes I_n)\, \Sigma^{-1} (m_y - m)    (19)

where \Psi is a diagonal matrix acting as a variance normalizer.
Linearized Q-function (CQ). Another form of inner product may be derived from the linear Q-scoring shown in [6]. In this case, the scoring is given as (m_{train} - m)^t \Sigma^{-1} (F - Nm), where N and F are the zeroth- and first-order sufficient statistics of a test utterance, m is the UBM means, m_{train} is the mean of a training model, and \Sigma is the block diagonal matrix of UBM covariances. A close approximation of this function can be made by using a small relevance factor in MAP adaptation of the means to obtain the following comparison function

C_Q(a_x, a_y) = (m_x - m)^t \Sigma^{-1} (\lambda_y \otimes I_n)(m_y - m).    (20)
Note that if we symmetrize CQ , this gives us CKL ; this analysis ignores for a moment that in [6],
compensation is also asymmetric.
KL Kernel (KKL). By assuming the mixture weights are constant and equal to the UBM mixture weights in the comparison function CKL (7), we obtain the KL kernel,

K_{KL}(m_x, m_y) = m_x^t (\lambda \otimes I_n)\, \Sigma^{-1} m_y    (21)

where \lambda are the UBM mixture weights. This kernel has been used extensively in SVM speaker recognition [2].
An analysis of the different inner products in the preceding sections shows that many of the methods
presented in the literature have a similar form, but are interestingly derived with quite disparate
techniques. Our goal in the experimental section is to understand how these comparison functions perform and how they interact with compensation.
5 Compensation in IPDFs
Our next task is to explore compensation methods for IPDFs. Our focus will be on subspace-based
methods. With these methods, the fundamental assumption is that either speakers and/or nuisances
are confined to a small subspace in the parameter vector space. The problem is to use this knowledge
to produce a higher signal (speaker) to noise (nuisance) representation of the speaker. Standard
notation is to use U to represent the nuisance subspace and to have V represent the speaker subspace.
Our goal in this section is to recast many of the methods in the literature in a standard framework
with oblique and orthogonal projections.
To make a cohesive presentation, we introduce some notation. We define an orthogonal projection with respect to a metric, P_{U,D}, where D and U are full rank matrices, as

P_{U,D} = U (U^t D^2 U)^{-1} U^t D^2    (22)

where DU is a linearly independent set, and the metric is \|x - y\|_D = \|Dx - Dy\|_2. The process of projection, e.g., y = P_{U,D} b, is equivalent to solving the least-squares problem \hat{x} = \mathrm{argmin}_x \|Ux - b\|_D and letting y = U\hat{x}. For convenience, we also define the projection onto the orthogonal complement of U, U^{\perp}, as Q_{U,D} = P_{U^{\perp},D} = I - P_{U,D}. Note that we can regularize the projection P_{U,D} by adding a diagonal term to the inverse in (22); the resulting operation remains linear but is no longer a projection.
We also define the oblique projection onto V with null space U + (U + V)^{\perp} and metric induced by D. Let QR be the (skinny) QR decomposition of the matrix [U\; V] in the D norm (i.e., Q^t D^2 Q = I), and let Q_V be the columns of Q corresponding to V. Then, the oblique (non-orthogonal) projection onto V is

O_{V,U,D} = V (Q_V^t D^2 V)^{-1} Q_V^t D^2.    (23)
The use of projections in our development will add geometric understanding to the process of compensation.
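Both projections are easy to exercise numerically. The sketch below uses one-dimensional U and V in R^2 with D = I, so the matrix inverses in (22) and (23) reduce to scalars; the vectors are toy values.

```python
# P_{U,D} / Q_{U,D} (eq. (22)) and O_{V,U,D} (eq. (23)) for 1-D U, V and D = I.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def p_U(u, b):            # P_{U,I} b = u (u^t u)^{-1} u^t b
    c = dot(u, b) / dot(u, u)
    return [c * ui for ui in u]

def q_U(u, b):            # Q_{U,I} b = b - P_{U,I} b
    return [bi - pi for bi, pi in zip(b, p_U(u, b))]

def o_VU(v, u, b):        # O_{V,U,I} b: range V, null space contains U
    qv = q_U(u, v)        # Gram-Schmidt step: (unnormalized) Q column orthogonal to u
    c = dot(qv, b) / dot(qv, v)
    return [c * vi for vi in v]

u, v, b = [1.0, 0.0], [1.0, 1.0], [2.0, 3.0]
print(q_U(u, b))      # nuisance direction removed
print(o_VU(v, u, u))  # the oblique projection annihilates u
print(o_VU(v, u, v))  # ...and fixes v
```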
5.1 Nuisance Attribute Projection (NAP)
A framework for eliminating nuisances in the parameter vector based on projection was shown in [2].
The basic idea is to assume that nuisances are confined to a small subspace and can be removed via an orthogonal projection, m_x \mapsto Q_{U,D}\, m_x. One justification for using subspaces comes from the
perspective that channel classification can be performed with inner products along one-dimensional
subspaces. Therefore, the projection removes channel specific directions from the parameter space.
The NAP projection uses the metric induced by a kernel in an SVM. For the GMM context, the standard kernel used is the approximate KL comparison (8) [2]. We note that since D is known a priori to speaker comparison, we can orthonormalize the matrix DU and apply the projection as a matrix multiply. The resulting projection has D = (\lambda^{1/2} \otimes I_n)\, \Sigma^{-1/2}.
5.2 Factor Analysis and Joint Factor Analysis
The joint factor analysis (JFA) model assumes that the mean parameter vector can be expressed as

m_{s,sess} = m + Ux + Vy    (24)
where ms,sess is the speaker- and session-dependent mean parameter vector, U and V are matrices
with small rank, and m is typically the UBM. Note that for this section, we will use the standard
variables for factor analysis, x and y, even though they conflict with our earlier development. The
goal of joint factor analysis is to find solutions to the latent variables x and y given training data.
In (24), the matrix U represents a nuisance subspace, and V represents a speaker subspace. Existing
work on this approach for speaker recognition uses both maximum likelihood (ML) estimates and
MAP estimates of x and y [9, 5]. In the latter case, a Gaussian prior with zero mean and diagonal
covariance for x and y is assumed. For our work, we focus on the ML estimates [9] of x and y
in (24), since we did not observe substantially different performance from MAP estimates in our
experiments.
Another form of modeling that we will consider is factor analysis (FA). In this case, the term V y is
replaced by a constant vector representing the true speaker model, ms ; the goal is then to estimate
x. Typically, as a simplification, ms is assumed to be zero when calculating sufficient statistics for
estimation of x [10].
The solution to both JFA and FA can be unified. For the JFA problem, if we stack the matrices [U V ],
then the problem reverts to the FA problem. Therefore, we initially study the FA problem. Note that
we also restrict our work to only one EM iteration of the estimation of the factors, since this strategy
works well in practice.
The standard ML solution to FA [9] for one EM iteration can be written as

U^t \Sigma^{-1} (N \otimes I_n) U x = U^t \Sigma^{-1} [F - (N \otimes I_n) m]    (25)

where F is the vector of first-order sufficient statistics, and N is the diagonal matrix of zeroth-order statistics (expected counts). The sufficient statistics are obtained from the UBM applied to an input set of feature vectors. We first let N_t = \sum_{i=1}^{N} N_i and multiply both sides of (25) by 1/N_t. Now use relevance MAP with a small relevance factor and F and N to obtain m_s; i.e., both m_s - m and F - (N \otimes I_n) m will be nearly zero in the entries corresponding to small N_i. We obtain

U^t \Sigma^{-1} (\lambda_s \otimes I_n) U x = U^t \Sigma^{-1} (\lambda_s \otimes I_n) [m_s - m]    (26)
where \lambda_s is the speaker-dependent mixture weight vector. We note that (26) gives the normal equations for the least-squares problem \hat{x} = \mathrm{argmin}_x \|Ux - (m_s - m)\|_D, where D is given below. This solution is not unexpected, since ML estimates commonly lead to least-squares problems with GMM-distributed data [11].
Once the solution to (26) is obtained, the resulting Ux is subtracted from an estimate of the speaker mean, m_s, to obtain the compensated mean. If we assume that m_s is obtained by relevance MAP adaptation from the statistics F and N with a small relevance factor, then the FA process is well approximated by

m_s \mapsto Q_{U,D}\, m_s    (27)

where

D = (\lambda_s^{1/2} \otimes I_n)\, \Sigma^{-1/2}.    (28)
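The equivalence between solving the weighted least-squares problem (26) and applying the projection (27) can be checked directly: after subtracting U x-hat, the compensated vector is D-orthogonal to the nuisance direction. A toy example with a 1-D nuisance subspace, a 2-D parameter space, and a hypothetical diagonal D:

```python
# FA compensation as an orthogonal projection, eqs. (26)-(28).
def wdot(a, b, d):        # <a, b> under the metric D, d = diag(D)
    return sum(x * y * di * di for x, y, di in zip(a, b, d))

u = [1.0, 2.0]            # nuisance direction (1-D subspace U)
d = [2.0, 1.0]            # diag of D, standing in for lam_s^{1/2} Sigma^{-1/2}
ms_c = [3.0, 1.0]         # centered speaker mean, m_s - m

xhat = wdot(u, ms_c, d) / wdot(u, u, d)           # argmin_x ||U x - (m_s - m)||_D
comp = [m - xhat * ui for m, ui in zip(ms_c, u)]  # Q_{U,D} (m_s - m)
print(xhat, comp, wdot(u, comp, d))               # residual is D-orthogonal to u
```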
JFA becomes an extension of the FA process we have demonstrated. One first projects onto the stacked [U V] space. Then another projection is performed to eliminate the U component of variability. This can be expressed as a single oblique projection; i.e., the JFA process is

m_s \mapsto O_{V,U,I}\, P_{[U V],D}\, m_s = O_{V,U,D}\, m_s.    (29)
5.3 Comments and Analysis
Several comments should be made on compensation schemes and their use in speaker comparison.
First, although NAP and ML FA (27) were derived in substantially different ways, they are essentially the same operation, an orthogonal projection. The main difference is in the choice of metrics
under which they were originally proposed. For NAP, the metric depends on the UBM only, and for
FA it is utterance and UBM dependent.
A second observation is that the JFA oblique projection onto V has substantially different properties
than a standard orthogonal projection. When JFA is used in speaker recognition [5, 6], typically
JFA is performed in training, but the test utterance is compensated only with FA. In our notation,
applying JFA with linear scoring [6] gives

C_Q(O_{V,U,D_1} m_1,\; Q_{U,D_2} m_2)    (30)

where m_1 and m_2 are the mean parameter vectors estimated from the training and testing utterances, respectively; also, D_1 = (\lambda_1^{1/2} \otimes I_n)\, \Sigma^{-1/2} and D_2 = (\lambda_2^{1/2} \otimes I_n)\, \Sigma^{-1/2}. Our goal in the experiments section is to disentangle and understand some of the properties of scoring methods such
as (30). What is significant in this process: mismatched train/test compensation, data-dependent metrics, or asymmetric scoring?
A final note is that training the subspaces for the various projections optimally is not a process
that is completely understood. One difficulty is that the metric used for the inner product may
not correspond to the metric for compensation. As a baseline, we used the same subspace for all
comparison functions. The subspace was obtained with an ML style procedure for training subspaces
similar to [11] but specialized to the factor analysis problem as in [5].
6 Speaker Comparison Experiments
Experiments were performed on the NIST 2006 speaker recognition evaluation (SRE) data set. Enrollment/verification methodology and the evaluation criteria, equal error rate (EER) and minDCF, were based on the NIST SRE evaluation plan [12]. The main focus of our efforts was the one-conversation enroll, one-conversation verification task for telephone recorded speech. T-Norm models
and Z-Norm [13] speech utterances were drawn from the NIST 2004 SRE corpus. Results were
obtained for both the English only task (Eng) and for all trials (All) which includes speakers that
enroll/verify in different languages.
Feature extraction was performed using HTK [14] with 20 MFCC coefficients, deltas, and acceleration coefficients for a total of 60 features. A GMM UBM with 512 mixture components was trained
using data from NIST SRE 2004 and from Switchboard corpora. The dimension of the nuisance
subspace, U , was fixed at 100; the dimension of the speaker space, V , was fixed at 300.
Results are in Table 1. In the table, we use the following notation:

D_{UBM} = (\lambda^{1/2} \otimes I_n)\, \Sigma^{-1/2}, \quad D_1 = (\lambda_1^{1/2} \otimes I_n)\, \Sigma^{-1/2}, \quad D_2 = (\lambda_2^{1/2} \otimes I_n)\, \Sigma^{-1/2}    (31)

where \lambda are the UBM mixture weights, \lambda_1 are the mixture weights estimated from the enrollment utterance, and \lambda_2 are the mixture weights estimated from the verification utterance. We also use the notation D_L, D_G, and D_F to denote the parameters of the metric for the GLDS, Gaussian, and Fisher comparison functions from Sections 4.2, 4.3, and 4.4, respectively.
An analysis of the results in Table 1 shows several trends. First, the performance of the best IPDF
configurations is as good or better than the state of the art SVM and JFA implementations. Second,
the compensation method that dominates good performance is an orthogonal complement of the
nuisance subspace, QU,D . Combining a nuisance projection with an oblique projection is fine, but
Table 1: A comparison of baseline systems and different IPDF implementations

Comparison         Enroll          Verify          EER      minDCF      EER      minDCF
Function           Comp.           Comp.           All (%)  All (x100)  Eng (%)  Eng (x100)
-----------------  --------------  --------------  -------  ----------  -------  ----------
Baseline SVM       QU,DUBM         QU,DUBM         3.82     1.82        2.62     1.17
Baseline JFA, CQ   OV,U,D1         QU,D2           3.07     1.57        2.11     1.23
CKL                OV,U,D1         QU,D2           3.21     1.70        2.32     1.32
CKL                OV,U,D1         OV,U,D2         8.73     5.06        8.06     4.45
CKL                QU,D1           QU,D2           2.93     1.55        1.89     0.93
CKL                QU,DUBM         QU,DUBM         3.03     1.55        1.92     0.95
CKL                I - OU,V,D1     I - OU,V,D2     7.10     3.60        6.49     3.13
CGM                QU,D1           QU,D2           2.90     1.59        1.73     0.98
CGM                QU,DUBM         QU,DUBM         3.01     1.66        1.89     1.05
CGM                QU,DUBM         I               3.95     1.93        2.76     1.26
KKL                QU,DUBM         QU,DUBM         4.95     2.46        3.73     1.75
KKL                QU,D1           QU,D2           5.52     2.85        4.43     2.15
CGLDS              QU,DL           QU,DL           3.60     1.93        2.27     1.23
CG                 QU,DG           QU,DG           5.07     2.52        3.89     1.87
CF                 QU,DF           QU,DF           3.56     1.89        2.22     1.12
Table 2: Summary of some IPDF performances and computation time normalized to a baseline system. Compute time includes compensation and inner product only.

Comparison  Enroll     Verify    EER      minDCF      Compute
Function    Comp.      Comp.     Eng (%)  Eng (x100)  time
----------  ---------  --------  -------  ----------  -------
CQ          OV,U,D1    QU,D2     2.11     1.23        1.00
CGM         QU,D1      QU,D2     1.73     0.98        0.17
CGM         QU,DUBM    QU,DUBM   1.89     1.05        0.08
CGM         QU,DUBM    I         2.76     1.26        0.04
using only oblique projections onto V gives high error rates. A third observation is that comparison
functions whose metrics incorporate \lambda_1 and \lambda_2 perform significantly better than ones with a fixed \lambda from the UBM. In terms of best performance, CKL, CQ, and CGM perform similarly. For example,
the 95% confidence interval for 2.90% EER is [2.6, 3.3]%.
We also observe that a nuisance projection with a fixed D_{UBM} gives similar performance to a projection involving a "variable" metric, D_i. This property is fortuitous since a fixed projection can
be precomputed and stored and involves significantly reduced computation. Table 2 shows a comparison of error rates and compute times normalized by a baseline system. For the table, we used
precomputed data as much as possible to minimize compute times. We see that with an order of
magnitude reduction in computation and a significantly simpler implementation, we can achieve the
same error rate.
7 Conclusions and future work
We proposed a new framework for speaker comparison, IPDFs, and showed that several recent systems in the speaker recognition literature can be placed in this framework. We demonstrated that
using mixture weights in the inner product is the key component to achieve significant reductions in
error rates over a baseline SVM system. We also showed that elimination of the nuisance subspace
via an orthogonal projection is a computationally simple and effective method of compensation.
Most effective methods of compensation in the literature (NAP, FA, JFA) are straightforward variations of this idea. By exploring different IPDFs using these insights, we showed that computation
can be reduced substantially over baseline systems with similar accuracy to the best performing
systems. Future work includes understanding the performance of IPDFs for different tasks, incorporating them into an SVM system, and hyperparameter training.
References
[1] Douglas A. Reynolds, T. F. Quatieri, and R. Dunn, "Speaker verification using adapted Gaussian mixture models," Digital Signal Processing, vol. 10, no. 1-3, pp. 19-41, 2000.
[2] W. M. Campbell, D. E. Sturim, D. A. Reynolds, and A. Solomonoff, "SVM based speaker verification using a GMM supervector kernel and NAP variability compensation," in Proc. ICASSP, 2006, pp. I-97-I-100.
[3] C. Longworth and M. J. F. Gales, "Derivative and parametric kernels for speaker verification," in Proc. Interspeech, 2007, pp. 310-313.
[4] W. M. Campbell, "Generalized linear discriminant sequence kernels for speaker recognition," in Proc. ICASSP, 2002, pp. 161-164.
[5] P. Kenny, P. Ouellet, N. Dehak, V. Gupta, and P. Dumouchel, "A study of inter-speaker variability in speaker verification," IEEE Transactions on Audio, Speech and Language Processing, 2008.
[6] Ondrej Glembek, Lukas Burget, Najim Dehak, Niko Brummer, and Patrick Kenny, "Comparison of scoring methods used in speaker recognition with joint factor analysis," in Proc. ICASSP, 2009.
[7] Pedro J. Moreno, Purdy P. Ho, and Nuno Vasconcelos, "A Kullback-Leibler divergence based kernel for SVM classification in multimedia applications," in Adv. in Neural Inf. Proc. Systems 16, S. Thrun, L. Saul, and B. Schölkopf, Eds. MIT Press, Cambridge, MA, 2004.
[8] Keinosuke Fukunaga, Introduction to Statistical Pattern Recognition, Academic Press, 1990.
[9] Simon Lucey and Tsuhan Chen, "Improved speaker verification through probabilistic subspace adaptation," in Proc. Interspeech, 2003, pp. 2021-2024.
[10] Robbie Vogt, Brendan Baker, and Sridha Sriharan, "Modelling session variability in text-independent speaker verification," in Proc. Interspeech, 2005, pp. 3117-3120.
[11] Mark J. F. Gales, "Cluster adaptive training of hidden Markov models," IEEE Trans. Speech and Audio Processing, vol. 8, no. 4, pp. 417-428, 2000.
[12] M. A. Przybocki, A. F. Martin, and A. N. Le, "NIST speaker recognition evaluations utilizing the Mixer corpora: 2004, 2005, 2006," IEEE Trans. on Speech, Audio, Lang., vol. 15, no. 7, pp. 1951-1959, 2007.
[13] Roland Auckenthaler, Michael Carey, and Harvey Lloyd-Thomas, "Score normalization for text-independent speaker verification systems," Digital Signal Processing, vol. 10, pp. 42-54, 2000.
[14] J. Odell, D. Ollason, P. Woodland, S. Young, and J. Jansen, The HTK Book for HTK V2.0, Cambridge University Press, Cambridge, UK, 1995.
Optimal Scoring for Unsupervised Learning
Zhihua Zhang and Guang Dai
College of Computer Science & Technology
Zhejiang University
Hangzhou, Zhejiang, 310027 China
Abstract
We are often interested in casting classification and clustering problems as a regression framework, because it is feasible to achieve some statistical properties
in this framework by imposing some penalty criteria. In this paper we illustrate
optimal scoring, which was originally proposed for performing the Fisher linear
discriminant analysis by regression, in the application of unsupervised learning. In
particular, we devise a novel clustering algorithm that we call optimal discriminant
clustering. We associate our algorithm with the existing unsupervised learning algorithms such as spectral clustering, discriminative clustering and sparse principal
component analysis. Experimental results on a collection of benchmark datasets
validate the effectiveness of the optimal discriminant clustering algorithm.
1 Introduction
The Fisher linear discriminant analysis (LDA) is a classical method that considers dimensionality reduction and classification jointly. LDA estimates a low-dimensional discriminative space defined by
linear transformations through maximizing the ratio of between-class scatter to within-class scatter.
It is well known that LDA is equivalent to a least mean squared error procedure in the binary classification problem [4]. It is of great interest to obtain a similar relationship in multi-class problems. A
significant literature has emerged to address this issue [6, 8, 12, 14]. This provides another approach
to performing LDA by regression, in which penalty criteria are tractably introduced to achieve some
statistical properties such as regularized LDA [5] and sparse discriminant analysis [2].
It is also desirable to explore unsupervised learning problems in a regression framework. Recently,
Zou et al. [17] reformulated principal component analysis (PCA) as a regression problem and then
devised a sparse PCA by imposing the lasso (the elastic net) penalty [10, 16] on the regression
vector. In this paper we consider unsupervised learning problems by optimal scoring, which was
originally proposed to perform LDA by regression [6]. In particular, we devise a novel unsupervised
framework by using the optimal scoring and the ridge penalty.
This framework can be used for dimensionality reduction and clustering simultaneously. We are
mainly concerned with the application in clustering. In particular, we propose a clustering algorithm
that we called optimal discriminant clustering (ODC). Moreover, we establish a connection of our
clustering algorithm with discriminative clustering algorithms [3, 13] and spectral clustering algorithms [7, 15]. This implies that we can cast these clustering algorithms as regression-type problems.
In turn, this facilitates the introduction of penalty terms such as the lasso and elastic net so that we
have sparse unsupervised learning algorithms.
Throughout this paper, I_m denotes the m×m identity matrix, 1_m the m×1 vector of ones, 0 the zero vector or matrix with appropriate size, and H_m = I_m − (1/m)1_m 1_m' the m×m centering matrix. For an m×1 vector a = (a_1, ..., a_m)', diag(a) represents the m×m diagonal matrix with a_1, ..., a_m as its diagonal entries. For an m×m matrix A = [a_ij], we let A^+ be the Moore-Penrose inverse of A, tr(A) be the trace of A, rk(A) be the rank of A and ||A||_F = sqrt(tr(A'A)) be the Frobenius norm of A.
2 Problem Formulation
We are concerned with a multi-class classification problem. Given a set of n p-dimensional data points, {x_1, ..., x_n} ⊂ X ⊆ R^p, we assume that the x_i are grouped into c disjoint classes and that each x_i belongs to one class. Let V = {1, 2, ..., n} denote the index set of the data points x_i and partition V into c disjoint subsets V_j; i.e., V_i ∩ V_j = ∅ for i ≠ j and ∪_{j=1}^c V_j = V, where the cardinality of V_j is n_j so that Σ_{j=1}^c n_j = n.

We also make use of a matrix representation for the problem in question. In particular, we let X = [x_1, ..., x_n]' be an n×p data matrix, and E = [e_ij] be an n×c indicator matrix with e_ij = 1 if input x_i is in class j and e_ij = 0 otherwise. Let Π = diag(n_1, ..., n_c), Π^{1/2} = diag(√n_1, ..., √n_c), π = (n_1, ..., n_c)' and √π = (√n_1, ..., √n_c)'. It follows that 1_n'E = 1_c'Π = π', E1_c = 1_n, 1_c'π = n, E'E = Π and Π^{-1}π = 1_c.

2.1 Scoring Matrices
Hastie et al. [6] defined a scoring matrix for the c-class classification problem. That is, it is such a c×(c−1) matrix Θ ∈ R^{c×(c−1)} that Θ'(E'E)Θ = Θ'ΠΘ = I_{c−1}. The jth row of Θ defines a scoring or scaling for the jth class. Here we refine this definition as:

Definition 1 Given a c-class classification problem with the cardinality of the jth class being n_j, a c×(c−1) matrix Θ is referred to as the class scoring matrix if it satisfies

Θ'ΠΘ = I_{c−1}  and  π'Θ = 0.
It follows from this definition that ΘΘ' = Π^{-1} − (1/n)1_c1_c'. In the literature [15], the authors presented a specific example for Θ = (θ_1, ..., θ_{c−1}), whose columns are

θ_1 = ( √((n−n_1)/(n n_1)), −√(n_1/(n(n−n_1))) 1_{c−1}' )'

and, for l = 2, ..., c−1,

θ_l' = ( 0·1_{l−1}', √( Σ_{j=l+1}^c n_j / (n_l Σ_{j=l}^c n_j) ), −√( n_l / (Σ_{j=l}^c n_j · Σ_{j=l+1}^c n_j) ) 1_{c−l}' ).

Especially, when c = 2, Θ = ( √(n_2/(n n_1)), −√(n_1/(n n_2)) )' is a 2-dimensional vector.
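To make the construction above concrete, here is a small numerical sketch (ours, not from the paper; the function name is illustrative) that builds Θ from the class counts and checks both defining properties of a class scoring matrix, Θ'ΠΘ = I_{c−1} and π'Θ = 0:

```python
import numpy as np

def class_scoring_matrix(counts):
    """Build the c x (c-1) class scoring matrix Theta of Section 2.1.

    counts: class sizes (n_1, ..., n_c). The columns theta_l follow the
    explicit construction quoted from [15]; by design Theta' Pi Theta = I_{c-1}
    and pi' Theta = 0, where Pi = diag(counts) and pi = counts.
    """
    counts = np.asarray(counts, dtype=float)
    c, n = len(counts), counts.sum()
    Theta = np.zeros((c, c - 1))
    # theta_1 contrasts class 1 against all remaining classes.
    Theta[0, 0] = np.sqrt((n - counts[0]) / (n * counts[0]))
    Theta[1:, 0] = -np.sqrt(counts[0] / (n * (n - counts[0])))
    # theta_l (l >= 2, 1-indexed as in the text) contrasts class l
    # against classes l+1, ..., c; leading entries stay zero.
    for l in range(2, c):
        S_l = counts[l - 1:].sum()       # sum_{j=l}^c n_j
        S_next = counts[l:].sum()        # sum_{j=l+1}^c n_j
        Theta[l - 1, l - 1] = np.sqrt(S_next / (counts[l - 1] * S_l))
        Theta[l:, l - 1] = -np.sqrt(counts[l - 1] / (S_l * S_next))
    return Theta
```

Each column is π-orthogonal to the earlier ones by a telescoping argument over the partial sums S_l, which the assertions below confirm numerically.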
Let Y = EΘ (an n×(c−1) matrix). We then have Y'Y = I_{c−1} and 1_n'Y = 0. To address an unsupervised clustering problem with c classes, we relax the setting of Y = EΘ and give the following definition.

Definition 2 An n×(c−1) matrix Y is referred to as the sample scoring matrix if it satisfies

Y'Y = I_{c−1}  and  1_n'Y = 0.

Note that c does not necessarily represent the number of classes in this definition. For example, we view c−1 as the dimension of a reduced dimensional space in the dimensionality reduction problem.
2.2 Optimal Scoring for LDA

To devise a classifier for the c-class classification problem, we consider a penalized optimal scoring model, which is defined by

min_{Θ, W} f(Θ, W) = (1/2)||EΘ − H_n XW||_F² + (γ²/2) tr(W'W)    (1)

under the constraints Θ'ΠΘ = I_{c−1} and π'Θ = 0, where Θ ∈ R^{c×(c−1)} and W ∈ R^{p×(c−1)}. Compared with the setting in [6], we add the constraint π'Θ = 0. The reason is that 1_n'H_nXW = 0; we thus impose 1_n'EΘ = π'Θ = 0 for consistency.
Denote

R = Π^{-1/2} E'H_nX (X'H_nX + γ²I_p)^{-1} X'H_nE Π^{-1/2}.

Since R√π = 0, there exists a c×(c−1) orthogonal matrix Γ, the columns of which are eigenvectors of R. That is, Γ satisfies Γ'Γ = I_{c−1} and Γ'√π = 0.

Theorem 1 A minimizer of Problem (1) is Θ̂ = Π^{-1/2}Γ and Ŵ = (X'H_nX + γ²I_p)^{-1}X'H_nEΘ̂. Here [Γ, (1/√n)√π] is the c×c matrix of the orthonormal eigenvectors of R.
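As an illustration of Theorem 1 (our own sketch; the function and variable names are ours), one can form R, take its leading eigenvectors as Γ, and check that Θ̂ indeed satisfies the class scoring constraints:

```python
import numpy as np

def lda_optimal_scoring(X, E, gamma2):
    """Theorem 1 as code (a sketch): Theta_hat = Pi^{-1/2} Gamma, where the
    columns of Gamma are the c-1 leading eigenvectors of R (all orthogonal
    to sqrt(pi)), and W_hat = (X'Hn X + gamma2 I)^{-1} X'Hn E Theta_hat."""
    n, p = X.shape
    c = E.shape[1]
    Hn = np.eye(n) - np.ones((n, n)) / n
    counts = E.sum(axis=0)
    Pi_inv_sqrt = np.diag(1.0 / np.sqrt(counts))
    M = np.linalg.inv(X.T @ Hn @ X + gamma2 * np.eye(p))
    R = Pi_inv_sqrt @ E.T @ Hn @ X @ M @ X.T @ Hn @ E @ Pi_inv_sqrt
    evals, evecs = np.linalg.eigh(R)
    # the sqrt(pi) direction carries eigenvalue 0, so it is excluded here
    Gamma = evecs[:, ::-1][:, : c - 1]
    Theta = Pi_inv_sqrt @ Gamma
    W = M @ X.T @ Hn @ E @ Theta
    return Theta, W
```

The checks Θ̂'ΠΘ̂ = I_{c−1} and π'Θ̂ = 0 follow from Γ'Γ = I and Γ'√π = 0, as the assertions confirm.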
Since for an arbitrary class scoring matrix Θ its rank is c−1, we have Θ = Θ̂Q, where Q is some (c−1)×(c−1) orthonormal matrix. Moreover, it follows from ΘΘ' = Π^{-1} − (1/n)1_c1_c' that the between-class scatter matrix is given by

Σ_b = X'H_nE ΘΘ' E'H_nX = X'H_nE Θ̂Θ̂' E'H_nX.

Accordingly, we can also write the generalized eigenproblem for the penalized LDA as

X'H_nE Θ̂Θ̂' E'H_nX A = (X'H_nX + γ²I_p) A Λ,

because the total scatter matrix Σ is Σ = X'H_nX. We now obtain

Ŵ Θ̂'E'H_nX A = A Λ.

It is well known that Ŵ Θ̂'E'H_nX and Θ̂'E'H_nX Ŵ have the same nonzero eigenvalues. Moreover, Θ̂'E'H_nX A is the eigenvector matrix of Θ̂'E'H_nX Ŵ. We thus establish the relationship between A in the penalized LDA and W in the penalized optimal scoring model (1).
3 Optimal Scoring for Unsupervised Learning
In this section we extend the notion of optimal scoring to unsupervised learning problems, leading
to a new framework for dimensionality reduction and clustering analysis simultaneously.
3.1 Framework
In particular, we relax EΘ in (1) as a sample scoring matrix Y and define the following penalized model:

min_{Y, W} f(Y, W) = (1/2)||Y − H_n XW||_F² + (γ²/2) tr(W'W)    (2)

under the constraints 1_n'Y = 0 and Y'Y = I_{c−1}. The following theorem provides a solution for this problem.

Theorem 2 A minimizer of Problem (2) is Ŷ and Ŵ = (X'H_nX + γ²I_p)^{-1}X'H_nŶ, where Ŷ is the n×(c−1) orthogonal matrix of the top eigenvectors of H_nX(X'H_nX + γ²I_p)^{-1}X'H_n.

The proof is given in Appendix A. Note that all the eigenvalues of H_nX(X'H_nX + γ²I_p)^{-1}X'H_n are between 0 and 1. Especially, when γ² = 0, the eigenvalues are either 1 or 0. In this case, if rk(H_nX) ≥ c−1, f(Ŷ, Ŵ) achieves its minimum 0; otherwise the minimum value is (c−1−rk(H_nX))/2.
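The solution of Theorem 2 translates directly into a few lines of linear algebra. The sketch below (ours; `optimal_scoring_solution` and its argument names are illustrative) computes Ŷ and Ŵ and is a convenient way to confirm the constraints Y'Y = I_{c−1} and 1_n'Y = 0 numerically:

```python
import numpy as np

def optimal_scoring_solution(X, c, gamma2):
    """Solve problem (2) per Theorem 2 (a sketch; gamma2 is the ridge term).

    Returns Y_hat, the n x (c-1) top eigenvectors of
    S = Hn X (X'Hn X + gamma2 I)^-1 X'Hn, and
    W_hat = (X'Hn X + gamma2 I)^-1 X'Hn Y_hat.
    """
    n, p = X.shape
    Hn = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    Xc = Hn @ X
    ridge_inv = np.linalg.inv(Xc.T @ Xc + gamma2 * np.eye(p))
    S = Xc @ ridge_inv @ Xc.T                  # symmetric, eigenvalues in [0, 1)
    evals, evecs = np.linalg.eigh(S)           # ascending eigenvalue order
    Y_hat = evecs[:, ::-1][:, : c - 1]         # c-1 top eigenvectors
    W_hat = ridge_inv @ Xc.T @ Y_hat
    return Y_hat, W_hat
```

Because 1_n lies in the null space of S, the top eigenvectors are automatically orthogonal to 1_n whenever their eigenvalues are positive.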
With the estimates of Y and W, we can develop an unsupervised learning procedure. It is clear that
W can be treated as a non-orthogonal projection matrix and Hn XW is then the low-dimensional
configuration of X. Using this treatment, we obtain a new alternative to the regression formulation of
PCA by Zou et al. [17]. In this paper, however, we concentrate on the application of the framework
in clustering analysis.
3.2 Optimal Discriminant Clustering
Our clustering procedure is given in Algorithm 1. We refer to this procedure as optimal discriminant
clustering due to its relationship with LDA, which is shown by the connection between (1) and (2).
Assume that X̂ = [x̂_1, ..., x̂_n]' (n×r) is a feature matrix corresponding to the data matrix X. In this case, we have

S = H_nX̂(X̂'H_nX̂ + γ²I_r)^{-1}X̂'H_n = C(C + γ²I_n)^{-1},

where C = H_nX̂X̂'H_n is the n×n centered kernel matrix. This implies that we can obtain Ŷ without the explicit use of the feature matrix X̂. Moreover, we can compute Z by

Z = H_nX̂(X̂'H_nX̂ + γ²I_r)^{-1}X̂'H_nY = SY.

We are thus able to devise this clustering algorithm by using the reproducing kernel k(·,·): X × X → R such that K(x_i, x_j) = x̂_i'x̂_j and K = X̂X̂'.
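The identity S = C(C + γ²I_n)^{-1} is what makes the method kernelizable. A minimal sketch (ours; function name illustrative) that computes S from a kernel matrix alone, which can be checked against the feature-space formula:

```python
import numpy as np

def kernelized_S(K, gamma2):
    """Compute S = C (C + gamma2 I)^-1 from a kernel matrix K alone.

    C = Hn K Hn is the centered kernel matrix; no explicit feature matrix
    X_hat is needed. solve() is used instead of an explicit inverse; the
    final transpose uses the symmetry of C and C + gamma2 I.
    """
    n = K.shape[0]
    Hn = np.eye(n) - np.ones((n, n)) / n
    C = Hn @ K @ Hn
    return np.linalg.solve(C + gamma2 * np.eye(n), C).T  # = C (C + g I)^-1
```

The agreement with the feature-space expression follows from the push-through identity X̃(X̃'X̃ + γ²I)^{-1}X̃' = X̃X̃'(X̃X̃' + γ²I)^{-1} with X̃ = H_nX̂.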
Algorithm 1 Optimal Discriminant Clustering Algorithm
1: procedure ODC(H_nX, c, γ²)
2:   Estimate Ŷ and Ŵ according to Theorem 2;
3:   Calculate Z = [z_1, ..., z_n]' = H_nXŴ;
4:   Perform K-means on the z_i;
5:   Return the partition of the z_i as the partition of the x_i.
6: end procedure
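Algorithm 1 can be sketched end to end in its kernelized form as follows (an illustrative implementation of ours; the tiny Lloyd's loop stands in for any off-the-shelf K-means routine):

```python
import numpy as np

def odc(K, c, gamma2, n_iter=50, seed=0):
    """Optimal Discriminant Clustering (Algorithm 1), kernelized sketch.

    K: n x n kernel matrix; c: number of clusters; gamma2: ridge parameter.
    Steps: Y = top c-1 eigenvectors of S = C (C + gamma2 I)^-1 with
    C = Hn K Hn, then Z = S Y, then K-means on the rows of Z.
    """
    n = K.shape[0]
    Hn = np.eye(n) - np.ones((n, n)) / n
    C = Hn @ K @ Hn
    S = np.linalg.solve(C + gamma2 * np.eye(n), C).T   # C (C + gamma2 I)^-1
    evals, evecs = np.linalg.eigh((S + S.T) / 2)
    Y = evecs[:, ::-1][:, : c - 1]
    Z = S @ Y
    # plain Lloyd's K-means on the embedded points z_i
    rng = np.random.default_rng(seed)
    centers = Z[rng.choice(n, size=c, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((Z[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(c):
            if np.any(labels == j):
                centers[j] = Z[labels == j].mean(axis=0)
    return labels
```

On well-separated data with a linear kernel this recovers the underlying groups, as the test below checks.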
3.3 Related Work
We now explore the connection of the optimal discriminant clustering with the discriminative clustering algorithm [3] and spectral clustering [7]. Recall that Ŷ is the matrix of the c−1 top eigenvectors of C(C + γ²I_n)^{-1}. Consider that if λ ≠ 0 is an eigenvalue of C with associated eigenvector u, then λ/(λ + γ²) (≠ 0) is an eigenvalue of C(C + γ²I_n)^{-1} with associated eigenvector u. Moreover, λ/(λ + γ²) is increasing as λ increases. This implies that Ŷ is also the matrix of the c−1 top eigenvectors of C. As we know, spectral clustering applies a rounding scheme such as K-means directly on Ŷ. We thus have a relationship between spectral clustering and optimal discriminant clustering.
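This eigenvector argument can be verified numerically. The sketch below (ours; names illustrative) compares the projectors onto the top-k eigenspaces of C and C(C + γ²I)^{-1}:

```python
import numpy as np

def top_eig_projector(M, k):
    """Projector onto the span of the k top eigenvectors of symmetric M."""
    evals, evecs = np.linalg.eigh(M)
    U = evecs[:, ::-1][:, :k]
    return U @ U.T

def shares_top_eigvecs(C, gamma2, k):
    """Check that C and C (C + gamma2 I)^-1 have the same top-k eigenspace.

    Since lam -> lam / (lam + gamma2) is increasing, the eigenvalue ordering
    is preserved, so the two matrices share their leading eigenvectors.
    """
    n = C.shape[0]
    S = np.linalg.solve(C + gamma2 * np.eye(n), C).T
    return np.allclose(top_eig_projector(C, k),
                       top_eig_projector((S + S.T) / 2, k), atol=1e-6)
```

Comparing projectors rather than eigenvector matrices avoids spurious sign and ordering mismatches.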
We study the relationship between the discriminative clustering algorithm and the spectral clustering algorithm. Let M be a linear transformation from the r-dimensional X̂ to an s-dimensional transformed feature space F, namely

F = X̂M,

where M is an r×s matrix of rank s (s < r). The corresponding scatter matrices in the F-space are thus given by M'ΣM and M'Σ_bM. The discriminative clustering algorithm [3, 13] in the reproducing kernel Hilbert space (RKHS) tries to solve the problem of

argmax_{E, M} f(E, M) = tr( (M'(Σ + γ²I_r)M)^{-1} M'Σ_b M )
                      = tr( (M'(X̂'H_nX̂ + γ²I_r)M)^{-1} M'X̂'H_nE (E'E)^{-1} E'H_nX̂M ).

Applying the discussion in [15] to H_nX̂M(M'(X̂'H_nX̂ + γ²I_r)M)^{-1}M'X̂'H_n, we have the following relaxation problem

max_{Y ∈ R^{n×(c−1)}, M ∈ R^{r×s}} tr( Y'H_nX̂M(M'(X̂'H_nX̂ + γ²I_r)M)^{-1}M'X̂'H_nY ),
s.t. Y'Y = I_{c−1} and Y'1_n = 0.    (3)

Express M = X̂'H_nB + N, where N satisfies N'X̂'H_n = 0 (i.e., N ∈ span{X̂'H_n}^⊥) and B is some n×s matrix. Under the condition of either γ² = 0 or N = 0 (i.e., M ∈ span{X̂'H_n}), we can obtain that

H_nX̂M(M'(X̂'H_nX̂ + γ²I_r)M)^{-1}M'X̂'H_n = CB(B'(CC + γ²C)B)^{-1}B'C.

Again consider that if λ ≠ 0 is an eigenvalue of C with associated eigenvector u, then λ/(λ + γ²) ≠ 0 is an eigenvalue of C(CC + γ²C)^+C with associated eigenvector u. Moreover, λ/(λ + γ²) is increasing in λ. We now directly obtain the following theorem from Theorem 3.1 in [13].
Theorem 3 Let Y* and M* be the solution of Problem (3). Then
Table 1: Summary of the benchmark datasets, where c is the number of classes, p is the dimension of the input vector, and n is the number of samples in the dataset.

Types  Dataset                      c     p      n
Face   ORL                         40  1024    400
       Yale                        15  1024    165
       PIE                         68  1024   6800
Gene   SRBCT                        4  2308     63
UCI    Iris                         4     4    150
       Yeast                       10     8   1484
       Image segmentation           7    19   2100
       Statlog landsat satellite    7    36   2000
(i) If γ² = 0, Y* is the solution of the following problem:

argmax_{Y ∈ R^{n×(c−1)}} tr(Y'CC^+Y),  s.t. Y'Y = I_{c−1} and Y'1_n = 0.

(ii) If M ∈ span{X̂'H_n}, Y* is the solution of the following problem:

argmax_{Y ∈ R^{n×(c−1)}} tr(Y'CY),  s.t. Y'Y = I_{c−1} and Y'1_n = 0.
Theorem 3 shows that discriminative clustering is essentially equivalent to spectral clustering. This
further leads us to a relationship between the discriminative clustering and optimal discriminant
clustering from the relationship between the spectral clustering and optimal discriminant clustering.
In summary, we are able to unify the discriminative clustering as well as spectral clustering into the
optimal scoring framework in (2).
4 Experimental Study
To evaluate the performance of our optimal discriminant clustering (ODC) algorithm, we conducted experimental comparisons with other related clustering algorithms on several real-world datasets. In particular, the comparison was implemented on three face datasets, the "SRBCT" gene dataset, and four UCI datasets. Further details of these datasets are summarized in Table 1.
To effectively evaluate the performance, we employed two typical measurements: the Normalized
Mutual Information (NMI) and the Clustering Error (CE). It should be mentioned that for NMI, the
larger this value, the better the performance. For CE, the smaller the value, the better the performance. More details and the corresponding implementations for both can be found in [11].
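For readers who want to reproduce the evaluation, here is one common way to compute the two measures (a sketch of ours; the paper defers the exact definitions to the implementation referenced in [11], so treat these as standard variants, with `nmi` and `clustering_error` as illustrative names):

```python
import numpy as np
from itertools import permutations

def nmi(a, b):
    """Normalized Mutual Information between two labelings (a common variant
    using the geometric-mean normalization)."""
    a, b = np.asarray(a), np.asarray(b)
    ua, ub = np.unique(a), np.unique(b)
    P = np.array([[np.mean((a == i) & (b == j)) for j in ub] for i in ua])
    pa, pb = P.sum(1), P.sum(0)
    nz = P > 0
    mi = (P[nz] * np.log(P[nz] / np.outer(pa, pb)[nz])).sum()
    ha = -(pa * np.log(pa)).sum()
    hb = -(pb * np.log(pb)).sum()
    return mi / np.sqrt(ha * hb)

def clustering_error(pred, truth):
    """Fraction of misassigned points under the best matching of cluster
    labels to classes (brute force over permutations; fine for small c)."""
    pred, truth = np.asarray(pred), np.asarray(truth)
    classes = np.unique(truth)
    best = 0.0
    for perm in permutations(classes):
        mapping = dict(zip(np.unique(pred), perm))
        best = max(best, np.mean([mapping[p] == t for p, t in zip(pred, truth)]))
    return 1.0 - best
```

A clustering that matches the ground truth up to a relabeling scores NMI = 1 and CE = 0.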
In the experiments, we compared our ODC with four different clustering algorithms, i.e., the
conventional K-means [1], normalized cut (NC) [9], DisCluster [3] and DisKmeans [13]. It is
worth noting that two discriminative clustering algorithms: DisCluster [3] and DisKmeans [13],
are very closely related to our ODC, because they are derived from the discriminant analysis criteria in essence (also see the analysis in Section 3.3). In addition, the implementation code for NC is available at http://www.cis.upenn.edu/~jshi/software/.
For the sake of simplicity, the parameter γ² in ODC is sought from the range γ² ∈ {10^{-3}, 10^{-2.5}, 10^{-2}, 10^{-1.5}, 10^{-1}, 10^{-0.5}, 10^{0}, 10^{0.5}, 10^{1}, 10^{1.5}, 10^{2}, 10^{2.5}, 10^{3}}. Similarly, the parameters in the other clustering algorithms compared here are also searched in a wide range.
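This tuning protocol amounts to a grid sweep. A generic sketch (ours; `cluster_fn` and `score_fn` are placeholder names for the clustering routine and quality measure):

```python
import numpy as np

def sweep_gamma2(cluster_fn, score_fn, truth):
    """Tune gamma^2 over the half-decade grid used in the paper.

    cluster_fn(g) should run the clustering with ridge parameter g and
    return labels; score_fn(labels, truth) returns a quality measure
    (larger is better). Returns the best g and all scores.
    """
    grid = [10.0 ** e for e in np.arange(-3.0, 3.5, 0.5)]  # 10^-3 ... 10^3
    scores = {g: score_fn(cluster_fn(g), truth) for g in grid}
    best = max(scores, key=scores.get)
    return best, scores
```

Passing the clustering routine as a callable keeps the sweep reusable across the algorithms compared in Table 2.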
For simplicity, we just reported the best results of the clustering algorithms with respect to different parameters on each dataset. Table 2 summarizes the NMI and CE on all datasets. According to the
NMI values in Table 2, our ODC outperforms other clustering algorithms on five datasets: ORL,
SRBCT, iris, yeast and image segmentation. According to the CE values in Table 2, it
is obvious that the performance of our ODC is best in comparison with other algorithms on all the
datasets, and NC and DisKmeans algorithms can achieve the almost same performance with ODC
on the SRBCT and iris datasets respectively. Also, it is seen that the DisCluster algorithm has
dramatically different performance based on the NMI and CE. The main reason is that the final
solution in DisCluster is very sensitive to the initial variables and numerical computation.
Figure 1: The NMI versus the parameter γ tuning in ODC on all datasets, where the NMI of K-means is used as the baseline: (a) ORL; (b) Yale; (c) PIE; (d) SRBCT; (e) iris; (f) yeast; (g) image segmentation; (h) statlog landsat satellite.
In order to reveal the effect of the parameter γ on ODC, Figures 1 and 2 depict the NMI and CE results of ODC with respect to different parameters γ on all datasets. Similar to [11, 13], we used the results of K-means as a baseline. From Figures 1 and 2, we can see that, similar to the conventional clustering algorithms (including the compared algorithms), the parameter γ has a significant impact on the performance of ODC, especially when the evaluation results are measured by NMI. In contrast to the result in Figure 1, the effect of the parameter γ becomes less pronounced in Figure 2.
Table 2: Clustering results: the Normalized Mutual Information (NMI) and the Clustering Error (CE) (%) of all clustering algorithms are calculated on different datasets.

Measure  Dataset                    K-means  NC      DisCluster  DisKmeans  ODC
NMI      ORL                        0.7971   0.8015  0.7978      0.8531     0.8567
         Yale                       0.6237   0.6203  0.5974      0.5641     0.5766
         PIE                        0.1140   0.2232  0.1940      0.3360     0.3035
         SRBCT                      0.2509   0.3722  0.3216      0.2683     0.3966
         Iris                       0.6595   0.6876  0.7248      0.7353     0.7353
         Yeast                      0.2968   0.2915  0.2993      0.3020     0.3041
         Image segmentation         0.5830   0.5500  0.5700      0.5934     0.5942
         Statlog landsat satellite  0.6126   0.6316  0.6152      0.6009     0.6166
CE (%)   ORL                        38.25    34.50   38.75       29.00      28.50
         Yale                       45.45    46.06   45.45       45.45      44.84
         PIE                        79.82    79.82   77.35       66.23      65.52
         SRBCT                      55.55    47.61   50.79       53.96      47.61
         Iris                       16.66    15.33   12.66       11.33      11.33
         Yeast                      59.43    59.90   59.43       57.07      56.73
         Image segmentation         45.14    49.47   45.95       41.66      40.23
         Statlog landsat satellite  32.30    32.65   32.25       31.20      30.50

5 Concluding Remarks
In this paper we have proposed a regression framework to deal with unsupervised dimensionality
reduction and clustering simultaneously. The framework is based on the optimal scoring and ridge
penalty. In particular, we have developed a new clustering algorithm which is called optimal discriminant clustering (ODC). ODC can efficiently identify the optimal solution and it has an underlying
relationship with the discriminative clustering and spectral clustering.
Figure 2: The CE (%) versus the parameter γ tuning in ODC on all datasets, where the CE (%) of K-means is used as the baseline: (a) ORL; (b) Yale; (c) PIE; (d) SRBCT; (e) iris; (f) yeast; (g) image segmentation; (h) statlog landsat satellite.
This framework allows us to develop a sparse unsupervised learning algorithm; that is, we alternatively consider the following optimization problem:

min_{Y, W} f(Y, W) = (1/2)||Y − H_n XW||_F² + (λ_1/2) tr(W'W) + λ_2 ||W||_1

under the constraints 1_n'Y = 0 and Y'Y = I_{c−1}. We will study this further.
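The paper leaves the algorithm for this ℓ1-penalized problem to future work. One natural way to attack the W-step for a fixed Y is ISTA with soft-thresholding (our sketch, not the authors' method; names are illustrative):

```python
import numpy as np

def soft_threshold(A, t):
    """Elementwise soft-thresholding, the proximal operator of t * ||.||_1."""
    return np.sign(A) * np.maximum(np.abs(A) - t, 0.0)

def sparse_w_step(Xc, Y, lam1, lam2, n_steps=500):
    """Solve min_W 0.5||Y - Xc W||_F^2 + (lam1/2) tr(W'W) + lam2 ||W||_1
    for a fixed sample scoring matrix Y by ISTA. Xc is the centered data
    Hn X; lam1 is the ridge weight, lam2 the lasso weight."""
    n, p = Xc.shape
    L = np.linalg.norm(Xc, 2) ** 2 + lam1   # Lipschitz constant of smooth part
    W = np.zeros((p, Y.shape[1]))
    for _ in range(n_steps):
        grad = Xc.T @ (Xc @ W - Y) + lam1 * W
        W = soft_threshold(W - grad / L, lam2 / L)
    return W
```

With λ_2 = 0 this recovers the ridge solution of problem (2); for large λ_2 the iterate collapses to the all-zero matrix.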
Acknowledgement
This work has been supported in part by the Program for Changjiang Scholars and Innovative Research Team in University (IRT0652, PCSIRT), China.
A Proof of Theorem 2
For simplicity, we replace H_nX by X and let q = c−1 in the following derivation. Consider the Lagrange function:

L(Y, W, B, b) = (1/2)tr(Y'Y) − tr(Y'XW) + (1/2)tr(W'(X'X + γ²I_p)W) − (1/2)tr(B(Y'Y − I_q)) − tr(b'Y'1_n),

where B is a q×q symmetric matrix of Lagrange multipliers and b is a q×1 vector of Lagrange multipliers. By direct differentiation, it can be shown that

∂L/∂Y = Y − XW − YB − 1_n b',
∂L/∂W = (X'X + γ²I_p)W − X'Y.

Letting ∂L/∂Y = 0, we have

Y − XW − YB − 1_n b' = 0.

Pre-multiplying both sides of the above equation by 1_n', we obtain b = 0. Thus, it follows from ∂L/∂Y = 0 and ∂L/∂W = 0 that

Y − XW − YB = 0,
W = (X'X + γ²I_p)^{-1}X'Y.
Substituting the second equation into the first equation, we further have

(I_n − X(X'X + γ²I_p)^{-1}X')Y = YB.

Now we take the spectral decomposition of B as B = U_B Λ_B U_B', where U_B is a q×q orthonormal matrix and Λ_B is a q×q diagonal matrix. We thus have (I_n − X(X'X + γ²I_p)^{-1}X')YU_B = YU_B Λ_B. This shows that the diagonal entries of Λ_B and the columns of YU_B are the eigenvalues and the associated eigenvectors of I_n − X(X'X + γ²I_p)^{-1}X'.
We consider the case that n ≥ p. Let the SVD of X be X = UΔV', where U (n×p) and V (p×p) are orthogonal, and Δ = diag(δ_1, ..., δ_p) (p×p) is a diagonal matrix with δ_1 ≥ δ_2 ≥ ... ≥ δ_p ≥ 0. We then have X(X'X + γ²I_p)^{-1}X' = UΛU', where Λ = diag(λ_1, ..., λ_p) with λ_i = δ_i²/(δ_i² + γ²). There exists an n×(n−p) orthogonal matrix U_3 such that its last column is (1/√n)1_n and [U, U_3] is an n×n orthonormal matrix. That is, U_3 is the eigenvector matrix of X(X'X + γ²I_p)^{-1}X' corresponding to the eigenvalue 0. Let U_1 be the n×q matrix of the first q columns of [U, U_3].

We now define Ŷ = U_1, Ŵ = (X'X + γ²I_p)^{-1}X'U_1, U_B = I_q and Λ_B = diag(1 − λ_1, ..., 1 − λ_q), where λ_i = 0 whenever i > p. It is easily seen that such a Ŷ satisfies Ŷ'Ŷ = I_q and Ŷ'1_n = 0 due to U_1'U_1 = I_q and X'1_n = 0. Moreover, we have

f(Ŷ, Ŵ) = q/2 − (1/2) Σ_{i=1}^q λ_i = q/2 − (1/2) Σ_{i=1}^q δ_i²/(δ_i² + γ²),

where λ_i = 0 whenever i > p. Note that all the eigenvalues of X(X'X + γ²I_p)^{-1}X' are between 0 and 1. Especially, when γ² = 0, the eigenvalues are either 1 or 0. In this case, if rk(X) ≥ q, f(Ŷ, Ŵ) achieves its minimum 0; otherwise the minimum value is (q − rk(X))/2.
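The closed-form value of f(Ŷ, Ŵ) above can be checked against the definition directly (our numerical sketch; γ² is passed as `gamma2` and X is assumed already centered, as in this appendix):

```python
import numpy as np

def objective(Y, W, X, gamma2):
    """f(Y, W) of problem (2) with X already centered (appendix convention)."""
    return (0.5 * np.linalg.norm(Y - X @ W, 'fro') ** 2
            + 0.5 * gamma2 * np.trace(W.T @ W))

def closed_form_minimum(X, q, gamma2):
    """q/2 - (1/2) * sum_{i<=q} delta_i^2 / (delta_i^2 + gamma2), where the
    delta_i are the singular values of X in decreasing order."""
    d = np.linalg.svd(X, compute_uv=False)
    lam = d ** 2 / (d ** 2 + gamma2)
    return 0.5 * q - 0.5 * lam[:q].sum()
```

Evaluating the objective at Ŷ = U_1 and Ŵ = (X'X + γ²I)^{-1}X'U_1 reproduces the closed form to machine precision.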
To verify that (Ŷ, Ŵ) is a minimizer of problem (2), we consider the Hessian matrix of L with respect to (Y, W). Let vec(Y') = (y_11, ..., y_1q, y_21, ..., y_nq)' and vec(W') = (w_11, ..., w_1q, w_21, ..., w_pq)'. The Hessian matrix is then given by

H(Y, W) = [ ∂²L/∂vec(Y')∂vec(Y')'   ∂²L/∂vec(Y')∂vec(W')'
            ∂²L/∂vec(W')∂vec(Y')'   ∂²L/∂vec(W')∂vec(W')' ]
        = [ (I_q − B)⊗I_n   −I_q⊗X'
            −I_q⊗X          I_q⊗(X'X + γ²I_p) ].

Let C_0' = [C_1', C_2'], where C_1 and C_2 are n×q and p×q, be an arbitrary nonzero (n+p)×q matrix such that C_1'[1_n, Ŷ] = 0, which is equivalent to C_1'1_n = 0 and C_1'U_1 = 0.

If rk(X) ≤ q, we have C_1'X = 0. Hence,

vec(C_0')' H(Ŷ, Ŵ) vec(C_0') = tr(C_1'C_1(I_q − B)) − 2tr(C_1'XC_2) + tr(C_2'(X'X + γ²I_p)C_2)
                             = tr(C_1'C_1(I_q − B)) + tr(C_2'(X'X + γ²I_p)C_2) ≥ 0.

This implies that (Ŷ, Ŵ) is a minimizer of problem (2).
In the case that rk(X) = m > q, we have p > q. Thus we can partition U and V into U = [U_1, U_2] and V = [V_1, V_2], where V_1 and V_2 are p×q and p×(p−q). Thus,

vec(C_0')' H(Ŷ, Ŵ) vec(C_0') = tr(C_1'C_1(I_q − B)) − 2tr(C_1'XC_2) + tr(C_2'(X'X + γ²I_p)C_2)
  ≥ tr(C_1'U_2Λ_2U_2'C_1) − 2tr(C_1'U_2Δ_2V_2'C_2) + tr(C_2'V_2D_2V_2'C_2)
    + tr(C_1'U_3U_3'C_1Λ_1) + tr(C_2'V_1D_1V_1'C_2)
  = tr( (Λ_2^{1/2}U_2'C_1 − D_2^{1/2}V_2'C_2)' (Λ_2^{1/2}U_2'C_1 − D_2^{1/2}V_2'C_2) )
    + tr(C_1'U_3U_3'C_1Λ_1) + tr(C_2'V_1D_1V_1'C_2) ≥ 0.

Here Δ_1 = diag(δ_1, ..., δ_q), Δ_2 = diag(δ_{q+1}, ..., δ_p), Λ_1 = diag(λ_1, ..., λ_q), Λ_2 = diag(λ_{q+1}, ..., λ_p), D_1 = Δ_1² + γ²I_q and D_2 = Δ_2² + γ²I_{p−q}, so we have Δ_2 = D_2^{1/2}Λ_2^{1/2}. Moreover, we use the fact that

tr(C_1'U_2U_2'C_1Λ_1) ≥ tr(C_1'U_2Λ_2U_2'C_1)

because λ_iI − Λ_2, for i = 1, ..., q, are positive semidefinite.
If n < p, we also make the SVD of X as X = UΔV'. But, right now, U is n×n, V is p×n, and Δ is n×n. Using this SVD, we have the same result as the case of n ≥ p.
References
[1] C. M. Bishop. Pattern Recognition and Machine Learning. Springer, first edition, 2007.
[2] L. Clemmensen, T. Hastie, and B. Ersbøll. Sparse discriminant analysis. Technical report, June 2008.
[3] F. De la Torre and T. Kanade. Discriminative cluster analysis. In The 23rd International Conference on Machine Learning, 2006.
[4] R. O. Duda, P. E. Hart, and D. G. Stork. Pattern Classification. John Wiley and Sons, New York, second edition, 2001.
[5] T. Hastie, A. Buja, and R. Tibshirani. Penalized discriminant analysis. The Annals of Statistics, 23(1):73-102, 1995.
[6] T. Hastie, R. Tibshirani, and A. Buja. Flexible discriminant analysis by optimal scoring. Journal of the American Statistical Association, 89(428):1255-1270, 1994.
[7] A. Y. Ng, M. I. Jordan, and Y. Weiss. On spectral clustering: analysis and an algorithm. In Advances in Neural Information Processing Systems 14, volume 14, 2002.
[8] C. H. Park and H. Park. A relationship between linear discriminant analysis and the generalized minimum squared error solution. SIAM Journal on Matrix Analysis and Applications, 27(2):474-492, 2005.
[9] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888-905, 2000.
[10] R. Tibshirani. Regression shrinkage and selection via the lasso. Journal of the Royal Statistical Society, Series B, 58:267-288, 1996.
[11] M. Wu and B. Schölkopf. A local learning approach for clustering. In Advances in Neural Information Processing Systems 19, 2007.
[12] J. Ye. Least squares linear discriminant analysis. In The Twenty-Fourth International Conference on Machine Learning, 2007.
[13] J. Ye, Z. Zhao, and M. Wu. Discriminative k-means for clustering. In Advances in Neural Information Processing Systems 20, 2008.
[14] Z. Zhang, G. Dai, and M. I. Jordan. A flexible and efficient algorithm for regularized Fisher discriminant analysis. In The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD), 2009.
[15] Z. Zhang and M. I. Jordan. Multiway spectral clustering: A margin-based perspective. Statistical Science, 23(3):383-403, 2008.
[16] H. Zou and T. Hastie. Regularization and variable selection via the elastic net. Journal of the Royal Statistical Society, Series B, 67:301-320, 2005.
[17] H. Zou, T. Hastie, and R. Tibshirani. Sparse principal component analysis. Journal of Computational and Graphical Statistics, 15:265-286, 2006.
Multiple Incremental Decremental Learning of
Support Vector Machines
Masayuki Karasuyama and Ichiro Takeuchi
Department of Engineering, Nagoya Institute of Technology
Gokiso-cho, Syouwa-ku, Nagoya, Aichi, 466-8555, JAPAN
[email protected], [email protected]
Abstract
We propose a multiple incremental decremental algorithm of Support Vector Machine (SVM). Conventional single incremental decremental SVM can update the
trained model efficiently when single data point is added to or removed from the
training set. When we add and/or remove multiple data points, this algorithm is
time-consuming because we need to repeatedly apply it to each data point. The
proposed algorithm is computationally more efficient when multiple data points
are added and/or removed simultaneously. The single incremental decremental
algorithm is built on an optimization technique called parametric programming.
We extend the idea and introduce multi-parametric programming for developing
the proposed algorithm. Experimental results on synthetic and real data sets indicate that the proposed algorithm can significantly reduce the computational cost
of multiple incremental decremental operation. Our approach is especially useful
for online SVM learning in which we need to remove old data points and add new
data points in a short amount of time.
1 Introduction
An incremental decremental algorithm for online learning of the Support Vector Machine (SVM) was previously proposed in [1], and the approach was adapted to other variants of kernel machines [2-4].
When a single data point is added and/or removed, these algorithms can efficiently update the
trained model without re-training it from scratch. These algorithms are built on an optimization
technique called parametric programming [5-7], in which one solves a series of optimization problems parametrized by a single parameter. In particular, one solves a solution path with respect to
the coefficient parameter corresponding to the data point to be added or removed. When we add
and/or remove multiple data points using these algorithms, one must repeat the updating operation
for each single data point. It often requires too much computational cost to use it for real-time online
learning. In what follows, we refer this conventional algorithm as single incremental decremental
algorithm or single update algorithm.
In this paper, we develop a multiple incremental decremental algorithm of the SVM. The proposed
algorithm can update the trained model more efficiently when multiple data points are added and/or
removed simultaneously. We develop the algorithm by introducing multi-parametric programming
[8] in the optimization literature. We consider a path-following problem in the multi-dimensional
space spanned by the coefficient parameters corresponding to the set of data points to be added or
removed. Later, we call our proposed algorithm as multiple incremental decremental algorithm or
multiple update algorithm.
The main computational cost of parametric programming is in solving a linear system at each breakpoint (see Section 3 for detail). Thus, the total computational cost of parametric programming is
roughly proportional to the number of breakpoints on the solution path. In the repeated use of
single update algorithm for each data point, one follows the coordinate-wise solution path in the
multi-dimensional coefficient parameter space. On the other hand, in multiple update algorithm, we
establish a direction in the multi-dimensional coefficient parameter space so that the total length of
the path becomes much shorter than the coordinate-wise one. Because the number of breakpoints in
the shorter path followed by our algorithm is less than that in the longer coordinate-wise path, we
can gain relative computational efficiency. Figure 2 in Section 3.4 schematically illustrates our main
idea.
This paper is organized as follows. Section 2 formulates the SVM and the KKT conditions. In Section 3, after briefly reviewing single update algorithm, we describe our multiple update algorithm.
In section 4, we compare the computational cost of our multiple update algorithm with the single update algorithm and with the LIBSVM (the-state-of-the-art batch SVM solver based on SMO
algorithm) in numerical experiments on synthetic and real data sets. We close in Section 5 with
concluding remarks.
2 Support Vector Machine and KKT Conditions

Suppose we have a set of training data {(x_i, y_i)}_{i=1}^n, where x_i ∈ X ⊆ R^d is the input and
y_i ∈ {−1, +1} is the output class label. Support Vector Machines (SVM) learn the following
discriminant function:

    f(x) = w^T φ(x) + b,

where φ(x) denotes a fixed feature-space transformation. The model parameters w and b can be
obtained by solving the optimization problem

    min  (1/2) ‖w‖² + C Σ_{i=1}^n ξ_i
    s.t.  y_i f(x_i) ≥ 1 − ξ_i,  ξ_i ≥ 0,  i = 1, ..., n,

where C ∈ R_+ is the regularization parameter. Introducing Lagrange multipliers α_i ≥ 0, the
optimal discriminant function f : X → R can be formulated as f(x) = Σ_{i=1}^n α_i y_i K(x, x_i) + b,
where K(x_i, x_j) = φ(x_i)^T φ(x_j) is a kernel function. From the Karush-Kuhn-Tucker (KKT)
optimality conditions, we obtain the following relationships:

    y_i f(x_i) > 1  ⇒  α_i = 0,          (1a)
    y_i f(x_i) = 1  ⇒  α_i ∈ [0, C],     (1b)
    y_i f(x_i) < 1  ⇒  α_i = C,          (1c)
    Σ_{i=1}^n y_i α_i = 0.               (1d)

Using (1a)-(1c), let us define the following index sets:

    O = {i | y_i f(x_i) > 1, α_i = 0},           (2a)
    M = {i | y_i f(x_i) = 1, 0 ≤ α_i ≤ C},       (2b)
    I = {i | y_i f(x_i) < 1, α_i = C}.           (2c)

In what follows, subscripting by an index set, such as v_I for a vector v ∈ R^n, indicates the
subvector of v whose elements are indexed by I. Similarly, subscripting by two index sets, such as
M_{M,O} for a matrix M ∈ R^{n×n}, denotes the submatrix whose rows are indexed by M and whose
columns are indexed by O. For a principal submatrix such as Q_{M,M}, we abbreviate it as Q_M.
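As a small aside, the partition (2a)-(2c) is easy to compute once the margins y_i f(x_i) and the dual coefficients α_i are known. The helper below is our own illustrative sketch (not code from the paper); the tolerance argument is an assumption added to cope with floating-point noise.

```python
# Hypothetical helper: partition training points into the index sets
# O, M, I of Eqs. (2a)-(2c), given margins y_i * f(x_i) and duals alpha_i.
def partition_index_sets(y_f, alpha, C, tol=1e-8):
    O, M, I = [], [], []
    for i, m in enumerate(y_f):
        if m > 1 + tol:        # y_i f(x_i) > 1  =>  alpha_i = 0        (2a)
            O.append(i)
        elif m < 1 - tol:      # y_i f(x_i) < 1  =>  alpha_i = C        (2c)
            I.append(i)
        else:                  # y_i f(x_i) = 1  =>  0 <= alpha_i <= C  (2b)
            M.append(i)
    return O, M, I

print(partition_index_sets([1.5, 1.0, 0.2], [0.0, 0.3, 1.0], C=1.0))
# ([0], [1], [2])
```

In a consistent KKT point the `alpha` values are redundant given the margins, which is why the sketch only inspects `y_f`; `alpha` is kept in the signature to mirror the conditions above.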
3 Incremental Decremental Learning for SVM

3.1 Single Incremental Decremental SVM
In this section, we briefly review the conventional single incremental decremental SVM [1]. Using
the SV sets (2b) and (2c), we can expand y_i f(x_i) as

    y_i f(x_i) = Σ_{j∈M} Q_ij α_j + Σ_{j∈I} Q_ij α_j + y_i b,
where Q_ij = y_i y_j K(x_i, x_j). When a new data point (x_c, y_c) is added, we increase the
corresponding new parameter α_c from 0 while keeping the optimality conditions of the other
parameters satisfied. Let us denote the amount of change of each variable with the operator Δ. To
satisfy the equality conditions (1b) and (1d), we need

    Q_ic Δα_c + Σ_{j∈M} Q_ij Δα_j + y_i Δb = 0,   i ∈ M,
    y_c Δα_c + Σ_{j∈M} y_j Δα_j = 0.

Solving this linear system with respect to Δα_i, i ∈ M, and Δb, we obtain the update direction of
the parameters. We maximize Δα_c under the constraint that no element moves across M, I and O.
After updating the index sets M, I and O, we repeat the process until the new data point satisfies
the optimality condition. The decremental algorithm can be derived similarly, with the target
parameter moving toward 0.
3.2 Multiple Incremental Decremental SVM
Suppose we add m new data points and remove ℓ data points simultaneously. Let us denote the index
sets of the added and removed data points as

    A = {n+1, n+2, ..., n+m}   and   R ⊆ {1, ..., n},

respectively, where |R| = ℓ. We remove the elements of R from the sets M, I and O (i.e. M ← M \ R,
I ← I \ R and O ← O \ R). Let us define y = [y_1, ..., y_{n+m}]^T, α = [α_1, ..., α_{n+m}]^T, and
Q ∈ R^{(n+m)×(n+m)}, where the (i,j)-th entry of Q is Q_ij. When m = 1, ℓ = 0 or m = 0, ℓ = 1, our
method corresponds to the conventional single incremental decremental algorithm. We initially set
α_i = 0, ∀i ∈ A. If y_i f(x_i) > 1 for some i ∈ A, we can append those indices to O and remove them
from A because these points already satisfy the optimality condition (1a). Similarly, we can append
the indices {i | y_i f(x_i) = 1, i ∈ A} to M and remove them from A. In addition, we can remove the
points {i | α_i = 0, i ∈ R} because they already have no influence on the model. Unlike the single
incremental decremental algorithm, we need to determine the directions of Δα_A and Δα_R. These
directions have a critical influence on the computational cost. For Δα_R, we simply trace the
shortest path to 0, i.e.,

    Δα_R = −θ α_R,                                 (3)

where θ is a step length. For Δα_A, we do not know the optimal value of α_A beforehand. To
determine this direction, we might use an optimization technique (e.g. Newton's method), but such
methods usually incur additional computational burden. In this paper, we simply take

    Δα_A = θ (C1 − α_A).                           (4)

This would become the shortest path if α_i = C, ∀i ∈ A, at optimality.
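The two directions (3)-(4) can be sketched in a few lines. This is our own illustrative helper, not code from the paper; it returns the per-unit-θ movement of each coefficient.

```python
# Movement directions (3)-(4): removed coefficients head straight to 0,
# added coefficients head straight to C; both are scaled by the common
# step length theta during path-following.
def directions(alpha_A, alpha_R, C):
    d_A = [C - a for a in alpha_A]    # per unit theta: dalpha_A = C*1 - alpha_A
    d_R = [-a for a in alpha_R]       # per unit theta: dalpha_R = -alpha_R
    return d_A, d_R

print(directions([0.0, 0.2], [0.7], C=1.0))  # ([1.0, 0.8], [-0.7])
```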
When we move the parameters α_i, i ∈ A ∪ R, the optimality conditions of the other parameters must
be kept satisfied. From y_i f(x_i) = 1, i ∈ M, and the equality constraint (1d), we need

    Σ_{j∈A} Q_ij Δα_j + Σ_{j∈R} Q_ij Δα_j + Σ_{j∈M} Q_ij Δα_j + y_i Δb = 0,   i ∈ M,   (5)

    Σ_{j∈A} y_j Δα_j + Σ_{j∈R} y_j Δα_j + Σ_{j∈M} y_j Δα_j = 0.                        (6)

Using matrix notation, (5) and (6) can be written as

    M [Δb; Δα_M] + [y_A^T, y_R^T; Q_{M,A}, Q_{M,R}] [Δα_A; Δα_R] = 0,                  (7)

where

    M = [0, y_M^T; y_M, Q_M].
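Assembling system (7) is mostly bookkeeping. The sketch below (our own notation; shapes are assumed, not taken from the paper) builds the two blocks and verifies that, for a chosen move of (Δα_A, Δα_R), the solved (Δb, Δα_M) keeps conditions (5)-(6) satisfied.

```python
import numpy as np

# Build M = [[0, y_M^T], [y_M, Q_M]] and the right-hand block B so that
# (7) reads  M @ [db; dalpha_M] + B @ [dalpha_A; dalpha_R] = 0.
def assemble_system(Q_M, y_M, Q_MA, Q_MR, y_A, y_R):
    k = len(y_M)
    M = np.zeros((k + 1, k + 1))
    M[0, 1:] = y_M
    M[1:, 0] = y_M
    M[1:, 1:] = Q_M
    B = np.zeros((k + 1, len(y_A) + len(y_R)))
    B[0, :] = np.concatenate([y_A, y_R])
    B[1:, :len(y_A)] = Q_MA
    B[1:, len(y_A):] = Q_MR
    return M, B

Q_M = np.array([[2.0, 0.5], [0.5, 1.0]])
y_M = np.array([1.0, -1.0])
Q_MA = np.array([[0.3], [0.2]])
Q_MR = np.array([[0.1], [0.4]])
M, B = assemble_system(Q_M, y_M, Q_MA, Q_MR, np.array([1.0]), np.array([-1.0]))
d = np.array([0.7, -0.2])                 # a chosen [dalpha_A; dalpha_R]
sol = -np.linalg.solve(M, B @ d)          # [db; dalpha_M] from (7)
print(np.allclose(M @ sol + B @ d, 0.0))  # True
```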
From the definitions of the index sets in (2a)-(2c), the following inequality constraints must also
be satisfied:

    0 ≤ α_i + Δα_i ≤ C,              i ∈ M,    (8a)
    y_i {f(x_i) + Δf(x_i)} > 1,      i ∈ O,    (8b)
    y_i {f(x_i) + Δf(x_i)} < 1,      i ∈ I.    (8c)

Since we removed the indices {i | y_i f(x_i) ≥ 1} from A, we obtain

    y_i {f(x_i) + Δf(x_i)} < 1,      i ∈ A.    (9)

During the process of moving α_i, i ∈ A, from 0 toward C, if the inequality (9) becomes an equality
for some i, we can append the point to M and remove it from A. On the other hand, if (9) holds
until α_i becomes C, the point moves to I. In the path-following literature [8], the region that
satisfies (8) and (9) is called the critical region (CR).

We decide the update direction by the linear system (7) while monitoring the inequalities (8) and
(9). Substituting (3) and (4) into (7), we obtain the update direction

    [Δb; Δα_M] = θ φ,   where   φ = −M^{−1} [y_A^T, y_R^T; Q_{M,A}, Q_{M,R}] [C1 − α_A; −α_R].   (10)

To determine the step length θ, we need to check the inequalities (8) and (9). Using vector
notation and the Hadamard (element-wise) product ∘ [9], we can write

    y ∘ Δf = θ γ,   where   γ = [y  Q_{:,M}] φ + Q_{:,A} (C1 − α_A) − Q_{:,R} α_R,               (11)

and the subscript ":" of Q denotes the index set of all the elements {1, ..., n+m}. Since (10) and
(11) are linear functions of θ, we can calculate, for each i, the largest step length θ_i at which
one of the inequalities (8) and (9) becomes an equality for i. The number of such θ_i is
|M| × 2 + |O| + |I| + |A|, and we denote this set by H. We determine the step length as

    θ = min({θ' | θ' ∈ H, θ' ≥ 0} ∪ {1}).

If θ = 1, we can terminate the algorithm because all the new data points in A and the existing
points in M, O and I satisfy the optimality conditions and α_R is 0. Once we decide θ, we can
update α_M and b using (10), and α_A and α_R using (3) and (4). In the path-following literature,
the points at which the size of the linear system (7) changes are called breakpoints. If the i-th
data point reaches the bound of any one of the constraints (8) and (9), we need to update M, O and
I. After updating, we re-calculate φ and γ to determine the next step length.
3.3 Empty Margin
We need to establish how to deal with an empty margin set M. In that case, we cannot obtain the
bias from y_i f(x_i) = 1, i ∈ M; we can only obtain an interval for the bias from

    y_i f(x_i) > 1,   i ∈ O,
    y_i f(x_i) < 1,   i ∈ I ∪ A.

To keep these inequality constraints satisfied, the bias term must lie in

    max_{i∈L} y_i g_i ≤ b ≤ min_{i∈U} y_i g_i,                                  (12)

where

    g_i = 1 − Σ_{j∈I} α_j Q_ij − Σ_{j∈A} α_j Q_ij − Σ_{j∈R} α_j Q_ij,

and

    L = {i | i ∈ O, y_i = +1} ∪ {i | i ∈ I ∪ A, y_i = −1},
    U = {i | i ∈ O, y_i = −1} ∪ {i | i ∈ I ∪ A, y_i = +1}.

If the margin becomes empty during path-following, we look for new data points which re-enter the
margin. When the set M is empty, the equality constraint (6) becomes

    Σ_{i∈A} y_i Δα_i + Σ_{i∈R} y_i Δα_i = θ λ(α) = 0,                           (13)
Figure 1: An illustration of the bias in the empty-margin case. Dotted lines represent
y_i (g_i + Δg_i(θ)) for each i. Solid lines are the upper bound and the lower bound of the bias.
The bias term is uniquely determined when u(θ) and l(θ) intersect.
where

    λ(α) = Σ_{i∈A} y_i (C − α_i) − Σ_{i∈R} y_i α_i.
We take two different strategies depending on λ(α).

First, if λ(α) ≠ 0, we cannot simply increase θ from 0 while keeping (13) satisfied. Then we need a
new margin data point m1 which enables the equality constraint to be satisfied. The index m1 is
either

    i_low = argmax_{i∈L} y_i g_i   or   i_up = argmax_{i∈U} y_i g_i.

If i_low, i_up ∈ O ∪ I, we can update b and M as follows:

    λ(α) > 0  ⇒  b = y_{i_up} g_{i_up},    M = {i_up},
    λ(α) < 0  ⇒  b = y_{i_low} g_{i_low},  M = {i_low}.

By setting the bias term as above, the equality condition

    θ λ(α) + y_{m1} Δα_{m1} = 0

is satisfied. If i_low ∈ A or i_up ∈ A, we can put either of these points into the margin.

On the other hand, if λ(α) = 0, we can increase θ while keeping (13) satisfied. Then, we consider
increasing θ until the upper and lower bounds of the bias (12) take the same value (so that the
bias term is uniquely determined). If we increase θ, g_i changes linearly:

    Δg_i(θ) = − Σ_{j∈A} Δα_j Q_ij − Σ_{j∈R} Δα_j Q_ij
            = −θ { Σ_{j∈A} (C − α_j) Q_ij − Σ_{j∈R} α_j Q_ij }.

Since the lines y_i (g_i + Δg_i(θ)) may intersect, we consider the following piecewise-linear
boundaries:

    u(θ) = max_{i∈U} y_i (g_i + Δg_i(θ)),
    l(θ) = min_{j∈L} y_j (g_j + Δg_j(θ)).

Figure 1 shows an illustration of these functions. We can trace the upper bound and the lower bound
until the two bounds take the same value.
3.4 The number of breakpoints
The main computational cost of the incremental decremental algorithm is in solving the linear
system (10) at each breakpoint (the cost is O(|M|²) because we use Cholesky factor updates except
at the first step). Thus, the number of breakpoints is an important factor in the computational
cost. To simplify the discussion, let us introduce the following assumptions:

- The number of breakpoints is proportional to the total length of the path.
- The path obtained by our algorithm is the shortest one.
(a) Adding 2 data points.   (b) Adding and removing 1 data point.

Figure 2: Schematic illustration of the difference in path length and in the number of breakpoints
(plot legend: initial point, final point, path, breakpoints, borders of the CRs). Each polygonal
region enclosed by dashed lines represents a region in which M, I, O and A are constant (CR:
critical region). The intersections of the path with the borders are the breakpoints. The updates
of matrices and vectors at the breakpoints are the main computational cost of path-following. In
the case of Figure 2(a), we add 2 data points. If the optimal solution has α_1 = α_2 = C, our
proposed algorithm can trace the shortest path from the origin to the optimal point (left plot),
while the single incremental algorithm moves one coordinate at a time (right plot). Figure 2(b)
shows the case where we add and remove 1 data point each. In this case, if α_2 = C, our algorithm
can trace the shortest path to α_1 = 0, α_2 = C (left plot), while the single incremental algorithm
again moves one coordinate at a time (right plot).
The first assumption means that the breakpoints are uniformly distributed along the path. The
second assumption holds for the removed parameters α_R because we know that α_R should move to 0.
On the other hand, for some of α_A the second assumption does not necessarily hold because we do
not know the optimal α_A beforehand. In particular, if a point i ∈ A that was located inside the
margin before the update moves to M during the update (i.e. the equality in (9) is attained), the
path with respect to this parameter is not really the shortest one.

To simplify the discussion further, let us consider only the case |A| = m > 0 and |R| = 0 (the same
discussion holds for the other cases). In this simplified scenario, the ratio of the number of
breakpoints of the multiple update algorithm to that of the repeated single update algorithm is

    ‖Δα_A‖₂ : ‖Δα_A‖₁,

where ‖·‖₂ is the ℓ2 norm and ‖·‖₁ is the ℓ1 norm. Figure 2 illustrates the concept in the case of
m = 2. If we consider only the case α_i = C, ∀i ∈ A, the ratio is simply √m : m.
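The √m : m ratio follows directly from the two path lengths, which a two-line computation makes concrete (an illustrative check, not code from the paper):

```python
import math

# When all m added coefficients end at alpha = C, the multiple-update path has
# Euclidean length ||C*1||_2 = C*sqrt(m), while the coordinate-wise path of the
# repeated single updates has length ||C*1||_1 = C*m.
def path_length_ratio(m, C=1.0):
    l2 = C * math.sqrt(m)   # shortest (diagonal) path
    l1 = C * m              # coordinate-wise path
    return l2 / l1

print(path_length_ratio(4))  # 0.5
```

Under the two assumptions above, this ratio directly predicts the relative breakpoint counts reported in the experiments.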
4 Experiments
We compared the computational cost of the proposed multiple incremental decremental algorithm
(MID-SVM) with (repeated use of) the single incremental decremental algorithm [1] (SID-SVM) and
with LIBSVM [10], the state-of-the-art batch SVM solver based on the sequential minimal
optimization (SMO) algorithm.

In LIBSVM, we examined several tolerances for the termination criterion: ε = 10^-3, 10^-6, 10^-9.
When we use LIBSVM for online learning, alpha seeding [11, 12] sometimes works well. The basic idea
of alpha seeding is to use the parameters before the update as the initial parameters. In alpha
seeding, we need to take care of the fact that the summation constraint α^T y = 0 may not be
satisfied after removing the α's in R. In that case, we simply re-distribute

    δ = Σ_{i∈R} α_i y_i

to the in-bound α_i, i ∈ {i | 0 < α_i < C}, uniformly. If δ cannot be fully distributed to the
in-bound α's, the remainder is distributed to the other α's. If we still cannot distribute δ in
this way, we do not use alpha seeding.
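The redistribution step can be sketched as follows. This is our own reading of the repair rule, not LIBSVM code; for brevity the sketch ignores the box limits [0, C] when shifting the in-bound coefficients (the description above spills any remainder to the other α's in that case).

```python
# After removing the points in R, the equality constraint sum_i alpha_i y_i = 0
# is off by delta = sum_{i in R} alpha_i y_i; spread delta uniformly over the
# in-bound coefficients 0 < alpha_i < C.
def redistribute(alpha, y, delta, C):
    inb = [i for i, a in enumerate(alpha) if 0 < a < C]
    if not inb:
        return None   # cannot repair this way; skip alpha seeding
    share = delta / len(inb)
    out = list(alpha)
    for i in inb:
        out[i] += y[i] * share   # each update shifts sum alpha_i y_i by +share
    return out

alpha = redistribute([0.5, 0.5, 0.0], [+1, -1, +1], delta=0.2, C=1.0)
print(alpha)
```

Because y_i² = 1, each in-bound coefficient absorbs an equal share of δ, so the constraint deficit is removed in one pass.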
For the kernel function, we used the RBF kernel K(x_i, x_j) = exp(−γ ‖x_i − x_j‖²). In this paper,
we assume that the kernel matrix K is positive definite. If the kernel matrix happens to be
singular, which typically arises when there are two or more identical data points in M, our
algorithm may not work. As far as we know, this degeneracy problem is not fully solved in the
path-following literature. Many heuristics have been proposed to circumvent the problem. In the
experiments described below, we use one of them: adding a small positive constant to the diagonal
elements of the kernel matrix. We set this constant to 10^-6. In LIBSVM we can specify the cache
size for the kernel matrix; we set this cache size large enough to store the entire matrix.

Figure 3: Artificial data set. For graphical simplicity, we plot only a part of the data points.
The cross points are generated from a mixture of two Gaussians while the circle points come from a
single Gaussian. The two classes have equal prior probabilities.
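The diagonal-jitter workaround is easy to reproduce. A minimal sketch (our own code, with the 10^-6 constant taken from the description above):

```python
import numpy as np

# RBF kernel K(x_i, x_j) = exp(-gamma * ||x_i - x_j||^2) with a small constant
# added to the diagonal, so K stays positive definite even with duplicated
# points in the data.
def rbf_kernel_with_jitter(X, gamma, jitter=1e-6):
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-gamma * np.maximum(d2, 0.0))   # clamp tiny negative distances
    return K + jitter * np.eye(len(X))

X = np.array([[0.0, 0.0], [0.0, 0.0], [1.0, 0.0]])  # first two points identical
K = rbf_kernel_with_jitter(X, gamma=1.0)
print(np.all(np.linalg.eigvalsh(K) > 0))  # True
```

Without the jitter, the duplicated rows make K exactly singular (its smallest eigenvalue is 0), which is precisely the degenerate case discussed above.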
4.1 Artificial Data
First, we used a simple artificial data set to see the computational cost for various numbers of
added and/or removed points. We generated data points (x, y) ∈ R² × {+1, −1} using normal
distributions. Figure 3 shows the generated data points. The size of the initial data set is
n = 500. As discussed, adding or removing data points with α_i = 0 at the optimum can be performed
with almost no cost. Thus, to make a clear comparison, we restricted the added and/or removed
points to those with α_i = C at the optimum. Figure 4 shows a log plot of the CPU time. We examined
several scenarios: (a) adding m ∈ {1, ..., 50} data points, (b) removing ℓ ∈ {1, ..., 50} data
points, (c) adding m ∈ {1, ..., 25} data points and removing ℓ ∈ {1, ..., 25} data points
simultaneously. The horizontal axis is the number of added and/or removed data points. We see that
MID-SVM is significantly faster than SID-SVM. When m = 1 or ℓ = 1, SID-SVM and MID-SVM are
identical. The relative difference between SID-SVM and MID-SVM grows as m and/or ℓ increase because
MID-SVM can add or remove multiple data points simultaneously while SID-SVM merely iterates the
algorithm m + ℓ times. In this experimental setting, the CPU time of SMO does not change much
because m and ℓ are relatively small compared to n. Figure 5 shows the number of breakpoints of
SID-SVM and MID-SVM along with the theoretical number of breakpoints of MID-SVM from Section 3.4
(e.g., for scenario (a), the number of breakpoints of SID-SVM multiplied by √m/m). The results are
very close to the theoretical ones.
4.2 Application to Online Time Series Learning
We applied the proposed algorithm to an online time series learning problem, in which we update the
model when new observations arrive (adding the new ones and removing the obsolete ones). We used
the Fisher river data set in StatLib [13]. In this data set, the task is to predict whether the
mean daily flow of the river increases or decreases using the previous 7 days of temperature,
precipitation and flow (x_i ∈ R^21). The data set contains observations from Jan 1, 1988 to
Dec 31, 1991. The size of the initial data set is n = 1423 and we set m = ℓ = 30 (about a month).
Each dimension of x is normalized to [0, 1]. We add m new data points and remove the oldest ℓ data
points. We investigated various settings of the regularization parameter
C ∈ {10^-1, 10^0, ..., 10^5} and the kernel parameter γ ∈ {10^-3, 10^-2, 10^-1, 10^0}. Unlike the
previous experiments, we did not choose the added or removed data points by their parameter values.
Figure 6 shows the elapsed CPU times and Figure 7 shows the 10-fold cross-validation error for each
setting. Each figure has 4 plots corresponding to different settings of the kernel parameter γ. The
horizontal axis denotes the regularization parameter C. Figure 6 shows that our algorithm is faster
than the others, especially for large C. It is well known that the computational cost of the SMO
algorithm becomes large when C is large [14]. The cross-validation error in Figure 7 indicates that
the relative computational cost of our proposed algorithm is especially low for the hyperparameters
with good generalization performance in this application.
(a) Adding m data points. (b) Removing ℓ data points. (c) Adding m data points and removing ℓ data
points simultaneously (m = ℓ).

Figure 4: Log plot of the CPU time (artificial data set). Each panel compares MID-SVM, SID-SVM and
SMO with tolerances ε = 10^-3, 10^-6, 10^-9.
(a) Adding m data points. (b) Removing ℓ data points. (c) Adding m data points and removing ℓ data
points simultaneously (m = ℓ).

Figure 5: The number of breakpoints (artificial data set). Each panel compares MID-SVM, SID-SVM and
the theoretical count.
(a) γ = 10^0. (b) γ = 10^-1. (c) γ = 10^-2. (d) γ = 10^-3.

Figure 6: Log plot of the CPU time (Fisher river data set), comparing MID-SVM, SID-SVM and SMO with
tolerances ε = 10^-3, 10^-6, 10^-9.
(a) γ = 10^0. (b) γ = 10^-1. (c) γ = 10^-2. (d) γ = 10^-3.

Figure 7: Cross-validation error (Fisher river data set).
5 Conclusion
We proposed a multiple incremental decremental algorithm for the SVM. Unlike the single incremental
decremental algorithm, our algorithm can work efficiently with simultaneous addition and/or removal
of multiple data points. Our algorithm is built on multi-parametric programming [8] from the
optimization literature. We previously proposed an approach to accelerate Support Vector Regression
(SVR) cross-validation using a similar technique [15]. These multi-parametric programming
frameworks can easily be extended to other kernel machines.
References
[1] G. Cauwenberghs and T. Poggio, "Incremental and decremental support vector machine learning,"
in Advances in Neural Information Processing Systems (T. K. Leen, T. G. Dietterich, and V. Tresp,
eds.), vol. 13, (Cambridge, Massachusetts), pp. 409-415, The MIT Press, 2001.
[2] M. Martin, "On-line support vector machines for function approximation," tech. rep., Software
Department, Universitat Politecnica de Catalunya, 2002.
[3] J. Ma and J. Theiler, "Accurate online support vector regression," Neural Computation, vol. 15,
no. 11, pp. 2683-2703, 2003.
[4] P. Laskov, C. Gehl, S. Kruger, and K.-R. Muller, "Incremental support vector learning:
Analysis, implementation and applications," Journal of Machine Learning Research, vol. 7,
pp. 1909-1936, 2006.
[5] T. Hastie, S. Rosset, R. Tibshirani, and J. Zhu, "The entire regularization path for the
support vector machine," Journal of Machine Learning Research, vol. 5, pp. 1391-1415, 2004.
[6] L. Gunter and J. Zhu, "Efficient computation and model selection for the support vector
regression," Neural Computation, vol. 19, no. 6, pp. 1633-1655, 2007.
[7] G. Wang, D.-Y. Yeung, and F. H. Lochovsky, "A new solution path algorithm in support vector
regression," IEEE Transactions on Neural Networks, vol. 19, no. 10, pp. 1753-1767, 2008.
[8] E. N. Pistikopoulos, M. C. Georgiadis, and V. Dua, Process Systems Engineering: Volume 1:
Multi-Parametric Programming. WILEY-VCH, 2007.
[9] J. R. Schott, Matrix Analysis for Statistics. Wiley-Interscience, 2005.
[10] C.-C. Chang and C.-J. Lin, "LIBSVM: a library for support vector machines," 2001. Software
available at http://www.csie.ntu.edu.tw/~cjlin/libsvm.
[11] D. DeCoste and K. Wagstaff, "Alpha seeding for support vector machines," in Proceedings of the
International Conference on Knowledge Discovery and Data Mining, pp. 345-359, 2000.
[12] M. M. Lee, S. S. Keerthi, C. J. Ong, and D. DeCoste, "An efficient method for computing
leave-one-out error in support vector machines," IEEE Transactions on Neural Networks, vol. 15,
no. 3, pp. 750-757, 2004.
[13] M. Meyer, "StatLib." http://lib.stat.cmu.edu/index.php.
[14] L. Bottou and C.-J. Lin, "Support vector machine solvers," in Large Scale Kernel Machines
(L. Bottou, O. Chapelle, D. DeCoste, and J. Weston, eds.), pp. 301-320, Cambridge, MA: MIT Press,
2007.
[15] M. Karasuyama, I. Takeuchi, and R. Nakano, "Efficient leave-m-out cross-validation of support
vector regression by generalizing decremental algorithm," New Generation Computing, vol. 27, no. 4,
Special Issue on Data-Mining and Statistical Science, pp. 307-318, 2009.
Variational Gaussian-process factor analysis for
modeling spatio-temporal data
Alexander Ilin
Adaptive Informatics Research Center
Helsinki University of Technology, Finland
[email protected]
Jaakko Luttinen
Adaptive Informatics Research Center
Helsinki University of Technology, Finland
[email protected]
Abstract
We present a probabilistic factor analysis model which can be used for studying
spatio-temporal datasets. The spatial and temporal structure is modeled by using
Gaussian process priors both for the loading matrix and the factors. The posterior
distributions are approximated using the variational Bayesian framework. High
computational cost of Gaussian process modeling is reduced by using sparse approximations. The model is used to compute the reconstructions of the global
sea surface temperatures from a historical dataset. The results suggest that the
proposed model can outperform the state-of-the-art reconstruction systems.
1 Introduction
Factor analysis and principal component analysis (PCA) are widely used linear techniques for finding dominant patterns in multivariate datasets. These methods find the most prominent correlations
in the data and therefore they facilitate studies of the observed system. The found principal patterns can also give an insight into the observed data variability. In many applications, the quality of
this kind of modeling can be significantly improved if extra knowledge about the data structure is
used. For example, taking into account the temporal information typically leads to more accurate
modeling of time series.
In this work, we present a factor analysis model which makes use of both temporal and spatial
information for a set of collected data. The method is based on the standard factor analysis model
Y = WX + noise = Σ_{d=1}^{D} w:d xd:^T + noise ,    (1)
where Y is a matrix of spatio-temporal data in which each row contains measurements in one spatial
location and each column corresponds to one time instance. Here and in the following, we denote
by ai: and a:i the i-th row and column of a matrix A, respectively (both are column vectors). Thus,
each xd: represents the time series of one of the D factors whereas w:d is a vector of loadings which
are spatially distributed. The matrix Y can contain missing values and the samples can be unevenly
distributed in space and time.1
We assume that both the factors xd: and the corresponding loadings w:d have prominent structures.
We describe them by using Gaussian processes (GPs) which is a flexible and theoretically solid
tool for smoothing and interpolating non-uniform data [8]. Using separate GP models for xd: and
w:d facilitates analysis of large spatio-temporal datasets. The application of the GP methodology
to modeling data Y directly could be unfeasible in real-world problems because the computational
complexity of inference scales cubically w.r.t. the number of data points. (Footnote 1: In practical applications, it may be desirable to diminish the effect of uneven sampling over space or time by, for example, using proper weights for different data points.) The advantage of the
proposed approach is that we perform GP modeling only either in the spatial or temporal domain at
a time. Thus, the dimensionality can be remarkably reduced and modeling large datasets becomes
feasible. Also, good interpretability of the model makes it easy to explore the results in the spatial
and temporal domain and to set priors reflecting our modeling assumptions. The proposed model is
symmetrical w.r.t. space and time.
Our model bears similarities with the latent variable models presented in [13, 16]. There, GPs were
used to describe the factors and the mixing matrix was point-estimated. Therefore the observations
Y were modeled with a GP. In contrast to that, our model is not a GP model for the observations
because the marginal distribution of Y is not Gaussian. This makes the posterior distribution of
the unknown parameters intractable. Therefore we use an approximation based on the variational
Bayesian methodology. We also show how to use sparse variational approximations to reduce the
computational load. Models which use GP priors for both W and X in (1) have recently been proposed in [10, 11]. The function factorization model in [10] is learned using a Markov chain Monte
Carlo sampling procedure, which may be computationally infeasible for large-scale datasets. The
nonnegative matrix factorization model in [11] uses point-estimates for the unknown parameters,
thus ignoring posterior uncertainties. In our method, we take into account posterior uncertainties,
which helps reduce overfitting and facilitates learning a more accurate model.
In the experimental part, we use the model to compute reconstructions of missing values in a real-world spatio-temporal dataset. We use a historical sea surface temperature dataset which contains
monthly anomalies in the 1856-1991 period to reconstruct the global sea surface temperatures. The
same dataset was used in designing the state-of-the-art reconstruction methodology [5]. We show the
advantages of the proposed method as a Bayesian technique which can incorporate all assumptions
in one model and which uses all available data. Since reconstruction of missing values can be an
important application for the method, we give all the formulas assuming missing values in the data
matrix Y.
2 Factor analysis model with Gaussian process priors
We use the factor analysis model (1) in which Y has dimensionality M × N and the number of
factors D is much smaller than the number of spatial locations M and the number of time instances
N . The m-th row of Y corresponds to a spatial location lm (e.g., a location on a two-dimensional
map) and the n-th column corresponds to a time instance tn .
We assume that each time signal xd: contains values of a latent function xd(t) computed at the time
instances tn. We use independent Gaussian process priors to describe each signal xd::
p(X) = N( X: | 0, Kx ) = Π_{d=1}^{D} N( xd: | 0, Kd ) ,    [Kd]ij = ψd(ti, tj; θd) ,    (2)
where X: denotes a long vector formed by concatenating the columns of X, Kd is the part of the
large covariance matrix Kx which corresponds to the d-th row of X and N (a |b, C ) denotes the
Gaussian probability density function for variable a with mean b and covariance C. The ij-th
element of Kd is computed using the covariance function ψd with the kernel hyperparameters θd.
The priors for W are defined similarly, assuming that each spatial pattern w:d contains measurements
of a function wd(l) at the spatial locations lm:
p(W) = Π_{d=1}^{D} N( w:d | 0, Kw_d ) ,    [Kw_d]ij = φd(li, lj; ηd) ,    (3)
where φd is a covariance function with hyperparameters ηd. Any valid (positive semidefinite) kernels can be used to define the covariance functions ψd and φd. A good list of possible covariance
functions is given in [8]. The prior model reduces to the one used in probabilistic PCA [14] when
Kd = I and a uniform prior is used for W.
The noise term in (1) is modeled with a Gaussian distribution, resulting in a likelihood function
p(Y|W, X, σ) = Π_{mn∈O} N( ymn | wm:^T x:n , σ²mn ) ,    (4)
where the product is evaluated over the observed elements in Y whose indices are included in the set
O. We will refer to the model (1)–(4) as GPFA. In practice, the noise level can be assumed to vary
spatially (σmn = σm) or temporally (σmn = σn). One can also use a spatially and temporally varying
noise level σ²mn if this variability can be estimated somehow.
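As a concrete illustration, data from the model (1) with the GP priors (2)–(3) can be generated in a few lines of NumPy. The grid, length-scales, and sizes below are illustrative choices of ours, not the settings used in the experiments:

```python
import numpy as np

rng = np.random.default_rng(0)

def sq_exp(a, b, length):
    # Squared-exponential covariance, k(r) = exp(-r^2 / (2 * length^2))
    r = a[:, None] - b[None, :]
    return np.exp(-r**2 / (2.0 * length**2))

M, N, D = 30, 200, 2               # spatial locations, time instances, factors
locs = np.linspace(0.0, 1.0, M)    # a 1-D stand-in for the spatial locations l_m
times = np.linspace(0.0, 10.0, N)  # time instances t_n

# One GP draw per loading pattern w:d and per factor xd:, as in (2)-(3)
Kw = sq_exp(locs, locs, 0.3) + 1e-6 * np.eye(M)
Kx = sq_exp(times, times, 1.0) + 1e-6 * np.eye(N)
W = rng.multivariate_normal(np.zeros(M), Kw, size=D).T   # M x D loadings
X = rng.multivariate_normal(np.zeros(N), Kx, size=D)     # D x N factors

sigma = 0.1
Y = W @ X + sigma * rng.standard_normal((M, N))          # model (1)

# Observation set O: mark 55% of the entries as missing, as in MOHSST5
observed = rng.random((M, N)) > 0.55
```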
There are two main difficulties which should be addressed when learning the model: 1) The posterior
p(W, X|Y) is intractable and 2) the computational load for dealing with GPs can be too large for
real-world datasets. We use the variational Bayesian framework to cope with the first difficulty and
we also adopt the variational approach when computing sparse approximations for the GP posterior.
3 Learning algorithm
In the variational Bayesian framework, the true posterior is approximated using some restricted class
of possible distributions. An approximate distribution which factorizes as
p(W, X|Y) ≈ q(W, X) = q(W) q(X) .
is typically used for factor analysis models. The approximation q(W, X) can be found by minimizing the Kullback-Leibler divergence from the true posterior. This optimization is equivalent to the
maximization of the lower bound of the marginal log-likelihood:
log p(Y) ≥ ∫ q(W) q(X) log [ p(Y|W, X) p(W) p(X) / (q(W) q(X)) ] dW dX .    (5)
Free-form maximization of (5) w.r.t. q(X) yields that
q(X) ∝ p(X) exp ⟨log p(Y|W, X)⟩_{q(W)} ,

where ⟨·⟩ refers to the expectation over the approximate posterior distribution q. Omitting the derivations here, this boils down to the following update rule:
q(X) = N( X: | (Kx^{-1} + U)^{-1} Z: , (Kx^{-1} + U)^{-1} ) ,    (6)
where Z: is a DN × 1 vector formed by concatenation of the vectors

z:n = Σ_{m∈On} σmn^{-2} ⟨wm:⟩ ymn .    (7)
The summation in (7) is over a set On of indices m for which ymn is observed. Matrix U in (6) is a
DN × DN block-diagonal matrix with the following D × D matrices on the diagonal:
Un = Σ_{m∈On} σmn^{-2} ⟨wm: wm:^T⟩ ,    n = 1, . . . , N .    (8)
Note that the form of the approximate posterior (6) is similar to the regular GP regression: One
can interpret Un^{-1} z:n as noisy observations with the corresponding noise covariance matrices Un^{-1}.
Then, q(X) in (6) is simply the posterior distribution of the latent function values xd(tn).
Then, q(X) in (6) is simply the posterior distribution of the latent functions values ?d (tn ).
The optimal q(W) can be computed using formulas symmetrical to (6)–(8) in which X and W are
appropriately exchanged. The variational EM algorithm for learning the model consists of alternate
updates of q(W) and q(X) until convergence. The noise level can be estimated by using a point
estimate or by adding a factor q(σmn) to the approximate posterior distribution. For example,
the update rules for the case of isotropic noise σ²mn = σ² are given in [2].
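For intuition, the updates (6)–(8) can be sketched in the special case D = 1, where the DN × DN system collapses to an ordinary GP regression on the pseudo-observations Un^{-1} z:n. The moments of q(W), the data, and the noise level below are placeholder values rather than quantities produced by the full algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy case with D = 1 factor: Eq. (6) is then GP regression on the
# pseudo-observations U^{-1} z with noise covariance U^{-1}.
M, N = 5, 40
times = np.linspace(0.0, 5.0, N)
Kx = np.exp(-(times[:, None] - times[None, :])**2 / 2.0) + 1e-6 * np.eye(N)

w_mean = rng.standard_normal(M)        # current means <w_m> under q(W)
w_var = 0.01 * np.ones(M)              # current posterior variances of w_m
sigma2 = 0.05                          # observation noise variance
Y = np.outer(w_mean, np.sin(times)) + np.sqrt(sigma2) * rng.standard_normal((M, N))
obs = rng.random((M, N)) > 0.3         # observation mask (the set O)

# Eq. (7): z_n = sum_{m in O_n} sigma^{-2} <w_m> y_mn
z = (obs * Y * w_mean[:, None]).sum(axis=0) / sigma2
# Eq. (8) for D = 1: U_n = sum_{m in O_n} sigma^{-2} <w_m^2>
U = (obs * (w_mean**2 + w_var)[:, None]).sum(axis=0) / sigma2

# Eq. (6): q(x) = N(x | (Kx^{-1} + U)^{-1} z, (Kx^{-1} + U)^{-1})
cov = np.linalg.inv(np.linalg.inv(Kx) + np.diag(U))
mean = cov @ z
```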
3.1 Component-wise factorization
In practice, one may need to factorize further the posterior approximation in order to reduce the
computational burden. This can be done in two ways: by neglecting the posterior correlations between different factors xd: (and between spatial patterns w:d , respectively) or by neglecting the
posterior correlations between different time instances x:n (and between spatial locations wm:, respectively). We suggest using the first way, which is computationally more expensive but allows one to
Method   | Approximation            | Update rule | Complexity
GP on Y  | –                        | –           | O(N^3 M^3)
GPFA     | q(X:)                    | (6)         | O(D^3 N^3 + D^3 M^3)
GPFA     | q(xd:)                   | (9)         | O(D N^3 + D M^3)
GPFA     | q(xd:), inducing inputs  | (12)        | O(Σ_d Nd^2 N + Σ_d Md^2 M)

Table 1: The computational complexity of different algorithms
capture stronger posterior correlations. This yields a posterior approximation q(X) = Π_{d=1}^{D} q(xd:)
which can be updated as follows:
q(xd:) = N( xd: | (Kd^{-1} + Vd)^{-1} cd , (Kd^{-1} + Vd)^{-1} ) ,    d = 1, . . . , D ,    (9)
where cd is an N × 1 vector whose n-th component is

[cd]n = Σ_{m∈On} σmn^{-2} ⟨wmd⟩ ( ymn - Σ_{j≠d} ⟨wmj⟩⟨xjn⟩ ) ,    (10)

and Vd is an N × N diagonal matrix whose n-th diagonal element is [Vd]nn = Σ_{m∈On} σmn^{-2} ⟨w²md⟩.
The main difference to (6) is that each component is fitted to the residuals of the reconstruction based
on the rest of the components. The computational complexity is now reduced compared to (6), as
shown in Table 1.
The component-wise factorization may provide a meaningful representation of data because the
model is biased in favor of solutions with dynamically and spatially decoupled components. When
the factors are modeled using rather general covariance functions, the proposed method is somewhat
related to the blind source separation techniques using time structure (e.g., [1]). The advantage here
is that the method can handle more sophisticated temporal correlations and it is easily applicable to
incomplete data. In addition, one can use the method in semi-blind settings when prior knowledge
is used to extract components with specific types of temporal or spatial features [9]. This problem
can be addressed using the proposed technique with properly chosen covariance functions.
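A minimal sketch of one sweep of the component-wise updates (9)–(10), holding q(W) fixed; all sizes, length-scales, and the placeholder data are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(3)

# One sweep of the component-wise updates (9)-(10) for D = 2 factors.
M, N, D = 8, 60, 2
t = np.linspace(0.0, 5.0, N)
Kd = [np.exp(-(t[:, None] - t[None, :])**2 / (2.0 * l**2)) + 1e-6 * np.eye(N)
      for l in (0.5, 2.0)]                 # one temporal prior per factor

w_mean = rng.standard_normal((M, D))       # <w_md> under q(W), held fixed
w_var = 0.01 * np.ones((M, D))             # posterior variances of w_md
sigma2 = 0.1
obs = rng.random((M, N)) > 0.3             # observation mask
Y = obs * (w_mean @ rng.standard_normal((D, N)))   # any data with this mask

x_mean = np.zeros((D, N))
for d in range(D):
    others = [j for j in range(D) if j != d]
    # Eq. (10): fit component d to the residuals of the others' reconstruction
    resid = Y - w_mean[:, others] @ x_mean[others]
    c = (obs * resid * w_mean[:, d][:, None]).sum(axis=0) / sigma2
    # [V_d]_nn = sum_{m in O_n} sigma^{-2} <w_md^2>
    V = (obs * (w_mean[:, d]**2 + w_var[:, d])[:, None]).sum(axis=0) / sigma2
    cov_d = np.linalg.inv(np.linalg.inv(Kd[d]) + np.diag(V))   # Eq. (9)
    x_mean[d] = cov_d @ c
```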
3.2 Variational learning of sparse GP approximations
One of the main issues with Gaussian processes is the high computational cost with respect to the
number of observations. Although the variational learning of the GPFA model works only in either
spatial or temporal domain at a time, the size of the data may still be too large in practice. A common
way to reduce the computational cost is to use sparse approximations [7]. In this work, we follow
the variational formulation of sparse approximations presented in [15].
The main idea is to introduce a set of auxiliary variables {w, x} which contain the values of the
latent functions wd(l), xd(t) in some locations {l = λdm | m = 1, . . . , Md}, {t = τdn | n = 1, . . . , Nd}
called inducing inputs. Assuming that the auxiliary variables {w, x} summarize the data well, it
holds that p(W, X|w, x, Y) ≈ p(W, X|w, x) , which suggests a convenient form of the approximate posterior:
q(W, X, w, x) = p(W|w)p(X|x)q(w)q(x) ,
(11)
where p(W|w), p(X|x) can be easily computed from the GP priors. Optimal q(w), q(x) can be
computed by maximizing the variational lower bound of the marginal log-likelihood similar to (5).
Free-form maximization w.r.t. q(x) yields the following update rule:
q(x) = N( x | Σ Kx^{-1} Kxx Z: , Σ ) ,    Σ = ( Kx^{-1} + Kx^{-1} Kxx U Kxx^T Kx^{-1} )^{-1} ,    (12)
where x is the vector of concatenated auxiliary variables for all factors, Kx is the GP prior covariance matrix of x and Kxx is the covariance between x and X: . This equation can be seen as a
replacement of (6). A similar formula is applicable to the update of q(w). The advantage here is that
the number of inducing inputs is smaller than the number of data samples, that is, Md < M and
Nd < N , and therefore the required computational load can be reduced (see more details in [15]).
Eq. (12) can be quite easily adapted to the component-wise factorization of the posterior in order to
reduce the computational load of (9). See the summary for the computational complexity in Table 1.
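The sparse update (12) can be sketched for a single factor. The pseudo-observations and precisions below stand in for the quantities (7)–(8), and the inducing-input grid is an arbitrary choice:

```python
import numpy as np

rng = np.random.default_rng(2)

def sq_exp(a, b, length=1.0):
    return np.exp(-(a[:, None] - b[None, :])**2 / (2.0 * length**2))

N, Nd = 50, 12
t = np.linspace(0.0, 5.0, N)
ti = np.linspace(0.0, 5.0, Nd)            # inducing inputs (kept fixed here)

Kx = sq_exp(ti, ti) + 1e-6 * np.eye(Nd)   # covariance of inducing variables x
Kxx = sq_exp(ti, t)                       # covariance between x and X:, Nd x N
U = np.diag(rng.uniform(0.5, 2.0, N))     # placeholder precisions, cf. (8)
Z = rng.standard_normal(N)                # placeholder pseudo-observations, cf. (7)

Kxi = np.linalg.inv(Kx)
Sigma = np.linalg.inv(Kxi + Kxi @ Kxx @ U @ Kxx.T @ Kxi)   # Eq. (12)
mu = Sigma @ Kxi @ Kxx @ Z

# Predictive mean of the factor at all N inputs, via p(X|x)
x_full = Kxx.T @ Kxi @ mu
```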
3.3 Update of GP hyperparameters
The hyperparameters of the GP priors can be updated quite similarly to the standard GP regression
by maximizing the lower bound of the marginal log-likelihood. Omitting the derivations here, this
lower bound for the hyperparameters of the temporal covariance functions ψd, d = 1, . . . , D, equals (up to a constant)
log N( U^{-1} Z: | 0 , U^{-1} + Kxx^T Kx^{-1} Kxx ) - (1/2) Σ_{n=1}^{N} tr( Un D ) ,    (13)
where U and Z: have the same meaning as in (6) and D is a D × D (diagonal) matrix of variances
of x:n given the auxiliary variables x. The required gradients are shown in the appendix. The
equations without the use of auxiliary variables are similar except that Kxx^T Kx^{-1} Kxx equals the full prior covariance of X: and
the second term disappears. A symmetrical equation can be derived for the hyperparameters of the
spatial covariance functions φd. The extension of (13) to the case of component-wise factorial approximation
is straightforward. The inducing inputs can also be treated as variational parameters and they can be
changed to optimize the lower bound (13).
4 Experiments
4.1 Artificial example
We generated a dataset with M = 30 sensors (two-dimensional spatial locations) and N = 200
time instances using the generative model (1) with a moderate amount of observation noise, assuming σmn = σ. D = 4 temporal signals xd: were generated by taking samples from GP priors
with different covariance functions: 1) a squared exponential function to model a slowly changing
component:
k(r; θ1) = exp( -r² / (2θ1²) ) ,    (14)
2) a periodic function with decay to model a quasi-periodic component:
k(r; θ1, θ2, θ3) = exp( -2 sin²(πr/θ1) / θ2² - r² / (2θ3²) ) ,    (15)
where r = |tj - ti|, and 3) a compactly supported piecewise polynomial function to model two fast
changing components with different timescales:
k(r; θ1) = (1/3) (1 - r)^{b+2} [ (b² + 4b + 3) r² + (3b + 6) r + 3 ] ,    (16)
where r = min(1, |tj - ti|/θ1) and b = 3 for one-dimensional inputs, with the hyperparameter
θ1 defining a threshold such that k(r) = 0 for |tj - ti| ≥ θ1.
GPs over the two-dimensional space using the squared exponential covariance function (14) with an
additional scale parameter θ2:

k(r; θ1, θ2) = θ2² exp( -r² / (2θ1²) ) .    (17)
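The covariance functions (14)–(16) are straightforward to implement; as a sanity check, their Gram matrices on a one-dimensional grid should be (numerically) positive semidefinite. The hyperparameter values here are arbitrary:

```python
import numpy as np

def sq_exp(r, th1):
    # Eq. (14): slowly changing component
    return np.exp(-r**2 / (2.0 * th1**2))

def quasi_periodic(r, th1, th2, th3):
    # Eq. (15): period th1 with a squared-exponential decay of scale th3
    return np.exp(-2.0 * np.sin(np.pi * r / th1)**2 / th2**2 - r**2 / (2.0 * th3**2))

def piecewise_poly(r, th1, b=3):
    # Eq. (16): compactly supported; zero whenever |t_j - t_i| >= th1
    s = np.minimum(1.0, np.abs(r) / th1)
    return (1.0 - s)**(b + 2) * ((b**2 + 4*b + 3)*s**2 + (3*b + 6)*s + 3.0) / 3.0

t = np.linspace(0.0, 3.0, 50)
r = np.abs(t[:, None] - t[None, :])
kernels = [sq_exp(r, 1.0), quasi_periodic(r, 1.0, 1.0, 5.0), piecewise_poly(r, 1.0)]
min_eigs = [np.linalg.eigvalsh(K).min() for K in kernels]   # all near or above 0
```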
We randomly selected 452 data points from Y as being observed, thus most of the generated data
points were marked as missing (see Fig. 1a for examples). We also removed observations from all
the sensors for a relatively long time interval. Note a resulting gap in the data marked with vertical
lines in Fig. 1a. The hyperparameters of the Gaussian processes were initialized randomly close
to the values used for data generation, assuming that a good guess about the hidden signals can be
obtained by exploratory analysis of data.
Fig. 1b shows the components recovered by GPFA using the update rule (6). Note that the algorithm separated the four signals with the different variability timescales. The posterior predictive
distributions of the missing values presented in Fig. 1a show that the method was able to capture
temporal correlations on different timescales. Note also that although some of the sensors contain
very few observations, the missing values are reconstructed pretty well. This is a positive effect of
the spatially smooth priors.
Figure 1: Results for the artificial experiment. (a) Posterior predictive distribution for four randomly
selected locations with the observations shown as crosses, the gap with no training observations
marked with vertical lines and some test values shown as circles. (b) The posteriors of the four
latent signals xd: . In both figures, the solid lines show the posterior mean and gray color shows two
standard deviations.
4.2 Reconstruction of global SST using the MOHSST5 dataset
We demonstrate how the presented model can be used to reconstruct global sea surface temperatures
(SST) from historical measurements. We use the U.K. Meteorological Office historical SST data set
(MOHSST5) [6] which contains monthly SST anomalies in the 1856–1991 period for 5°×5° longitude-latitude bins. The dataset contains in total approximately 1600 time instances and 1700 spatial
locations. The dataset is sparse, especially during the 19th century and the World Wars, having 55%
of the values missing, and thus consisting of more than 10^6 observations in total.
We used the proposed algorithm to estimate D = 80 components, the same number was used in [5].
We withdrew 20% of the data from the training set and used this part for testing the reconstruction
accuracy. We used five time signals xd: with the squared exponential function (14) to describe climate trends. Another five temporal components were modeled with the quasi-periodic covariance
function (15) to capture periodic signals (e.g. related to the annual cycle). We also used five components with the squared exponential function to model prominent interannual phenomena such as El
Niño. Finally, we used the piecewise polynomial functions to describe the remaining 65 time signals xd:.
These dimensionalities were chosen ad hoc. The covariance function for each spatial pattern w:d
was the scaled squared exponential (17). The distance r between the locations li and lj was measured on the surface of the Earth using the spherical law of cosines. The use of the extra parameter
θ2 in (17) allowed automatic pruning of unnecessary factors, which happens when θ2 = 0.
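The spherical law of cosines used for the distance r can be written directly; the function name and the kilometre scale are our choices (6371 km is the Earth's mean radius, which the text does not specify):

```python
import numpy as np

def sphere_dist(lat1, lon1, lat2, lon2, radius=6371.0):
    """Great-circle distance (km) via the spherical law of cosines.
    Coordinates are in degrees; radius defaults to the Earth's mean radius."""
    p1, p2 = np.radians(lat1), np.radians(lat2)
    dl = np.radians(np.subtract(lon2, lon1))
    c = np.sin(p1) * np.sin(p2) + np.cos(p1) * np.cos(p2) * np.cos(dl)
    return radius * np.arccos(np.clip(c, -1.0, 1.0))  # clip guards rounding
```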
We used the component-wise factorial approximation of the posterior described in Section 3.1. We
also introduced 500 inducing inputs for each spatial function ?d (l) in order to use sparse variational
approximations. Similar sparse approximations were used for the 15 temporal functions xd(t) which
modeled slow climate variability: the slowest, quasi-periodic and interannual components had 80,
300 and 300 inducing inputs, respectively. The inducing inputs were initialized by taking a random
subset from the original inputs and then kept fixed throughout learning because their optimization
would have increased the computational burden substantially. For the rest of the temporal phenomena, we used the piecewise polynomial functions (16) that produce priors with a sparse covariance
matrix and therefore allow efficient computations.
The dataset was preprocessed by weighting the data points by the square root of the corresponding
latitudes in order to diminish the effect of denser sampling in the polar regions, then the same noise
level was assumed for all measurements (σmn = σ). Preprocessing by weighting data points ymn
with weights sm is essentially equivalent to assuming a spatially varying noise level σmn = σ/sm.
The GP hyperparameters were initialized taking into account the assumed smoothness of the spatial patterns and the variability timescale of the temporal factors. The factors X were initialized
Figure 2: Experimental results for the MOHSST5 dataset. The spatial and temporal patterns of the
four most dominating principal components for GPFA (above) and VBPCA (below). The solid lines
and gray color in the time series show the mean and two standard deviations of the posterior distribution. The uncertainties of the spatial patterns are not shown, and we saturated the visualizations
of the VBPCA spatial components to reduce the effect of the uncertain pole regions.
randomly by sampling from the prior and the weights W were initialized to zero. The variational
EM-algorithm of GPFA was run for 200 iterations. We also applied the variational Bayesian PCA
(VBPCA) [2] to the same dataset for comparison. VBPCA was initialized randomly as the initialization did not have much effect on the VBPCA results. Finally, we rotated the GPFA components
such that the orthogonal basis in the factor analysis subspace was ordered according to the amount of
explained data variance (where the variance was computed by averaging over time). Thus, "GPFA
principal components" are mixtures of the original factors found by the algorithm. This was done
for comparison with the most prominent patterns found with VBPCA.
Fig. 2 shows the spatial and temporal patterns of the four most dominant principal components for
both models. The GPFA principal components and the corresponding spatial patterns are generally
smoother, especially in the data-sparse regions, for example, in the period before 1875. The first and
the second principal components of GPFA as well as the first and the third components of VBPCA
are related to El Niño. We should make a note here that the rotation within the principal subspace
may be affected by noise and therefore the components may not be directly comparable. Another
observation was that the model efficiently used only some of the 15 slow components: about three
very slow and two interannual components had relatively large weights in the loading matrix W.
Therefore the selected number of slow components did not affect the results significantly. None
of the periodic components had large weights, which suggests that the fourth VBPCA component
might contain artifacts.
Finally, we compared the two models by computing a weighted root mean square reconstruction
error on the test set, similarly to [4]. The prediction errors were 0.5714 for GPFA and 0.6180
for VBPCA. The improvement obtained by GPFA can be considered quite significant taking into
account the substantial amount of noise in the data.
5 Conclusions and discussion
In this work, we proposed a factor analysis model which can be used for modeling spatio-temporal
datasets. The model is based on using GP priors for both spatial patterns and time signals corresponding to the hidden factors. The method can be seen as a combination of temporal smoothing,
empirical orthogonal functions (EOF) analysis and kriging. The latter two methods are popular in
geostatistics (see, e.g., [3]). We presented a learning algorithm that can be applicable to relatively
large datasets.
The proposed model was applied to the problem of reconstruction of historical global sea surface
temperatures. The current state-of-the-art reconstruction methods [5] are based on the reduced space
(i.e. EOF) analysis with smoothness assumptions for the spatial and temporal patterns. That approach is close to probabilistic PCA [14] with fitting a simple auto-regressive model to the posterior
means of the hidden factors. Our GPFA model is based on probabilistic formulation of essentially
the same modeling assumptions. The gained advantage is that GPFA takes into account the uncertainty about the unknown parameters, can use all available data, and can combine all modeling
assumptions in one estimation procedure. The reconstruction results obtained with GPFA are very
promising and they suggest that the proposed model might be able to improve the existing SST
reconstructions. The improvement is possible because the method is able to model temporal and
spatial phenomena on different scales by using properly selected GPs.
A   The gradients for the updates of GP hyperparameters
The gradient of the first term of (13) w.r.t. a hyperparameter (or inducing input) θ of any covariance function is given by

(1/2) tr[ (Kx^{-1} - A^{-1}) ∂Kx/∂θ ] - tr[ U Kxx^T A^{-1} ∂Kxx/∂θ ] - (1/2) b^T (∂Kx/∂θ) b + b^T (∂Kxx/∂θ) ( Z: - U Kxx^T b ) ,
where A = Kx + Kxx U Kxx^T and b = A^{-1} Kxx Z:. This part is similar to the gradient reported in
[12]. Without the sparse approximation, the inducing variables coincide with the full latent values, so Kx = Kxx and the equation simplifies to the regular gradient in GP regression for the projected observations U^{-1} Z: with the noise covariance U^{-1}. The second part of (13) results in the extra terms
tr[ (∂KX/∂θ) U ] + tr[ (∂Kx/∂θ) Kx^{-1} Kxx U Kxx^T Kx^{-1} ] - 2 tr[ (∂Kxx/∂θ) U Kxx^T Kx^{-1} ] ,    (18)

where KX denotes the full prior covariance of X: as in (2).
The terms in (18) cancel out when the sparse approximation is not used. Both parts of the gradient can be efficiently evaluated using the Cholesky decomposition. The positivity constraints of
the hyperparameters can be taken into account by optimizing with respect to the logarithms of the
hyperparameters.
Acknowledgments
This work was supported in part by the Academy of Finland under the Centers for Excellence in Research
program and Alexander Ilin's postdoctoral research project. We would like to thank Alexey Kaplan for fruitful
discussions and providing his expertise on the problem of sea surface temperature reconstruction.
References
[1] A. Belouchrani, K. A. Meraim, J.-F. Cardoso, and E. Moulines. A blind source separation technique based on second order statistics. IEEE Transactions on Signal Processing, 45(2):434–444, 1997.
[2] C. M. Bishop. Variational principal components. In Proceedings of the 9th International Conference on Artificial Neural Networks (ICANN'99), pages 509–514, 1999.
[3] N. Cressie. Statistics for Spatial Data. Wiley-Interscience, New York, 1993.
[4] A. Ilin and A. Kaplan. Bayesian PCA for reconstruction of historical sea surface temperatures. In Proceedings of the International Joint Conference on Neural Networks (IJCNN 2009), pages 1322–1327, Atlanta, USA, June 2009.
[5] A. Kaplan, M. Cane, Y. Kushnir, A. Clement, M. Blumenthal, and B. Rajagopalan. Analysis of global sea surface temperatures 1856–1991. Journal of Geophysical Research, 103:18567–18589, 1998.
[6] D. E. Parker, P. D. Jones, C. K. Folland, and A. Bevan. Interdecadal changes of surface temperature since the late nineteenth century. Journal of Geophysical Research, 99:14373–14399, 1994.
[7] J. Quiñonero-Candela and C. E. Rasmussen. A unifying view of sparse approximate Gaussian process regression. Journal of Machine Learning Research, 6:1939–1959, Dec. 2005.
[8] C. E. Rasmussen and C. K. I. Williams. Gaussian Processes for Machine Learning. MIT Press, 2006.
[9] J. Särelä and H. Valpola. Denoising source separation. Journal of Machine Learning Research, 6:233–272, 2005.
[10] M. N. Schmidt. Function factorization using warped Gaussian processes. In L. Bottou and M. Littman, editors, Proceedings of the 26th International Conference on Machine Learning (ICML'09), pages 921–928, Montreal, June 2009. Omnipress.
[11] M. N. Schmidt and H. Laurberg. Nonnegative matrix factorization with Gaussian process priors. Computational Intelligence and Neuroscience, 2008:1–10, 2008.
[12] M. Seeger, C. K. I. Williams, and N. D. Lawrence. Fast forward selection to speed up sparse Gaussian process regression. In Proceedings of the 9th International Workshop on Artificial Intelligence and Statistics (AISTATS'03), pages 205–213, 2003.
[13] Y. W. Teh, M. Seeger, and M. I. Jordan. Semiparametric latent factor models. In Proceedings of the 10th International Workshop on Artificial Intelligence and Statistics (AISTATS'05), pages 333–340, 2005.
[14] M. E. Tipping and C. M. Bishop. Probabilistic principal component analysis. Journal of the Royal Statistical Society Series B, 61(3):611–622, 1999.
[15] M. K. Titsias. Variational learning of inducing variables in sparse Gaussian processes. In Proceedings of the 12th International Workshop on Artificial Intelligence and Statistics (AISTATS'09), pages 567–574, 2009.
[16] B. M. Yu, J. P. Cunningham, G. Santhanam, S. I. Ryu, K. V. Shenoy, and M. Sahani. Gaussian-process factor analysis for low-dimensional single-trial analysis of neural population activity. In Advances in Neural Information Processing Systems 21, pages 1881–1888, 2009.
bishop:2 list:1 r2:4 decay:1 intractable:2 burden:2 workshop:3 adding:1 gained:1 kx:17 gap:2 simply:1 explore:1 ordered:1 gpfa:17 corresponds:4 marked:3 feasible:1 y20:1 change:1 included:1 except:1 averaging:1 denoising:1 principal:10 called:1 total:2 experimental:2 geophysical:2 meaningful:1 uneven:1 cholesky:1 latter:1 alexander:3 incorporate:1 phenomenon:3 |
Gaussian process regression with Student-t likelihood
Pasi Jylänki
Department of Biomedical Engineering
and Computational Science
Helsinki University of Technology
Finland
[email protected]
Jarno Vanhatalo
Department of Biomedical Engineering
and Computational Science
Helsinki University of Technology
Finland
[email protected]
Aki Vehtari
Department of Biomedical Engineering
and Computational Science
Helsinki University of Technology
Finland
[email protected]
Abstract
In the Gaussian process regression the observation model is commonly assumed
to be Gaussian, which is convenient in computational perspective. However, the
drawback is that the predictive accuracy of the model can be significantly compromised if the observations are contaminated by outliers. A robust observation
model, such as the Student-t distribution, reduces the influence of outlying observations and improves the predictions. The problem, however, is the analytically
intractable inference. In this work, we discuss the properties of a Gaussian process
regression model with the Student-t likelihood and utilize the Laplace approximation for approximate inference. We compare our approach to a variational approximation and a Markov chain Monte Carlo scheme, which utilize the commonly
used scale mixture representation of the Student-t distribution.
1 Introduction
A commonly used observation model in the Gaussian process (GP) regression is the Normal distribution. This is convenient since the inference is analytically tractable up to the covariance function
parameters. However, a known limitation with the Gaussian observation model is its non-robustness,
and replacing the normal distribution with a heavy-tailed one, such as the Student-t distribution, can
be useful in problems with outlying observations.
If both the prior and the likelihood are Gaussian, the posterior is Gaussian with mean between
the prior mean and the observations. In a conflict, this compromise is not supported by either of
the information sources. Thus, outlying observations may significantly reduce the accuracy of the
inference. For example, a single corrupted observation may pull the posterior expectation of the
unknown function value considerably far from the level described by the other observations (see
Figure 1). A robust, or outlier-prone, observation model would, however, weight down the outlying
observations the more, the further away they are from the other observations and prior mean.
The idea of robust regression is not new. Outlier rejection was described already by De Finetti [1]
and theoretical results were given by Dawid [2] and O'Hagan [3]. The Student-t observation model with
linear regression was studied already by West [4] and Geweke [5], and Neal [6] introduced it for GP
regression. Other robust observation models include, for example, mixtures of Gaussians, Laplace
(a) Gaussian observation model.
(b) Student-t observation model.
Figure 1: An example of regression with outliers by Neal [6]. On the left Gaussian and on the right
the Student-t observation model. The real function is plotted with a black line.
distribution and input-dependent observation models [7–10]. The challenge with the Student-t model
is the inference, which is analytically intractable. A common approach has been to use the scale mixture representation of the Student-t distribution [5], which enables Gibbs sampling [5, 6], and a
factorized variational approximation (VB) for the posterior inference [7, 11].
Here, we discuss the properties of the GP regression with a Student-t likelihood and utilize the
Laplace approximation for the approximate inference. We discuss the known weaknesses of the
approximation scheme and show that in practice it works very well and quickly. We use several
different data sets to compare it to both a full MCMC and a factorial VB, which utilize the scale
mixture equivalent of the Student-t distribution. We show that the predictive performances are similar and that Laplace's method approximates the posterior covariance somewhat better than VB.
We also point out some of the similarities between these two methods and discuss their differences.
2 Robust regression with Gaussian processes
Consider a regression problem, where the data comprise observations y_i = f(x_i) + ε_i at input locations X = {x_i}_{i=1}^n, where the observation errors ε_1, ..., ε_n are zero-mean exchangeable random variables. The object of inference is the latent function f, which is given a Gaussian process prior. This implies that any finite subset of latent variables, f = {f(x_i)}_{i=1}^n, has a multivariate Gaussian distribution. In particular, at the observed input locations X the latent variables have the distribution

p(f | X) = N(f | μ, K_{f,f}),   (1)

where K_{f,f} is the covariance matrix and μ the mean function. For notational simplicity, we will use a zero-mean Gaussian process. Each element in the covariance matrix is a realization of the covariance function, [K_{f,f}]_{ij} = k(x_i, x_j), which represents the prior assumptions on the smoothness of the latent function (for a detailed introduction to GP regression see [12]). The covariance function used in this work is the stationary squared exponential

k_se(x_i, x_j) = σ_se² exp(−∑_{d=1}^D (x_{i,d} − x_{j,d})² / l_d²),

where σ_se² is the scaling parameter and the l_d are the length-scales.
A formal definition of robustness is given, for example, in terms of an outlier-prone observation model. The observation model is defined to be outlier-prone of order n if p(f | y_1, ..., y_{n+1}) → p(f | y_1, ..., y_n) as y_{n+1} → ∞ [3, 4]. That is, the effect of a single conflicting observation on the posterior becomes asymptotically negligible as the observation approaches infinity. This contrasts heavily with the Gaussian observation model, where each observation influences the posterior no matter how far it is from the others. The zero-mean Student-t distribution

p(y_i | f_i, ν, σ) = Γ((ν+1)/2) / (Γ(ν/2) √(νπ) σ) · (1 + (y_i − f_i)² / (νσ²))^(−(ν+1)/2),   (2)

where ν is the degrees of freedom and σ the scale parameter [13], is outlier-prone of order 1, and it can reject up to m outliers if there are at least 2m observations in all [3]. From here on, we collect all the hyperparameters into θ = {σ_se², l_1, ..., l_D, ν, σ}.
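To make the heavy-tail behavior behind this robustness concrete, here is a small illustrative sketch (not from the original paper) that evaluates the Student-t log-density (2) and compares it with a Gaussian log-density at an outlying observation:

```python
import math

def student_t_logpdf(y, f, nu, sigma):
    """Log-density of the Student-t observation model, eq. (2)."""
    return (math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)
            - 0.5 * math.log(nu * math.pi) - math.log(sigma)
            - (nu + 1) / 2 * math.log1p((y - f) ** 2 / (nu * sigma ** 2)))

def gauss_logpdf(y, f, sigma):
    """Log-density of the Gaussian observation model, for comparison."""
    return -0.5 * math.log(2 * math.pi * sigma ** 2) - (y - f) ** 2 / (2 * sigma ** 2)

# At an outlying observation the Student-t log-density is far less negative
# than the Gaussian one, so the outlier carries far less weight in the posterior.
```

For example, with y − f = 20, ν = 4, σ = 1 the Student-t log-density is roughly −12.5, while the Gaussian log-density is about −201: the quadratic Gaussian penalty is what lets a single outlier dominate the fit.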
3 Inference with the Laplace approximation

3.1 The conditional posterior of the latent variables
Our approach is motivated by the Laplace approximation in GP classification [14]. A similar approximation has been considered by West [4] in the case of robust linear regression and by Rue et al. [15] in their integrated nested Laplace approximation (INLA). Below we follow the notation of Rasmussen and Williams [12].

A second order Taylor expansion of log p(f | y, θ) around the mode gives a Gaussian approximation

p(f | y, θ) ≈ q(f | y, θ) = N(f | f̂, Σ),

where f̂ = arg max_f p(f | y, θ) and Σ⁻¹ is the Hessian of the negative log conditional posterior at the mode f̂ [12, 13]:

Σ⁻¹ = −∇∇ log p(f | y, θ)|_{f=f̂} = K_{f,f}⁻¹ + W,   (3)

where W is diagonal with

W_ii = −(ν+1) (r_i² − νσ²) / (r_i² + νσ²)²,   (4)

r_i = y_i − f_i, and W_ij = 0 if i ≠ j.
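The sign structure of W is easy to verify numerically. The following sketch (an illustration under the parameterization of eq. (4), not the authors' code) shows that W_ii is positive for small residuals, negative for outliers, and attains its minimum −(ν+1)/(8νσ²) at |r_i| = √(3ν) σ, the quantities used in section 3.4:

```python
import math

def W_ii(r, nu, sigma):
    """Diagonal of W in eq. (4): minus the second derivative of the
    Student-t log-likelihood with respect to f, written via r = y - f."""
    return -(nu + 1) * (r**2 - nu * sigma**2) / (r**2 + nu * sigma**2)**2

nu, sigma = 4.0, 1.0
r_min = math.sqrt(3 * nu) * sigma          # residual giving the most negative W_ii
w_floor = -(nu + 1) / (8 * nu * sigma**2)  # value of W_ii at that residual
```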
3.2 The maximum a posteriori estimate of the hyperparameters

To find a maximum a posteriori (MAP) estimate for the hyperparameters, we write p(θ | y) ∝ p(y | θ) p(θ), where

p(y | θ) = ∫ p(y | f) p(f | X, θ) df   (5)

is the marginal likelihood. To find an approximation q(y | θ) for the marginal likelihood, one can utilize the Laplace method a second time [12]. A Taylor expansion of the logarithm of the integrand in (5) around f̂ gives a Gaussian integral over f multiplied by a constant, giving

log q(y | θ) = log p(y | f̂) − ½ f̂ᵀ K_{f,f}⁻¹ f̂ − ½ log |K_{f,f}| − ½ log |K_{f,f}⁻¹ + W|.   (6)

The hyperparameters can then be optimized by maximizing the approximate log marginal posterior, log q(θ | y) = log q(y | θ) + log p(θ) + const. This is differentiable with respect to θ, which enables the use of gradient-based optimization to find θ̂ = arg max_θ q(θ | y) [12].
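For a direct numerical check of (6), note that −½ log|K_{f,f}| − ½ log|K_{f,f}⁻¹ + W| = −½ log|I + K_{f,f} W|. The sketch below (illustrative only; section 4.2 gives the numerically stable formulation actually used) evaluates (6) by plain dense linear algebra:

```python
import numpy as np

def laplace_log_marginal(K, f_hat, loglik_at_mode, W):
    """Naive dense evaluation of eq. (6).
    K: prior covariance, f_hat: posterior mode,
    loglik_at_mode: log p(y | f_hat), W: diagonal of the Hessian term (4)."""
    a = np.linalg.solve(K, f_hat)                       # K^{-1} f_hat
    _, logdet_K = np.linalg.slogdet(K)
    _, logdet_2 = np.linalg.slogdet(np.linalg.inv(K) + np.diag(W))
    return loglik_at_mode - 0.5 * f_hat @ a - 0.5 * logdet_K - 0.5 * logdet_2
```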
3.3 Making predictions

The approximate posterior distribution of a latent variable f* at a new input location x* is also Gaussian, and therefore defined by its mean and variance [12]:

E_{q(f | y,θ)}[f* | X, y, x*] = K_{*,f} K_{f,f}⁻¹ f̂ = K_{*,f} ∇ log p(y | f̂)   (7)

Var_{q(f | y,θ)}[f* | X, y, x*] = K_{*,*} − K_{*,f} (K_{f,f} + W⁻¹)⁻¹ K_{f,*}.   (8)

The predictive distribution of a new observation is obtained by marginalizing over the posterior distribution of f*,

q(y* | X, y, x*) = ∫ p(y* | f*) q(f* | X, y, x*) df*,   (9)

which can be evaluated, for example, with Gaussian quadrature integration.
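The integral in (9) is one-dimensional, so Gauss-Hermite quadrature against the Gaussian q(f* | X, y, x*) works well. A minimal sketch (illustrative, with an ad hoc helper for the t-density of eq. (2)):

```python
import math
import numpy as np

def t_pdf(y, f, nu, sigma):
    """Student-t density of eq. (2); f may be an array of quadrature nodes."""
    c = math.exp(math.lgamma((nu + 1) / 2) - math.lgamma(nu / 2)) \
        / (math.sqrt(nu * math.pi) * sigma)
    return c * (1.0 + (y - f)**2 / (nu * sigma**2)) ** (-(nu + 1) / 2)

def predictive_density(y_star, mu_star, var_star, nu, sigma, n_quad=40):
    """Gauss-Hermite approximation of eq. (9): integrate p(y*|f*) against
    the Gaussian q(f* | X, y, x*) = N(mu_star, var_star)."""
    x, w = np.polynomial.hermite.hermgauss(n_quad)
    f = mu_star + math.sqrt(2.0 * var_star) * x   # change of variables
    return float(np.sum(w * t_pdf(y_star, f, nu, sigma)) / math.sqrt(math.pi))
```

As var_star → 0 the quadrature collapses onto the plug-in density p(y* | f* = mu_star), which gives a quick sanity check.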
3.4 Properties of the Laplace approximation

Figure 2: A comparison of the Laplace and VB approximations for p(f | θ, y) in the case of a single observation with the Student-t likelihood and a Gaussian prior: (a) greater prior variance than likelihood variance; (b) equal prior and likelihood variance. The likelihood is centered at zero and the prior mean is altered. The upper plots show the probability density functions and the lower plots the variance of the true posterior and its approximations as a function of the posterior mean.

The Student-t distribution is not log-concave, and therefore the posterior distribution may be multimodal. The immediate concern from this is that a unimodal Laplace approximation may give a poor estimate for the posterior. This is, however, a problem for all unimodal approximations,
such as the VB in [7, 11]. Another concern is that the estimate of the posterior precision, Σ⁻¹ = −∇∇ log p(f | y, θ)|_{f=f̂}, is essentially uncontrolled. However, at a posterior mode f̂ the Hessian Σ⁻¹ is always positive definite, and in practice it approximates the truth rather well according to our experiments. If the optimization for f ends up in a saddle point, or the mode is very flat, Σ⁻¹ may be close to singular, which leads to problems in the implementation. In this section, we discuss these issues with simple examples and address the implementation in section 4.

Consider a single observation y_i = 0 from a Student-t distribution with a Gaussian prior for its mean, f_i. The behavior of the true posterior, the Laplace approximation, and VB as a function of the prior mean is illustrated in the upper plots of Figure 2. The dotted lines represent the situation where the observation is a clear outlier, in which case the posterior is very close to the prior (cf. section 2). The solid lines represent a situation where the prior and data agree, and the dashed lines represent a situation where the prior and data conflict moderately.

The posterior of the mean is unimodal if Σ(f_i)⁻¹ = σ_i⁻² + W(f_i) > 0 for all f_i ∈ ℝ, where σ_i² is the prior variance and W(f_i) is the Hessian of the negative log likelihood at f_i (see equations (3) and (4)). With ν and σ fixed, W(f_i) reaches its (negative) minimum at |y_i − f_i| = √(3ν) σ, where Σ⁻¹ = σ_i⁻² − (ν+1)/(8νσ²). Therefore, the posterior distribution is unimodal if σ_i⁻² > (ν+1)/(8νσ²), or in terms of variances, if Var[y_i | f_i, ν, σ]/σ_i² > (ν+1)/(8(ν−2)) (for ν > 2). It follows that the most problematic situation for the Laplace approximation is when the prior is much wider than the likelihood. Then, in the case of a moderate conflict (|y_i − f̂_i| close to √(3ν) σ), the posterior may be multimodal (see Figure 2(a)), meaning that it is unclear whether the observation is an outlier or not. In this case W(f_i) is negative, and Σ⁻¹ may be close to zero, which reflects uncertainty about the location. In the implementation this may lead to numerical problems, but in practice the problem becomes concrete only seldom, as described in section 4.

The negative values of W relate to a decrease in the posterior precision compared to the prior precision. As long as the total precision remains positive, this approximates the behavior of the true posterior rather well. The Student-t likelihood leads to a decrease in the variance from prior to posterior only if the prior mean and the observation are consistent with each other, as shown in Figure 2. This behavior is not captured with the factorized VB approximation [7], where W in q(f | θ, y) is replaced with a strictly positive diagonal that always increases the precision, as illustrated in Figure 2.
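The unimodality condition above is a one-line check. A small sketch (a hypothetical helper, not part of the paper's code):

```python
def posterior_is_unimodal(prior_var, nu, sigma):
    """Sufficient condition from section 3.4 for a single observation:
    the posterior of f_i is unimodal if sigma_i^{-2} > (nu+1)/(8*nu*sigma^2)."""
    return 1.0 / prior_var > (nu + 1) / (8.0 * nu * sigma**2)
```

For ν = 4 and σ = 1 the boundary is at a prior variance of 8νσ²/(ν+1) = 6.4: tighter priors guarantee unimodality, wider ones allow the moderate-conflict multimodality of Figure 2(a).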
4 On the implementation

4.1 Posterior mode of the latent variables
The mode of the latent variables, f̂, can be found with general optimization methods such as scaled conjugate gradients. The most robust and efficient method, however, proved to be the expectation maximization (EM) algorithm that utilizes the scale mixture representation of the Student-t distribution,

y_i | f_i ~ N(f_i, V_i)   (10)
V_i ~ Inv-χ²(ν, σ²),   (11)

where each observation has its own noise variance V_i that is Inv-χ² distributed. Following Gelman et al. [13], p. 456, the E-step of the algorithm consists of evaluating the expectation

E[1/V_i | y_i, f_i^old, ν, σ] = (ν+1) / (νσ² + (y_i − f_i^old)²),   (12)

after which the latent variables are updated in the M-step as

f̂^new = (K_{f,f}⁻¹ + V⁻¹)⁻¹ V⁻¹ y,   (13)

where V⁻¹ is a diagonal matrix of the expectations in (12). In practice, we do not invert K_{f,f}; instead, f̂ is updated using the Woodbury-Sherman-Morrison lemma [e.g. 16],

f̂^new = (K_{f,f} − K_{f,f} V^{−1/2} B⁻¹ V^{−1/2} K_{f,f}) V⁻¹ y = K_{f,f} a,   (14)

where B = I + V^{−1/2} K_{f,f} V^{−1/2}. This is numerically more stable than directly inverting the covariance matrix, and gives as an intermediate result the vector a = K_{f,f}⁻¹ f̂ for later use.
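The iteration (12)-(14) can be sketched in a few lines of numpy (a minimal illustration assuming a precomputed covariance matrix K; it avoids forming K_{f,f}⁻¹ via the Woodbury form (14) and returns both f̂ and a = K_{f,f}⁻¹ f̂):

```python
import numpy as np

def laplace_mode_em(K, y, nu, sigma, max_iter=2000, tol=1e-10):
    """EM search for the conditional posterior mode f_hat, eqs. (12)-(14).
    K is the prior covariance; returns f_hat and a = K^{-1} f_hat."""
    f = np.zeros_like(y)
    a = np.zeros_like(y)
    n = len(y)
    for _ in range(max_iter):
        e_inv_v = (nu + 1.0) / (nu * sigma**2 + (y - f)**2)   # E-step, eq. (12)
        vs = 1.0 / np.sqrt(e_inv_v)                           # V^{1/2} diagonal
        B = np.eye(n) + (K / vs[:, None]) / vs[None, :]       # B = I + V^{-1/2} K V^{-1/2}
        b = e_inv_v * y                                       # V^{-1} y
        # M-step via the Woodbury form (14); note a = K^{-1} f_new by construction
        a = b - np.linalg.solve(B, (K @ b) / vs) / vs
        f_new = K @ a
        if np.max(np.abs(f_new - f)) < tol:
            f = f_new
            break
        f = f_new
    return f, a
```

At a fixed point, K_{f,f}⁻¹ f̂ equals the gradient of the log likelihood, which gives a convenient convergence check.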
4.2 Approximate marginal likelihood
Rasmussen and Williams [12] discuss a numerically stable formulation to evaluate the approximate marginal likelihood and its gradients for a classification model. Their approach relies on W being non-negative, for which reason it requires some modification in our setting. With the Student-t likelihood, we found the most stable formulation for (6) to be

log q(y | θ) = log p(y | f̂) − ½ f̂ᵀ a − ∑_{i=1}^n log R_ii + ∑_{i=1}^n log L_ii,   (15)

where R and L are the Cholesky factors of K_{f,f} and Σ = (K_{f,f}⁻¹ + W)⁻¹ respectively, and a is obtained from the EM algorithm. The only problematic term is the last one, which is numerically unstable if evaluated directly. We could evaluate first Σ = K_{f,f} − K_{f,f} (W⁻¹ + K_{f,f})⁻¹ K_{f,f}, but in many cases this is even worse than the direct evaluation, since W⁻¹ may have arbitrarily large negative values. For this reason, we evaluate LLᵀ = Σ using rank one Cholesky updates in a specific order. After L is found, it can also be used in the predictive variance (8) and in the gradients of (6) with only minor modification to the equations given in [12]. We first write the posterior covariance as

Σ = (K_{f,f}⁻¹ + W)⁻¹ = (K_{f,f}⁻¹ + e_1 e_1ᵀ W_11 + e_2 e_2ᵀ W_22 + ... + e_n e_nᵀ W_nn)⁻¹,   (16)

where e_i is the ith unit vector. The terms e_i e_iᵀ W_ii are added iteratively, and the Cholesky decomposition of Σ is updated accordingly. At the beginning L^(1) = chol(K_{f,f}), and at iteration step i+1 we use a rank one Cholesky update to find

L^(i+1) = chol( L^(i) (L^(i))ᵀ − s_i s_iᵀ γ_i ),   (17)

where s_i is the ith column of Σ^(i) and γ_i = W_ii (Σ^(i)_ii)⁻¹ / ((Σ^(i)_ii)⁻¹ + W_ii). If W_ii is positive we conduct a Cholesky downdate, and if W_ii < 0 and (Σ^(i)_ii)⁻¹ + W_ii > 0 we have a Cholesky update which increases the covariance. The increase may be arbitrarily large if (Σ^(i)_ii)⁻¹ ≈ −W_ii, but in practice it can be limited. Problems arise also if W_ii < 0 and (Σ^(i)_ii)⁻¹ + W_ii ≤ 0, since then the resulting Cholesky downdate is not positive definite. This should not happen if f̂ is at a local maximum, but in practice f̂ may be in a saddle point, or this may occur because of numerical instability or the iterative framework used to update the Cholesky decomposition. The problem is prevented by adding the diagonal terms in decreasing order, that is, first the "normal" observations and last the outliers.

A single Cholesky update is analogous to the discussion in section 3.4, in that the posterior covariance is updated using the result of the previous iteration as a prior. If we added the negative W values at the beginning, Σ^(i)_ii (the prior variance) could be so large that either (Σ^(i)_ii)⁻¹ + W_ii ≤ 0 or (Σ^(i)_ii)⁻¹ ≈ −W_ii, in which case the posterior covariance Σ^(i+1)_ii could become singular or arbitrarily large and lead to problems in the later iterations (compare to the dashed black line in Figure 2(a)). By adding the largest W values first, we reduce Σ so that the negative values of W are less problematic (compare to the dashed black line in Figure 2(b)), and the updates are numerically more stable.

During the Cholesky updates, we cross-check with the condition (Σ^(i)_ii)⁻¹ + W_ii ≥ 0 that everything is fine. If the condition is not fulfilled, our code prints a warning and replaces W_ii with −1/(2Σ^(i)_ii). This ensures that the Cholesky update remains positive definite and doubles the marginal variance instead. However, in practice we never encountered any warnings in our experiments when the hyperparameters were initialized sensibly, so that the prior was tight compared to the likelihood.
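The rank one updates in (17) can be carried out with the classical Givens-style Cholesky update/downdate. The following numpy sketch (an illustration of the standard algorithm, not the authors' code) produces the factor of A ± xxᵀ from a factor of A:

```python
import numpy as np

def chol_rank1_update(L, x, sign):
    """Given lower-triangular L with L L^T = A, return the factor of
    A + sign * x x^T (sign = +1.0 for an update, -1.0 for a downdate).
    Classical Givens-style sweep; the downdate must stay positive definite."""
    L = L.copy()
    x = x.astype(float).copy()
    n = len(x)
    for k in range(n):
        r2 = L[k, k]**2 + sign * x[k]**2
        if r2 <= 0.0:
            raise ValueError("downdate loses positive definiteness")
        r = np.sqrt(r2)
        c, s = r / L[k, k], x[k] / L[k, k]
        L[k, k] = r
        L[k+1:, k] = (L[k+1:, k] + sign * s * x[k+1:]) / c
        x[k+1:] = c * x[k+1:] - s * L[k+1:, k]
    return L
```

Each sweep costs O(n²), so building LLᵀ = Σ by n such updates costs O(n³) overall, like a fresh factorization, but the incremental form is what allows the order-dependent safeguards described above.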
5 Relation to other work
Neal [6] implemented the Student-t model for the Gaussian process via Markov chain Monte Carlo, utilizing the scale mixture representation. However, the approaches most similar to the Laplace approximation are the VB approximation [7, 11] and the one in INLA [15]. Here we briefly summarize them.

The difference between INLA and the GP framework is that INLA utilizes Gaussian Markov random fields (GMRF) in place of the Gaussian process. The Gaussian approximation for p(f | y, θ) in INLA is the same as the Laplace approximation here, with the covariance function replaced by a precision matrix. Rue et al. [15] derive the approximation for the log marginal posterior, log p(θ | y), from

p(θ | y) ≈ q(θ | y) ∝ [ p(y, f, θ) / q(f | θ, y) ]|_{f=f̂} = [ p(y | f) p(f | θ) p(θ) / q(f | θ, y) ]|_{f=f̂}.   (18)

The proportionality sign is due to the fact that the normalization constant of p(f, θ | y) is unknown. This is exactly the same as the approximation derived in section 3.2: taking the logarithm of (18), we end up with log q(θ | y) = log q(y | θ) + log p(θ) + const, where log q(y | θ) is given in (6).
In the variational approximation [7], the joint posterior of the latent variables and the scale parameters in the scale mixture representation (10)-(11) is approximated with a factorizing distribution p(f, V | y, θ) ≈ q(f) q(V), where q(f) = N(f | m, A) and q(V) = ∏_{i=1}^n Inv-χ²(V_ii | ν̃/2, σ̃²/2), and ξ = {m, A, ν̃, σ̃²} are the parameters of the variational approximation. The approximate distributions and the hyperparameters are updated in turns, so that ξ is updated with the current estimate for θ, and after that θ is updated with ξ fixed.

The variational approximation for the conditional posterior is p(f | y, θ, Ṽ) ≈ N(f | m, A). Here, A = (K_{f,f}⁻¹ + Ṽ⁻¹)⁻¹, and the iterative search for the posterior parameters m and A is the same as the EM algorithm described in section 4, except that the update of E[V_i⁻¹] in (12) is replaced with E[V_ii⁻¹] = (ν̃ + 1)/(ν̃σ̃² + A_ii^old + (y_i − m_i^old)²). Thus, the Laplace and the variational approximation are very similar. In practice, the posterior mean m is very close to the mode f̂, and the main difference between the approximations is in the covariance and the hyperparameter estimates.

In the variational approximation, θ̂ is searched by maximizing the variational lower bound

V = E_{q(f,V | y,θ)}[ log ( p(y, f, V, θ) / (q(f | y, θ) q(V | y, θ)) ) ]
  = E_{q(f,V | y,θ)}[ log ( p(y | f, V) p(f | θ) p(V | θ) p(θ) / q(f, V | y, θ) ) ],   (19)

where we have made visible the implicit dependence of the approximations q(f) and q(V) on the data and hyperparameters, and included a prior for θ. The variational lower bound is similar to the approximate log marginal posterior (18). Only the point estimate f̂ is replaced with averaging over the approximating distribution q(f, V | y, θ). The other difference is that in the Laplace approximation the scale parameters V are marginalized out and it approximates directly p(f | y, θ).

Table 1: The RMSE and NLP statistics on the experiments.

RMSE:      Neal     Friedman  Housing  Concrete
G          0.393    0.324     0.324    0.230
T-lapl     0.028    0.220     0.289    0.231
T-vb       0.029    0.220     0.294    0.212
T-mcmc     0.055    0.253     0.287    0.197

NLP:       Neal     Friedman  Housing  Concrete
G          0.254    0.227     1.249    0.0642
T-lapl     -2.181   -0.16     0.080    -0.116
T-vb       -2.228   -0.049    0.091    -0.132
T-mcmc     -1.907   -0.106    0.029    -0.241
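The only algorithmic difference between the Laplace/EM iteration and the VB iteration is thus the scale update. A side-by-side sketch (illustrative; the exact VB parameterization in [7] may differ, and the extra posterior-variance term A_ii is taken from the update quoted above):

```python
def e_inv_v_em(y, f, nu, sigma):
    """Laplace/EM scale update, eq. (12)."""
    return (nu + 1.0) / (nu * sigma**2 + (y - f)**2)

def e_inv_v_vb(y, m, a_ii, nu, sigma):
    """VB-style scale update: the marginal posterior variance a_ii of the
    latent variable enters the denominator (assumed form, see lead-in)."""
    return (nu + 1.0) / (nu * sigma**2 + a_ii + (y - m)**2)
```

With a_ii > 0 the VB weight is always smaller than the EM weight at the same location, which matches the discussion of VB's extra, strictly positive contribution to the precision.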
6 Experiments
We studied four data sets: 1) Neal data [6] with 100 data points and one input shown in Figure 1.
2) Friedman data with a nonlinear function of 10 inputs, from which we generated 10 data sets with
100 training points including 10 randomly selected outliers as described by Kuss [7], p. 83. 3) The
Boston housing data that summarize median house prices in Boston metropolitan area for 506 data
points and 13 input variables [7]. 4) Concrete data that summarize the quality of concrete casting as
a function of 27 variables for 215 measurements [17]. In earlier experiments, the Student-t model
has worked better than the Gaussian observation model in all of these data sets.
The predictive performance is measured with a root mean squared error (RMSE) and a negative
log predictive density (NLP). With simulated data these are evaluated for a test set of 1000 latent
variables. With real data we use 10-fold cross-validation. The compared observation models are
Gaussian (G) and Student-t (T). The Student-t model is inferred using the Laplace approximation
(lapl), VB (vb) [7] and full MCMC (mcmc) [6]. The Gaussian observation model, the Laplace approximation and VB are evaluated at θ̂, and in MCMC we sample θ. INLA is excluded from the experiments since a GMRF model cannot be constructed naturally for these non-regularly distributed data sets. The results are summarized in Table 1. The significance of the differences in
performance is approximated using a Gaussian approximation for the distribution of the NLP and
RMSE statistics [17]. The Student-t model is significantly better than the Gaussian with higher than
95% probability in all tests except the RMSE on the concrete data. There is no significant
difference between the Laplace approximation, VB and MCMC.
The inference time was the shortest with the Gaussian observation model and the longest with the Student-t model utilizing full MCMC. The Laplace approximation for the Student-t likelihood took on average 50% more time than the Gaussian model, and VB was on average 8-10 times slower than the Laplace approximation. The reason for this is that in VB two sets of parameters, θ and ξ, are updated in turns, which slows down the convergence of the hyperparameters. In the Laplace approximation we have to optimize only θ. Figure 3 shows the mean and the variance of p(f | θ̂, y) for
MCMC versus the Laplace approximation and VB. The mean of the Laplace approximation and VB
match equally well the mean of the MCMC solution, but VB underestimates the variance more than
the Laplace approximation (see also Figure 2). In the housing data, both approximations underestimate the variance remarkably for a few data points (40 of 506) that were located as clusters at places where the inputs x are truncated along one or more dimensions. At these locations, the marginal
posteriors were slightly skew and their tails were rather heavy, and thus a Gaussian approximation
presumably underestimates the variance.
The degrees of freedom of the Student-t likelihood were optimized only in the Neal data and the Boston housing data using the Laplace approximation. In the other data sets there was not enough information to infer ν, and it was set to 4. Optimizing ν was more problematic for VB than for the Laplace approximation, probably because the factorized approximation makes it harder to identify ν. The MAP estimates θ̂ found by the Laplace approximation and VB were slightly different. This is reasonable since the optimized functions (18) and (19) are also different.
Figure 3: Scatter plots of the posterior mean and variance of the latent variables for (a) Neal data, (b) Friedman data, (c) Boston housing data, and (d) Concrete data. The upper row shows the means and the lower row the variances. In each figure, the left plot is MCMC (x-axis) vs. the Laplace approximation (y-axis) and the right plot is MCMC (x-axis) vs. VB (y-axis).
7 Discussion
In our experiments we found that the predictive performance of both the Laplace approximation and
the factorial VB is similar to that of the full MCMC. Compared to the MCMC, the Laplace approximation and VB estimate the posterior mean E[f | θ̂, y] similarly, but VB underestimates the posterior variance Var[f | θ̂, y] more than the Laplace approximation. Optimizing the hyperparameters is clearly faster with the Laplace approximation than with VB.
Both the Laplace and the VB approximation estimate the posterior precision as a sum of a prior precision and a diagonal matrix. In VB the diagonal is strictly positive, whereas in the Laplace approximation the diagonal elements corresponding to outlying observations are negative. The Laplace approximation is closer to the reality in that respect since the outlying observations have a negative effect on the (true) posterior precision. This happens because VB minimizes KL(q(f )q(V)||p(f , V)),
which requires that the q(f , V) must be close to zero whenever p(f , V) is (see for example [18]).
Since a posteriori f and V are correlated, the marginal q(f ) underestimates the effect of marginalizing over the scale parameters. The Laplace approximation, on the other hand, tries to estimate
directly the posterior p(f ) of the latent variables. Recently, Opper and Archambeau [19] discussed
the relation between the Laplace approximation and VB, and proposed a variational approximation
directly for the latent variables and tried it with a Cauchy likelihood (they did not perform extensive
experiments though). Presumably their implementation would give better estimate for p(f ) than the
factorized approximation. However, experiments on that respect are left for future.
The advantage of VB is that the objective function (19) is a rigorous lower bound for p(y | θ), whereas the Laplace approximation (18) is not. However, the marginal posteriors p(f | y, θ) in our experiments (inferred with MCMC) were so close to Gaussian that the Laplace approximation q(f | θ, y) should be very accurate and, thus, the approximation for p(θ | y) (18) should also be close to the truth (see also the justifications in [15]).
In recent years the expectation propagation (EP) algorithm [20] has been demonstrated to be very accurate and efficient method for approximate inference in many models with factorizing likelihoods.
However, the Student-t likelihood is problematic for EP since it is not log-concave, for which reason EP's estimate of the posterior covariance may become singular during the site updates [21]. The
reason for this is that the variance parameters of the site approximations may become negative. As
demonstrated with the Laplace approximation here, this reflects the behavior of the true posterior. We
assume that the problem can be overcome, but we are not aware of any work that would have solved
this problem.
Acknowledgments
This research was funded by the Academy of Finland, and the Graduate School in Electronics and
Telecommunications and Automation (GETA). The first and second author thank also the Finnish
Foundation for Economic and Technology Sciences - KAUTE, Finnish Cultural Foundation, Emil
Aaltonen Foundation, and Finnish Foundation for Technology Promotion for supporting their post
graduate studies.
References
[1] Bruno De Finetti. The Bayesian approach to the rejection of outliers. In Proceedings of the Fourth Berkeley Symposium on Mathematical Statistics and Probability, pages 199–210. University of California Press, 1961.
[2] A. Philip Dawid. Posterior expectations for large observations. Biometrika, 60(3):664–667, December 1973.
[3] Anthony O'Hagan. On outlier rejection phenomena in Bayes inference. Journal of the Royal Statistical Society, Series B, 41(3):358–367, 1979.
[4] Mike West. Outlier models and prior distributions in Bayesian linear regression. Journal of the Royal Statistical Society, Series B, 46(3):431–439, 1984.
[5] John Geweke. Bayesian treatment of the independent Student-t linear model. Journal of Applied Econometrics, 8:519–540, 1993.
[6] Radford M. Neal. Monte Carlo Implementation of Gaussian Process Models for Bayesian Regression and Classification. Technical Report 9702, Dept. of statistics and Dept. of Computer
Science, University of Toronto, January 1997.
[7] Malte Kuss. Gaussian Process Models for Robust Regression, Classification, and Reinforcement Learning. PhD thesis, Technische Universität Darmstadt, 2006.
[8] Paul W. Goldberg, Christopher K.I. Williams, and Christopher M. Bishop. Regression with
input-dependent noise: A Gaussian process treatment. In M. I. Jordan, M. J. Kearns, and S. A
Solla, editors, Advances in Neural Information Processing Systems 10. MIT Press, Cambridge,
MA, 1998.
[9] Andrew Naish-Guzman and Sean Holden. Robust regression with twinned gaussian processes.
In J.C. Platt, D. Koller, Y. Singer, and S. Roweis, editors, Advances in Neural Information
Processing Systems 20, pages 1065–1072. MIT Press, Cambridge, MA, 2008.
[10] Oliver Stegle, Sebastian V. Fallert, David J. C. MacKay, and Søren Brage. Gaussian process robust regression for noisy heart rate data. IEEE Transactions on Biomedical Engineering, 55(9):2143–2151, September 2008. ISSN 0018-9294. doi: 10.1109/TBME.2008.923118.
[11] Michael E. Tipping and Neil D. Lawrence. Variational inference for Student-t models: Robust
bayesian interpolation and generalised component analysis. Neurocomputing, 69:123?141,
2005.
[12] Carl Edward Rasmussen and Christopher K. I. Williams. Gaussian Processes for Machine
Learning. The MIT Press, 2006.
[13] Andrew Gelman, John B. Carlin, Hal S. Stern, and Donald B. Rubin. Bayesian Data Analysis.
Chapman & Hall/CRC, second edition, 2004.
[14] Christopher K. I. Williams and David Barber. Bayesian classification with Gaussian processes.
IEEE Transactions on Pattern Analysis and Machine Intelligence, 20(12):1342?1351, 1998.
[15] H?avard Rue, Sara Martino, and Nicolas Chopin. Approximate Bayesian inference for latent
Gaussian models by using integrated nested Laplace approximations. Journal of Royal statistical Society B, 71(2):1?35, 2009.
[16] David A. Harville. Matrix Algebra From a Statistician?s Perspective. Springer-Verlag, 1997.
[17] Aki Vehtari and Jouko Lampinen. Bayesian model assessment and comparison using crossvalidation predictive densities. Neural Computation, 14(10):2439?2468, 2002.
[18] Christopher M. Bishop. Pattern Recognition and Machine Learning. Springer Science +Business Media, LLC, 2006.
[19] Manfred Opper and C?edric Archambeau. The variational Gaussian approximation revisited.
Neural Computation, 21(3):786?792, March 2009.
[20] Thomas Minka. A family of algorithms for approximate Bayesian inference. PhD thesis,
Massachusetts Institute of Technology, 2001.
[21] Matthias Seeger. Bayesian inference and optimal design for the sparse linear model. Journal
of Machine Learning Research, 9:759?813, 2008.
9
Tracking Dynamic Sources of Malicious Activity at
Internet-Scale
Shobha Venkataraman†, Avrim Blum‡, Dawn Song⋆, Subhabrata Sen†, Oliver Spatscheck†
†
AT&T Labs – Research, {shvenk,sen,spatsch}@research.att.com
‡
Carnegie Mellon University, [email protected]
⋆
University of California, Berkeley, [email protected]
Abstract
We formulate and address the problem of discovering dynamic malicious regions
on the Internet. We model this problem as one of adaptively pruning a known
decision tree, but with additional challenges: (1) severe space requirements, since
the underlying decision tree has over 4 billion leaves, and (2) a changing target
function, since malicious activity on the Internet is dynamic. We present a novel
algorithm that addresses this problem by putting together a number of different
"experts" algorithms and online paging algorithms. We prove guarantees on our
algorithm's performance as a function of the best possible pruning of a similar
size, and our experiments show that our algorithm achieves high accuracy on large
real-world data sets, with significant improvements over existing approaches.
1 Introduction
It is widely acknowledged that identifying the regions that originate malicious traffic on the Internet
is vital to network security and management, e.g., in throttling attack traffic for fast mitigation, isolating infected sub-networks, and predicting future attacks [6, 18, 19, 24, 26]. In this paper, we show
how this problem can be modeled as a version of a question studied by Helmbold and Schapire [11]
of adaptively learning a good pruning of a known decision tree, but with a number of additional challenges and difficulties. These include a changing target function and severe space requirements due
to the enormity of the underlying IP address-space tree. We develop new algorithms able to address
these difficulties that combine the underlying approach of [11] with the sleeping experts framework
of [4, 10] and the online paging problem of [20]. We show how to deal with a number of practical
issues that arise and demonstrate empirically on real-world datasets that this method substantially
improves over existing approaches of /24 prefixes and network-aware clusters [6,19,24] in correctly
identifying malicious traffic. Our experiments on data sets of 126 million IP addresses demonstrate
that our algorithm is able to achieve a clustering that is both highly accurate and meaningful.
1.1 Background
Multiple measurement studies have indicated that malicious traffic tends to cluster in a way that
aligns with the structure of the IP address space, and that this is true for many different kinds of
malicious traffic ? spam, scanning, botnets, and phishing [6, 18, 19, 24]. Such clustered behaviour
can be easily explained: most malicious traffic originates from hosts in poorly-managed networks,
and networks are typically assigned contiguous blocks of the IP address space. Thus, it is natural
that malicious traffic is clustered in parts of the IP address space that belong to poorly-managed
networks.
From a machine learning perspective, the problem of identifying regions of malicious activity can
be viewed as one of finding a good pruning of a known decision tree ? the IP address space may be
naturally interpreted as a binary tree (see Fig.1(a)), and the goal is to learn a pruning of this tree that
is not too large and has low error in classifying IP addresses as malicious or non-malicious. The
structure of the IP address space suggests that there may well be a pruning with only a modest number of leaves that can classify most of the traffic accurately. Thus, identifying regions of malicious
activity from an online stream of labeled data is much like the problem considered by Helmbold and
Schapire [11] of adaptively learning a good pruning of a known decision tree. However, there are a
number of real-world challenges, both conceptual and practical, that must be addressed in order to
make this successful.
One major challenge in our application comes from the scale of the data and size of a complete
decision tree over the IP address space. A full decision tree over the IPv4 address space would
have 2^32 leaves, and over the IPv6 address space (which is slowly being rolled out), 2^128 leaves.
With such large decision trees, it is critical to have algorithms that do not build the complete tree,
but instead operate in space comparable to the size of a good pruning. These space constraints are
also important because of the volume of traffic that may need to be analyzed ? ISPs often collect
terabytes of data daily and an algorithm that needs to store all its data in memory simultaneously
would be infeasible.
A second challenge comes from the fact that the regions of malicious activity may shift longitudinally over time [25]. This may happen for many reasons, e.g., administrators may eventually
discover and clean up already infected bots, and attackers may target new vulnerabilities and attack
new hosts elsewhere. Such dynamic behaviour is a primary reason why individual IP addresses tend
to be such poor indicators of future malicious traffic [15, 26]. Thus, we cannot assume that the data
comes from a fixed distribution over the IP address space; the algorithm needs to adapt to dynamic
nature of the malicious activity, and track these changes accurately and quickly. That is, we must
consider not only an online sequence of examples but also a changing target function.
While there have been a number of measurement studies [6,18, 19,24] that have examined the origin
of malicious traffic from IP address blocks that are kept fixed apriori, none of these have focused on
developing online algorithms that find the best predictive IP address tree. Our challenge is to develop
an efficient high-accuracy online algorithm that handles the severe space constraints inherent in this
problem and accounts for the dynamically changing nature of malicious behavior. We show that
we can indeed do this, both proving theoretical guarantees on adaptive regret and demonstrating
successful performance on real-world data.
1.2 Contributions
In this paper, we formulate and address the problem of discovering and tracking malicious regions of
the IP address space from an online stream of data. We present an algorithm that adaptively prunes
the IP address tree in a way that maintains at most m leaves and performs nearly as well as the
optimum adaptive pruning of the IP address tree with a comparable size. Intuitively, we achieve the
required adaptivity and the space constraints by combining several "experts" algorithms together
with a tree-based version of paging. Our theoretical results prove that our algorithm can predict
nearly as well as the best adaptive decision tree with k leaves when using O(k log k) leaves.
Our experimental results demonstrate that our algorithm identifies malicious regions of the IP address space accurately, with orders of magnitude improvement over previous approaches. Our experiments focus on classifying spammers and legitimate senders on two mail data sets, one with 126
million messages collected over 38 days from the mail servers of a tier-1 ISP, and a second with
28 million messages collected over 6 months from an enterprise mail server. Our experiments also
highlight the importance of allowing the IP address tree to be dynamic, and the resulting view of the
IP address space that we get is both compelling and meaningful.
2 Definitions and Preliminaries
We now present some basic definitions as well as our formal problem statement.
The IP address hierarchy can be naturally interpreted as a full binary tree, as in Fig. 1: the leaves of
the tree correspond to individual IP addresses, and the non-leaf nodes correspond to the remaining
IP prefixes. Let P denote the set of all IP prefixes, and I denote the set of all IP addresses. We also
use the term clusters to denote the IP prefixes.
We define an IPtree TP to be a pruning of the full IP address tree: a tree whose nodes are IP
prefixes P ⊆ P, and whose leaves are each associated with a label, i.e., malicious or non-malicious.
An IPtree can thus be interpreted as a classification function for the IP addresses I: an IP address i
gets the label associated with its longest matching prefix in P . Fig. 1 shows an example of an IPtree.
We define the size of an IPtree to be the number of leaves it has. For example, in Fig. 1(a), the size
of the IPtree is 6.
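For concreteness, the classification rule just described can be sketched in a few lines; this is an illustrative trie, not the data structure used in our experiments, and the example prefixes loosely mirror Fig. 1(a) but are otherwise hypothetical.

```python
# An illustrative sketch (not our implementation) of an IPtree as a classifier:
# an address gets the label of its longest matching labeled prefix.

class IPTreeNode:
    def __init__(self, label=None):
        self.children = {}    # bit '0'/'1' -> child IPTreeNode
        self.label = label    # '+' (malicious) / '-' (non-malicious) if labeled

def ip_to_bits(ip):
    """The 32-bit string of a dotted-quad IPv4 address, read left to right."""
    return ''.join(format(int(octet), '08b') for octet in ip.split('.'))

def insert_prefix(root, prefix_bits, label):
    node = root
    for b in prefix_bits:
        node = node.children.setdefault(b, IPTreeNode())
    node.label = label

def classify(root, ip):
    node, label = root, root.label
    for b in ip_to_bits(ip):
        if b not in node.children:
            break
        node = node.children[b]
        if node.label is not None:
            label = node.label    # remember the deepest labeled prefix
    return label

tree = IPTreeNode(label='-')       # default: non-malicious
insert_prefix(tree, '1', '+')      # 128.0.0.0/1 labeled malicious ...
insert_prefix(tree, '1100', '-')   # ... except 192.0.0.0/4
print(classify(tree, '130.1.2.3'), classify(tree, '192.0.2.1'))   # + -
```

The lookup touches at most 32 nodes per address, independent of the tree's size.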
As described in Sec. 1, we focus on online learning in this paper. A typical point of comparison
used in the online learning model is the error of the optimal offline fixed algorithm. In this case,
the optimal offline fixed algorithm is the IPtree of a given size k, i.e., the tree of size k that makes
(a) An example IPTree
(b) A real IPTree (Color coding explained in Sec. 5)
Figure 1: IPTrees: example and real. Recall that an IP address is interpreted as a 32-bit string, read
from left to right. This defines a path on the binary tree, going left for 0 and right for 1. An IP prefix
is denoted by IP/n, where n indicates the number of bits relevant to the prefix.
the fewest mistakes on the entire sequence. However, if the true underlying IPtree may change over
time, a better point of comparison would allow the offline tree to also change over time. To make
such a comparison meaningful, the offline tree must pay an additional penalty each time it changes
(otherwise the offline tree would not be a meaningful point of comparison: it could change for each
IP address in the sequence, and thus make no mistakes). We therefore limit the kinds of changes the
offline tree can make, and compare the performance of our algorithm to every IPtree with k leaves,
as a function of the errors it makes and the changes it makes.
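As an aside on notation, the IP/n convention used throughout (cf. Fig. 1) can be made concrete: an address lies under prefix p/n iff their top n bits agree. The helper names below are our own.

```python
# Hedged illustration of IP/n prefix matching via integer arithmetic.

def ip_to_int(ip):
    a, b, c, d = (int(x) for x in ip.split('.'))
    return (a << 24) | (b << 16) | (c << 8) | d

def in_prefix(ip, prefix, n):
    """True iff the first n bits of ip equal the first n bits of prefix."""
    if n == 0:
        return True               # 0.0.0.0/0 covers everything
    shift = 32 - n
    return (ip_to_int(ip) >> shift) == (ip_to_int(prefix) >> shift)

print(in_prefix('160.12.1.9', '160.0.0.0', 3))   # True
print(in_prefix('130.1.2.3', '192.0.0.0', 2))    # False
```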
We define an adaptive IPtree of size k to be an adaptive tree that can (a) grow nodes over time so
long as it never has more than k leaves, (b) change the labels of its leaf nodes, and (c) occasionally
reconfigure itself completely. Our goal is to develop an online algorithm T such that for any sequence of IP addresses, (1) for every adaptive tree T∗ of size k, the number of mistakes made by T
is bounded by a (small) function of the mistakes and the changes of types (a), (b), and (c) made by
T∗, and (2) T uses no more than Õ(k)
space. In the next section, we describe an algorithm meeting
these requirements.
3 Algorithms and Analysis
In this section, we describe our main algorithm TrackIPTree, and present theoretical guarantees on
its performance. At a high-level, our approach keeps a number of experts in each prefix of the
IPtree, and combines their predictions to classify every IP address. The inherent structure in the
IPtree allows us to decompose the problem into a number of expert problems, and achieve lower
memory requirements and better guarantees than earlier approaches.
We begin with an overview. Define the path-nodes of an IP address i to be the set of all prefixes of i
in T, and denote this set by Pi,T. To predict the label of an IP i, the algorithm looks up all the path-nodes in Pi,T, considers their predictions, and combines these predictions to produce a final label
for i. To update the tree, the algorithm rewards the path-nodes that predicted correctly, penalizes the
incorrect ones, and modifies the tree structure if necessary.
To fill out this overview, there are four technical questions that we need to address: (1) Of all the
path-nodes in Pi,T , how do we learn the ones that are the most important? (2) How do we learn the
correct label to predict at a particular path-node in Pi,T (i.e., positive or negative)? (3) How do we
grow the IPtree appropriately, ensuring that it grows primarily the prefixes needed to improve the
classification accuracy? (4) How do we ensure that the size of the IPtree stays bounded by m? We
address these questions by treating them as separate subproblems, and we show how they fit together
to become the complete algorithm in Figure 3.1.
3.1 Subproblems of TrackIPTree
We now describe our algorithm in detail. Since our algorithm decomposes naturally into the four
subproblems mentioned above, we focus on each subproblem separately to simplify the presentation.
We use the following notation in our descriptions: Recall from Sec. 2 that m is the maximum number
of leaves allowed to our algorithm, k is the size of the optimal offline tree, and Pi,T denotes the set
of path-nodes, i.e., the prefixes of IP i in the current IPtree T .
Relative Importance of the Path Nodes First, we consider the problem of deciding which of the
prefix nodes in the path Pi,T is most important. We formulate this as a sleeping experts problem [4,
10]. We place an expert at each node, calling these the path-node experts; for an IP i, we consider
the set of path-node experts in Pi,T to be the "awake" experts, and the rest to be "asleep". The
(a) Sleeping Experts: Relative Importance of Path-Nodes        (b) Shifting Experts: Determining Node Labels
Figure 2: Decomposing the TrackIPTree Algorithm
sleeping experts algorithm makes predictions using the awake experts, and intuitively, has the goal
of predicting nearly as well as the best awake expert on the instance i.¹ In our context, the best
awake expert on the IP i corresponds to the prefix of i in the optimal IPtree, which remains sleeping
until the IPtree grows that prefix. Fig. 2(a) illustrates the sleeping experts framework in our context:
the shaded nodes are "awake" and the rest are "asleep".
Specifically, let xt denote the weight of the path-node expert at node t, and let Si,T = Σt∈Pi,T xt.
To predict on IP address i, the algorithm chooses the expert at node t with probability xt/Si,T. To
update, the algorithm penalizes all incorrect experts in Pi,T, reducing their weight xt to βxt (e.g.,
β = 0.8). It then renormalizes the weights of all the experts in Pi,T so that their sum Si,T does not
change. (In our proof, we use a slightly different version of the sleeping experts algorithm [4]).
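This penalize-and-renormalize step can be sketched as follows; it is a simplification (the analysis uses the sleeping-experts variant of [4]), and `beta` is the penalty factor, 0.8 in the example above.

```python
# A minimal sketch of the sleeping-experts update on one IP's path-nodes:
# penalize the awake experts that erred, then rescale so that the total awake
# weight S_{i,T} is unchanged.

def update_path_experts(weights, awake, predictions, truth, beta=0.8):
    """weights: node -> x_t; awake: the path-nodes P_{i,T} of the current IP."""
    s_before = sum(weights[n] for n in awake)
    for n in awake:                          # penalize incorrect awake experts
        if predictions[n] != truth:
            weights[n] *= beta
    s_after = sum(weights[n] for n in awake)
    for n in awake:                          # renormalize: S_{i,T} is unchanged
        weights[n] *= s_before / s_after

w = {'0.0.0.0/0': 1.0, '128.0.0.0/1': 1.0, '128.0.0.0/4': 1.0}
update_path_experts(w, list(w),
                    {'0.0.0.0/0': '-', '128.0.0.0/1': '+', '128.0.0.0/4': '+'},
                    truth='+')
print(round(sum(w.values()), 6))   # total awake weight preserved: 3.0
```

Experts that are asleep (not on the path) are simply untouched by the update.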
Deciding Labels of Individual Nodes Next, we need to decide whether the path-node expert at
a node n should predict positive or negative. We use a different experts algorithm to address this
subproblem ? the shifting experts algorithm [12]. Specifically, we allow each node n to have two
additional experts ? a positive expert, which always predicts positive, and a negative expert, which
always predicts negative. We call these experts node-label experts.
Let yn,+ and yn,− denote the weights of the positive and negative node-label experts respectively,
with yn,− + yn,+ = 1. The algorithm operates as follows: to predict, the node predicts positive with
probability yn,+ and negative with probability yn,−. To update, when the node receives a label, it
increases the weight of the correct node-label expert by ε, and decreases the weight of the incorrect
node-label expert by ε (up to a maximum of 1 and a minimum of 0). Note that this algorithm naturally
adapts when a leaf of the optimal IPtree switches labels: the relevant node in our IPtree will slowly
shift weights from the incorrect node-label expert to the correct one, making an expected 1/ε mistakes
in the process. Fig. 2(b) illustrates the shifting experts setting on an IPtree: each node has two
experts, a positive and a negative. Fig. 3 shows how it fits in with the sleeping experts algorithm.
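The node-label update at a single node can be sketched as follows; here `eps` plays the role of the learning rate (the parameter set to 0.05 in Sec. 4), and the clamping to [0, 1] matches the description above.

```python
# Sketch of the node-label experts at one node: y+ and y- sum to one, and each
# observed label moves eps of weight toward the matching expert, clamped.

def update_node_label(y_pos, eps, observed):
    """Return the updated weight of the always-positive expert (y- = 1 - y+)."""
    if observed == '+':
        return min(1.0, y_pos + eps)
    return max(0.0, y_pos - eps)

y = 0.5
for _ in range(10):          # a run of '+' labels drives y+ from 0.5 toward 1
    y = update_node_label(y, 0.05, '+')
print(round(y, 2))           # 1.0: the node now predicts '+' almost surely
```

If the node's true label later flips, roughly 1/eps observations of the new label move the weight back across 0.5, matching the expected number of mistakes per label change.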
Building Tree Structure We next address the subproblem of building the appropriate structure for
the IPtree. The intuition here is: when a node in the IPtree makes many mistakes, then either
that node has a subtree in the optimal IPtree that separates the positive and negative instances,
or the optimal IPtree must also make the same mistakes. Since TrackIPTree cannot distinguish
between these two situations, it simply splits any node that makes sufficient mistakes. In particular,
TrackIPTree starts with only the root node, and tracks the number of mistakes made at every node.
Every time a leaf makes 1? mistakes, TrackIPTree splits that leaf into its children, and instantiates
and initializes the relevant path-node experts and node-label experts of the children. In effect, it is
as if the path-node experts of the children had been asleep till this point, but will now be ?awake?
for the appropriate IP addresses.
TrackIPTree waits for 1/ε mistakes at each node before growing it, so that there is a little resilience
to noisy data; otherwise, it would split a node every time the optimal tree made a mistake, and the
IPtree would grow very quickly. Note also that it naturally incorporates the optimal IPtree growing
a leaf; our tree will grow the appropriate nodes when that leaf has made 1/ε mistakes.
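The splitting rule can be sketched as follows; this is an illustrative simplification in which leaves are represented as bit-string prefixes.

```python
# Illustrative sketch of the growing rule: count mistakes per leaf and split a
# leaf into its two children once it has erred 1/eps times.

def record_mistake(tree, leaf, mistakes, eps=0.1):
    """tree: set of current leaf prefixes; mistakes: per-node error counts."""
    mistakes[leaf] = mistakes.get(leaf, 0) + 1
    if mistakes[leaf] >= 1.0 / eps and leaf in tree:
        tree.discard(leaf)                       # grow: replace the leaf ...
        tree.update({leaf + '0', leaf + '1'})    # ... with its two children
        mistakes[leaf + '0'] = mistakes[leaf + '1'] = 0

tree, counts = {''}, {}
for _ in range(10):      # the root errs 10 times with eps = 0.1
    record_mistake(tree, '', counts)
print(sorted(tree))      # ['0', '1']: the root has been split
```

The children start with fresh (zero) mistake counts, mirroring the initialization of their experts.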
Bounding Size of IPtree Since TrackIPTree splits any node after it makes 1/ε mistakes, it is likely
that the IPtree it builds is split much farther than the optimal IPtree: TrackIPTree does not know
when to stop growing a subtree, and it splits even if the same mistakes are made by the optimal
IPtree. While this excessive splitting does not impact the predictions of the path-node experts or the
node-label experts significantly, we still need to ensure that the IPtree built by our algorithm does
not become too large.
¹We leave the exact statement of the guarantee to the proof in [23].
TrackIPTree
Input: tree size m, learning rate ε, penalty factor β
Initialize:
  Set T := root
  InitializeNode(root)

Prediction Rule: Given IP i
  // Select a node-label expert
  for n ∈ Pi,T
    flip coin of bias yn,+
    if heads, predict[n] := +
    else predict[n] := −
  // Select a path-node expert
  rval := predict[n] with probability xn / Σt∈Pi,T xt
  Return rval

Update Rule: Given IP i, label r
  // Update node-label experts
  for n ∈ Pi,T
    for label z ∈ {+, −}
      if z = r, yn,z := yn,z + ε
      else yn,z := yn,z − ε
  // Update path-node experts
  s := Σn∈Pi,T xn
  for n ∈ Pi,T
    if predict[n] ≠ r,
      penalize xn := β · xn
      mistakes[n]++
      if mistakes[n] > 1/ε and n is a leaf, GrowTree(n)
  Renormalize xn := xn · s / Σj∈Pi,T xj

sub InitializeNode
Input: node t
  xt := 1; yt,+ := yt,− := 0.5
  mistakes[t] := 0

sub GrowTree
Input: leaf l
  if size(T) ≥ m
    Select nodes to discard with the paging algorithm
  Split leaf l into children lc, rc
  InitializeNode(lc), InitializeNode(rc)

Figure 3: The Complete TrackIPTree Algorithm
We do this by framing it as a paging problem [20]: consider each node in the IPtree to be a page,
and the maximum allowed nodes in the IPtree to be the size of the cache. The offline IPtree, which
has k leaves, needs a cache of size 2k. The IPtree built by our algorithm may have at most m leaves
(and thus, 2m nodes, since it is a binary tree), and so the size of its cache is 2m and the offline
cache is 2k. We may then select nodes to be discarded as if they were pages in the cache once the
IPtree grows beyond 2m nodes; so, for example, we may choose the least recently used nodes in
the IPtree, with LRU as the paging algorithm. Our analysis shows that setting m = O((k/ε²) log(k/δ))
suffices when TrackIPTree uses FLUSH-WHEN-FULL (FWF) as its paging algorithm; this is a
simple paging algorithm that discards all the pages in the cache when the cache is full, and restarts
with an empty cache. We use FWF here for a clean analysis, and especially since in simple paging
models, many algorithms achieve no better guarantees [20]. For our experiments, we implement
LRU, and our results show that this approach, while perhaps not sophisticated, still maintains an
accurate predictive IPtree.
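The LRU variant used in the experiments can be sketched as follows; this is illustrative (our implementation discards nodes in batches of 1% of m, as noted in Sec. 4, which we omit here). Nodes are "touched" whenever they lie on a classified IP's path, and the least recently used nodes are evicted once the budget is exceeded.

```python
# A hedged sketch of LRU paging over tree nodes.
from collections import OrderedDict

class LRUNodeCache:
    def __init__(self, budget):
        self.budget = budget
        self.nodes = OrderedDict()          # prefix -> state, in LRU order

    def touch(self, prefix, state=None):
        if prefix in self.nodes:
            self.nodes.move_to_end(prefix)  # most recently used goes last
        else:
            self.nodes[prefix] = state
        while len(self.nodes) > self.budget:
            self.nodes.popitem(last=False)  # evict least recently used

cache = LRUNodeCache(budget=3)
for p in ['0/1', '1/1', '10/2', '0/1', '101/3']:
    cache.touch(p)
print(list(cache.nodes))                    # ['10/2', '0/1', '101/3']
```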
3.2 Analysis
In this section, we present theoretical guarantees on TrackIPTree's performance. We show that our
algorithm performs nearly as well as the best adaptive k-IPtree, bounding the number of mistakes made
by our algorithm as a function of the number of mistakes, number of labels changes and number of
complete reconfigurations of the optimal such tree in hindsight.
Theorem 3.1 Fix k. Set the maximum number of leaves m allowed to the TrackIPTree algorithm to
be (10k/ε²) log(k/δ). Let T be an adaptive k-IPtree. Let ΔT,z denote the number of times T changes labels
on its leaves over the sequence z, and RT,z denote the number of times T has completely
reconfigured itself over z.
The algorithm TrackIPTree ensures that on any sequence of instances z, for each T, the number of
mistakes made by TrackIPTree is at most (1 + 3ε)MT,z + (1/ε + 3)ΔT,z + (10k/ε³) log(k/δ)(RT,z + 1), where MT,z is the number of mistakes made by T on z, with
probability at least 1 − 2ε²/k.
In other words, if there is an offline adaptive k-IPtree that makes few changes and few mistakes
on the input sequence of IP addresses, then TrackIPTree will also make only a small number of
mistakes. Due to space constraints, we present the proof in the technical report [23].
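To get a feel for the leaf budget in Theorem 3.1, the formula can be instantiated numerically; the logarithm base and the particular values of k, ε, δ below are our own illustrative choices, not taken from the paper.

```python
# A quick numeric reading of the leaf budget m = (10k / eps^2) log(k / delta)
# from Theorem 3.1, with illustrative parameter values.
import math

def leaf_budget(k, eps, delta):
    return (10 * k / eps ** 2) * math.log(k / delta)

m = leaf_budget(k=100, eps=0.5, delta=0.01)
print(int(m))   # tens of thousands of leaves suffice to track k = 100
```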
4 Evaluation Setup
We now describe our evaluation set-up: data, practical changes to the algorithm, and baseline
schemes that we compare against. While there are many issues that go into converting the algorithm in
Sec. 3 for practical use, we describe here those most important to our experiments, and defer the
rest to the technical report [23].
Data We focus on IP addresses derived from mail data, since spammers represent a significant fraction of the malicious activity and compromised hosts on the Internet [6], and labels are relatively
easy to obtain from spam-filtering run by the mail servers. For our evaluation, we consider labels
from the mail servers' spam-filtering to be ground truth. Any errors in the spam-filtering will influence the tree that we construct, and our experimental results are limited by this assumption.
One data set consists of log extracts collected at the mail servers of a tier-1 ISP with 1 million
active mailboxes. The extracts contain the IP addresses of the mail servers that send mail to the
ISP, the number of messages they sent, and the fraction of those messages that are classified as
spam, aggregated over 10-minute intervals. The mail server's spam-filtering software consists of a
combination of hand-crafted rules, DNS blacklists, and Brightmail [1], and we take their results as
labels for our experiments. The log extracts were collected over 38 days from December 2008 to
January 2009, and contain 126 million IP addresses, of which 105 million are spam and 21 million
are legitimate.
The second data set consists of log extracts from the enterprise mail server of a large corporation with
1300 active mailboxes. These extracts also contain the IP addresses of mail servers that attempted to
send mail, along with the number of messages they sent and the fraction of these messages that were
classified as spam by SpamAssassin [2], aggregated over 10-minute intervals. The extracts contain 28
million IP addresses, of which around 1.2 million are legitimate and the rest are spammers.
Note that in both cases, our data only contains aggregate information about the IP addresses of the
mail servers sending mail to the ISP and enterprise mail servers, and so we do not have the ability
to map any information back to individual users of the ISP or enterprise mail servers.
TrackIPTree For the experimental results, we use LRU as the paging algorithm when nodes need
to be discarded from the IPtree (Sec. 3.1). In our implementation, we set TrackIPTree to discard
1% of m, the maximum leaves allowed, every time it needs to expire nodes. The learning rate ε is
set to 0.05, and the penalty factor β for the sleeping experts is set to 0.1. Our results are not
affected if these parameters are changed by a factor of 2-3.
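The batched LRU expiry described above can be sketched as follows. This is a minimal illustration under our own assumptions (the class name, data layout, and API are invented for the example, not taken from TrackIPTree's implementation):

```python
from collections import OrderedDict

class LRULeafCache:
    """Minimal sketch of LRU-based leaf expiry for an IP-prefix tree.

    When the number of tracked leaves would exceed the budget m, the
    least-recently-used 1% of m leaves are discarded in one batch,
    mirroring the batched expiry described in the text.
    """

    def __init__(self, max_leaves):
        self.max_leaves = max_leaves
        self.batch = max(1, max_leaves // 100)  # discard 1% of m per expiry
        self.leaves = OrderedDict()             # prefix -> state, in LRU order

    def touch(self, prefix, state=None):
        # Move an accessed leaf to the most-recently-used position,
        # inserting it first if needed and expiring a batch when full.
        if prefix in self.leaves:
            self.leaves.move_to_end(prefix)
        else:
            if len(self.leaves) >= self.max_leaves:
                self.expire_batch()
            self.leaves[prefix] = state

    def expire_batch(self):
        for _ in range(min(self.batch, len(self.leaves))):
            self.leaves.popitem(last=False)     # drop least-recently-used leaf
```

For example, with max_leaves=200 an insertion beyond the limit evicts the two oldest untouched leaves in one batch.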
While we have presented an online learning algorithm, in practice, it will often need to predict
on data without receiving labels of the instances right away. Therefore, we study TrackIPTree's
accuracy on the following day's data, i.e., to compute prediction accuracy of day i, TrackIPTree is
allowed to update until day i-1. We choose intervals of a day's length to allow the tree's predictions
to be updated at least every day.
Apriori Fixed Clusters We compare TrackIPTree to two sets of apriori fixed clusters: (1) network-aware clusters, which are a set of unique prefixes derived from BGP routing table snapshots [17], and
(2) /24 prefixes. We choose these clusters as a baseline, as they have been the basis of measurement
studies discussed earlier (Sec. 1), prior work in IP-based classification [19, 24], and are even used
by popular DNS blacklists [3].
We use the fixed clusters to predict the label of an IP in the usual manner: we simply assign an
IP the label of its longest matching prefix among the clusters. Of course, we first need to assign
these clusters their own labels. To ensure that they classify as well as possible, we assign them the
optimal labeling over the data they need to classify; we do this by allowing them to make multiple
passes over the data. That is, for each day, we assign labels so that the fixed clusters maximize their
accuracy on spam for a given required accuracy on legitimate mail². It is clear that this experimental
set-up is favourable to the apriori fixed clusters.
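The longest-matching-prefix labeling used by the fixed-cluster baselines can be sketched as follows; the function name and the example clusters are hypothetical, and a production version would use a trie rather than a linear scan:

```python
import ipaddress

def longest_prefix_label(ip, labeled_clusters, default="spam"):
    """Label an IP with the label of its longest matching cluster prefix.

    labeled_clusters maps prefix strings (e.g. "192.0.2.0/24") to labels.
    Falls back to `default` when no cluster contains the IP.
    """
    addr = ipaddress.ip_address(ip)
    best, best_len = default, -1
    for prefix, label in labeled_clusters.items():
        net = ipaddress.ip_network(prefix)
        if addr in net and net.prefixlen > best_len:
            best, best_len = label, net.prefixlen
    return best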
We do not directly compare against the algorithm in [11], as it requires every unique IP address in
the data set to be instantiated in the tree. In our experiments (e.g., with the ISP logs), this means that
it requires over 90 million leaves in the tree. We instead focus on practical prior approaches with
more cluster sizes in our experiments.
5 Results
We report three sets of experimental results regarding the prediction accuracy of TrackIPTree using
the experimental set-up of Section 4. While we do not provide an extensive evaluation of our algorithm's computational efficiency, we note that our (unoptimized) implementation of TrackIPTree
takes under a minute to learn over a million IP addresses, on a 2.4GHz Sparc64-VI core.
² For space reasons, we defer the details of how we assign this labeling to the technical report [23].
[Figure 4 plots omitted. Panels (a) Expt 1: ISP logs and (b) Expt 1: Enterprise logs plot accuracy on spam IPs against coverage on legit IPs for TrackIPTree, network-aware clusters, and /24 prefixes. Panels (c) Expt 2: ISP logs and (d) Expt 2: Enterprise logs plot the same tradeoff as the maximum number of leaves varies (20k-200k for the ISP logs, 1k-50k for the enterprise logs). Panels (e) Expt 3: Legitimate IPs and (f) Expt 3: Spam IPs plot error over time in days for the dynamic tree and the 5-day and 10-day static trees.]
Figure 4: Results for Experiments 1, 2, and 3
Our results compare the fraction of spamming IPs that the clusters classify correctly, subject to
the constraint that they classify at least x% of legitimate mail IPs correctly (we term this the
coverage of the legitimate IPs required). Thus, we effectively plot the true positive rate against
the true negative rate. (This is just the ROC curve with the x-axis reversed, since we plot the true
positive against the true negative, instead of plotting the true positive against the false positive.)
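For concreteness, such a tradeoff curve can be traced by sweeping a decision threshold over per-IP scores; the scoring scheme and variable names here are our own, not the paper's:

```python
def accuracy_tradeoff(scores_legit, scores_spam, thresholds):
    """Trace spam accuracy (true positive rate) against legit coverage
    (true negative rate) by sweeping a decision threshold.

    An IP is called spam when its score exceeds the threshold; higher
    scores mean "more likely spam". The resulting curve is the ROC
    curve with the x-axis reversed, as described in the text.
    """
    curve = []
    for t in thresholds:
        tnr = sum(s <= t for s in scores_legit) / len(scores_legit)
        tpr = sum(s > t for s in scores_spam) / len(scores_spam)
        curve.append((tnr, tpr))  # (coverage on legit, accuracy on spam)
    return curve
```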
Experiment 1: Comparisons with Apriori Fixed Clusters Our first set of experiments compares
the performance of our algorithm with network-aware clusters and /24 IP prefixes. Figs. 4(a) & 4(b)
illustrate the accuracy tradeoff of the three sets of clusters on the two data sets. Clearly, the accuracy
of TrackIPTree is a tremendous improvement on both sets of apriori fixed clusters: for any choice
of coverage on legitimate IPs, the accuracy on spam IPs achieved by TrackIPTree is far higher than that of the apriori
fixed clusters, even by as much as a factor of 2.5. In particular, note that when the coverage required
on legitimate IPs is 95%, TrackIPTree achieves 95% accuracy in classifying spam on both data sets,
compared to the 35-45% achieved by the other clusters.
In addition, TrackIPTree gains this classification accuracy using a far smaller tree. Table 1 shows
the median number of leaves instantiated by the tree at the end of each day. (To be fair to the fixed
clusters, we only instantiate the prefixes required to classify the day's data, rather than all possible
prefixes in the clustering scheme.) Table 1 shows that the tree produced by TrackIPTree is a factor
of 2.5-17 smaller with the ISP logs, and a factor of 20-100 smaller with the enterprise logs. These
numbers highlight that the apriori fixed clusters are perhaps too coarse to classify accurately in parts
of the IP address space, and also are insufficiently aggregated in other parts of the address space.
Experiment 2: Changing the Maximum Leaves Allowed Next, we explore the effect of changing
m, the maximum number of leaves allowed to TrackIPTree. Fig. 4(c) & 4(d) show the accuracy-coverage tradeoff for TrackIPTree when m ranges between 20,000-200,000 leaves for the ISP logs,
and 1,000-50,000 leaves for the enterprise logs. Clearly, in both cases, the predictive accuracy
increases with m only until m is "sufficiently large": once m is large enough to capture all the
distinct subtrees in the underlying optimal IPtree, the predictive accuracy will not increase. While
the actual values of m are specific to our data sets, the results highlight the importance of having a
space-efficient and flexible algorithm: both 10,000 and 100,000 are very modest sizes compared to
the number of possible apriori fixed clusters, or the size of the IPv4 address space, and this suggests
that the underlying decision tree required is indeed of a modest size.
Experiment 3: Does a Dynamic Tree Help? In this experiment, we demonstrate empirically that
our algorithm's dynamic aspects do indeed significantly enhance its accuracy over static clustering
schemes. The static clustering that we compare to is a tree generated by our algorithm, but one that
learns over the first z days, and then stays unchanged. For ease of reference, we call such a tree a
z-static tree; in our experiments, we set z = 5 and z = 10. We compare these trees by examining
separately the errors incurred on legitimate and spam IPs.
Table 1: Sizes of Clustering Schemes

                   ISP        Enterprise
  TrackIPTree      99942      9963
  /24 Prefixes     1732441    1426445
  Network-aware    260132     223025

Table 2: Colour coding for IPtree in Fig 1(b)

  wt           Implication         Colour
  >= 0.2       Strongly Legit      Dark Green
  [0, 0.2)     Weakly Legit        Light Green
  (-0.2, 0)    Weakly Malicious    Blue
  <= -0.2      Strongly Malicious  White
Fig. 4(e) & 4(f) compare the errors of the z-static trees and the dynamic tree on legitimate and spam
IPs respectively, using the ISP logs. Clearly, both z-static trees degrade in accuracy over time, and
they do so on both legitimate and spam IPs. On the other hand, the accuracy of the dynamic tree
does not degrade over this period. Further, the gap in error grows with time; after 28 days, the 10-static
tree has almost a factor of 2 higher error on both spam IPs and legitimate IPs.
Discussion and Implications Our experiments demonstrate that our algorithm is able to achieve
high accuracy in predicting legitimate and spam IPs, e.g., it can predict 95% of the spam IPs correctly, when misclassifying only 5% of the legitimate IPs. However, it does not classify the IPs
perfectly. This is unsurprising: achieving zero classification error in these applications is practically infeasible, given IP address dynamics [25]. Nevertheless, our IPtree still provides insight into
the malicious activity on the Internet.
As an example, we examine a high-level view of the Internet obtained from our tree, and its implications. Fig. 1(b) visualizes an IPtree on the ISP logs with 50,000 leaves. It is laid out so that the
root prefix is near the center, and the prefixes grow their children outwards. The nodes are coloured
depending on their weights, as shown in Table 2: for node t, define w_t = sum_{j in Q} x_j (y_{j,+} - y_{j,-}),
where Q is the set of prefixes of node t (including node t itself). Thus, the blue central nodes are the
large prefixes (e.g., /8 prefixes), and the classification they output is slightly malicious; this means
that an IP address without a longer matching prefix in the tree is typically classified to be malicious.
This suggests, for example, that an unseen IP address is typically classified as a spammer by our
IPtree, which is consistent with the observations of network administrators. A second observation
we can make is that the tree has many short branches as well as long branches, suggesting that some
IP prefixes are grown to much greater depth than others. This might happen, for instance, if active IP
addresses for this application are not distributed uniformly in the address space (and so all prefixes
do not need to be grown at uniform rates), which is also what we might expect to see based on prior
work [16].
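The node weight and colour coding used in this visualization can be computed with a simple walk down the prefix path; the data layout below (per-node x, y_plus, y_minus fields) is our own assumption:

```python
def node_weight(path_nodes):
    """Compute w_t = sum over prefixes j of node t of x_j * (y_{j,+} - y_{j,-}).

    path_nodes lists the nodes on the path from the root prefix down to
    node t (inclusive); each node carries its weight x and its positive
    and negative label scores y_plus and y_minus.
    """
    return sum(n["x"] * (n["y_plus"] - n["y_minus"]) for n in path_nodes)

def colour(w):
    # Colour coding from Table 2.
    if w >= 0.2:
        return "dark green"    # strongly legit
    if w >= 0.0:
        return "light green"   # weakly legit
    if w > -0.2:
        return "blue"          # weakly malicious
    return "white"             # strongly malicious
```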
Of course, these observations are only examples; a complete analysis of our IPtree's implications is
part of our future work. Nevertheless, these observations suggest that our tree does indeed capture
an appropriate picture of the malicious activity on the Internet.
6 Other Related Work
In the networking and databases literature, there has been much interest in designing streaming
algorithms to identify IP prefixes with significant network traffic [7, 9, 27], but these algorithms
do not explore how to predict malicious activity. Previous IP-based approaches to reduce spam
traffic [22, 24], as mentioned earlier, have also explored individual IP addresses, which are not
particularly useful since they are so dynamic [15, 19, 25]. Zhang et al. [26] also examine how to
predict whether known malicious IP addresses may appear at a given network, by analyzing the
co-occurrence of all known malicious IP addresses at a number of different networks. More closely
related is [21], which presents algorithms to extract prefix-based filtering rules for IP addresses that may
be used in offline settings. There has also been work on computing decision trees over streaming
data [8, 13], but this work assumes that data comes from a fixed distribution.
7 Conclusion
We have addressed the problem of discovering dynamic malicious regions on the Internet. We model
this problem as one of adaptively pruning a known decision tree, but with the additional challenges
coming from real-world settings: severe space requirements and a changing target function. We
developed new algorithms to address this problem, by combining "experts" algorithms and online
paging algorithms. We showed guarantees on our algorithm's performance as a function of the best
possible pruning of a similar size, and our experimental results on real-world datasets are orders of
magnitude better than current approaches.
Acknowledgements We are grateful to Alan Glasser and Gang Yao for their help with the data
analysis efforts.
References

[1] Brightmail. http://www.brightmail.com.
[2] SpamAssassin. http://www.spamassassin.apache.org.
[3] SpamHaus. http://www.spamhaus.net.
[4] Blum, A., and Mansour, Y. From external to internal regret. In Proceedings of the 18th Annual Conference on Computational Learning Theory (COLT 2005) (2005).
[5] Cesa-Bianchi, N., Freund, Y., Haussler, D., Helmbold, D. P., Schapire, R. E., and Warmuth, M. K. How to use expert advice. J. ACM 44, 3 (1997), 427-485.
[6] Collins, M. P., Shimeall, T. J., Faber, S., Naies, J., Weaver, R., and Shon, M. D. Using uncleanliness to predict future botnet addresses. In Proceedings of the Internet Measurement Conference (2007).
[7] Cormode, G., Korn, F., Muthukrishnan, S., and Srivastava, D. Diamond in the rough: Finding hierarchical heavy hitters in multi-dimensional data. In SIGMOD '04: Proceedings of the 2004 ACM SIGMOD international conference on Management of data (2004).
[8] Domingos, P., and Hulten, G. Mining high-speed data streams. In Proceedings of ACM SIGKDD (2000), pp. 71-80.
[9] Estan, C., Savage, S., and Varghese, G. Automatically inferring patterns of resource consumption in network traffic. In Proceedings of SIGCOMM '03 (2003).
[10] Freund, Y., Schapire, R. E., Singer, Y., and Warmuth, M. K. Using and combining predictors that specialize. In Proceedings of the Twenty-Ninth Annual Symposium on the Theory of Computing (STOC) (1997), pp. 334-343.
[11] Helmbold, D. P., and Schapire, R. E. Predicting nearly as well as the best pruning of a decision tree. Machine Learning 27, 1 (1997), 51-68.
[12] Herbster, M., and Warmuth, M. Tracking the best expert. Machine Learning 32, 2 (August 1998).
[13] Jin, R., and Agarwal, G. Efficient and effective decision tree construction on streaming data. In Proceedings of ACM SIGKDD (2003).
[14] Jung, J., Krishnamurthy, B., and Rabinovich, M. Flash crowds and denial of service attacks: Characterization and implications for CDNs and websites. In Proceedings of the International World Wide Web Conference (May 2002).
[15] Jung, J., and Sit, E. An empirical study of spam traffic and the use of DNS black lists. In Proceedings of the Internet Measurement Conference (IMC) (2004).
[16] Kohler, E., Li, J., Paxson, V., and Shenker, S. Observed structure of addresses in IP traffic. IEEE/ACM Transactions on Networking 14, 6 (2006).
[17] Krishnamurthy, B., and Wang, J. On network-aware clustering of web clients. In Proceedings of ACM SIGCOMM (2000).
[18] Mao, Z. M., Sekar, V., Spatscheck, O., van der Merwe, J., and Vasudevan, R. Analyzing large DDoS attacks using multiple data sources. In ACM SIGCOMM Workshop on Large Scale Attack Defense (2006).
[19] Ramachandran, A., and Feamster, N. Understanding the network-level behavior of spammers. In Proceedings of ACM SIGCOMM (2006).
[20] Sleator, D. D., and Tarjan, R. E. Amortized efficiency of list update and paging rules. Communications of the ACM 28 (1985), 202-208.
[21] Soldo, F., Markopoulou, A., and Argyraki, K. Optimal filtering of source address prefixes: Models and algorithms. In Proceedings of IEEE Infocom 2009 (2009).
[22] Twining, D., Williamson, M. M., Mowbray, M., and Rahmouni, M. Email prioritization: Reducing delays on legitimate mail caused by junk mail. In USENIX Annual Technical Conference (2004).
[23] Venkataraman, S., Blum, A., Song, D., Sen, S., and Spatscheck, O. Tracking dynamic sources of malicious activity at internet-scale. Tech. Rep. TD-7NZS8K, AT&T Labs, 2009.
[24] Venkataraman, S., Sen, S., Spatscheck, O., Haffner, P., and Song, D. Exploiting network structure for proactive spam mitigation. In Proceedings of USENIX Security '07 (2007).
[25] Xie, Y., Yu, F., Achan, K., Gillum, E., Goldszmidt, M., and Wobber, T. How dynamic are IP addresses? In Proceedings of ACM SIGCOMM (2007).
[26] Zhang, J., Porras, P., and Ulrich, J. Highly predictive blacklists. In Proceedings of USENIX Security '08 (2008).
[27] Zhang, Y., Singh, S., Sen, S., Duffield, N., and Lund, C. Online identification of hierarchical heavy hitters: algorithms, evaluation, and applications. In IMC '04: Proceedings of the 4th ACM SIGCOMM conference on Internet measurement (New York, NY, USA, 2004), ACM, pp. 101-114.
Object Recognition
Mario Fritz
UC Berkeley
Gary Bradski
Willow Garage
Michael Black
Brown University
Sergey Karayev
UC Berkeley
Trevor Darrell
UC Berkeley
Abstract
Existing methods for visual recognition based on quantized local features can perform poorly when local features exist on transparent surfaces, such as glass or
plastic objects. There are characteristic patterns to the local appearance of transparent objects, but they may not be well captured by distances to individual examples or by a local pattern codebook obtained by vector quantization. The appearance of a transparent patch is determined in part by the refraction of a background
pattern through a transparent medium: the energy from the background usually
dominates the patch appearance. We model transparent local patch appearance
using an additive model of latent factors: background factors due to scene content, and factors which capture a local edge energy distribution characteristic of
the refraction. We implement our method using a novel LDA-SIFT formulation
which performs LDA prior to any vector quantization step; we discover latent topics which are characteristic of particular transparent patches and quantize the SIFT
space into transparent visual words according to the latent topic dimensions. No
knowledge of the background scene is required at test time; we show examples
recognizing transparent glasses in a domestic environment.
1
Introduction
Household scenes commonly contain transparent objects such as glasses and bottles made of various materials (like those in Fig. 6). Instance and category recognition of such objects is important
for applications including domestic service robotics and image search. Despite the prevalence of
transparent objects in human environments, the problem of transparent object recognition has received relatively little attention. We argue that current appearance-based methods for object and
category recognition are not appropriate for transparent objects where the appearance can change
dramatically depending on the background and illumination conditions. A full physically plausible
generative model of transparent objects is currently impractical for recognition tasks. Instead we
propose a new latent component representation that allows us to learn transparent visual words that
capture locally discriminative visual features on transparent objects.
Figure 1 shows an example of a transparent object observed in front of several different background
patterns; the local edge energy histogram is shown around a fixed point on the object for each
image. While the overall energy pattern is quite distinct, there is a common structure that can
be observed across each observation. This structure can be estimated from training examples and
detected reliably in test images: we form a local statistical model of transparent patch appearance
by estimating a latent local factor model from training data which includes varying background
imagery. The varying background provides examples of how the transparent object refracts light,
[Figure 1, right: pipeline schematic. Traditional approach: quantization (k-means) -> bag of words -> LDA. Our approach: LDA -> quantization (axis-aligned threshold) -> p(z|P).]
Figure 1: Left: Images of a transparent object in different environments. A point on the object
is highlighted in each image, and the local orientation edge energy map is shown. While the background dominates the local patch, there is a latent structure that is discriminative of the object. Right:
Our model finds local transparent structure by applying a latent factor model (e.g., LDA) before a
quantization step. In contrast to previous approaches that applied such models to a quantized visual word model, we apply them directly to the SIFT representation, and then quantize the resulting
model into descriptors according to the learned topic distribution.
an idea that has been used as a way of capturing the refractive properties of glass [34] but not, to our
knowledge, as a way of training an object recognition system.
Specifically, we adopt a hybrid generative-discriminative model in the spirit of [13] in which a generative latent factor model discovers a vocabulary of locally transparent patterns, and a discriminant
classifier is applied to the space of these activations to detect a category of interest. Our latent
component representation decomposes patch appearance into sub-components based on an additive
model of local patch formation; in particular we use the latent Dirichlet allocation (LDA) model
in our experiments below. Transparent object recognition is achieved using a simple probabilistic model of likely local object features. A latent topic model is learned over the space of local
patches in images of a given object observed with varying backgrounds; clustering in this space
yields descriptors that can be used to infer transparent structures in an image at test time without any
knowledge of the underlying background pattern or environmental illumination. Each image patch
at test time is then labeled with one or more candidate quantized latent structures (topics), which
define our transparent visual word identifiers.
Currently, the study of transparent object recognition is extremely limited and we believe ours is
the first to consider category recognition of transparent objects in natural settings, with varying pose
and unconstrained illumination. The paper provides a first exploration of the problem, establishes
a baseline, demonstrates feasibility and suggests problems for future work. Our results show that
recognition of transparent objects is possible without explicit physically-based refraction and reflection models, using a learning-based additive latent local feature appearance model.
2
Related Work
There is an extensive literature of local feature detection and description techniques; here we focus on those related to our transparent object recognition formulation. Existing methods for object
category and object instance recognition are generally designed for opaque objects, typically finding characteristic local patches using descriptors based on weighted histograms of local orientation
energy [2, 18, 6], locally stable region characteristics [19], local self-similarity [29], etc.
We explore a similar direction but extend this work to transparent objects. Specifically, we base
our method on a novel combination of SIFT [18] and latent Dirichlet allocation (LDA) [4], two
techniques used in many previous object recognition methods. The SIFT descriptor (see also the related HOG [6] and neurally plausible HMAX models [27]) generally characterizes local appearance
with a spatial grid of histograms, with each histogram aggregating a number of edges at a particular
orientation in a grid cell.
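The grid-of-orientation-histograms structure can be sketched as follows; this is a deliberately simplified illustration that omits the Gaussian weighting, trilinear interpolation, and normalization of the full SIFT pipeline:

```python
import math

def grid_orientation_histogram(mag, ori, cells=4, bins=8):
    """Aggregate edge energy into a cells x cells grid of orientation
    histograms, the basic structure shared by SIFT and HOG descriptors.

    mag, ori: equally-sized 2D lists of gradient magnitude and
    orientation (orientation in [0, 2*pi)). Returns a flat list of
    length cells*cells*bins.
    """
    h, w = len(mag), len(mag[0])
    ch, cw = h // cells, w // cells
    desc = [0.0] * (cells * cells * bins)
    for y in range(cells * ch):
        for x in range(cells * cw):
            b = int(ori[y][x] / (2 * math.pi) * bins) % bins
            cell = (y // ch) * cells + (x // cw)
            desc[cell * bins + b] += mag[y][x]
    return desc
```

With the default settings an 8x8 patch yields the familiar 4*4*8 = 128-dimensional descriptor.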
Approaches based on quantizing or matching local appearance from single observations can perform
poorly on objects that are made of transparent material. The local appearance of a transparent object
is governed, in general, by a complex rendering process including multi-layer refraction and specular
reflections. The local appearance of a particular point on a transparent object may be dominated by
environmental characteristics, i.e., the background pattern and illumination field. Models that search
for nearest neighbor local appearance patterns from training instances may identify the environment
(e.g. the background behind the object) rather than the object of interest. Methods that vector
quantize individual observations of local appearance may learn a representation that partitions well
the variation in the environment. Neither approach is likely to learn salient characteristics of local
transparent appearance.
Bag-of-words (cf. [31, 5, 24], and many others), Pyramid-match [14, 17], and many generative
methods [11, 32] exploit the "visual word" metaphor, establishing a vocabulary of quantized SIFT
appearance. Typically a k-means clustering method (or a discriminative variant) is used to associate nearby appearances into a single cluster. Unfortunately when background energy dominates
transparent foreground energy, averaging similar local appearances may simply find a cluster center
corresponding to background structure, not foreground appearance.
For transparent objects, we argue that there is local latent structure that can be used to recognize
objects; we formulate the problem as learning this structure in a SIFT representation using a latent
factor model. Early methods for probabilistic topic modeling (e.g. [16]) were developed in the
domain of text analysis to factor word occurrence distributions of documents into multiple latent
topics in an unsupervised manner. Latent Dirichlet Allocation [4, 15] is an additive topic model that
allows for prior distributions on mixing proportions as well as on the components.
SIFT and LDA have been combined before, but the conventional application of LDA to SIFT is
to form a topic representation over the quantized SIFT descriptors [30, 32, 10, 22]. As previous
methods apply vector quantization before latent modeling, they are inappropriate for uncovering
latent (and possibly subtle) transparent structures. To our knowledge, ours is the first work to infer a
latent topic model from a SIFT representation before quantizing into a "visual word" representation.
Related work on latent variable models includes [9], which reports a "LatentSVM" model to solve
for a HOG descriptor with enumerated local high resolution patches. The offset of the patches is
regarded as a latent variable in the method, and is solved using a semi-convex optimization. Note
that the latent variable here is distinct from a latent topic variable, and that there are no explicitly
shared structures across the parts in their model. Quattoni et al. [26] report an object recognition
model that uses latent or hidden variables which have CRF-like dependencies to observed image
features, including a representation that is formed with local oriented structures. Neither method
has an LDA component, but both of these methods have considerable representational flexibility
and the ability to learn weight factors from large amounts of training data. Our method is similar
in spirit to [13], which uses local oriented gradient strength in a HOG descriptor as a word in an
LDA model. However, our method is based on local patches, while theirs is evaluated over a global
descriptor; their model also did not include any explicit quantization into discrete (and overlapping)
visual words. No results have been reported on transparent objects using these methods.
In addition to the above work on generic (non-transparent) object recognition, there has been some
limited work in the area of transparent object recognition. Most relevant is that of [25], which
focuses on recognition from specular reflections. If the lighting conditions and pose of the object are
known, then specularities on the glass surface can be highly discriminative of different object shapes.
The initial work in [25] however assumes a highly simplified environment and has not been tested
with unknown 3D shape, or with varying and unknown pose and complex illumination. By focusing
on specularities they also ignore the potentially rich source of information about transparent object
shape caused by the refraction of the background image structure. We take a different approach
and do not explicitly model specular reflections or their relationship to 3D shape. Rather than focus
on a few highlights we focus on how transparent objects appear against varied backgrounds. Our
learning approach is designed to automatically uncover the most discriminative latent features in the
data (which may include specular reflections).
It is important to emphasize that we are approaching this problem as one of transparent object
recognition. This is in contrast to previous work that has explored glass material recognition [20,
21]. This is analogous to the distinction between modeling "things" and "stuff" [1]. There has been
significant work on detecting and modeling surfaces that are specular or transparent [7, 12, 23, 28].
These methods, which focus on material recognition, may give important insight into the systematic
deformations of the image statistics caused by transparent objects and may inform the design of
features for object recognition. Note that a generic "glass material" detector would complement
our approach in that it could focus attention on regions of a scene that are most likely to contain
transparent objects. Thus, while material recognition and surface modeling are distinct from our
problem, we consider them complementary.
3 Local Transparent Features
Local transparent patch appearance can be understood as a combination of different processes that
involve illuminants in the scene, overall 3D structure, as well as the geometry and material properties
of the transparent object. Many of these phenomena can be approximated with an additive image
formation model, subject to certain deformations. A full treatment of the refractive properties of
different transparent materials and their geometry is beyond our scope and likely intractable for
most contemporary object recognition tasks.
Rather than analytically model interactions between scene illumination, material properties, and object geometry, we take a machine learning perspective and assume that observed image patches factor into latent components: some originating from the background, others reflecting the structure
of the transparent object. To detect a transparent object it may be sufficient to detect characteristic
patterns of deformation (e.g. in the stem of a wine glass) or features that are sometimes present in
the image and sometimes not (like the rim of a thin glass).
We assume a decomposition of an image I into a set of densely sampled image patches IP , each
represented by a local set of edge responses in the style of [18, 6], which we further model with
an additive process. From each IP we obtain local gradient estimates GP . We model local patch
appearance as an additive combination of image structures originating from a background patch
appearance A0 as well as one or more patterns Ai that have been affected by, e.g., refraction of the
transparent object. An image patch is thus described by:
$$G_P = [\, g_P(0,0,0),\ \ldots,\ g_P(M,N,T) \,] = \sum_i \theta^{(i)} A_i \qquad (1)$$
where gP(i, j, o) is the edge count for the histogram bin at position (i, j) in patch IP at orientation index o; M, N, T give the dimensions of the descriptor histogram, and θ^(i) is the scalar weight associated with pattern Ai. We further assume non-negative θ^(i), reflecting the image formation process.
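As an illustration of the additive model in Eq. (1), the sketch below (not the authors' code; the dimensions and patterns are synthetic placeholders) builds a descriptor as a non-negative combination of latent patterns and then recovers the weights with non-negative least squares:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)

# Assumed dimensions: a 4x4 grid of 9-bin orientation histograms,
# flattened into a 144-dimensional descriptor.
D = 4 * 4 * 9   # descriptor length
K = 5           # number of latent appearance patterns A_i

# Non-negative latent patterns (background plus transparent structure).
A = rng.random((D, K))

# Non-negative mixing weights; the observed patch histogram G_P is
# their additive combination, as in Eq. (1).
weights_true = np.array([0.7, 0.0, 0.3, 0.0, 0.5])
G_P = A @ weights_true

# Recover the weights by non-negative least squares.
weights_hat, residual = nnls(A, G_P)
print(np.allclose(weights_hat, weights_true, atol=1e-6))
```

In this noiseless, well-conditioned setting the weights are recovered exactly; with real patches the decomposition is instead estimated through the topic model described next.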
Based on this model, we formulate a corresponding generative process for the local gradient statistics
p(GP ) for patch P. The model constitutes a decomposition of p(GP ) into components p(G|z = j)
and mixing proportions p(z = j).
$$p(G_P) = \sum_{j=1}^{T} p(G \mid z = j)\, p(z = j). \qquad (2)$$
Both the components and their mixing proportions are unknown to us, so we treat
them as latent variables in our model. However, we may reasonably assume that each observed
patch was only generated from a few components, so we employ a sparseness prior over the component weights. To estimate this mixture model we use methods for probabilistic topic modeling
that allow us to place prior distributions on mixing proportions as well as the components. Based
on a set of training patches, we learn a model over the patches which captures the salient structures characterizing the object patch appearance as a set of latent topics. We have investigated both
supervised and unsupervised latent topic formation strategies; as reported below both outperform
Figure 2: Left: Graphical model representing our latent topic model of patch appearance and quantization into a potentially overlapping set of visual words. See text for details. Right: Local factors
learned by latent topic model for example training data.
Figure 3: Detected quantized transparent local features (transparent visual words) on an example
image. Each image shows the detected locations for the transparent visual word corresponding to
the latent topics depicted on the left.
traditional quantized appearance techniques. Figure 2 (right) illustrates examples of the latent topics learned by decomposing a local SIFT representation into underlying components. At test time, a patch is presented to the LDA model and topic activation weights are inferred given the fixed topic vectors.
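The estimation and inference steps above can be sketched with scikit-learn's LatentDirichletAllocation standing in for the variational implementation of [4]; the patch counts below are random placeholders, and the prior settings mirror the values reported in the experiments:

```python
import numpy as np
from sklearn.decomposition import LatentDirichletAllocation

rng = np.random.default_rng(0)

# Stand-in training data: each row is a flattened 4x4x9 grid of edge
# counts (144 bins) for one patch; real inputs come from dense SIFT.
X_train = rng.integers(0, 20, size=(200, 144))

# 25 topics, with Dirichlet priors chosen to mirror the experimental
# settings (alpha = 2 on topic proportions, beta = 0.01 on topics).
lda = LatentDirichletAllocation(
    n_components=25,
    doc_topic_prior=2.0,
    topic_word_prior=0.01,
    random_state=0,
)
lda.fit(X_train)

# At test time, topic activations are inferred for a new patch while
# the learned topic vectors are held fixed.
theta = lda.transform(rng.integers(0, 20, size=(1, 144)))
print(theta.shape)  # (1, 25); each row is a distribution over topics
```

The inferred row of `theta` plays the role of the per-patch topic activation vector used in the quantization step below.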
To obtain a discrete representation, we can quantize the space of topic vectors into "transparent visual words". The essence of transparency is that more than one visual word may be present in a single local patch, so we have an overlapping set of clusters in the topic space. We quantize the topic activation levels θ^(i) into a set of overlapping visual words by forming axis-aligned partitions of the topic space, and associate a distinct visual word detection with each topic activation value that is above a threshold activation level τ.
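This overlapping quantization reduces to a single elementwise comparison; the activation values and threshold below are illustrative, not taken from the paper:

```python
import numpy as np

# Topic activations theta^(i) for two patches over T = 4 topics.
theta = np.array([
    [0.50, 0.05, 0.30, 0.15],
    [0.10, 0.45, 0.40, 0.05],
])
tau = 0.25  # activation threshold (illustrative value)

# A patch emits visual word i whenever its activation reaches tau, so
# several words can fire for one patch: overlapping clusters.
detections = theta >= tau
print(detections.astype(int))  # [[1 0 1 0], [0 1 1 0]]
```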
Figure 2 summarizes our transparent visual word model in a graphical model representation. Our
method follows the standard LDA presentation, with the addition of a plate of variables corresponding to visual word detections. These boolean detection variables deterministically depend on the
latent topic activation vector: word vi is set when θ^(i) ≥ τ. Figure 3 illustrates detected local
features on an example image.
Latent topics can be found using an unsupervised process, where topics are trained from a generic
corpus of foreground and/or background imagery. More discriminative latent factors can be found
by taking advantage of supervised patch labels. In this case we employ a supervised extension to the
LDA model¹ (sLDA [3]), which allows us to provide the model with class labels per patch in order
to train a discriminant representation. This revised model is displayed in the dashed box in Figure 2.
The foreground/background label for each patch is provided at training time by the observed variable
y; the parameters η^(c) for each class c = 1, …, C are trained to fit the observed label variables y via a linear classification model on the topic activations. We make use of these weights η by deriving a per-topic threshold according to the learned importance of each topic: θ^(i) ≥ τ/η^(i).
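This per-topic thresholding with the learned class weights is again a single vectorized comparison; the activations, weights, and threshold below are illustrative values:

```python
import numpy as np

theta = np.array([0.30, 0.10, 0.40])  # topic activations for one patch
eta = np.array([2.0, 5.0, 0.5])       # learned per-topic class weights
tau = 0.5

# Per-topic threshold tau / eta^(i): topics the classifier weights
# heavily fire at lower activation levels.
detections = theta >= tau / eta
print(detections.astype(int))  # [1 1 0]
```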
¹ Our implementation is based on that of [33].
Figure 4: Example images from our training set of transparent objects in front of varying backgrounds.
4 Experiments
We have evaluated the proposed method on a glass detection task in a domestic environment under
different viewpoint and illumination conditions; we compared to two baseline methods, HOG and
vector quantized SIFT.
We collected data² for four glass objects (two wine glasses and two water glasses) in front of an LCD
monitor with varying backgrounds (we used images from flickr.com under the search term "breakfast
table") in order to capture 200 training images of transparent objects. Figure 4 shows some example
images of the training set.
We extracted a dense grid of 15 by 37 patches of each of the 800 glass examples as well as 800
background crops. Each patch is represented as a 4 by 4 grid of 9 dimensional edge orientation
histograms. Neighboring patches overlap by 75%. We explored training the LDA model either only
on foreground (glass) patches, only on background (non-transparent) patches, or on both, as reported
below. The prior parameters for the LDA model were set to α = 2 and β = 0.01, and a total of 25
components were estimated. The components learnt from foreground patches are shown in Figure
2; patches from background or mixed data were qualitatively similar.
We infer latent topic activations for each patch and set detections of transparent visual words according to the above-threshold topic dimensions. We set the threshold corresponding to an average
activation of 2 latent components per patch on the training set. Based on these 15 by 37 grids of
transparent visual word occurrences, we train a linear, binary SVM in order to classify glasses vs.
background.
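A minimal sketch of this classification stage, with synthetic detection grids in place of real features and scikit-learn's LinearSVC as an assumed stand-in for the linear SVM used here:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

# Synthetic stand-in: each example is a 15x37 grid of binary
# transparent-visual-word detections over 25 topics, flattened.
H, W, T = 15, 37, 25
X = rng.integers(0, 2, size=(80, H * W * T)).astype(float)
y = np.array([1] * 40 + [0] * 40)  # glass vs. background labels

# Linear, binary SVM over the flattened detection grid.
clf = LinearSVC(C=1.0)
clf.fit(X, y)
print(clf.coef_.shape)  # one weight per grid cell and topic
```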
For detection we follow the same procedure to infer the latent topic activations. Figure 3 shows
example detections of transparent visual words on an example test image. We run a scanning window
algorithm to detect likely object locations, examining all spatial locations in the test image, and a
range of scales from 0.5 to 1.5 with respect to the training size, in increments of 0.1. In each
window latent topic activations are inferred for all descriptors and classification by the linear SVM
is performed on the resulting grid of transparent visual word occurrences. For inference we use the
implementation of [4], which results in an average computation time of 8.4 ms per descriptor on a
single core of an Intel Core2 2.3 GHz machine. This is substantial but not prohibitive, as we can
reuse computation by choosing an appropriate stride of our scanning technique.
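The scanning procedure can be sketched as follows; the image and window sizes and the stride of 8 pixels are assumptions for illustration, while the scale range of 0.5 to 1.5 in steps of 0.1 matches the description above:

```python
import numpy as np

def sliding_windows(img_h, img_w, base_h, base_w, stride=8):
    """Enumerate (x, y, scale) windows over scales 0.5-1.5 in 0.1 steps."""
    for scale in np.arange(0.5, 1.55, 0.1):
        h = int(round(base_h * scale))
        w = int(round(base_w * scale))
        for y in range(0, img_h - h + 1, stride):
            for x in range(0, img_w - w + 1, stride):
                yield x, y, round(float(scale), 1)

# Assumed 240x320 test image and 120x60 base window size.
windows = list(sliding_windows(240, 320, 120, 60))
scales = sorted({s for _, _, s in windows})
print(len(scales))  # 11 scales from 0.5 to 1.5
```

Each yielded window would be classified by the linear SVM on its inferred grid of transparent visual word occurrences.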
We compare to two baseline methods: traditional visual words and the histogram of oriented gradients
(HOG) detector [6]. Both baselines share the same detection approach, namely obtaining detections
by applying a linear SVM classifier in a sliding window, but are based on very different
² All data is available at http://www.eecs.berkeley.edu/~mfritz/transparency
[Figure 5 plots precision against recall for the Transparent Visual Word models (sLDA, LDA glass, LDA bg), HOG, and the traditional visual word model.]
Figure 5: Performance evaluation of detector based on transparent visual words w.r.t. baseline. See
text for details.
representations. For the traditional visual words baseline we replace the transparent visual words
by visual words formed in a conventional fashion: sampled patches are directly input to a vector
quantization scheme. We tried different numbers of clusters from 100 to 1000 and found k = 100
to work slightly better than the other choices. The HOG baseline leaves out any feature
quantization and operates directly on the gradient histograms. We use the code provided by the
authors [6].
To evaluate our approach, we recorded 14 test images of the above transparent objects in a home
environment containing 49 glass instances in total; note that this test set is very different in nature
from the training data. The training images were all collected with background illumination patterns
obtained entirely from online image sources whereas the test data is under natural home illumination
conditions. Further the training images were collected from a single viewpoint while viewpoint
varies in the test data. In order to quantify our detection results we use the evaluation metric proposed
in [8] with a matching threshold of 0.3.
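The matching criterion of [8] can be sketched as an intersection-over-union test with the 0.3 threshold (the exact definition in [8] may differ; this is an assumed variant):

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union

# A detection counts as correct when its overlap with a ground-truth
# box reaches the matching threshold of 0.3.
print(iou((0, 0, 10, 10), (5, 0, 15, 10)) >= 0.3)  # True
```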
Our methods based on transparent visual words outperform both baselines across all ranges of operating points as shown in the precision-recall curve in Figure 5. We show results for the LDA model
trained only on glass patches (LDA glass) as well as trained only on background patches (LDA bg).
While neither of the methods achieves performance that would indicate glass detection is a solved
problem, the results point in a promising direction. Example detections of our system on the test
data are shown in Figure 6.
We also evaluated the supervised LDA as described above on data with mixed foreground and background patches, where the class label for each patch was provided during training. The performance of sLDA is also displayed in Figure 5. In all of our experiments the transparent visual word
models outperformed the conventional appearance baselines. Remarkably, latent topics learned on
background data performed nearly as well as those trained on foreground data; those learned using
a discriminative paradigm tended to outperform those trained in an unsupervised fashion, but the
difference was not dramatic. Further investigation is needed to determine when discriminative models may have significant value, and/or whether a single latent representation is sufficient for a broad
range of category recognition tasks.
5 Conclusion and Future Work
We have shown how appearance descriptors defined with an additive local factor model can capture local structure of transparent objects. Structures which are only weakly present in individual
training instances can be revealed in a local factor model and inferred in test images. Learned latent
topics define our "transparent visual words"; multiple such words can be detected at a single location. Recognition is performed using a conventional discriminative method, and we show results for
Figure 6: Example of transparent object detection with transparent local features.
detection of transparent glasses in a domestic environment. These results support our claim that an
additive model of local patch appearance can be advantageous when modeling transparent objects,
and that latent topic models such as LDA are appropriate for discovering locally transparent "visual
words". This also demonstrates the advantage of estimating a latent appearance representation prior
to a vector quantization step, in contrast to the conventional current approach of doing so in reverse.
We see this work as a first step toward transparent object recognition in complex environments. Our
evaluation establishes a first baseline for transparent object recognition. While limited in scope, the
range of test objects, poses and environments considered are varied and natural (i.e. not a laboratory
environment). More extensive evaluation of these methods is needed with a wider range of poses,
with more objects, occlusion and more varied illumination conditions.
There are several avenues of potential future work. We have not explicitly addressed specularity,
which is often indicative of local shape, though specular features may be captured in our representation. Dense sampling may be suboptimal and it would be valuable to explore invariant detection
schemes in the context of this overall method. Finally, we assume no knowledge of background
statistics at test time, which may be overly restrictive; inferred background statistics may be informative in determining whether observed local appearance statistics are discriminative for a particular
object category.
Acknowledgements. This work was supported in part by Willow Garage, Google, NSF grants IIS-0905647 and IIS-0819984, and a Feodor Lynen Fellowship granted by the Alexander von Humboldt
Foundation.
References
[1] E. H. Adelson. On seeing stuff: the perception of materials by humans and machines. In SPIE, 2001.
[2] A. C. Berg, T. L. Berg, and J. Malik. Shape matching and object recognition using low distortion correspondence. In CVPR, pages 26–33, 2005.
[3] D. Blei and J. McAuliffe. Supervised topic models. In NIPS, 2007.
[4] D. Blei, A. Ng, and M. Jordan. Latent dirichlet allocation. JMLR, 2003.
[5] G. Csurka, C. Dance, L. Fan, J. Willarnowski, and C. Bray. Visual categorization with bags of keypoints.
In SLCV'04, pages 59–74, Prague, Czech Republic, May 2004.
[6] N. Dalal and B. Triggs. Histograms of oriented gradients for human detection. In CVPR, 2005.
[7] A. DelPozo and S. Savarese. Detecting specular surfaces on natural images. In CVPR, 2007.
[8] M. Everingham, A. Zisserman, C. K. I. Williams, and L. Van Gool.
The PASCAL Visual Object Classes Challenge 2005 (VOC2005) Results.
http://www.pascalnetwork.org/challenges/VOC/voc2005/results.pdf, 2005.
[9] P. F. Felzenszwalb, D. McAllester, and D. Ramanan. A discriminatively trained, multiscale, deformable
part model. In CVPR, 2008.
[10] R. Fergus, L. Fei-Fei, P. Perona, and A. Zisserman. Learning object categories from google?s image
search. In ICCV, 2005.
[11] R. Fergus, A. Zisserman, and P. Perona. Object class recognition by unsupervised scale-invariant learning.
In CVPR, 2003.
[12] R. W. Fleming and H. H. Bülthoff. Low-level image cues in the perception of translucent materials. ACM
Trans. Appl. Percept., 2(3):346–382, 2005.
[13] M. Fritz and B. Schiele. Decomposition, discovery and detection of visual categories using topic models.
In CVPR, 2008.
[14] K. Grauman and T. Darrell. The pyramid match kernel: Efficient learning with sets of features. JMLR,
8:725–760, 2007.
[15] T. L. Griffiths and M. Steyvers. Finding scientific topics. PNAS USA, 2004.
[16] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Machine Learning, 2001.
[17] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing
natural scene categories. In CVPR, pages 2169–2178, 2006.
[18] D. Lowe. Distinctive image features from scale-invariant keypoints. IJCV, 60(2):91–110, 2004.
[19] J. Matas, O. Chum, U. Martin, and T. Pajdla. Robust wide baseline stereo from maximally stable extremal
regions. In P. L. Rosin and A. D. Marshall, editors, BMVC, pages 384–393, 2002.
[20] K. McHenry and J. Ponce. A geodesic active contour framework for finding glass. In CVPR, 2006.
[21] K. McHenry, J. Ponce, and D. A. Forsyth. Finding glass. In CVPR, 2005.
[22] J. C. Niebles, H. Wang, and L. Fei-Fei. Unsupervised learning of human action categories using spatial-temporal words. Int. J. Comput. Vision, 79(3):299–318, 2008.
[23] P. Nillius and J.-O. Eklundh. Classifying materials from their reflectance properties. In T. Pajdla and
J. Matas, editors, ECCV, volume 3021, 2004.
[24] D. Nister and H. Stewenius. Scalable recognition with a vocabulary tree. In CVPR, 2006.
[25] M. Osadchy, D. Jacobs, and R. Ramamoorthi. Using specularities for recognition. In ICCV, 2003.
[26] A. Quattoni, M. Collins, and T. Darrell. Conditional random fields for object recognition. In NIPS, 2004.
[27] M. Riesenhuber and T. Poggio. Hierarchical models of object recognition in cortex. Nature Neuroscience,
2:1019–1025, 1999.
[28] S. Roth and M. J. Black. Specular flow and the recovery of surface structure. In CVPR, 2006.
[29] E. Shechtman and M. Irani. Matching local self-similarities across images and videos. In CVPR, 2007.
[30] J. Sivic, B. C. Russell, A. A. Efros, A. Zisserman, and W. T. Freeman. Discovering objects and their
locations in images. In ICCV, 2005.
[31] J. Sivic and A. Zisserman. Video Google: A text retrieval approach to object matching in videos. In
ICCV, 2003.
[32] E. Sudderth, A. Torralba, W. Freeman, and A. Willsky. Learning hierarchical models of scenes, objects,
and parts. In ICCV, 2005.
[33] C. Wang, D. Blei, and L. Fei-Fei. Simultaneous image classification and annotation. In CVPR, 2009.
[34] D. Zongker, D. Werner, B. Curless, and D. Salesin. Environment matting and compositing. In SIGGRAPH, pages 205–214, 1999.